The bots were introduced by researchers at Facebook Artificial Intelligence Research (FAIR).
"Similar to how people have differing goals, run into conflicts, and then negotiate to come to an agreed-upon compromise, the researchers have shown that it's possible for dialogue agents with differing goals to engage in start-to-finish negotiations with other bots or people while arriving at common decisions or outcomes," said Facebook in a blog post.
Facebook trained the bots by showing them negotiation dialogues between real people and then having them imitate people's actions, a process called supervised learning.
In training, each bot was shown a collection of items, each carrying a point value, and had to negotiate with another agent over how to split the items, with the aim of maximising its own points.
To train the model to pursue its goals, the researchers had it practise thousands of negotiations against itself, using reinforcement learning to reward the bot when it achieved a good outcome.
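The reward signal in that setup can be illustrated with a minimal sketch. The item names, point values, and helper function below are hypothetical placeholders, not FAIR's actual code: the idea is simply that each agent assigns its own values to a shared pool of items, and a negotiated split is scored by how many of an agent's points it captures.

```python
def score_split(values, allocation):
    """Total points an agent earns from the items allocated to it."""
    return sum(values[item] * count for item, count in allocation.items())

# Shared pool: 1 book, 2 hats, 3 balls -- each agent values them differently,
# so a split that looks generous to one side can still be good for the other.
agent_a_values = {"book": 6, "hat": 2, "ball": 0}
agent_b_values = {"book": 0, "hat": 2, "ball": 2}

# One possible negotiated outcome: A takes the book, B takes the hats and balls.
split_for_a = {"book": 1, "hat": 0, "ball": 0}
split_for_b = {"book": 0, "hat": 2, "ball": 3}

reward_a = score_split(agent_a_values, split_for_a)  # 6 points for A
reward_b = score_split(agent_b_values, split_for_b)  # 10 points for B
print(reward_a, reward_b)
```

In self-play, a score like this would serve as the reinforcement-learning reward: dialogue moves that lead to higher-scoring splits get reinforced over thousands of practice negotiations.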
Facebook claims the bots became skilled enough negotiators that their human counterparts did not realise they were dealing with a machine.
"Interestingly, in the experiments, people did not realise they were talking to a bot and not another person, showing that the bots had learned to hold fluent conversations in English in this domain," Facebook said.
The bots even learned to bluff, initially feigning interest in a valueless item only to later "compromise" by conceding it, Facebook said.
"This behaviour was not programmed by the researchers but was discovered by the bot as a method for trying to achieve its goals," said Facebook.
The researchers believe this is an important step for the research community and developers toward creating chatbots that can reason, converse and negotiate, all key steps in building a personalised digital assistant.