Study Suggests Robots Are More Persuasive When They Pretend To Be Human

Advances in artificial intelligence have created bots and machines that can potentially pass as human when they interact with people exclusively through a digital medium. Recently, a team of computer science researchers studied how humans interact with robots and machines when they believe those agents are human. As reported by ScienceDaily, the study found that people consider robots and chatbots more persuasive when they believe the bots are human.

Talal Rahwan, an associate professor of Computer Science at NYU Abu Dhabi, recently led a study examining how robots and humans interact with each other. The results of the experiment were published in Nature Machine Intelligence in a report called Transparency-Efficiency Tradeoff in Human-Machine Cooperation. During the study, test subjects were instructed to play a cooperative game with a partner, who could be either a human or a bot.

The game was a twist on the classic Prisoner's Dilemma, in which participants must decide, on every round, whether to cooperate with or betray their partner. In a prisoner's dilemma, one side may choose to defect, betraying their partner for a benefit gained at the other player's expense; only by cooperating can both sides assure themselves of a gain.
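
To make that incentive structure concrete, here is a minimal sketch of one round of such a game in Python. The payoff values below follow the standard textbook ordering for a Prisoner's Dilemma; the article does not specify the exact payoffs used in the study, so these numbers are illustrative assumptions.

```python
# One round of a Prisoner's Dilemma. Payoffs use the standard textbook
# ordering (temptation > reward > punishment > sucker); the study's exact
# values are not given in the article, so these are assumed for illustration.

PAYOFFS = {
    # (player_move, partner_move) -> (player_payoff, partner_payoff)
    ("cooperate", "cooperate"): (3, 3),  # mutual cooperation: both gain
    ("cooperate", "defect"):    (0, 5),  # betrayed cooperator loses out
    ("defect",    "cooperate"): (5, 0),  # defector gains at partner's cost
    ("defect",    "defect"):    (1, 1),  # mutual defection leaves both worse off
}

def play_round(player_move: str, partner_move: str) -> tuple[int, int]:
    """Return the (player, partner) payoffs for one round."""
    return PAYOFFS[(player_move, partner_move)]

# Defecting against a cooperator pays best for the defector...
print(play_round("defect", "cooperate"))     # (5, 0)
# ...but only mutual cooperation assures both sides of a gain.
print(play_round("cooperate", "cooperate"))  # (3, 3)
```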

The researchers manipulated their test subjects by providing them with either correct or incorrect information about the identity of their partner. Some participants were told they were playing with a bot even though their partner was actually human; others were in the inverse situation. Over the course of the experiment, the research team was able to quantify whether people treated partners differently when told those partners were bots. The researchers tracked the degree to which prejudice against the bots existed, and how these attitudes affected interactions with bots that identified themselves as such.
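
Crossing the partner's actual identity with what the participant is told yields four conditions, two accurate and two misleading. A brief sketch of that design (the condition labels here are illustrative, not the paper's own terminology):

```python
# Enumerate the four information conditions: the partner's actual identity
# (human or bot) crossed with what the participant is told. Labels are
# illustrative; the paper may name these conditions differently.
from itertools import product

for actual, told in product(["human", "bot"], repeat=2):
    kind = "accurate" if actual == told else "misleading"
    print(f"partner is a {actual}, participant told '{told}': {kind} condition")
```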

The results of the experiment demonstrated that bots were more effective at engendering cooperation from their partners when the humans believed the bots were also human. However, once a bot was revealed to be a bot, cooperation levels dropped. Rahwan explained that while many scientists and ethicists agree that AI should be transparent about how it makes decisions, it is less clear whether AI should also be transparent about its nature when communicating with others.

Last year, Google Duplex made a splash when a stage demo showed it making phone calls and booking appointments on behalf of its user, generating human-like speech so sophisticated that many people would have mistaken it for a real person had they not been told they were speaking to a bot. Since the debut of Google Duplex, many AI and robot ethicists have voiced concerns over the technology, prompting Google to say that the agent would identify itself as a bot in the future. Currently, Google Duplex is used only in a very limited capacity; it will soon see use in New Zealand, but only to check businesses' operating hours. Ethicists remain worried about the degree to which the technology could be misused.

Rahwan argues that the recent study demonstrates that we should consider what costs we are willing to pay in return for transparency:

“Is it ethical to develop such a system? Should we prohibit bots from passing as humans, and force them to be transparent about who they are? If the answer is ‘Yes’, then our findings highlight the need to set standards for the efficiency cost that we are willing to pay in return for such transparency.”