

New Study Shows Benefit of Robots Expressing Vulnerability



A new study out of Yale University on how robots affect human-to-human interactions has demonstrated the benefits that are possible when a robot expresses vulnerability.

The study found that groups of humans teamed with a robot that expressed vulnerability had a more positive group experience than those teamed with silent or neutral robots. Members of the vulnerable-robot teams also communicated more with one another.

The work was published on March 9 in the Proceedings of the National Academy of Sciences. 

Margaret L. Traeger is a Ph.D. candidate in sociology at the Yale Institute for Network Science (YINS) and the study’s lead author. 

“We know that robots can influence the behavior of humans they interact with directly, but how robots affect the way humans engage with each other is less well understood,” said Traeger. “Our study shows that robots can affect human-to-human interactions.”

As social robots are deployed in more settings throughout society, such as stores and hospitals, it is becoming increasingly important to understand their influence on human behavior.

“In this case,” Traeger said, “we show that robots can help people communicate more effectively as a team.”

The researchers' experiment included 153 people divided into 51 groups, each composed of three humans and one robot. The groups played a tablet-based game in which members collaborated over 30 rounds to build the most efficient railroad routes. At the end of each round, the robot exhibited one of three behaviors: it remained silent, made a neutral, task-related statement such as the number of rounds completed, or acknowledged a mistake.

Humans teamed with robots that made vulnerable statements spent roughly twice as much time talking to one another during the game, and they reported enjoying the experience more than participants in the other two groups did.

Conversation among the humans increased when the robots made vulnerable statements rather than neutral ones. When the robot was silent, conversation among the humans was less evenly distributed.

Verbal participation was also more evenly shared among humans in groups with vulnerable or neutral robots than in groups with a silent robot, suggesting that the presence of a speaking robot encourages people to communicate with one another more equally.

Nicholas A. Christakis is the Sterling Professor of Social and Natural Science at Yale.

“We are interested in how society will change as we add forms of artificial intelligence to our midst,” said Christakis. “As we create hybrid social systems of humans and machines, we need to evaluate how to program the robotic agents so that they do not corrode how we treat each other.”

According to Sarah Strohkorb Sebo, a Ph.D. candidate in the Department of Computer Science and a co-author of the study, there is great importance in understanding the social influence of robots in human spaces.

“Imagine a robot in a factory whose task is to distribute parts to workers on an assembly line,” she said. “If it hands all the pieces to one person, it can create an awkward social environment in which the other workers question whether the robot believes they’re inferior at the task. Our findings can inform the design of robots that promote social engagement, balanced participation, and positive experiences for people working in teams.”

The study’s co-authors also included Yale’s Brian Scassellati, professor of computer science, cognitive science, and mechanical engineering; and Cornell’s Malte Jung, assistant professor in information science.

The research was supported by the Robert Wood Johnson Foundation and the National Science Foundation.


Alex McFarland is a Brazil-based writer who covers the latest developments in artificial intelligence and blockchain. He has worked with top AI companies and publications across the globe.