New research from the American Psychological Association suggests that when robots appear to engage with people and display human-like emotions, people may perceive them as capable of “thinking.” In other words, the robots are believed to be acting on their own beliefs and desires rather than simply following their programming.
The research was published in the journal Technology, Mind, and Behavior.
AI and Human-Robot Interaction
Agnieszka Wykowska, PhD, is the study author and a principal investigator at the Italian Institute of Technology.
“As artificial intelligence increasingly becomes a part of our lives, it is important to understand how interacting with a robot that displays human-like behaviors might induce higher likelihood of attribution of intentional agency to the robot,” Wykowska says.
The team carried out three experiments involving 119 participants, examining how they would perceive a human-like robot called iCub after socializing and watching videos with it. Participants completed a questionnaire before and after interacting with the robot. The questionnaire displayed pictures of the robot in different situations and asked the participants to choose whether the robot’s motivation in each was mechanical or intentional.
The first two experiments involved the researchers remotely controlling iCub’s actions so it would behave gregariously. It greeted each individual, introduced itself, and asked for the participants’ names. The robot’s eyes had cameras that could recognize the participants’ faces and maintain eye contact. The individuals were then asked to watch three short documentary videos with the robot, which was programmed to respond with sounds and facial expressions of sadness, happiness, or awe.
Moving on to the third experiment, the team programmed iCub to behave more like a machine while it watched videos with the participants. The cameras were deactivated so it could not maintain eye contact, and it only spoke recorded sentences about the calibration process it was undergoing. Instead of emotional responses to the videos, the robot only responded with a “beep” and repetitive movements of its torso, head, and neck.
Importance of Human-Like Behavior
The research showed that participants who watched videos with the human-like robot were more likely to rate the robot’s actions as intentional rather than programmed, while those who interacted only with the machine-like robot were more likely to rate its actions as programmed. These results suggest that mere exposure to a human-like robot is not enough to make people believe it is capable of thought and emotion; rather, it is human-like behavior that leads the robot to be perceived as an intentional agent.
Wykowska says that the findings show that people might be more likely to believe artificial intelligence is capable of independent thought if it demonstrates human-like behavior.
“Social bonding with robots might be beneficial in some contexts, like with socially assistive robots. For example, in elderly care, social bonding with robots might induce a higher degree of compliance with respect to following recommendations regarding taking medication,” Wykowska said. “Determining contexts in which social bonding and attribution of intentionality is beneficial for the well-being of humans is the next step of research in this area.”