New research out of Cornell University shows how artificial intelligence (AI) can play a role in mediating conversations. The work comes at a time when a pandemic has made social distancing and remote conversation the norm.
According to the new study, people having difficult conversations placed more trust in AI systems than in the people they were actually talking to. The AI systems in question were "smart" reply suggestions in text messages.
The new study is titled “AI as a Moral Crumple Zone: The Effects of Mediated AI Communication on Attribution and Trust.” It was published online in the journal Computers in Human Behavior.
Jess Hohenstein, a doctoral student in the field of information science, is the paper's first author.
“We find that when things go wrong, people take the responsibility that would otherwise have been designated to their human partner and designate some of that to the artificial intelligence system,” said Hohenstein. “This introduces a potential to take AI and use it as a mediator in our conversations.”
Detect When Things Go Bad
During a conversation, the algorithm can analyze language to detect the moment when things start going badly and then suggest conflict-resolution strategies, according to Hohenstein.
The study’s main goal was to examine the subtle and significant ways that AI systems, such as smart replies, can alter how humans interact. According to the researchers, something as small as selecting a suggested reply that is not completely accurate can significantly change the course of a conversation. That language is often selected simply to save typing time, yet it can have a direct effect on relationships.
Malte Jung is co-author of the study and assistant professor of information science. He is also director of the Robots in Groups lab, which studies how robots change group dynamics.
“Communication is so fundamental to how we form perceptions of each other, how we form and maintain relationships, or how we're able to accomplish anything working together,” said Jung.
“This study falls within the broader agenda of understanding how these new AI systems mess with our capacity to interact,” Jung continued. “We often think about how the design of systems affects how we interact with them, but fewer studies focus on the question of how the technologies we develop affect how people interact with each other.”
Better Understanding of Human Interaction
The study can help researchers understand how people perceive and interact with computers. It could also help improve human communication, through subtle guidance and AI reminders.
Hohenstein and Jung wanted to find out whether an AI system could absorb the “crash” of a conversation, much as a car’s crumple zone absorbs the force of a collision.
“There's a physical mechanism in the front of the car that's designed to absorb the force of the impact and take responsibility for minimizing the effects of the crash,” Hohenstein said. “Here we see the AI system absorb some of the moral responsibility.”
The research was supported in part by the National Science Foundation.