
New Insight Into Lack of Trust for Artificial Intelligence


Recent research is providing new insight into what determines individuals’ level of trust in artificial intelligence (AI). A team from the University of Kansas, led by relationship psychologist Omri Gillath, detailed how trust in AI is shaped by an individual’s real-life relationship, or attachment, style.

The team consisted of experts from a range of disciplines, including psychology, engineering, business, and medicine.

The paper was published in the journal Computers in Human Behavior.

According to the research, people who are anxious about their real-life relationships with humans tend to be less trusting of AI systems. The paper also details how trust in artificial intelligence can be increased by reminding individuals of their stable human relationships.

Despite the global artificial-intelligence market being estimated to reach $39.9 billion in 2019, there is still a high level of mistrust of new artificial intelligence technologies.

Increasing Trust

The research team did not just identify the problem of a lack of trust in AI systems; they also came up with ways to increase that trust. Their studies on human relationships pointed to three findings.

First, attachment anxiety predicts lower trust in artificial intelligence. Second, experimentally heightening attachment anxiety reduced that trust. Third, enhancing attachment security increased it.

Gillath is a professor of psychology at KU. 

“Most research on trust in artificial intelligence focuses on cognitive ways to boost trust. Here we took a different approach by focusing on a ‘relational affective’ route to boost trust, seeing AI as a partner or a team member rather than a device,” Gillath said. 

“Finding associations between one’s attachment style — an individual difference representing the way people feel, think and behave in close relationships — and her trust in AI paves the way to new understandings and potentially new interventions to induce trust.”

With their research, the team brings a new way of looking at trust in artificial intelligence, specifically the factors that shape it. The work could help ease the introduction of AI into workplaces and other new environments.

“The findings show you can predict and increase people’s trust levels in non-humans based on their early relationships with humans,” Gillath said. “This has the potential to improve adoption of new technologies and the integration of AI in the workplace.” 

Mistrust of AI

The mistrust of AI among the population is nothing new. For the past few years, there has been considerable skepticism surrounding the technology and its implementation. This is by no means unwarranted, as various incidents over the years have fueled the mistrust.

Just recently, international scientists from some of the world’s top institutions criticized the lack of transparency in AI research. Back in June of 2019, the United States saw its first reported case of a wrongful arrest caused by a faulty algorithm. There are many other examples, such as bias in computer vision systems and the use of AI by governments for combat and surveillance.

While all these examples may seem far removed from an individual’s personal experience with AI, they undoubtedly play a role in shaping the overall perception of the technology. New research like the work coming out of the University of Kansas provides much-needed insight into addressing some of these issues.

 

Alex McFarland is an AI journalist and writer exploring the latest developments in artificial intelligence. He has collaborated with numerous AI startups and publications worldwide.