New Research Sheds Light on Human-Robot Trust

New research, led by the U.S. Army Research Laboratory along with the University of Central Florida Institute for Simulation and Training, is shedding light on the level of trust humans place in robots. The project focused on the relationship between humans and robots, and on whether humans give more weight to a robot’s reasoning or to its mistakes.

The research examined human-agent teaming, or HAT, and how human trust, workload, and perceptions of an agent are influenced by the transparency of agents such as robots, unmanned vehicles, and software agents. Agent transparency refers to the degree to which a human can identify an agent’s intent, reasoning process, and future plans.

The findings suggest that human confidence in robots decreases whenever the robot makes a mistake, regardless of whether the robot has been transparent about its reasoning process.

The research was published in the August issue of IEEE Transactions on Human-Machine Systems, in a paper titled “Agent Transparency and Reliability in Human-Robot Interaction: The Influence on User Confidence and Perceived Reliability.”

Traditional research on human-agent teaming uses completely reliable intelligent agents that make no mistakes. This new study was one of the few to explore how agent transparency interacts with agent reliability. It involved a robot that made mistakes while humans were watching, and the humans were then asked whether they viewed the robot as less reliable. Throughout the process, the humans were given insight into the robot’s reasoning.

Dr. Julia Wright, a researcher at the U.S. Army Combat Capabilities Development Command’s Army Research Laboratory (ARL), is the principal investigator for the project.

“Understanding how the robot's behavior influences their human teammates is crucial to the development of effective human-robot teams, as well as the design of interfaces and communication methods between team members,” she said. “This research contributes to the Army's Multi-Domain Operations efforts to ensure overmatch in artificial intelligence-enabled capabilities. But it is also interdisciplinary, as its findings will inform the work of psychologists, roboticists, engineers, and system designers who are working toward facilitating better understanding between humans and autonomous agents in the effort to make autonomous teammates rather than simply tools.”

The project was part of a larger effort known as the Autonomous Squad Member (ASM) project, sponsored by the Office of the Secretary of Defense’s Autonomy Research Pilot Initiative. The ASM is a small ground robot designed to work within an infantry squad, communicating and interacting with its members.

In the study, participants observed human-agent soldier teams, which included the ASM, moving through a simulated training course. The observers’ task was to monitor the team and evaluate the robot. Throughout the course, the team encountered various events and obstacles. The soldiers navigated each one correctly, but at times the robot misinterpreted an obstacle and made mistakes. The robot also sometimes shared the reasoning behind its actions along with the expected outcome.

The study found that participants were more concerned with the robot’s mistakes than with the underlying logic and reasoning behind them. The robot’s reliability played a major role in participants’ trust and perceptions: whenever the robot made a mistake, observers rated its reliability lower.

Reliability and trust ratings increased when agent transparency increased, that is, when the robot shared the details and reasoning behind its decisions. However, those ratings were still lower than for robots that never made an error. This suggests that sharing reasoning and underlying logic can help mitigate some of the trust and reliability issues surrounding robots.

“Earlier studies suggest that context matters in determining the usefulness of transparency information,” Wright said. “We need to better understand which tasks require more in-depth understanding of the agent's reasoning, and how to discern what that depth would entail. Future research should explore ways to deliver transparency information based on the tasking requirements.”

This research will play a critical role in the field given the increasing interaction between humans and robots, and the military is one of the areas where it will matter most. As these exercises show, robots and soldiers will eventually work side by side. Just as a soldier has to trust a fellow soldier, the same will apply to robots. If that trust can be achieved and robots become commonplace in infantry squads, it will be another instance of artificial intelligence penetrating the defense industry.

Alex McFarland is an AI journalist and writer exploring the latest developments in artificial intelligence. He has collaborated with numerous AI startups and publications worldwide.