
Researchers Gain Insight Into Brain Activity During Human-Robot Collaboration


Image: Texas A&M University

A team of researchers at Texas A&M University has used functional near-infrared spectroscopy to capture functional brain activity during human-robot collaboration on a manufacturing task. 

Collaboration between humans and robots is becoming more commonplace across many industries, highlighting the need for an effective, smooth working relationship between the two. A fundamental requirement for that relationship is human willingness to trust robot behavior, but trust has proven difficult to track because it is inherently subjective.

Human-Autonomy Trust Research

Dr. Ranjana Mehta, an associate professor and director of the NeuroErgonomics Lab, said her lab's human-autonomy trust research branched off from other projects focused on human-robot interactions in safety-critical work domains.

“While our focus so far was to understand how operator states of fatigue and stress impact how humans interact with robots, trust became an important construct to study,” Mehta said. “We found that as humans get tired, they let their guards down and become more trusting of automation than they should. However, why that is the case becomes an important question to address.” 

The new research, published in Human Factors: The Journal of the Human Factors and Ergonomics Society, focuses on understanding the brain-behavior relationships underlying an operator's trusting behaviors, which can be influenced by both human and robot factors.

Capturing Functional Brain Activity 

The lab relied on functional near-infrared spectroscopy to capture functional brain activity as operators collaborated with robots on manufacturing tasks. The research found that faulty robot actions decreased the operator’s trust in the robot, and the distrust was associated with increased activation of regions in the frontal, motor and visual cortices. These changes indicated an increasing workload and heightened situational awareness. The team found that the distrusting behavior was also associated with the decoupling of these brain regions working together. According to Mehta, the decoupling was greater at higher robot autonomy levels.
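The "decoupling" of brain regions described above is typically quantified as a drop in functional connectivity, i.e. the statistical coupling between the time series of different measurement channels. As a minimal illustration of that idea (not the study's actual analysis pipeline, which is not detailed here), the sketch below computes Pearson correlation between two simulated fNIRS channel signals; channels driven by a shared signal show high coupling, while unrelated channels show correlation near zero:

```python
import numpy as np

def channel_coupling(ts_a: np.ndarray, ts_b: np.ndarray) -> float:
    """Pearson correlation between two channel time series.

    Values near 1 indicate tightly coupled activity; values near 0
    correspond to the kind of 'decoupling' the study describes.
    """
    return float(np.corrcoef(ts_a, ts_b)[0, 1])

# Simulated data: two channels sharing an underlying signal, and one
# independent channel (hypothetical stand-ins for fNIRS recordings).
rng = np.random.default_rng(0)
base = rng.standard_normal(500)
coupled = base + 0.2 * rng.standard_normal(500)  # shares the base signal
independent = rng.standard_normal(500)           # no shared signal

print(channel_coupling(base, coupled))      # high, close to 1
print(channel_coupling(base, independent))  # low, close to 0
```

Real analyses would work on preprocessed hemodynamic signals and often use windowed or task-locked connectivity measures, but the underlying comparison of coupled versus decoupled channels is the same.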

“What we found most interesting was that the neural signatures differed when we compared brain activation data across reliability conditions (manipulated using normal and faulty robot behavior) versus operator’s trust levels (collected via surveys) in the robot,” Mehta said. “This emphasized the importance of understanding and measuring brain-behavior relationships of trust in human-robot collaborations since perceptions of trust alone is not indicative of how operators’ trusting behaviors shape up.”

According to Dr. Sarah Hopko, lead author of the research and a recent industrial engineering student, neural responses and perceptions of trust are both symptoms of trusting and distrusting behaviors, and they relay information on how trust is built, breached, and repaired in response to different robot behaviors. She added that multimodal trust metrics, such as neural activity and eye tracking, can reveal perspectives that survey measures alone cannot.

The team now plans to expand the research into other areas, such as emergency response, and to understand how trust in multi-human-robot teams can impact teamwork and taskwork in safety-critical environments.

“The work is critical, and we are motivated to ensure that humans-in-the-loop robotics design, evaluation and integration into the workplace are supportive and empowering of human capabilities,” Mehta concluded.

Alex McFarland is a Brazil-based writer who covers the latest developments in artificial intelligence & blockchain. He has worked with top AI companies and publications across the globe.