Risk-Promoting Robots Override Direct Experiences and Instincts in New Study

New research out of the University of Southampton demonstrates that robots can encourage people to take greater risks than they would if nothing were influencing their behavior. The experiments, carried out in a simulated gambling scenario, help experts better understand risk-taking behavior and its ethical and practical implications for robotics.

The research was led by Dr. Yaniv Hanoch, Associate Professor in Risk Management at the University of Southampton. It was published in the journal Cyberpsychology, Behavior, and Social Networking.

“We know that peer pressure can lead to higher risk-taking behavior,” Dr. Hanoch said. “With the ever-increasing scale of interaction between humans and technology, both online and physically, it is crucial that we understand more about whether machines can have a similar impact.”

The Experiment

The study involved 180 undergraduate students who took the Balloon Analogue Risk Task (BART), a computer assessment that requires users to inflate a balloon on the screen by pressing the spacebar on a keyboard. The balloon inflates slightly with each press, and each press adds one penny to the player's "temporary money bank." A balloon can explode at any point, costing the player the money accumulated for that balloon, but players have the option to "cash in" before inflating it further.
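To make the task's mechanics concrete, below is a minimal sketch of a single BART-style round in Python. The one-penny reward per pump and the burst-forfeits-the-bank rule follow the description above; the maximum balloon size, the uniformly drawn explosion point, and the fixed cash-in policy are hypothetical stand-ins for parameters the article does not specify.

```python
import random

def bart_round(max_pumps=128, reward_per_pump=0.01, cash_in_at=32):
    """Simulate one BART-style balloon round.

    Each pump adds one penny to a temporary bank. The balloon's hidden
    explosion point is drawn uniformly at random, so the risk of
    bursting grows with every additional pump. Cashing in banks the
    money; a burst forfeits the round's earnings.

    NOTE: the uniform explosion point and the fixed cash-in policy are
    illustrative assumptions, not the study's exact parameters.
    """
    explosion_point = random.randint(1, max_pumps)  # hidden from the player
    bank = 0.0
    for pump in range(1, max_pumps + 1):
        if pump >= explosion_point:
            return 0.0            # balloon burst: the temporary bank is lost
        bank += reward_per_pump   # each successful pump earns one penny
        if pump >= cash_in_at:
            return bank           # player banks the earnings and stops
    return bank                   # reached maximum size without bursting

if __name__ == "__main__":
    earnings = [bart_round() for _ in range(10)]
    print(["${:.2f}".format(e) for e in earnings])
```

A riskier strategy corresponds to a higher cash-in threshold: each extra pump adds a penny but also moves the player closer to the hidden explosion point, which is the trade-off the robot's encouragement pushed participants to lean into.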

One-third of the participants, the control group, took the test alone in a room. Another one-third took the test accompanied by a robot that provided only the instructions and otherwise remained silent. The final third, the experimental group, took the test alongside a robot that provided the instructions and also spoke encouraging statements, such as "Why did you stop pumping?"

This third group, encouraged by the robot, took more risks than the other groups, inflating their balloons further and bursting them more often. At the same time, they also earned more money than the other groups. Between the group accompanied by the silent robot and the group with no robot at all, there was no significant difference in behavior.

“We saw participants in the control condition scale back their risk-taking behavior following a balloon explosion, whereas those in the experimental condition continued to take as much risk as before,” Dr. Hanoch said. “So, receiving direct encouragement from a risk-promoting robot seems to override participants’ direct experiences and instincts.”

The researchers plan to conduct further studies to better understand human interaction with other artificial intelligence (AI) systems, including digital assistants.

“With the wide spread of AI technology and its interactions with humans, this is an area that needs urgent attention from the research community,” Dr. Hanoch continued.

“On the one hand, our results might raise alarms about the prospect of robots causing harm by increasing risky behavior. On the other hand, our data points to the possibility of using robots and AI in preventive programs, such as anti-smoking campaigns in schools, and with hard-to-reach populations, such as addicts.”

Alex McFarland is an AI journalist and writer exploring the latest developments in artificial intelligence. He has collaborated with numerous AI startups and publications worldwide.