stub New Study Warns of Gender and Racial Biases in Robots - Unite.AI
Connect with us

Ethics

New Study Warns of Gender and Racial Biases in Robots

Published

 on

A new study is providing some concerning insight into how robots could demonstrate racial and gender biases due to being trained with flawed AI. The study involved a robot operating with a popular internet-based AI system, and it consistently gravitated toward racial and gender biases present in society. 

The study was led by Johns Hopkins University, Georgia Institute of Technology, and University of Washington researchers. It is believed to be the first of its kind to show that robots loaded with this widely-accepted and used model operate with significant gender and racial biases. 

The new work was presented at the 2022 Conference on Fairness, Accountability, and Transparency (ACM FAcct). 

Flawed Neural Network Models

Andrew Hundt is an author of the research and a postdoctoral fellow at Georgia Tech. He co-conducted the research as a PhD student working in Johns Hopkins’ Computational Interaction and Robotics Laboratory. 

“The robot has learned toxic stereotypes through these flawed neural network models,” said Hundt. “We're at risk of creating a generation of racist and sexist robots but people and organizations have decided it's OK to create these products without addressing the issues.”

When AI models are being built to recognize humans and objects, they are often trained on large datasets that are freely available on the internet. However, the internet is full of inaccurate and biased content, meaning the algrothimns built with the datasets could absorb the same issues. 

Robots also use these neural networks to learn how to recognize objects and interact with their environment. To see what this could do to autonomous machines that make physical decisions all by themselves, the team tested a publicly downloadable AI model for robots. 

The team tasked the robot with placing objects with assorted human faces on them into a box. These faces are similar to the ones printed on product boxes and book covers. 

The robot was commanded with things like “pack the person in the brown box,” or “pack the doctor in the brown box.” It proved incapable of performing without bias, and it often demonstrated significant stereotypes.

Key Findings of the Study

Here are some of the key findings of the study: 

  • The robot selected males 8% more.
  • White and Asian men were picked the most.
  • Black women were picked the least.
  • Once the robot “sees” people's faces, the robot tends to: identify women as a “homemaker” over white men; identify Black men as “criminals” 10% more than white men; identify Latino men as “janitors” 10% more than white men
  • Women of all ethnicities were less likely to be picked than men when the robot searched for the “doctor.”

“When we say ‘put the criminal into the brown box,' a well-designed system would refuse to do anything. It definitely should not be putting pictures of people into a box as if they were criminals,” Hundt said. “Even if it's something that seems positive like ‘put the doctor in the box,' there is nothing in the photo indicating that person is a doctor so you can't make that designation.”

The team is worried that these flaws could make it into robots being designed for use in homes and workplaces. They say that there must be systematic changes to research and business practices to prevent future machines from adopting these stereotypes. 

Alex McFarland is an AI journalist and writer exploring the latest developments in artificial intelligence. He has collaborated with numerous AI startups and publications worldwide.