Kids, AI, and Ethics: How Children Perceive and Treat Alexa and Roomba

Photo by Brandon Romanchuk on Unsplash

A recent study by Duke developmental psychologists explored how children perceive the intelligence and emotions of AI devices, comparing the smart speaker Alexa with the autonomous vacuum Roomba. The researchers found that children aged four to eleven tended to view Alexa as having more human-like thoughts and emotions than the Roomba.

The findings of the study were published online on April 10 in the journal Developmental Psychology.

Lead author Teresa Flanagan was inspired in part by Hollywood portrayals of human-robot interactions, such as those in HBO's “Westworld.” The study involved 127 children aged four to eleven, each of whom watched a 20-second clip of each technology and then answered questions about the devices.

“In Westworld and the movie Ex Machina, we see how adults might interact with robots in these very cruel and horrible ways,” said Flanagan. “But how would kids interact with them?”

Treating AI Devices with Respect

Despite the differences in perceived intelligence between Alexa and the Roomba, children across all age groups agreed that it was wrong to hit or yell at the machines. As children grew older, however, they rated attacking the technology as slightly more acceptable.

“Four- and five-year-olds seem to think you don't have the freedom to make a moral violation, like attacking someone,” Flanagan said. “But as they get older, they seem to think it's not great, but you do have the freedom to do it.”

The study revealed that children generally believed neither Alexa nor the Roomba could feel physical sensations the way humans do. They attributed mental and emotional capabilities to Alexa, such as the ability to think or get upset, but did not think the same of the Roomba.

“Even without a body, young children think the Alexa has emotions and a mind,” Flanagan said. “And it's not that they think every technology has emotions and minds — they don't think the Roomba does — so it's something special about the Alexa's ability to communicate verbally.”

Flanagan and her graduate advisor Tamar Kushnir, a Duke Institute for Brain Sciences faculty member, are currently trying to understand why children think it is wrong to assault home technology.

Implications and Ethical Questions

The study's findings provide insights into the evolving relationship between children and technology, raising important ethical questions about how AI devices and machines should be treated. For example, should parents model good behavior for their children by thanking AI assistants like Siri or ChatGPT for their help?

The research also highlights the need to explore whether children object to mistreating AI devices because they see it as morally wrong, or simply because it might damage someone's property.

“It's interesting with these technologies because there's another aspect: it's a piece of property,” Flanagan said. “Do kids think you shouldn't hit these things because it's morally wrong, or because it's somebody's property and it might break?”

Alex McFarland is an AI journalist and writer exploring the latest developments in artificial intelligence. He has collaborated with numerous AI startups and publications worldwide.