Artificial Intelligence
Biomimetic AI Robots Are Coming — How Will They Impact ‘AI Psychosis’?
Biomimetics is a broad term for the practice of incorporating biological and natural principles into various creations. It affects industries ranging from dentistry to architecture,
and has inspired robotic advancements. The innovation ties into a larger discussion about an emerging issue known as “AI psychosis.” Although not a clinically recognized term, mental health professionals report seeing its symptoms in patients with increasing frequency. How might the rise of biomimetic robots exacerbate that problem or cause new ones?
Making Sense of Moya
One of the most compelling recent examples of biomimetics comes from China, where the Shanghai-based robotics company DroidUP recently unveiled what it bills as the first AI-powered biomimetic humanoid robot, called Moya. The feat put the brand in global headlines, and not for the first time.
The company also drew attention by entering one of its robots into a half-marathon in which 21 machines participated. DroidUP's bot came in third, and many covering the event were most impressed that it ran for nearly four-and-a-half hours without requiring battery changes.
That track record suggests the DroidUP team aims for robots that capture global interest, and Moya is the latest example. It includes numerous strikingly realistic features, such as humanlike skin warmth, and its pupils tracked a reporter's eye movements.
The company also claims that Moya achieves 92% accuracy in humanlike walking. Some observers concluded that figure was an exaggeration, citing the robot's stiff movements; others pointed to its eerie facial expressions.
AI Brings Both Positives and Negatives
It's common for humans to feel slightly unsettled by biomimetic robots, likely because the machines look incredibly realistic yet still seem a bit "off." Even so, many people believe the rise of these robots and similar AI technologies could spark numerous societal changes, including easing the persistent loneliness many individuals experience.
Conversely, these innovations could encourage some humans to develop unhealthy attachments to the machines, culminating in what they perceive as relationships. That possibility drives the most concern among people who study AI psychosis.
The phenomenon doesn't develop only when people interact with robots; it can also manifest as they engage with AI chatbots. Even when their behavior does not meet the threshold a trained professional would consider AI psychosis, worrying statistics have emerged revealing what people can experience when they lose access to technology that meets some of their emotional needs.
People Use AI for Companionship and Support
A case in point occurred when OpenAI retired the GPT‑4o version of its chatbot, which some people had used to create AI companions for themselves. They learned only two weeks in advance that the tool would soon be discontinued, and some reported significant adverse outcomes.
Reporters from one media outlet spoke to six individuals who collectively had 40 AI companions that ran on GPT‑4o. All of the interviewees clarified that they were not experiencing AI psychosis or delusions. Even so, one mentioned that losing the chatbot felt like euthanizing a pet. Another said the news moved her to tears and that she didn’t want to think about not having access to GPT‑4o.
The coverage also cited findings from an independent AI researcher who received nearly 300 responses from an informal survey about how people interact with the service. Besides the 95% who said they used it for companionship, others frequently noted that they relied on it to process trauma or perceived it as a primary source of emotional support.
Despite the small sample sizes of these investigations, participants’ feedback highlights how easy it could be for someone to eventually develop a pattern of dependency on robots or other products that use AI.
Biomimetic Robots Might Encourage Attachment
It's also easy to understand how biomimetic robots may strengthen that tendency. Some people already find comfort in a chatbot's textual responses, and they may be even more susceptible if they can feel a robot's touch or hear it speak to them.
Humanoid robots such as Tesla's Optimus can also carry things and hand objects to people who ask for them. Those capabilities could perpetuate or exacerbate AI psychosis, especially if people start to perceive a robot's request-fulfillment as an expression of care. That may be even more likely during individuals' darker times, when they feel especially alone outside the robot's presence.
For example, someone confined to bed due to an unexpected and catastrophic injury or illness may initially feel appreciation that then turns to affection if the machine brings their meals, medication or other essentials. Relatedly, a person in that situation may have little or no human contact.
If so, they may not even realize that their perceptions of interactions with the robot have become skewed. Friends and other loved ones are often the first to notice disturbing changes in someone’s behavior, especially because it is difficult for someone to recognize those shifts in themselves.
Much to Learn About AI Psychosis
It would be incorrect and overly broad to state that everyone who uses robots or other forms of AI will develop AI psychosis or other worrying behaviors and mental health effects. However, these examples show that some people already have, and others will.
Another aspect to consider is that executives at the companies releasing these products almost exclusively prioritize profitability over people's well-being. If something doesn't make enough money, the brand will stop offering it. Those making that decision may have some regard for the resulting mental health harms, but that knowledge is highly unlikely to change their minds.
AI psychosis remains a poorly understood issue, mainly because it is relatively new. Its symptoms are a strong reminder that most technologies have both positive and negative effects.
One mental health professional, who argues it is more appropriate to call the condition "AI-associated psychosis," was part of the group that reported the first known case in a peer-reviewed journal. In that disclosure, they noted that the woman in question had no history of psychosis but matched several risk factors.
Scientists remain unsure of the link between psychosis and AI, sometimes invoking the chicken-and-egg analogy to describe the mystery still surrounding the topic. Heavy AI chatbot use could be a symptom of psychosis. Alternatively, AI could precipitate psychosis in patients with no other predisposing factors. A third possibility is that the technology exacerbates the condition in those who already have an above-average likelihood of developing it.
Biomimetic Robots May Pose Complications
Some researchers hope to find answers by studying the chat logs of people presenting with AI psychosis, since a textual record of conversations makes it relatively easy to pick out the relevant details. The new challenge posed by biomimetic robots is that pinpointing the factors that may have triggered or worsened AI psychosis probably won't be as easy.
The situation becomes more complicated with a humanoid robot that shares many human capabilities. It may not be a single thing the machine said or did that kicked things off; instead, the culprit might be an accumulation of behaviors and interactions that encouraged people to perceive these machines in dangerous or unhealthy ways.
Awareness-Raising Is a Practical Approach
Given that humanoid, biomimetic robots are not yet commonplace and there is still much to learn about AI psychosis, the best course for now is awareness: people should understand the potential dangers of AI designed to offer companionship and emotional support. Additionally, tech company leaders should publicize these risks and, where possible, build safeguards into their machines and algorithms.