

Robot Learns to Understand Itself Without Human Assistance

Engineers at Columbia University School of Engineering and Applied Science have created the first-ever robot that can learn a model of its entire body from scratch, all without human assistance. 

The study was published in Science Robotics.

Teaching the Robot

The researchers showed how the robot can create a kinematic model of itself and use that self-model to plan motions, reach goals, and avoid obstacles in a wide range of situations. The robot could also automatically detect and compensate for damage to its body.

A robotic arm was placed inside a circle of five streaming video cameras, and the robot watched itself through the cameras while it undulated freely. It moved and contorted to learn exactly how its body moved in response to different motor commands, and after about three hours it stopped; its internal deep neural network had finished learning the relationship between the robot’s motor actions and the volume its body occupied in its environment.
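
The article describes this learning step only at a high level: a deep network that learns how motor commands relate to the volume the body occupies. Below is a minimal, hypothetical sketch of one way such a self-model could be set up, as a small network that takes joint angles plus a 3D query point and predicts whether the robot's body occupies that point. The PyTorch framing, layer sizes, and names are illustrative assumptions, not the authors' exact architecture.

import torch
import torch.nn as nn

class SelfModel(nn.Module):
    """Predicts whether a 3D query point is occupied by the robot's body,
    given the current motor commands (joint angles)."""
    def __init__(self, num_joints=4, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_joints + 3, hidden),  # joint angles + (x, y, z) query point
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),  # occupancy logit
        )

    def forward(self, joint_angles, query_points):
        x = torch.cat([joint_angles, query_points], dim=-1)
        return self.net(x)  # higher logit -> point more likely inside the body

# Hypothetical training step: occupancy labels derived from the cameras
# would supervise the model during the hours of self-observation described above.
model = SelfModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

def train_step(joint_angles, query_points, occupied_labels):
    """joint_angles: (B, num_joints); query_points: (B, 3); occupied_labels: (B, 1) in {0, 1}."""
    optimizer.zero_grad()
    logits = model(joint_angles, query_points)
    loss = loss_fn(logits, occupied_labels)
    loss.backward()
    optimizer.step()
    return loss.item()

Once trained, a model like this could be queried over a grid of points to render a body-shaped cloud like the one described below, or handed to a motion planner as a fast self-collision check.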

Hod Lipson is a professor of mechanical engineering and director of Columbia’s Creative Machines Lab. 

“We were really curious to see how the robot imagined itself,” said Lipson. “But you can't just peek into a neural network; it's a black box.” 

The researchers worked on several visualization techniques before the self-image gradually emerged. 

“It was a sort of gently flickering cloud that appeared to engulf the robot's three-dimensional body,” Lipson continued. “As the robot moved, the flickering cloud gently followed it.” 

The robot's self-model was accurate to about 1% of its workspace.
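
That 1% figure expresses error relative to the size of the robot's workspace rather than in absolute units. As a rough, hypothetical illustration of how a workspace-relative error might be computed (the helper function and the numbers below are made up for clarity, not taken from the study):

import numpy as np

def workspace_relative_error(predicted_points, observed_points, workspace_extent):
    """Mean Euclidean error between predicted and observed body points,
    normalized by a characteristic workspace length (e.g., its diameter)."""
    errors = np.linalg.norm(predicted_points - observed_points, axis=-1)
    return errors.mean() / workspace_extent

# Example: a 5 mm mean error in a 0.5 m workspace corresponds to 1% of the workspace.
pred = np.array([[0.105, 0.200, 0.300]])
obs = np.array([[0.100, 0.200, 0.300]])
print(workspace_relative_error(pred, obs, workspace_extent=0.5))  # ≈ 0.01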

Potential Applications and Advancements

Enabling robots to model themselves without human assistance opens the door to a wide range of advances. For one, it saves labor and allows a robot to monitor its own wear and tear, detecting and compensating for any damage. The authors say this ability will help autonomous systems become more self-reliant. One example they give is a factory robot that could use it to notice that something isn’t moving correctly and then call for assistance. 

Boyuan Chen, the study’s first author, led the work and is now an assistant professor at Duke University. 

“We humans clearly have a notion of self,” said Chen. “Close your eyes and try to imagine how your own body would move if you were to take some action, such as stretch your arms forward or take a step backward. Somewhere inside our brain we have a notion of self, a self-model that informs us what volume of our immediate surroundings we occupy, and how that volume changes as we move.”

Lipson has been working for years to find new ways to give robots some form of this self-awareness. 

“Self-modeling is a primitive form of self-awareness,” he explained. “If a robot, animal, or human, has an accurate self-model, it can function better in the world, it can make better decisions, and it has an evolutionary advantage.”

The researchers acknowledge the limits and risks involved in granting machines autonomy through self-awareness, and Lipson is careful to note that the kind of self-awareness demonstrated in this study is “trivial compared to that of humans, but you have to start somewhere. We have to go slowly and carefully, so we can reap the benefits while minimizing the risks.” 
