Robot Teaches Itself To Walk Through Reinforcement Learning



While Boston Dynamics and dancing robots usually receive most of the attention, there are some major developments taking place behind the scenes that don’t receive enough coverage. One of those developments comes from a Berkeley lab, where a robot named Cassie was able to teach itself to walk through reinforcement learning. 

Through trial and error, the pair of robotic legs learned to navigate in a simulated environment before being put to the test in the real world. From the outset, the robot demonstrated an ability to walk in all directions, walk while squatting down, reposition itself when pushed off balance, and adjust to different types of surfaces. 

Cassie is the first two-legged robot to successfully learn to walk using reinforcement learning. 

The Awe of Dancing Robots

While robots such as those from Boston Dynamics are extremely impressive and awe nearly everyone who watches them, there are a few key caveats. Most notably, these robots are hand-programmed and choreographed to achieve those results, an approach that does not carry over well to real-world situations. 

Outside of the lab, robots must be robust, resilient, flexible, and much more. On top of all of that, they need to cope with unexpected situations, which can only be achieved by enabling them to handle such situations on their own. 

Zhongyu Li was part of the team working on Cassie at the University of California, Berkeley. 

“These videos may lead some people to believe that this is a solved and easy problem,” Li says. “But we still have a long way to go to have humanoid robots reliably operate and live in human environments.” 

Reinforcement Learning for Robust Parameterized Locomotion Control of Bipedal Robots


Reinforcement Learning

To create such a robot, the Berkeley team relied on reinforcement learning, the same family of techniques that companies like DeepMind have used to train algorithms to beat human beings at some of the world's most complex games. Reinforcement learning is based on trial and error: the agent is rewarded for desirable behavior and learns from its mistakes. 
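To make the trial-and-error idea concrete, here is a minimal tabular Q-learning sketch on a toy one-dimensional "walking" task. Everything in it — the goal position, the reward scheme, the learning parameters — is a made-up illustration; Cassie's actual controller uses far more sophisticated deep reinforcement learning, not this algorithm.

```python
import random

# Toy 1-D task: the agent starts at position 0 and must reach position GOAL.
# Action 1 steps forward (+1), action 0 steps backward (-1, floored at 0).
GOAL = 4
ACTIONS = [0, 1]
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1  # learning rate, discount, exploration

def step(state, action):
    """Apply an action; reward 1.0 only for reaching the goal."""
    nxt = min(GOAL, max(0, state + (1 if action == 1 else -1)))
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

def train(episodes=500, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(GOAL + 1) for a in ACTIONS}
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            # Trial and error: mostly exploit what's known, sometimes explore.
            if rng.random() < EPSILON:
                action = rng.choice(ACTIONS)
            else:
                # Break ties randomly so the untrained agent still moves around.
                action = max(ACTIONS, key=lambda a: (q[(state, a)], rng.random()))
            nxt, reward, done = step(state, action)
            best_next = max(q[(nxt, a)] for a in ACTIONS)
            # Update the estimate from the outcome — learning from mistakes.
            q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])
            state = nxt
    return q

q = train()
# Extract the learned policy: the preferred action in each non-goal state.
policy = [max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(GOAL)]
print(policy)
```

After training, the agent prefers stepping forward in every state, having discovered the rewarding behavior purely through repeated attempts.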

Cassie used reinforcement learning to learn how to walk in a simulation, which isn't the first time this approach has been tried. However, skills learned in simulation do not normally survive the transfer into the real world: even a small difference between the simulated and physical environments can cause the robot to fail to walk. 

The researchers used two simulations rather than one. In the first, an open-source training environment called MuJoCo, the algorithm experimented with and learned from a library of possible movements; in the second, called SimMechanics, the learned movements were tested under conditions closer to those of the real hardware.
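The two-stage pipeline can be sketched in miniature. Everything below — the toy simulator classes, the single tunable gain, the validation threshold — is a hypothetical stand-in for illustration, not the team's actual MuJoCo or SimMechanics code.

```python
# Hypothetical sketch of a two-stage sim-to-real pipeline: learn in a fast
# training simulator, then validate in a higher-fidelity one before hardware.
# The toy "simulators" just score a controller gain against slightly
# different target dynamics.

class FastSim:
    """Stage 1: cheap simulator used for learning (stand-in for MuJoCo)."""
    TARGET = 2.0

    def score(self, gain):
        return -(gain - self.TARGET) ** 2  # higher is better

class HighFidelitySim(FastSim):
    """Stage 2: validation simulator with slightly shifted dynamics
    (stand-in for SimMechanics)."""
    TARGET = 2.1

def learn(sim, candidates):
    # "Learning" here is simply picking the best-scoring candidate gain.
    return max(candidates, key=sim.score)

candidates = [g / 10 for g in range(0, 41)]      # gains 0.0 .. 4.0
learned_gain = learn(FastSim(), candidates)

# Validate in the second simulator; deploy only if performance holds up.
validation_score = HighFidelitySim().score(learned_gain)
ready_for_hardware = validation_score > -0.05
print(learned_gain, ready_for_hardware)
```

The design point this illustrates: the second simulator deliberately differs from the first, so a policy that still performs well there is more likely to survive the jump to the physical robot.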

After being trained in the two simulations, the algorithm needed no further fine-tuning; it was ready for the real world. Not only was it able to walk, it was able to do much more: according to the researchers, Cassie could recover even after two motors in its knee malfunctioned.

While Cassie may not have all the bells and whistles of some other robots, it is in many ways far more impressive. It also has greater implications for real-world use of the technology, as a robot that learns to walk on its own could be deployed across many different sectors.  


Alex McFarland is an AI journalist and writer exploring the latest developments in artificial intelligence. He has collaborated with numerous AI startups and publications worldwide.