Scientists from the University of California, Los Angeles (UCLA) and Carnegie Mellon University have adapted sophisticated computer graphics technology for soft robotics. The same techniques that motion-picture animators and video game developers rely on to render fine details such as hair and fabric are now being used to simulate soft, limbed robots and their movements.
The work was published in Nature Communications on May 6. The paper is titled “Dynamic Simulation of Articulated Soft Robots.”
Khalid Jawed is the study author and an assistant professor of mechanical and aerospace engineering at UCLA Samueli School of Engineering.
“We have achieved faster than real-time simulation of soft robots, and this is a major step toward such robots that are autonomous and can plan out their actions on their own,” said Jawed. “Soft robots are made of flexible material which makes them intrinsically resilient against damage and potentially much safer in interaction with humans. Prior to this study, predicting the motion of these robots has been challenging because they change shape during operation.”
DER and FEM Technologies
An algorithm called discrete elastic rods (DER) is often used in filmmaking to animate free-flowing objects. DER can predict hundreds of movements in just a fraction of a second.
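To give a feel for the idea behind DER, here is a minimal, illustrative sketch (not the authors' code): a rod is discretized into a chain of nodes connected by stretching springs, with a crude bending penalty on discrete curvature, stepped forward with symplectic Euler. All parameter names and values (`ks`, `kb`, `dt`, node mass, etc.) are our own assumptions for illustration.

```python
import numpy as np

# Illustrative 2D discrete-rod sketch: a chain of nodes with
# stretching springs between neighbors and a bending penalty on
# discrete curvature, clamped at one end and sagging under gravity.
n = 20                      # number of nodes
rest_len = 0.05             # rest length of each segment (m)
ks, kb = 1e3, 1e-2          # stretching / bending stiffness (assumed)
mass = 1e-3                 # mass per node (kg)
dt, steps = 1e-4, 2000      # time step (s) and step count
g = np.array([0.0, -9.81])  # gravity (m/s^2)

# Start as a horizontal rod clamped at node 0
x = np.stack([np.arange(n) * rest_len, np.zeros(n)], axis=1)
v = np.zeros_like(x)

def forces(x):
    f = np.tile(mass * g, (n, 1))       # gravity on every node
    # Stretching: Hookean springs along each segment
    d = x[1:] - x[:-1]
    length = np.linalg.norm(d, axis=1, keepdims=True)
    fs = ks * (length - rest_len) * d / length
    f[:-1] += fs
    f[1:] -= fs
    # Bending: pull each interior node toward the midpoint of its
    # neighbors (a crude discrete-curvature penalty)
    lap = x[:-2] - 2.0 * x[1:-1] + x[2:]
    fb = kb * lap / rest_len**2
    f[1:-1] += fb
    f[:-2] -= 0.5 * fb
    f[2:] -= 0.5 * fb
    return f

for _ in range(steps):
    v += dt * forces(x) / mass          # symplectic Euler: velocity first
    v *= 0.999                          # light numerical damping
    x += dt * v
    x[0], v[0] = [0.0, 0.0], 0.0        # re-clamp the first node

# After 0.2 s the free end has sagged below its starting height
print(x[-1, 1] < 0.0)
```

Production DER implementations track twist and use implicit integration for stiff materials; this explicit sketch only conveys the node-and-elastic-energy structure of the method.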
The researchers set out to use DER to develop a physics engine capable of simulating the movements of bio-inspired robots, as well as robots designed to operate in difficult environments, such as on Mars or underwater.
The finite element method (FEM) is another algorithm-based technology, one that can simulate the movements of solid and rigid robots. However, FEM is not well suited to soft, natural movements at the required level of detail, and it demands substantial computational power and long run times.
Lacking better tools, roboticists have relied on trial-and-error methods to develop and simulate soft robots.
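For contrast, the FEM approach can be illustrated with a tiny, self-contained example (again, not the authors' code): a 1D elastic bar fixed at one end and pulled at the other, where each linear element contributes a 2x2 stiffness block and the assembled system is solved for nodal displacements. The material and load values here are arbitrary illustrative choices.

```python
import numpy as np

# Illustrative 1D FEM: an elastic bar fixed at the left end and
# pulled axially at the right. Each linear element contributes a
# 2x2 stiffness block (E*A/h) * [[1, -1], [-1, 1]].
E, A = 2e11, 1e-4      # Young's modulus (Pa), cross-section (m^2)
L, n_el = 1.0, 10      # bar length (m), number of elements
F = 1000.0             # axial load at the free end (N)

h = L / n_el           # element length
k = E * A / h          # per-element stiffness
n_nodes = n_el + 1
K = np.zeros((n_nodes, n_nodes))
for e in range(n_el):  # assemble the global stiffness matrix
    K[e:e + 2, e:e + 2] += k * np.array([[1.0, -1.0], [-1.0, 1.0]])

f = np.zeros(n_nodes)
f[-1] = F              # point load at the last node

# Enforce the fixed boundary at node 0 by solving on the free nodes
u = np.zeros(n_nodes)
u[1:] = np.linalg.solve(K[1:, 1:], f[1:])

# Analytic tip displacement for this load case: F*L/(E*A)
print(u[-1])  # 5e-05 m
```

Even this toy problem requires assembling and solving a linear system each step; for a deforming 3D soft body that system is far larger and must be rebuilt as the geometry changes, which is one reason FEM becomes slow for soft robots.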
Carmel Majidi is an associate professor of mechanical engineering in Carnegie Mellon’s College of Engineering.
“Robots made out of hard and inflexible materials are relatively easy to model using existing computer simulation tools,” said Majidi. “Until now, there haven’t been good software tools to simulate robots that are soft and squishy. Our work is one of the first to demonstrate how soft robots can be successfully simulated using the same computer graphics software that has been used to model hair and fabrics in blockbuster films and animated movies.”
The researchers began to collaborate in Majidi’s Soft Machines Lab over three years ago. Their most recent project involved Jawed running simulations in his research lab at UCLA and Majidi performing physical experiments to confirm the simulation results.
The simulation tool drastically reduces the time it takes to move a soft robot from design to practical application.
Support from the Army Research Office
The research was partly funded by the Army Research Office, which is a part of the U.S. Army Combat Capabilities Development Command’s Army Research Laboratory.
Dr. Samuel Stanton is a program manager with the Army Research Office.
“Experimental advances in soft-robotics have been outpacing theory for several years,” said Stanton. “This effort is a significant step in our ability to predict and design for dynamics and control in highly deformable robots operating in confined spaces with complex contacts and constantly changing environments.”
The technology is now being extended to other kinds of soft robots, including robots inspired by the movements of bacteria and starfish, which could be used in oceanography tasks such as monitoring seawater conditions or inspecting marine life.