

Human Language Accelerates Robotic Learning


Image: Princeton University

A team of researchers at Princeton has found that human-language descriptions of tools can accelerate the learning of a simulated robotic arm that can lift and use various tools.

The new research supports the idea that this kind of AI training can make autonomous robots more adaptive to new situations, which in turn improves their effectiveness and safety.

Adding descriptions of a tool’s form and function to the training process improved the robot’s ability to manipulate new tools.

ATLA Method for Training

The new method is called Accelerated Learning of Tool Manipulation with Language, or ATLA.

Anirudha Majumdar is an assistant professor of mechanical and aerospace engineering at Princeton and head of the Intelligent Robot Motion Lab.

“Extra information in the form of language can help a robot learn to use the tools more quickly,” Majumdar said.

The team queried the language model GPT-3 to obtain tool descriptions. After trying out various prompts, they decided to use “Describe the [feature] of [tool] in a detailed and scientific response,” with the feature being the shape or purpose of the tool.
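As a rough illustration, a query of this kind might look like the minimal sketch below. It assumes the legacy OpenAI Python completions API and a placeholder GPT-3 engine name; the exact model, sampling parameters, and post-processing used in the study are not specified here.

```python
# Minimal sketch of querying GPT-3 for tool descriptions via the legacy
# OpenAI Completions API (pre-1.0 openai package). The engine name and
# parameter settings are illustrative assumptions, not the paper's setup.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

def describe_tool(tool: str, feature: str) -> str:
    """Query GPT-3 with the prompt template reported in the article."""
    prompt = f"Describe the {feature} of {tool} in a detailed and scientific response."
    response = openai.Completion.create(
        engine="text-davinci-002",  # assumed GPT-3 engine
        prompt=prompt,
        max_tokens=128,
        temperature=0.7,
    )
    return response.choices[0].text.strip()

# Example: fetch shape and purpose descriptions for one training tool.
print(describe_tool("an axe", "shape"))
print(describe_tool("an axe", "purpose"))
```

In this sketch, each tool would be queried twice, once for its shape and once for its purpose, mirroring the two features named in the prompt template.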

Karthik Narasimhan is an assistant professor of computer science and coauthor of the study. Narasimhan is also a lead faculty member in Princeton’s natural language processing (NLP) group and contributed to the original GPT language model as a visiting research scientist at OpenAI.

“Because these language models have been trained on the internet, in some sense you can think of this as a different way of retrieving that information more efficiently and comprehensively than using crowdsourcing or scraping specific websites for tool descriptions,” Narasimhan said.

Simulated Robot Learning Experiments

The team selected a training set of 27 tools for their simulated robot learning experiments, with the tools ranging from an axe to a squeegee. The robotic arm was given four different tasks: push the tool, lift the tool, use it to sweep a cylinder along a table, or hammer a peg into a hole.

The team then trained a suite of policies using machine learning approaches with and without language information, and compared the policies’ performance on a separate test set of nine tools with paired descriptions.

The approach, called meta-learning, improves the robot’s ability to learn with each successive task.

According to Narasimhan, the robot is not only learning to use each tool, but also “trying to learn to understand the descriptions of each of these hundred different tools, so when it sees the 101st tool it’s faster in learning to use the new tool.”
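The paper’s training pipeline is more involved, but the outer-loop/inner-loop structure of meta-learning can be sketched roughly as below. The toy regression “tool tasks,” the linear policy, and the Reptile-style meta-update are purely illustrative assumptions, not the ATLA algorithm itself.

```python
# Rough sketch of a meta-learning loop over tool tasks (Reptile-style,
# first-order). The toy regression "tasks", linear parameters, and update
# rule are illustrative assumptions only; ATLA's actual method differs.
import numpy as np

rng = np.random.default_rng(0)
w_base = rng.normal(size=4)  # shared structure across hypothetical tools

def make_tool_task():
    """Hypothetical tool task: a small regression problem near w_base."""
    w_true = w_base + 0.1 * rng.normal(size=4)
    X = rng.normal(size=(32, 4))
    return X, X @ w_true

def inner_adapt(w, X, y, steps=10, lr=0.05):
    """A few gradient steps adapting the shared parameters to one tool."""
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

# Outer loop: the shared initialization improves with each successive task.
w_meta = np.zeros(4)
for _ in range(100):
    X, y = make_tool_task()
    w_adapted = inner_adapt(w_meta, X, y)
    w_meta = w_meta + 0.1 * (w_adapted - w_meta)  # Reptile-style meta-update

# A new, unseen tool: adaptation from the meta-learned start is faster.
X_new, y_new = make_tool_task()
w_new = inner_adapt(w_meta, X_new, y_new, steps=3)
print("loss after 3 adaptation steps:", np.mean((X_new @ w_new - y_new) ** 2))
```

The point of the sketch is only the structure: an inner loop that adapts to one tool, and an outer loop whose shared starting point gets better with every tool seen, so a new tool needs fewer adaptation steps.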

In most of the experiments, the language information provided significant advantages for the robot’s ability to use new tools.

Allen Z. Ren is a Ph.D. student in Majumdar’s group and lead author of the research paper.

“With the language training, it learns to grasp at the long end of the crowbar and use the curved surface to better constrain the movement of the bottle,” Ren said. “Without the language, it grasped the crowbar close to the curved surface and it was harder to control.”

“The broad goal is to get robotic systems — specifically, ones that are trained using machine learning — to generalize to new environments,” Majumdar added.
