Researchers at the U.S. Army Combat Capabilities Development Command’s Army Research Laboratory and the University of Texas at Austin have developed an algorithm that could have big implications for autonomous vehicles. With the algorithm, autonomous ground vehicles are able to improve their own navigation systems by watching a human drive.
The approach developed by the researchers is called adaptive planner parameter learning from demonstration, or APPLD. It was tested on an Army experimental autonomous ground vehicle.
The research was published in IEEE Robotics and Automation Letters. The work is titled “APPLD: Adaptive Planner Parameter Learning From Demonstration.”
Dr. Garrett Warnell is an Army researcher.
“Using approaches like APPLD, current soldiers in existing training facilities will be able to contribute to improvements in autonomous systems simply by operating their vehicles as normal,” Warnell said. “Techniques like these will be an important contribution to the Army’s plans to design and field next-generation combat vehicles that are equipped to navigate autonomously in off-road deployment environments.”
To develop the new system, the researchers combined machine learning from demonstration algorithms with classical autonomous navigation systems. A key advantage of this approach is that APPLD improves an existing system so that it behaves more like a human, rather than replacing the classical system entirely.
As a result, the deployed system retains the optimality, explainability and safety of classical navigation systems, while gaining the flexibility to adapt to new environments.
“A single demonstration of human driving, provided using an everyday Xbox wireless controller, allowed APPLD to learn how to tune the vehicle's existing autonomous navigation system differently depending on the particular local environment,” Warnell said. “For example, when in a tight corridor, the human driver slowed down and drove carefully. After observing this behavior, the autonomous system learned to also reduce its maximum speed and increase its computation budget in similar environments. This ultimately allowed the vehicle to successfully navigate autonomously in other tight corridors where it had previously failed.”
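The idea Warnell describes can be sketched in code. The toy example below is an illustrative assumption, not the actual APPLD implementation: a hypothetical `planner_speed` function stands in for a classical planner, a pre-segmented demonstration provides human speeds in two contexts, and a simple random search tunes the planner's `max_speed` parameter per context to best reproduce the human's driving.

```python
import random

# Hypothetical stand-in for a classical planner: given a tunable
# max_speed parameter and the sensed clearance (0..1), it produces
# a speed command, slowing down proportionally in tight spaces.
def planner_speed(max_speed, clearance):
    return min(max_speed, max_speed * clearance)

# A single human demonstration, already split into contexts
# (e.g., "open hall" vs. "tight corridor"). Each entry pairs the
# sensed clearance with the speed the human actually drove.
# These numbers are invented for illustration.
demo = {
    "open":     [(0.9, 1.8), (1.0, 2.0), (0.8, 1.6)],
    "corridor": [(0.3, 0.15), (0.2, 0.1), (0.25, 0.12)],
}

def imitation_loss(max_speed, samples):
    # Squared gap between the planner's commands and the human's.
    return sum((planner_speed(max_speed, c) - v) ** 2 for c, v in samples)

def tune(samples, trials=2000, seed=0):
    # Black-box search over max_speed, keeping whichever value
    # best reproduces the demonstration in this context.
    rng = random.Random(seed)
    best, best_loss = None, float("inf")
    for _ in range(trials):
        cand = rng.uniform(0.0, 3.0)
        loss = imitation_loss(cand, samples)
        if loss < best_loss:
            best, best_loss = cand, loss
    return best

# One parameter set per context, learned from the same demonstration.
params = {ctx: tune(samples) for ctx, samples in demo.items()}
```

Because the human drove slowly in the corridor samples, the tuned corridor `max_speed` comes out far lower than the open-space one, mirroring the behavior described above. The published method uses a more sophisticated optimizer over the full set of planner parameters, but the structure, matching planner output to a human demonstration separately per context, is the same.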
The results showed that the trained APPLD system could navigate the test environments more quickly and with fewer failures than the classical system. It could even navigate faster than the human who trained it.
Dr. Peter Stone is a professor and chair of the Robotics Consortium at UT Austin.
“From a machine learning perspective, APPLD contrasts with so called end-to-end learning systems that attempt to learn the entire navigation system from scratch,” Stone said. “These approaches tend to require a lot of data and may lead to behaviors that are neither safe nor robust. APPLD leverages the parts of the control system that have been carefully engineered, while focusing its machine learning effort on the parameter tuning process, which is often done based on a single person's intuition.”
The new system allows non-experts in the field of robotics to train and improve autonomous vehicle navigation. For example, an unlimited number of users could provide the data needed for the system to improve itself, rather than relying on a group of expert engineers to manually alter the system.
Dr. Jonathan Fink is an Army researcher.
“Current autonomous navigation systems typically must be re-tuned by hand for each new deployment environment,” said Fink. “This process is extremely difficult — it must be done by someone with extensive training in robotics, and it requires a lot of trial and error until the right systems settings can be found. In contrast, APPLD tunes the system automatically by watching a human drive the system — something that anyone can do if they have experience with a video game controller. During deployment, APPLD also allows the system to re-tune itself in real-time as the environment changes.”
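The runtime re-tuning Fink describes can be sketched as well. This is a simplified, hypothetical illustration: a single sensed feature (clearance) classifies the current context, and the matching pre-learned parameter set is swapped into the planner's configuration on the fly. The parameter values and the `learned` table are invented for the example.

```python
# Hypothetical parameter sets, assumed to have been learned from a
# demonstration during training (values invented for illustration).
learned = {
    "open":     {"max_speed": 2.0, "compute_budget_ms": 50},
    "corridor": {"max_speed": 0.5, "compute_budget_ms": 200},
}

def detect_context(clearance, threshold=0.5):
    # Crude context classifier: tight spaces vs. open ones.
    return "corridor" if clearance < threshold else "open"

def apply_parameters(planner_config, clearance):
    # Re-tune the running planner to match the current environment.
    ctx = detect_context(clearance)
    planner_config.update(learned[ctx])
    return planner_config

config = {"max_speed": 1.0, "compute_budget_ms": 100}
apply_parameters(config, clearance=0.2)  # tight space -> careful settings
```

In the real system the context would be inferred from the vehicle's sensor stream rather than a single number, but the effect is the one Fink describes: as the environment changes, the system switches to the parameter set learned for that kind of environment.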
This capability would be valuable to the Army, which is currently developing modernized optionally manned fighting vehicles and robotic combat vehicles. At present, many deployment environments remain too complex for even the best autonomous navigation systems.
Dr. Xuesu Xiao is a postdoctoral researcher at UT Austin and lead author of the paper.
“In addition to the immediate relevance to the Army, APPLD also creates the opportunity to bridge the gap between traditional engineering approaches and emerging machine learning techniques, to create robust, adaptive, and versatile mobile robots in the real world,” said Xiao.
The APPLD system will next be tested in a variety of outdoor environments. The researchers will also investigate whether additional sensor information can help the system learn more complex behaviors.