
Algorithm Enables Visual Terrain-Relative Navigation in Autonomous Vehicles


A new deep learning algorithm developed by researchers at the California Institute of Technology (Caltech) enables autonomous systems to recognize where they are by observing the surrounding terrain. For the first time, the technology works regardless of seasonal changes to that terrain.

The research was published on June 23 in Science Robotics, a journal of the American Association for the Advancement of Science (AAAS).

Visual Terrain-Relative Navigation

The process is called visual terrain-relative navigation (VTRN), and it was first developed in the 1960s. Autonomous systems can locate themselves through VTRN by comparing nearby terrain to high-resolution satellite images.

However, the current generation of VTRN requires the terrain it observes to closely match the images in its database. Any alteration to the terrain, such as snow cover or fallen leaves, causes the comparison to fail because the images no longer match. As a result, a VTRN system is easily confused unless its database contains images of the landscape under every conceivable condition.
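
To make that failure mode concrete, here is a minimal sketch of the classic correlation-based matching that underlies VTRN, assuming OpenCV, grayscale imagery, and placeholder file names (this is an illustration, not the team's code):

```python
import cv2

# Placeholder inputs: a georeferenced satellite tile and a patch of terrain
# seen by the vehicle's downward-facing camera.
satellite_map = cv2.imread("satellite_tile.png", cv2.IMREAD_GRAYSCALE)
onboard_view = cv2.imread("camera_patch.png", cv2.IMREAD_GRAYSCALE)

# Slide the onboard patch across the map and score every position with
# normalized cross-correlation; this tolerates brightness shifts, but not
# changed content such as snow cover or leaf-off trees.
scores = cv2.matchTemplate(satellite_map, onboard_view, cv2.TM_CCOEFF_NORMED)
_, best_score, _, best_xy = cv2.minMaxLoc(scores)

if best_score > 0.8:  # illustrative confidence threshold
    print(f"Position fix at map pixel {best_xy} (score {best_score:.2f})")
else:
    print(f"No reliable match (best score {best_score:.2f})")  # seasonal mismatch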

The team behind the project comes from the lab of Soon-Jo Chung, Bren Professor of Aerospace and Control and Dynamical Systems at Caltech and research scientist at JPL. The researchers used deep learning and artificial intelligence (AI) to remove the seasonal content that trips up current VTRN systems.

Anthony Fragoso, a lecturer and staff scientist at Caltech, is the lead author of the Science Robotics paper.

“The rule of thumb is that both images — the one from the satellite and the one from the autonomous vehicle — have to have identical content for current techniques to work. The differences that they can handle are about what can be accomplished with an Instagram filter that changes an image's hues,” says Fragoso. “In real systems, however, things change drastically based on season because the images no longer contain the same objects and cannot be directly compared.”

(Video: Autonomous Navigation with Improved Visual Terrain Recognition)

Self-Supervised Learning

The process was developed by Chung and Fragoso in collaboration with graduate student Connor Lee and undergraduate student Austin McCoy, and it uses “self-supervised learning.”

Most computer-vision strategies rely on human annotators to carefully curate large datasets that teach an algorithm how to recognize what it is seeing. This process instead lets the algorithm teach itself: the AI detects patterns in images by teasing out details and features that human eyes would likely miss.
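
The article does not detail the team's architecture, but a generic self-supervised setup might look like the following PyTorch sketch, in which an encoder learns, without any human labels, to map two seasonal views of the same location to similar feature vectors. The layer sizes, names, and toy data are all assumptions for illustration:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Small convolutional encoder mapping an image patch to a feature vector.
encoder = nn.Sequential(
    nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 64),
)
optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-3)

# Toy stand-in for co-registered summer/winter patches of the same places.
summer = torch.randn(8, 1, 64, 64)
winter = summer + 0.5 * torch.randn_like(summer)  # simulated seasonal change

for step in range(100):
    z_summer = F.normalize(encoder(summer), dim=1)
    z_winter = F.normalize(encoder(winter), dim=1)
    # The supervision signal comes from the data itself: embeddings of the
    # same location in different seasons are pulled together. (Real systems
    # also push apart different locations to prevent a collapsed solution.)
    loss = 1.0 - (z_summer * z_winter).sum(dim=1).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()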

Supplementing the current generation of VTRN with the new system yields far more accurate localization. In one experiment, the researchers attempted to localize images of summer foliage against winter leaf-off imagery using a correlation-based VTRN technique: 50 percent of attempts ended in navigation failures. With the new algorithm inserted into the VTRN pipeline, 92 percent of attempts were correctly matched, and the remaining 8 percent could be identified as problematic in advance.
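
That last figure, identifying the remaining 8 percent as problematic in advance, suggests the matcher's own confidence can be used as a gate. A hypothetical sketch of that idea, with made-up scores and an illustrative threshold:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical peak-correlation scores for 100 localization attempts; in a
# real system these would come out of the VTRN matcher itself.
scores = rng.uniform(0.2, 1.0, size=100)

CONFIDENCE_THRESHOLD = 0.6  # illustrative cutoff, not from the paper

accepted = scores >= CONFIDENCE_THRESHOLD
print(f"Accepted position fixes: {accepted.sum()} / {scores.size}")
print(f"Flagged as problematic:  {(~accepted).sum()} / {scores.size}")

# Flagged attempts are rejected up front, so the navigator can fall back on
# inertial dead reckoning rather than ingest a bad position fix.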

“Computers can find obscure patterns that our eyes can't see and can pick up even the smallest trend,” says Lee. “VTRN was in danger of turning into an infeasible technology in common but challenging environments. We rescued decades of work in solving this problem.”

Applications in Space

The new system is useful not only for autonomous drones on Earth but also for space missions. JPL's Mars 2020 Perseverance rover mission used VTRN during entry, descent, and landing at Jezero Crater, a site previously considered too dangerous for a safe landing.

According to Chung, for rovers like Perseverance, “a certain amount of autonomous driving is necessary since transmissions take seven minutes to travel between Earth and Mars, and there is no GPS on Mars.” 

The team believes the new system could also be used in the Martian polar regions, which undergo intense seasonal changes. There, it could enable improved navigation in support of scientific objectives such as the search for water.

The team will now expand the technology to account for weather changes, such as fog, rain, and snow. This work could lead to improved navigation systems for self-driving cars.

Alex McFarland is an AI journalist and writer exploring the latest developments in artificial intelligence. He has collaborated with numerous AI startups and publications worldwide.