A team of researchers at Cornell University has developed a new method enabling autonomous vehicles to create “memories” of previous experiences, which can then be used in future navigation. This will be especially useful when self-driving cars cannot rely on their sensors alone, such as in bad weather.
Learning From the Past
Current self-driving cars that use artificial neural networks have no memory of the past, meaning they are constantly “seeing” things for the first time. And this is true regardless of how many times they’ve driven the exact same road.
Kilian Weinberger, professor of computer science, is the senior author of the research.
“The fundamental question is, can we learn from repeated traversals?” said Weinberger. “For example, a car may mistake a weirdly shaped tree for a pedestrian the first time its laser scanner perceives it from a distance, but once it is close enough, the object category will become clear. So, the second time you drive past the very same tree, even in fog or snow, you would hope that the car has now learned to recognize it correctly.”
Led by doctoral student Carlos Diaz-Ruiz, the group created a dataset by driving a car equipped with LiDAR sensors around a 15-kilometer loop 40 times over an 18-month period. The test drives captured varied environments, weather conditions, and times of day, yielding a dataset of more than 600,000 scenes.
“It deliberately exposes one of the key challenges in self-driving cars: poor weather conditions,” said Diaz-Ruiz. “If the street is covered by snow, humans can rely on memories, but without memories a neural network is heavily disadvantaged.”
HINDSIGHT and MODEST
One of the approaches, termed HINDSIGHT, uses neural networks to compute descriptors of objects as the car passes them. These descriptors, termed SQuaSH, are then compressed and stored on a virtual map, creating a type of “memory” similar to how we store our own memories in the brain.
When the self-driving car traverses the same location in the future, it queries the local SQuaSH database for every LiDAR point along the route, “remembering” what it learned. The continuously updated database is shared across vehicles, helping improve recognition by providing more information.
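The store-then-recall idea can be sketched as a small toy example. This is not the actual SQuaSH implementation; it only illustrates the pattern the article describes, under the assumption that descriptors are keyed by a quantized world coordinate (the voxel size and the two-element feature vectors are invented for illustration):

```python
import numpy as np
from collections import defaultdict

VOXEL_SIZE = 1.0  # assumed grid resolution in meters


class SpatialMemory:
    """Toy spatial memory: stores compressed per-location descriptors
    keyed by a quantized world coordinate, and returns them on later
    traversals of the same location."""

    def __init__(self):
        self._store = defaultdict(list)

    @staticmethod
    def _key(xyz):
        # Quantize a world coordinate to a voxel index.
        return tuple(np.floor(np.asarray(xyz) / VOXEL_SIZE).astype(int))

    def write(self, xyz, descriptor):
        # Record a (hypothetical) compressed feature vector for this voxel.
        self._store[self._key(xyz)].append(np.asarray(descriptor, dtype=float))

    def query(self, xyz):
        # Retrieve past descriptors for this voxel; average them so a
        # detector could consume one aggregated "memory" feature per location.
        past = self._store.get(self._key(xyz))
        if not past:
            return None
        return np.mean(past, axis=0)


# First traversal: record a descriptor near an object at (10.2, 5.1, 0.3).
mem = SpatialMemory()
mem.write((10.2, 5.1, 0.3), [0.9, 0.1])

# Second traversal: a nearby LiDAR point falls in the same voxel and
# retrieves the stored memory.
recalled = mem.query((10.4, 5.3, 0.2))
```

In the real system the recalled features augment a LiDAR-based 3D detector; here the query simply returns the averaged vector.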
“This information can be added as features to any LiDAR-based 3D object detector,” said doctoral student Yurong You. “Both the detector and the SQuaSH representation can be trained jointly without any additional supervision, or human annotation, which is time- and labor-intensive.”
HINDSIGHT will also support the team’s follow-on research, MODEST (Mobile Object Detection with Ephemerality and Self-Training), which takes the process further and enables the car to learn the entire perception pipeline.
HINDSIGHT assumes that the artificial neural network is already trained to detect objects and augments it with the ability to create memories, while MODEST assumes the artificial neural network has never been exposed to any objects or streets. After multiple traversals of the same route, it learns which parts of the environment are stationary and which are moving objects. This process enables the system to teach itself which parts of the scene are other traffic participants it should be paying attention to.
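The persistence cue behind this idea can be illustrated with a toy score: a location occupied in most traversals is likely static background, while one occupied only occasionally is likely a mobile object. This is a simplified sketch, not the MODEST algorithm; the voxel size and the point clouds are invented for illustration:

```python
import numpy as np
from collections import Counter

VOXEL = 0.5  # assumed grid resolution in meters


def ephemerality(traversals, voxel=VOXEL):
    """Toy persistence score per occupied voxel across traversals.

    `traversals` is a list of (N_i, 3) point-cloud arrays covering the
    same area. Returns a dict mapping voxel index -> score in [0, 1]:
    0 means seen in every traversal (static), 1 means seen only rarely
    (ephemeral, i.e. a candidate mobile object)."""
    counts = Counter()
    for cloud in traversals:
        # Count each occupied voxel at most once per traversal.
        keys = {tuple(k) for k in np.floor(np.asarray(cloud) / voxel).astype(int)}
        counts.update(keys)
    n = len(traversals)
    return {k: 1.0 - c / n for k, c in counts.items()}


# Three traversals: a wall appears every time; a "parked car" only once.
wall = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
car = np.array([[9.0, 0.0, 0.0]])
scores = ephemerality([wall, wall, np.vstack([wall, car])])
```

High-scoring voxels mark candidate mobile objects, which a self-training loop could then use as pseudo-labels to bootstrap a detector without human annotation.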
The algorithm demonstrated an ability to reliably detect objects even on roads that were not part of the initial traversals.
The team believes these new approaches could reduce the development cost of autonomous vehicles, as well as make them more efficient.