David Lindell, a graduate student in electrical engineering at Stanford University, and his team have developed a camera that can watch moving objects around corners. To test the new technology, Lindell wore a high-visibility tracksuit and moved around an empty room while a camera, aimed at a blank wall away from him, tracked his movements. A high-powered laser scanned the wall, and single particles of light reflected off the surfaces around Lindell back to the camera, where advanced sensors and a processing algorithm reconstructed the images.
Gordon Wetzstein, assistant professor of electrical engineering at Stanford, spoke about the newly developed technology.
“People talk about building a camera that can see as well as humans for applications such as autonomous cars and robots, but we want to build systems that go well beyond that,” he said. “We want to see things in 3D, around corners and beyond the visible light spectrum.”
The camera system that was tested will be presented at the SIGGRAPH 2019 conference on August 1.
The team has developed similar around-the-corner cameras before, but this one captures more light from a greater variety of surfaces, sees wider and farther, and can monitor out-of-sight movement. The researchers hope these “superhuman vision systems” will help autonomous cars and robots operate more safely than vehicles under human control.
One of the team’s main goals is to keep the system practical. It uses hardware, scanning and image-processing speeds, and styles of imaging that are already common in autonomous-car vision systems. Unlike earlier systems for seeing outside a camera’s line of sight, which worked only on objects that reflected light strongly and evenly, the new system can capture light bouncing off surfaces with many different textures.
One development that made the technology possible is a laser 10,000 times more powerful than the one the team used last year. It scans a wall opposite the scene of interest; the light bounces off the wall, hits the objects in the scene, and returns to the wall and the camera’s sensors. The sensor picks up individual specks of the laser light and passes them to an algorithm, also developed by the team, which deciphers the specks to reconstruct the images.
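The payoff of a 10,000-times-more-powerful laser can be estimated with a standard back-of-the-envelope approximation (not a figure from the team's paper): for diffuse surfaces, the signal returned in around-the-corner imaging falls off roughly as the fourth power of distance, because the light spreads out on both the outgoing and the returning bounce. Under that assumption, the extra power translates into about ten times the range:

```python
# Back-of-the-envelope range estimate. The 1/r**4 falloff is a standard
# approximation for diffuse non-line-of-sight imaging, used here as an
# illustrative assumption; the numbers are not measurements.

def returned_signal(power, r):
    """Relative detected signal for laser power `power` (arbitrary units)
    with the hidden object at distance r from the relay wall."""
    return power / r ** 4

# To keep the detected signal constant while power grows 10,000x,
# the range can grow by 10,000 ** (1/4) = 10x.
range_gain = 10_000 ** 0.25
print(range_gain)

# Sanity check: 10,000x the power at 10x the distance gives the same signal.
assert returned_signal(10_000, 10.0) == returned_signal(1, 1.0)
```

The fourth-root relationship is why raw laser power alone gives diminishing returns, and why the team's sensor and algorithm improvements matter as much as the brighter source.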
“When you’re watching the laser scanning it out, you don’t see anything,” Lindell said. “With this hardware, we can basically slow down time and reveal these tracks of light. It almost looks like magic.”
The new system scans at four frames per second and, with a computer graphics processing unit enhancing its capabilities, can reconstruct scenes at up to 60 frames per second.
The team drew inspiration from other fields, such as seismic imaging, in which sound waves are bounced off underground layers of the Earth to reveal what lies beneath the surface. The researchers reconfigured that style of algorithm to decipher light bouncing off hidden objects instead.
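The seismic-style reconstruction idea can be illustrated with a toy backprojection: record when light returns to each point on the wall, then, for every candidate location behind the corner, add up the measurements at the round-trip times that location would produce. Everything here is a simplifying assumption for illustration (a 2D scene, unit light speed, a confocal time-of-flight model, hand-picked grid sizes); the team's actual algorithm is far more sophisticated.

```python
import numpy as np

C = 1.0  # speed of light in arbitrary units (toy assumption)

# Sample points along the relay wall that the laser scans
wall = np.linspace(-1.0, 1.0, 64)

# One hidden point around the corner, at (x, depth) relative to the wall
hidden = np.array([0.2, 0.5])

# Simulate confocal measurements: for each wall point, a photon's
# round-trip time is 2 * distance(wall point, hidden point) / c.
dists = np.hypot(wall - hidden[0], hidden[1])
times = 2.0 * dists / C

# Discretize arrival times into a transient histogram per wall point
n_bins, t_max = 200, 4.0 / C
hist = np.zeros((wall.size, n_bins))
bins = np.clip((times / t_max * n_bins).astype(int), 0, n_bins - 1)
hist[np.arange(wall.size), bins] = 1.0

# Backprojection: for each candidate voxel, sum the histogram values at
# the round-trip time that voxel would produce. Voxels where a hidden
# object actually sits accumulate energy from every wall point at once.
xs = np.linspace(-1.0, 1.0, 41)
zs = np.linspace(0.1, 1.0, 41)
volume = np.zeros((xs.size, zs.size))
for i, x in enumerate(xs):
    for j, z in enumerate(zs):
        d = np.hypot(wall - x, z)
        b = np.clip((2.0 * d / C / t_max * n_bins).astype(int), 0, n_bins - 1)
        volume[i, j] = hist[np.arange(wall.size), b].sum()

# The brightest voxel lands near the true hidden point
i, j = np.unravel_index(volume.argmax(), volume.shape)
print(round(xs[i], 2), round(zs[j], 2))
```

Seismic migration works the same way with sound: each receiver's arrival times constrain the reflector to an arc, and the arcs from many receivers intersect only at the true location.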
Matthew O’Toole, assistant professor at Carnegie Mellon University and previous postdoctoral fellow in Wetzstein’s lab, spoke about the new technology.
“There are many ideas being used in other spaces — seismology, imaging with satellites, synthetic aperture radar — that are applicable to looking around corners,” he said. “We’re trying to take a little bit from these fields and we’ll hopefully be able to give something back to them at some point.”
The team’s next step is to test the system on autonomous research cars. They also want to explore its use in other areas, such as medical imaging, and in helping drivers cope with visibility problems such as fog, rain, sandstorms, and snow.