Silicon Image Sensor Speeds Up, Simplifies Image Processing for Autonomous Vehicles - Unite.AI


Image: Donhee Ham Research Group/Harvard SEAS

A team of researchers at the Harvard John A. Paulson School of Engineering and Applied Sciences has developed the first in-sensor processor that could be integrated into commercial silicon imaging sensor chips. These sensors are known as complementary metal-oxide-semiconductor (CMOS) image sensors, and they are used in a wide range of commercial devices that capture visual information. 

The new device speeds up and simplifies processing for autonomous vehicles and other applications. 

Autonomous Vehicles and Visual Processing

In autonomous vehicles, the time between a system capturing an image and that data being delivered to the microprocessor for image processing can have major implications. It is a crucial window that can mean the difference between avoiding an obstacle and being involved in an accident. 

Visual processing can be sped up by in-sensor image processing, in which important features are extracted from the raw data by the image sensor itself rather than by a separate microprocessor. Until now, however, in-sensor processing has been limited to emerging research materials, which are difficult to incorporate into commercial systems. 

This is what makes the new development such a big deal. 

The team published their research in Nature Electronics.

In-Sensor Computing

Donhee Ham is the Gordon McKay Professor of Electrical Engineering and Applied Physics at SEAS and senior author of the paper. 

“Our work can harness the mainstream semiconductor electronics industry to rapidly bring in-sensor computing to a wide variety of real-world applications,” Ham said. 

The team developed a silicon photodiode array of the kind also used in commercially available image-sensing chips to capture images. The team's photodiodes, however, are electrostatically doped, meaning the sensitivity of each individual photodiode to incoming light can be tuned by applied voltages. 

When multiple voltage-tunable photodiodes are connected in an array, they can perform an analog version of the multiplication and addition operations that are central to image-processing pipelines. This makes it possible to extract relevant visual information at the moment the image is captured. 
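The idea can be sketched in software. Treating each photodiode's voltage-tuned sensitivity as a weight, the output currents of the diodes multiply light intensity by that weight, and summing the currents on a shared wire performs the addition. The function and values below are illustrative assumptions, not from the paper; this is a minimal numerical model, not the device's actual circuit.

```python
import numpy as np

def in_sensor_dot_product(light_intensities, responsivities):
    """Model one column of voltage-tunable photodiodes.

    Each diode's output current is (intensity * responsivity);
    summing the currents on a shared line yields the analog
    multiply-and-add the article describes.
    """
    currents = light_intensities * responsivities  # per-diode multiplication
    return currents.sum()                          # summation on the shared wire

# A 3-pixel patch of incoming light and a set of voltage-programmed weights.
patch = np.array([0.2, 0.9, 0.4])
weights = np.array([1.0, -2.0, 1.0])
result = in_sensor_dot_product(patch, weights)
```

In the real device this multiply-and-add happens in the analog domain during capture, so no digitized raw image ever needs to be shipped to a microprocessor for this first processing stage.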

Houk Jang is a postdoctoral fellow at SEAS and first author of the paper. 

“These dynamic photodiodes can concurrently filter images as they are captured, allowing for the first stage of vision processing to be moved from the microprocessor to the sensor itself,” Jang said. 

The silicon photodiode array can be programmed into different image filters to remove unnecessary details or noise for a given application. In an imaging system for a self-driving vehicle, for example, it could act as a high-pass filter to track lane markings. 

Henry Hinton is a graduate student at SEAS and co-first author of the paper. 

“Looking ahead, we foresee the use of this silicon-based in-sensor processor not only in machine vision applications, but also in bio-inspired applications, wherein early information processing allows for the co-location of sensor and compute units, like in the brain,” Hinton said. 

The team will now look to increase the density of photodiodes and integrate them with silicon integrated circuits. 

“By replacing the standard non-programmable pixels in commercial silicon image sensors with the programmable ones developed here, imaging devices can intelligently trim out unneeded data. This could be made more efficient in both energy and bandwidth to address the demands for the next generation of sensory applications,” Jang said.

Alex McFarland is an AI journalist and writer exploring the latest developments in artificial intelligence. He has collaborated with numerous AI startups and publications worldwide.