Researchers at Oregon State University have taken a step toward artificial intelligence that perceives the world more like humans do: a new optical sensor that more closely mimics the human eye's ability to detect changes in its visual field.
The development has big implications for fields like image recognition, robotics, and AI.
The research, which was led by OSU College of Engineering researcher John Labram and graduate student Cinthya Trujillo Herrera, was published earlier this month in Applied Physics Letters.
Previous Human-Eye Devices
Researchers have previously tried to build eye-like devices, known as retinomorphic sensors, but those attempts typically relied on software or complex hardware. The new device instead uses ultrathin layers of perovskite semiconductors, materials that have attracted attention for their potential in solar energy. When exposed to light, these ultrathin layers change from strong electrical insulators to strong conductors.
Labram is an assistant professor of electrical engineering and computer science, and he is leading the research with support from the National Science Foundation.
“You can think of it as a single pixel doing something that would currently require a microprocessor,” Labram said.
The next generation of AI is expected to be powered by neuromorphic computers, specifically in applications like autonomous vehicles, robotics, and advanced image recognition. Neuromorphic computers mimic the parallel networks in the human brain, while traditional computers process information sequentially.
“People have tried to replicate this in hardware and have been reasonably successful,” Labram said. “However, even though the algorithms and architecture designed to process information are becoming more and more like a human brain, the information these systems receive is still decidedly designed for traditional computers.”
All of this means that a computer needs an image sensor that acts like the human eye, which contains about 100 million photoreceptors. Despite this massive number, the optic nerve carries only about 1 million connections to the brain, meaning the retina performs a great deal of preprocessing and dynamic compression before an image is ever transmitted.
The retinomorphic sensor developed by the researchers doesn't react strongly under static conditions; instead, it registers a short, sharp signal when the illumination changes, then quickly returns to its baseline state, behavior made possible by the perovskites.
“The way we test it is, basically, we leave it in the dark for a second, then we turn the lights on and just leave them on,” Labram said. “As soon as the light goes on, you get this big voltage spike, then the voltage quickly decays, even though the intensity of the light is constant. And that's what we want.”
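The spike-then-decay behavior Labram describes resembles a high-pass filter: the output tracks changes in light intensity and relaxes back to zero under steady illumination. The following is a minimal sketch of that idea, not the researchers' actual device model; the time constant `tau` and the discrete-time filter are illustrative assumptions.

```python
import numpy as np

def retinomorphic_response(intensity, tau=0.05, dt=0.001):
    """Toy high-pass-filter model of a single retinomorphic pixel.

    intensity: 1-D array of light intensity over time
    tau: hypothetical decay time constant, in seconds
    Returns the simulated output voltage over time.
    """
    v = np.zeros_like(intensity, dtype=float)
    alpha = tau / (tau + dt)  # discrete-time high-pass coefficient
    for t in range(1, len(intensity)):
        # the output follows *changes* in input, then decays toward zero
        v[t] = alpha * (v[t - 1] + intensity[t] - intensity[t - 1])
    return v

# Dark for 1 s, then constant light, as in Labram's test
light = np.concatenate([np.zeros(1000), np.ones(1000)])
out = retinomorphic_response(light)
# large voltage spike at light onset, decaying back toward zero
# even though the light stays on
```

Running this reproduces the qualitative behavior in the quote: a sharp spike when the light turns on, followed by a rapid decay to baseline despite constant illumination.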
The team simulated an array of retinomorphic sensors, which allowed them to predict how a retinomorphic video camera would respond to input stimuli.
“We can convert video to a set of light intensities and then put that into our simulation,” Labram said. “Regions where a higher-voltage output is predicted from the sensor light up, while the lower-voltage regions remain dark. If the camera is relatively static, you can clearly see all the things that are moving respond strongly. This stays reasonably true to the paradigm of optical sensing in mammals.”
“The good thing is that, with this simulation, we can input any video into one of these arrays and process that information in essentially the same way the human eye would,” Labram continued. “For example, you can imagine these sensors being used by a robot tracking the motion of objects. Anything static in its field of view would elicit no response, while a moving object would register a high voltage. This would tell the robot immediately where the object was, without any complex image processing.”
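The motion-tracking idea Labram describes can be sketched with a simple per-pixel change detector: each simulated pixel outputs a signal proportional to the change in its light intensity between frames, so static regions stay dark while moving edges light up. This is an illustrative stand-in for the team's simulation, not their code; the threshold and frame data are made up for the example.

```python
import numpy as np

def motion_map(frames, threshold=0.1):
    """Mimic an array of retinomorphic pixels watching a video.

    frames: sequence of 2-D grayscale arrays (values in [0, 1])
    Returns one map per frame transition, bright only where
    the intensity changed between consecutive frames.
    """
    maps = []
    prev = frames[0]
    for frame in frames[1:]:
        # each pixel responds to its *change* in light, not its level
        response = np.abs(frame - prev)
        maps.append(np.where(response > threshold, response, 0.0))
        prev = frame
    return maps

# Static background with a bright square moving one pixel per frame
frames = []
for t in range(3):
    img = np.full((8, 8), 0.2)   # static scene: no response
    img[2:4, t:t + 2] = 1.0      # moving object
    frames.append(img)

maps = motion_map(frames)
# only the object's leading and trailing edges register a signal,
# locating the moving object without any complex image processing
```

The static background contributes nothing to the output maps, while the moving square's edges produce a strong response, matching the paradigm of optical sensing in mammals that the article describes.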