Neuroscientists Design Model to Mirror Human Visual Learning - Unite.AI

Artificial Intelligence

Neuroscientists Design Model to Mirror Human Visual Learning



By programming computer-based artificial intelligence (AI) to use a faster technique for learning new objects, two neuroscientists have made AI function more like human intelligence. The pair designed a model to mirror human visual learning.

The research by Maximillian Riesenhuber, PhD, professor of neuroscience at Georgetown University Medical Center, and Joshua Rule, PhD, postdoctoral scholar at UC Berkeley, was published in the journal Frontiers in Computational Neuroscience. 

AI Learning New Visual Concepts

The neuroscientists demonstrated how the new approach improves AI software’s ability to quickly learn new visual concepts.

“Our model provides a biologically plausible way for artificial neural networks to learn new visual concepts from a small number of examples,” says Riesenhuber. “We can get computers to learn much better from few examples by leveraging prior learning in a way that we think mirrors what the brain is doing.”

Humans can learn new visual concepts from sparse data very quickly and accurately. We possess this ability from a very young age, as early as three months old. Computers, by contrast, require many examples of the same object before they can recognize it.

“The computational power of the brain's hierarchy lies in the potential to simplify learning by leveraging previously learned representations from a databank, as it were, full of concepts about objects,” Riesenhuber says.

Artificial Neural Networks vs Human Visual System

Riesenhuber and Rule found that, with their approach, artificial neural networks can learn new visual concepts much faster, approaching the level of human ability.

“Rather than learn high-level concepts in terms of low-level visual features, our approach explains them in terms of other high-level concepts,” Rule says. “It is like saying that a platypus looks a bit like a duck, a beaver, and a sea otter.”
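The idea in Rule's platypus analogy can be sketched in code: instead of learning a new category from raw low-level features, describe each example by its similarity to previously learned high-level concepts, then average those descriptions over a handful of examples. The feature vectors, concept names, and similarity measure below are illustrative assumptions, not the study's actual model.

```python
import numpy as np

# Hypothetical high-level "concept" vectors for previously learned
# animals. In the spirit of the approach described above, a new concept
# is characterized by its relation to these, not by raw pixels.
known_concepts = {
    "duck":      np.array([0.9, 0.1, 0.2, 0.7]),
    "beaver":    np.array([0.2, 0.8, 0.6, 0.1]),
    "sea_otter": np.array([0.3, 0.7, 0.9, 0.2]),
}

def concept_signature(x: np.ndarray) -> np.ndarray:
    """Describe a sample by its cosine similarity to each known concept."""
    return np.array([
        x @ v / (np.linalg.norm(x) * np.linalg.norm(v))
        for v in known_concepts.values()
    ])

def learn_from_few(examples: list[np.ndarray]) -> np.ndarray:
    """Learn a new concept as the mean signature of a few examples."""
    return np.mean([concept_signature(e) for e in examples], axis=0)

# Two toy "platypus" examples: bill like a duck, tail like a beaver,
# aquatic like a sea otter.
platypus_examples = [
    np.array([0.8, 0.7, 0.7, 0.5]),
    np.array([0.7, 0.8, 0.6, 0.6]),
]
platypus = learn_from_few(platypus_examples)
print(platypus)  # moderately high similarity to all three known concepts
```

Only two examples are needed here because the heavy lifting was already done when the known concepts were learned, which is the efficiency gain the researchers describe.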

Human visual concept learning relies heavily on the neural networks involved in object recognition, and the anterior temporal lobe of the brain is believed to represent concepts in ways that go beyond shape. Because the neural hierarchies involved in visual recognition are so rich, humans can leverage prior learning when taking on new tasks.

“By reusing these concepts, you can more easily learn new concepts, new meaning, such as the fact that a zebra is simply a horse of a different stripe,” Riesenhuber says.

AI has still not reached the same level as the human visual system, which has a superior ability to generalize from few examples, deal with image variations, and comprehend scenes. However, advancements are bringing it closer.

“Our findings not only suggest techniques that could help computers learn more quickly and efficiently, they could also lead to improved neuroscience experiments aimed at understanding how people learn so quickly, which is not yet well understood,” Riesenhuber says.







Alex McFarland is an AI journalist and writer exploring the latest developments in artificial intelligence. He has collaborated with numerous AI startups and publications worldwide.