
Human Brain Project, Intel Work Together to Advance Neuromorphic Technology


A team of researchers at the Human Brain Project (HBP) is working with Intel to advance neuromorphic technology and bring AI closer to the energy efficiency of the human brain. For large deep learning networks, neuromorphic technology is more energy efficient than other AI systems.

Researchers at the HBP and Intel carried out a set of experiments demonstrating this efficiency. The experiments involved a new Intel chip whose neurons work similarly to those in the human brain. It was the first time such results had been demonstrated.

The research was published in Nature Machine Intelligence. 

Intel’s Loihi Chips

The group focused on algorithms that work with temporal processes: the system had to answer questions about a previously told story, grasping the relationships between objects or people from the context. The hardware consisted of 32 Loihi chips, Intel's neuromorphic research chips.

Philipp Plank is a doctoral student at TU Graz’s Institute of Theoretical Computer Science and an employee at Intel.

“Our system is two to three times more economical here than other AI models,” Plank says. 

Plank expects further efficiency gains as the new Loihi generation is introduced, since it improves the energy-intensive chip-to-chip communication. Measurements showed the approach to be 1,000 times more efficient when action potentials did not have to be sent back and forth between the chips.

The group reproduced a mechanism presumed to be at work in the human brain.

Wolfgang Maass is Philipp Plank’s doctoral supervisor and professor emeritus at the Institute of Theoretical Computer Science. 

“Experimental studies have shown that the human brain can store information for a short period of time even without neuronal activity, namely in so-called ‘internal variables’ of neurons,” Maass says. “Simulations suggest that a fatigue mechanism of a subset of neurons is essential for this short-term memory.”
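As a rough illustration of how such a fatigue mechanism can hold information without ongoing spiking, here is a minimal NumPy sketch of spike-frequency adaptation via an adaptive firing threshold. It is a toy model under assumed parameters, not the neuron model from the paper.

```python
import numpy as np

# Toy sketch of a "fatigue" mechanism (spike-frequency adaptation):
# each spike raises the neuron's firing threshold, and the elevated
# threshold decays slowly, so it carries a trace of recent activity
# through time even while the neuron stays silent. All constants are
# illustrative assumptions, not values from the paper.

def adaptive_lif(inputs, tau_v=20.0, tau_a=200.0, v_th0=1.0, beta=0.5):
    alpha = np.exp(-1.0 / tau_v)   # fast membrane decay per step
    rho = np.exp(-1.0 / tau_a)     # slow adaptation decay per step
    v, a = 0.0, 0.0                # membrane potential, adaptation variable
    spikes = []
    for x in inputs:
        v = alpha * v + x          # leaky integration of input current
        a = rho * a                # adaptation relaxes on a slow timescale
        if v >= v_th0 + beta * a:  # adaptive threshold = "fatigue" level
            spikes.append(1)
            v = 0.0                # reset potential after a spike
            a += 1.0               # spiking raises the threshold further
        else:
            spikes.append(0)
    return np.array(spikes), a

rng = np.random.default_rng(0)
spikes, final_a = adaptive_lif(rng.uniform(0.0, 0.2, size=1000))
# Firing slows as the threshold builds up, while the slowly decaying
# variable `a` stores recent activity as an "internal variable".
print(f"{spikes.sum()} spikes; residual adaptation {final_a:.3f}")
```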

Linking Deep Learning Networks

To achieve this, the researchers linked two types of deep learning networks. Feedback (recurrent) neural modules are responsible for “short-term memory”: they filter possibly relevant information out of the input signal and store it. A feed-forward network then determines which of the relationships found are important for solving the task at hand. Meaningless relationships are filtered out, and neurons fire only in those modules where relevant information has been found. It is this sparse, selective activity that leads to the dramatic energy savings.
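As a rough illustration of this division of labor, here is a PyTorch sketch that pairs a bank of recurrent modules with a feed-forward relevance network. The class name, layer sizes, and gating scheme are assumptions for illustration only, and a standard GRU stands in for the spiking recurrent modules run on Loihi.

```python
import torch
import torch.nn as nn

# Illustrative sketch (not the paper's architecture): recurrent modules
# skim the input stream for possibly relevant relations and hold them;
# a feed-forward "relevance" network scores each module's state against
# the question, so only modules holding task-relevant information
# contribute. On spiking hardware, the down-weighted modules stay near
# silent, which is where the energy saving comes from.

class RelationalFilter(nn.Module):
    def __init__(self, in_dim=64, hidden=128, n_modules=8):
        super().__init__()
        # "Short-term memory": a bank of small recurrent modules.
        self.memory_modules = nn.ModuleList(
            nn.GRU(in_dim, hidden, batch_first=True) for _ in range(n_modules)
        )
        # Feed-forward network scoring each module's relevance.
        self.relevance = nn.Sequential(
            nn.Linear(hidden + in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )
        self.readout = nn.Linear(hidden, in_dim)

    def forward(self, story, question):
        # story: (batch, time, in_dim); question: (batch, in_dim)
        states = []
        for rnn in self.memory_modules:
            _, h = rnn(story)                  # final state: (1, batch, hidden)
            states.append(h.squeeze(0))
        states = torch.stack(states, dim=1)    # (batch, modules, hidden)
        q = question.unsqueeze(1).expand(-1, states.size(1), -1)
        scores = self.relevance(torch.cat([states, q], dim=-1))
        gate = torch.softmax(scores, dim=1)    # keep only relevant modules
        return self.readout((gate * states).sum(dim=1))

model = RelationalFilter()
answer = model(torch.randn(2, 50, 64), torch.randn(2, 64))
print(answer.shape)  # torch.Size([2, 64])
```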

Steve Furber is the leader of the HBP neuromorphic computing division and a professor of Computer Engineering at the University of Manchester.

“This advance brings the promise of energy-efficient event-based AI on neuromorphic platforms an important step closer to fruition. The new mechanism is well-suited to neuromorphic computing systems such as the Intel Loihi and SpiNNaker that are able to support multi-compartment neuron models,” said Furber.

Alex McFarland is a tech writer who covers the latest developments in artificial intelligence. He has worked with AI startups and publications across the globe.