
Artificial Nanowire Network Acts Like Brain When Electrically Stimulated 


Scientists at the University of Sydney and Japan’s National Institute for Materials Science (NIMS) have discovered how to make an artificial network of nanowires act in a brain-like way when electrically stimulated.

The study was published in Nature Communications.

The international team was led by Joel Hochstetter, who was joined by Professor Zdenka Kuncic and Professor Tomonobu Nakayama. 

The team found that a network of nanowires can be kept in a brain-like state “at the edge of chaos,” where it performs tasks at an optimal level.

According to the researchers, this suggests that the underlying nature of neural intelligence is physical, and it could lead to new developments in artificial intelligence. 

Joel Hochstetter is a doctoral candidate at the University of Sydney Nano Institute and School of Physics and the lead author of the paper.

“We used wires 10 micrometres long and no thicker than 500 nanometres arranged randomly on a two-dimensional plane,” said Hochstetter.

“Where the wires overlap, they form an electrochemical junction, like the synapses between neurons,” he said. “We found that electrical signals put through this network automatically find the best route for transmitting information. And this architecture allows the network to ‘remember' previous pathways through the system.”
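
As a rough illustration of that layout (a sketch of my own, not the authors' code), the snippet below drops fixed-length line segments at random positions and orientations on a plane and counts the overlaps that would become junctions. The wire length comes from the article; the region size, wire count, and the simple intersection test are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

L = 10.0        # wire length in micrometres (from the article)
SIZE = 100.0    # side of the square region (illustrative)
N_WIRES = 200   # number of wires (illustrative)

# Each wire: a random centre and orientation define its endpoints (p, q).
centres = rng.uniform(0, SIZE, size=(N_WIRES, 2))
angles = rng.uniform(0, np.pi, size=N_WIRES)
half = 0.5 * L * np.stack([np.cos(angles), np.sin(angles)], axis=1)
p, q = centres - half, centres + half

def segments_intersect(p1, q1, p2, q2):
    """Standard orientation test for 2-D segment intersection
    (ignores the measure-zero collinear cases)."""
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    d1, d2 = cross(p2, q2, p1), cross(p2, q2, q1)
    d3, d4 = cross(p1, q1, p2), cross(p1, q1, q2)
    return d1 * d2 < 0 and d3 * d4 < 0

# Every overlap between two wires is a candidate synapse-like junction.
junctions = [(i, j)
             for i in range(N_WIRES) for j in range(i + 1, N_WIRES)
             if segments_intersect(p[i], q[i], p[j], q[j])]
print(f"{N_WIRES} wires form {len(junctions)} junctions")
```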

Testing the Nanowire Network

The research team used simulations to test the random nanowire network and learn how it could best perform and solve simple tasks.

When the signal stimulating the network was too low, the outputs were too predictable and not complex enough to be useful. On the other hand, if the network was overwhelmed by the signal, the output was too chaotic.

This meant that the optimal signal was at the edge of this chaotic state, according to the team.

Professor Kuncic is from the University of Sydney. 

“Some theories in neuroscience suggest the human mind could operate at this edge of chaos, or what is called the critical state,” said Professor Kuncic. “Some neuroscientists think it is in this state where we achieve maximal brain performance.”

“What's so exciting about this result is that it suggests that these types of nanowire networks can be tuned into regimes with diverse, brain-like collective dynamics, which can be leveraged to optimise information processing,” she continued. 

The nanowire network is able to incorporate memory and operations into a single system due to the junctions between the wires. This is different from standard computers, which rely on separated memory and operations. 

“These junctions act like computer transistors but with the additional property of remembering that signals have travelled that pathway before. As such, they are called ‘memristors',” Hochstetter said.

The memory takes physical form: the junctions at the crossing points between nanowires act like switches, and their behavior depends on how they have responded to electrical signals in the past. When signals are applied across the junctions, current flows through and activates them.

“This creates a memory network within the random system of nanowires,” he said.
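
A minimal toy model (my own sketch, not the model used in the paper) captures this “switch with memory” behavior: the junction's conductance climbs while the voltage across it exceeds a threshold and decays back otherwise, so its present state encodes the history of past signals. All parameter values below are invented for illustration.

```python
class MemristiveJunction:
    """Toy junction: conductance rises under strong drive, decays when idle."""

    def __init__(self, g_min=0.01, g_max=1.0, v_th=0.5, rate=0.1, decay=0.02):
        self.g = g_min                     # conductance: the physical "memory"
        self.g_min, self.g_max = g_min, g_max
        self.v_th, self.rate, self.decay = v_th, rate, decay

    def step(self, v, dt=1.0):
        if abs(v) > self.v_th:             # strong signal: pathway strengthens
            self.g += self.rate * (self.g_max - self.g) * dt
        else:                              # weak or no signal: memory slowly fades
            self.g -= self.decay * (self.g - self.g_min) * dt
        return self.g * v                  # current through the junction

j = MemristiveJunction()
for v in [1.0, 1.0, 1.0, 0.0, 0.0, 1.0]:
    i = j.step(v)
    print(f"v={v:.1f}  ->  g={j.g:.3f}, i={i:.3f}")
# After the pause in the drive, the conductance is still elevated:
# the junction "remembers" that the pathway was used.
```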

The team developed a simulation of the physical network to demonstrate its ability to solve very simple tasks. 

“For this study we trained the network to transform a simple waveform into more complex types of waveforms,” Hochstetter said.

The team adjusted the amplitude and frequency of the electrical signal to see where the best performance took place.

“We found that if you push the signal too slowly the network just does the same thing over and over without learning and developing. If we pushed it too hard and fast, the network becomes erratic and unpredictable,” he said.
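
The same trade-off can be caricatured in a few lines: drive a population of toy junctions like the one above (heterogeneous thresholds, with every parameter invented for illustration) with a sine wave, and measure how varied the internal state becomes at each drive amplitude.

```python
import numpy as np

rng = np.random.default_rng(1)

def run_network(amplitude, n_junctions=50, steps=400):
    """Drive toy junctions with a sine wave; return the spread of their
    conductance trajectories as a crude measure of dynamical richness."""
    g = np.full(n_junctions, 0.01)
    v_th = rng.uniform(0.2, 1.5, n_junctions)       # heterogeneous thresholds
    history = []
    for t in range(steps):
        v = amplitude * np.sin(2 * np.pi * t / 50)  # 50-step drive period
        on = np.abs(v) > v_th
        g = np.where(on,
                     g + 0.1 * (1.0 - g),           # strengthen active junctions
                     g - 0.02 * (g - 0.01))         # let idle junctions decay
        history.append(g)
    return np.std(np.array(history))

for a in [0.1, 0.5, 1.0, 2.0, 5.0]:
    print(f"amplitude {a:>3}: state diversity {run_network(a):.3f}")
# Too weak: nothing switches. Too strong: everything saturates.
# Intermediate drive produces the most varied internal dynamics.
```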

Real-World Advantages

According to Professor Kuncic, uniting memory and operations has major benefits for artificial intelligence. 

“Algorithms needed to train the network to know which junction should be accorded the appropriate ‘load' or weight of information chew up a lot of power,” she said.

“The systems we are developing do away with the need for such algorithms. We just allow the network to develop its own weighting, meaning we only need to worry about signal in and signal out, a framework known as ‘reservoir computing'. The network weights are self-adaptive, potentially freeing up large amounts of energy.”
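
In reservoir-computing terms, the nanowire network plays the role of a fixed nonlinear “reservoir,” and only a lightweight linear readout is trained on the signal out. The sketch below shows that generic idea in echo-state style, with a sine-to-square-wave task standing in for the waveform transformation described earlier; the random recurrent matrix is a stand-in for the nanowire dynamics, not a model of them.

```python
import numpy as np

rng = np.random.default_rng(2)

# Fixed random reservoir: never trained, like the self-adapting network.
N = 100
W_in = rng.uniform(-0.5, 0.5, size=N)
W = rng.normal(0.0, 1.0, size=(N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # scale for stable dynamics

def reservoir_states(u):
    """Run the input sequence through the fixed reservoir, collect states."""
    x = np.zeros(N)
    states = []
    for u_t in u:
        x = np.tanh(W @ x + W_in * u_t)
        states.append(x.copy())
    return np.array(states)

# Signal in: a sine. Signal out (target): a more complex square wave.
t = np.arange(2000)
u = np.sin(2 * np.pi * t / 100)
y_target = np.sign(u)

# Train ONLY the linear readout, here with ridge regression.
X = reservoir_states(u)
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(N), X.T @ y_target)

y_pred = X @ W_out
print("readout RMSE:", np.sqrt(np.mean((y_pred - y_target) ** 2)))
```

Because only `W_out` is fitted, the expensive weight-training loop of a conventional neural network disappears, which is the source of the energy saving Professor Kuncic describes.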

Kuncic says that this means future AI systems that rely on these networks would have far lower energy footprints.

 

Alex McFarland is an AI journalist and writer exploring the latest developments in artificial intelligence. He has collaborated with numerous AI startups and publications worldwide.