

Researchers Develop World’s Most Powerful Neuromorphic Processor for AI


In a major leap forward for the field of artificial intelligence (AI), an international team of researchers led by Swinburne University of Technology has developed the world's most powerful neuromorphic processor for AI. It operates at more than 10 trillion operations per second (10 TeraOps/s), fast enough to process ultra-large-scale data.

The work was published in the journal Nature.

Led by Swinburne's Professor David Moss, Dr. Xingyuan Xu, and Distinguished Professor Arnan Mitchell from RMIT University, the team created an optical neuromorphic processor that operates over 1,000 times faster than any previous one. The system can also process ultra-large-scale images, which is important for facial recognition, a task at which previous optical processors have failed.

Professor Moss is Director of Swinburne’s Optical Sciences Centre, and he was named a top Australian researcher in physics and mathematics in the field of optics and photonics by The Australian.

“This breakthrough was achieved with ‘optical micro-combs,’ as was our world-record internet data speed reported in May 2020,” he said.

Other Top Processors and Micro-combs

Top electronic processors like the Google TPU can operate at over 100 TeraOps/s, but they require tens of thousands of parallel processors, whereas the team's optical system relies on a single processor. The team achieved this with a new technique that simultaneously interleaves the data in time, wavelength, and spatial dimensions through an integrated micro-comb source.
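The wavelength-interleaving idea can be illustrated with a toy sketch. In this conceptual model (an assumption for illustration, not the team's published design), each micro-comb wavelength carries one kernel weight, dispersion delays each wavelength's copy of the input by one time step, and a photodetector sums the delayed copies, which is exactly a convolution:

```python
import numpy as np

def optical_convolution(signal, weights):
    """Toy model of wavelength-multiplexed convolution.

    Each entry of `weights` stands for one comb wavelength carrying a
    kernel weight; index `delay` stands for the dispersion-induced delay
    of that wavelength. Summing the weighted, delayed replicas of the
    input reproduces a standard 1-D convolution.
    """
    out = np.zeros(len(signal) + len(weights) - 1)
    for delay, w in enumerate(weights):
        # wavelength `delay`: weight w, copy of the signal shifted by `delay`
        out[delay:delay + len(signal)] += w * signal
    return out

x = np.array([1.0, 2.0, 3.0])
k = np.array([0.5, -1.0])
print(optical_convolution(x, k))  # matches np.convolve(x, k)
```

The point of the sketch is that the weighted-delay-and-sum structure maps naturally onto parallel optical channels, which is why a single micro-comb-fed processor can replace many electronic ones.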

Micro-combs are relatively new devices consisting of hundreds of high-quality infrared lasers on a single chip. Compared to other optical sources, they are far faster, lighter, and cheaper.

“In the 10 years since I co-invented them, integrated micro-comb chips have become enormously important and it is truly exciting to see them enabling these huge advances in information communication and processing,” Professor Moss says. “Micro-combs offer enormous promise for us to meet the world’s insatiable need for information.”

Processor of the Future

Dr. Xu was co-lead author of the study and is a Swinburne alum and post-doctoral fellow with the Electrical and Computer Systems Engineering Department at Monash University.

“This processor can serve as a universal ultrahigh bandwidth front end for any neuromorphic hardware – optical or electronic based – bringing massive-data machine learning for real-time ultrahigh bandwidth data within reach,” Dr. Xu says.

“We’re currently getting a sneak peek of how the processors of the future will look. It’s really showing us how dramatically we can scale the power of our processors through the innovative use of micro-combs,” he continues.

According to RMIT’s Professor Mitchell, “This technology is applicable to all forms of processing and communications — it will have a huge impact. Long term we hope to realise fully integrated systems on a chip, greatly reducing cost and energy consumption.”

Professor Damien Hicks, of Swinburne and the Walter and Eliza Hall Institute, supports the research team.

“Convolutional neural networks have been central to the artificial intelligence revolution, but existing silicon technology increasingly presents a bottleneck in processing speed and energy efficiency,” says Professor Hicks.

“This breakthrough shows how a new optical technology makes such networks faster and more efficient. It is a profound demonstration of the benefits of cross-disciplinary thinking: having the inspiration and courage to take an idea from one field and use it to solve a fundamental problem in another,” he continues.
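The bottleneck Professor Hicks describes is easy to quantify: convolutional layers dominate a CNN's compute budget. A rough count of multiply-accumulate operations (MACs) for a single layer (the layer sizes below are illustrative assumptions, not figures from the study) shows the scale an optical processor must handle:

```python
def conv_layer_macs(h, w, c_in, c_out, k):
    """Multiply-accumulates for one conv layer (stride 1, 'same' padding):
    every output pixel (h * w) in every output channel (c_out) sums over
    a k x k window across all input channels (c_in)."""
    return h * w * c_in * c_out * k * k

# Illustrative layer: 224x224 feature map, 64 -> 128 channels, 3x3 kernel
macs = conv_layer_macs(224, 224, 64, 128, 3)
print(f"{macs:,} MACs")  # ~3.7 billion for this single layer
```

At billions of MACs per layer and dozens of layers per network, a processor running at tens or hundreds of TeraOps/s is what makes real-time inference on large inputs plausible, which is the regime the optical system targets.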

Alex McFarland is an AI journalist and writer exploring the latest developments in artificial intelligence. He has collaborated with numerous AI startups and publications worldwide.