Advance in Microchips Brings Us Closer to AI Edge Computing

Researchers from Princeton University have developed co-designed hardware and software to increase the speed and efficiency of specialized artificial intelligence (AI) systems.

Naveen Verma is a professor of electrical and computer engineering at Princeton who leads the research team.

“Software is a critical part of enabling new hardware,” Verma said. “The hope is that designers can keep using the same software system — and just have it work ten times faster and more efficiently.”

The systems developed by the researchers reduce power demand and the amount of data that needs to be exchanged with remote servers. This allows AI applications, such as piloting software for drones, to run at the edge of the computing infrastructure.

Verma is also the director of the University’s Keller Center for Innovation in Engineering Education. 

“To make AI accessible to the real-time and often personal processes all around us, we need to address latency and privacy by moving the computation itself to the edge,” Verma said. “And that requires both energy efficiency and performance.”

New Chip Designs

The Princeton team developed a new chip design two years ago that was meant to improve the performance of neural networks. The chip could perform tens to hundreds of times better than other microchips on the market.

“The chip’s major drawback is that it uses a very unusual and disruptive architecture,” Verma said in 2018. “That needs to be reconciled with the massive amount of infrastructure and design methodology that we have and use today.”

The team continued to refine the chip over the next two years and created a software system that allows AI systems to use the new technology efficiently. The idea was that the new chips would let systems scale in both hardware and software execution.

“It is programmable across all these networks,” Verma said. “The networks can be very big, and they can be very small.”

Ideally, computation takes place on the device itself rather than on a remote networked computer. However, this requires large amounts of power and memory, which makes such a system difficult to design.

To overcome these limitations, the researchers designed a chip that performs computation and stores data in the same place, an approach called in-memory computing. This technique cuts the energy and time needed to exchange information with dedicated memory.
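To see why this matters for neural networks, note that their dominant workload is a matrix-vector multiply-accumulate. The Python sketch below is a conceptual illustration only (made-up matrix sizes, not the Princeton chip's design); it counts how many values must cross the memory boundary in a conventional flow, where every weight is fetched to a separate compute unit, versus an idealized in-memory flow, where sums are formed where the weights are stored:

```python
# Conceptual sketch of why in-memory computing helps neural-network workloads.
# Sizes and counting scheme are illustrative assumptions, not chip specifics.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.standard_normal((256, 128))   # stored weight matrix (hypothetical size)
activations = rng.standard_normal(128)      # input vector for one layer

# Conventional flow: fetch each weight from memory, multiply, accumulate elsewhere.
transfers_conventional = 0
out_conventional = np.zeros(256)
for i in range(256):
    for j in range(128):
        transfers_conventional += 1          # one weight moved per multiply
        out_conventional[i] += weights[i, j] * activations[j]

# Idealized in-memory flow: products are accumulated where the weights live,
# so only the inputs and the final sums cross the memory boundary.
out_in_memory = weights @ activations
transfers_in_memory = activations.size + out_in_memory.size

print("conventional transfers:", transfers_conventional)   # 32768 weight fetches
print("in-memory transfers:   ", transfers_in_memory)      # 384 values moved
print("results match:", np.allclose(out_conventional, out_in_memory))
```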

In-memory computing requires analog operation, which is sensitive to signal corruption. To address this, the team relied on capacitors instead of transistors in the chip design. Capacitors are not affected by shifts in voltage in the same way, and they can be made with greater precision.
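The trade-off can be illustrated with a toy simulation. In the sketch below, every analog element deviates slightly from its nominal value; the error magnitudes are assumed for illustration and are not measurements of the Princeton chip:

```python
# Illustrative simulation (assumed noise levels, not measured chip behavior):
# analog in-memory accumulation inherits error from device variation, so the
# precision of the storage/compute element directly limits accuracy.
import numpy as np

rng = np.random.default_rng(1)
weights = rng.standard_normal((256, 128))
activations = rng.standard_normal(128)
exact = weights @ activations

def analog_mac(w, x, element_sigma):
    """Multiply-accumulate where each stored weight deviates slightly from
    its nominal value, modeling imprecise analog elements."""
    noisy_w = w * (1.0 + element_sigma * rng.standard_normal(w.shape))
    return noisy_w @ x

for sigma, label in [(0.05, "less precise element (assumed 5% variation)"),
                     (0.005, "more precise element (assumed 0.5% variation)")]:
    approx = analog_mac(weights, activations, sigma)
    rel_err = np.linalg.norm(approx - exact) / np.linalg.norm(exact)
    print(f"{label}: relative error ~ {rel_err:.3f}")
```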

Analog systems pose various other challenges, but they carry many advantages for applications like neural networks. The researchers are now looking to combine the two types of systems: digital systems remain central, while analog chips run specialized neural-network operations quickly and efficiently.
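A rough picture of that division of labor, using hypothetical interfaces rather than the team's actual software stack: a digital host handles control flow and nonlinearities, while the heavy matrix multiplies are delegated to a stand-in for an analog in-memory unit.

```python
# Hypothetical hybrid pipeline: digital host code orchestrates the network,
# while matrix multiplies go to a simulated analog in-memory accelerator.
import numpy as np

rng = np.random.default_rng(2)

def analog_matmul(w, x, sigma=0.01):
    """Stand-in for an analog in-memory unit: a fast matrix-vector product
    that returns a slightly noisy result (noise level assumed)."""
    return (w * (1.0 + sigma * rng.standard_normal(w.shape))) @ x

# Digital side: defines the layers and applies the nonlinearities.
w1 = rng.standard_normal((64, 32))
w2 = rng.standard_normal((10, 64))
x = rng.standard_normal(32)

hidden = np.maximum(analog_matmul(w1, x), 0.0)   # ReLU stays in the digital domain
logits = analog_matmul(w2, hidden)
print("class scores from the hybrid pipeline:", np.round(logits, 2))
```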

Alex McFarland is an AI journalist and writer exploring the latest developments in artificial intelligence. He has collaborated with numerous AI startups and publications worldwide.