A team of researchers at MIT is working on bringing deep learning neural networks to microcontrollers. The advance means artificial intelligence (AI) could be implemented in the tiny computer chips found in wearable medical devices, household appliances, and the other 250 billion objects that make up the “internet of things” (IoT). The IoT is a network of physical objects embedded with sensors, software, and other technologies that let them connect and exchange data with other devices and systems.
The research is set to be presented at the Conference on Neural Information Processing Systems in December. The lead author of the research is Ji Lin, a Ph.D. student in Song Han's lab in MIT's Department of Electrical Engineering and Computer Science. Co-authors include MIT's Han and Yujun Lin, Wei-Ming Chen of MIT and National Taiwan University, and John Cohn and Chuang Gan of the MIT-IBM Watson AI Lab.
The system, called MCUNet, designs compact neural networks that deliver high speed and accuracy on IoT devices despite limited memory and processing power. The approach can also improve energy efficiency and data security.
The team developed the “tiny deep learning” system by jointly designing two components. The first is TinyEngine, an inference engine that acts like an operating system, directing resource management on the microcontroller. TinyEngine is optimized to run the specific neural network structure selected by the second component, TinyNAS, a neural architecture search algorithm.
Lin developed TinyNAS because existing neural architecture search techniques are difficult to apply to tiny microcontrollers. These techniques start from a large set of possible network structures based on a predefined template and gradually narrow them down to the one that is most accurate and cost-efficient.
“It can work pretty well for GPUs or smartphones,” says Lin. “But it's been difficult to directly apply these techniques to tiny microcontrollers, because they are too small.”
TinyNAS can create custom-sized networks.
“We have a lot of microcontrollers that come with different power capacities and different memory sizes,” Lin says. “So we developed the algorithm [TinyNAS] to optimize the search space for different microcontrollers.”
Because TinyNAS can be customized, it can generate the best possible compact neural networks for microcontrollers.
“Then we deliver the final, efficient model to the microcontroller,” Lin continues.
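To make the idea concrete, here is a minimal Python sketch of budget-constrained architecture search in the spirit of TinyNAS: only candidate networks that fit a given microcontroller's memory budget are considered, and the most accurate of those is selected. The search space, the memory formula, and the accuracy proxy below are all invented for illustration; the paper's actual algorithm is far more sophisticated.

```python
# Illustrative sketch of budget-constrained architecture search
# (all names and numbers are hypothetical, not the paper's method).

# A candidate network is described by (width multiplier, input resolution).
SEARCH_SPACE = [(w, r) for w in (0.25, 0.5, 0.75, 1.0)
                       for r in (48, 64, 96, 128)]

def estimate_memory_kb(width, resolution):
    """Rough proxy for peak activation memory: scales with
    resolution^2 * width * channels * bytes (illustrative only)."""
    return resolution * resolution * width * 3 * 4 / 1024

def proxy_accuracy(width, resolution):
    """Stand-in for measured validation accuracy: in this toy
    model, wider networks and larger inputs score higher."""
    return 0.4 + 0.3 * width + 0.002 * resolution

def search(memory_budget_kb):
    """Keep only candidates that fit the microcontroller's memory
    budget, then return the most 'accurate' feasible one."""
    feasible = [(w, r) for (w, r) in SEARCH_SPACE
                if estimate_memory_kb(w, r) <= memory_budget_kb]
    return max(feasible, key=lambda c: proxy_accuracy(*c))

best = search(memory_budget_kb=64)  # e.g. a part with 64 kB of SRAM
print(best)
```

Tightening or loosening the budget changes which architecture wins, which is the point: each microcontroller gets a network tailored to its own constraints.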
A clean and slim inference engine is required for a microcontroller to run the tiny neural network. Many inference engines carry instructions for tasks they rarely run, and that extra baggage is a burden on a microcontroller.
“It doesn't have off-chip memory, and it doesn't have a disk,” says Han. “Everything put together is just one megabyte of flash, so we have to really carefully manage such a small resource.”
TinyEngine generates the code needed to run the customized neural network developed by TinyNAS. Deadweight code is discarded, which cuts down on compile time.
“We keep only what we need,” Han says. “And since we designed the neural network, we know exactly what we need. That's the advantage of system-algorithm codesign.”
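The “keep only what we need” idea can be sketched as model-specific code generation: because the network is fixed ahead of time, the engine can emit kernels only for the operators that network actually uses. The operator names and emitted stubs below are hypothetical, not TinyEngine's real code.

```python
# Illustrative sketch of model-specific code generation, loosely
# inspired by the codesign idea (op names and code are invented).

KERNEL_LIBRARY = {
    "conv2d":    "void conv2d(...) { /* convolution kernel */ }",
    "depthwise": "void depthwise(...) { /* depthwise conv */ }",
    "relu":      "void relu(...) { /* activation */ }",
    "softmax":   "void softmax(...) { /* classifier head */ }",
    "lstm":      "void lstm(...) { /* recurrent kernel */ }",
    "attention": "void attention(...) { /* attention kernel */ }",
}

def generate_engine(model_ops):
    """Emit source only for operators the known model uses;
    every other kernel is deadweight and never compiled."""
    used = sorted(set(model_ops))
    return "\n".join(KERNEL_LIBRARY[op] for op in used)

# A compact vision model needs only a handful of operators:
model_ops = ["conv2d", "depthwise", "relu", "conv2d", "softmax"]
engine_src = generate_engine(model_ops)
print(engine_src)  # 4 kernels emitted instead of all 6
```

A general-purpose engine must ship every kernel it might ever need; a codesigned one compiles only the handful its network calls, which is where the binary-size savings come from.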
Tests demonstrated that TinyEngine's compiled binary code was 1.9 to 5 times smaller than that of comparable microcontroller engines, including those from Google and ARM. TinyEngine also cut peak memory usage nearly in half.
The first tests of MCUNet involved image classification. The system was trained on labeled images from the ImageNet database, then tested on images it had never seen.
When MCUNet was tested on a commercial microcontroller, it successfully classified 70.7 percent of the novel images. This is far better than the previous best neural network and inference engine pairing, which was 54 percent accurate.
“Even a 1 percent improvement is considered significant,” Lin says. “So this is a giant leap for microcontroller settings.”
According to Kurt Keutzer, a computer scientist at the University of California at Berkeley, this “extends the frontier of deep neural network design even further into the computational domain of small energy-efficient microcontrollers.” MCUNet could “bring intelligent computer-vision capabilities to even the simplest kitchen appliances, or enable more intelligent motion sensors.”
MCUNet also enhances data security.
“A key advantage is preserving privacy,” Han says. “You don't need to transmit the data to the cloud.”
Analyzing data locally reduces the chance of personal information being compromised.
Beyond that, MCUNet could analyze and provide insight into readings such as heart rate, blood pressure, and oxygen level; bring deep learning to IoT devices in vehicles and other places with limited internet access; and shrink the carbon footprint of AI, since it uses only a small fraction of the energy required by large neural networks.