

Grow And Prune AI Strategy Seems To Reduce AI Energy Usage


The human brain operates with a “grow and prune” strategy, starting off with a massive number of neural connections and then pruning away the unused connections over time. Recently, a team of AI researchers applied this approach to AI systems and found that it could substantially reduce the amount of energy required to train an AI.

A team of researchers from Princeton University recently created a new method of training artificial intelligence systems. The new training method seems able to meet or surpass industry standards for accuracy while consuming much less computational power, and therefore less energy, than traditional machine learning models. Over the course of two different papers, the Princeton researchers demonstrated how to grow a network by adding neurons and connections to it, and then prune away the unused connections over time, leaving just the most effective and efficient portions of the model.
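To make the pruning half of that idea concrete, below is a minimal sketch in PyTorch that removes the weakest half of a small network's connections by weight magnitude. The model, the 50 percent fraction, and the magnitude rule are illustrative assumptions for this article, not the researchers' exact procedure.

```python
import torch
import torch.nn as nn

# A small fully connected network standing in for a trained model.
model = nn.Sequential(
    nn.Linear(784, 256), nn.ReLU(),
    nn.Linear(256, 10),
)

def prune_by_magnitude(model: nn.Module, fraction: float = 0.5) -> None:
    """Zero out the weakest `fraction` of connections in each linear layer.

    The 0.5 default echoes the article's point that the adult brain keeps
    roughly half of its peak synapses. (Illustrative, not the paper's rule.)
    """
    with torch.no_grad():
        for module in model.modules():
            if isinstance(module, nn.Linear):
                w = module.weight
                k = int(w.numel() * fraction)
                if k == 0:
                    continue
                # Threshold at the magnitude of the k-th smallest weight,
                # then zero every weight at or below it.
                threshold = w.abs().flatten().kthvalue(k).values
                w.mul_(w.abs() > threshold)

prune_by_magnitude(model, fraction=0.5)
linears = [m for m in model.modules() if isinstance(m, nn.Linear)]
total = sum(m.weight.numel() for m in linears)
kept = sum(int(m.weight.count_nonzero()) for m in linears)
print(f"kept {kept}/{total} connections")
```

In practice a pruned model would be fine-tuned afterward and stored in a sparse format, so that the removed connections actually translate into compute and energy savings.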

Niraj Jha, professor of Electrical Engineering at Princeton, explained to Princeton News that the model developed by the researchers operates on a “grow-and-prune paradigm”. Jha explained that the human brain is the most complex it will ever be at around three years of age, and after this point the brain begins trimming away unneeded synaptic connections. The result is that the fully developed brain is able to carry out all the extraordinarily complex tasks we do every day, but it uses about half of the synapses it had at its peak. Jha and the other researchers mimicked this strategy to enhance the training of AI.

Jha explained:

“Our approach is what we call a grow-and-prune paradigm. It’s similar to what a brain does from when we are a baby to when we are a toddler. In its third year, the human brain starts snipping away connections between brain cells. This process continues into adulthood, so that the fully developed brain operates at roughly half its synaptic peak. The adult brain is specialized to whatever training we’ve provided it. It’s not as good for general-purpose learning as a toddler brain.”

Thanks to the growing and pruning technique, equally good predictions about patterns in data can be made using just a fraction of the computational power that was previously required. Researchers are aiming to find methods of reducing energy consumption and computational cost, as doing so is key to bringing machine learning to small devices like phones and smartwatches. Reducing the amount of energy consumed by machine learning algorithms can also help the industry reduce its carbon footprint. Xiaoliang Dai, the first author on both papers, explained that models need to be trained locally on such devices because transmitting data to the cloud requires a great deal of energy.

In the first study, the researchers developed a tool for engineering neural networks from scratch, which they used to recreate some of the highest-performing networks. The tool was called NeST (Neural network Synthesis Tool), and when it is provided with just a few neurons and connections, it rapidly increases in complexity by adding more neurons to the network. Once the network meets a selected benchmark, it begins pruning itself over time. While previous network models have used pruning techniques, the method engineered by the Princeton researchers was the first to take a network and simulate stages of development, going from “baby” to “toddler” and finally to “adult brain”.
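NeST's actual growth criteria are more sophisticated than anything that fits here, so the sketch below only illustrates the grow-from-a-seed idea under simplified assumptions: a tiny two-layer network is widened in fixed increments while preserving what it has already learned, with a fixed number of growth rounds standing in for the benchmark check. The layer sizes and the increment are invented for illustration.

```python
import torch
import torch.nn as nn

def widen_hidden_layer(fc1: nn.Linear, fc2: nn.Linear, extra: int):
    """Add `extra` neurons to the hidden layer between fc1 and fc2."""
    grown1 = nn.Linear(fc1.in_features, fc1.out_features + extra)
    grown2 = nn.Linear(fc2.in_features + extra, fc2.out_features)
    with torch.no_grad():
        # Copy the learned weights into the larger layers.
        grown1.weight[: fc1.out_features] = fc1.weight
        grown1.bias[: fc1.out_features] = fc1.bias
        grown2.weight[:, : fc2.in_features] = fc2.weight
        grown2.bias.copy_(fc2.bias)
        # New neurons get tiny random input weights and zeroed output
        # weights, so the grown network initially computes the same
        # function as before and growth never undoes prior training.
        grown1.weight[fc1.out_features:].normal_(std=0.01)
        grown1.bias[fc1.out_features:].zero_()
        grown2.weight[:, fc2.in_features:].zero_()
    return grown1, grown2

# Seed network with just a few neurons, as in the article's description.
fc1, fc2 = nn.Linear(784, 8), nn.Linear(8, 10)
for step in range(3):  # in NeST, growth would stop at a chosen benchmark
    fc1, fc2 = widen_hidden_layer(fc1, fc2, extra=8)
    print(f"hidden layer now has {fc1.out_features} neurons")
```

Once growth stops, a pruning pass like the one sketched earlier trims the network back down to its most effective connections.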

In the second paper, the researchers collaborated with a team from the University of California, Berkeley and Facebook to improve the technique using a tool called Chameleon. Chameleon is capable of starting with the desired endpoint, the wanted outcomes, and working backward to construct the right type of neural network. This eliminates much of the guesswork involved in tweaking a network manually, giving engineers starting points that are likely to be immediately useful. Chameleon predicts the performance of different architectures under different conditions. Combining Chameleon and the NeST framework could help research organizations that lack heavy computational resources take advantage of the power of neural networks.
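As a rough, hypothetical illustration of working backward from a desired endpoint, the sketch below scores a pool of candidate architectures with stand-in predictors and returns the fastest one that clears an accuracy target. The candidate pool and predictor formulas are invented placeholders; in Chameleon, performance under different conditions is predicted by the tool itself rather than by hand-written rules like these.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    depth: int   # number of layers
    width: int   # neurons per layer

def predicted_latency_ms(c: Candidate) -> float:
    # Placeholder for a performance predictor: cost grows with
    # depth and width. Not Chameleon's actual model.
    return 0.4 * c.depth * c.width / 64

def predicted_accuracy(c: Candidate) -> float:
    # Placeholder: larger networks score higher, with diminishing returns.
    return min(0.99, 0.80 + 0.01 * c.depth + 0.0005 * c.width)

# Work backward from the desired endpoint: the fastest architecture
# that still clears the accuracy target.
candidates = [
    Candidate(f"net_{d}x{w}", depth=d, width=w)
    for d in (4, 8, 12)
    for w in (64, 128, 256)
]
target_accuracy = 0.90
viable = [c for c in candidates if predicted_accuracy(c) >= target_accuracy]
best = min(viable, key=predicted_latency_ms)
print(f"{best.name}: ~{predicted_latency_ms(best):.1f} ms, "
      f"~{predicted_accuracy(best):.2f} predicted accuracy")
```

A starting point chosen this way would then be trained and refined, saving the many trial-and-error training runs that manual tweaking otherwise requires.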