New research from Los Alamos National Laboratory suggests that artificial brains, like living ones, may benefit from periods of rest.
The research will be presented at the Women in Computer Vision Workshop in Seattle on June 14.
Yijing Watkins is a Los Alamos National Laboratory computer scientist.
“We study spiking neural networks, which are systems that learn much as living brains do,” said Watkins. “We were fascinated by the prospect of training a neuromorphic processor in a manner analogous to how humans and other biological systems learn from their environment during childhood development.”
Solving Instability in Network Simulations
Watkins and the team found that continuous periods of unsupervised learning led to instability in the network simulations. However, once they exposed the networks to states analogous to the waves that living brains experience during sleep, stability was restored.
“It was as though we were giving the neural networks the equivalent of a good night’s rest,” said Watkins.
The team made the discovery while developing neural networks based on how humans and other biological systems learn to see. They struggled to stabilize simulated neural networks undergoing unsupervised dictionary training, in which the network learns to classify objects without prior examples for comparison.
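To give a feel for what unsupervised dictionary training means, here is a minimal rate-based NumPy sketch: the network learns a set of dictionary "atoms" that reconstruct unlabeled inputs from sparse activations. This is an illustrative toy, not the spiking model used in the study, and all parameters (threshold, learning rate, sizes) are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))                  # unlabeled input vectors
X /= np.linalg.norm(X, axis=1, keepdims=True)   # normalize each input

n_atoms, lr, thresh = 32, 0.05, 0.2             # illustrative hyperparameters
D = rng.normal(size=(n_atoms, 16))              # random initial dictionary
D /= np.linalg.norm(D, axis=1, keepdims=True)

def encode(X, D, thresh):
    a = X @ D.T                                 # activation of each atom
    return np.where(np.abs(a) > thresh, a, 0.0) # keep only strong (sparse) responses

err0 = np.mean((X - encode(X, D, thresh) @ D) ** 2)  # initial reconstruction error

for step in range(100):
    A = encode(X, D, thresh)                    # sparse codes for current dictionary
    R = A @ D                                   # reconstruction from sparse codes
    D += lr * A.T @ (X - R) / len(X)            # nudge atoms to reduce the error
    D /= np.linalg.norm(D, axis=1, keepdims=True)  # keep atoms unit-norm

err1 = np.mean((X - encode(X, D, thresh) @ D) ** 2)  # error after learning
```

No labels appear anywhere: the dictionary organizes itself purely from the statistics of the inputs, which is the sense in which the training is unsupervised.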
Garrett Kenyon is a computer scientist at Los Alamos and study coauthor.
“The issue of how to keep learning systems from becoming unstable really only arises when attempting to utilize biologically realistic, spiking neuromorphic processors or when trying to understand biology itself,” said Kenyon. “The vast majority of machine learning, deep learning, and AI researchers never encounter this issue because in the very artificial systems they study they have the luxury of performing global mathematical operations that have the effect of regulating the overall dynamical gain of the system.”
Sleep as a Last Resort Solution
According to the researchers, exposing the networks to an artificial analog of sleep was their last resort for stabilizing them. After experimenting with various types of noise, comparable to the static between stations on a radio, they obtained the best results with waves of Gaussian noise, which spans a wide range of frequencies and amplitudes.
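The source paper's title describes this sleep analog as sinusoidally modulated noise. A hypothetical sketch of such a signal in NumPy: broadband Gaussian noise whose amplitude is shaped by a slow sine wave, loosely mimicking slow-wave oscillations. The sampling rate and frequency here are illustrative assumptions, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 1000                                 # assumed samples per second
t = np.arange(0, 2.0, 1 / fs)             # two seconds of signal
slow_wave_hz = 1.0                        # slow-wave sleep band is roughly 0.5-4 Hz
envelope = 0.5 * (1 + np.sin(2 * np.pi * slow_wave_hz * t))  # slow sinusoidal envelope in [0, 1]
noise = rng.normal(size=t.shape)          # broadband Gaussian noise
sleep_input = envelope * noise            # sinusoidally modulated noise
```

Feeding such a signal to the network periodically, in place of structured visual input, is the rough idea behind giving the simulation "a good night's rest."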
The researchers hypothesized that the noise mimics the input biological neurons receive during slow-wave sleep. The results suggest that slow-wave sleep may help cortical neurons remain stable rather than hallucinate.
The team now plans to implement the algorithm on Intel’s Loihi neuromorphic chip, in the hope that sleep will help it stably process information from a silicon retina camera in real time. If the research confirms that artificial brains benefit from sleep, the same is likely true for androids and other intelligent machines.
Source: Using Sinusoidally-Modulated Noise as a Surrogate for Slow-Wave Sleep to Accomplish Stable Unsupervised Dictionary Learning in a Spike-Based Sparse Coding Model, CVPR Women in Computer Vision Workshop, 2020-06-14 (Seattle, Washington, United States)