Researchers Create First-Of-Its-Kind Artificial Neural Network

Researchers have created a multilayer all-optical artificial neural network, something that had not been successfully demonstrated before. Practical optical neural networks are highly sought after because they promise to be faster and far more power-efficient than networks running on conventional computers, and this new work could enable parallel computation performed entirely with light.

The researchers, from The Hong Kong University of Science and Technology, described their two-layer all-optical neural network in Optica, The Optical Society's journal for high-impact research, and demonstrated how the network could be applied to complex classification tasks.

“Our all-optical scheme could enable a neural network that performs optical parallel computation at the speed of light while consuming little energy,” said Junwei Liu, a member of the research team. “Large-scale, all-optical neural networks could be used for applications ranging from image recognition to scientific research.”

These all-optical networks operate differently from the conventional hybrid optical neural networks in use today. In hybrid networks, optical components typically handle the linear operations, while the nonlinear activation functions, the parts that simulate how neurons in the human brain respond, are usually implemented electronically. That is because nonlinear optics generally requires high-power lasers, which are difficult to integrate into an optical neural network.

To get around this, the researchers used cold atoms with electromagnetically induced transparency (EIT) to perform the nonlinear functions.

Shengwang Du, a member of the research team, spoke about the new developments.

“This light-induced effect can be achieved with very weak laser power,” he said. “Because this effect is based on nonlinear quantum interference, it might be possible to extend our system into a quantum neural network that could solve problems intractable by classical methods.”

To test their new approach, the team built a fully connected, two-layer all-optical neural network with 16 inputs and two outputs. They then used the network to classify the ordered and disordered phases of a statistical model of magnetism, and found that it was just as accurate as a trained computer-based neural network.
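
As a concrete picture of the architecture, here is a minimal digital sketch of a fully connected, two-layer network with 16 inputs and two outputs. It is only an analogue: in the actual system the linear steps are performed optically and the nonlinearity comes from EIT in cold atoms, and the hidden-layer width and tanh activation below are illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Layer sizes: 16 inputs and 2 outputs match the demonstration;
# the hidden width of 4 is an illustrative assumption.
W1 = rng.normal(size=(16, 4))
b1 = rng.normal(size=4)
W2 = rng.normal(size=(4, 2))
b2 = rng.normal(size=2)

def forward(x):
    """Two-layer fully connected network: linear -> nonlinearity -> linear.

    In the all-optical version, the linear steps are optical and the
    nonlinearity is supplied by EIT in cold atoms; tanh is a stand-in.
    """
    h = np.tanh(x @ W1 + b1)   # hidden layer with nonlinear activation
    return h @ W2 + b2         # two output scores, one per class

x = rng.normal(size=16)        # one 16-component input pattern
print(forward(x))              # e.g., scores for ordered vs. disordered
```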

The research team's next step is to scale the approach up to large all-optical deep neural networks with complex architectures designed for specific applications, such as image recognition, to demonstrate that the scheme works at much larger scales.

“Although our work is a proof-of-principle demonstration, it shows that it may become possible in the future to develop optical versions of artificial intelligence,” said Du.

“The next generation of artificial intelligence hardware will be intrinsically much faster and exhibit lower power consumption compared to today’s computer-based artificial intelligence,” added Liu.

For more developments of this kind in science and technology, The Optical Society (OSA) offers publications, meetings, membership initiatives, and dedicated resources, backed by an extensive network of experts in optics and photonics. The organization supports the scientists, engineers, students, and business leaders responsible for scientific discoveries and their applications, and its website provides regular news and research updates.

AI Model Can Take Blurry Images And Enhance Resolution By 60 Times


Researchers from Duke University have developed an AI model capable of taking highly blurry, pixelated images and rendering them in high detail. According to TechXplore, the model can take relatively few pixels and scale the images up to create realistic-looking faces with approximately 64 times the resolution of the original image. The model hallucinates, or imagines, features to plausibly fill in the gaps in the original image.

The research is an example of super-resolution. As Cynthia Rudin of Duke University's computer science team explained to TechXplore, the project sets a record for super-resolution: never before have images been created with such detail from so small a sample of starting pixels. The researchers were careful to emphasize that the model doesn't actually recreate the face of the person in the original, low-quality image. Instead, it generates new faces, filling in details that weren't there before. For this reason, the model couldn't be used for anything like security systems, as it can't turn an out-of-focus image into an image of the real person.

Traditional super-resolution techniques work by guessing which pixels are needed to turn a low-resolution image into a high-resolution one, based on images the model has seen beforehand. Because the added pixels are the result of guesses, not all of them will match their surroundings, and certain regions of the image may look fuzzy or warped. The Duke researchers took a different approach: their model starts from a low-resolution image and adds detail over time by referencing high-resolution AI-generated faces, searching for generated faces that resemble the target image once they are scaled down to its size.
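
In rough code, that search amounts to optimizing a latent vector so that the generator's output, once downscaled, matches the low-resolution input. The sketch below assumes a pretrained face generator `G` with a hypothetical `latent_dim` attribute; the real PULSE method uses a StyleGAN generator and a more carefully constrained search, so treat this as the idea rather than the implementation.

```python
import torch

def pulse_style_search(G, lowres, scale, steps=500, lr=0.1):
    """Search the generator's latent space for a face whose downscaled
    version matches the low-resolution target (simplified PULSE idea)."""
    z = torch.randn(1, G.latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    down = torch.nn.functional.interpolate  # differentiable downscaling
    for _ in range(steps):
        hires = G(z)                                   # candidate HR face
        guess = down(hires, scale_factor=1 / scale,
                     mode="bicubic", align_corners=False)
        loss = torch.mean((guess - lowres) ** 2)       # match the LR input
        opt.zero_grad()
        loss.backward()
        opt.step()
    return G(z).detach()  # a plausible face, not the true identity
```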

The research team used a Generative Adversarial Network (GAN) model to handle the creation of new images. A GAN is actually two neural networks, both trained on the same dataset and pitted against each other. One network is responsible for generating fake images that mimic the real images in the training dataset, while the second is responsible for distinguishing the fake images from the genuine ones. The first network is penalized whenever its images are identified as fake, and it improves until the fakes are, ideally, indistinguishable from the genuine images.
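
A minimal sketch of that adversarial loop, in PyTorch with standard non-saturating GAN losses (the `G` and `D` modules, batch shapes, and latent size are placeholder assumptions):

```python
import torch
import torch.nn.functional as F

def gan_training_step(G, D, real_images, opt_G, opt_D, latent_dim=128):
    """One adversarial update: D learns to separate real from fake,
    then G learns to fool D."""
    batch = real_images.size(0)
    z = torch.randn(batch, latent_dim)

    # Discriminator step: push real images toward 1, fakes toward 0.
    fake = G(z).detach()  # detach so this step doesn't update G
    d_loss = (F.binary_cross_entropy_with_logits(D(real_images),
                                                 torch.ones(batch, 1))
              + F.binary_cross_entropy_with_logits(D(fake),
                                                   torch.zeros(batch, 1)))
    opt_D.zero_grad()
    d_loss.backward()
    opt_D.step()

    # Generator step: make D label freshly generated fakes as real.
    g_loss = F.binary_cross_entropy_with_logits(D(G(z)),
                                                torch.ones(batch, 1))
    opt_G.zero_grad()
    g_loss.backward()
    opt_G.step()
    return d_loss.item(), g_loss.item()
```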

The researchers have dubbed their super-resolution model PULSE, and it consistently produces high-quality images even when given inputs so blurry that other super-resolution methods fail. The model is even capable of producing realistic-looking faces from images where the facial features are almost indistinguishable. For instance, given a 16×16-pixel image of a face, it can create a 1024×1024 image, adding more than a million pixels in the process and filling in details like strands of hair, wrinkles, and even lighting. When the researchers had people rate 1,440 PULSE-generated images against images produced by other super-resolution techniques, the PULSE images consistently scored best.

While the researchers applied their model to images of people's faces, the same technique could be applied to almost any object. Low-resolution images of a given class of objects could be used to create high-resolution images of that class, opening up possible applications in a variety of industries and fields, from microscopy and satellite imagery to education, manufacturing, and medicine.


New Research Suggests Artificial Brains Could Benefit From Sleep


New research from Los Alamos National Laboratory suggests that artificial brains may benefit from periods of rest, much as living brains do.

The research will be presented at the Women in Computer Vision Workshop in Seattle on June 14. 

Yijing Watkins is a Los Alamos National Laboratory computer scientist. 

“We study spiking neural networks, which are systems that learn much as living brains do,” said Watkins. “We were fascinated by the prospect of training a neuromorphic processor in a manner analogous to how humans and other biological systems learn from their environment during childhood development.”

Solving Instability in Network Simulations

Watkins and the team found that continuous periods of unsupervised learning led to instability in the network simulations. However, once the team exposed the networks to states analogous to the waves living brains experience during sleep, stability was restored.

“It was as though we were giving the neural networks the equivalent of a good night’s rest,” said Watkins.

The team made the discovery while developing neural networks based on how humans and other biological systems learn to see. They faced challenges in stabilizing simulated neural networks undergoing unsupervised dictionary training, which involves classifying objects without prior examples to compare them against.
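
For a sense of what unsupervised dictionary learning looks like in practice, here is a small scikit-learn sketch. The Los Alamos work uses spiking, neuromorphic versions of this idea, so this is only a conventional analogue, and the patch data here is made up:

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

# Toy "image patch" data: 500 flattened 8x8 patches (random here; in
# practice these would come from natural images or a silicon retina).
rng = np.random.default_rng(0)
patches = rng.normal(size=(500, 64))

# Learn a dictionary of 32 features such that each patch is approximated
# by a sparse combination of them; no labels are involved.
dico = MiniBatchDictionaryLearning(n_components=32, alpha=1.0,
                                   random_state=0)
codes = dico.fit_transform(patches)  # sparse coefficients per patch
atoms = dico.components_             # the learned dictionary features

print(codes.shape, atoms.shape)      # (500, 32) (32, 64)
```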

Garrett Kenyon is a computer scientist at Los Alamos and a coauthor of the study.

“The issue of how to keep learning systems from becoming unstable really only arises when attempting to utilize biologically realistic, spiking neuromorphic processors or when trying to understand biology itself,” said Kenyon. “The vast majority of machine learning, deep learning, and AI researchers never encounter this issue because in the very artificial systems they study they have the luxury of performing global mathematical operations that have the effect of regulating the overall dynamical gain of the system.”

Sleep as a Last Resort Solution

According to the researchers, exposing the networks to an artificial analog of sleep was their last resort for stabilizing them. After experimenting with various types of noise, similar to the static between stations on a radio, they got the best results from waves of Gaussian noise, which span a wide range of frequencies and amplitudes.

The researchers hypothesized that this noise mimics the input biological neurons receive during slow-wave sleep. The results suggest that slow-wave sleep could play a role in ensuring that cortical neurons maintain their stability and do not hallucinate.
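
Going by the title of the source paper below, the "sleep" input is sinusoidally modulated Gaussian noise. Here is a small sketch of how such an input might be generated; the frequency, amplitude, and array shapes are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

def sleep_phase_input(n_neurons, n_steps, dt=1e-3, freq_hz=1.0,
                      noise_std=1.0, rng=None):
    """Sinusoidally modulated Gaussian noise as a stand-in 'sleep' input.

    The slow sinusoidal envelope loosely mimics slow-wave activity,
    while the broadband Gaussian noise supplies the wide range of
    frequencies and amplitudes described in the article.
    """
    if rng is None:
        rng = np.random.default_rng()
    t = np.arange(n_steps) * dt
    envelope = 0.5 * (1.0 + np.sin(2.0 * np.pi * freq_hz * t))  # slow wave
    noise = rng.normal(0.0, noise_std, size=(n_steps, n_neurons))
    return envelope[:, None] * noise

# During training, one could alternate real sensory input with
# sleep_phase_input(...) batches to restore stability in the simulation.
```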

The team will now work on implementing the algorithm on Intel's Loihi neuromorphic chip, hoping that the sleep phases will help it stably process information from a silicon retina camera in real time. If the research confirms that artificial brains benefit from sleep, the same will likely hold for androids and other intelligent machines.

Source: Using Sinusoidally-Modulated Noise as a Surrogate for Slow-Wave Sleep to Accomplish Stable Unsupervised Dictionary Learning in a Spike-Based Sparse Coding Model, CVPR Women in Computer Vision Workshop, 2020-06-14 (Seattle, Washington, United States)



AI Model Used To Map Dryness Of Forests, Predict Wildfires


A new deep learning model designed by researchers from Stanford University maps moisture levels across 12 western states, helping fire management teams predict wildfires and get ahead of potentially destructive blazes.

Fire management teams aim to predict where the worst blazes might occur so that preventive measures, like prescribed burns, can be carried out. Predicting wildfires' points of origin and spreading patterns requires information about fuel amounts and moisture levels in the target region. Collecting and analyzing that data at the speed required to be useful to wildfire management teams is difficult, but deep learning models could help automate these critical processes.

As Futurity recently reported, the Stanford researchers collected climate data and designed a model intended to render detailed maps of moisture levels across 12 western states, including the Pacific Coast states, Texas, Wyoming, Montana, and the Southwest. According to the researchers, although the model is still being refined, it is already capable of revealing areas at high risk of forest fire where the landscape is unusually dry.

The typical method of collecting data on fuel and moisture levels in a target region is to painstakingly compare dried-out vegetation with moister vegetation. Specifically, researchers collect vegetation samples from trees and weigh them; the samples are then dried out and reweighed. Comparing the dry weight with the wet weight reveals how much moisture the vegetation held. This process is long and complex, and it is only viable in certain areas and for some species of vegetation. However, data collected over decades of this work has been used to build the National Fuel Moisture Database (NFMD), which comprises more than 200,000 records. A region's fuel-moisture content is well known to be linked to wildfire risk, though it is still unclear how large a role it plays from one ecosystem, and one plant species, to another.
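
The underlying field measurement reduces to a simple ratio. As a quick sketch, assuming the standard convention of expressing fuel moisture as a percentage of dry weight:

```python
def fuel_moisture_content(wet_weight_g, dry_weight_g):
    """Fuel moisture as a percentage of dry weight: (wet - dry) / dry * 100.

    Values above 100% mean the sample held more water than plant matter.
    """
    return (wet_weight_g - dry_weight_g) / dry_weight_g * 100.0

# Example: a 10 g fresh sample that dries down to 4 g
print(fuel_moisture_content(10.0, 4.0))  # 150.0 (%)
```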

Krishna Rao, a PhD student in earth system science at Stanford, was the lead author of the new study, and he explained to Futurity that machine learning lets researchers test assumptions about the links between live fuel moisture and weather across different ecosystems. Rao and colleagues trained a recurrent neural network on data from the National Fuel Moisture Database, then tested it by estimating fuel moisture levels from measurements collected by spaceborne sensors. The data included synthetic aperture radar (SAR), microwave radar signals that penetrate to the surface, as well as visible light bouncing off the planet's surface. The training and validation data consisted of three years of measurements, beginning in 2015, for approximately 240 sites across the western US.
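
In outline, the kind of recurrent model described might look like the following sketch: an LSTM reads a per-site time series of satellite features and regresses a moisture value. The feature count, hidden size, and monthly sampling are assumptions for illustration, not the study's actual configuration.

```python
import torch
import torch.nn as nn

class FuelMoistureRNN(nn.Module):
    """LSTM over a per-site time series of satellite features
    (e.g., SAR backscatter plus optical reflectance bands),
    predicting live fuel moisture at the final time step."""
    def __init__(self, n_features=8, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):              # x: (sites, time_steps, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])   # one moisture estimate per site

model = FuelMoistureRNN()
sites = torch.randn(240, 36, 8)   # ~240 sites, 3 years of (assumed) monthly data
print(model(sites).shape)         # torch.Size([240, 1])
```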

The researchers ran analyses on various types of land cover, including sparse vegetation, grasslands, shrublands, needleleaf evergreen forests, and broadleaf deciduous forests. The model's predictions were most accurate, matching NFMD measurements most reliably, in shrubland regions. This is fortunate, as shrublands make up approximately 45% of the ecosystems in the US West. Shrublands, particularly chaparral, are often uniquely susceptible to fire, as seen in many of the blazes that have burned across California in recent years.

The predictions generated by the model have been used to create an interactive map that fire management agencies could one day use to prioritize regions for fire control and discern other relevant patterns. The researchers believe that, with further training and refinement, the model could improve wildfire prediction even further.

As Alexandra Konings, assistant professor of earth system science at Stanford, explained to Futurity:

“Creating these maps was the first step in understanding how this new fuel moisture data might affect fire risk and predictions. Now we’re trying to really pin down the best ways to use it for improved fire prediction.”
