
Facial Recognition

AI is Moving Deeper Into Human Emotion


Researchers at the University of Colorado and Duke University have developed a neural network that can accurately decode images into 11 different human emotion categories. The research team included Phillip A. Kragel, Marianne C. Reddan, Kevin S. LaBar, and Tor D. Wager.

Phillip Kragel describes neural networks as computer models that map input signals to an output of interest by learning a series of filters. When a network is trained to detect a certain kind of image, it learns the features that are unique to it, such as shape, color, and size.

The new convolutional neural network has been named EmoNet, and it was trained on visual images. The research team used a database of 2,185 videos covering 27 different emotion categories, from which they extracted 137,482 frames that were divided into training and testing samples. The categories were not limited to basic emotions but included many complex ones as well, among them anxiety, awe, boredom, confusion, craving, disgust, empathetic pain, entrancement, excitement, fear, horror, interest, joy, romance, sadness, sexual desire, and surprise.
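
As a rough illustration of that training setup, and not the authors’ published code, the sketch below fine-tunes a pretrained convolutional network to classify individual frames into the database’s emotion categories. The AlexNet backbone, image size, and optimizer settings are assumptions made for the example; only the 27-category figure comes from the database described above.

```python
# Illustrative sketch: fine-tuning a pretrained CNN to classify video frames
# into emotion categories. The backbone and hyperparameters are assumptions.
import torch
import torch.nn as nn
from torchvision import models

NUM_EMOTIONS = 27  # number of categories in the video database described above

# Start from a network pretrained on ordinary object recognition...
backbone = models.alexnet(weights=models.AlexNet_Weights.DEFAULT)
# ...and replace its final layer so it predicts emotion categories instead.
backbone.classifier[-1] = nn.Linear(backbone.classifier[-1].in_features, NUM_EMOTIONS)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(backbone.parameters(), lr=1e-3, momentum=0.9)

def train_step(frames, emotion_labels):
    """One update on a batch of frames (N, 3, 224, 224) with integer labels (N,)."""
    optimizer.zero_grad()
    loss = criterion(backbone(frames), emotion_labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```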

The model was able to detect some emotions, like craving and sexual desire, with high confidence, but it had trouble with others, such as confusion and surprise. To categorize the images, the neural network relied on features such as color, spatial power spectra, and the presence of objects and faces in the images.
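
For readers unfamiliar with the term, a spatial power spectrum simply measures how much of an image’s content lives at each spatial frequency, coarse shapes versus fine texture. A minimal, purely illustrative way to compute one is:

```python
# Tiny illustration (not from the study) of one such feature: the spatial power
# spectrum of a grayscale image, computed with a 2D Fourier transform.
import numpy as np

def spatial_power_spectrum(image_gray):
    """image_gray: 2D array of pixel intensities. Returns the power at each
    spatial frequency, with low frequencies shifted to the center."""
    spectrum = np.fft.fftshift(np.fft.fft2(image_gray))
    return np.abs(spectrum) ** 2
```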

To build on the research, the team recorded the brain activity of 18 people who were shown 112 different images. The researchers then showed the same images to EmoNet and compared the network’s output with the participants’ brain responses.

We already use apps and programs every day that read our faces and expressions, for things like facial recognition, AI-driven photo manipulation, and unlocking our smartphones. This new development takes that much further, with the possibility of reading not only a face’s physical features but also a person’s emotions and feelings. It is an exciting but also concerning development, as privacy questions will surely arise; we already worry about facial recognition and what can happen with that data.

Aside from the potential privacy risks, this development could help in many areas. For one, researchers often have to rely on participants reporting their own emotions; instead, they could use images of a participant’s face to infer those emotions, which would reduce errors in the research data.

“When it comes to measuring emotions, we’re typically still limited only to asking people how they feel,” said Tor Wager, one of the researchers on the team. “Our work can help move us towards direct measures of emotion-related brain processes.”

This new research could also help move mental health diagnosis away from labels like “anxiety” and toward measurable brain processes.

“Moving away from subjective labels such as ‘anxiety’ and ‘depression’ towards brain processes could lead to new targets for therapeutics, treatments, and interventions,” said Phillip Kragel, another one of the researchers.

This neural network is just one of many new and exciting developments in artificial intelligence. Researchers are constantly pushing the technology further, and it will make an impact in every area of our lives. These new developments are taking AI deeper into human behavior and emotion: while the technology is mostly known for dealing with the physical realm, including robotic muscles, arms, and other parts of the body, it is now reaching into the human psyche.

 


Alex McFarland is a historian and journalist covering the newest developments in artificial intelligence.

Artificial Neural Networks

AI Model Can Take Blurry Images And Enhance Resolution By 60 Times


Researchers from Duke University have developed an AI model capable of taking highly blurry, pixelated images and rendering them in high detail. According to TechXplore, the model can take relatively few pixels and scale the images up to create realistic-looking faces with approximately 64 times the resolution of the original image. The model hallucinates, or imagines, features that fill in the gaps between the pixels of the original image.

The research is an example of super-resolution. As Cynthia Rudin from Duke University’s computer science team explained to TechXplore, the project sets a record for super-resolution: never before have images been created with such detail from such a small sample of starting pixels. The researchers were careful to emphasize that the model doesn’t actually recreate the face of the person in the original, low-quality image. Instead, it generates new faces, filling in details that weren’t there before. For this reason, the model couldn’t be used for something like a security system, as it can’t turn an out-of-focus image into a picture of a real person.

Traditional super-resolution techniques operate by making guesses about which pixels are needed to turn the image into a high-resolution one, based on images the model has learned about beforehand. Because the added pixels are the result of guesses, not all of them will match their surrounding pixels, and certain regions of the image may look fuzzy or warped. The Duke researchers used a different approach. Their model takes a low-resolution image and adds detail to it over time, referencing high-resolution AI-generated faces as examples: it searches for generated faces that resemble the target image when they are scaled down to the target’s size.
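
A simplified sketch of that search, not the actual PULSE implementation, is shown below: a latent vector feeding a frozen, pretrained face generator (a hypothetical `generator` here) is optimized until the generated face, once scaled down, matches the blurry target. PULSE itself adds constraints on the latent space that are omitted for brevity.

```python
# Simplified latent-space search for super-resolution, in the spirit of the
# approach described above; `generator` is a hypothetical pretrained face model.
import torch
import torch.nn.functional as F

def super_resolve(generator, lr_target, latent_dim=512, steps=500, step_size=0.1):
    """lr_target: the blurry input image, e.g. shape (1, 3, 16, 16)."""
    z = torch.randn(1, latent_dim, requires_grad=True)
    optimizer = torch.optim.Adam([z], lr=step_size)
    for _ in range(steps):
        optimizer.zero_grad()
        hr_guess = generator(z)  # e.g. (1, 3, 1024, 1024)
        downscaled = F.interpolate(hr_guess, size=lr_target.shape[-2:],
                                   mode="bicubic", align_corners=False)
        loss = F.mse_loss(downscaled, lr_target)  # does it match when shrunk?
        loss.backward()
        optimizer.step()
    return generator(z).detach()  # a plausible face, not the actual person
```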

The research team created a Generative Adversarial Network model to handle the creation of new images. GANs are actually two neural networks that are trained on the same dataset and pitted against one another. One network is responsible for generating fake images that mimic the real images in the training dataset, while the second network is responsible for distinguishing the fake images from the genuine ones. The first network is notified when its images have been identified as fake, and it improves until the fake images are, ideally, indistinguishable from the genuine ones.
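
As a rough illustration of that adversarial setup, and not the researchers’ code, a minimal GAN training loop might look like the following; the `generator` and `discriminator` arguments are placeholder networks.

```python
# Minimal GAN training loop: the generator learns to produce images the
# discriminator cannot tell apart from real ones.
import torch
import torch.nn as nn

def train_gan(generator, discriminator, dataloader, latent_dim=128, epochs=10):
    bce = nn.BCEWithLogitsLoss()
    g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
    d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
    for _ in range(epochs):
        for real_images in dataloader:
            n = real_images.size(0)
            # 1) Train the discriminator to separate real images from fakes.
            fake_images = generator(torch.randn(n, latent_dim)).detach()
            d_loss = (bce(discriminator(real_images), torch.ones(n, 1)) +
                      bce(discriminator(fake_images), torch.zeros(n, 1)))
            d_opt.zero_grad()
            d_loss.backward()
            d_opt.step()
            # 2) Train the generator to fool the discriminator.
            g_loss = bce(discriminator(generator(torch.randn(n, latent_dim))),
                         torch.ones(n, 1))
            g_opt.zero_grad()
            g_loss.backward()
            g_opt.step()
```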

The researchers have dubbed their super-resolution model PULSE, and it consistently produces high-quality images even when given images so blurry that other super-resolution methods can’t create high-quality images from them. The model is even capable of producing realistic-looking faces from images in which the facial features are almost indistinguishable. For instance, given a face at 16 x 16 resolution, it can create a 1024 x 1024 image. More than a million pixels are added in the process (1,048,576 versus the original 256), filling in details like strands of hair, wrinkles, and even lighting. When the researchers had people rate 1,440 PULSE-generated images against images generated by other super-resolution techniques, the PULSE-generated images consistently scored the best.

While the researchers used their model on images of people’s faces, the same techniques could be applied to almost any object. Low-resolution images of various objects could be used to create high-resolution images of them, opening up possible applications across a variety of industries and fields, from microscopy and satellite imagery to education, manufacturing, and medicine.


Artificial Neural Networks

New Research Suggests Artificial Brains Could Benefit From Sleep


New research coming from Los Alamos National Laboratory suggests that artificial brains almost certainly benefit from periods of rest, much as living brains do.

The research will be presented at the Women in Computer Vision Workshop in Seattle on June 14. 

Yijing Watkins is a Los Alamos National Laboratory computer scientist. 

“We study spiking neural networks, which are systems that learn much as living brains do,” said Watkins. “We were fascinated by the prospect of training a neuromorphic processor in a manner analogous to how humans and other biological systems learn from their environment during childhood development.”

Solving Instability in Network Simulations

Watkins and the team found that continuous periods of unsupervised learning led to instability in the network simulations. However, once the team exposed the networks to states analogous to the waves that living brains experience during sleep, stability was restored.

“It was as though we were giving the neural networks the equivalent of a good night’s rest,” said Watkins.

The team made the discovery while working on neural networks based on how humans and other biological systems learn to see. They faced challenges in stabilizing simulated neural networks undergoing unsupervised dictionary training, which involves classifying objects without previous examples to use for comparison.

Garrett Kenyon is a computer scientist at Los Alamos and a coauthor of the study.

“The issue of how to keep learning systems from becoming unstable really only arises when attempting to utilize biologically realistic, spiking neuromorphic processors or when trying to understand biology itself,” said Kenyon. “The vast majority of machine learning, deep learning, and AI researchers never encounter this issue because in the very artificial systems they study they have the luxury of performing global mathematical operations that have the effect of regulating the overall dynamical gain of the system.”
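
One everyday example of such a global, gain-regulating operation, offered purely as an illustration and not something taken from the study, is rescaling a layer’s activations so their overall magnitude stays bounded:

```python
# Illustrative only: a global operation that regulates a layer's overall gain.
import numpy as np

def normalize_activations(activations, eps=1e-8):
    """Rescale a layer's activations to zero mean and unit variance, keeping
    the network's overall dynamical gain in check."""
    return (activations - activations.mean()) / (activations.std() + eps)
```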

Sleep as a Last Resort Solution

According to the researchers, exposing the networks to an artificial analog of sleep was essentially a last resort for stabilizing them. After experimenting with different types of noise, comparable to the static you hear between stations on a radio, they got the best results from waves of Gaussian noise, which spans a wide range of frequencies and amplitudes.
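
A conceptual sketch of that wake-and-sleep alternation is shown below. This is not the Los Alamos code: the `unsupervised_update` method, the step counts, and the noise modulation are all assumptions made for illustration, loosely echoing the “sinusoidally-modulated noise” named in the paper’s title.

```python
# Conceptual sketch: alternate unsupervised learning on real input ("wake")
# with phases driven only by Gaussian noise ("sleep").
import numpy as np

def train_with_sleep(network, data_batches, wake_steps=100, sleep_steps=50,
                     base_noise_std=0.5):
    for batch in data_batches:
        # Wake phase: ordinary unsupervised (dictionary-style) updates on real data.
        for _ in range(wake_steps):
            network.unsupervised_update(batch)  # hypothetical method
        # Sleep phase: feed the network Gaussian noise instead of data, with a
        # slowly modulated amplitude (modulation details are assumed).
        for step in range(sleep_steps):
            amplitude = base_noise_std * (1.0 + np.sin(2.0 * np.pi * step / sleep_steps)) / 2.0
            noise = np.random.normal(0.0, amplitude, size=np.shape(batch))
            network.unsupervised_update(noise)
```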

The researchers hypothesize that the noise mimics the input received by biological neurons during slow-wave sleep. The results suggest that slow-wave sleep could play a role in ensuring that cortical neurons maintain their stability and do not hallucinate.

The team will now work on implementing the algorithm on Intel’s Loihi neuromorphic chip, in the hope that periodic sleep will help it stably process information from a silicon retina camera in real time. If the research determines that artificial brains benefit from sleep, the same is likely true for androids and other intelligent machines.

Source: Using Sinusoidally-Modulated Noise as a Surrogate for Slow-Wave Sleep to Accomplish Stable Unsupervised Dictionary Learning in a Spike-Based Sparse Coding Model, CVPR Women in Computer Vision Workshop, 2020-06-14 (Seattle, Washington, United States)

 


Artificial Neural Networks

AI Model Used To Map Dryness Of Forests, Predict Wildfires


A new deep learning model designed by researchers from Stanford University maps moisture levels across 12 western states in order to assist in the prediction of wildfires and help fire management teams get ahead of potentially destructive blazes.

Fire management teams aim to predict where the worst blazes might occur so that preventative measures, like prescribed burns, can be carried out. Predicting points of origin and spreading patterns for wildfires requires information about fuel amounts and moisture levels in the target region. Collecting this data and analyzing it at the speed required to be useful to wildfire management teams is difficult, but deep learning models could help automate these critical processes.

As Futurity recently reported, researchers from Stanford University collected climate data and designed a model intended to render detailed maps of moisture levels across 12 western states, including the Pacific Coast states, Texas, Wyoming, Montana, and the southwestern states. According to the researchers, although the model is still being refined, it is already capable of revealing areas at high risk of forest fires, where the landscape is unusually dry.

The typical method of collecting data on fuel and moisture levels in a region involves painstakingly comparing dried-out vegetation to more moist vegetation. Specifically, researchers collect vegetation samples from trees and weigh them; the samples are then dried out and reweighed. Comparing the dry weight to the wet weight reveals how much moisture the vegetation held. This process is long and complex, and it is only viable in certain areas and for some species of vegetation. However, the data collected over decades of this work has been used to build the National Fuel Moisture Database, which comprises over 200,000 records. The fuel moisture content of a region is well known to be linked to wildfire risk, though it is still unclear how much its role varies between ecosystems and from one plant species to another.
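
For reference, that wet-versus-dry comparison is conventionally expressed as the water lost during drying as a percentage of the sample’s dry mass. The short example below illustrates the calculation; the exact NFMD measurement protocol may differ in its details.

```python
def fuel_moisture_content(wet_weight_g, dry_weight_g):
    """Live fuel moisture content: water mass lost during drying, expressed as
    a percentage of the sample's dry mass (illustrative formula)."""
    return 100.0 * (wet_weight_g - dry_weight_g) / dry_weight_g

# Example: a 10 g fresh sample that weighs 6 g once dried held 4 g of water,
# giving a fuel moisture content of roughly 67%.
print(fuel_moisture_content(10.0, 6.0))  # ~66.7
```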

Krishna Rao, a PhD student in earth systems science at Stanford and the lead author of the new study, explained to Futurity that machine learning lets researchers test assumptions about the links between live fuel moisture and weather across different ecosystems. Rao and colleagues trained a recurrent neural network model on data from the National Fuel Moisture Database. The model was then tested by estimating fuel moisture levels from measurements collected by space-based sensors. The data included signals from synthetic aperture radar (SAR), which are microwave radar signals that penetrate to the surface, as well as visible light bouncing off the planet’s surface. The training and validation data consisted of three years of measurements, starting in 2015, for approximately 240 sites across the western US.
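
A minimal sketch of such a recurrent model, not the Stanford team’s actual architecture, might map a per-site time series of SAR and optical features to a single fuel moisture estimate; the feature count and layer sizes below are assumptions.

```python
# Illustrative sketch: a recurrent network that maps a time series of satellite
# measurements at one site (SAR backscatter plus optical reflectance bands) to
# an estimated live fuel moisture value.
import torch
import torch.nn as nn

class FuelMoistureRNN(nn.Module):
    def __init__(self, n_features=8, hidden_size=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x):
        # x: (batch, time_steps, n_features), e.g. months of SAR + optical data
        _, (h_n, _) = self.lstm(x)
        return self.head(h_n[-1])  # predicted live fuel moisture (percent)

model = FuelMoistureRNN()
loss_fn = nn.MSELoss()  # trained against field measurements such as those in the NFMD
```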

The researchers ran analyses on various types of land cover, including sparse vegetation, grasslands, shrublands, needleleaf evergreen forests, and broadleaf deciduous forests. The model’s predictions were most accurate, meaning they most reliably matched the NFMD measurements, for shrubland regions. This is fortunate, as shrublands make up approximately 45% of the ecosystems found throughout the US west. Shrublands, particularly chaparral, are often uniquely susceptible to fire, as seen in many of the fires that have burned across California in recent years.

The predictions generated by the model have been used to create an interactive map that fire management agencies could one day use to prioritize regions for fire control and discern other relevant patterns. The researchers believe that, with further training and refinement, the model could become even more useful for predicting fire risk.

As Alexandra Konings, assistant professor of earth systems science at Stanford, explained to Futurity:

“Creating these maps was the first step in understanding how this new fuel moisture data might affect fire risk and predictions. Now we’re trying to really pin down the best ways to use it for improved fire prediction.”
