
Artificial Neural Networks

AI Engineers Develop Method That Can Detect Intent Of Those Spreading Misinformation



Dealing with misinformation in the digital age is a complex problem. Not only does misinformation have to be identified, tagged, and corrected, but the intent of those responsible for making the claim should also be distinguished. A person may unknowingly spread misinformation, or just be giving their opinion on an issue even though it is later reported as fact. Recently, a team of AI researchers and engineers at Dartmouth created a framework that can be used to derive opinion from “fake news” reports.

As ScienceDaily reports, the Dartmouth team’s study was recently published in the Journal of Experimental & Theoretical Artificial Intelligence. While previous studies have attempted to identify fake news and fight deception, this might be the first study that aimed to identify the intent of the speaker in a news piece. While a true story can be twisted into various deceptive forms, it’s important to distinguish whether or not deception was intended. The research team argues that intent matters when considering misinformation, as deception is only possible if there was intent to mislead. If an individual didn’t realize they were spreading misinformation or if they were just giving their opinion, there can’t be deception.

Eugene Santos Jr., an engineering professor at Dartmouth’s Thayer School of Engineering, explained to ScienceDaily why their model attempts to distinguish deceptive intent:

“Deceptive intent to mislead listeners on purpose poses a much larger threat than unintentional mistakes. To the best of our knowledge, our algorithm is the only method that detects deception and at the same time discriminates malicious acts from benign acts.”

To construct their model, the research team analyzed the features of deceptive reasoning. The resulting algorithm distinguishes intent to deceive from other forms of communication by focusing on discrepancies between a person’s past arguments and their current statements. Because the model measures how far a person deviates from their past arguments, it requires large amounts of data. The training data consisted of a survey in which over 100 people gave their opinions on controversial topics, along with reviews of 20 different hotels, comprising 400 fictitious reviews and 800 real reviews.
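The deviation idea can be illustrated with a deliberately simple toy. This is not the Dartmouth team's actual algorithm; it just scores how far a new statement strays from a person's past statements using bag-of-words cosine similarity, and every name and sentence in it is a hypothetical stand-in:

```python
from collections import Counter
import math

def bow(text):
    # Simple bag-of-words vector as a word-count dictionary
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two word-count vectors
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def deviation_score(past_statements, current):
    # 1.0 means the current claim is maximally unlike every past argument
    sims = [cosine(bow(p), bow(current)) for p in past_statements]
    return 1.0 - max(sims)

past = ["vaccines are safe and effective",
        "public health measures save lives"]
print(deviation_score(past, "vaccines are safe for most people"))  # ~0.45, similar to past claims
print(deviation_score(past, "the moon landing was staged"))        # 1.0, no overlap with past claims
```

A real system would of course use far richer representations of a person's arguments, but the core signal, distance between current and past positions, is the same.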

According to Santos, the framework developed by the researchers could be refined and applied by news organizations and readers to analyze the content of “fake news” articles. Readers could examine articles for the presence of opinions and determine for themselves whether a logical argument has been used. Santos also said that the team wants to examine the impact of misinformation and the ripple effects it has.

Popular culture often depicts non-verbal behaviors like facial expressions as indicators that someone is lying, but the authors of the study note that these behavioral hints aren’t always reliable. Deqing Li, a co-author on the paper, explained that their research found models based on reasoning intent to be better indicators of lying than behavioral and verbal cues, as such models “are better at distinguishing intentional lies from other types of information distortion”.

The work of the Dartmouth researchers isn’t the only recent advancement in fighting misinformation with AI. News articles with clickbait titles often mask misinformation, implying one thing happened when another event actually occurred.

As reported by AINews, a team of researchers from Arizona State University and Penn State University collaborated to create an AI that could detect clickbait. The researchers asked people to write their own clickbait headlines and also wrote a program that generated clickbait headlines. Both sets of headlines were then used to train a model that could effectively detect clickbait headlines, whether they were written by machines or by people.
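The data-augmentation idea, generating synthetic clickbait to enlarge the training set, can be sketched with a toy example. The templates, headlines, and word-frequency classifier below are hypothetical stand-ins for illustration, not the researchers' generator or model:

```python
from collections import Counter

# Hypothetical machine-generated clickbait from simple fill-in templates,
# standing in for the headline-generating program the researchers wrote
templates = ["You won't believe what {} did",
             "{} secrets doctors don't want you to know",
             "This {} trick will change your life"]
subjects = ["this celebrity", "10 weird", "one simple"]
generated = [t.format(s) for t in templates for s in subjects]

human_clickbait = ["What happened next will shock you",
                   "17 photos you need to see before you die"]
news = ["Senate passes annual budget bill",
        "Local council approves new transit plan",
        "Researchers publish study on sleep"]

def train(pos, neg):
    # Count how often each word appears in each class
    return (Counter(w for h in pos for w in h.lower().split()),
            Counter(w for h in neg for w in h.lower().split()))

def is_clickbait(headline, pos_counts, neg_counts):
    # Classify by which class's vocabulary the headline matches more
    words = headline.lower().split()
    return sum(pos_counts[w] for w in words) > sum(neg_counts[w] for w in words)

pos_counts, neg_counts = train(generated + human_clickbait, news)
print(is_clickbait("You won't believe this one simple trick", pos_counts, neg_counts))  # True
```

The point of the sketch is only the pipeline shape: machine-generated examples are mixed with human-written ones before training, which is the feedback loop Lee describes below.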

According to the researchers, their algorithm was around 14.5% more accurate at detecting clickbait titles than previous AIs had been. Dongwon Lee, the lead researcher on the project and an associate professor at Penn State’s College of Information Sciences and Technology, explained how the experiment demonstrates the utility of generating data with an AI and feeding it back into a training pipeline.

“This result is quite interesting as we successfully demonstrated that machine-generated clickbait training data can be fed back into the training pipeline to train a wide variety of machine learning models to have improved performance,” explained Lee.


AI Model Can Take Blurry Images And Enhance Resolution By 60 Times


Researchers from Duke University have developed an AI model capable of taking highly blurry, pixelated images and rendering them with high detail. According to TechXplore, the model can take relatively few pixels and scale the images up to create realistic-looking faces with approximately 64 times the resolution of the original image. The model hallucinates, or imagines, features that fall between the lines of the original image.

The research is an example of super-resolution. As Cynthia Rudin of Duke University’s computer science team explained to TechXplore, the project sets a record for super-resolution: never before have images been created with such fidelity from so small a sample of starting pixels. The researchers were careful to emphasize that the model doesn’t actually recreate the face of the person in the original, low-quality image. Instead, it generates new faces, filling in details that weren’t there before. For this reason, the model couldn’t be used for anything like security systems, as it wouldn’t be able to turn out-of-focus images into images of a real person.

Traditional super-resolution techniques operate by guessing which pixels are needed to turn a low-resolution image into a high-resolution one, based on images the model has learned from beforehand. Because the added pixels are guesses, not all of them will match their surrounding pixels, and certain regions of the image may look fuzzy or warped. The Duke researchers used a different approach to training their AI model. Their model takes a low-resolution image and adds detail to it over time, referencing high-resolution AI-generated faces as examples. It searches among the AI-generated faces and tries to find ones that resemble the target image when the generated faces are scaled down to the target’s size.
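The search-by-downscaling idea can be illustrated with a toy numpy sketch. Random arrays stand in for GAN-generated faces, and the "search" here is a brute-force comparison rather than the latent-space optimization the actual method uses:

```python
import numpy as np

rng = np.random.default_rng(0)

def downscale(img, factor):
    # Average-pool the image by `factor` in each dimension
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

# Toy stand-ins for GAN-generated high-resolution "faces" (8x8 patches)
candidates = [rng.random((8, 8)) for _ in range(50)]

# The low-resolution target we want to upscale (2x2), secretly derived
# from candidate 17 plus a little measurement noise
target_lr = downscale(candidates[17], 4) + rng.normal(0, 0.001, (2, 2))

# Search: pick the generated image whose downscaled version is
# closest to the low-resolution target
errors = [np.abs(downscale(c, 4) - target_lr).mean() for c in candidates]
best = candidates[int(np.argmin(errors))]
print(int(np.argmin(errors)))  # recovers index 17
```

The key property this demonstrates is that the output is always a plausible generated image, never a reconstruction of the true original, which is exactly why the researchers caution against security applications.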

The research team created a Generative Adversarial Network (GAN) model to handle the creation of new images. A GAN is actually two neural networks, both trained on the same dataset and pitted against one another. One network is responsible for generating fake images that mimic the real images in the training dataset, while the second network is responsible for distinguishing the fake images from the genuine ones. The first network is notified when its images have been identified as fake, and it improves until the fake images are, ideally, indistinguishable from the genuine ones.

The researchers have dubbed their super-resolution model PULSE, and the model consistently produces high-quality images even when given images so blurry that other super-resolution methods can’t create high-quality images from them. The model is even capable of producing realistic-looking faces from images where the features of the face are almost indistinguishable. For instance, given a face image at 16 x 16 resolution, it can create a 1024 x 1024 image. More than a million pixels are added in the process, filling in details like strands of hair, wrinkles, and even lighting. When the researchers had people rate 1,440 PULSE-generated images against images generated by other super-resolution techniques, the PULSE-generated images consistently scored best.
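The pixel counts are easy to verify:

```python
low = 16 * 16        # 256 pixels in the input
high = 1024 * 1024   # 1,048,576 pixels in the output

print(high - low)    # 1048320 pixels added, i.e. "more than a million"
print(high // low)   # 4096x more pixels overall (64x per dimension)
```

Scaling each side by 64 multiplies the total pixel count by 4,096, which is consistent with the "more than a million pixels" figure in the paragraph above.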

While the researchers used their model on images of people’s faces, the same techniques could be applied to almost any object. Low-resolution images of various objects could be used to create high-resolution images of them, opening up possible applications in a variety of industries and fields, including microscopy, satellite imagery, education, manufacturing, and medicine.


New Research Suggests Artificial Brains Could Benefit From Sleep


New research coming from Los Alamos National Laboratory suggests that artificial brains, like living brains, may benefit from periods of rest.

The research will be presented at the Women in Computer Vision Workshop in Seattle on June 14. 

Yijing Watkins is a Los Alamos National Laboratory computer scientist. 

“We study spiking neural networks, which are systems that learn much as living brains do,” said Watkins. “We were fascinated by the prospect of training a neuromorphic processor in a manner analogous to how humans and other biological systems learn from their environment during childhood development.”

Solving Instability in Network Simulations

Watkins and the team found that continuous periods of unsupervised learning led to instability in the network simulations. However, once the team exposed the networks to states analogous to the waves living brains experience during sleep, stability was restored.

“It was as though we were giving the neural networks the equivalent of a good night’s rest,” said Watkins.

The team made the discovery while working on developing neural networks based on how humans and other biological systems learn to see. They faced challenges stabilizing simulated neural networks undergoing unsupervised dictionary training, which involves classifying objects without previous examples to use for comparison.
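Dictionary learning in general can be sketched in a few lines of numpy. This toy uses a hard 1-sparse code and a simple atom update; it illustrates only the general technique, not the spiking-network training the Los Alamos team used:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy dictionary learning: represent inputs as sparse combinations
# of learned "atoms", then nudge the atoms toward the data
X = rng.random((20, 16))              # 20 input patches, 16 dimensions each
D = rng.random((8, 16))               # 8 dictionary atoms
D /= np.linalg.norm(D, axis=1, keepdims=True)

for _ in range(10):
    # Sparse coding step: keep only the strongest atom per input
    scores = X @ D.T                  # (20, 8) similarities
    best = np.argmax(scores, axis=1)
    codes = np.zeros_like(scores)
    codes[np.arange(len(X)), best] = scores[np.arange(len(X)), best]
    # Dictionary update step: move atoms toward the inputs they represent
    D += 0.1 * codes.T @ (X - codes @ D)
    D /= np.linalg.norm(D, axis=1, keepdims=True)

print(D.shape)  # (8, 16)
```

In this conventional formulation the per-step renormalization keeps the dynamics stable; as Kenyon notes below, biologically realistic spiking systems lack that kind of global regulating operation, which is where the instability arises.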

Garrett Kenyon is a computer scientist at Los Alamos and study coauthor.

“The issue of how to keep learning systems from becoming unstable really only arises when attempting to utilize biologically realistic, spiking neuromorphic processors or when trying to understand biology itself,” said Kenyon. “The vast majority of machine learning, deep learning, and AI researchers never encounter this issue because in the very artificial systems they study they have the luxury of performing global mathematical operations that have the effect of regulating the overall dynamical gain of the system.”

Sleep as a Last Resort Solution

According to the researchers, exposing the networks to an artificial analog of sleep was their last resort for stabilizing them. After experimenting with different types of noise, comparable to the static heard between stations on a radio, they got the best results from waves of Gaussian noise, which spans a wide range of frequencies and amplitudes.
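The kind of noise the source paper's title describes, sinusoidally-modulated Gaussian noise, can be sketched as follows. The function name and parameters are hypothetical; this only illustrates the general idea of Gaussian noise whose amplitude rises and falls with a slow wave:

```python
import numpy as np

rng = np.random.default_rng(1)

def sleep_noise(n_steps, n_neurons, freq=0.01, amplitude=1.0):
    # Gaussian noise whose amplitude is modulated by a slow sine wave,
    # a rough analog of the slow oscillations seen in deep sleep
    t = np.arange(n_steps)
    envelope = amplitude * (1 + np.sin(2 * np.pi * freq * t)) / 2
    return envelope[:, None] * rng.normal(size=(n_steps, n_neurons))

# One "rest period" of input for a toy 32-neuron network
noise = sleep_noise(1000, 32)
print(noise.shape)  # (1000, 32)
```

During the simulated rest period, input like this would be fed to the network in place of training data.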

The researchers came up with the hypothesis that during slow-wave sleep the noise mimics the input received by biological neurons. The results suggested that slow-wave sleep could play a role in ensuring that cortical neurons do not suffer from hallucinations and maintain their stability. 

The team will now work on implementing the algorithm on Intel’s Loihi neuromorphic chip, hoping that sleep will help it stably process information from a silicon retina camera in real time. If the research determines that artificial brains benefit from sleep, the same is likely true for androids and other intelligent machines.

Source: Using Sinusoidally-Modulated Noise as a Surrogate for Slow-Wave Sleep to Accomplish Stable Unsupervised Dictionary Learning in a Spike-Based Sparse Coding Model, CVPR Women in Computer Vision Workshop, 2020-06-14 (Seattle, Washington, United States)


AI Model Used To Map Dryness Of Forests, Predict Wildfires


A new deep learning model designed by researchers from Stanford University maps moisture levels across 12 western states in order to assist in the prediction of wildfires and help fire management teams get ahead of potentially destructive blazes.

Fire management teams aim to predict where the worst blazes might occur so that preventive measures, like prescribed burns, can be carried out. Predicting points of origin and spread patterns for wildfires requires information about fuel amounts and moisture levels in the target region. Collecting and analyzing this data at the speed required to be useful to wildfire management teams is difficult, but deep learning models could help automate these critical processes.

As Futurity recently reported, researchers from Stanford University collected climate data and designed a model intended to render detailed maps of moisture levels across 12 western states, including the Pacific Coast states, Texas, Wyoming, Montana, and the southwestern states. According to the researchers, although the model is still being refined, it is already capable of revealing areas at high risk for forest fires where the landscape is unusually dry.

The typical method of collecting data on fuel and moisture levels for a target region involves painstakingly comparing dried-out vegetation to moister vegetation. Specifically, researchers collect vegetation samples from trees and weigh them. The samples are then dried out and reweighed, and the dry and wet weights are compared to determine the amount of moisture in the vegetation. This process is long and complex, and it is only viable in certain areas and for some species of vegetation. However, the data collected over decades of this work has been used to build the National Fuel Moisture Database, comprising over 200,000 records. The fuel-moisture content of a region is well known to be linked to wildfire risk, though it is still unclear just how large a role it plays from one ecosystem, or one plant species, to another.
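The wet/dry comparison reduces to the standard fuel-moisture calculation, sketched here with a hypothetical function name and sample weights:

```python
def live_fuel_moisture(wet_grams, dry_grams):
    # Standard fuel-moisture calculation: the water lost during drying,
    # expressed as a percentage of the sample's dry weight
    return 100.0 * (wet_grams - dry_grams) / dry_grams

# A hypothetical sample weighing 150 g fresh and 100 g after drying
print(live_fuel_moisture(150.0, 100.0))  # 50.0 percent
```

Values above 100% are possible, since live vegetation can hold more water by weight than its dry matter.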

Krishna Rao, a PhD student in earth systems science at Stanford and the lead author of the new study, explained to Futurity that machine learning gives researchers the ability to test assumptions about the links between live fuel moisture and weather across different ecosystems. Rao and colleagues trained a recurrent neural network model on data from the National Fuel Moisture Database. The model was then tested by estimating fuel moisture levels from measurements collected by space-based sensors. The data included signals from synthetic aperture radar (SAR), microwave radar signals that penetrate to the surface, as well as visible light bouncing off the planet’s surface. The training and validation data consisted of three years of measurements, beginning in 2015, for approximately 240 sites across the western US.
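A recurrent model over such a time series can be sketched with a minimal, untrained numpy forward pass. The shapes and feature names are hypothetical stand-ins for the SAR and visible-light inputs described above, not the Stanford team's architecture:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy stand-in: 36 monthly observations for one site, each with 2 features
# (a microwave/SAR backscatter value and a visible-reflectance value)
seq = rng.random((36, 2))

# Randomly initialized recurrent network (untrained; shapes only)
hidden_size = 8
W_xh = rng.normal(0, 0.1, (2, hidden_size))
W_hh = rng.normal(0, 0.1, (hidden_size, hidden_size))
W_hy = rng.normal(0, 0.1, hidden_size)

h = np.zeros(hidden_size)
for x in seq:
    # Simple tanh recurrence carries information across the time series
    h = np.tanh(x @ W_xh + h @ W_hh)

# Final hidden state maps to a single fuel-moisture estimate
moisture_estimate = float(h @ W_hy)
print(moisture_estimate)
```

A trained version would fit these weights so that the output matches the ground-truth moisture records in the National Fuel Moisture Database.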

The researchers ran analyses on various types of land cover, including sparse vegetation, grasslands, shrublands, needleleaf evergreen forests, and broadleaf deciduous forests. The model’s predictions were most accurate, most reliably matching the National Fuel Moisture Database measurements, in shrubland regions. This is fortunate, as shrublands comprise approximately 45% of the ecosystems found throughout the US west. Shrublands, particularly chaparral shrublands, are often uniquely susceptible to fire, as seen in many of the fires that have burned across California in recent years.

The predictions generated by the model have been used to create an interactive map that fire management agencies could one day use to prioritize regions for fire control and discern other relevant patterns. The researchers believe that, with further training and refinement, the model could help improve fire prediction even further.

As Alexandra Konings, assistant professor of earth systems science at Stanford, explained to Futurity:

“Creating these maps was the first step in understanding how this new fuel moisture data might affect fire risk and predictions. Now we’re trying to really pin down the best ways to use it for improved fire prediction.”
