
AI Engineers Develop Method That Can Detect Intent Of Those Spreading Misinformation


Dealing with misinformation in the digital age is a complex problem. Not only must misinformation be identified, tagged, and corrected, but the intent of the person making the claim must also be determined. Someone may spread misinformation unknowingly, or may simply be offering an opinion that is later reported as fact. Recently, a team of AI researchers and engineers at Dartmouth created a framework that can be used to derive opinion from “fake news” reports.

As ScienceDaily reports, the Dartmouth team’s study was recently published in the Journal of Experimental & Theoretical Artificial Intelligence. While previous studies have attempted to identify fake news and fight deception, this might be the first study that aimed to identify the intent of the speaker in a news piece. While a true story can be twisted into various deceptive forms, it’s important to distinguish whether or not deception was intended. The research team argues that intent matters when considering misinformation, as deception is only possible if there was intent to mislead. If an individual didn’t realize they were spreading misinformation or if they were just giving their opinion, there can’t be deception.

Eugene Santos Jr., an engineering professor at Dartmouth’s Thayer School of Engineering, explained to ScienceDaily why their model attempts to distinguish deceptive intent:

“Deceptive intent to mislead listeners on purpose poses a much larger threat than unintentional mistakes. To the best of our knowledge, our algorithm is the only method that detects deception and at the same time discriminates malicious acts from benign acts.”

In order to construct their model, the research team analyzed the features of deceptive reasoning. The resulting algorithm could distinguish intent to deceive from other forms of communication by focusing on discrepancies between a person’s past arguments and their current statements. Because the model measures how far a person deviates from their past arguments, it requires large amounts of data. The training data came from a survey in which over 100 people gave their opinions on controversial topics, along with reviews of 20 different hotels, consisting of 400 fictitious reviews and 800 real reviews.
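
The paper itself does not publish code, but the core idea of flagging a statement that diverges from a speaker’s earlier arguments can be illustrated with a minimal, hypothetical sketch. The example below uses TF-IDF cosine similarity as a crude stand-in for the team’s reasoning-based measure; the statements, threshold, and similarity metric are all assumptions for illustration, not the Dartmouth method.

```python
# Hypothetical sketch, NOT the Dartmouth model: score how far a new statement
# deviates from the same speaker's past arguments, using TF-IDF cosine
# similarity as a crude stand-in for the paper's reasoning-based measure.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

past_arguments = [
    "Vaccines are safe and their benefits far outweigh the risks.",
    "Public health guidance on vaccines is based on large clinical trials.",
]
current_statement = "Vaccines cause far more harm than any benefit they provide."

# Vectorize the speaker's history together with the new statement.
texts = past_arguments + [current_statement]
tfidf = TfidfVectorizer().fit_transform(texts)

# Average similarity between the new statement and each past argument.
n_past = len(past_arguments)
similarity = cosine_similarity(tfidf[n_past], tfidf[:n_past]).mean()

# A low score flags a discrepancy worth a closer look; the threshold is arbitrary.
DEVIATION_THRESHOLD = 0.2
deviates = similarity < DEVIATION_THRESHOLD
print(f"similarity to past arguments: {similarity:.2f}, flagged: {deviates}")
```

In this toy version, the “history” is just raw text; the published model reasons over a person’s stated arguments and opinions rather than surface word overlap.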

According to Santos, the framework developed by the researchers could be refined and applied by news organizations and readers to analyze the content of “fake news” articles. Readers could examine articles for the presence of opinions and determine for themselves whether a logical argument has been used. Santos also said that the team wants to examine the impact of misinformation and the ripple effects it has.

Popular culture often depicts non-verbal behaviors like facial expressions as signs that someone is lying, but the authors of the study note that these behavioral hints aren’t always reliable indicators of deception. Deqing Li, a co-author on the paper, explained that their research found models based on reasoning intent to be better indicators of lying than behavioral and verbal differences, noting that such models “are better at distinguishing intentional lies from other types of information distortion”.

The work of the Dartmouth researchers isn’t the only recent advancement when it comes to fighting misinformation with AI. News articles with clickbait titles often mask misinformation, for example by implying that one event happened when another actually occurred.

As reported by AINews, a team of researchers from Arizona State University and Penn State University collaborated to create an AI that can detect clickbait. The researchers asked people to write their own clickbait headlines and also wrote a program to generate clickbait headlines. Both sets of headlines were then used to train a model that could effectively detect clickbait, regardless of whether it was written by machines or people.
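
As a rough illustration of that setup (not the ASU/Penn State system itself), the sketch below trains a single text classifier on a mix of human-written and machine-generated clickbait alongside ordinary headlines. The headlines, model choice, and features are placeholder assumptions.

```python
# Illustrative sketch only, not the published system: one detector trained on
# a mix of human-written and machine-generated clickbait plus normal headlines.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny placeholder data; the real study used much larger collections.
human_clickbait = ["You won't believe what this dog did next"]
machine_clickbait = ["10 shocking secrets doctors don't want you to know"]
regular_headlines = [
    "City council approves new transit budget",
    "Researchers publish study on flu vaccine efficacy",
]

headlines = human_clickbait + machine_clickbait + regular_headlines
labels = [1] * (len(human_clickbait) + len(machine_clickbait)) + [0] * len(regular_headlines)

# TF-IDF features over word unigrams and bigrams feeding a logistic regression.
detector = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
detector.fit(headlines, labels)

print(detector.predict(["This one weird trick will change your life"]))  # label 1 = clickbait
```

The point of mixing the two sources is that the detector sees both human and machine phrasing during training, so it is not tied to the quirks of either.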

According to the researchers, their algorithm was around 14.5% more accurate at detecting clickbait titles than previous AIs. Dongwon Lee, the lead researcher on the project and an associate professor in the College of Information Sciences and Technology at Penn State, explained how the experiment demonstrates the utility of generating training data with an AI and feeding it back into the training pipeline.

“This result is quite interesting as we successfully demonstrated that machine-generated clickbait training data can be fed back into the training pipeline to train a wide variety of machine learning models to have improved performance,” explained Lee.