An AI model created by researchers from the University of Sheffield can potentially determine which Twitter users will post disinformation before they actually do so. If the model proves reliable, it could complement existing methods of fighting disinformation on social media.
According to TechXplore, the study was led by researchers from the University of Sheffield’s Department of Computer Science, including Dr. Nikos Aletras and Yida Mu. The study was published in the journal PeerJ, and it details the methods used to predict whether a social media user is likely to spread disinformation by posting content from unreliable news sources.
The research team collected over 1 million publicly available tweets from more than 6,000 Twitter users. The team applied natural language processing techniques to prepare the data for training an AI model. The model was a binary classifier, labeling users as either likely or unlikely to share information from unreliable sources. After the model was trained on the data, it achieved approximately 79.7% classification accuracy.
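The paper itself does not include code, but the general shape of a user-level binary classifier like the one described can be sketched in miniature. The snippet below is a toy bag-of-words logistic regression in plain Python; the researchers' actual features and model are more sophisticated, and the example tweets, labels, and function names here are purely hypothetical.

```python
import math

def tokenize(text):
    return text.lower().split()

def train(tweets, labels, epochs=200, lr=0.5):
    """Fit a tiny logistic-regression classifier on bag-of-words counts."""
    vocab = {}
    for t in tweets:
        for w in tokenize(t):
            vocab.setdefault(w, len(vocab))

    def vec(t):
        x = [0.0] * len(vocab)
        for w in tokenize(t):
            x[vocab[w]] += 1.0
        return x

    X = [vec(t) for t in tweets]
    weights = [0.0] * len(vocab)
    bias = 0.0
    for _ in range(epochs):
        for x, y in zip(X, labels):
            z = bias + sum(wi * xi for wi, xi in zip(weights, x))
            p = 1.0 / (1.0 + math.exp(-z))
            grad = p - y  # gradient of the log-loss with respect to z
            bias -= lr * grad
            weights = [wi - lr * grad * xi for wi, xi in zip(weights, x)]
    return vocab, weights, bias

def predict(vocab, weights, bias, tweet):
    """Return 1 if the text looks like the 'unreliable sharer' class, else 0."""
    z = bias + sum(weights[vocab[w]] for w in tokenize(tweet) if w in vocab)
    return 1 if 1.0 / (1.0 + math.exp(-z)) >= 0.5 else 0

# Hypothetical toy data echoing the word patterns reported in the study
tweets = [
    "the liberal media and the government are lying",  # unreliable-sharer style
    "government media hide the truth about islam",
    "so excited for my birthday gonna celebrate",      # reliable-sharer style
    "in a great mood wanna see my friends",
]
labels = [1, 1, 0, 0]
vocab, weights, bias = train(tweets, labels)
```

In a real system the inputs would be aggregated per user rather than per tweet, since the task is predicting which users will share unreliable content.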
When analyzing the results of the model’s performance, the researchers found that users who heavily used impolite language and constantly tweeted about religion and politics were more likely to post information from unreliable sources. In particular, these users made heavy use of words like “liberal”, “media”, “government”, “Israel”, and “Islam”. Meanwhile, users who posted information from reliable sources tended to use words like “I’ll”, “gonna”, “wanna”, “mood”, “excited”, and “birthday”. Beyond this, they typically shared stories about their personal lives, such as interactions with friends, their emotions, or information about their hobbies.
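The kind of word-level contrast described above can be approximated with a smoothed log-odds ratio over per-class word counts. This is only an illustrative sketch, not the authors' analysis method; the `indicative_words` helper and the toy corpus are hypothetical.

```python
import math
from collections import Counter

def indicative_words(tweets, labels, smoothing=1.0):
    """Rank words by how strongly they indicate the 'unreliable sharer' class."""
    pos, neg = Counter(), Counter()
    for tweet, label in zip(tweets, labels):
        (pos if label == 1 else neg).update(tweet.lower().split())
    vocab = set(pos) | set(neg)
    n_pos = sum(pos.values()) + smoothing * len(vocab)
    n_neg = sum(neg.values()) + smoothing * len(vocab)
    # Smoothed log-odds: positive scores lean toward the unreliable class
    scores = {w: math.log((pos[w] + smoothing) / n_pos)
                 - math.log((neg[w] + smoothing) / n_neg)
              for w in vocab}
    return sorted(vocab, key=scores.get, reverse=True)

# Hypothetical toy corpus mirroring the vocabulary split reported in the study
ranked = indicative_words(
    ["liberal media government islam",
     "media government politics israel",
     "so excited for my birthday mood",
     "gonna wanna see friends on my birthday"],
    [1, 1, 0, 0],
)
```

On a corpus like the one the researchers collected, the top of such a ranking would surface the political vocabulary they report, while the bottom would surface the personal-life vocabulary.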
The study’s findings could help social media companies like Facebook, Reddit, and Twitter engineer new ways of combating the spread of misinformation online. The research could also help psychologists and social scientists better understand the behavior that leads to the rampant spread of misinformation throughout a social network.
As Aletras explained according to TechXplore, social media has transformed into one of the predominant ways that people get their news. Millions of users around the world get their news stories through Facebook and Twitter every day, but these platforms have also become tools for spreading disinformation throughout society. Aletras went on to explain that reliably identifying certain trends in user behavior could help with curbing disinformation; for instance, the “correlation between the use of impolite language and the spread of unreliable content can be attributed to high online political hostility.”
According to Mu, analyzing the behavior of users who share unreliable information can assist social media platforms by complementing existing fact-checking methods and by modeling disinformation at the user level. As Mu said via TechXplore:
“Studying and analyzing the behavior of users sharing content from unreliable news sources can help social media platforms to prevent the spread of fake news at the user level, complementing existing fact-checking methods that work on the post or the news source level.”
The research conducted by Aletras and Mu might be an instance of using AI to combat misinformation generated by AI. The past few months have seen an upswing in disinformation surrounding local and national politics, with much of the content generated and disseminated by AI algorithms. Deep neural networks have been employed to construct realistic photographs of and profiles for fake accounts that serve as disseminators of fake news. The research Aletras and Mu are engaged in could help social media companies identify which accounts are fake bot accounts created to spread harmful propaganda and misinformation.