Emilio Ferrara, a computer scientist at the USC Information Sciences Institute (USC ISI), has published new research showing that bots and fake accounts on social media are evolving to better mimic human behavior in order to evade detection, all enabled by artificial intelligence.
The research, conducted by Ferrara and a team that included Luca Luceri (Scuola Universitaria Professionale della Svizzera Italiana), Ashok Deb (USC ISI), and Silvia Giordano (Scuola Universitaria Professionale della Svizzera Italiana), was published in the journal First Monday. The team examined the bots and fake accounts that were active during the 2018 US elections and compared their behavior to that seen during the 2016 US elections.
In total, the researchers studied about 250,000 active social media users who discussed the 2016 and 2018 elections. Of those 250,000 users, the team found that 30,000 were bots.
The bots in the 2016 elections mostly retweeted content and focused on sending out large amounts of tweets regarding the same issue or message. The bots in 2018 evolved just as humans did when it came to social media. Bots began to retweet less content, and they stopped sharing those messages in high volume.
The bots in 2018 became better at mimicking human behavior. The researchers found that bot operators were more likely to coordinate multiple accounts at the same time, using them together to create the appearance of legitimate human engagement around an idea.
At that time, humans began to engage through replies rather than retweets. The bots followed this as well. They used replies to become engaged in dialogue and establish a voice on an issue or message. They also used polls in order to replicate a strategy used by legitimate news outlets and pollsters. The researchers believe that those polls were used to build an image of being reputable.
One of the examples that the researchers used was a bot that posted an online Twitter poll about federal elections. The poll asked if it should be required to present identification when voting in these elections. The bot then asked Twitter users to vote and retweet the poll.
Emilio Ferrara, the lead author of the study, spoke about the new research and what it means.
“Our study further corroborates this idea that there is an arms race between bots and detection algorithms. As social media companies put more efforts to mitigate abuse and stifle automated accounts, bots evolve to mimic human strategies. Advancements in AI enable bots producing more human-like content. We need to devote more efforts to understand how bots evolve and how more sophisticated ones can be detected. With the upcoming 2020 US elections, the integrity of social media discourse is of paramount importance to allow a democratic process free of external influences.”
Big Implications for the Future
Fake social media accounts and bots have plagued elections for years. The manipulation that unfolded during the 2016 elections seemed enormous at the time, but it was small compared to what we are likely to see in the near future. With artificial intelligence, the problem will only get worse.
Bots are going to keep evolving to better mimic human behavior, largely thanks to artificial intelligence. It may reach the point where it is practically impossible to determine who is real and who is not. This has dramatic implications not only for the upcoming 2020 US elections, but for all future elections there and around the world.
Early Warning System for Disinformation Developed with AI
Researchers at the University of Notre Dame are working on a project to combat disinformation online, including media campaigns to incite violence, sow discord, and meddle in democratic elections.
The team of researchers relied on artificial intelligence (AI) to develop an early warning system. The system will be able to identify manipulated images, deepfake videos, and disinformation online. It is a scalable, automated system that uses content-based image retrieval. It can then apply computer-vision based techniques to identify political memes on multiple social media networks.
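The Notre Dame team's actual pipeline has not been published in detail, but the core idea of content-based image retrieval can be illustrated with a minimal sketch: compute a compact perceptual hash for each image, then flag images whose hashes are close, so that near-duplicate copies of a meme can be grouped even after resizing or light editing. Everything below (the difference-hash scheme, the toy 4x4 "images", the distance threshold) is a hypothetical illustration, not the researchers' system.

```python
# Minimal sketch of content-based image retrieval via perceptual hashing.
# Illustrative only; the Notre Dame system's actual methods are not public.

def dhash(pixels):
    """Difference hash: one bit per horizontally adjacent pixel pair.
    `pixels` is a 2D list of grayscale values (rows x cols)."""
    bits = []
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits.append(1 if left > right else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two equal-length hashes."""
    return sum(x != y for x, y in zip(a, b))

def near_duplicates(query, corpus, max_distance=2):
    """Return indices of corpus images whose hash is close to the query's."""
    qh = dhash(query)
    return [i for i, img in enumerate(corpus)
            if hamming(qh, dhash(img)) <= max_distance]

# Toy 4x4 "images": a meme, a slightly brightened copy, and an unrelated image.
meme   = [[10, 20, 30, 40], [40, 30, 20, 10], [10, 20, 30, 40], [40, 30, 20, 10]]
edited = [[11, 21, 31, 41], [41, 31, 21, 11], [11, 21, 31, 41], [41, 31, 21, 11]]
other  = [[90,  5, 80,  5], [ 5, 80,  5, 90], [90,  5, 80,  5], [ 5, 80,  5, 90]]

print(near_duplicates(meme, [edited, other]))  # prints [0]: only the edited copy matches
```

Because the hash depends on relative brightness rather than exact pixel values, the brightened copy hashes identically to the original, which is what makes this family of techniques useful for tracking a meme as it spreads across platforms.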
Tim Weninger is an associate professor in the Department of Computer Science and Engineering at Notre Dame.
“Memes are easy to create and even easier to share,” said Weninger. “When it comes to political memes, these can be used to help get out the vote, but they can also be used to spread inaccurate information and cause harm.”
Weninger collaborated with Walter Scheirer, an assistant professor in the Department of Computer Science and Engineering at Notre Dame, along with members of the research team.
2019 General Election in Indonesia
The team tested the system on the 2019 general election in Indonesia, collecting more than two million images and related content from various sources on Twitter and Instagram.
In the election, the left-leaning, centrist incumbent beat the conservative, populist candidate. Following the election, violent protests erupted in which eight people died and hundreds more were injured. The team’s study found that there were spontaneous and coordinated campaigns launched on social media with the goal of influencing the election and inciting violence.
The coordinated campaigns used manipulated images, which projected false claims and misrepresented certain events. News stories and memes were fabricated with the use of legitimate news logos, with the goal of provoking citizens and supporters from both parties.
The Rest of the World
The 2019 general election in Indonesia is a representation of what can happen in the rest of the world. Disinformation, especially spread through social media, can threaten democratic processes.
The research team at Notre Dame included digital forensics experts and specialists in peace studies. According to the team, the system is being developed in order to flag manipulated content, with the goal of preventing violence and warning journalists or election monitors of potential threats.
The system is still in the research and development phase, but it will eventually be scalable and personalized for users to monitor content. Some of the biggest challenges in developing the system include determining the best way to scale up data ingestion and processing. According to Scheirer, the system is currently being evaluated with the next step being a transition to operational use.
There is a chance that the system can be used to monitor the 2020 general election in the United States, which is expected to see massive amounts of disinformation and manipulation.
“The disinformation age is here,” said Scheirer. “A deepfake replacing actors in a popular film might seem fun and lighthearted but imagine a video or a meme created for the sole purpose of pitting one world leader against another — saying words they didn’t actually say. Imagine how quickly that content could be shared and spread across platforms. Consider the consequences of those actions.”
Facebook Removes Accounts Generated By AI And Used To Perpetuate Conspiracy Theories
Social media companies have been aiming to control misinformation ahead of the 2020 election season in a variety of ways. While Twitter recently banned political ads from its platform, Facebook just announced that it has shuttered hundreds of fake accounts, groups, and pages. Many of these accounts seem to have profile images generated by artificial intelligence, and many have reportedly been used to disseminate misinformation and conspiracy theories.
As reported by Forbes, Facebook stated that the banned accounts and pages were linked to the “Beauty of Life” network, or “TheBL”, which Facebook said was linked to the conservative news publishing group the Epoch Times. According to Facebook, Epoch Media Group had spent almost $9.5 million on advertising through many of the now-banned pages and groups, with many of the posts containing pro-Trump conspiracy theories. While Epoch Media Group denies the charges, Facebook has stated that it worked closely with independent researchers such as Graphika and the Atlantic Council’s Digital Forensic Research Lab (DFRLab) to determine the nature of the accounts and pages before taking action against them.
According to Facebook, the accounts were removed for “coordinated inauthentic behavior”, purposefully misleading others about their identities, and for attempting political interference. According to CNET, Facebook said the accounts often posted content promoting specific political candidates and ideology, focusing on conservative elections, conservative policies, and strong support for President Trump.
Facebook published a 39-page report on the event covering many of their findings. One of the notable aspects of Facebook’s report was that many of the banned accounts were created with the assistance of AI. Facebook’s researchers state in the report:
“Dozens of these fake accounts had profile pictures generated by artificial intelligence, in the first large-scale deployment of fake faces known to the authors of this report.”
According to the findings of the report, the AI-generated images weren’t perfect, with details often giving away their true nature. Contiguous elements of an image, like a person’s glasses or hair, were often asymmetrical. Furthermore, background details were often blurry and distorted. However, these elements may not be noticeable at first glance, especially given the small image sizes of profile photos in a Facebook comment chain. Many of the fake profiles also seemed to have fake profile information and even fake posts, potentially generated by AI.
As NBC reported, Facebook’s head of security policy, Nathaniel Gleicher, stated that the behavior of the accounts is what gave them away as inauthentic and that attempts to use fake images and profile info don’t help shield the accounts from discovery. Gleicher stated the AI-generated images were actually making the accounts more likely to get caught. Said Gleicher:
“We detected these accounts because they were engaged in fake behavior. Using AI-generated profiles as a way to make themselves look more real doesn’t actually help them. The biggest takeaway here is the egregiousness of the network in using fake identities… What’s new here is that this is purportedly a U.S.-based media company leveraging foreign actors posing as Americans to push political content. We’ve seen it a lot with state actors in the past.”
Nonetheless, the independent researchers from Graphika and the Atlantic Council stated that the ease with which the bad actors were able to create so many images and give their accounts perceived authenticity “is a concern”. Facebook and other social media companies are under pressure to step up efforts to combat the proliferation of political misinformation, a task that will require staying technologically ahead of those seeking to spread misinformation.
Before Facebook brought the accounts, pages, and groups down, the content they posted reached millions of people. Reportedly, at least 55 million accounts had followed one of the 89 different banned pages, most of them non-US accounts. In total, around 600 accounts, 90 pages, and 150 groups were removed from Facebook, along with approximately 70 accounts from Instagram.
The news comes just as Facebook is kicking off a deepfake detection challenge, which will run through March of 2020. Twitter has also recently banned almost 6,000 accounts it suspects originated in Saudi Arabia and posted purposefully misleading content.
Lego Finds An Inventive Way to Combine AI and Motion Tracking
Lego toy systems have been around for generations and have been considered by many as a way to stimulate the imagination. Quite a few users have at some point imagined having a Lego figure in their own image they could use with their sets.
Realizing that fact, Lego has decided to try to make that dream come true. As Gizmodo reports, Lego will try to realize that dream for anybody who visits its theme park opening in New York in 2020. To do this, the company will employ sophisticated motion tracking and neural-network facial recognition.
The theme park, named Legoland New York Resort, will be located in Goshen, New York, about 60 miles northwest of New York City, and will open on July 4, 2020.
According to Mobile ID World, this possibility will be featured in a Lego Factory Adventure Ride “that takes park guests through a tour of a “factory” showing them how the iconic little plastic bricks are made.”
“Using Holovis’ Holotrack technology, the Lego Factory Adventure Ride will feature a segment where park guests are turned into one of Lego’s iconic miniature figures. Holotrack leverages the use of the same artificial intelligence and deep learning technologies that have made deepfake videos possible, taking an individual’s image and translating it onto a screen. The guest’s mini-figures will mimic their movements and appearance, copying their hair, glasses, clothing, and facial expressions. The time it takes to render a guest into a Lego figure is reported to be about half a second.”
But this is certainly not the only AI development in which Lego is involved. Back in 2013, Lego Engineering used artificial intelligence and Lego building blocks to explore movement. In 2014, researchers and programmers began using the Lego Mindstorms EV3 robot with AI, connecting the brain of a worm to the sensors and motors of an EV3 robot via a computer program. AI development enthusiasts have been using Mindstorms EV3 for a while now, particularly in efforts to develop robotic movement.
In 2004 and 2016, two research projects were published on how Lego could be used in teaching AI. The first employed Lego’s Mindstorms, while the second, published by Western Washington University, discussed 12 years of experience teaching AI using Lego systems, including EV3.
But the company’s biggest advancement in the field of AI came in August of this year, when it announced that it will “begin trials of a new system to aid those with visual disabilities in following LEGO instructions.”
The system is called Audio & Braille Building Instructions, and uses “AI to pair digital traditional-style visual instructions with verbal or tactile Braille directions, and was developed in collaboration with life-long LEGO fan Matthew Shifrin, who is blind.”
The system is in the early stages of development and currently supports “a handful of sets at present while the development team seeks feedback from users.” That feedback will be used to extend support to more sets “in the first half of 2020, with an eventual goal of supporting all-new LEGO product launches.” The official instructions created by the new AI-driven program will be available for free from legoaudioinstructions.com.