Bots Have Evolved to Mimic Human Behavior Better for 2020 Elections

Emilio Ferrara, a computer scientist at the USC Information Sciences Institute (USC ISI), has published new research showing that bots and fake accounts on social media are evolving to better mimic human behavior in order to evade detection, all enabled by artificial intelligence.

The research, conducted by Ferrara and a team that included Luca Luceri (Scuola Universitaria Professionale della Svizzera Italiana), Ashok Deb (USC ISI), and Silvia Giordano (Scuola Universitaria Professionale della Svizzera Italiana), was published in the journal First Monday. The team examined the bots and fake accounts active during the 2018 US elections and compared their behavior to that seen during the 2016 US elections.

In total, the researchers studied about 250,000 active social media users who discussed the 2016 and 2018 elections. Of those 250,000 users, roughly 30,000 turned out to be bots.

During the 2016 elections, bots mostly retweeted content, pushing out large volumes of tweets around the same issue or message. By 2018, the bots had evolved along with human social media habits: they retweeted less and stopped blasting out the same messages in high volume.
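
To make the contrast concrete, here is a minimal Python sketch of the kind of per-account behavioral features a detection system might compute. This is an illustration, not the study's actual method; the Tweet structure, field names, and example profiles are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Tweet:
    is_retweet: bool
    is_reply: bool

def activity_features(tweets, days_observed):
    """Compute simple behavioral ratios for one account."""
    n = len(tweets)
    if n == 0 or days_observed <= 0:
        return {"tweets_per_day": 0.0, "retweet_frac": 0.0, "reply_frac": 0.0}
    return {
        "tweets_per_day": n / days_observed,
        "retweet_frac": sum(t.is_retweet for t in tweets) / n,
        "reply_frac": sum(t.is_reply for t in tweets) / n,
    }

# Hypothetical 2016-style bot: high volume, almost all retweets.
bot_2016 = [Tweet(is_retweet=True, is_reply=False)] * 90 + \
           [Tweet(is_retweet=False, is_reply=False)] * 10

# Hypothetical 2018-style bot: lower volume, far more replies.
bot_2018 = [Tweet(is_retweet=True, is_reply=False)] * 15 + \
           [Tweet(is_retweet=False, is_reply=True)] * 25 + \
           [Tweet(is_retweet=False, is_reply=False)] * 10

print(activity_features(bot_2016, days_observed=7))
print(activity_features(bot_2018, days_observed=7))
```

On these illustrative profiles, the 2016-style account stands out for its high volume and retweet fraction, while the 2018-style account's lower output and higher reply fraction look far more human — which is exactly why the researchers say detection is getting harder.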

The 2018 bots were better at mimicking human behavior. The researchers found that operators were more likely to run multiple bots at the same time, creating the appearance of genuine human engagement around an idea.

Around the same time, humans began engaging through replies rather than retweets, and the bots followed suit, using replies to join conversations and establish a voice on an issue or message. They also posted polls, replicating a strategy used by legitimate news outlets and pollsters; the researchers believe the polls were meant to project an image of credibility.

One example the researchers cited was a bot that posted a Twitter poll asking whether identification should be required to vote in federal elections. The bot then asked Twitter users to vote and to retweet the poll.

Emilio Ferrara, the study's lead author, commented on the findings and their implications:

“Our study further corroborates this idea that there is an arms race between bots and detection algorithms. As social media companies put more efforts to mitigate abuse and stifle automated accounts, bots evolve to mimic human strategies. Advancements in AI enable bots producing more human-like content. We need to devote more efforts to understand how bots evolve and how more sophisticated ones can be detected. With the upcoming 2020 US elections, the integrity of social media discourse is of paramount importance to allow a democratic process free of external influences.”

Big Implications for the Future

Fake accounts and bots have plagued social media during elections for years. The manipulation that unfolded during the 2016 elections seemed enormous at the time, but it may prove small compared to what we'll see in the near future as artificial intelligence makes these operations more sophisticated.

Bots will keep evolving to better mimic human behavior, largely thanks to artificial intelligence, and it may eventually become nearly impossible to tell who is real and who is not. That carries dramatic implications not only for the upcoming 2020 US elections but for all future elections in the US and around the world.

Alex McFarland is an AI journalist and writer exploring the latest developments in artificial intelligence. He has collaborated with numerous AI startups and publications worldwide.