Emilio Ferrara, a computer scientist at the USC Information Sciences Institute (USC ISI), has new research showing that bots and fake accounts on social media, enabled by artificial intelligence, are evolving to better mimic human behavior in order to evade detection.
The research, done by Ferrara and a team that included Luca Luceri (Scuola Universitaria Professionale della Svizzera Italiana), Ashok Deb (USC ISI), and Silvia Giordano (Scuola Universitaria Professionale della Svizzera Italiana), was published in the journal First Monday. The team looked at the bots and fake accounts used during the 2018 US elections and compared their behavior to that observed during the 2016 US elections.
In total, the researchers studied about 250,000 active social media users, focusing on those who discussed the 2016 and 2018 elections. Of those 250,000 users, roughly 30,000 turned out to be bots.
The bots active in the 2016 elections mostly retweeted content, sending out large volumes of tweets pushing the same issue or message. By 2018, the bots had evolved along with human social media habits: they retweeted less and stopped sharing messages in high volume.
The 2018 bots became better at mimicking human behavior. The researchers found that bot operators were more likely to deploy multiple bots at the same time, coordinating them to create the impression of legitimate human engagement around an idea.
Around that time, humans began to engage through replies rather than retweets, and the bots followed suit, using replies to engage in dialogue and establish a voice on an issue or message. They also posted polls, replicating a strategy used by legitimate news outlets and pollsters; the researchers believe those polls were meant to build an image of reputability.
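To make the behavioral shift concrete, it can be summarized with a coarse feature such as the mix of retweets versus replies in an account's timeline. This is a hypothetical sketch, not the study's actual methodology; the function name and action labels are invented for the example.

```python
from collections import Counter

def activity_profile(actions):
    """Return the share of each action type in an account's timeline.

    A coarse behavioral fingerprint: 2016-era bots skewed heavily
    toward retweets, while 2018-era bots shifted toward replies.
    """
    counts = Counter(actions)
    total = sum(counts.values()) or 1
    return {kind: counts[kind] / total for kind in ("retweet", "reply", "tweet")}

# A retweet-heavy timeline, typical of the 2016-era bots described above.
print(activity_profile(["retweet"] * 8 + ["reply"] * 2))
# {'retweet': 0.8, 'reply': 0.2, 'tweet': 0.0}
```

A real classifier would combine many such features (timing, content similarity, network structure) rather than relying on one ratio.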
One example the researchers cited was a bot that posted a Twitter poll asking whether identification should be required to vote in federal elections, then asked Twitter users to vote and retweet the poll.
Emilio Ferrara, the lead author of the study, spoke about the new research and what it means.
“Our study further corroborates this idea that there is an arms race between bots and detection algorithms. As social media companies put more efforts to mitigate abuse and stifle automated accounts, bots evolve to mimic human strategies. Advancements in AI enable bots producing more human-like content. We need to devote more efforts to understand how bots evolve and how more sophisticated ones can be detected. With the upcoming 2020 US elections, the integrity of social media discourse is of paramount importance to allow a democratic process free of external influences.”
Big Implications for the Future
Fake social media accounts and bots have plagued elections for years now. The problems that unfolded during the 2016 elections seemed huge at the time, but they were small compared to what we are likely to see in the near future; artificial intelligence will only make things worse.
Bots will keep evolving to better mimic human behavior, largely thanks to artificial intelligence, until it becomes impossible to determine who is real and who is not. That has dramatic implications not only for the upcoming 2020 US elections, but for all future elections in the US and around the world.
Facebook Removes Accounts Generated By AI And Used To Perpetuate Conspiracy Theories
Social media companies have been aiming to control misinformation ahead of the 2020 election season in a variety of ways. While Twitter recently banned political ads from its platform, Facebook just announced that it has shuttered hundreds of fake accounts, groups, and pages. Many of these accounts seem to have profile images generated by artificial intelligence, and many have reportedly been used to disseminate misinformation and conspiracy theories.
As reported by Forbes, Facebook stated that the banned accounts and pages were linked to the “Beauty of Life” network, or “TheBL”, which Facebook said was linked to the conservative news publishing group the Epoch Times. According to Facebook, Epoch Media Group has spent almost $9.5 million on advertising through many of the now-banned pages and groups, with many of the posts containing pro-Trump conspiracy theories. While Epoch Media Group denies the charges, Facebook has stated that it worked closely with independent researchers such as Graphika and the Atlantic Council’s Digital Forensic Research Lab (DFRLab) to determine the nature of the accounts and pages before taking action against them.
According to Facebook, the accounts were removed for “coordinated inauthentic behavior”, purposefully misleading others about their identities, and for attempting political interference. According to CNET, Facebook said the accounts often posted content promoting specific political candidates and ideology, focusing on conservative elections, conservative policies, and strong support for President Trump.
Facebook published a 39-page report on the takedown covering many of its findings. One of the notable aspects of the report was that many of the banned accounts were created with the assistance of AI. Facebook’s researchers state in the report:
“Dozens of these fake accounts had profile pictures generated by artificial intelligence, in the first large-scale deployment of fake faces known to the authors of this report.”
According to the findings of the report, the AI-generated images weren’t perfect, with details often giving away their true nature. Contiguous elements of an image, like a person’s glasses or hair, were often asymmetrical. Furthermore, background details were often blurry and distorted. However, these elements may not be noticeable at first glance, especially given the small image sizes of profile photos in a Facebook comment chain. Many of the fake profiles also seemed to have fake profile information and even fake posts, potentially generated by AI.
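A toy illustration of the left/right inconsistency just described: compare an image to its horizontal mirror. Real detection pipelines rely on learned features rather than raw pixel differences, so this stdlib-only sketch (invented function name, images represented as nested lists of grayscale values) is purely illustrative.

```python
def asymmetry_score(rows):
    """Mean absolute difference between an image and its horizontal
    mirror, with the image given as a list of rows of grayscale values.
    Perfectly symmetric content scores 0; mismatched details (such as
    uneven glasses or hair) push the score up.
    """
    total = count = 0
    for row in rows:
        for left, right in zip(row, reversed(row)):
            total += abs(left - right)
            count += 1
    return total / count

print(asymmetry_score([[0, 1, 2, 2, 1, 0]]))       # 0.0 -- perfectly mirrored
print(asymmetry_score([[0, 10, 20, 30, 40, 50]]))  # 30.0 -- strongly skewed
```

As the article notes, such raw-pixel cues are subtle at profile-photo sizes, which is why behavioral signals proved more reliable than image analysis.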
As NBC reported, Facebook’s head of security policy, Nathaniel Gleicher, stated that the behavior of the accounts is what gave them away as inauthentic and that attempts to use fake images and profile info don’t help shield the accounts from discovery. Gleicher stated the AI-generated images were actually making the accounts more likely to get caught. Said Gleicher:
“We detected these accounts because they were engaged in fake behavior. Using AI-generated profiles as a way to make themselves look more real doesn’t actually help them. The biggest takeaway here is the egregiousness of the network in using fake identities… What’s new here is that this is purportedly a U.S.-based media company leveraging foreign actors posing as Americans to push political content. We’ve seen it a lot with state actors in the past.”
Nonetheless, the independent researchers from Graphika and the Atlantic Council stated that the ease with which the bad actors were able to create so many images and give their accounts perceived authenticity “is a concern”. Facebook and other social media companies are under pressure to step up efforts to combat the proliferation of political misinformation, a task that will require staying technologically ahead of those seeking to spread misinformation.
Before Facebook brought the accounts, pages, and groups down, the content they posted reached millions of people. Reportedly, at least 55 million accounts had followed one of the 89 banned pages, most of them non-US accounts. In total, around 600 accounts, 90 pages, and 150 groups were removed from Facebook, and approximately 70 accounts were removed from Instagram.
The news comes just as Facebook is kicking off a deepfake detection challenge, which will run through March 2020. Twitter has also recently banned almost 6,000 accounts it suspects originated in Saudi Arabia and posted purposefully misleading content.
Lego Finds An Inventive Way to Combine AI and Motion Tracking
Lego toy systems have been around for generations and are considered by many a way to stimulate the imagination. Quite a few users have at some point imagined having a Lego figure in their own image that they could use with their sets.
Realizing this, Lego has decided to try to make that dream come true. As Gizmodo reports, Lego will attempt it for anybody who visits its theme park opening in New York in 2020. To do this, the company will employ sophisticated motion tracking and neural network facial recognition.
The theme park, named Legoland New York Resort, will be located in Goshen, New York, about 60 miles northwest of New York City, and will open on July 4, 2020.
According to Mobile ID World, this possibility will be featured in a Lego Factory Adventure Ride “that takes park guests through a tour of a ‘factory’ showing them how the iconic little plastic bricks are made.”
“Using Holovis’ Holotrack technology, the Lego Factory Adventure Ride will feature a segment where park guests are turned into one of Lego’s iconic miniature figures. Holotrack leverages the use of the same artificial intelligence and deep learning technologies that have made deepfake videos possible, taking an individual’s image and translating it onto a screen. The guest’s mini-figures will mimic their movements and appearance, copying their hair, glasses, clothing, and facial expressions. The time it takes to render a guest into a Lego figure is reported to be about half a second.”
But this is certainly not the only AI development in which Lego is involved. Back in 2013, Lego Engineering used artificial intelligence and Lego building blocks to explore movement. In 2014, researchers and programmers began pairing the Lego Mindstorms EV3 robot with AI by connecting the brain of a worm to the sensors and motors of an EV3 robot through a computer program. AI development enthusiasts have been using the Mindstorms EV3 for a while now, particularly to develop robotic movement.
In 2004 and 2016, two research projects were published that explored how Lego could be used in teaching AI. The first employed Lego’s Mindstorms, while the latter, published by Western Washington University, discussed 12 years of experience teaching AI using Lego systems, including the EV3.
But the company’s biggest advancement in the field of AI came this August, when it announced that it will “begin trials of a new system to aid those with visual disabilities in following LEGO instructions.”
The system is called Audio & Braille Building Instructions and uses “AI to pair digital traditional-style visual instructions with verbal or tactile Braille directions, and was developed in collaboration with life-long LEGO fan Matthew Shifrin, who is blind.”
The system is in the early stages of development and currently supports “a handful of sets at present while the development team seeks feedback from users.” That feedback will be used to extend support to more sets “in the first half of 2020, with an eventual goal of supporting all-new LEGO product launches.” The official instructions created by the new AI-driven program will be available for free from legoaudioinstructions.com.
Deep Learning Is Re-Shaping The Broadcasting Industry
Deep learning has become a buzzword in many fields, and broadcasting organizations are among those that have started to explore the potential it has to offer, from news reporting to feature films and programs, both in cinemas and on TV.
As TechRadar reported, deep learning already presents numerous opportunities in video production, editing, and cataloging. But the technology is not limited to repetitive broadcasting tasks; it can also “enhance the creative process, improve video delivery and help preserve the massive video archives that many studios keep.”
As far as video generation and editing are concerned, Warner Bros. recently spent $25M on reshoots for ‘Justice League’, and part of that money went to digitally removing a mustache that star Henry Cavill had grown and could not shave due to an overlapping commitment. Deep learning could certainly be put to good use in such time-consuming and financially taxing post-production processes.
Even widely available solutions like Flo make it possible to use deep learning to create a video automatically just by describing your idea. The software then searches a library for potentially relevant clips and edits them together automatically.
Flo is also able to sort and classify videos, making it easier to find a particular part of the footage. Such technologies also make it possible to easily remove undesirable footage or make a personal recommendation list based on a video somebody has expressed an interest in.
Google has come up with a neural network “that can automatically separate the foreground and background of a video. What used to require a green screen can now be done with no special equipment.”
Deepfakes have already made a name for themselves, both good and bad, and their potential use in special effects has already reached quite a high level.
One area where deep learning will certainly make a difference is the restoration of classic films: according to the UCLA Film & Television Archive, nearly half of all films produced before 1950 have disappeared, and 90% of classic film prints are currently in very poor condition.
Colorizing black-and-white footage is still a controversial subject among filmmakers, but those who decide to go that route can now use Nvidia tools that significantly shorten the lengthy process: the artist colors only one frame of a scene, and deep learning does the rest. Google, meanwhile, has come up with a technology able to recreate part of a video-recorded scene based only on its start and end frames.
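A toy version of the propagation idea: given one hand-colored keyframe, assign each grayscale pixel in a later frame the color of the keyframe pixel with the nearest luminance. The actual tools use learned features rather than this naive lookup; everything here (names, pixel format) is an invented illustration.

```python
def propagate_colour(key_gray, key_colour, frame_gray):
    """Colourize `frame_gray` using one coloured keyframe: each pixel
    takes the colour of the keyframe pixel with the closest luminance."""
    # Build a luminance -> colour lookup table from the keyframe.
    lut = {}
    for g_row, c_row in zip(key_gray, key_colour):
        for g, c in zip(g_row, c_row):
            lut.setdefault(g, c)
    levels = sorted(lut)

    def nearest(g):
        return min(levels, key=lambda k: abs(k - g))

    return [[lut[nearest(g)] for g in row] for row in frame_gray]

# Keyframe: dark pixels were painted blue, bright pixels yellow.
key_gray = [[20, 230]]
key_colour = [[(0, 0, 128), (255, 255, 0)]]
print(propagate_colour(key_gray, key_colour, [[25, 210]]))
# [[(0, 0, 128), (255, 255, 0)]]
```

Luminance alone is of course ambiguous (a dark coat and dark sky share gray levels), which is exactly the ambiguity the learned approach resolves with spatial and semantic context.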
Face and object recognition is already in active use, from classifying a video collection or archive and searching for clips featuring a given actor or newsperson, to measuring an actor’s exact screen time in a video or film. TechRadar mentions that Sky News recently used facial recognition to identify famous faces at the royal wedding.
This technology is now becoming widely used in sports broadcasting, for example to “track the movements of the ball, or to identify other key elements to the game, such as the goal.” In soccer (football), such technology underpins VAR (Video Assistant Referee), which is used in many official tournaments and national leagues as a referee’s tool during the game.
Streaming is yet another aspect of broadcasting that can benefit from deep learning. Neural networks can recreate high-definition frames from low-definition input, letting viewers enjoy a better picture even when the original signal is not fully up to standard.
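To see what a super-resolution network is up against, here is the naive baseline it is meant to beat: nearest-neighbour upscaling simply repeats pixels, adding no new detail, whereas a learned model synthesizes plausible high-frequency detail. This is a stdlib-only sketch with invented names; frames are nested lists of pixel values.

```python
def upscale_nearest(frame, factor):
    """Nearest-neighbour upscaling: each source pixel becomes a
    factor x factor block in the output. The naive baseline that
    learned super-resolution improves on."""
    return [
        [row[x // factor] for x in range(len(row) * factor)]
        for row in frame
        for _ in range(factor)
    ]

print(upscale_nearest([[1, 2]], 2))
# [[1, 1, 2, 2], [1, 1, 2, 2]]
```

A trained network takes the same low-definition input but predicts edges and textures instead of blocky repeats, which is what makes the viewing experience closer to a native high-definition signal.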