Bots Have Evolved to Mimic Human Behavior Better for 2020 Elections

Emilio Ferrara, a computer scientist from the USC Information Sciences Institute (USC ISI), has published new research showing that bots and fake accounts on social media are evolving to better mimic human behavior in order to evade detection, all enabled by artificial intelligence.

The research, conducted by Ferrara and a team that included Luca Luceri (Scuola Universitaria Professionale della Svizzera Italiana), Ashok Deb (USC ISI), and Silvia Giordano (Scuola Universitaria Professionale della Svizzera Italiana), was published in the journal First Monday. The researchers examined the bots and fake accounts active during the 2018 US elections and compared their behavior to that observed during the 2016 US elections.

In total, the team studied about 250,000 active social media users, focusing on those who discussed the 2016 and 2018 elections. Of those 250,000 users, the team found that 30,000 were bots.

The bots active in the 2016 elections mostly retweeted content and focused on pushing out large volumes of tweets about the same issue or message. By 2018, the bots had evolved along with human users: they retweeted less content and stopped sharing messages in high volume.

The bots in 2018 became better at mimicking human behavior. The researchers found that operators were more likely to deploy multiple bots at the same time to make it appear as though there was legitimate human engagement around an idea.

At that time, humans began to engage through replies rather than retweets, and the bots followed suit, using replies to join conversations and establish a voice on an issue or message. They also used polls, replicating a strategy favored by legitimate news outlets and pollsters, which the researchers believe was intended to build an image of reputability.

One example the researchers cited was a bot that posted a Twitter poll asking whether identification should be required to vote in federal elections. The bot then asked Twitter users to vote and to retweet the poll.

Emilio Ferrara, the lead author of the study, spoke about the new research and what it means. 

“Our study further corroborates this idea that there is an arms race between bots and detection algorithms. As social media companies put more efforts to mitigate abuse and stifle automated accounts, bots evolve to mimic human strategies. Advancements in AI enable bots producing more human-like content. We need to devote more efforts to understand how bots evolve and how more sophisticated ones can be detected. With the upcoming 2020 US elections, the integrity of social media discourse is of paramount importance to allow a democratic process free of external influences.”

 

Big Implications for the Future

Fake social media accounts and bots have plagued elections for years. The problems that unfolded during the 2016 elections seemed huge at the time, but they were small compared to what we are likely to see in the near future. With artificial intelligence, the problem will only get worse.

Bots will keep evolving to better mimic human behavior, largely thanks to artificial intelligence. It may reach the point where it is impossible to determine who is real and who is not. That has dramatic implications not only for the upcoming 2020 US elections, but for all future elections in the US and around the world.

 

Alex McFarland is a historian and journalist covering the newest developments in artificial intelligence.

What Are Deepfakes?

As deepfakes become easier to make and more prolific, more attention is paid to them. Deepfakes have become the focal point of discussions involving AI ethics, misinformation, openness of information and the internet, and regulation. It pays to be informed regarding deepfakes, and to have an intuitive understanding of what deepfakes are. This article will clarify the definition of a deepfake, examine their use cases, discuss how deepfakes can be detected, and examine the implications of deepfakes for society.

What Is A Deepfake?

Before discussing deepfakes further, it is helpful to take some time and clarify what “deepfakes” actually are. There is a substantial amount of confusion surrounding the term, and it is often misapplied to any falsified media, regardless of whether or not it is a genuine deepfake. To qualify as a deepfake, the faked media in question must be generated with a machine-learning system, specifically a deep neural network.

The key ingredient of deepfakes is machine learning, which has made it possible for computers to generate video and audio automatically, relatively quickly and easily. A deep neural network is trained on footage of a real person so that the network learns how that person looks and moves under the target environmental conditions. The trained network is then applied to images of another individual and augmented with additional computer graphics techniques to combine the new person with the original footage. An encoder algorithm is used to determine the similarities between the original face and the target face. Once the common features of the faces have been isolated, a second AI algorithm called a decoder is used. The decoder examines the encoded (compressed) images and reconstructs them based on the features in the original images. Two decoders are used, one for the original subject’s face and a second for the target person’s face. For the swap to be made, the decoder trained on images of person Y is fed the encoded images of person X. The result is that person Y’s face is reconstructed over person X’s facial expressions and orientation.
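
To make the encoder/decoder idea above concrete, here is a minimal sketch in PyTorch of a shared encoder paired with one decoder per identity. The image size, layer widths, and training details are illustrative assumptions rather than the recipe used by any particular deepfake tool.

# Minimal sketch of the shared-encoder / two-decoder face-swap idea (assumptions throughout).
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compresses a 64x64 RGB face crop into a shared latent code."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs a face from the latent code; one decoder is trained per identity."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 8 -> 16
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 8, 8))

encoder = Encoder()
decoder_x = Decoder()  # would be trained only on faces of person X
decoder_y = Decoder()  # would be trained only on faces of person Y

# After training, the swap: encode a frame of person X, decode with person Y's decoder,
# so Y's face is rendered with X's expression and orientation.
frame_of_x = torch.rand(1, 3, 64, 64)  # stand-in for a real video frame
swapped = decoder_y(encoder(frame_of_x))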

Currently, it still takes a fair amount of time for a deepfake to be made. The creator of the fake has to spend a long time manually adjusting parameters of the model, as suboptimal parameters will lead to noticeable imperfections and image glitches that give away the fake’s true nature.

Although it’s frequently assumed that most deepfakes are made with a type of neural network called a generative adversarial network (GAN), many (perhaps most) deepfakes created these days do not rely on GANs. While GANs did play a prominent role in the creation of early deepfakes,  most deepfake videos are created through alternative methods, according to Siwei Lyu from SUNY Buffalo.

Training a GAN requires a disproportionately large amount of data, and GANs often take much longer to render an image than other image-generation techniques. GANs are also better suited to generating static images than video, because they have difficulty maintaining consistency from frame to frame. It is much more common to use an encoder and multiple decoders to create deepfakes.
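
For comparison, the following is a bare-bones sketch of a single GAN training step in PyTorch. It only illustrates why the approach is data- and compute-hungry and operates image by image, with no built-in notion of frame-to-frame consistency; the tiny architectures and hyperparameters are assumptions for the example, not a working deepfake generator.

# Toy GAN training step: a generator learns to fool a discriminator (illustrative only).
import torch
import torch.nn as nn

latent_dim = 100
generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, 64 * 64 * 3), nn.Tanh(),   # produces one flattened 64x64 RGB image
)
discriminator = nn.Sequential(
    nn.Linear(64 * 64 * 3, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),                         # real-vs-fake logit
)
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real_batch = torch.rand(16, 64 * 64 * 3) * 2 - 1  # stand-in for real images scaled to [-1, 1]
noise = torch.randn(16, latent_dim)

# Discriminator step: learn to separate real images from generated ones.
fake_batch = generator(noise).detach()
d_loss = (bce(discriminator(real_batch), torch.ones(16, 1))
          + bce(discriminator(fake_batch), torch.zeros(16, 1)))
opt_d.zero_grad()
d_loss.backward()
opt_d.step()

# Generator step: learn to fool the discriminator into labeling fakes as real.
g_loss = bce(discriminator(generator(noise)), torch.ones(16, 1))
opt_g.zero_grad()
g_loss.backward()
opt_g.step()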

What Are Deepfakes Used For?

Many of the deepfakes found online are pornographic in nature. According to research by the AI firm Deeptrace, approximately 95% of a sample of roughly 15,000 deepfake videos collected in September 2019 were pornographic. A troubling implication is that as the technology becomes easier to use, incidents of fake revenge porn could rise.

However, not all deepfakes are pornographic in nature. There are more legitimate uses for deepfake technology. Audio deepfake technology could help people whose voices have been damaged or lost through illness or injury communicate in a voice that sounds like their own. Deepfakes can also be used to hide the faces of people in sensitive, potentially dangerous situations, while still allowing their lips and expressions to be read. Deepfake technology can potentially be used to improve the dubbing of foreign-language films, aid in the repair of old and damaged media, and even create new styles of art.

Non-Video Deepfakes

While most people think of fake videos when they hear the term “deepfake”, fake videos are by no means the only kind of fake media produced with deepfake technology. Deepfake technology is used to create photo and audio fakes as well. As previously mentioned, GANs are frequently used to generate fake images, and many fake LinkedIn and Facebook profiles are thought to have profile images generated with deepfake algorithms.

It’s possible to create audio deepfakes as well. Deep neural networks are trained to produce voice clones/voice skins of different people, including celebrities and politicians. One famous example of an audio Deepfake is when the AI company Dessa made use of an AI model, supported by non-AI algorithms, to recreate the voice of the podcast host Joe Rogan.

How To Spot Deepfakes

As deepfakes become more sophisticated, distinguishing them from genuine media will become tougher and tougher. Currently, there are a few telltale signs people can look for to determine whether a video may be a deepfake, such as poor lip-syncing, unnatural movement, flickering around the edge of the face, and warping of fine details like hair, teeth, or reflections. Other potential signs include patches of the video that are noticeably lower quality than the rest, and irregular blinking of the eyes.
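
As an illustration of one of these cues, the sketch below estimates blink frequency from per-frame eye landmarks using the well-known eye-aspect-ratio heuristic. It assumes six (x, y) landmark points per eye (as in the common 68-point facial-landmark convention) have already been extracted by some face tracker; on its own this is a weak signal, not a reliable detector.

# Blink counting via eye aspect ratio (EAR); landmark extraction is assumed to happen elsewhere.
import numpy as np

def eye_aspect_ratio(eye):
    """eye: array of six (x, y) landmark points around one eye."""
    eye = np.asarray(eye, dtype=float)
    vertical = np.linalg.norm(eye[1] - eye[5]) + np.linalg.norm(eye[2] - eye[4])
    horizontal = np.linalg.norm(eye[0] - eye[3])
    return vertical / (2.0 * horizontal)

def count_blinks(per_frame_eyes, closed_threshold=0.2):
    """Count open-to-closed transitions across frames; an unusually low or high
    blink rate over a long clip can be one (weak) hint that footage is synthetic."""
    blinks, eye_closed = 0, False
    for eye in per_frame_eyes:
        ear = eye_aspect_ratio(eye)
        if ear < closed_threshold and not eye_closed:
            blinks += 1
            eye_closed = True
        elif ear >= closed_threshold:
            eye_closed = False
    return blinks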

While these signs may help one spot a deepfake at the moment, as deepfake technology improves the only option for reliable deepfake detection might be other types of AI trained to distinguish fakes from real media.

Artificial intelligence companies, including many of the large tech companies, are researching methods of detecting deepfakes. Last December, a deepfake detection challenge was launched, supported by three tech giants: Amazon, Facebook, and Microsoft. Research teams from around the world worked on methods of detecting deepfakes, competing to develop the best detection methods. Other groups, like a joint team of researchers from Google and Jigsaw, are working on a type of “face forensics” that can detect videos that have been altered, making their datasets open source and encouraging others to develop deepfake detection methods. The aforementioned Dessa has worked on refining deepfake detection techniques, trying to ensure that the detection models work on deepfake videos found in the wild (out on the internet) rather than just on pre-composed training and testing datasets, like the open-source dataset Google provided.
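
At their core, many learned detectors of this kind are binary real-vs-fake classifiers over individual face crops or frames. The sketch below shows that basic shape using a ResNet-18 backbone in PyTorch; the backbone choice, input size, and dummy data are assumptions for illustration, and competition-grade detectors are considerably more elaborate.

# Frame-level real-vs-fake classifier sketch (assumed architecture and dummy data).
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18()                       # backbone; pretrained weights optional
model.fc = nn.Linear(model.fc.in_features, 2)   # two classes: real, fake

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(frames, labels):
    """frames: (N, 3, 224, 224) tensor of face crops; labels: 0 = real, 1 = fake."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(frames), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Example with dummy tensors standing in for frames drawn from a labeled dataset.
loss = train_step(torch.rand(8, 3, 224, 224), torch.randint(0, 2, (8,)))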

Other strategies are also being investigated to deal with the proliferation of deepfakes. For instance, one strategy is to check videos for concordance with other sources of information: searches can be done for video of the same event taken from other angles, or background details of the video (such as weather patterns and locations) can be checked for inconsistencies. Beyond this, a blockchain-based online ledger system could register videos when they are first created, recording their original audio and images so that derivative videos can always be checked for manipulation.
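
As a toy illustration of the registration idea, the sketch below fingerprints a video file with a cryptographic hash when it is first published and checks later copies against that record. A plain dictionary stands in for the distributed ledger, and note that any legitimate re-encoding would also change the hash, which is one reason real systems are more involved.

# Toy "register-then-verify" ledger using file hashes; a real system would use a blockchain.
import hashlib

ledger = {}

def fingerprint(path, chunk_size=1 << 20):
    """SHA-256 over the raw bytes of the file; any edit or re-encoding changes it."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def register_original(video_id, path):
    """Record the fingerprint of a video when it is first published."""
    ledger[video_id] = fingerprint(path)

def is_unmodified(video_id, path):
    """True only if the file matches the fingerprint registered for the original."""
    return ledger.get(video_id) == fingerprint(path)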

Ultimately, it’s important that reliable methods of detecting deepfakes are created and that these detection methods keep up with the newest advances in deepfake technology. While it is hard to know exactly what the effects of deepfakes will be, if there are not reliable methods of detecting deepfakes (and other forms of fake media), misinformation could potentially run rampant and degrade people’s trust in society and institutions.

Implications of Deepfakes

What are the dangers of allowing deepfakes to proliferate unchecked?

One of the biggest problems that deepfakes create currently is nonconsensual pornography, engineered by combining people’s faces with pornographic videos and images. AI ethicists are worried that deepfakes will see more use in the creation of fake revenge porn. Beyond this, deepfakes could be used to bully and damage the reputation of just about anyone, as they could be used to place people into controversial and compromising scenarios.

Companies and cybersecurity specialists have expressed concern about the use of deepfakes to facilitate scams, fraud, and extortion. Allegedly, deepfake audio has been used to convince employees of a company to transfer money to scammers.

It’s possible that deepfakes could have harmful effects even beyond those listed above. Deepfakes could potentially erode people’s trust in media generally, and make it difficult for people to distinguish between real news and fake news. If many videos on the web are fake, it becomes easier for governments, companies, and other entities to cast doubt on legitimate controversies and unethical practices.

When it comes to governments, deepfakes may even pose threats to the operation of democracy. Democracy requires that citizens be able to make informed decisions about politicians based on reliable information, and misinformation undermines that process. For example, the president of Gabon, Ali Bongo, appeared in a video attempting to reassure the Gabonese citizenry. The president had been assumed to be unwell for a long period of time, and his sudden appearance in a likely fake video kicked off an attempted coup. President Donald Trump claimed that an audio recording of him bragging about grabbing women by the genitals was fake, despite also describing it as “locker room talk”. Prince Andrew also claimed that an image provided by Emily Maitlis’ attorney was fake, though the attorney insisted on its authenticity.

Ultimately, while there are legitimate uses for deepfake technology, there are many potential harms that can arise from the misuse of that technology. For that reason, it’s extremely important that methods to determine the authenticity of media be created and maintained.


Early Warning System for Disinformation Developed with AI

Researchers at the University of Notre Dame are working on a project to combat disinformation online, including media campaigns to incite violence, sow discord, and meddle in democratic elections. 

The team of researchers relied on artificial intelligence (AI) to develop an early warning system that can identify manipulated images, deepfake videos, and disinformation online. It is a scalable, automated system that uses content-based image retrieval and applies computer-vision-based techniques to identify political memes across multiple social media networks.
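
To give a flavor of content-based image retrieval, the sketch below reduces each image to a 64-bit “average hash” and flags pairs of images whose hashes are within a small Hamming distance, which catches recirculated or lightly edited copies of the same meme. The Notre Dame system is far more sophisticated; this only illustrates the retrieval idea, and the threshold here is an arbitrary assumption.

# Near-duplicate meme retrieval via perceptual (average) hashing; illustrative only.
import numpy as np
from PIL import Image

def average_hash(path, size=8):
    """Downscale to 8x8 grayscale and threshold at the mean brightness (64-bit hash)."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = np.asarray(img, dtype=float)
    return (pixels > pixels.mean()).flatten()

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return int(np.count_nonzero(a != b))

def near_duplicates(paths, max_distance=8):
    """Return pairs of images whose hashes differ in at most max_distance bits."""
    hashes = [(p, average_hash(p)) for p in paths]
    pairs = []
    for i in range(len(hashes)):
        for j in range(i + 1, len(hashes)):
            if hamming(hashes[i][1], hashes[j][1]) <= max_distance:
                pairs.append((hashes[i][0], hashes[j][0]))
    return pairs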

Tim Weninger is an associate professor in the Department of Computer Science and Engineering at Notre Dame. 

“Memes are easy to create and even easier to share,” said Weninger. “When it comes to political memes, these can be used to help get out the vote, but they can also be used to spread inaccurate information and cause harm.”

Weninger collaborated with Walter Scheirer, an assistant professor in the Department of Computer Science and Engineering at Notre Dame, along with members of the research team. 

2019 General Election in Indonesia

The team tested the system on the 2019 general election in Indonesia, collecting over two million images and other pieces of content related to the election from various sources on Twitter and Instagram.

In the election, the left-leaning, centrist incumbent beat the conservative, populist candidate. Following the election, violent protests erupted in which eight people died and hundreds more were injured. The team’s study found that there were spontaneous and coordinated campaigns launched on social media with the goal of influencing the election and inciting violence. 

The coordinated campaigns used manipulated images, which projected false claims and misrepresented certain events. News stories and memes were fabricated with the use of legitimate news logos, with the goal of provoking citizens and supporters from both parties. 

The Rest of the World

The 2019 general election in Indonesia is representative of what can happen elsewhere in the world. Disinformation, especially when spread through social media, can threaten democratic processes.

The research team at Notre Dame included digital forensics experts and specialists in peace studies. According to the team, the system is being developed in order to flag manipulated content, with the goal of preventing violence and warning journalists or election monitors of potential threats. 

The system is still in the research and development phase, but it will eventually be scalable and personalized for users to monitor content. Some of the biggest challenges in developing the system include determining the best way to scale up data ingestion and processing. According to Scheirer, the system is currently being evaluated with the next step being a transition to operational use. 

There is a chance that the system can be used to monitor the 2020 general election in the United States, which is expected to see massive amounts of disinformation and manipulation.

“The disinformation age is here,” said Scheirer. “A deepfake replacing actors in a popular film might seem fun and lighthearted but imagine a video or a meme created for the sole purpose of pitting one world leader against another — saying words they didn’t actually say. Imagine how quickly that content could be shared and spread across platforms. Consider the consequences of those actions.”

 


Facebook Removes Accounts Generated By AI And Used To Perpetuate Conspiracy Theories

Social media companies have been aiming to control misinformation ahead of the 2020 election season in a variety of ways. While Twitter recently banned political ads from its platform, Facebook just announced that it has shuttered hundreds of fake accounts, groups, and pages. Many of these accounts seem to have profile images generated by artificial intelligence, and many have reportedly been used to disseminate misinformation and conspiracy theories.

As reported by Forbes, Facebook stated that the banned accounts and pages were linked to the “Beauty of Life” network, or “TheBL”, which Facebook said was tied to the conservative news publishing group the Epoch Times. According to Facebook, Epoch Media Group has spent almost $9.5 million on advertising through many of the now-banned pages and groups, with many of the posts containing pro-Trump conspiracy theories. While Epoch Media Group denies the charges, Facebook has stated that it worked closely with independent researchers such as Graphika and the Atlantic Council’s Digital Forensic Research Lab (DFRLab) to determine the nature of the accounts and pages before taking action against them.

According to Facebook, the accounts were removed for “coordinated inauthentic behavior”, purposefully misleading others about their identities, and for attempting political interference.  According to CNET, Facebook said the accounts often posted content promoting specific political candidates and ideology, focusing on conservative elections, conservative policies, and strong support for President Trump.

Facebook published a 39-page report on the event covering many of their findings. One of the notable aspects of Facebook’s report was that many of the banned accounts were created with the assistance of AI. Facebook’s researchers state in the report:

“Dozens of these fake accounts had profile pictures generated by artificial intelligence, in the first large-scale deployment of fake faces known to the authors of this report.”

According to the findings of the report, the AI-generated images weren’t perfect, with details often giving away their true nature. Contiguous elements of an image, like a person’s glasses or hair, were often asymmetrical. Furthermore, background details were often blurry and distorted. However, these elements may not be noticeable at first glance, especially given the small image sizes of profile photos in a Facebook comment chain. Many of the fake profiles also seemed to have fake profile information and even fake posts, potentially generated by AI.

As NBC reported, Facebook’s head of security policy, Nathaniel Gleicher, stated that the behavior of the accounts is what gave them away as inauthentic and that attempts to use fake images and profile info don’t help shield the accounts from discovery. Gleicher stated the AI-generated images were actually making the accounts more likely to get caught. Said Gleicher:

“We detected these accounts because they were engaged in fake behavior. Using AI-generated profiles as a way to make themselves look more real doesn’t actually help them. The biggest takeaway here is the egregiousness of the network in using fake identities… What’s new here is that this is purportedly a U.S.-based media company leveraging foreign actors posing as Americans to push political content. We’ve seen it a lot with state actors in the past.”

Nonetheless, the independent researchers from Graphika and the Atlantic Council stated that the ease with which the bad actors were able to create so many images and give their accounts perceived authenticity “is a concern”. Facebook and other social media companies are under pressure to step up efforts to combat the proliferation of political misinformation, a task that will require staying technologically ahead of those seeking to spread misinformation.

Before Facebook brought the accounts, pages, and groups down, the content they posted reached millions of people. Reportedly, at least 55 million accounts had followed at least one of the 89 banned pages, with most of the followers being non-US accounts. In total, around 600 accounts, 90 pages, and 150 groups were removed from Facebook, and approximately 70 accounts were also removed from Instagram.

The news comes just as Facebook is kicking off a deepfake detection challenge, which will run through March of 2020. Twitter has also recently banned almost 6,000 accounts it suspects originated in Saudi Arabia and posted purposefully misleading content.
