
Deepfakes

Bots Have Evolved to Mimic Human Behavior Better for 2020 Elections


Emilio Ferrara, a computer scientist at the USC Information Sciences Institute (USC ISI), has published new research showing that bots and fake accounts on social media, enabled by artificial intelligence, are evolving to better mimic human behavior in order to evade detection.

The research, conducted by Ferrara and a team that included Luca Luceri (Scuola Universitaria Professionale della Svizzera Italiana), Ashok Deb (USC ISI), and Silvia Giordano (Scuola Universitaria Professionale della Svizzera Italiana), was published in the journal First Monday. The team examined the bots and fake accounts that were active during the 2018 US elections and compared their behavior to that observed during the 2016 US elections.

In total, the researchers studied about 250,000 active social media users who discussed the 2016 and 2018 elections. Of those 250,000 users, roughly 30,000 were identified as bots.

During the 2016 elections, bots mostly retweeted content and focused on sending out large volumes of tweets about the same issue or message. By 2018, the bots had evolved just as human users had: they retweeted less content and stopped pushing those messages out in high volume.

The 2018 bots were better at mimicking human behavior. The researchers found that operators were more likely to deploy multiple bots at the same time, making the activity appear to be legitimate human engagement around an idea.

Around the same time, human users began to engage through replies rather than retweets, and the bots followed suit, using replies to enter into dialogue and establish a voice on an issue or message. They also posted polls, replicating a strategy used by legitimate news outlets and pollsters; the researchers believe those polls were used to build an image of reputability.

One example the researchers cited was a bot that posted a Twitter poll about federal elections, asking whether voters should be required to present identification. The bot then asked Twitter users to vote and retweet the poll.
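To illustrate the kind of behavioral shift the researchers describe, the sketch below computes per-account retweet and reply ratios from a small list of hypothetical tweet records. The field names, accounts, and interpretation are illustrative assumptions, not the study's actual methodology.

```python
from collections import defaultdict

# Hypothetical tweet records: (account_id, kind), where kind is
# "retweet", "reply", or "original". These names are illustrative only.
tweets = [
    ("acct_1", "retweet"), ("acct_1", "retweet"), ("acct_1", "retweet"),
    ("acct_2", "reply"), ("acct_2", "original"), ("acct_2", "reply"),
]

counts = defaultdict(lambda: defaultdict(int))
for account, kind in tweets:
    counts[account][kind] += 1

for account, kinds in counts.items():
    total = sum(kinds.values())
    retweet_ratio = kinds["retweet"] / total
    reply_ratio = kinds["reply"] / total
    # A 2016-style bot would show a very high retweet ratio; a 2018-style
    # bot looks more human, replying and engaging in dialogue more often.
    print(f"{account}: retweets {retweet_ratio:.0%}, replies {reply_ratio:.0%}")
```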

Emilio Ferrara, the lead author of the study, spoke about the new research and what it means:

“Our study further corroborates this idea that there is an arms race between bots and detection algorithms. As social media companies put more efforts to mitigate abuse and stifle automated accounts, bots evolve to mimic human strategies. Advancements in AI enable bots producing more human-like content. We need to devote more efforts to understand how bots evolve and how more sophisticated ones can be detected. With the upcoming 2020 US elections, the integrity of social media discourse is of paramount importance to allow a democratic process free of external influences.”

 

Big Implications for the Future

Fake social media accounts and bots have been a problem during elections for years now. The issues that unfolded during the 2016 elections seemed huge at the time, but they were small compared to what we are likely to see in the near future. With artificial intelligence, the problem will only get worse.

Bots are going to keep evolving to get better at mimicking human behavior, largely thanks to artificial intelligence. It may reach a point where it is impossible to determine who is real and who is not. This means there will be dramatic implications not only for the upcoming 2020 US elections, but for all future elections in the US and around the world.

 


Alex McFarland is a historian and journalist covering the newest developments in artificial intelligence.

Deep Learning

Deep Learning Is Re-Shaping The Broadcasting Industry


Deep learning has become a buzzword in many fields, and broadcasting organizations are among those starting to explore all the potential it has to offer, from news reporting to feature films and programs, both in cinemas and on TV.

As TechRadar reported, the number of opportunities deep learning presents in video production, editing, and cataloging is already quite high. But as the report notes, the technology is not limited to repetitive tasks in broadcasting, since it can also “enhance the creative process, improve video delivery and help preserve the massive video archives that many studios keep.”

As far as video generation and editing are concerned, TechRadar mentions that Warner Bros. recently had to spend $25M on reshoots for ‘Justice League’, and part of that money went to digitally removing a mustache that star Henry Cavill had grown and could not shave due to an overlapping commitment. Deep learning could certainly be put to good use in such time-consuming and financially taxing post-production processes.

Even widely available solutions like Flo make it possible to use deep learning to create a video automatically just by describing your idea. The software searches a library for relevant clips and edits them together automatically.

Flo is also able to sort and classify videos, making it easier to find a particular part of the footage. Such technologies also make it possible to easily remove undesirable footage or build a personal recommendation list based on a video someone has expressed interest in.
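Flo’s internals are not public, but a minimal sketch of the general idea, matching a text description against clip metadata and joining the hits with the moviepy library, might look like the following. The file names and tags are hypothetical, and a real system would use learned text and video embeddings rather than simple keyword overlap.

```python
from moviepy.editor import VideoFileClip, concatenate_videoclips

# Hypothetical clip library: file path -> descriptive tags.
library = {
    "clips/beach_sunset.mp4": {"beach", "sunset", "ocean"},
    "clips/city_night.mp4": {"city", "night", "lights"},
    "clips/surfing.mp4": {"beach", "ocean", "surfing"},
}

def assemble(description: str, output_path: str) -> None:
    """Pick clips whose tags overlap the description and join them in order."""
    words = set(description.lower().split())
    selected = [path for path, tags in library.items() if tags & words]
    if not selected:
        raise ValueError("No clips match the description")
    clips = [VideoFileClip(path) for path in selected]
    final = concatenate_videoclips(clips)
    final.write_videofile(output_path)

assemble("a day at the beach with surfing and a sunset", "rough_cut.mp4")
```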

Google has come up with a neural network “that can automatically separate the foreground and background of a video. What used to require a green screen can now be done with no special equipment.”
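Google’s network itself has not been released, but the same green-screen-free effect can be approximated with an off-the-shelf segmentation model. The sketch below uses a pretrained DeepLabV3 model from torchvision to mask the person in a single frame and composite it onto a plain background; the frame file name is hypothetical.

```python
import numpy as np
import torch
from PIL import Image
from torchvision import transforms
from torchvision.models.segmentation import deeplabv3_resnet101

# Generic pretrained segmentation model standing in for Google's network;
# class index 15 is "person" in the Pascal VOC label set it was trained on.
model = deeplabv3_resnet101(pretrained=True).eval()

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

frame = Image.open("frame.jpg").convert("RGB")  # hypothetical video frame
with torch.no_grad():
    out = model(preprocess(frame).unsqueeze(0))["out"][0]
mask = (out.argmax(0).cpu().numpy() == 15)  # True where a person is detected

# Composite the detected foreground onto a plain black background.
original = np.array(frame)
composite = np.where(mask[..., None], original, np.zeros_like(original))
Image.fromarray(composite).save("composited.jpg")
```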

Deepfakes have already made a name for themselves, for better and for worse, but their potential use in special effects has already reached quite a high level.

One area where deep learning will certainly make a difference is the restoration of classic films: according to the UCLA Film & Television Archive, nearly half of all films produced prior to 1950 have disappeared, and 90% of classic film prints are currently in very poor condition.

Colorizing black and white footage is still a controversial subject among filmmakers, but those who decide to go that route can now use Nvidia tools that significantly shorten this lengthy process: the artist colors only one frame of a scene, and deep learning does the rest. Google, for its part, has developed a technology that can recreate part of a recorded scene based only on its start and end frames.

Face and object recognition are already in active use, from classifying a video collection or archive and searching for clips featuring a given actor or newsperson, to measuring an actor’s exact screen time in a video or film. TechRadar mentions that Sky News recently used facial recognition to identify famous faces at the royal wedding.
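As a rough illustration of the screen-time use case (not Sky News’ actual pipeline), the sketch below samples frames from a video with OpenCV and tallies those in which the face_recognition library matches a reference photo. The file names and the one-frame-per-25 sampling rate are assumptions.

```python
import cv2
import face_recognition

# Hypothetical inputs: a reference photo of the actor and a video file.
reference = face_recognition.load_image_file("actor_reference.jpg")
reference_encoding = face_recognition.face_encodings(reference)[0]

video = cv2.VideoCapture("episode.mp4")
fps = video.get(cv2.CAP_PROP_FPS)
matched_frames = 0
frame_index = 0

while True:
    ok, frame = video.read()
    if not ok:
        break
    if frame_index % 25 == 0:  # sample roughly one frame per second
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        encodings = face_recognition.face_encodings(rgb)
        if any(face_recognition.compare_faces([reference_encoding], e)[0]
               for e in encodings):
            matched_frames += 1
    frame_index += 1

video.release()
# Each sampled frame stands in for the 25 frames around it.
print(f"Approximate screen time: {matched_frames * 25 / fps:.1f} seconds")
```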

This technology is now becoming widely used in sports broadcasting to, say, “track the movements of the ball, or to identify other key elements to the game, such as the goal.” In soccer (football), related technology, known as VAR (video assistant referee), is already used in many official tournaments and national leagues as a referee’s tool during the game.

Streaming is yet another aspect of broadcasting that can benefit from deep learning. Neural networks can recreate high-definition frames from low-definition input, giving the viewer a better picture even when the original signal is not fully up to standard.
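The sketch below shows the general shape of such a super-resolution network: a few convolutions followed by sub-pixel upscaling, in the style of ESPCN. It is a toy, untrained model intended only to demonstrate the transformation, not any broadcaster’s actual system.

```python
import torch
import torch.nn as nn

class TinySuperResolution(nn.Module):
    """Toy ESPCN-style network: convolutions followed by sub-pixel upscaling."""
    def __init__(self, upscale: int = 2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv2d(64, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3 * upscale ** 2, kernel_size=3, padding=1),
            nn.PixelShuffle(upscale),  # rearranges channels into spatial detail
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.body(x)

# Untrained weights, so the output is not a real enhancement; this only
# demonstrates the shape transformation a trained model would perform.
model = TinySuperResolution(upscale=2)
low_res = torch.rand(1, 3, 360, 640)   # a hypothetical 640x360 frame
high_res = model(low_res)
print(high_res.shape)  # torch.Size([1, 3, 720, 1280])
```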

 


Deepfakes

Cybersecurity Experts Defend from AI Cyberattacks


Not everybody set to use the advantages of artificial intelligence has good intentions. Cybersecurity is certainly one of those fields where both those trying to defend a system and those trying to attack it are using the most advanced technologies available.

In its analysis of the subject, the World Economic Forum (WEF) cites an example from March 2019, when “the CEO of a large energy firm sanctioned the urgent transfer of €220,000 to what he believed to be the account of a new Eastern European supplier after a call he believed to be with the CEO of his parent company. Within hours, the money had passed through a network of accounts in Latin America to suspected criminals who had used artificial intelligence (AI) to convincingly mimic the voice of the CEO.” For its part, Forbes cites an example in which “two hospitals in Ohio and West Virginia turned patients away due to a ransomware attack that led to a system failure. The hospitals could not process any emergency patient requests. Hence, they sent incoming patients to nearby hospitals.”

This cybersecurity threat is certainly the reason why Equifax and the World Economic Forum convened the inaugural Future Series: Cybercrime 2025. Global cybersecurity experts from academia, government, law enforcement, and the private sector are set to meet in Atlanta, Georgia to review the capabilities AI can give them in the field of cybersecurity. The Capgemini Research Institute has also published a report concluding that building up cybersecurity defenses with AI is imperative for practically all organizations.

In its analysis, the WEF identified four challenges in preventing the use of AI in cybercrime. The first is the increasing sophistication of attackers: the volume of attacks will rise, and “AI-enabled technology may also enhance attackers’ abilities to preserve both their anonymity and distance from their victims in an environment where attributing and investigating crimes is already challenging.”

The second is the asymmetry of goals: while defenders must have a 100% success rate, attackers need to be successful only once. “While AI and automation are reducing variability and cost, improving scale and limiting errors, attackers may also use AI to tip the balance.”

The third is the fact that as “organizations continue to grow, so do the size and complexity of their technology and data estates, meaning attackers have more surfaces to explore and exploit. To stay ahead of attackers, organizations can deploy advanced technologies such as AI and automation to help create defensible ‘choke points’ rather than spreading efforts equally across the entire environment.”

The fourth is achieving the right balance between the possible risks and the actual “operational enablement” of defenders. The WEF is of the opinion that “security teams can use a risk-based approach, by establishing governance processes and materiality thresholds, informing operational leaders of their cybersecurity posture, and identifying initiatives to continuously improve it.” Through its Future Series: Cybercrime 2025 program, the WEF and its partners are seeking “to identify the effective actions needed to mitigate and overcome these risks.”

For its part, Forbes has identified four steps for the direct use of AI in cybersecurity, prepared by contributor Naveen Joshi and presented in the graphic below:

[Graphic: Naveen Joshi’s four steps for applying AI in cybersecurity]

In any case, it is certain that both defenders and attackers in the field of cybersecurity will keep on developing their use of artificial intelligence as the technology itself reaches a new stage of complexity.

 


Deepfakes

Expert Says “Perfectly Real” DeepFakes Will Be Here In 6 Months


The impressive but controversial DeepFakes, images and video manipulated or generated by deep neural networks, are likely to get both more impressive and more controversial in the near future, according to Hao Li, the Director of the Vision and Graphics Lab at the University of Southern California. Li is a computer vision and DeepFakes expert, and in a recent interview with CNBC he said that “perfectly real” Deepfakes are likely to arrive within half a year.

Li explained that most DeepFakes are still recognizable as fake to the naked eye, and even the more convincing ones still require substantial effort on the part of the creator to appear realistic. However, Li is convinced that within six months, DeepFakes that look perfectly real are likely to appear as the algorithms become more sophisticated.

Li initially thought it would take two to three years for extremely convincing DeepFakes to become commonplace, a prediction he made at a recent conference hosted at the Massachusetts Institute of Technology. However, Li revised his timeline after the release of the Chinese app Zao and other recent developments in DeepFake technology. He explained to CNBC that the methods needed to create realistic DeepFakes are more or less the same as those currently in use, and that the main missing ingredient is more training data.

Li and his fellow researchers have been hard at work on DeepFake detection technology, anticipating the arrival of extremely convincing DeepFakes. Li and colleagues such as Hany Farid of the University of California, Berkeley have experimented with state-of-the-art DeepFake algorithms to understand how the technology that creates them works.

Li explained to CNBC:

“If you want to be able to detect deepfakes, you have to also see what the limits are. If you need to build A.I. frameworks that are capable of detecting things that are extremely real, those have to be trained using these types of technologies, so in some ways, it’s impossible to detect those if you don’t know how they work.”

Li and his colleagues are invested in creating tools to detect DeepFakes, in acknowledgment of the potential issues and dangers the technology poses. They are far from the only group of AI researchers concerned about the possible effects of DeepFakes and interested in creating countermeasures.

Recently, Facebook started a joint partnership with MIT, Microsoft and the University of Oxford to create the DeepFake Detection Challenge, which aims to create tools that can be used to detect when images or videos have been altered. These tools will be open source and usable by companies, media organizations, and governments. Meanwhile, researchers from the University of Southern California’s Information Sciences Institute recently created a series of algorithms that could distinguish fake videos with around 96% accuracy.
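The USC ISI detectors themselves are not reproduced here, but a minimal sketch of the common approach, fine-tuning a pretrained CNN to classify frames as real or fake, might look like this. The backbone choice, label convention, and dummy data are illustrative assumptions, not the published algorithms.

```python
import torch
import torch.nn as nn
from torchvision import models

# Generic frame-level classifier: not USC ISI's actual detector, just the
# common pattern of fine-tuning a pretrained CNN for a real-vs-fake decision.
backbone = models.resnet18(pretrained=True)
backbone.fc = nn.Linear(backbone.fc.in_features, 2)  # 0 = real, 1 = fake

optimizer = torch.optim.Adam(backbone.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def train_step(frames: torch.Tensor, labels: torch.Tensor) -> float:
    """One optimization step on a batch of preprocessed frames (N, 3, 224, 224)."""
    backbone.train()
    optimizer.zero_grad()
    loss = criterion(backbone(frames), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Dummy batch standing in for preprocessed video frames and their labels.
frames = torch.rand(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))
print(train_step(frames, labels))
```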

However, Li also explained that the issue with DeepFakes is the way they can be misused, and not the technology itself. Li noted several legitimate possible uses for DeepFake technology, including in the entertainment and fashion industries.

DeepFake techniques have also been used to replicate the facial expressions of people whose faces are obscured in images. Researchers used Generative Adversarial Networks to create an entirely new face with the same expression as the subject of the original image. The techniques, developed at the Norwegian University of Science and Technology, could help render facial expressions during interviews with people who need privacy, such as whistleblowers: someone else could let their face be used as a stand-in for the person who needs anonymity, while that person’s facial expressions could still be read.

As the sophistication of Deepfake technology increases, the legitimate use cases for Deepfakes will increase as well. However, the danger will also increase, and for this reason, the work on detecting DeepFakes done by Li and others grows even more important.
