Ethics

Facebook Removes Accounts Generated By AI And Used To Perpetuate Conspiracy Theories

Social media companies have been aiming to control misinformation ahead of the 2020 election season in a variety of ways. While Twitter recently banned political ads from its platform, Facebook just announced that it has shuttered hundreds of fake accounts, groups, and pages. Many of these accounts seem to have profile images generated by artificial intelligence, and many have reportedly been used to disseminate misinformation and conspiracy theories.

As reported by Forbes, Facebook stated that the banned accounts and pages were linked to the “Beauty of Life” network, or “TheBL”, which Facebook tied to the conservative news publisher Epoch Media Group, parent company of the Epoch Times. According to Facebook, Epoch Media Group spent almost $9.5 million on advertising through many of the now-banned pages and groups, with many of the posts containing pro-Trump conspiracy theories. While Epoch Media Group denies the charges, Facebook has stated that it worked closely with independent researchers such as Graphika and the Atlantic Council’s Digital Forensic Research Lab (DFRLab) to determine the nature of the accounts and pages before taking action against them.

According to Facebook, the accounts were removed for “coordinated inauthentic behavior”: purposefully misleading others about their identities and attempting political interference. According to CNET, Facebook said the accounts often posted content promoting specific political candidates and ideologies, focusing on elections, conservative policies, and strong support for President Trump.

Facebook published a 39-page report on the event covering many of their findings. One of the notable aspects of Facebook’s report was that many of the banned accounts were created with the assistance of AI. Facebook’s researchers state in the report:

“Dozens of these fake accounts had profile pictures generated by artificial intelligence, in the first large-scale deployment of fake faces known to the authors of this report.”

According to the findings of the report, the AI-generated images weren’t perfect, with details often giving away their true nature. Contiguous elements of an image, like a person’s glasses or hair, were often asymmetrical. Furthermore, background details were often blurry and distorted. However, these elements may not be noticeable at first glance, especially given the small image sizes of profile photos in a Facebook comment chain. Many of the fake profiles also seemed to have fake profile information and even fake posts, potentially generated by AI.

As NBC reported, Facebook’s head of security policy, Nathaniel Gleicher, stated that the behavior of the accounts is what gave them away as inauthentic and that attempts to use fake images and profile info don’t help shield the accounts from discovery. Gleicher stated the AI-generated images were actually making the accounts more likely to get caught. Said Gleicher:

“We detected these accounts because they were engaged in fake behavior. Using AI-generated profiles as a way to make themselves look more real doesn’t actually help them. The biggest takeaway here is the egregiousness of the network in using fake identities… What’s new here is that this is purportedly a U.S.-based media company leveraging foreign actors posing as Americans to push political content. We’ve seen it a lot with state actors in the past.”

Nonetheless, the independent researchers from Graphika and the Atlantic Council stated that the ease with which the bad actors were able to create so many images and give their accounts perceived authenticity “is a concern”. Facebook and other social media companies are under pressure to step up efforts to combat the proliferation of political misinformation, a task that will require staying technologically ahead of those seeking to spread misinformation.

Before Facebook took the accounts, pages, and groups down, the content they posted reached millions of people. Reportedly, at least 55 million accounts followed at least one of the 89 banned pages, with most of the followers being non-US accounts. In total, around 600 accounts, 90 pages, and 150 groups were removed from Facebook, along with approximately 70 accounts on Instagram.

The news comes just as Facebook is kicking off a deepfake detection challenge, which will run through March of 2020. Twitter has also recently banned almost 6,000 accounts it suspects originated in Saudi Arabia and posted purposefully misleading content.

Deepfakes

Early Warning System for Disinformation Developed with AI

Researchers at the University of Notre Dame are working on a project to combat disinformation online, including media campaigns to incite violence, sow discord, and meddle in democratic elections. 

The team of researchers relied on artificial intelligence (AI) to develop an early warning system that can identify manipulated images, deepfake videos, and disinformation online. It is a scalable, automated system that uses content-based image retrieval and applies computer vision techniques to identify political memes across multiple social media networks.
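The article does not describe the system's internals, but content-based image retrieval is commonly built on perceptual hashing, which lets near-duplicate variants of the same meme be matched at scale even after recompression or resizing. The sketch below is purely illustrative (the `average_hash` helper and the synthetic pixel grids are assumptions, not the Notre Dame code):

```python
# Illustrative sketch of content-based image retrieval via average hashing.
# A small grayscale image is reduced to a bit string; near-duplicates land
# a short Hamming distance apart, unrelated images far apart.

def average_hash(pixels):
    """Hash a 2-D list of 0-255 grayscale values into a bit string."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return ''.join('1' if p > mean else '0' for p in flat)

def hamming(h1, h2):
    """Count differing bits between two equal-length hashes."""
    return sum(a != b for a, b in zip(h1, h2))

# A "meme", a slightly recompressed copy, and an unrelated image.
meme         = [[200, 200, 10, 10], [200, 200, 10, 10]]
recompressed = [[198, 203, 12, 9], [201, 199, 8, 11]]
unrelated    = [[10, 200, 10, 200], [200, 10, 200, 10]]

h = average_hash(meme)
print(hamming(h, average_hash(recompressed)))  # 0: flagged as a near-duplicate
print(hamming(h, average_hash(unrelated)))     # 4: clearly different content
```

In a real pipeline the hashes would be computed over downsampled real images and indexed, so that each newly scraped meme can be matched against millions of known ones in roughly constant time.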

Tim Weninger is an associate professor in the Department of Computer Science and Engineering at Notre Dame. 

“Memes are easy to create and even easier to share,” said Weninger. “When it comes to political memes, these can be used to help get out the vote, but they can also be used to spread inaccurate information and cause harm.”

Weninger collaborated with Walter Scheirer, an assistant professor in the Department of Computer Science and Engineering at Notre Dame, along with members of the research team. 

2019 General Election in Indonesia

The team tested the system on the 2019 general election in Indonesia, collecting over two million images and pieces of content related to the election from various sources on Twitter and Instagram.

In the election, the left-leaning, centrist incumbent beat the conservative, populist candidate. Following the election, violent protests erupted in which eight people died and hundreds more were injured. The team’s study found that there were spontaneous and coordinated campaigns launched on social media with the goal of influencing the election and inciting violence. 

The coordinated campaigns used manipulated images, which projected false claims and misrepresented certain events. News stories and memes were fabricated with the use of legitimate news logos, with the goal of provoking citizens and supporters from both parties. 

The Rest of the World

The 2019 general election in Indonesia is a representation of what can happen in the rest of the world. Disinformation, especially spread through social media, can threaten democratic processes. 

The research team at Notre Dame included digital forensics experts and specialists in peace studies. According to the team, the system is being developed in order to flag manipulated content, with the goal of preventing violence and warning journalists or election monitors of potential threats. 

The system is still in the research and development phase, but it will eventually be scalable and personalized for users to monitor content. Some of the biggest challenges in developing the system include determining the best way to scale up data ingestion and processing. According to Scheirer, the system is currently being evaluated with the next step being a transition to operational use. 

There is a chance that the system can be used to monitor the 2020 general election in the United States, which is expected to see massive amounts of disinformation and manipulation.

“The disinformation age is here,” said Scheirer. “A deepfake replacing actors in a popular film might seem fun and lighthearted but imagine a video or a meme created for the sole purpose of pitting one world leader against another — saying words they didn’t actually say. Imagine how quickly that content could be shared and spread across platforms. Consider the consequences of those actions.”

 


Deepfakes

Lego Finds An Inventive Way to Combine AI and Motion Tracking


Lego toy systems have been around for generations and have been considered by many as a way to stimulate the imagination. Quite a few users have at some point imagined having a Lego figure in their own image they could use with their sets.

Realizing that fact, Lego has decided to try to make that dream come true. As Gizmodo reports, Lego will try to realize that dream for anybody who visits its theme park opening in New York in 2020. To do this, the company will employ sophisticated motion tracking and neural-network facial recognition.

The theme park, named Legoland New York Resort, will be located in Goshen, New York, about 60 miles northwest of New York City, and will open on July 4, 2020.

According to Mobile ID World, this possibility will be featured in a Lego Factory Adventure Ride “that takes park guests through a tour of a “factory” showing them how the iconic little plastic bricks are made.”

Using Holovis’ Holotrack technology, the Lego Factory Adventure Ride will feature a segment where park guests are turned into one of Lego’s iconic miniature figures. Holotrack leverages the same artificial intelligence and deep learning technologies that have made deepfake videos possible, taking an individual’s image and translating it onto a screen. The guests’ mini-figures will mimic their movements and appearance, copying their hair, glasses, clothing, and facial expressions. The time it takes to render a guest into a Lego figure is reported to be about half a second.

But this is certainly not the only AI development in which Lego has been involved. Back in 2013, Lego Engineering used artificial intelligence to explore movement using Lego building blocks. In 2014, researchers and programmers started using the Lego Mindstorms EV3 robot with AI, connecting the brain of a worm to the sensors and motors of an EV3 robot using a computer program. AI development enthusiasts have been using Mindstorms EV3 for a while now, particularly to develop robotic movement.

In 2004 and 2016, two research projects were published that examined how Lego could be used in teaching AI. The first employed Lego’s Mindstorms, while the latter, published by Western Washington University, discussed 12 years of teaching experience on AI using Lego systems, including the EV3.

But the company’s biggest advancement in the field of AI came this August, when it announced that it will “begin trials of a new system to aid those with visual disabilities in following LEGO instructions.”

The system is called Audio & Braille Building Instructions. It uses “AI to pair digital traditional-style visual instructions with verbal or tactile Braille directions, and was developed in collaboration with life-long LEGO fan Matthew Shifrin, who is blind.”

The system is in the early stages of development and currently supports “a handful of sets at present while the development team seeks feedback from users.” That feedback will be used to add support for more sets “in the first half of 2020, with an eventual goal of supporting all-new LEGO product launches.” The official instructions created by the new AI-driven program will be available for free from legoaudioinstructions.com.

 


Deep Learning

Deep Learning Is Re-Shaping The Broadcasting Industry


Deep learning has become a buzzword in many fields, and broadcasting organizations are among those that have started to explore all the potential it has to offer, from news reporting to feature films and programs, both in cinemas and on TV.

As TechRadar reported, the number of opportunities deep learning presents in video production, editing, and cataloging is already quite high. But as the report notes, this technology is not limited to repetitive tasks in broadcasting, since it can also “enhance the creative process, improve video delivery and help preserve the massive video archives that many studios keep.”

As far as video generation and editing are concerned, Warner Bros. recently had to spend $25M on reshoots for ‘Justice League’, and part of that money went to digitally removing a mustache that star Henry Cavill had grown and could not shave due to an overlapping commitment. Deep learning could certainly be put to good use in such time-consuming and financially taxing post-production processes.

Even widely available solutions like Flo make it possible to use deep learning to create a video automatically just by describing your idea. The software then searches a library for potentially relevant videos and edits them together automatically.

Flo is also able to sort and classify videos, making it easier to find a particular part of the footage. Such technologies also make it possible to easily remove undesirable footage or make a personal recommendation list based on a video somebody has expressed an interest in.

Google has come up with a neural network “that can automatically separate the foreground and background of a video. What used to require a green screen can now be done with no special equipment.”

Deepfakes have already made a name for themselves, both good and bad, but their potential use in special effects has already reached quite a high level.

One area where deep learning will certainly make a difference is the restoration of classic films: according to the UCLA Film & Television Archive, nearly half of all films produced prior to 1950 have disappeared, and 90% of classic film prints are currently in very poor condition.

Colorizing black-and-white footage is still a controversial subject among filmmakers, but those who decide to go that route can now use Nvidia tools that significantly shorten the lengthy process: the artist colors only one frame of a scene, and deep learning does the rest. Google, for its part, has come up with a technology that can recreate part of a video-recorded scene based on start and end frames.

Face and object recognition is already actively used for classifying a video collection or archive, searching for clips featuring a given actor or newsperson, or measuring an actor’s exact screen time in a video or film. TechRadar mentions that Sky News recently used facial recognition to identify famous faces at the royal wedding.

This technology is now becoming widely used in sports broadcasting, for example to “track the movements of the ball, or to identify other key elements to the game, such as the goal.” In soccer (football), a related system known as VAR (video assistant referee) is actually used in many official tournaments and national leagues as a referee’s tool during the game.

Streaming is yet another aspect of broadcasting that can benefit from deep learning. Neural networks can recreate high-definition frames from low-definition input, giving the viewer a better picture even when the original signal is not fully up to standard.
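The article does not name a specific super-resolution model, but the task such networks learn can be illustrated with the classical baseline they are trained to beat: interpolation-based upscaling. The sketch below (with a hypothetical `bilinear_upscale` helper on synthetic pixel data) shows a low-resolution frame being enlarged; a trained network would replace this fixed formula with learned detail reconstruction:

```python
# Baseline upscaling: bilinear interpolation of a tiny grayscale "frame".
# Super-resolution networks learn a mapping that outperforms this formula
# by hallucinating plausible high-frequency detail.

def bilinear_upscale(img, factor):
    """Upscale a 2-D list of grayscale values by an integer factor."""
    h, w = len(img), len(img[0])
    H, W = h * factor, w * factor
    out = [[0.0] * W for _ in range(H)]
    for y in range(H):
        for x in range(W):
            # Map each output pixel back into source coordinates.
            sy = min(y / factor, h - 1)
            sx = min(x / factor, w - 1)
            y0, x0 = int(sy), int(sx)
            y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
            fy, fx = sy - y0, sx - x0
            top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
            bot = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
            out[y][x] = top * (1 - fy) + bot * fy
    return out

low = [[0, 100], [100, 200]]        # a 2x2 low-definition frame
high = bilinear_upscale(low, 2)     # a smoother 4x4 reconstruction
print(len(high), len(high[0]))      # 4 4
```

Applied frame by frame (or with temporal models that use neighboring frames), this is how a low-bitrate stream can be presented at a higher apparent resolution on the viewer's device.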

 
