Efforts by tech companies to tackle misinformation and fake content are kicking into high gear as sophisticated fake-content generation technologies like deepfakes become easier to use and more refined. One upcoming attempt to help people detect and fight deepfakes is Reality Defender, produced by the AI Foundation, an organization committed to developing ethical AI agents and assistants that users can train to complete various tasks.
The AI Foundation’s most notable project is a platform that allows people to create digital personas that look like them and represent them in virtual hangout spaces. The foundation is overseen by the Global AI Council, and as part of its mandate it must anticipate the possible negative impacts of AI platforms and try to get ahead of those problems. As reported by VentureBeat, one of the tools the AI Foundation has created to assist in the detection of deepfakes is dubbed Reality Defender. Reality Defender is a tool that runs in a web browser, analyzing video, images, and other types of media for signs that the media has been faked or altered in some fashion. It’s hoped that the tool will help counteract the increasing flow of deepfakes on the internet, which according to some estimates has roughly doubled over the course of the past six months.
Reality Defender operates by utilizing a variety of AI-based algorithms that detect clues suggesting an image or video may have been faked. When the models flag media incorrectly, users of the tool can label those false positives, and that feedback data is used to retrain the models. AI companies that create non-deceptive deepfakes can have their content tagged with an “honest AI” tag or watermark that lets people readily identify AI-generated fakes.
Reality Defender is just one tool within a broader AI responsibility platform the AI Foundation is attempting to create. The foundation is pursuing Guardian AI, a responsibility platform built on the precept that individuals should have access to personal AI agents that work for them and can help guard against exploitation by bad actors. Essentially, the AI Foundation aims both to expand the reach of AI in society, bringing it to more people, and to guard against AI’s risks.
Reality Defender isn’t the only new AI-driven product aiming to reduce misinformation in the United States. A similar product, SurfSafe, was created by two undergraduates from UC Berkeley, Rohan Phadte and Ash Bhat. According to The Verge, SurfSafe lets its users click on a piece of media they are curious about; the program then carries out a reverse image search, attempting to find similar content from various trusted sources on the internet and flagging images that are known to be doctored.
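SurfSafe’s exact implementation isn’t public, but a common building block for this kind of reverse image lookup is a perceptual hash, which maps visually similar images to similar bit strings. The sketch below uses a tiny average hash (“aHash”) over a list of grayscale pixel values; the database, labels, and images are invented for illustration.

```python
# Illustrative sketch of reverse-image matching via a perceptual
# "average hash". Small recompressions or resizes barely change the
# hash, so near-duplicates of a known doctored image can be flagged.

def average_hash(pixels):
    """Hash a grayscale image (list of 0-255 ints) into a bit string.

    Each bit records whether a pixel is brighter than the image's mean."""
    mean = sum(pixels) / len(pixels)
    return "".join("1" if p > mean else "0" for p in pixels)

def hamming_distance(h1, h2):
    """Number of differing bits between two equal-length hashes."""
    return sum(a != b for a, b in zip(h1, h2))

# A tiny "known doctored image" database: hash -> label (hypothetical).
known_fakes = {}

original = [10, 200, 30, 180, 20, 220, 40, 190]   # toy 8-pixel image
known_fakes[average_hash(original)] = "doctored-photo-001"

# A slightly recompressed copy has different bytes but a nearby hash.
recompressed = [12, 198, 28, 183, 22, 219, 41, 188]

def lookup(pixels, max_distance=2):
    """Return the label of a known fake within max_distance bits, if any."""
    h = average_hash(pixels)
    for known_hash, label in known_fakes.items():
        if hamming_distance(h, known_hash) <= max_distance:
            return label
    return None
```

Here `lookup(recompressed)` still matches `"doctored-photo-001"` even though the pixel values differ, which is the property a reverse image search needs; production systems use far larger hashes over resized images, but the mechanics are the same.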
It’s unclear just how effective these solutions will be in the long run. Dartmouth College professor and forensics expert Hany Farid was quoted by The Verge as saying he is “extremely skeptical” that systems like Reality Defender will work in any meaningful capacity. One of the key challenges with detecting fake content, Farid argued, is that media isn’t purely fake or real. As he explained:
“There is a continuum; an incredibly complex range of issues to deal with. Some changes are meaningless, and some fundamentally alter the nature of an image. To pretend we can train an AI to spot the difference is incredibly naïve. And to pretend we can crowdsource it is even more so.”
Furthermore, it’s difficult to rely on crowdsourcing elements, such as tagging false positives, because humans are typically quite bad at identifying fake images, often missing the subtle details that mark an image as fake. It’s also unclear how to deal with bad-faith actors who deliberately mislabel content when flagging it.
It seems likely that, in order to be maximally effective, fake-detecting tools will have to be combined with digital literacy efforts that teach people how to reason about the content they interact with online.
AI Experts Rank Deepfakes and 19 Other AI-Based Crimes By Danger Level
A new report published by University College London aimed to identify the many different ways that AI could potentially assist criminals over the next 15 years. The report had 31 AI experts examine 20 different methods of using AI to carry out crimes and rank those methods according to variables like how easy the crime would be to commit, the potential societal harm it could do, the amount of money a criminal could make, and how difficult the crime would be to stop. According to the results of the report, deepfakes posed the greatest threat to law-abiding citizens and society generally, as their potential for exploitation by criminals and terrorists is high.
The AI experts ranked deepfakes at the top of the list of potential AI threats because deepfakes are difficult to identify and counteract. Deepfakes are constantly getting better at fooling even the eyes of deepfake experts, and other AI-based methods of detecting them are often unreliable. In terms of their capacity for harm, deepfakes can easily be used by bad actors to discredit trusted, expert figures or to swindle people by posing as loved ones or other trusted individuals. If deepfakes become abundant, people could begin to distrust any audio or video media, losing faith in the validity of real events and facts.
Dr. Matthew Caldwell, from UCL Computer Science, was the first author on the paper. Caldwell underlines the growing danger of deepfakes as more and more of our activity moves online. As Caldwell was quoted by UCL News:
“Unlike many traditional crimes, crimes in the digital realm can be easily shared, repeated, and even sold, allowing criminal techniques to be marketed and for crime to be provided as a service. This means criminals may be able to outsource the more challenging aspects of their AI-based crime.”
The team of experts ranked five other emerging AI technologies as highly concerning potential catalysts for new kinds of crime: driverless vehicles being used as weapons, hack attacks on AI-controlled systems and devices, online data collection for the purposes of blackmail, AI-based phishing featuring customized messages, and fake news/misinformation in general.
According to Shane Johnson, the Director of the Dawes Centre for Future Crimes at UCL, the goal of the study was to identify possible threats associated with newly emerging technologies and hypothesize ways to get ahead of these threats. Johnson says that as the speed of technological change increases, it’s imperative that “we anticipate future crime threats so that policymakers and other stakeholders with the competency to act can do so before new ‘crime harvests’ occur”.
The fourteen remaining crimes on the list were placed into one of two categories: moderate concern and low concern.
AI crimes of moderate concern include the misuse of military robots, data poisoning, automated attack drones, learning-based cyberattacks, denial of service attacks for online activities, manipulating financial/stock markets, snake oil (sale of fraudulent services cloaked in AI/ML terminology), and tricking face recognition.
Low concern AI-based crimes include the forgery of art or music, AI-assisted stalking, fake reviews authored by AI, evading AI detection methods, and “burglar bots” (bots which break into people’s homes to steal things).
Of course, AI models can themselves be used to help combat some of these crimes. Recently, AI models have been deployed to assist in the detection of money laundering schemes by flagging suspicious financial transactions. Human operators then review the flagged transactions, approving or denying each alert, and that feedback is used to further train the model. It seems likely that the future will involve AIs being pitted against one another, with criminals designing their best AI-assisted tools while security firms, law enforcement, and other ethical AI designers build their own best AI systems in response.
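The human-in-the-loop feedback cycle described above can be sketched in a few lines. This is a deliberately simplified stand-in, not a real anti-money-laundering model: the class name, the fixed threshold, and the update rule are all invented for illustration, and real systems learn from many transaction features, not a single amount.

```python
# Minimal sketch of the human-in-the-loop loop described above: a simple
# rule-based "model" flags suspicious transactions, an analyst confirms
# or rejects each alert, and the feedback adjusts the model.

class FlaggingModel:
    def __init__(self, threshold=1000.0):
        # Transactions above this amount get flagged for human review.
        self.threshold = threshold

    def flag(self, amount):
        return amount > self.threshold

    def feedback(self, amount, was_suspicious):
        """Crude 'retraining': nudge the threshold based on analyst review."""
        if self.flag(amount) and not was_suspicious:
            self.threshold += 100.0   # false positive: be less sensitive
        elif not self.flag(amount) and was_suspicious:
            self.threshold -= 100.0   # missed case: be more sensitive

model = FlaggingModel()
alert = model.flag(1500.0)                    # alert raised for review
model.feedback(1500.0, was_suspicious=False)  # analyst: legitimate payment
# After the correction, the threshold has risen from 1000.0 to 1100.0,
# so similar borderline amounts are less likely to be flagged again.
```

The essential point is the loop itself: every analyst decision becomes a labeled training example, so the model's false-positive rate should fall over time.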
What Are Deepfakes?
As deepfakes become easier to make and more prolific, more attention is paid to them. Deepfakes have become the focal point of discussions involving AI ethics, misinformation, openness of information and the internet, and regulation. It pays to be informed regarding deepfakes, and to have an intuitive understanding of what deepfakes are. This article will clarify the definition of a deepfake, examine their use cases, discuss how deepfakes can be detected, and examine the implications of deepfakes for society.
What are Deepfakes?
Before discussing deepfakes further, it is helpful to take some time and clarify what “deepfakes” actually are. There is a substantial amount of confusion regarding the term, which is often misapplied to any falsified media, regardless of whether or not it is a genuine deepfake. To qualify as a deepfake, the faked media in question must be generated with a machine-learning system, specifically a deep neural network.
The key ingredient of deepfakes is machine learning, which has made it possible for computers to generate video and audio automatically, relatively quickly and easily. A deep neural network is trained on footage of a real person so that the network learns how that person looks and moves under the target environmental conditions. The trained network is then applied to images of another individual and augmented with additional computer-graphics techniques to combine the new person with the original footage. An encoder algorithm determines the similarities between the original face and the target face. Once the common features of the faces have been isolated, a second AI algorithm called a decoder examines the encoded (compressed) images and reconstructs them based on the features of the original images. Two decoders are used: one trained on the original subject’s face and one trained on the target person’s face. To make the swap, the decoder trained on images of person Y is fed the encoded images of person X. The result is that person Y’s face is reconstructed over person X’s facial expressions and orientation.
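The shared-encoder/two-decoder scheme can be illustrated with a toy sketch. Everything here is a stand-in: “faces” are short lists of numbers, the encoder is a crude downsampling, and each decoder applies a fixed “style offset” instead of a learned reconstruction, purely to show how the swap is wired up.

```python
# Toy illustration of the shared-encoder / two-decoder face-swap scheme.
# Real deepfake pipelines use deep neural networks trained on thousands
# of frames; these linear stand-ins only demonstrate the data flow.

class Encoder:
    """Shared encoder: compresses a face into a low-dimensional code
    capturing pose/expression features common to both identities."""
    def encode(self, face):
        return face[::2]  # keep every other value as crude "compression"

class Decoder:
    """Per-identity decoder: expands a code back into a face rendered
    in one specific person's appearance (here, a fixed offset)."""
    def __init__(self, style_offset):
        self.style_offset = style_offset

    def decode(self, code):
        return [v + self.style_offset for v in code for _ in (0, 1)]

encoder = Encoder()
decoder_x = Decoder(style_offset=0.0)   # "trained" on person X
decoder_y = Decoder(style_offset=10.0)  # "trained" on person Y

face_x = [1.0, 2.0, 3.0, 4.0]

# Normal reconstruction: encode X's frame, decode with X's decoder.
recon_x = decoder_x.decode(encoder.encode(face_x))

# The swap: encode person X's expression/pose, but decode with person
# Y's decoder, yielding Y's appearance with X's expression.
swap = decoder_y.decode(encoder.encode(face_x))
```

Because both identities pass through the same encoder, the code captures only what the faces share (expression, orientation), and each decoder re-adds one person's appearance, which is exactly what makes the swap possible.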
Currently, it still takes a fair amount of time for a deepfake to be made. The creator of the fake has to spend a long time manually adjusting parameters of the model, as suboptimal parameters will lead to noticeable imperfections and image glitches that give away the fake’s true nature.
Although it’s frequently assumed that most deepfakes are made with a type of neural network called a generative adversarial network (GAN), many (perhaps most) deepfakes created these days do not rely on GANs. While GANs did play a prominent role in the creation of early deepfakes, most deepfake videos are created through alternative methods, according to Siwei Lyu from SUNY Buffalo.
Training a GAN takes a disproportionately large amount of data, and GANs often take much longer to render an image than other image generation techniques. GANs are also better suited to generating static images than video, as they have difficulty maintaining consistency from frame to frame. It’s much more common to use an encoder and multiple decoders to create deepfakes.
What Are Deepfakes Used For?
Many of the deepfakes found online are pornographic in nature. According to research done by the AI firm Deeptrace, in a sample of approximately 15,000 deepfake videos collected in September 2019, roughly 95% were pornographic. A troubling implication of this fact is that as the technology becomes easier to use, incidents of fake revenge porn could rise.
However, not all deepfakes are pornographic in nature, and there are more legitimate uses for the technology. Audio deepfake technology could help people speak with their regular voices again after those voices are damaged or lost to illness or injury. Deepfakes can also be used to hide the faces of people in sensitive, potentially dangerous situations, while still allowing their lips and expressions to be read. Deepfake technology can potentially improve the dubbing of foreign-language films, aid in the repair of old and damaged media, and even enable new styles of art.
While most people think of fake videos when they hear the term “deepfake”, fake videos are by no means the only kind of fake media produced with deepfake technology. Deepfake technology is used to create photo and audio fakes as well. As previously mentioned, GANs are frequently used to generate fake images. It’s thought that there have been many cases of fake LinkedIn and Facebook profiles that have profile images generated with deepfake algorithms.
It’s possible to create audio deepfakes as well. Deep neural networks can be trained to produce voice clones (sometimes called voice skins) of different people, including celebrities and politicians. One famous example of an audio deepfake is when the AI company Dessa used an AI model, supported by non-AI algorithms, to recreate the voice of podcast host Joe Rogan.
How To Spot Deepfakes
As deepfakes become more and more sophisticated, distinguishing them from genuine media will become tougher and tougher. Currently, there are a few telltale signs people can look for to determine whether a video is potentially a deepfake: poor lip-syncing, unnatural movement, flickering around the edge of the face, and warping of fine details like hair, teeth, or reflections. Other potential signs include sections of the video that are noticeably lower quality than the rest and irregular blinking of the eyes.
While these signs may help one spot a deepfake at the moment, as deepfake technology improves the only option for reliable deepfake detection might be other types of AI trained to distinguish fakes from real media.
Artificial intelligence companies, including many of the large tech firms, are researching methods of detecting deepfakes. Last December, a deepfake detection challenge was launched with the backing of three tech giants: Amazon, Facebook, and Microsoft. Research teams from around the world competed to develop the best detection methods. Other groups, such as a joint team of researchers from Google and Jigsaw, are working on a kind of “face forensics” that can detect videos that have been altered, making their datasets open source and encouraging others to develop deepfake detection methods. The aforementioned Dessa has worked on refining deepfake detection techniques, trying to ensure that the detection models work on deepfake videos found in the wild (out on the internet) rather than just on pre-composed training and testing datasets, like the open-source dataset Google provided.
Other strategies for dealing with the proliferation of deepfakes are also being investigated. One is checking videos for concordance with other sources of information: searches can be done for video of the same event taken from other angles, or background details of the video (like weather patterns and locations) can be checked for incongruities. Beyond this, a blockchain-based online ledger system could register videos when they are initially created, holding their original audio and images so that derivative videos can always be checked for manipulation.
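The ledger idea amounts to registering a cryptographic fingerprint of a video at creation time and comparing later copies against it. The sketch below uses SHA-256 over raw bytes with a plain dictionary standing in for the blockchain; the video id and byte strings are made up for illustration.

```python
# Sketch of the registration-ledger idea described above: store a hash
# of a video's bytes when it is created, then check later copies against
# the registry. A real deployment would anchor these hashes in a
# tamper-evident blockchain rather than an in-memory dict.

import hashlib

ledger = {}  # video id -> SHA-256 hex digest of the original bytes

def register(video_id, video_bytes):
    """Record the fingerprint of a newly created video."""
    ledger[video_id] = hashlib.sha256(video_bytes).hexdigest()

def is_unaltered(video_id, video_bytes):
    """True only if these bytes exactly match the registered original."""
    return ledger.get(video_id) == hashlib.sha256(video_bytes).hexdigest()

original = b"raw-frames-of-the-original-video"   # stand-in for real footage
register("press-briefing-clip", original)        # hypothetical video id

tampered = original + b"spliced-in-frame"        # any edit changes the hash
```

One design caveat: a byte-exact hash catches any edit but also flags benign re-encodings, so practical systems tend to fingerprint perceptual features of the frames and audio rather than raw bytes.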
Ultimately, it’s important that reliable methods of detecting deepfakes are created and that these detection methods keep up with the newest advances in deepfake technology. While it is hard to know exactly what the effects of deepfakes will be, if there are not reliable methods of detecting deepfakes (and other forms of fake media), misinformation could potentially run rampant and degrade people’s trust in society and institutions.
Implications of Deepfakes
What are the dangers of allowing deepfakes to proliferate unchecked?
One of the biggest problems that deepfakes create currently is nonconsensual pornography, engineered by combining people’s faces with pornographic videos and images. AI ethicists are worried that deepfakes will see more use in the creation of fake revenge porn. Beyond this, deepfakes could be used to bully and damage the reputation of just about anyone, as they could be used to place people into controversial and compromising scenarios.
Companies and cybersecurity specialists have expressed concern about the use of deepfakes to facilitate scams, fraud, and extortion. Allegedly, deepfake audio has already been used to convince employees of a company to transfer money to scammers.
It’s possible that deepfakes could have harmful effects even beyond those listed above. Deepfakes could potentially erode people’s trust in media generally, and make it difficult for people to distinguish between real news and fake news. If many videos on the web are fake, it becomes easier for governments, companies, and other entities to cast doubt on legitimate controversies and unethical practices.
When it comes to governments, deepfakes may even pose threats to the operation of democracy, which requires that citizens be able to make informed decisions about politicians based on reliable information; misinformation undermines democratic processes. For example, the president of Gabon, Ali Bongo, appeared in a video intended to reassure the Gabonese citizenry. The president had been assumed to be unwell for a long period of time, and his sudden appearance in a likely fake video helped kick off an attempted coup. President Donald Trump claimed that an audio recording of him bragging about grabbing women by the genitals was fake, despite having also described it as “locker room talk”. Prince Andrew likewise claimed that an image provided by Emily Maitlis’ attorney was fake, though the attorney insisted on its authenticity.
Ultimately, while there are legitimate uses for deepfake technology, there are many potential harms that can arise from the misuse of that technology. For that reason, it’s extremely important that methods to determine the authenticity of media be created and maintained.
Early Warning System for Disinformation Developed with AI
Researchers at the University of Notre Dame are working on a project to combat disinformation online, including media campaigns to incite violence, sow discord, and meddle in democratic elections.
The team of researchers relied on artificial intelligence (AI) to develop an early warning system that can identify manipulated images, deepfake videos, and disinformation online. It is a scalable, automated system that uses content-based image retrieval and applies computer-vision-based techniques to identify political memes across multiple social media networks.
Tim Weninger is an associate professor in the Department of Computer Science and Engineering at Notre Dame.
“Memes are easy to create and even easier to share,” said Weninger. “When it comes to political memes, these can be used to help get out the vote, but they can also be used to spread inaccurate information and cause harm.”
Weninger collaborated with Walter Scheirer, an assistant professor in the Department of Computer Science and Engineering at Notre Dame, along with members of the research team.
2019 General Election in Indonesia
The team tested the system on the 2019 general election in Indonesia, collecting over two million images and related content from various sources on Twitter and Instagram.
In the election, the left-leaning, centrist incumbent beat the conservative, populist candidate. Following the election, violent protests erupted in which eight people died and hundreds more were injured. The team’s study found that there were spontaneous and coordinated campaigns launched on social media with the goal of influencing the election and inciting violence.
The coordinated campaigns used manipulated images, which projected false claims and misrepresented certain events. News stories and memes were fabricated with the use of legitimate news logos, with the goal of provoking citizens and supporters from both parties.
The Rest of the World
The 2019 general election in Indonesia is a representation of what can happen in the rest of the world. Disinformation, especially spread through social media, can threaten democratic processes.
The research team at Notre Dame included digital forensics experts and specialists in peace studies. According to the team, the system is being developed in order to flag manipulated content, with the goal of preventing violence and warning journalists or election monitors of potential threats.
The system is still in the research and development phase, but it will eventually be scalable and personalized for users to monitor content. Some of the biggest challenges in developing the system include determining the best way to scale up data ingestion and processing. According to Scheirer, the system is currently being evaluated with the next step being a transition to operational use.
There is a chance that the system can be used to monitor the 2020 general election in the United States, which is expected to see massive amounts of disinformation and manipulation.
“The disinformation age is here,” said Scheirer. “A deepfake replacing actors in a popular film might seem fun and lighthearted but imagine a video or a meme created for the sole purpose of pitting one world leader against another — saying words they didn’t actually say. Imagine how quickly that content could be shared and spread across platforms. Consider the consequences of those actions.”