

Lego Finds An Inventive Way to Combine AI and Motion Tracking



Lego toy systems have been around for generations and are regarded by many as a way to stimulate the imagination. Quite a few fans have at some point imagined having a Lego figure made in their own image that they could use with their sets.

Recognizing that desire, Lego has decided to try to make the dream come true. As Gizmodo reports, the company will offer the experience to anybody who visits its theme park opening in New York in 2020. To do this, it will employ sophisticated motion tracking and neural-network-based facial recognition.

The theme park, named Legoland New York Resort, will be located in Goshen, New York, about 60 miles northwest of New York City, and is set to open on July 4, 2020.

According to Mobile ID World, the experience will be featured in a Lego Factory Adventure Ride "that takes park guests through a tour of a 'factory' showing them how the iconic little plastic bricks are made."

Using Holovis' Holotrack technology, the Lego Factory Adventure Ride will feature a segment in which park guests are turned into Lego's iconic miniature figures. Holotrack leverages the same artificial intelligence and deep learning technologies that have made deepfake videos possible, taking an individual's image and translating it onto a screen. Each guest's mini-figure will mimic their movements and appearance, copying their hair, glasses, clothing, and facial expressions. The time it takes to render a guest into a Lego figure is reported to be about half a second.

But this is certainly not the first AI development in which Lego has been involved. Back in 2013, Lego Engineering used artificial intelligence to explore movement with Lego building blocks. In 2014, researchers and programmers began pairing the Lego Mindstorms EV3 robot with AI, using a computer program to connect the brain of a worm to the sensors and motors of an EV3 robot. AI development enthusiasts have been using Mindstorms EV3 for a while now, particularly to develop robotic movement.

In 2004 and 2016, two research projects were published examining how Lego could be used to teach AI. The first employed Lego's Mindstorms, while the second, published by Western Washington University, discussed 12 years of experience teaching AI with Lego systems, including the EV3.

But the company's biggest advancement in the field of AI came this August, when it announced that it will "begin trials of a new system to aid those with visual disabilities in following LEGO instructions."

The system, called Audio & Braille Building Instructions, uses "AI to pair digital traditional-style visual instructions with verbal or tactile Braille directions," and was developed in collaboration with lifelong LEGO fan Matthew Shifrin, who is blind.

The system is in the early stages of development and currently supports "a handful of sets at present while the development team seeks feedback from users." That feedback will be used to extend support to more sets "in the first half of 2020, with an eventual goal of supporting all-new LEGO product launches." The official instructions created by the new AI-driven program will be available for free from legoaudioinstructions.com.

 




AI Experts Rank Deepfakes and 19 Other AI-Based Crimes By Danger Level


A new report published by University College London aimed to identify the many ways that AI could potentially assist criminals over the next 15 years. The report had 31 AI experts take 20 different methods of using AI to carry out crimes and rank these methods on several factors: how easy the crime would be to commit, the potential societal harm it could do, the amount of money a criminal could make, and how difficult the crime would be to stop. According to the results of the report, deepfakes posed the greatest threat to law-abiding citizens and society generally, as their potential for exploitation by criminals and terrorists is high.

The AI experts ranked deepfakes at the top of the list of potential AI threats because deepfakes are difficult to identify and counteract. Deepfakes are constantly getting better at fooling even the eyes of experts, and other AI-based methods of detecting them are often unreliable. In terms of their capacity for harm, deepfakes can easily be used by bad actors to discredit trusted, expert figures or to swindle people by posing as loved ones or other trusted individuals. If deepfakes become abundant, people could begin to lose trust in any audio or video media, which could make them lose faith in the validity of real events and facts.

Dr. Matthew Caldwell, from UCL Computer Science, was the first author on the paper. Caldwell underlines the growing danger of deepfakes as more and more of our activity moves online. As Caldwell was quoted by UCL News:

“Unlike many traditional crimes, crimes in the digital realm can be easily shared, repeated, and even sold, allowing criminal techniques to be marketed and for crime to be provided as a service. This means criminals may be able to outsource the more challenging aspects of their AI-based crime.”

The team of experts ranked five other emerging AI technologies as highly concerning potential catalysts for new kinds of crime: driverless vehicles being used as weapons, hack attacks on AI-controlled systems and devices, online data collection for the purposes of blackmail, AI-based phishing featuring customized messages, and fake news/misinformation in general.

According to Shane Johnson, the Director of the Dawes Centre for Future Crimes at UCL, the goal of the study was to identify possible threats associated with newly emerging technologies and hypothesize ways to get ahead of these threats. Johnson says that as the speed of technological change increases, it’s imperative that “we anticipate future crime threats so that policymakers and other stakeholders with the competency to act can do so before new ‘crime harvests’ occur”.

The fourteen other possible crimes on the list were placed into one of two categories: moderate concern and low concern.

AI crimes of moderate concern include the misuse of military robots, data poisoning, automated attack drones, learning-based cyberattacks, denial of service attacks for online activities, manipulating financial/stock markets, snake oil (sale of fraudulent services cloaked in AI/ML terminology), and tricking face recognition.

Low concern AI-based crimes include the forgery of art or music, AI-assisted stalking, fake reviews authored by AI, evading AI detection methods, and “burglar bots” (bots which break into people’s homes to steal things).

Of course, AI models themselves can be used to help combat some of these crimes. Recently, AI models have been deployed to assist in the detection of money laundering schemes by flagging suspicious financial transactions. The resulting alerts are reviewed by human operators who approve or deny them, and that feedback is used to better train the model. It seems likely that the future will involve AIs being pitted against one another, with criminals trying to design their best AI-assisted tools while security firms, law enforcement, and other ethical AI designers build their own best AI systems in response.
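
To make that workflow more concrete, here is a minimal Python sketch of the human-in-the-loop pattern just described: an unsupervised model surfaces unusual transactions, a human analyst confirms or rejects each alert, and the reviewed alerts train a supervised model that refines future scoring. The dataset, features, and thresholds are hypothetical, not the system of any particular bank.

```python
import numpy as np
from sklearn.ensemble import IsolationForest, RandomForestClassifier

rng = np.random.default_rng(0)
# Hypothetical transaction features: amount, hour of day, transfers in the past week
transactions = rng.normal(loc=[500.0, 12.0, 3.0], scale=[300.0, 5.0, 2.0], size=(10_000, 3))

# Stage 1: an unsupervised detector surfaces unusual transactions for review.
detector = IsolationForest(contamination=0.01, random_state=0).fit(transactions)
alerts = transactions[detector.predict(transactions) == -1]

# Stage 2: analysts label each alert (1 = confirmed suspicious, 0 = false positive).
# The labels are simulated here; in practice they come from human review.
analyst_labels = rng.integers(0, 2, size=len(alerts))

# Stage 3: the reviewed alerts train a supervised model so that future
# suspicion scores reflect the analysts' feedback.
refiner = RandomForestClassifier(n_estimators=100, random_state=0)
refiner.fit(alerts, analyst_labels)
suspicion_scores = refiner.predict_proba(transactions)[:, 1]
```

In a production setting the labeling and retraining steps would run continuously, with each review cycle shifting the model's decision boundary.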



AI Browser Tools Aim To Recognize Deepfakes and Other Fake Media


Efforts by tech companies to tackle misinformation and fake content have kicked into high gear as sophisticated fake-content generation technologies like deepfakes become easier to use and more refined. One upcoming attempt to help people detect and fight deepfakes is Reality Defender, produced by the AI Foundation, which has committed itself to developing ethical AI agents and assistants that users can train to complete various tasks.

The AI Foundation's most notable project is a platform that allows people to create digital personas that look like them and represent them in virtual hangout spaces. The AI Foundation is overseen by the Global AI Council, and as part of its mandate it must anticipate the possible negative impacts of AI platforms and try to get ahead of those problems. As reported by VentureBeat, one of the tools the AI Foundation has created to assist in the detection of deepfakes is dubbed Reality Defender. Reality Defender is a tool that a person can use in their web browser to analyze video, images, and other types of media for signs that the media has been faked or altered in some fashion. It's hoped that the tool will help counteract the increasing flow of deepfakes on the internet, which according to some estimates has roughly doubled over the course of the past six months.

Reality Defender operates by utilizing a variety of AI-based algorithms that can detect clues suggesting an image or video might have been faked. The AI models detect subtle signs of trickery and manipulation, and false positives are labeled as incorrect by the tool's users; that data is then used to retrain the models. AI companies that create non-deceptive deepfakes have their content tagged with an "honest AI" tag or watermark that lets people readily identify AI-generated fakes.

Reality Defender is just one part of a suite of tools, and of an entire AI responsibility platform, that the AI Foundation is attempting to create. The AI Foundation is pursuing the creation of Guardian AI, a responsibility platform built on the precept that individuals should have access to personal AI agents that work for them and can help guard against their exploitation by bad actors. Essentially, the AI Foundation is aiming both to expand the reach of AI in society, bringing it to more people, and to guard against the risks of AI.

Reality Defender isn't the only new AI-driven product aiming to reduce misinformation in the United States. A similar product, SurfSafe, was created by two undergraduates from UC Berkeley, Rohan Phadte and Ash Bhat. According to The Verge, SurfSafe operates by allowing its users to click on a piece of media they are curious about; the program then carries out a reverse image search, trying to find similar content from various trusted sources on the internet and flagging images that are known to be doctored.
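
SurfSafe's exact matching pipeline isn't described in detail, but the general reverse-lookup idea can be sketched with perceptual hashing: hash images from trusted sources, then check whether a suspect image is a near-duplicate of any of them. The sketch below uses the real Pillow and ImageHash libraries, but the index, file paths, and distance threshold are purely illustrative assumptions.

```python
from PIL import Image
import imagehash

# Hypothetical index of perceptual hashes built from trusted outlets.
trusted_index = {
    imagehash.phash(Image.open("reference/photo_original.jpg")): "reference/photo_original.jpg",
}

def check_image(path, max_distance=8):
    """Return a verdict for the closest trusted match, or None if nothing is similar."""
    suspect_hash = imagehash.phash(Image.open(path))
    best_hash, best_source = min(trusted_index.items(), key=lambda item: suspect_hash - item[0])
    distance = suspect_hash - best_hash  # Hamming distance between the hashes
    if distance == 0:
        return f"Identical to trusted copy: {best_source}"
    if distance <= max_distance:
        return f"Near-duplicate of {best_source} (distance {distance}); possibly edited"
    return None

print(check_image("downloads/suspect_photo.jpg"))
```

A real service would keep a far larger index and combine the hash lookup with the kind of reverse image search the article describes.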

It's unclear just how effective these solutions will be in the long run. Dartmouth College professor and forensics expert Hany Farid was quoted by The Verge as saying that he is "extremely skeptical" that systems like Reality Defender will work in a meaningful capacity, because one of the key challenges with detecting fake content is that media isn't purely fake or real. As Farid explained:

“There is a continuum; an incredibly complex range of issues to deal with. Some changes are meaningless, and some fundamentally alter the nature of an image. To pretend we can train an AI to spot the difference is incredibly naïve. And to pretend we can crowdsource it is even more so.”

Furthermore, it's difficult to include crowdsourcing elements, such as tagging false positives, because humans are typically quite bad at identifying fake images; they often make mistakes and miss the subtle details that mark an image as fake. It's also unclear how to deal with bad-faith actors who deliberately flag content incorrectly.

It seems likely that, in order to be maximally effective, fake-detecting tools will have to be combined with digital literacy efforts that teach people how to reason about the content they interact with online.



What Are Deepfakes?


As deepfakes become easier to make and more prolific, they attract more attention. Deepfakes have become the focal point of discussions involving AI ethics, misinformation, openness of information and the internet, and regulation. It pays to be informed about deepfakes and to have an intuitive understanding of what they are. This article will clarify the definition of a deepfake, examine use cases, discuss how deepfakes can be detected, and examine their implications for society.

What are Deepfakes?

Before going on to discuss deepfakes further, it would be helpful to take some time to clarify what "deepfakes" actually are. There is a substantial amount of confusion regarding the term, and it is often misapplied to any falsified media, regardless of whether or not it is a genuine deepfake. In order to qualify as a deepfake, the faked media in question must be generated with a machine-learning system, specifically a deep neural network.

The key ingredient of deepfakes is machine learning, which has made it possible for computers to generate convincing fake video and audio relatively quickly and easily. A deep neural network is trained on footage of a real person so that the network learns how that person looks and moves under the target environmental conditions. The trained network is then applied to images of another individual and augmented with additional computer graphics techniques in order to combine the new person with the original footage. An encoder algorithm is used to determine the similarities between the original face and the target face. Once the common features of the faces have been isolated, a second AI algorithm called a decoder is used. The decoder examines the encoded (compressed) images and reconstructs them based on the features in the original images. Two decoders are used: one for the original subject's face and one for the target person's face. To make the swap, the decoder trained on images of person X is fed the encoded images of person Y. The result is person X's face reconstructed with person Y's facial expressions and orientation.
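
The shared-encoder, two-decoder arrangement described above can be illustrated with a highly simplified PyTorch sketch. Real deepfake pipelines use much larger convolutional networks plus face alignment and blending, so the layer sizes and image resolution below are illustrative assumptions only.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compresses a 64x64 RGB face crop into a shared latent code."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(3 * 64 * 64, 1024), nn.ReLU(),
            nn.Linear(1024, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs a face from the latent code; one decoder per identity."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 1024), nn.ReLU(),
            nn.Linear(1024, 3 * 64 * 64), nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(z).view(-1, 3, 64, 64)

encoder = Encoder()
decoder_x = Decoder()  # trained to reconstruct person X's face
decoder_y = Decoder()  # trained to reconstruct person Y's face

# Training (not shown) reconstructs each person's frames through the shared
# encoder and that person's own decoder. The swap happens at inference time:
# a frame of person Y is encoded, then decoded with person X's decoder,
# yielding person X's face with person Y's expression and orientation.
frame_of_y = torch.rand(1, 3, 64, 64)    # stand-in for a real video frame
swapped = decoder_x(encoder(frame_of_y))
```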

Currently, it still takes a fair amount of time for a deepfake to be made. The creator of the fake has to spend a long time manually adjusting parameters of the model, as suboptimal parameters will lead to noticeable imperfections and image glitches that give away the fake’s true nature.

Although it's frequently assumed that most deepfakes are made with a type of neural network called a generative adversarial network (GAN), many (perhaps most) deepfakes created these days do not rely on GANs. While GANs played a prominent role in the creation of early deepfakes, most deepfake videos are now created through alternative methods, according to Siwei Lyu from SUNY Buffalo.

It takes a disproportionately large amount of training data to train a GAN, and GANs often take much longer to render an image compared with other image-generation techniques. GANs are also better suited to generating static images than video, as they have difficulty maintaining consistency from frame to frame. It's much more common to use an encoder and multiple decoders to create deepfakes.

What Are Deepfakes Used For?

Many of the deepfakes found online are pornographic in nature. According to research done by Deeptrace, an AI firm, out of a sample of approximately 15,000 deepfake videos collected in September of 2019, roughly 95% were pornographic. A troubling implication of this fact is that, as the technology becomes easier to use, incidents of fake revenge porn could rise.

However, not all deepfakes are pornographic in nature. There are more legitimate uses for deepfake technology. Audio deepfake technology could help people regain their regular voices after those voices are damaged or lost due to illness or injury. Deepfakes can also be used to hide the faces of people who are in sensitive, potentially dangerous situations, while still allowing their lips and expressions to be read. Deepfake technology can potentially be used to improve the dubbing of foreign-language films, aid in the repair of old and damaged media, and even create new styles of art.

Non-Video Deepfakes

While most people think of fake videos when they hear the term “deepfake”, fake videos are by no means the only kind of fake media produced with deepfake technology. Deepfake technology is used to create photo and audio fakes as well. As previously mentioned, GANs are frequently used to generate fake images. It’s thought that there have been many cases of fake LinkedIn and Facebook profiles that have profile images generated with deepfake algorithms.

It's possible to create audio deepfakes as well. Deep neural networks are trained to produce voice clones, or voice skins, of different people, including celebrities and politicians. One famous example of an audio deepfake is when the AI company Dessa made use of an AI model, supported by non-AI algorithms, to recreate the voice of the podcast host Joe Rogan.

How To Spot Deepfakes

As deepfakes become more and more sophisticated, distinguishing them from genuine media will become tougher and tougher. Currently, there are a few telltale signs people can look for to ascertain whether a video is potentially a deepfake, such as poor lip-syncing, unnatural movement, flickering around the edge of the face, and warping of fine details like hair, teeth, or reflections. Other potential signs include mismatched quality between parts of the same video and irregular blinking of the eyes.
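
One of the cues above, irregular blinking, is simple enough to check programmatically. The sketch below computes the eye aspect ratio (EAR), a standard measure that dips sharply when the eye closes, and counts blink events across a clip. How the six eye landmarks per frame are obtained (for example with dlib or MediaPipe) is assumed and not shown here.

```python
import numpy as np

def eye_aspect_ratio(eye):
    """eye: six (x, y) landmarks around one eye; returns the EAR for that frame."""
    eye = np.asarray(eye, dtype=float)
    vertical_1 = np.linalg.norm(eye[1] - eye[5])
    vertical_2 = np.linalg.norm(eye[2] - eye[4])
    horizontal = np.linalg.norm(eye[0] - eye[3])
    return (vertical_1 + vertical_2) / (2.0 * horizontal)

def count_blinks(ear_per_frame, threshold=0.2, min_frames=2):
    """Count blink events in a sequence of per-frame EAR values."""
    blinks, run = 0, 0
    for ear in ear_per_frame:
        if ear < threshold:
            run += 1
        else:
            if run >= min_frames:
                blinks += 1
            run = 0
    return blinks
```

A typical adult blinks roughly 15 to 20 times per minute, so a blink count far outside that range over a long clip is one weak signal, best combined with the other cues listed above.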

While these signs may help one spot a deepfake at the moment, as deepfake technology improves the only option for reliable deepfake detection might be other types of AI trained to distinguish fakes from real media.

Artificial intelligence companies, including many of the large tech companies, are researching methods of detecting deepfakes. Last December, a deepfake detection challenge was launched, supported by three tech giants: Amazon, Facebook, and Microsoft. Research teams from around the world worked on methods of detecting deepfakes, competing to develop the best detection methods. Other groups, like a combined team of researchers from Google and Jigsaw, are working on a type of "face forensics" that can detect videos that have been altered, making their datasets open source and encouraging others to develop deepfake detection methods. The aforementioned Dessa has worked on refining deepfake detection techniques, trying to ensure that the detection models work on deepfake videos found in the wild (out on the internet) rather than just on pre-composed training and testing datasets, like the open-source dataset Google provided.

There are also other strategies being investigated to deal with the proliferation of deepfakes. For instance, one strategy is checking videos for concordance with other sources of information. Searches can be done for video of events potentially taken from other angles, or background details of the video (like weather patterns and locations) can be checked for incongruities. Beyond this, a blockchain online ledger system could register videos when they are initially created, holding their original audio and images so that derivative videos can always be checked for manipulation.
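
The ledger idea mentioned above boils down to registering a cryptographic fingerprint of a file when it is created and re-checking that fingerprint later. The sketch below keeps the "ledger" as an ordinary dictionary for clarity; anchoring the hashes in an actual blockchain or other tamper-evident store is assumed rather than shown.

```python
import hashlib

ledger = {}  # video_id -> hex digest registered at creation time

def fingerprint(path):
    """SHA-256 digest of a media file, read in 1 MB chunks."""
    sha = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            sha.update(chunk)
    return sha.hexdigest()

def register(video_id, path):
    ledger[video_id] = fingerprint(path)

def verify(video_id, path):
    """True only if the file is bit-for-bit identical to the registered original."""
    return ledger.get(video_id) == fingerprint(path)
```

Any edit to a registered file, including a deepfake derivative, changes the digest and fails verification, which is what makes registration at creation time attractive.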

Ultimately, it’s important that reliable methods of detecting deepfakes are created and that these detection methods keep up with the newest advances in deepfake technology. While it is hard to know exactly what the effects of deepfakes will be, if there are not reliable methods of detecting deepfakes (and other forms of fake media), misinformation could potentially run rampant and degrade people’s trust in society and institutions.

Implications of Deepfakes

What are the dangers of allowing deepfakes to proliferate unchecked?

One of the biggest problems that deepfakes create currently is nonconsensual pornography, engineered by combining people’s faces with pornographic videos and images. AI ethicists are worried that deepfakes will see more use in the creation of fake revenge porn. Beyond this, deepfakes could be used to bully and damage the reputation of just about anyone, as they could be used to place people into controversial and compromising scenarios.

Companies and cybersecurity specialists have expressed concern about the use of deepfakes to facilitate scams, fraud, and extortion. Allegedly, deepfake audio has already been used to convince employees of a company to transfer money to scammers.

It’s possible that deepfakes could have harmful effects even beyond those listed above. Deepfakes could potentially erode people’s trust in media generally, and make it difficult for people to distinguish between real news and fake news. If many videos on the web are fake, it becomes easier for governments, companies, and other entities to cast doubt on legitimate controversies and unethical practices.

When it comes to governments, deepfakes may even pose threats to the operation of democracy. Democracy requires that citizens be able to make informed decisions about politicians based on reliable information, and misinformation undermines democratic processes. For example, the president of Gabon, Ali Bongo, appeared in a video attempting to reassure the Gabonese citizenry. The president was assumed to have been unwell for a long period of time, and his sudden appearance in a likely fake video helped kick off an attempted coup. President Donald Trump claimed that an audio recording of him bragging about grabbing women by the genitals was fake, despite also describing it as "locker room talk". Prince Andrew also claimed that an image provided by Emily Maitlis' attorney was fake, though the attorney insisted on its authenticity.

Ultimately, while there are legitimate uses for deepfake technology, there are many potential harms that can arise from the misuse of that technology. For that reason, it’s extremely important that methods to determine the authenticity of media be created and maintained.
