
Deepfakes

Expert Says “Perfectly Real” DeepFakes Will Be Here In 6 Months

The impressive but controversial DeepFakes, images and videos manipulated or generated by deep neural networks, are likely to get both more impressive and more controversial in the near future, according to Hao Li, the Director of the Vision and Graphics Lab at the University of Southern California. Li is a computer vision and DeepFakes expert, and in a recent interview with CNBC he said that “perfectly real” DeepFakes are likely to arrive within half a year.

Li explained that most DeepFakes are still recognizable as fake to the naked eye, and that even the more convincing ones require substantial effort from their creators to appear realistic. However, Li is convinced that as the algorithms grow more sophisticated, DeepFakes that look perfectly real will arrive within six months.

Li initially thought it would take two to three years for extremely convincing DeepFakes to become commonplace, a prediction he made at a recent conference hosted at the Massachusetts Institute of Technology. However, Li revised his timeline after the release of the Chinese app Zao and other recent developments in DeepFake technology. Li explained to CNBC that the techniques needed to create perfectly realistic DeepFakes are more or less the ones already in use, and that the main missing ingredient is simply more training data.

Li and his fellow researchers have been hard at work on DeepFake detection technology, anticipating the arrival of extremely convincing DeepFakes. Li and his colleagues, such as Hany Farid of the University of California, Berkeley, have experimented with state-of-the-art DeepFake algorithms to understand how the technology that creates them works.

Li explained to CNBC:

“If you want to be able to detect deepfakes, you have to also see what the limits are. If you need to build A.I. frameworks that are capable of detecting things that are extremely real, those have to be trained using these types of technologies, so in some ways, it’s impossible to detect those if you don’t know how they work.”

Li and his colleagues are invested in creating tools to detect DeepFakes, acknowledging the potential issues and dangers that the technology poses. They are far from the only group of AI researchers concerned about the possible effects of DeepFakes and interested in creating countermeasures.

Recently, Facebook started a joint partnership with MIT, Microsoft and the University of Oxford to create the DeepFake Detection Challenge, which aims to create tools that can be used to detect when images or videos have been altered. These tools will be open source and usable by companies, media organizations, and governments. Meanwhile, researchers from the University of Southern California’s Information Sciences Institute recently created a series of algorithms that could distinguish fake videos with around 96% accuracy.
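
To give a flavor of what such detection tools do, here is a minimal, purely illustrative sketch of a frame-level video-forgery classifier. It is not the USC ISI team’s published method; the architecture, layer sizes, and aggregation rule are all assumptions made for illustration.

```python
# Illustrative only: a tiny frame-level forgery classifier, NOT the USC ISI method.
# A small CNN scores each frame of a clip and the scores are averaged into a single
# real-vs-fake decision. All layer sizes and the aggregation rule are assumptions.
import torch
import torch.nn as nn

class FrameForgeryDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, 1)  # logit per frame: > 0 leans "fake"

    def forward(self, frames):              # frames: (num_frames, 3, H, W)
        feats = self.features(frames).flatten(1)
        return self.classifier(feats).mean()  # aggregate frame logits over the clip

detector = FrameForgeryDetector()
clip = torch.randn(30, 3, 224, 224)          # 30 dummy frames standing in for a video
prob_fake = torch.sigmoid(detector(clip))    # probability the clip is manipulated
```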

However, Li also explained that the issue with DeepFakes is the way they can be misused, and not the technology itself. Li noted several legitimate possible uses for DeepFake technology, including in the entertainment and fashion industries.

DeepFake techniques have also been used to replicate the facial expressions of people whose faces are obscured in images. Researchers used Generative Adversarial Networks to create an entirely new face that had the same expression as the subject in the original image. The techniques, developed at the Norwegian University of Science and Technology, could help preserve facial expressions during interviews with people who need privacy, such as whistleblowers. Someone else could let their face be used as a stand-in for the person who needs anonymity, while that person’s facial expressions could still be read.
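
As a rough sketch of the general idea, and not the Norwegian team’s actual architecture, the example below builds a conditional generator that turns random noise plus an "expression code" into a brand-new synthetic face carrying that expression, alongside a discriminator used for adversarial training. Every dimension and module name here is an assumption.

```python
# Illustrative only: NOT the NTNU researchers' model. A conditional generator maps
# random noise plus an "expression code" to a brand-new synthetic face carrying that
# expression; a discriminator judges (face, expression) pairs during adversarial
# training. All dimensions and names are assumptions.
import torch
import torch.nn as nn

class ExpressionConditionedGenerator(nn.Module):
    def __init__(self, noise_dim=64, expr_dim=16, img_size=64):
        super().__init__()
        self.img_size = img_size
        self.net = nn.Sequential(
            nn.Linear(noise_dim + expr_dim, 256), nn.ReLU(),
            nn.Linear(256, img_size * img_size * 3), nn.Tanh(),  # pixels in [-1, 1]
        )

    def forward(self, noise, expr_code):
        img = self.net(torch.cat([noise, expr_code], dim=1))
        return img.view(-1, 3, self.img_size, self.img_size)

class Discriminator(nn.Module):
    def __init__(self, expr_dim=16, img_size=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(img_size * img_size * 3 + expr_dim, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1),                                   # real/fake logit
        )

    def forward(self, img, expr_code):
        return self.net(torch.cat([img.flatten(1), expr_code], dim=1))

G, D = ExpressionConditionedGenerator(), Discriminator()
noise = torch.randn(8, 64)
expr = torch.randn(8, 16)       # stand-in for an expression embedding of the subject
fake_faces = G(noise, expr)     # new faces that share the subject's expression
realism_scores = D(fake_faces, expr)
```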

As DeepFake technology grows more sophisticated, its legitimate use cases will increase as well. However, so will the dangers, which makes the detection work being done by Li and others all the more important.

Cybersecurity

Deep Learning Used to Trick Hackers

A group of computer scientists at the University of Texas at Dallas has developed a new approach for defending against cyberattacks. Rather than blocking hackers, they entice them in.

The newly developed method is called DEEP-Dig (DEcEPtion DIGging), and it lures hackers into a decoy site so that the computer can learn their tactics. The computer is then trained on that information in order to recognize and stop future attacks.
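
In the spirit of that idea, here is a minimal sketch, not the UT Dallas implementation, of how requests captured on a decoy could become labeled training data for an intrusion detector. The example requests, feature choices, and classifier are all assumptions made for illustration.

```python
# Illustrative sketch only, not the UT Dallas DEEP-Dig code: requests captured on a
# decoy host are labeled "attack" for free, ordinary traffic is labeled "benign", and
# a simple classifier is trained on both. The requests and features are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

decoy_requests = [                              # whatever intruders did in the decoy
    "GET /admin.php?id=1' OR '1'='1",           # hypothetical SQL-injection probe
    "GET /../../etc/passwd",                    # hypothetical path traversal
]
benign_requests = [
    "GET /index.html",
    "POST /api/login username=alice",
]

X = decoy_requests + benign_requests
y = [1] * len(decoy_requests) + [0] * len(benign_requests)   # 1 = attack, 0 = benign

detector = make_pipeline(TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5)),
                         LogisticRegression())
detector.fit(X, y)

# New traffic can now be scored; high scores would be flagged or rerouted for review.
suspicious = "GET /search?q=' UNION SELECT password FROM users"
print(detector.predict_proba([suspicious])[0][1])
```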

The UT Dallas researchers presented their paper, “Improving Intrusion Detectors by Crook-Sourcing,” at the annual Computer Security Applications Conference in Puerto Rico in December. The group also presented “Automating Cyberdeception Evaluation with Deep Learning” at the Hawaii International Conference on System Sciences in January.

DEEP-Dig is part of an increasingly popular cybersecurity field called deception technology. As the name suggests, this field relies on traps set for hackers. The researchers hope the approach will prove particularly useful for defense organizations.

Dr. Kevin Hamlen, Eugene McDermott Professor of Computer Science at UT Dallas, sees the attackers differently.

“There are criminals trying to attack our networks all the time, and normally we view that as a negative thing,” he said. “Instead of blocking them, maybe what we could be doing is viewing these attackers as a source of free labor. They’re providing us data about what malicious attacks look like. It’s a free source of highly prized data.”

This new approach aims to solve some of the major problems associated with using artificial intelligence (AI) for cybersecurity. One of those problems is a shortage of the data needed to train computers to detect hackers, a shortage caused largely by privacy concerns. According to Gbadebo Ayoade MS’14, PhD’19, who presented the findings at the conferences and is now a data scientist at Procter & Gamble Co., better data means a better ability to detect attacks.

“We’re using the data from hackers to train the machine to identify an attack,” said Ayoade. “We’re using deception to get better data.”

According to Hamlen, hackers typically begin with their simplest tricks and then become progressively more sophisticated. Most cyber defense programs in use today attempt to disrupt intruders immediately, so the intruders’ techniques are never learned. DEEP-Dig instead pushes hackers into a decoy site full of disinformation so that their techniques can be observed. According to Dr. Latifur Khan, professor of computer science at UT Dallas, the decoy site appears legitimate to the hackers.

“Attackers will feel they’re successful,” Khan said.
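
The routing idea behind that illusion can be sketched very simply. This is not Hamlen and Khan’s actual gateway; the host names, suspicion score, and threshold below are invented for illustration. Rather than dropping a suspicious request, the sketch forwards it to a decoy host seeded with disinformation and records everything the intruder does there.

```python
# Illustrative sketch only, not the actual DEEP-Dig gateway. Instead of dropping a
# suspicious request, it is forwarded to a decoy host seeded with disinformation, and
# everything the intruder does there is logged as future training data. Host names,
# the suspicion score, and the threshold are invented for illustration.
DECOY_BACKEND = "decoy.internal"   # looks like production, but contains only fake data
REAL_BACKEND = "app.internal"

attack_log = []                    # observations that later become labeled training data

def route_request(request: str, suspicion_score: float) -> str:
    """Send likely intruders to the decoy so their techniques can be observed."""
    if suspicion_score > 0.8:
        attack_log.append(request)  # free, highly prized data about attacker behavior
        return DECOY_BACKEND        # the attacker still "feels successful"
    return REAL_BACKEND

print(route_request("GET /../../etc/passwd", suspicion_score=0.95))  # -> decoy.internal
```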

Cyberattacks are a major concern for governmental agencies, businesses, nonprofits, and individuals. According to a report to the White House from the Council of Economic Advisers, the attacks cost the U.S. economy more than $57 billion in 2016.

DEEP-Dig could play a major role in helping defense tactics evolve alongside hacking techniques. Intruders could try to disrupt the method if they realize they have entered a decoy site, but Hamlen is not overly concerned.

“So far, we’ve found this doesn’t work. When an attacker tries to play along, the defense system just learns how hackers try to hide their tracks,” Hamlen said. “It’s an all-win situation — for us, that is.”

Other researchers involved in the work include Frederico Araujo PhD’16, research scientist at IBM’s Thomas J. Watson Research Center; Khaled Al-Naami PhD’17; Yang Gao, a UT Dallas computer science graduate student; and Dr. Ahmad Mustafa of Jordan University of Science and Technology.

The research was partly supported by the Office of Naval Research, the National Security Agency, the National Science Foundation, and the Air Force Office of Scientific Research.

 

Cybersecurity

Startups Creating AI Tools To Detect Email Harassment

Since the Me Too movement came to prominence in late 2017, more and more attention has been paid to incidents of sexual harassment, including workplace harassment and harassment through email or instant messaging.

As reported by The Guardian, AI researchers and engineers have been creating tools, dubbed MeTooBots, to detect harassment in text communications. MeTooBots are being implemented by companies around the world in order to flag potentially harmful and harassing communications. One example is a bot created by the company Nex AI, which is currently being used by around 50 different companies. The bot utilizes an algorithm that examines company documents, chats, and emails and compares them to its training data of bullying or harassing messages. Messages deemed potentially harassing or harmful can then be sent to an HR manager for review, although Nex AI has not revealed the specific terms the bot looks for in the communications it analyzes.
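
As a rough illustration of that kind of pipeline, and not Nex AI’s actual product, the sketch below trains a small text classifier on labeled examples of harassing and benign messages and escalates anything scoring above a threshold for HR review. The example messages and the threshold are invented.

```python
# Illustrative sketch only, not Nex AI's product: a small text classifier is trained on
# labeled examples of harassing and benign messages, and anything scoring above a
# threshold is queued for HR review. The example messages and threshold are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

training_messages = [
    "you're useless and everyone knows it",     # harassing (label 1)
    "meet me alone after work, or else",        # harassing (label 1)
    "can you send the Q3 report by Friday?",    # benign (label 0)
    "great job on the presentation today",      # benign (label 0)
]
labels = [1, 1, 0, 0]

classifier = make_pipeline(TfidfVectorizer(), MultinomialNB())
classifier.fit(training_messages, labels)

def flag_for_hr_review(message: str, threshold: float = 0.7) -> bool:
    """Return True if the message should be escalated to an HR manager for review."""
    prob_harassing = classifier.predict_proba([message])[0][1]
    return prob_harassing >= threshold
```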

Other startups have also created AI-powered harassment detection tools. The AI startup Spot has developed a chatbot that enables employees to anonymously report allegations of sexual harassment. The bot asks questions and gives advice in order to collect more details and further an investigation into the incident. Spot wants to help HR teams deal with harassment issues in a sensitive manner while still preserving anonymity.

According to The Guardian, Brian Subirana, an AI professor at MIT and Harvard, explained that attempts to use AI to detect harassment have their limitations. Harassment can be very subtle and hard to pick up, frequently manifesting only as a pattern that emerges when examining weeks of data. Bots also cannot, as of yet, go beyond the detection of certain trigger words to analyze the broader interpersonal or cultural dynamics that could be at play. Despite these complexities, Subirana does believe that bots could play a role in combating online harassment. He could see the bots being used to train people to recognize harassment when they see it, creating a database of potentially problematic messages. Subirana also suggested there could be a placebo effect that makes people less likely to harass their colleagues if they suspect their messages may be scrutinized, even when they are not.

While Subirana believes that bots have potential uses in combating harassment, he also argued that the confidentiality and privacy of data are a major concern, stating that such technology could create an atmosphere of distrust and suspicion if misused. Sam Smethers, chief executive of the women’s rights NGO the Fawcett Society, also expressed concern about how the bots could be misused. Smethers stated:

“We would want to look carefully at how the technology is being developed, who is behind it, and whether the approach taken is informed by a workplace culture that is seeking to prevent harassment and promote equality, or whether it is in fact just another way to control their employees.”

Methods of using bots to detect harassment while still protecting anonymity and privacy will have to be worked out between bot developers, companies, and regulators. One possible way to use the predictive power of bots and AI while safeguarding privacy is to keep communications anonymous. For instance, the bot could generate reports that record only the presence of potentially harmful language and how often it appears. HR could then see whether uses of toxic language are dropping following awareness seminars or, conversely, determine whether they should be on the lookout for increased harassment.
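
A minimal sketch of that anonymized reporting idea might look like the following; the watch list of terms is purely hypothetical, and no sender or recipient identities are stored.

```python
# Illustrative sketch of the anonymized reporting idea: only the presence and frequency
# of flagged language is counted, with no sender or recipient identities retained, so HR
# can track trends before and after an awareness seminar. The watch list is hypothetical.
from collections import Counter

FLAGGED_TERMS = {"useless", "stupid", "or else"}   # hypothetical watch list

def monthly_report(messages):
    """Aggregate counts of flagged terms without storing who said what to whom."""
    counts = Counter()
    for text in messages:
        lowered = text.lower()
        for term in FLAGGED_TERMS:
            if term in lowered:
                counts[term] += 1
    return dict(counts)   # e.g. {"useless": 3, "or else": 1}
```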

Despite the disagreement over appropriate uses of machine learning algorithms and bots in detecting harassment, both sides seem to agree that the ultimate decision to intervene in cases of harassment should be made by a human, and that bots should only ever alert people to matched patterns rather than definitively labeling something as harassment.

Cybersecurity

AI Security Monitoring & Job Recruitment Companies Raise Funds

VentureBeat reports on two sizable new funding rounds for startups developing artificial intelligence. Umbo Computer Vision (UCV) provides autonomous video security systems to businesses, while Xor is developing an AI chatbot platform for recruiters and job seekers. Both startups are based in San Francisco; UCV, a joint venture with Taiwan, also has bases there and in the UK. UCV raised $8 million for its AI-powered video security, while Xor raised $8.4 million for its platform.

Xor’s capital infusion came after a year in which the startup tripled its sales in the US, “reaching $2 million in annual recurring revenue and closing deals with over 100 customers in 15 countries, including ExxonMobil, Ikea, Baxter Personnel, Heineken, IBS, Aldi, Hoff, McDonald’s, and Mars.” As the company co-founder and CEO Aida Fazylova explains, she “started the company to let recruiters focus on the human touch — building relationships, interviewing candidates, and attracting the best talent to their companies. Meanwhile, AI takes care of repetitive tasks and provides 24/7 personalized service to every candidate. We are proud to get support from SignalFire and other amazing investors who help us drive our mission to make the recruitment experience better and more transparent for everyone.”

Xor’s chatbot “automates tedious job recruitment tasks, like scheduling interviews; sorting applications; and responding to questions via email, text, and messaging apps like Facebook Messenger and Skype. The eponymous Xor — which is hosted on Microsoft’s Azure — draws on over 500 sources for suitable candidates and screens those candidates autonomously, leveraging 103 different languages and algorithms trained on 17 different HR and recruitment data sets.”

According to Grand View Research, the chatbot market is expected to reach $1.23 billion by 2025, while Gartner predicts that chatbots will power 85% of all customer service interactions by the year 2020.

For its part, Umbo develops “software, hardware, and AI smarts that can detect and identify human behaviors related to security, such as intrusion, tailgating (when an unauthorized individual follows someone into private premises), and wall-scaling.”

The company says it has developed its AI systems entirely in-house, and its system incorporates three components: “AiCameras,” which are built in-house and feature built-in AI chips, connecting directly to the cloud to bypass servers and video-recording intermediaries such as NVRs or DVRs; “Light,” AI-powered software for detecting and issuing alerts on human-related security actions; and “TruePlatform,” a centralized platform where businesses can monitor and manage all their cameras, users, and security events. As Shawn Guan, Umbo’s cofounder and CEO, points out, the company launched Umbo Light, “which implemented feedback that we gathered from our customers about what their primary wants from video security systems were. This allowed us to design and deliver a system based on the needs of those who use it most.”

The global video surveillance market, which now relies heavily on AI, was pegged at $28 billion in 2017 and is expected to grow to more than $87 billion by 2025.

 
