Since the earliest deepfake detection solutions began to emerge in 2018, the computer vision and security research sector has been seeking to define an essential characteristic...
New research from Australia suggests that our brains are adept at recognizing sophisticated deepfakes, even when we consciously believe that the images we’re seeing are real...
A new collaboration between a researcher from the United States’ National Security Agency (NSA) and the University of California at Berkeley offers a novel method for...
New research from the US indicates that pretrained language models (PLMs) such as GPT-3 can be successfully queried for real-world email addresses that were included in...
New research from Canada offers a possible method by which attackers could steal the fruits of expensive machine learning frameworks, even when the only access to...
Two recent research papers from the US and China have proposed a novel solution for teeth-based authentication: just grind or bite your teeth a bit, and...
A new research collaboration between the University of Wisconsin and Google sets machine learning against one of the most notorious web user annoyances of the last...
A new paper from researchers in Italy and Germany proposes a method to detect deepfake videos based on biometric face and voice behavior, rather than artifacts...
New research from the United States and Qatar offers a novel method for identifying fake news that has been written in the way that humans actually...
A joint academic research project from the United States has developed a method to foil CAPTCHA tests, reportedly outperforming similar state-of-the-art machine learning solutions by using...
A research collaboration, including contributors from NVIDIA and MIT, has developed a machine learning method that can identify hidden people simply by observing indirect illumination on...
A new research paper from Germany discloses that NVIDIA has confirmed a hardware vulnerability that allows an attacker to gain privileged control of code execution for...
Researchers in the US have developed an adversarial attack against the ability of machine learning systems to correctly interpret what they see – including mission-critical items...
Removing a particular piece of data that contributed to a machine learning model is like trying to remove the second spoonful of sugar from a cup...
Researchers from China have used the ‘black box’ nature of neural networks to devise a novel method for malicious botnets to communicate with their Command and...