The majority of internet users probably like to believe they can spot a dating scam from a mile away and simply can’t understand how anybody would fall for such a trick.
But there is a reason a lot of people fall for catfish or internet dating scams, and it is not because they are dumb or desperate.
Most people are aware of the telltale signs of a dating scammer: asking for money, never wanting to video chat and sharing very few pictures of themselves. But scammers are constantly finding new ways to make their stories seem more believable and to get people to trust them.
Just take this girl, for instance. She is young and attractive, and it is unlikely many potential love interests would think twice about chatting with her on a dating app.
Most people wouldn’t even question whether she was a real person.
But this woman isn’t real. And I don’t mean in the sense that someone has stolen her picture from social media and is using it without her knowledge on dating apps.
She does not exist.
The image was created by a site called ThisPersonDoesNotExist.com, which uses AI technology to randomly generate realistic-looking faces.
Each time you refresh the page, a new “person” is generated.
Even though a single picture on its own might not seem like a big threat, when you combine it with the constant progress in deepfake technology there is real cause for concern.
Deepfake is an AI-based technology that produces hyper-realistic pictures and videos of situations that never occurred.
We have seen a rise in this technology being used to blackmail people by creating videos of them in sexual or embarrassing scenarios that never happened.
These videos seem so realistic it’s difficult to prove they are fake.
A recent example of the major problems this technology could cause came when a video made the rounds last year of Barack Obama appearing to call Donald Trump a “dipshit”.
There are certain points where you can see blurring or distortion in the video that suggest it isn’t real, but it gives an idea of just how dangerous this technology could be.
With this in mind, there is growing potential for scammers to use AI-generated pictures to create an entirely new person.
Phillip Wang, the man behind ThisPersonDoesNotExist.com, told news.com.au he made it to prove a point to friends about AI technology.
“I then decided to share it on an AI Facebook group to raise awareness of the current state of the art for this technology.”
When asked if he had any concerns about people using the pictures to scam other people, he said that issue already existed long before the site was created.
“Anyone can download the code and the model and instantly begin generating faces on their own machine,” he said.
Mr Wang said creating a website where people could see just how easy it was to make a fake person was helping to raise awareness of the consequences this kind of technology could have in the future.
He said it was getting increasingly difficult to tell deepfakes from reality, and it was “beyond something that easy photoshop forensics can help defeat”.
The technology can even produce realistic pictures of children.
There are a growing number of instances of deepfakes being used to make fake revenge or celebrity pornography.
Zach, a senior reputation analyst at Internet Removals, an organisation that helps people get sensitive material taken offline, said they first encountered deepfakes in 2017.
“One of our team was alerted to nude images of an A-list celebrity being shared across the internet. We looked it up and there were tonnes of pictures, and we simply couldn’t wrap our heads around how it was being done,” he told news.com.au.
“We didn’t know what we were dealing with. We initially thought it was a group of sick people manually photoshopping each picture, which would take a very long time.”
Unfortunately, there is very little people can do to protect themselves from becoming targets of these online attacks. And even getting the photographs removed once they have been created can be difficult.
“The person who created the image is often protected, as they are seen as the author of the work because the image was technically created by them,” Zach said.
“It can already be a tricky process to get images removed from the internet, but it becomes even tougher when deepfake is involved.”
There are already signs of scammers using this technology to their advantage.
Zach said his team came across a scammer on Tinder who invited people to video chat. Ordinarily, this is something a scammer or bot tries to avoid, as the person they are talking to will realise they aren’t real.
However, once the target accepted the video chat it showed a woman undressing and encouraging the other person to do the same.
The only sign something was wrong was that the audio didn’t match the movement of the woman’s mouth.
One of the first things Zach and his team do when people tell them they think they have fallen for a dating scam is reverse image search the pictures used by the scammer.
This allows them to see whether the same picture has been used anywhere else on the internet, so they can tell whether the scammer was using someone else’s images.
But with AI-created pictures, the person in the image doesn’t exist, so it can’t be proved the photos were stolen from anywhere.
But this tactic doesn’t always help even if the pictures are stolen.
“If people steal a photo of a real person and mess around with one or two pixels or the metadata, then it is considered a different picture, and our search can’t pick it up,” Zach said.
“This makes it almost impossible to work out if it is a deepfake or someone’s stolen photograph.”
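Zach’s point about pixel tweaks is essentially the gap between exact file matching and perceptual matching. As a rough illustration, here is a generic “average hash” sketch (not the actual tooling Internet Removals or any search engine uses): changing a pixel or two makes the files differ byte-for-byte, yet often leaves a perceptual hash unchanged.

```python
# Toy demonstration: exact comparison fails after a tiny pixel tweak,
# but a simple perceptual "average hash" still matches.

def average_hash(pixels):
    """Hash a 2D grid of grayscale values: 1 if a pixel is above the mean."""
    flat = [v for row in pixels for v in row]
    mean = sum(flat) / len(flat)
    return [1 if v >= mean else 0 for v in flat]

def hamming(h1, h2):
    """Number of differing bits between two hashes."""
    return sum(a != b for a, b in zip(h1, h2))

# A synthetic 8x8 grayscale "image" standing in for a profile photo
original = [[10 * r + c for c in range(8)] for r in range(8)]
tweaked = [row[:] for row in original]
tweaked[0][0] += 2  # "mess around with one or two pixels"

print(original == tweaked)                                     # False: exact match fails
print(hamming(average_hash(original), average_hash(tweaked)))  # 0: hashes still match
```

Real services use far more robust variants of this idea, but the asymmetry is the same: trivial edits defeat naive matching while barely moving a perceptual fingerprint.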
Another issue is that if it is not immediately obvious someone isn’t real, a lot of people on dating apps don’t even consider that something may be off.
“The people using these dating apps, as much as they say they are there to find love, a number of them are just looking for a sexual encounter,” Zach said.
“So when they begin talking to someone, they aren’t really thinking with the mindset of ‘is this person real or not’.
“We have never had a client who has matched with someone and then tried to reverse image search them to see if they were who they said they were.”
Zach said people would have to be “increasingly cautious”, as this kind of technology was likely to be used a lot more to scam others.
“Any tool that can create these kinds of believable images is a significant disadvantage to dating app users,” he explained.
“We are probably going to start encountering deepfakes more and more without even realising it.”
Big Developments Bring Us Closer to Fully Untethered Soft Robots
Researchers from the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) and Caltech have developed new soft robotic systems, inspired by origami, that can move and change shape in response to external stimuli. The work brings us closer to fully untethered soft robots: most soft robots today rely on external power and control, so they have to be tethered to off-board systems through hard components.
The research was published in Science Robotics. Jennifer A. Lewis, a Hansjorg Wyss Professor of Biologically Inspired Engineering at SEAS and co-lead author of the study, spoke about the new developments.
“The ability to integrate active materials within 3D-printed objects enables the design and fabrication of entirely new classes of soft robotic matter,” she said.
The researchers used origami as a model for multifunctional soft robots. Through sequential folds, origami can take on multiple shapes and functionalities while remaining a single structure. The team used liquid crystal elastomers, which change shape when exposed to heat, and 3D-printed two types of soft hinges that fold at different temperatures and can be programmed to fold in a specific order.
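The fold-sequencing idea can be pictured with a toy model: each hinge has an activation temperature and folds once the surface it touches is hot enough, so hinges with lower thresholds fold first. The names, thresholds and angles below are made-up placeholders, not values from the paper.

```python
from dataclasses import dataclass

@dataclass
class Hinge:
    name: str
    fold_temp_c: float    # hypothetical activation threshold
    fold_angle_deg: float # hypothetical fold angle once activated
    folded: bool = False

def step(hinges, surface_temp_c):
    """Fold every not-yet-folded hinge whose threshold the surface reaches;
    return the names of the hinges that folded this step."""
    folded_now = []
    for h in hinges:
        if not h.folded and surface_temp_c >= h.fold_temp_c:
            h.folded = True
            folded_now.append(h.name)
    return folded_now

hinges = [Hinge("body", 100.0, 72.0), Hinge("paddle", 200.0, 90.0)]
print(step(hinges, 150.0))  # only the low-threshold hinge folds
print(step(hinges, 200.0))  # the high-threshold hinge folds next
```

Sequencing falls out of the thresholds alone: heat the sheet gradually and the hinges fold in threshold order, which is the sense in which the folding order is "programmed" into the material.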
Arda Kotikan is a graduate student at SEAS and the Graduate School of Arts and Sciences and the co-first author of the paper.
“With our method of 3D printing active hinges, we have full programmability over temperature response, the amount of torque the hinges can exert, their bending angle, and fold orientation. Our fabrication method facilitates integrating these active components with other materials,” she said.
Connor McMahan is a graduate student at Caltech and co-first author of the paper as well.
“Using hinges makes it easier to program robotic functions and control how a robot will change shape. Instead of having the entire body of a soft robot deform in ways that can be difficult to predict, you only need to program how a few small regions of your structure will respond to changes in temperature,” he said.
The team of researchers built multiple soft devices. One of these was an untethered soft robot called “Rollbot.” It starts as a flat sheet 8 centimeters long and 4 centimeters wide. When it comes into contact with a hot surface of around 200°C, one set of hinges folds and shapes the robot into a pentagonal wheel.
On each of the five sides of the wheel, there are more sets of hinges that fold when in contact with a hot surface.
“Many existing soft robots require a tether to external power and control systems or are limited by the amount of force they can exert. These active hinges are useful because they allow soft robots to operate in environments where tethers are impractical and to lift objects many times heavier than the hinges,” said McMahan.
This research that was conducted focused solely on temperature responses. In the future, the liquid crystal elastomers will be studied further as they are also able to respond to light, pH, humidity, and other external stimuli.
“This work demonstrates how the combination of responsive polymers in an architected composite can lead to materials with self-actuation in response to different stimuli. In the future, such materials can be programmed to perform ever more complex tasks, blurring the boundaries between materials and robots,” said Chiara Daraio, Professor of Mechanical Engineering and Applied Physics at Caltech and co-lead author of the study.
The research included co-authors Emily C. Davidson, Jalilah M. Muhammad, and Robert D. Weeks. The work was supported by the Army Research Office, Harvard Materials Research Science and Engineering Center through the National Science Foundation, and the NASA Space Technology Research Fellowship.
Modeling Artificial Neural Networks (ANNs) On Animal Brains
Cold Spring Harbor Laboratory (CSHL) neuroscientist Anthony Zador has argued that evolution and animal brains can serve as inspiration for machine learning. With this approach, neuroscientists and those working in the AI field have a new way of tackling some of AI’s most pressing problems.
Anthony Zador, M.D., Ph.D., has dedicated much of his career to mapping the complex neural networks within the living brain, down to the individual neuron. Early in his career, however, his focus was different: he studied artificial neural networks (ANNs), computing systems loosely modeled on the networks in animal and human brains that underpin much of the progress in the AI sector. Until now, that loose analogy is roughly where the borrowing from biology stopped.
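To make the analogy concrete, here is a minimal textbook-style ANN: each artificial “neuron” computes a weighted sum of its inputs and passes it through a nonlinearity, a crude cartoon of how a biological neuron integrates signals. The weights below are arbitrary placeholders, not a trained model.

```python
import math

def neuron(inputs, weights, bias):
    """Weighted sum of inputs plus bias, squashed by a sigmoid nonlinearity."""
    z = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

def tiny_ann(x):
    """Two inputs -> two hidden neurons -> one output in (0, 1).
    All weights are illustrative placeholders."""
    h1 = neuron(x, [0.5, -0.4], 0.1)
    h2 = neuron(x, [-0.3, 0.8], 0.0)
    return neuron([h1, h2], [1.0, 1.0], -0.5)

print(tiny_ann([1.0, 0.0]))  # some value strictly between 0 and 1
```

Training consists of adjusting those weights to reduce error on examples; it is this weight-tuning machinery, rather than any innate wiring, that most of today’s AI progress has come from.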
A recent perspective piece by Zador was published in Nature Communications. In it, Zador detailed how new and improved learning algorithms have let AI systems greatly outperform humans at a variety of tasks and games, such as chess and poker. Yet even computers that excel at these complex problems are often confounded by things we humans would consider simple.
If those working in this field were able to solve this problem, robots could reach a point in development where they could learn to do extremely natural and organic things such as stalking prey or building a nest. They could even do something like washing the dishes, which has proven to be extremely difficult for robots.
“The things that we find hard, like abstract thought or chess-playing, are actually not the hard thing for machines. The things that we find easy, like interacting with the physical world, that’s what’s hard,” Zador explained. “The reason that we think it’s easy is that we had half a billion years of evolution that has wired up our circuits so that we do it effortlessly.”
Zador thinks that if we want robots to achieve quick learning, something that would change everything in the sector, we should not look only for a perfected general learning algorithm. Instead, scientists should look to the biological neural networks handed to us by nature and evolution. These could serve as a base for quick and easy learning of specific types of tasks, the tasks that matter for survival.
Zador talks about what we can learn from squirrels living in our own backyards if we just looked at genetics, neural networks, and genetic predisposition.
“You have squirrels that can jump from tree to tree within a few weeks after birth, but we don’t have mice learning the same thing. Why not?” Zador said. “It’s because one is genetically predetermined to become a tree-dwelling creature.”
Zador believes genetic predisposition gives an animal innate circuitry that guides its early learning. One problem with carrying this over to AI is that the networks used in machine learning are much more generalized than the ones found in nature.
If ANNs reach a point in development where they can be modeled after what we see in nature, robots could begin to do tasks that were once extremely difficult.
California Start-Up Cerebras Has Developed World’s Biggest Chip For AI
California start-up Cerebras has developed the world’s biggest computer chip to be used to train AI systems. It is set to be revealed after being in development for four years.
Bucking the normal progression of chips getting smaller, the new chip developed by Cerebras has a surface area bigger than an iPad. It is more than 80 times larger than any competitor’s, and it consumes a large amount of electricity.
The new development reflects the astounding amount of computing power now being devoted to AI. Part of that picture is the $1bn investment from Microsoft into OpenAI announced last month. OpenAI is trying to develop artificial general intelligence (AGI), which would be a giant leap forward and change much of what we know.
Cerebras is unique in this field because of the enormous size of its chip. Other companies work endlessly to create extremely small chips, and most advanced systems today are assembled from many of them. According to Patrick Moorhead, a US chip analyst, Cerebras has essentially put an entire computing cluster on a single chip.
Cerebras is looking to join the likes of Intel, Habana Labs and the UK start-up Graphcore, which are all building a new generation of specialized AI chips. This development is reaching its biggest stage yet as the companies prepare to deliver their first chips to customers by the end of the year. Among them, Cerebras hopes to become the go-to supplier for the massive computing tasks run by the largest internet companies.
There are many more companies and start-ups in this space, including Graphcore, Wave Computing and the China-based start-up Cambricon. They are all looking to develop specialized AI chips for inference: taking a trained AI system and applying it in real-world scenarios.
Normally, it takes a long time for the development process to finish and for actual products to be shipped to people and companies. According to the Linley Group, a US chip research firm, there are many time-consuming technical issues. Although products take a while to develop, interest in these companies remains strong: Cerebras has raised over $200m in venture capital and, as of late last year, was valued at about $1.6bn. Global revenue from deep learning chipsets is projected to grow substantially.
The reason that these companies are focusing on this type of processor for AI is because of the huge amounts of data that are needed in order to train neural networks. Those neural networks are then used in deep-learning systems and are responsible for things such as image recognition.
The chip from Cerebras is made from a single circular silicon wafer 300mm in diameter, the largest disc produced in current chip factories. The norm is to split such wafers into many individual chips rather than keep one giant one; anyone who tried before ran into problems laying circuitry across something so big. Cerebras got past this by connecting the different sectors on the wafer, so they can communicate with each other and act as one big processor.
Looking ahead, Cerebras will try to link cores in a matrix pattern so they can communicate with one another, connecting 400,000 cores while keeping all of the processing on a single chip.
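The matrix layout described here can be sketched as a grid of cores, each linked to its immediate neighbours so data hops across the wafer without leaving the chip. The 4x4 grid below is a toy placeholder (the real target is 400,000 cores), and this is only a schematic of a mesh topology, not Cerebras’s actual design.

```python
# Toy mesh-of-cores model: each core at grid position (r, c) is linked
# to its up/down/left/right neighbours that exist on the grid.

def neighbours(r, c, rows, cols):
    """Grid coordinates of the cores directly adjacent to (r, c)."""
    steps = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    return [(r + dr, c + dc) for dr, dc in steps
            if 0 <= r + dr < rows and 0 <= c + dc < cols]

rows, cols = 4, 4  # placeholder size
links = {(r, c): neighbours(r, c, rows, cols)
         for r in range(rows) for c in range(cols)}

print(len(links[(0, 0)]))  # a corner core has 2 links
print(len(links[(1, 1)]))  # an interior core has 4 links
```

In a mesh like this, keeping communication on-wafer is the point: neighbouring cores exchange data over short local links instead of going through off-chip memory or a network.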
It will be exciting to see these developments move forward with Cerebras and other companies continuing to advance our AI systems.