

AI Used in Dating Scams


The majority of internet users probably like to believe they can spot a dating scam from a mile away and simply can’t understand how anybody would fall for such a trick.

But there is a reason a lot of people fall for catfish or internet dating scams, and it is not because they are dumb or desperate.

Most people are aware of the telltale signs of a dating scammer, like asking for cash, never wanting to video chat and sharing very few pictures of themselves.

But scammers are constantly figuring out new ways to make their stories appear more believable and to get people to trust them.

Just take this girl, for instance. She is young and attractive, and it is unlikely many potential love interests would think twice about chatting with her on a dating app.

Most people wouldn’t even question whether this was a real person

But this woman isn’t real. And I don’t mean in the sense that someone has stolen her picture from social media and is using it without her knowledge on dating apps.

She does not exist.

The image was created by a site called ThisPersonDoesNotExist.com, which uses AI technology to randomly generate realistic-looking faces.

Each time you refresh the page, a new “person” is generated.

Even though a single picture on its own may not seem like a very big threat, when you combine it with the constant progress in deepfake technology there is real cause for concern.

Deepfake is an AI-based technology that produces hyper-realistic pictures and videos of situations that never occurred.

We’ve noticed a rise in this technology being used to blackmail people by creating videos of them in sexual or embarrassing scenarios that never happened.

These videos seem so realistic it’s difficult to prove they are fake.

A recent example of the major problems this technology could cause came when a video made the rounds last year of Barack Obama appearing to call Donald Trump a “dipshit”.

There are certain points where it is possible to see blurring or distortion in the video that suggests it isn’t real, but it gives an idea of just how dangerous this technology could be.

Bearing this in mind, there is increasing potential for scammers to use AI-generated pictures to create a whole new person.

Phillip Wang, the man behind the website ThisPersonDoesNotExist.com, told news.com.au he made it to prove a point to friends about AI technology.

“I then decided to share it on an AI Facebook group to raise awareness of the current state of the art of this technology.”

When asked if he had any concerns about people using the pictures to scam others, he said that issue had existed long before the site was created.

“Anyone can download the code and the model and instantly begin generating faces on their own machine,” he said.
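
As a concrete illustration of how low that barrier is, here is a minimal sketch of sampling a face from a pretrained generator. The TorchScript file name is hypothetical, and NVIDIA’s actual StyleGAN release (which powers sites like this) ships its own checkpoint format and loading code, but the core mechanic, feeding a random latent vector to a generator network, really is this simple.

```python
import torch
from torchvision.utils import save_image

# Hypothetical pre-exported TorchScript generator; the real StyleGAN
# release uses its own checkpoint format and loading code instead.
generator = torch.jit.load("stylegan_generator.pt")
generator.eval()

with torch.no_grad():
    z = torch.randn(1, 512)   # a fresh random latent vector = a new "person"
    face = generator(z)       # image tensor, e.g. shape (1, 3, 1024, 1024)

# Generator outputs are typically in [-1, 1]; rescale to [0, 1] and save.
save_image((face + 1) / 2, "fake_face.png")
```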

Mr Wang said creating a website where people could see exactly how easy it was to make a fake person was helping to raise awareness of the consequences this kind of technology could have in the future.

He said it was getting increasingly difficult to tell deepfakes from reality, and it was “beyond something that easy Photoshop forensics can help defeat”.

The technology can even produce realistic pictures of children.

There are an increasing number of instances of deepfakes being used to create fake revenge or celebrity pornography.

Zach, a senior reputation analyst at Internet Removals, an organisation that helps people get sensitive material taken offline, said the organisation first encountered deepfakes in 2017.

“One of our team was alerted to nude images of an A-list celebrity being shared across the internet. We looked it up and there were tonnes of pictures, and we simply couldn’t wrap our heads around how it was being done,” he told news.com.au.

“We didn’t know what we were dealing with. We initially thought it was a group of sick people manually photoshopping each picture, which would take a very long time.”

Unfortunately, there’s very little people can do to protect themselves from becoming targets of these online attacks. And even getting the photographs removed once they are created can be difficult.

“The person who created the image is often protected as they are seen as the author of the work, because the image is technically created by them,” Zach said.

“It can already be a tricky process to get images removed from the internet, but it becomes even tougher when deepfake is involved.”

There are already signs of how scammers are using this technology to their advantage.

Zach said his team came across a scammer on Tinder who invited people to video chat. Ordinarily, this is something a scammer or bot tries to avoid, as the person they’re talking to will realise they aren’t real.

Nevertheless, once the victims accepted the video chat it showed a woman undressing and encouraging the other person to do the same.

The only indication that something was wrong was that the audio didn’t match up with the movement of the woman’s mouth.

One of the first things Zach and his team do when people tell them they think they have fallen for a dating scam is to reverse image search the pictures used by the scammer.

This allows them to see whether the same picture has been used anywhere else on the internet, so they can tell whether the scammer was using someone else’s images.

But with AI-created pictures, the person in the image doesn’t exist, so it can’t be proved that the photos were stolen from anywhere.

But this tactic doesn’t always help even if the pictures are stolen.

“If people steal a photo of a real person and mess around with one or two pixels or the metadata, then it is considered a different picture, and our search can’t pick it up,” Zach said.

“This makes it almost impossible to work out if it is a deepfake or a person’s stolen photograph.”
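
To see why a one- or two-pixel edit can defeat this kind of search, compare an exact hash with a perceptual hash. The sketch below is illustrative only, not Internet Removals’ actual tooling; it assumes the Pillow and imagehash packages and a stand-in file name.

```python
import hashlib

import imagehash
from PIL import Image

original = Image.open("stolen_photo.jpg").convert("RGB")  # hypothetical input
tampered = original.copy()
tampered.putpixel((0, 0), (255, 0, 0))                    # change a single pixel

# Exact (cryptographic) hashes diverge completely after a one-pixel edit...
digest = lambda im: hashlib.md5(im.tobytes()).hexdigest()
print(digest(original) == digest(tampered))               # False

# ...while perceptual hashes barely move (small Hamming distance).
print(imagehash.phash(original) - imagehash.phash(tampered))  # typically 0-2
```

Exact-match systems treat the edited copy as an entirely different image, which is precisely the loophole Zach describes; perceptual hashing narrows it but does not fully close it.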

Another issue is that if it is not immediately obvious someone isn’t real, a lot of people on dating apps don’t even consider that something may be off.

“The people using these dating apps, as much as they say they are there to find love, a number of them are just looking for a sexual encounter,” Zach said.

“So when they begin talking to someone, they aren’t actually thinking with the mindset of ‘is this person real or not’.

“We have never had a client who has matched with someone and then tried to reverse image search them to see if they were who they said they were.”

Zach said people would have to be “increasingly cautious” as this kind of technology was likely to be used a lot more to scam others.

“Any tool that can create these kinds of believable images is a significant disadvantage to dating app users,” he explained.

“We are probably going to begin encountering deepfakes more and more without even realising it.”


Suzie is fascinated by anything machine learning and AI related, and she is a huge proponent of researching ethics in AI. She believes that Universal Basic Income will become inevitable as AI replaces a significant portion of the workforce.


AI to Assist with Selection of Embryo


IF A WOMAN (or non-female-identifying person with a uterus and visions of starting a family) is struggling to conceive and decides to improve their reproductive odds at an IVF clinic, they’ll likely interact with a doctor, a nurse, and a receptionist. They will probably never meet the army of trained embryologists working behind closed lab doors to collect eggs, fertilize them, and develop the embryos bound for implantation.

One of embryologists’ more time-consuming jobs is grading embryos—looking at their morphological features under a microscope and assigning a quality score. Round, even numbers of cells are good. Fractured and fragmented cells, bad. They’ll use that information to decide which embryos to implant first.

It’s more gut than science and not particularly accurate. Newer methods, like pulling off a cell to extract its DNA and test for abnormalities, called preimplantation genetic screening, provide more information. But that tacks on additional costs to an already expensive IVF cycle and requires freezing the embryos until the test results come back. Manual embryo grading may be a crude tool, but it’s noninvasive and easy for most fertility clinics to carry out. Now, scientists say, an algorithm has learned to do all that time-intensive embryo ogling even better than a human.

In new research published today in NPJ Digital Medicine, scientists at Cornell University trained an off-the-shelf Google deep learning algorithm to identify IVF embryos as either good, fair, or poor, based on the likelihood each would successfully implant. This type of AI—the same neural network that identifies faces, animals, and objects in pictures uploaded to Google’s online services—has proven adept in medical settings. It has learned to diagnose diabetic blindness and identify the genetic mutations fueling cancerous tumor growth. IVF clinics could be where it’s headed next.
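
As a rough sketch of what that kind of transfer learning looks like (not the paper’s actual code; the dataset layout and hyperparameters here are assumptions), an off-the-shelf ImageNet backbone can be given a new three-way head for embryo grades:

```python
import tensorflow as tf

# Off-the-shelf ImageNet backbone with a new 3-way classification head.
base = tf.keras.applications.InceptionV3(
    weights="imagenet", include_top=False, pooling="avg")
base.trainable = False  # first train only the new head

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # InceptionV3 expects [-1, 1]
    base,
    tf.keras.layers.Dense(3, activation="softmax"),     # good / fair / poor
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Hypothetical dataset: one subfolder per quality grade of embryo stills.
train = tf.keras.utils.image_dataset_from_directory(
    "embryos/train", image_size=(299, 299), batch_size=32)
model.fit(train, epochs=5)
```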

“All evaluation of the embryo as it’s done today is subjective,” says Nikica Zaninovic, director of the embryology lab at Weill Cornell Medicine, where the research was conducted. In 2011, the lab installed a time-lapse imaging system inside its incubators, so its technicians could watch (and record) the embryos developing in real time. This gave them something many fertility clinics in the US do not have—videos of more than 10,000 fully anonymized embryos that could each be freeze-framed and fed into a neural network. About two years ago, Zaninovic began Googling to find an AI expert to collaborate with. He found one just across campus in Olivier Elemento, director of Weill Cornell’s Englander Institute for Precision Medicine.

For years, Elemento had been collecting all kinds of medical imaging data—MRIs, mammograms, stained slides of tumor tissue—from any colleague who would give it to him, to develop automated systems to help radiologists and pathologists do their jobs better. He’d never thought to try it with IVF but could immediately see the potential. There’s a lot going on in an embryo that’s invisible to the human eye but might not be to a computer. “It was an opportunity to automate a process that is time-consuming and prone to errors,” he says. “Which is something that’s not really been done before with human embryos.”

To judge how their neural net, nicknamed STORK, stacked up against its human counterparts, they recruited five embryologists from clinics on three continents to grade 394 embryos based on images taken from different labs. The five embryologists reached the same conclusion on only 89 embryos, less than a quarter of the total. So the researchers instituted a majority voting procedure—three out of five embryologists needed to agree to classify an embryo as good, fair, or poor. When STORK looked at the same images, it predicted the embryologist majority voting decision with 95.7 percent accuracy. The most consistent volunteer matched results only 70 percent of the time; the least, 25 percent.
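
The majority-vote evaluation itself is simple to express. This toy sketch uses made-up grades and hypothetical model outputs purely to show how a consensus label is derived and how agreement with it would be scored:

```python
from collections import Counter

def majority_label(grades):
    """Return the grade at least 3 of 5 graders chose, or None if no majority."""
    label, votes = Counter(grades).most_common(1)[0]
    return label if votes >= 3 else None

# Three embryos, five embryologists each (made-up grades).
panel = [
    ["good", "good", "fair", "good", "poor"],  # majority: good
    ["fair", "fair", "fair", "poor", "poor"],  # majority: fair
    ["good", "fair", "poor", "good", "fair"],  # no 3-of-5 majority
]
consensus = [majority_label(g) for g in panel]

model_predictions = ["good", "fair", "good"]   # hypothetical STORK outputs
scored = [(p, c) for p, c in zip(model_predictions, consensus) if c is not None]
accuracy = sum(p == c for p, c in scored) / len(scored)
print(consensus, accuracy)                     # ['good', 'fair', None] 1.0
```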

For now, STORK is just a tool embryologists can upload images to and play around with on a secure website hosted by Weill Cornell. It won’t be ready for the clinic until it can pass rigorous testing that follows implanted embryos over time, to see how well the algorithm fares in real life. Elemento says the group is still finalizing the design for a trial that would do that by pitting embryologists against the AI in a small, randomized cohort. Most important is understanding if STORK actually improves outcomes—not just implantation rates but successful, full-term pregnancies. On that score, at least some embryologists are skeptical.

“All this algorithm can do is change the order of which embryos we transfer,” says Eric Forman, medical and lab director at Columbia University Fertility Center. “It needs more evidence to say it helps women get pregnant quicker and safer.” On its own, he worries that STORK might make only a small contribution to improving IVF’s success rate, while possibly inserting its own biases.

In addition to embryo grading, the Columbia clinic uses pre-implantation genetic screening to improve patients’ odds of pregnancy. While not routine, it is offered to everyone. Forman says about 70 percent of the clinic’s IVF cycles include the blastocyst biopsy procedure, which can add a few thousand dollars to a patient’s tab. That’s why he’s most intrigued about what Elemento’s team is cooking up next. They’re training a new set of neural networks to see if they can detect chromosomal abnormalities, like the one that causes Down Syndrome. With an embryo developing under a camera’s watchful gaze, Elemento’s algorithm would monitor the feed for telltale signs of trouble. “We think the patterns of cell division we can capture with these movies could potentially carry information about these defects, which are hidden in just the snapshots,” says Elemento. They’re also looking into using the technique to predict miscarriages.

There’s plenty of room to improve the performance of IVF, and these algorithmic upgrades could make a dent—in the right circumstances. “If it could provide accurate predictions in real time with minimal risk for harm and no additional cost, then I could see the potential to implement AI like this for embryo selection,” says Forman. But there would be barriers to its adoption. Most IVF clinics in the US don’t have one of these fancy time-lapse recording systems because they’re so expensive. And there are a lot of other potential ways to improve embryo viability that could be more affordable—like tailoring hormone treatments and culturing techniques to the different kinds of infertility that women experience. In the end, though, the number one problem IVF clinics contend with is that sometimes there just aren’t enough high-quality eggs, no matter how many cycles a patient goes through. And no AI, no matter how smart, can do anything about that.



Google Employees Sign Petition to Remove Conservative from AI Ethics Panel


Over 1,720 Google employees have signed a petition asking the company to remove Kay Cole James, the president of the Heritage Foundation, from a new Google advisory panel.

The petition says that James’s positions on transgender rights and civil rights should disqualify her from sitting on Google’s new artificial intelligence (AI) ethics board, which was announced last week.

The controversy presents a challenge for Google, which is already facing criticism over a host of other issues.

So far, the company has been publicly silent about the petition as pressure builds, with conservatives demanding that Google’s leadership stand its ground.

Lawmakers and industry watchers told The Hill that James’s inclusion on the AI ethics council was likely an attempt to allay concerns over bias at Google and other online platforms.

Sen. Lindsey Graham, chairman of the Senate Judiciary Committee, weighed in on James’s selection for the ethics panel, telling The Hill it was “good for Google to know they’ve got an issue.”

Neither Google nor James responded to The Hill’s requests for comment.

Google has faced criticism from LGBTQ groups in particular, which pressured the company to remove an app that critics said promoted conversion therapy, a discredited practice built on the idea that someone can change their sexual orientation. Google removed the app last month, but critics noted that the company acted only after an LGBTQ rights group suspended Google from its corporate rankings.

James’s comments about transgender people have now drawn backlash for Google.

James last month called the Equality Act, federal legislation that would enshrine civil rights protections for LGBTQ people, “anything but equality.”

“This bill would… open every female bathroom and sports team to biological males,” James wrote.

The petitioners wrote that her inclusion on the council could signal that Google “values proximity to power over the wellbeing of trans people, other LGBTQ people, and immigrants.”

“That is unacceptable.”

“There’s this attempt to incorporate the views of as many stakeholders as possible, but a total ignorance of the fact that a stakeholder group that questions the validity of nonbinary people, for example, isn’t a plausible, inclusive practice,” Ali Alkhatib, a computer science student at Stanford University and a petition signer, told The Hill.

For conservatives, the petition is ammunition for their claims that Google is hostile to conservative views, and they have rallied to James’s defense.

Sen. Ted Cruz (R-Texas) called the Google worker protest “consistent with a persistent pattern.”

“We have seen Google and all of big tech acting with naked partisan and ideological bias,” Cruz told The Hill. “It is more than ironic that leftists at Google, in the name of inclusivity, are pushing to bar one of the most respected African American women in the country from participating in discussions of policy.”

Google has repeatedly denied claims that its search results are biased against conservatives and has noted that there is no evidence for those allegations. Google CEO Sundar Pichai just last week met with President Trump to discuss “political fairness,” Trump revealed in a tweet.

The Google employees, organising under the name Googlers Against Transphobia and Hate, say the issue is not that James is a conservative, but that she has lobbied against expanded rights for LGBTQ people.

The new AI ethics council, which has fewer than 10 members, is tasked with providing an ethical check on AI technologies as the company pursues new cloud computing business.

Googlers Against Transphobia and Hate say there are civil rights concerns about AI technology, such as research showing that it can misrecognise transgender people and may discriminate against them.

Kate Crawford, co-founder of the AI Now Institute at New York University, said “respecting human rights for everyone should be a basic prerequisite for membership of an ethics board.”

“There’s no greater obligation for major companies making AI tools that affect the lives of countless people,” Crawford said in a statement to The Hill.

The Google protesters wrote that the company must “place representatives from vulnerable communities at the centre of decision-making” about AI technology.

Google so far has not responded to any of the concerns raised about the AI ethics council and James.

Workers have pushed the company on other issues before. Google last year ended its work with the Pentagon on an AI project after criticism from employees about working with the military, and the firm gave up its pursuit of a Pentagon cloud computing contract.

The latest controversy highlights the difficulty of balancing the concerns of Google’s activist workforce with the company’s bottom line.

“This is truly unacceptable, and we expect an on-the-record response from Google,” the petitioners wrote.



Using AI to Target Liver Cancer


A genomics company says it has discovered a way to detect liver cancer linked to hepatitis B months before other methods can pick it up.

The finding is based on a study by Genetron Health and the Chinese Academy of Medical Sciences using a method called HCCscreen, which applies artificial intelligence to blood samples.

The researchers found that the new method could pick up early signs of the cancer in people who had tested negative on traditional alpha-fetoprotein (AFP) and ultrasound tests.

Genetron Health chief executive Wang Sizhen explained early detection was important because it significantly improved the chances of survival.

“The study is a breakthrough in genomics technology and it’s very likely to aid hepatitis B virus carriers, whose risk of liver cancer is much higher,” Wang explained.

The researchers used AI technology to identify biomarkers common in known cases of a type of liver cancer called hepatocellular carcinoma, or HCC.

The team developed the HCCscreen technique to look for those markers and used it on people with hepatitis B who had tested negative for liver cancer in AFP and ultrasound tests.
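
HCCscreen’s actual model is proprietary, but the general shape of the approach described here, training a classifier on blood-derived biomarker features labelled by known cancer status, can be sketched with synthetic stand-in data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_samples, n_markers = 200, 12               # hypothetical cohort and marker panel
X = rng.normal(size=(n_samples, n_markers))  # stand-in for biomarker measurements
y = rng.integers(0, 2, size=n_samples)       # 1 = known HCC case, 0 = control

clf = LogisticRegression(max_iter=1000)
print(cross_val_score(clf, X, y, cv=5).mean())  # ~chance on noise; real markers carry signal

# A positive prediction on a new blood sample would flag the donor for
# follow-up, e.g. the repeat testing and monitoring described in this study.
clf.fit(X, y)
print(clf.predict(X[:1]))
```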

A number of individuals tested positive and were tracked over eight months, with four eventually being diagnosed with early-stage liver cancer.

The four patients had surgery to remove the tumours, and the other 20 in the group had a second HCCscreen test. Wang said all participants in the group of 20 would continue to be tracked.

“This is the very first large-scale prospective study on early identification [of liver cancer],” he said.

The results were published in the Proceedings of the National Academy of Sciences earlier this month.

There are approximately 93 million people with hepatitis B in China, and carriers of the virus have a higher chance of developing liver cancer.

Liver cancer is generally hard to detect in its early stages, and AFP tests and twice-yearly ultrasounds are recommended for high-risk groups such as people with hepatitis B infections or cirrhosis, a scarring of the liver tissue.

However, in China most HCC cases were discovered at a late stage, the authors of the study wrote.

According to the National Cancer Centre, 466,000 people were diagnosed with liver cancer and 422,000 died in 2015 from the disease in China.

Wang said the company aimed to commercialise the technology, but even then it would take time to make it cheap.

“[High-risk] individuals need to have regular screening. This is important for public health but the technology has to be affordable enough to become widespread,” Wang said. “The ultimate goal of the study is to develop a product that people in China can afford.”
