
AI News

Facebook AI Fails to Detect Murder


Facebook has given another update on the measures it has taken, and what more it is doing, in the wake of the livestreamed video of a gun massacre by a terrorist who killed 50 people in two mosques in Christchurch, New Zealand.

Earlier this week the company said the video of the killings was viewed fewer than 200 times during the livestream broadcast itself, and roughly 4,000 times before it was removed from Facebook — with the stream not reported to Facebook until 12 minutes after it had ended.

None of the users who watched the killings unfold on the company’s platform in real time apparently reported the stream to the company, according to the company.

It also previously said it removed 1.5 million versions of the video from its site in the first 24 hours after the livestream, with 1.2 million of those caught at the point of upload — meaning it failed to stop 300,000 uploads at that point. We found other versions of the video still circulating on its platform 12 hours later.

In the wake of the livestreamed terror attack, Facebook has continued to face calls from world leaders to do more to ensure such content cannot be distributed on its platform.

New Zealand’s prime minister, Jacinda Ardern, told media yesterday the video “should not be distributed, available, able to be viewed”, dubbing it “horrendous”.

She confirmed Facebook had been in contact with her government but emphasized that, in her view, the company has not done enough.

She also later told the New Zealand parliament: “We cannot simply sit back and accept that these platforms just exist and that what is said on them is not the responsibility of the place where they are published. They are the publisher. Not just the postman.”

We asked Facebook for a response to Ardern’s call for online content platforms to accept publisher-level responsibility for the material they distribute. Its spokesman avoided the question — pointing instead to its latest piece of crisis PR, which it titles: “A Further Update on New Zealand Terrorist Attack”.

Here it writes that “people are looking to understand how online platforms such as Facebook were used to circulate horrific videos of the terrorist attack”, saying it therefore “wanted to provide additional information from our review into how our products were used and how we can improve going forward”, before going on to reiterate many of the details it has already put out.

These include the fact that the massacre video was quickly shared to the 8chan message board by a user posting a link to a copy of the video on a file-sharing site — before Facebook itself had been alerted to the video being broadcast on its platform.

It goes on to imply 8chan was a hub for broader sharing of the video — claiming that: “Forensic identifiers on many of the videos later circulated, such as a bookmarks toolbar visible in a screen recording, match the content posted to 8chan.”

So it is clearly trying to make sure it is not singled out by political leaders seeking policy responses to the challenge posed by online hate and terrorist content.

A further detail it chooses to dwell on in the update is that the AI it uses to aid the human review of flagged Facebook Live streams is in fact tuned to “detect and prioritize videos that are likely to contain suicidal or harmful acts” — with the AI pushing such videos to the very top of human moderators’ content queues, above all the other material they also have to review.
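As a rough illustration of that kind of triage, here is a minimal sketch of a moderation queue that orders flagged live streams by a classifier’s estimated likelihood of harm. The `harm_score` values and stream IDs are made-up assumptions, not anything from Facebook’s actual systems.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class FlaggedStream:
    priority: float                       # negated score: heapq is a min-heap
    stream_id: str = field(compare=False)

def enqueue(queue, stream_id, harm_score):
    """Push a flagged stream with a model-estimated likelihood (0-1) of harmful acts."""
    heapq.heappush(queue, FlaggedStream(priority=-harm_score, stream_id=stream_id))

def next_for_review(queue):
    """Pop the stream the classifier considers most urgent for a human moderator."""
    item = heapq.heappop(queue)
    return item.stream_id, -item.priority

queue = []
enqueue(queue, "stream_a", harm_score=0.12)  # probably benign
enqueue(queue, "stream_b", harm_score=0.94)  # model suspects self-harm or violence
enqueue(queue, "stream_c", harm_score=0.55)

print(next_for_review(queue))  # ('stream_b', 0.94) jumps the queue
```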

Certainly”harmful acts” were involved in the New Zealand terrorist attack.

Facebook explains this by saying it does not have the training data to build an algorithm that knows it is looking at mass murder unfolding in real time.

It also suggests the task of training an AI to catch such a horrific scenario is made harder by the proliferation of videos of first-person shooter video games on online content platforms.

“To achieve that we will need to provide our systems with large volumes of data of this specific kind of content, something which is difficult as these events are thankfully rare. Another challenge is to automatically discern this content from visually similar, innocuous content — for example if thousands of videos from live-streamed video games are flagged by our systems, our reviewers could miss the important real-world videos where we could alert first responders to get help on the ground,” it writes.

The video game element is a chilling detail to consider.

It suggests that a harmful real-life act which mimics a violent video game might just blend into the background, as far as AI moderation systems are concerned; invisible in a sea of benign, virtually violent content churned out by gamers. (Which in turn makes you wonder whether the Internet-steeped killer in Christchurch knew — or guessed — that filming the attack from a video-game-style first-person-shooter perspective might offer a workaround to dupe Facebook’s imperfect AI watchdogs.)

Facebook’s post is emphatic that AI is “not perfect” and is “never going to be perfect”.

“People will continue to be part of this equation, whether it’s the people on our team who review content, or people who use our services and report content to us,” it writes, reiterating yet again that it has ~30,000 people working in “safety and security”, about half of whom are doing the sweated, hideous toil of content review.

This is, as we’ve said many times before, a fantastically tiny number of human moderators given the vast scale of content continually uploaded to Facebook’s 2.2BN+ user platform.

Moderating Facebook remains a hopeless task because so few humans are doing it.

Nor can AI really help. (Later in the blog post Facebook also writes that there are “millions” of livestreams broadcast on its platform every single day, saying that’s why adding a short broadcast delay — as TV channels do — wouldn’t help catch inappropriate real-time content.)

At the same time Facebook’s update makes it clear just how much its ‘safety and security’ systems rely on unpaid humans too: aka Facebook users taking the time and trouble to report harmful content.

Some might say that’s an excellent argument for a social media tax.

The fact Facebook did not get a single report of the Christchurch massacre livestream while the terrorist attack was unfolding meant the content was not prioritized for “accelerated review” by its systems, which it explains prioritize reports attached to videos that are still being streamed — because “if there is real-world harm we have a better chance to alert first responders and try to get help on the ground”.

Though it also says it expanded its acceleration logic last year to “also cover videos that were very recently live, in the past few hours”.

But it did so with a focus on suicide prevention — meaning the Christchurch video would only have been flagged for accelerated review in the hours after the stream ended if it had been reported as suicide content.

So the ‘problem’ is that Facebook’s systems do not prioritize mass murder.

“In [the first] report, and a number of subsequent reports, the video was reported for reasons other than suicide and as such it was handled according to different procedures,” it writes, adding it’s “learning from this” and “re-examining our reporting logic and experiences for both live and recently live videos in order to expand the categories that would get to accelerated review”.

No shit.
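To make the reporting logic Facebook describes a little more concrete, here is a minimal sketch of such triage rules. The two-hour “recently live” window and the category names are illustrative assumptions, not figures Facebook has published.

```python
from datetime import datetime, timedelta, timezone

# Pre-Christchurch logic per Facebook's post: only suicide reports on
# recently-live videos were accelerated. The expanded set is hypothetical.
ACCELERATED_CATEGORIES = {"suicide"}
EXPANDED_CATEGORIES = {"suicide", "violence", "terrorism"}

def accelerate_review(report_category, is_live, ended_at,
                      now=None, recent_window=timedelta(hours=2),
                      categories=ACCELERATED_CATEGORIES):
    """Return True if a user report should jump to accelerated review."""
    now = now or datetime.now(timezone.utc)
    if is_live:
        return True                      # still streaming: always accelerate
    recently_live = ended_at is not None and (now - ended_at) <= recent_window
    return recently_live and report_category in categories

# A report on a recently-ended stream, filed under a non-suicide category,
# falls through to the ordinary review path — which is what happened here.
ended = datetime.now(timezone.utc) - timedelta(minutes=40)
print(accelerate_review("violence", is_live=False, ended_at=ended))           # False
print(accelerate_review("violence", is_live=False, ended_at=ended,
                        categories=EXPANDED_CATEGORIES))                      # True
```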

Facebook also discusses its failure to prevent versions of the massacre video from resurfacing on its platform, having been — as it tells it — “so effective” at preventing the spread of propaganda from terrorist organizations such as ISIS with the use of image and video matching technology.

It claims its tech was outfoxed in this case by “bad actors” creating many different edited versions of the video to try to thwart filters, as well as by the various ways “a broader set of people distributed the video and unwittingly made it harder to match copies”.

So, essentially, the ‘virality’ of the awful event created too many versions of the video for Facebook’s matching tech to cope with.

“Some people may have seen the video on a computer or TV, filmed that with a phone and sent it to a friend. Still others may have watched the video on their computer, recorded their screen and passed that on,” it writes.

In all Facebook says it found and blocked more than 800 visually distinct versions of the video that were circulating on its platform.

It reveals it resorted to using audio matching technology to try to detect videos that had been visually altered but had the same soundtrack. And it claims it is trying to learn and come up with better techniques for blocking content that is being re-shared widely by individuals as well as being rebroadcast by mainstream media. So any kind of major news event, basically.
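Facebook has not published how its matching systems work, but the general idea behind frame-level perceptual hashing — and why heavily edited copies can slip past it — can be illustrated with a toy sketch like the one below, which hashes a small grayscale frame and compares hashes by Hamming distance.

```python
def average_hash(gray_frame):
    """Hash a small grayscale frame (e.g. an 8x8 downscaled image as a 2-D list).

    Each bit records whether a pixel is brighter than the frame's mean, so
    re-encodes and mild quality changes leave most bits intact.
    """
    pixels = [p for row in gray_frame for p in row]
    mean = sum(pixels) / len(pixels)
    return sum(1 << i for i, p in enumerate(pixels) if p > mean)

def hamming_distance(h1, h2):
    """Number of differing bits between two hashes."""
    return bin(h1 ^ h2).count("1")

def matches_known_video(frame_hash, blocked_hashes, max_distance=10):
    """Flag a frame whose hash is close to the hash of any banned video frame."""
    return any(hamming_distance(frame_hash, h) <= max_distance for h in blocked_hashes)
```

Because each bit encodes only a coarse brightness comparison, light edits keep a copy within the match threshold, while cropping, overlays, re-filming a screen or swapping the soundtrack can change enough of the signal that a variant no longer matches — consistent with Facebook’s account of why it fell back on audio matching for some versions.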

In a section on next steps Facebook says improving its matching technology to prevent the spread of inappropriate viral videos is its priority.

But audio matching clearly won’t help if malicious re-sharers both re-edit the visuals and swap out the soundtrack in future.

It also concedes it needs to be able to react faster “to this type of content on a live streamed video” — though it has no firm fixes to offer there either, saying only that it will explore “whether and how AI can be used for these cases, and how to get to user reports faster”.

Another priority it claims among its “next steps” is combating “hate speech of all kinds on our platform”, saying this includes more than 200 white supremacist organizations globally “whose content we are removing through proactive detection technology”.

It’s glossing over plenty of criticism on that front too, though — including research suggesting that banned far-right hate preachers are easily able to evade detection on its platform, plus its own foot-dragging on shutting down far-right extremists. (Facebook only finally banned one infamous UK far-right activist last month, for instance.)

In a final PR sop, Facebook says it is committed to expanding its industry collaboration to tackle hate speech via the Global Internet Forum to Counter Terrorism (GIFCT), which formed in 2017 as platforms were being squeezed by politicians to scrub ISIS content — in a collective effort to fend off tighter regulation.

“We are experimenting with sharing URLs systematically rather than just content hashes, are working to address the range of terrorists and violent extremists operating online, and intend to refine and improve our ability to collaborate in a crisis,” Facebook writes now, offering more vague experiments as politicians call for content responsibility.


Suzie is fascinated by anything machine learning and AI related, and she is a huge proponent of researching ethics in AI. She believes that Universal Basic Income will become inevitable as AI replaces a significant portion of the workforce.

AI News

AI to Assist with Embryo Selection


IF A WOMAN (or non-female-identifying person with a uterus and visions of starting a family) is struggling to conceive and decides to improve their reproductive odds at an IVF clinic, they’ll likely interact with a doctor, a nurse, and a receptionist. They will probably never meet the army of trained embryologists working behind closed lab doors to collect eggs, fertilize them, and develop the embryos bound for implantation.

One of embryologists’ more time-consuming jobs is grading embryos—looking at their morphological features under a microscope and assigning a quality score. Round, even numbers of cells are good. Fractured and fragmented cells, bad. They’ll use that information to decide which embryos to implant first.

It’s more gut than science and not particularly accurate. Newer methods, like pulling off a cell to extract its DNA and test for abnormalities, called preimplantation genetic screening, provide more information. But that tacks on additional costs to an already expensive IVF cycle and requires freezing the embryos until the test results come back. Manual embryo grading may be a crude tool, but it’s noninvasive and easy for most fertility clinics to carry out. Now, scientists say, an algorithm has learned to do all that time-intensive embryo ogling even better than a human.

In new research published today in NPJ Digital Medicine, scientists at Cornell University trained an off-the-shelf Google deep learning algorithm to identify IVF embryos as either good, fair, or poor, based on the likelihood each would successfully implant. This type of AI—the same neural network that identifies faces, animals, and objects in pictures uploaded to Google’s online services—has proven adept in medical settings. It has learned to diagnose diabetic blindness and identify the genetic mutations fueling cancerous tumor growth. IVF clinics could be where it’s headed next.
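The approach is, at heart, transfer learning: reusing a network pretrained on everyday images and retraining its final layers on labeled embryo images. A minimal sketch of that pattern is below, using Keras and an ImageNet-pretrained Inception network; the directory layout, image size and hyperparameters are illustrative assumptions, not the study’s actual pipeline.

```python
import tensorflow as tf

# Hypothetical folder of embryo stills sorted into good/fair/poor subdirectories.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "embryo_images/train", image_size=(299, 299), batch_size=32)

base = tf.keras.applications.InceptionV3(
    weights="imagenet", include_top=False, input_shape=(299, 299, 3))
base.trainable = False  # keep the pretrained features, train only the new head

inputs = tf.keras.Input(shape=(299, 299, 3))
x = tf.keras.applications.inception_v3.preprocess_input(inputs)
x = base(x, training=False)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
outputs = tf.keras.layers.Dense(3, activation="softmax")(x)  # good / fair / poor

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=5)
```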

“All evaluation of the embryo as it’s done today is subjective,” says Nikica Zaninovic, director of the embryology lab at Weill Cornell Medicine, where the research was conducted. In 2011, the lab installed a time-lapse imaging system inside its incubators, so its technicians could watch (and record) the embryos developing in real time. This gave them something many fertility clinics in the US do not have—videos of more than 10,000 fully anonymized embryos that could each be freeze-framed and fed into a neural network. About two years ago, Zaninovic began Googling to find an AI expert to collaborate with. He found one just across campus in Olivier Elemento, director of Weill Cornell’s Englander Institute for Precision Medicine.

For years, Elemento had been collecting all kinds of medical imaging data—MRIs, mammograms, stained slides of tumor tissue—from any colleague who would give it to him, to develop automated systems to help radiologists and pathologists do their jobs better. He’d never thought to try it with IVF but could immediately see the potential. There’s a lot going on in an embryo that’s invisible to the human eye but might not be to a computer. “It was an opportunity to automate a process that is time-consuming and prone to errors,” he says. “Which is something that’s not really been done before with human embryos.”

To judge how their neural net, nicknamed STORK, stacked up against its human counterparts, they recruited five embryologists from clinics on three continents to grade 394 embryos based on images taken from different labs. The five embryologists reached the same conclusion on only 89 embryos, less than a quarter of the total. So the researchers instituted a majority voting procedure—three out of five embryologists needed to agree to classify an embryo as good, fair, or poor. When STORK looked at the same images, it predicted the embryologist majority voting decision with 95.7 percent accuracy. The most consistent volunteer matched results only 70 percent of the time; the least, 25 percent.
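For readers curious how the majority-vote reference labels and those agreement figures are computed, a short sketch follows; the panel grades and model outputs below are invented for illustration, not data from the study.

```python
from collections import Counter

def majority_label(grades, min_agreement=3):
    """Label chosen by at least `min_agreement` of the five graders, else None."""
    label, count = Counter(grades).most_common(1)[0]
    return label if count >= min_agreement else None

def agreement_rate(predictions, panel_grades):
    """Fraction of majority-decided embryos where a prediction matches the panel."""
    pairs = [(p, majority_label(g)) for p, g in zip(predictions, panel_grades)]
    decided = [(p, m) for p, m in pairs if m is not None]
    return sum(p == m for p, m in decided) / len(decided)

# Toy example: three embryos, five graders each.
panel = [["good", "good", "fair", "good", "poor"],
         ["poor", "poor", "poor", "fair", "poor"],
         ["fair", "good", "fair", "fair", "fair"]]
model_says = ["good", "poor", "good"]
print(agreement_rate(model_says, panel))  # matches 2 of 3 majority calls -> ~0.67
```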

For now, STORK is just a tool embryologists can upload images to and play around with on a secure website hosted by Weill Cornell. It won’t be ready for the clinic until it can pass rigorous testing that follows implanted embryos over time, to see how well the algorithm fares in real life. Elemento says the group is still finalizing the design for a trial that would do that by pitting embryologists against the AI in a small, randomized cohort. Most important is understanding if STORK actually improves outcomes—not just implantation rates but successful, full-term pregnancies. On that score, at least some embryologists are skeptical.

“All this algorithm can do is change the order of which embryos we transfer,” says Eric Forman, medical and lab director at Columbia University Fertility Center. “It needs more evidence to say it helps women get pregnant quicker and safer.” On its own, he worries that STORK might make only a small contribution to improving IVF’s success rate, while possibly inserting its own biases.

In addition to embryo grading, the Columbia clinic uses pre-implantation genetic screening to improve patients’ odds of pregnancy. While not routine, it is offered to everyone. Forman says about 70 percent of the clinic’s IVF cycles include the blastocyst biopsy procedure, which can add a few thousand dollars to a patient’s tab. That’s why he’s most intrigued about what Elemento’s team is cooking up next. They’re training a new set of neural networks to see if they can detect chromosomal abnormalities, like the one that causes Down Syndrome. With an embryo developing under a camera’s watchful gaze, Elemento’s algorithm would monitor the feed for telltale signs of trouble. “We think the patterns of cell division we can capture with these movies could potentially carry information about these defects, which are hidden in just the snapshots,” says Elemento. They’re also looking into using the technique to predict miscarriages.

There’s plenty of room to improve the performance of IVF, and these algorithmic upgrades could make a dent—in the right circumstances. “If it could provide accurate predictions in real time with minimal risk for harm and no additional cost, then I could see the potential to implement AI like this for embryo selection,” says Forman. But there would be barriers to its adoption. Most IVF clinics in the US don’t have one of these fancy time-lapse recording systems because they’re so expensive. And there are a lot of other potential ways to improve embryo viability that could be more affordable—like tailoring hormone treatments and culturing techniques to the different kinds of infertility that women experience. In the end, though, the number one problem IVF clinics contend with is that sometimes there just aren’t enough high-quality eggs, no matter how many cycles a patient goes through. And no AI, no matter how smart, can do anything about that.


AI News

Google Employees Sign Petition to Remove Conservative from AI Ethics Panel


Over 1,720 Google employees have signed a petition asking the company to remove Kay Cole James, president of the Heritage Foundation, from a new Google panel.

The petition says that James’s positions on civil and transgender rights should disqualify her from sitting on Google’s new artificial intelligence (AI) ethics board, which was announced last week.

The controversy presents a dilemma for Google, which is already facing criticism over a host of issues.

So far, the company has remained publicly silent about the petition as pressure builds and conservatives demand that Google’s leadership stand its ground.

Lawmakers and industry watchers told The Hill that James’s inclusion on the AI ethics council was likely an attempt to allay concerns over bias at Google and other online platforms.

Sen. Lindsey Graham (R-S.C.), chairman of the Senate Judiciary Committee, spoke to The Hill regarding James’s selection for the ethics panel.

Graham added that it was “good for Google to know they’ve got an issue.”

Google and James did not respond to The Hill’s requests for comment.

Google has faced criticism in particular from LGBTQ groups, which pressured the company to remove an app that critics said promoted conversion therapy, the discredited idea that someone can change their sexual orientation. Google removed the app last month. But critics noted that the company acted only after an LGBTQ rights group suspended Google from its corporate rankings.

James’s comments about transgender people have put Google on the back foot.

James last month called the Equality Act, federal legislation that would enshrine civil rights for LGBTQ people, “anything but equality.”

“This bill would… open every female bathroom and sports team to biological males,” James wrote.

The petitioners wrote that her inclusion on the council suggests Google “values proximity to power over the wellbeing of trans people, other LGBTQ people, and immigrants.”

“That is unacceptable.”

“There’s this attempt to incorporate the views of as many stakeholders as possible, but a total ignorance of the fact that including a stakeholder group that disputes the validity of nonbinary people, for example, isn’t a credible, inclusive practice,” Ali Alkhatib, a computer science student at Stanford University and a petition signer, told The Hill.

For conservatives, the petition is ammunition for their claims that Google is hostile to conservative views, and they have rallied to James’s defense.

Sen. Ted Cruz (R-Texas) called the Google worker protest “consistent with a persistent pattern.”

“We have seen Google and all of big tech acting with naked partisan and ideological bias,” Cruz told The Hill. “It is more than ironic that leftists at Google, in the name of inclusivity, are pushing to bar one of the most respected African American women in the country from participating in policy discussions.”

Google has repeatedly denied claims that its search results are biased against conservatives and has noted that there is no evidence for those allegations. Google CEO Sundar Pichai just last week met with President Trump to discuss “political fairness,” Trump revealed in a tweet.

The Google employees, organizing under the name Googlers Against Transphobia and Hate, say the issue is not that James is a conservative, but that she has lobbied against expanded rights for LGBTQ people.

The new AI ethics committee, which has fewer than 10 members, is tasked with providing an ethics check on AI technologies as the company pursues new cloud computing business.

Googlers Against Transphobia and Hate say there are civil rights concerns about AI technology, pointing to research showing it can misrecognize and discriminate against transgender people.

Kate Crawford, co-founder of the AI Now Institute at New York University, said “respecting human rights for everyone should be a basic prerequisite for membership of an ethics board.”

“There’s no greater obligation for major companies making AI tools that affect the lives of countless people,” Crawford said in a statement to The Hill.

The Google protesters wrote that the company must “place representatives from vulnerable communities at the center of decision-making” about AI technology.

Google so far has not responded to any of the concerns raised about the AI ethics council and James.

Workers have pushed the company on other issues before. Google last year ended its work with the Pentagon on an AI project after criticism from employees about working with the military. And the firm gave up its pursuit of a Pentagon cloud computing contract.

The latest controversy only highlights the difficulty of balancing the concerns of Google’s activist workforce with the company’s bottom line.

“This is truly unacceptable, and we expect an on-the-record response from Google.”


AI News

Using AI to Target Liver Cancer


A genomics company claims it’s discovered a way to detect liver cancer linked to hepatitis B months before other methods can detect it.

The finding is based on a study by Genetron Health and the Australian Academy using a method called HCCscreen, which applies artificial intelligence to detect signs of cancer in blood samples.

The researchers found that the new method could pick up early signs of the cancer in people who had tested negative based on traditional alpha-fetoprotein (AFP) and ultrasound tests.

Genetron Health chief executive Wang Sizhen explained early detection was important because it significantly improved the chances of survival.

“The study is a breakthrough in genomics technology and it’s very likely to aid hepatitis B virus carriers, whose risk of liver cancer is much higher,” Wang explained.

The researchers used AI technology to identify biomarkers common in known cases of a type of liver cancer called hepatocellular carcinoma, or HCC.

The team developed the HCCscreen technique to look for those markers and used it on people with hepatitis B who had tested negative for liver cancer in AFP and ultrasound tests.
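The article gives only a high-level description of HCCscreen, but the basic workflow — learn a signal from blood samples of confirmed cancer cases and controls, then score the blood of screen-negative hepatitis B carriers — follows a standard supervised-classification pattern. The sketch below shows that generic pattern with synthetic data; it is not the actual HCCscreen model, whose features and algorithm are not described here.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X_known = rng.normal(size=(200, 50))    # stand-in blood biomarker features
y_known = rng.integers(0, 2, size=200)  # 1 = confirmed HCC, 0 = no cancer

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(clf, X_known, y_known, cv=5).mean())  # sanity check on known cases

clf.fit(X_known, y_known)
X_screen = rng.normal(size=(10, 50))    # AFP/ultrasound-negative hepatitis B carriers
high_risk = clf.predict_proba(X_screen)[:, 1] > 0.8  # flag samples for follow-up
print(high_risk)
```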

A number of individuals tested positive and were tracked over eight months, with four eventually being diagnosed with early-stage liver cancer.

The four patients had surgery to remove the tumours, and another 20 from the group had a second HCCscreen test. Wang said all participants in that group of 20 would continue to be tracked.

“This is the first large-scale prospective study on early detection [of liver cancer],” he said.

The results were published in the Proceedings of the National Academy of Sciences earlier this month.

There are approximately 93 million people with hepatitis B in China, and carriers of the virus have a higher risk of developing liver disease.

Liver cancer is generally hard to detect in its early stages, and AFP tests and twice-yearly ultrasounds are recommended for high-risk groups such as people with hepatitis B infections or cirrhosis — scarring of the liver tissue.

However, in China HCC cases are typically discovered at a late stage, the authors of the study wrote.

According to the National Cancer Centre, 466,000 people were diagnosed with liver cancer and 422,000 died in 2015 from the disease in China.

Wang said the company aimed to commercialise the technology, but even then it would take time to make it affordable.

“[High-risk] individuals need to have regular screening. This is important for public health but the technology has to be affordable enough to become widespread,” Wang said. “The ultimate goal of the study is to develop a product that people in China can manage.”
