Finland Using Prison Inmates for AI Labor

“Prison labor” is generally associated with physical work, but inmates in two prisons in Finland are doing a new kind of labor: classifying data to train artificial intelligence algorithms for a startup. Although the startup in question, Vainu, sees the partnership as a kind of prison reform that teaches valuable skills, other experts say it plays into the exploitative economics of prisoners needing to work for low wages.

Vainu is building a comprehensive database of organizations around the world that helps companies find contractors to work with, says co-founder Tuomas Rasila. For this to work, people need to read through hundreds of thousands of business articles scraped from the web and label whether, for instance, an article is about Apple the tech company or about a fruit company that happens to have “apple” in its name. (This labeled data is then used to train the algorithm that maintains the database.)
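
To make the labeling-to-training pipeline concrete, here is a minimal sketch of how tagged snippets like these could train a disambiguation model. It assumes scikit-learn; the example texts, labels, and model choice are illustrative assumptions, not Vainu's actual setup.

```python
# Minimal sketch: train a classifier that decides whether a snippet mentioning
# "apple" refers to the tech company or to the fruit. The labels stand in for
# the tags human annotators would produce; all data here is invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "Apple unveiled a new iPhone at its Cupertino event",
    "Apple reported record quarterly earnings for its services division",
    "The orchard sells apple cider and fresh apple pie every autumn",
    "Local farmers harvested a bumper crop of apple varieties this year",
]
labels = ["tech_company", "tech_company", "fruit", "fruit"]  # human-provided tags

# Bag-of-words features feeding a simple linear classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# Classify two unseen snippets; a real deployment would train on
# many thousands of labeled examples rather than four.
print(model.predict([
    "Apple announced a new laptop lineup this morning",
    "This tart recipe calls for six fresh apples",
]))
```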

For articles in English there is no problem: Vainu simply set up an Amazon Mechanical Turk account to have people do these small tasks. But Mechanical Turk is “not really that useful when you want to do anything [in the] Finnish language,” Rasila says, and the company had only one trainee tagging a large amount of data in that language. “We saw that and said, ‘okay, this isn’t going to be enough,’” he adds. The Vainu offices happen to be in the same building as the headquarters of the Criminal Sanctions Agency (CSA), the government agency that manages Finland’s prisons, and so, says Rasila, the founders had an idea: “Hey, we can actually use prison labor.”

Vainu sent 10 computers to the two prisons and pays the CSA for each task the inmates complete. The amount is similar to what the startup would have paid to have the work done on Mechanical Turk, though the CSA is responsible for deciding how much of that money goes to the inmates, as well as for selecting the inmates who do the data classification.

Officials at the agency were eager to partner, according to Rasila, particularly because the new tasks don’t require anything other than a laptop. “There is no risk of violence,” he says, adding that with other kinds of prison labor, such as metalsmithing, access to tools that can be turned into makeshift weapons can make a prison workspace “a dangerous place.” Rasila estimates that, at the moment, a little fewer than 100 inmates are working on Vainu’s tasks for a couple of hours a day.

For now, Vainu and the CSA have a yearly contract based on the number of tasks. To them, it is a win-win situation. One motivation for the inmates is to earn money, of course, but “a selling point of this was that the demand for training AI is rising significantly, internationally,” Rasila says. Likewise, the CSA wrote in a release that the program is part of its efforts to develop work tasks that match “the needs of modern working life,” and a PR representative pitched the partnership to The Verge as “an opportunity for inmates to have work that could empower them.”

It is not surprising, either, that there could be particularly high demand for this kind of work in other countries, according to Lilly Irani, a professor of communication at the University of California, San Diego. AI algorithms need to be trained in specific ways, she says, and most Mechanical Turk workers are in the United States.

Although Rasila says this is a good example of building skills that could be useful in the future, he also says that the tasks have “zero learning curve” and require only (presumably preexisting) literacy, which calls into question just how useful this skill really is. This kind of work is “rote, menial, and repetitive,” says Sarah T. Roberts, a professor of information studies at the University of California, Los Angeles, who studies data workers. It doesn’t build a high level of skill, and if a university researcher tried to engage prison laborers in the same way, “it wouldn’t pass an ethics review board for a study.” While it is good that the inmates are paid a wage comparable to Mechanical Turk rates, Roberts points out that wages on Mechanical Turk are incredibly low anyway. One recent research paper found that workers earned a median wage of $2 an hour.

For Irani, there is nothing special about AI in this story. In the US at least, prison labor has long been controversial, with some saying it effectively exploits workers while others argue it can help them. For her, the public relations push around the collaboration is more surprising than the fact that digital work has become part of prison labor. “They’re linking social movements, reducing it to hype, and using this to market AI.”

A science fiction nerd at heart who grew up reading everything written by Robert A. Heinlein and Isaac Asimov, Alan loves to report on the future and AI.

AI to Assist with Selection of Embryo

IF A WOMAN (or non-female-identifying person with a uterus and visions of starting a family) is struggling to conceive and decides to improve their reproductive odds at an IVF clinic, they’ll likely interact with a doctor, a nurse, and a receptionist. They will probably never meet the army of trained embryologists working behind closed lab doors to collect eggs, fertilize them, and develop the embryos bound for implantation.

One of embryologists’ more time-consuming jobs is grading embryos—looking at their morphological features under a microscope and assigning a quality score. Round, even numbers of cells are good. Fractured and fragmented cells, bad. They’ll use that information to decide which embryos to implant first.

It’s more gut than science and not particularly accurate. Newer methods, like pulling off a cell to extract its DNA and test for abnormalities, called preimplantation genetic screening, provide more information. But that tacks on additional costs to an already expensive IVF cycle and requires freezing the embryos until the test results come back. Manual embryo grading may be a crude tool, but it’s noninvasive and easy for most fertility clinics to carry out. Now, scientists say, an algorithm has learned to do all that time-intensive embryo ogling even better than a human.

In new research published today in NPJ Digital Medicine, scientists at Cornell University trained an off-the-shelf Google deep learning algorithm to identify IVF embryos as either good, fair, or poor, based on the likelihood each would successfully implant. This type of AI—the same neural network that identifies faces, animals, and objects in pictures uploaded to Google’s online services—has proven adept in medical settings. It has learned to diagnose diabetic blindness and identify the genetic mutations fueling cancerous tumor growth. IVF clinics could be where it’s headed next.
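
The study's code is not reproduced in the article, but the general recipe it describes, reusing a pretrained image network and retraining its final layers for a three-way grade, looks roughly like the sketch below. It assumes TensorFlow/Keras with an ImageNet-pretrained Inception-V3 backbone; the directory layout, image size, and hyperparameters are assumptions for illustration, not the researchers' configuration.

```python
# Rough sketch of transfer learning for three-class embryo grading
# (good / fair / poor). The folder layout embryo_images/{good,fair,poor}/
# is hypothetical.
import tensorflow as tf
from tensorflow.keras import layers, models

IMG_SIZE = (299, 299)  # Inception-V3's native input resolution

train_ds = tf.keras.utils.image_dataset_from_directory(
    "embryo_images", image_size=IMG_SIZE, batch_size=32)

base = tf.keras.applications.InceptionV3(
    include_top=False, weights="imagenet", input_shape=IMG_SIZE + (3,))
base.trainable = False  # freeze the pretrained features; train only the new head

model = models.Sequential([
    layers.Rescaling(1.0 / 127.5, offset=-1),  # scale pixels to [-1, 1]
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.3),
    layers.Dense(3, activation="softmax"),     # good / fair / poor
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=5)
```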

“All evaluation of the embryo as it’s done today is subjective,” says Nikica Zaninovic, director of the embryology lab at Weill Cornell Medicine, where the research was conducted. In 2011, the lab installed a time-lapse imaging system inside its incubators, so its technicians could watch (and record) the embryos developing in real time. This gave them something many fertility clinics in the US do not have—videos of more than 10,000 fully anonymized embryos that could each be freeze-framed and fed into a neural network. About two years ago, Zaninovic began Googling to find an AI expert to collaborate with. He found one just across campus in Olivier Elemento, director of Weill Cornell’s Englander Institute for Precision Medicine.
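
As a rough illustration of the data-preparation step described here, pulling still frames out of time-lapse video so they can be fed to an image model, the sketch below uses OpenCV. The file name, sampling interval, and output layout are assumptions for illustration, not the clinic's actual pipeline.

```python
# Extract one still frame out of every N from a time-lapse recording so the
# frames can be fed to an image classifier. Paths and interval are hypothetical.
import os
import cv2

VIDEO_PATH = "embryo_timelapse.avi"  # hypothetical incubator recording
FRAME_EVERY = 60                     # keep one frame out of every 60
os.makedirs("frames", exist_ok=True)

cap = cv2.VideoCapture(VIDEO_PATH)
saved, index = 0, 0
while True:
    ok, frame = cap.read()
    if not ok:                       # end of video
        break
    if index % FRAME_EVERY == 0:
        frame = cv2.resize(frame, (299, 299))  # match the network's input size
        cv2.imwrite(os.path.join("frames", f"frame_{saved:05d}.png"), frame)
        saved += 1
    index += 1
cap.release()
print(f"extracted {saved} frames")
```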

For years, Elemento had been collecting all kinds of medical imaging data—MRIs, mammograms, stained slides of tumor tissue—from any colleague who would give it to him, to develop automated systems to help radiologists and pathologists do their jobs better. He’d never thought to try it with IVF but could immediately see the potential. There’s a lot going on in an embryo that’s invisible to the human eye but might not be to a computer. “It was an opportunity to automate a process that is time-consuming and prone to errors,” he says. “Which is something that’s not really been done before with human embryos.”

To judge how their neural net, nicknamed STORK, stacked up against its human counterparts, they recruited five embryologists from clinics on three continents to grade 394 embryos based on images taken from different labs. The five embryologists reached the same conclusion on only 89 embryos, less than a quarter of the total. So the researchers instituted a majority voting procedure—three out of five embryologists needed to agree to classify an embryo as good, fair, or poor. When STORK looked at the same images, it predicted the embryologist majority voting decision with 95.7 percent accuracy. The most consistent volunteer matched results only 70 percent of the time; the least, 25 percent.
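
The majority-vote benchmark is simple to express in code. The sketch below uses invented grades and assumes that embryos with no three-of-five majority are excluded from scoring, which is an assumption for illustration rather than a detail stated in the article.

```python
# Sketch of the majority-vote benchmark: five embryologists grade each embryo,
# the label chosen by at least three of them becomes the reference, and a
# model's predictions are scored against that reference. All grades are invented.
from collections import Counter

def majority_label(grades):
    """Return the label picked by at least 3 of 5 graders, or None if there is no majority."""
    label, count = Counter(grades).most_common(1)[0]
    return label if count >= 3 else None

panel = [
    ["good", "good", "fair", "good", "poor"],  # embryo 1 -> majority "good"
    ["fair", "fair", "poor", "fair", "fair"],  # embryo 2 -> majority "fair"
    ["good", "fair", "poor", "poor", "fair"],  # embryo 3 -> no majority
]
model_predictions = ["good", "fair", "poor"]

reference = [majority_label(grades) for grades in panel]
scored = [(ref, pred) for ref, pred in zip(reference, model_predictions) if ref is not None]
accuracy = sum(ref == pred for ref, pred in scored) / len(scored)
print(f"agreement with the embryologist majority: {accuracy:.0%}")
```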

For now, STORK is just a tool embryologists can upload images to and play around with on a secure website hosted by Weill Cornell. It won’t be ready for the clinic until it can pass rigorous testing that follows implanted embryos over time, to see how well the algorithm fares in real life. Elemento says the group is still finalizing the design for a trial that would do that by pitting embryologists against the AI in a small, randomized cohort. Most important is understanding if STORK actually improves outcomes—not just implantation rates but successful, full-term pregnancies. On that score, at least some embryologists are skeptical.

“All this algorithm can do is change the order of which embryos we transfer,” says Eric Forman, medical and lab director at Columbia University Fertility Center. “It needs more evidence to say it helps women get pregnant quicker and safer.” On its own, he worries that STORK might make only a small contribution to improving IVF’s success rate, while possibly inserting its own biases.

In addition to embryo grading, the Columbia clinic uses pre-implantation genetic screening to improve patients’ odds of pregnancy. While not routine, it is offered to everyone. Forman says about 70 percent of the clinic’s IVF cycles include the blastocyst biopsy procedure, which can add a few thousand dollars to a patient’s tab. That’s why he’s most intrigued about what Elemento’s team is cooking up next. They’re training a new set of neural networks to see if they can detect chromosomal abnormalities, like the one that causes Down Syndrome. With an embryo developing under a camera’s watchful gaze, Elemento’s algorithm would monitor the feed for telltale signs of trouble. “We think the patterns of cell division we can capture with these movies could potentially carry information about these defects, which are hidden in just the snapshots,” says Elemento. They’re also looking into using the technique to predict miscarriages.

There’s plenty of room to improve the performance of IVF, and these algorithmic upgrades could make a dent—in the right circumstances. “If it could provide accurate predictions in real time with minimal risk for harm and no additional cost, then I could see the potential to implement AI like this for embryo selection,” says Forman. But there would be barriers to its adoption. Most IVF clinics in the US don’t have one of these fancy time-lapse recording systems because they’re so expensive. And there are a lot of other potential ways to improve embryo viability that could be more affordable—like tailoring hormone treatments and culturing techniques to the different kinds of infertility that women experience. In the end, though, the number one problem IVF clinics contend with is that sometimes there just aren’t enough high-quality eggs, no matter how many cycles a patient goes through. And no AI, no matter how smart, can do anything about that.

Google Employees Sign Petition to Remove Conservative from AI Ethics Panel

Over 1,720 Google employees have signed a petition asking the company to remove Kay Coles James, president of the Heritage Foundation, from a new Google advisory panel.

The petition says that James’s positions on civil and transgender rights should disqualify her from sitting on Google’s new artificial intelligence (AI) ethics board, which was announced last week.

The controversy presents a challenge for Google, which is already facing criticism over a host of other issues.

So far, the company has remained publicly silent about the petition as pressure builds and conservatives demand that Google’s leadership stand its ground.

Lawmakers and industry watchers told The Hill that James’s inclusion on the AI ethics council was likely an attempt to allay concerns over bias at Google and other online platforms.

Sen. Lindsey Graham (R-S.C.), chairman of the Senate Judiciary Committee, spoke to The Hill about James’s selection for the ethics panel.

Graham added that it was “good for Google to know they’ve got an issue.”

Google and James did not respond to The Hill’s requests for comment.

Google has faced criticism in particular from LGBTQ groups, which pressured the company to remove an app that critics said promoted conversion therapy, the discredited idea that someone can change their sexual orientation. Google removed the app. But critics noted that the company acted only after an LGBTQ rights group suspended Google from its corporate rankings.

James’s comments about transgender people have drawn particular scrutiny.

James last month called the Equality Act, federal legislation that would enshrine civil rights for LGBTQ people, “anything but equality.”

“This bill would… open every female bathroom and sports team to biological males,” James wrote.

The petitioners wrote that her inclusion on the council could signal that Google “values proximity to power over the wellbeing of trans people, other LGBTQ people, and immigrants.”

“That is unacceptable.”

“There’s this attempt to incorporate the views of as many stakeholders as possible, but a total ignorance of the fact that a stakeholder group that disputes the validity of nonbinary people, for example, isn’t a plausible, inclusive practice,” Ali Alkhatib, a computer science student at Stanford University and a petition signer, told The Hill.

For conservatives, the petition is ammunition for their claims that Google is hostile to conservative views, and they have rallied to James’s defense.

Sen. Ted Cruz (R-Texas) called the Google worker protest “consistent with a persistent pattern.”

“We have seen Google and all of big tech acting with naked partisan and ideological bias,” Cruz told The Hill. “It is more than ironic that leftists at Google, in the name of inclusivity, are pushing to bar one of the most respected African American women in the country from participating in policy discussions.”

Google has repeatedly denied claims that its search results are biased against conservatives and has noted that there is no evidence for those allegations. Google CEO Sundar Pichai just last week met with President Trump to discuss “political fairness,” Trump revealed in a tweet.

The Google employees, organizing under the name Googlers Against Transphobia and Hate, say the issue is not that James is a conservative, but that she has lobbied against expanded rights for LGBTQ people.

The new AI ethics council, which has fewer than 10 members, is tasked with providing an ethical check on AI technology as the company pursues new cloud computing business.

Googlers Against Transphobia and Hate say there are civil rights concerns about AI technology, such as research showing that it can misrecognize transgender people and may discriminate against them.

Kate Crawford, co-founder of the AI Now Institute at New York University, said “respecting human rights for everybody should be a basic prerequisite for membership of an ethics board.”

“There’s no greater obligation for major companies making AI tools that affect the lives of countless people,” Crawford said in a statement to The Hill.

The Google protesters wrote that the company must “place representatives from vulnerable communities at the center of decision-making” about AI technology.

Google so far has not responded to any of the concerns raised about the AI ethics council and James.

Employees have pushed the company on other issues as well. Google last year ended its work with the Pentagon on an AI project after criticism from employees about working with the military. And the company gave up its pursuit of a Pentagon cloud computing contract.

The latest controversy only highlights the difficulty of balancing the concerns of Google’s activist workforce with the company’s bottom line.

“This is truly unacceptable, & we expect an on-the-record response from Google.”

Using AI to Target Liver Cancer

A genomics company claims it’s discovered a way to detect liver cancer linked to hepatitis B months before other methods can detect it.

The conclusion is based on a study by Genetron Health and the Australian Academy using a method called HCCscreen, which applies artificial intelligence to the analysis of blood samples.

The researchers found that the new method could pick up early signs of the cancer in people who had tested negative based on traditional alpha-fetoprotein (AFP) and ultrasound tests.

Genetron Health chief executive Wang Sizhen explained early detection was important because it significantly improved the chances of survival.

“The study is a breakthrough in genomics technology and it’s very likely to aid hepatitis B virus carriers, whose risk of liver cancer is much higher,” Wang explained.

The researchers used AI technology to identify biomarkers common in known cases of a type of liver cancer called hepatocellular carcinoma, or HCC.

The team then developed the HCCscreen technique to look for those markers and used it on people with hepatitis B who had tested negative for liver cancer in AFP and ultrasound tests.

Those who tested positive were tracked over eight months, with four eventually being diagnosed with early-stage liver cancer.

The four patients had surgery to remove the tumours, and the other 20 in the group had a second HCCscreen test. Wang said all participants in the group of 20 would continue to be tracked.

“This is the first large-scale prospective study on early detection [of liver cancer],” he said.

The results were published in the Proceedings of the National Academy of Sciences earlier this month.

There are approximately 93 million people with hepatitis B in China, and carriers of the virus have a higher risk of developing liver cancer.

Liver cancer is generally hard to detect in its early stages, and AFP tests and twice-yearly ultrasounds are recommended for high-risk groups such as people with hepatitis B infections or cirrhosis, a scarring of the liver tissue.

However, most HCC cases in China were discovered at an advanced stage, the authors of the study wrote.

According to the National Cancer Centre, 466,000 people in China were diagnosed with liver cancer in 2015 and 422,000 died from the disease.

Wang said the company aimed to commercialise the technology, but even then it would take time to make it affordable.

“[High-risk] individuals need to have regular screening. This is important for public health but the technology has to be affordable enough to become widespread,” Wang said. “The ultimate goal of the study is to develop a product that people in China can afford.”
