Facial Recognition

U.S. Sees First Case of Wrongful Arrest Due to Bad Algorithm

Last week, the New York Times reported the first known case in the United States of a wrongful arrest caused by a bad algorithm. The incident took place in Detroit, where Robert Julian-Borchak Williams, an African-American man, was arrested after facial recognition software falsely matched him to security footage of a shoplifting incident.

The American Civil Liberties Union (ACLU) quickly took action and filed a complaint against the Detroit police. After the ACLU pushed for Williams’ case to be dismissed and for his information to be removed from Detroit’s criminal databases, prosecutors agreed to both.

This development is the first of its kind in the United States, and it highlights serious concerns arising around the globe over the use of facial recognition technology by the state.

Problems With Facial Recognition Systems

Facial recognition systems have been a subject of controversy for a while, increasingly becoming a point of debate among those concerned about privacy and false accusations.

With the recent protests throughout the country and in many parts of the world against police brutality and discrimination, that scrutiny has only increased. 

These algorithms have introduced an entirely new dimension to law enforcement, yet they remain riddled with flaws.

The False Accusation

The theft that Williams was falsely accused of committing took place in October 2018. The surveillance video was then uploaded to the state of Michigan’s facial recognition database in March 2019.

Williams’ photo ended up in a photo lineup, where a security guard identified him as the person who had committed the crime.

According to the ACLU, that guard never actually witnessed the incident firsthand.

In January, Williams received a phone call from the Detroit Police Department informing him of his arrest. When he dismissed the call as a prank, police arrived at his home just an hour later.

Williams was then taken to a detention center, where his mugshot, fingerprints, and DNA were taken, and he was held at the station overnight.

What followed was an interrogation for a crime that he never committed, all because of a faulty recognition system.

Williams’ case was dismissed two weeks after his arrest, but the incident carries much broader implications for privacy. With the increasing use of facial recognition software by governments and law enforcement, this case could mark the beginning of serious violations, of the kind already unfolding in nations like China, that have yet to arrive in the U.S., at least to the public’s knowledge.

One such violation is that Williams’ DNA sample, mugshot, and fingerprints are now all on file as a direct result of the technology. Not only that, but his arrest is on the record.

Private Companies and Law Enforcement

Williams’ case comes as major companies like IBM, Microsoft, and Amazon have stopped providing their facial recognition technology to law enforcement.

The first major company to do so was IBM, whose CEO, Arvind Krishna, sent a letter to Congress stating that the company would no longer offer general-purpose facial recognition or analysis software. On top of that, the company halted its research and development of the technology.

“IBM firmly opposes and will not condone uses of any [facial recognition] technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and Principles of Trust and Transparency,” according to the letter. “We believe now is the time to begin a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies.”

Amazon followed suit when it announced a one-year moratorium on police use of its Rekognition facial recognition platform.

The announcement came just days after IBM’s decision.

One of the most influential pieces of research on discrimination and facial recognition technology is a 2018 paper co-authored by Joy Buolamwini and Timnit Gebru. Buolamwini is a researcher at the MIT Media Lab, and Gebru was a member of Microsoft Research at the time.

The 2018 paper found that “machine learning algorithms can discriminate based on classes like race and gender,” among other things. 

The case of Robert Julian-Borchak Williams is extremely concerning for many who live in the United States, but it is also an indicator of what is taking place throughout the globe. The use of facial recognition technology by governments and law enforcement agencies is just taking off, and there is very little in place to prevent it from being used unethically. Whether it is China’s widespread use of facial recognition technology for surveillance, or the case of Williams being falsely identified in the United States, the technology opens the world population up to a set of new privacy and human rights violations that previously did not exist.

 


Facial Recognition

Clearview AI Halts Facial Recognition Services in Canada Amid Investigation

According to federal privacy officials in Canada, the U.S.-based Clearview AI will halt facial recognition services in the nation amid an investigation into the company. 

Clearview’s contract with the Royal Canadian Mounted Police (RCMP), which is the company’s last client in Canada, will be indefinitely suspended.

The investigation is being run jointly by Canada’s federal privacy authority and the privacy protection authorities of Alberta, British Columbia, and Quebec.

According to a statement issued by the Office of the Privacy Commissioner of Canada, one of the issues surrounding the investigation by authorities is “the deletion of the personal information of Canadians that Clearview has already collected as well as the cessation of Clearview’s collection of Canadians’ personal information.”

Clearview AI Controversy

The investigation comes after media reports revealed that Clearview AI had been collecting images scraped from the web and providing facial recognition services to law enforcement.

Other reports have detailed the company’s cooperation with organizations in various countries, such as retailers, financial institutions, and government institutions.

Clearview AI first came under fire earlier this year, following a report by the New York Times.

The New York Times report detailed how the start-up, not widely known at the time, was helping law enforcement identify unknown individuals from their online images.

Clearview responded by saying the tool was meant to allow law enforcement to “identify perpetrators and victims of crimes.” 

However, that was nowhere near enough to ease the concerns of privacy advocates, who fear this type of technology and relationship could lead to major abuses by the state. The technology requires no consent, and individuals can be identified within a matter of seconds.

Back in April, in an Illinois court filing, the company vowed to cut its relationships with private companies and no longer sell facial recognition services to them.

According to the filing, “Clearview is canceling the accounts of every customer who was not either associated with law enforcement or some other federal, state, or local government department, office, or agency. Clearview is also canceling all accounts belonging to any entity based in Illinois.”

Just a month later, a lawsuit was filed against the company by the ACLU in Illinois due to alleged privacy and safety violations.

Law enforcement agencies in Canada such as the RCMP, the Toronto police, and the Calgary police have all used the software to some extent.

Recent Controversy Surrounding Facial Recognition Technology

The news about Clearview AI and Canada comes as public scrutiny over facial recognition technology is gaining steam. Whether it is this case, the use of the technology for surveillance in nations like China, or the recent wrongful arrest due to a bad algorithm in the United States, which was the first of its kind, there will undoubtedly be more concerns raised about this technology in the future. 

Privacy advocates and individuals worried about abuses of this technology are beginning to speak out more, and companies like Google, IBM, and others are taking action.

 


Ethics

AI Now Institute Warns About Misuse Of Emotion Detection Software And Other Ethical Issues

The AI Now Institute has released a report that urges lawmakers and other regulatory bodies to set hard limits on the use of emotion-detecting technology, banning it in cases where it may be used to make important decisions like employee hiring or student acceptance. In addition, the report contained a number of other suggestions regarding a range of topics in the AI field.

The AI Now Institute is a research institute based at NYU whose mission is to study AI’s impact on society. AI Now releases a yearly report presenting its findings on the state of AI research and the ethical implications of how AI is currently being used. As the BBC reported, this year’s report addressed topics like algorithmic discrimination, lack of diversity in AI research, and labor issues.

Affect recognition, the technical term for emotion-detection algorithms, is a rapidly growing area of AI research. Those who employ the technology to make decisions often claim that such systems can draw reliable information about people’s emotional states by analyzing microexpressions, along with other cues like tone of voice and body language. The AI Now Institute notes that the technology is being employed across a wide range of applications, such as determining whom to hire, setting insurance prices, and monitoring whether students are paying attention in class.

Prof. Kate Crawford, co-founder of AI Now, explained that it is often believed that human emotions can be accurately predicted with relatively simple models. Crawford said that some firms are basing the development of their software on the work of Paul Ekman, a psychologist who hypothesized that there are only six basic types of emotions that register on the face. However, Crawford notes that since Ekman’s theory was introduced, studies have found there is far greater variability in facial expressions and that expressions can change across situations and cultures very easily.

“At the same time as these technologies are being rolled out, large numbers of studies are showing that there is… no substantial evidence that people have this consistent relationship between the emotion that you are feeling and the way that your face looks,” said Crawford to the BBC.

For this reason, the AI Now Institute argues that much of affect recognition is based on unreliable theories and questionable science. Hence, it argues, emotion detection systems should not be deployed until more research has been done, and “governments should specifically prohibit the use of affect recognition in high-stakes decision-making processes”. AI Now argued that we should especially stop using the technology in “sensitive social and political contexts”, contexts that include employment, education, and policing.

At least one AI-development firm specializing in affect recognition, Emteq, agreed that there should be regulation that prevents misuse of the tech. The founder of Emteq, Charles Nduka, explained to the BBC that while AI systems can accurately recognize different facial expressions, there is not a simple map from expression to emotion. Nduka did express worry about regulation being taken too far and stifling research, noting that if “things are going to be banned, it’s very important that people don’t throw out the baby with the bathwater”.

As The Next Web reports, AI Now also recommended a number of other policies and norms that should guide the AI industry moving forward.

AI Now highlighted the need for the AI industry to make workplaces more diverse and stated that workers should be guaranteed a right to voice their concerns about invasive and exploitative AI. Tech workers should also have the right to know whether their efforts are being used to build harmful or unethical systems.

AI Now also suggested that lawmakers take steps to require informed consent for the use of any data derived from health-related AI. Beyond this, it advised that data privacy be taken more seriously and that states should work to design privacy laws for biometric data covering both private and public entities.

Finally, the institute advised that the AI industry begin thinking and acting more globally, trying to address the larger political, societal, and ecological consequences of AI. It recommended that a substantial effort be made to account for AI’s impact in terms of geographical displacement and climate, and that governments make the climate impact of the AI industry publicly available.


Facial Recognition

Former Intelligence Professionals Use AI To Uncover Human Trafficking

Business-oriented publication Fast Company reports on recent AI developments designed to uncover human trafficking by analyzing online sex ads.

Kara Smith, a senior targeting analyst with DeliverFund, a group of former CIA, NSA, special forces, and law enforcement officers who collaborate with law enforcement to bust sex trafficking operations in the U.S., gave the publication an example of an ad she and her research colleagues analyzed. In the ad, Molly, a ‘new bunny’ in Atlanta, supposedly “loves her job selling sex, domination, and striptease shows to men.”

In their analysis, Smith and her colleagues found clues that Molly is performing all these acts against her will. “For instance, she’s depicted in degrading positions, like hunched over on a bed with her rear end facing the camera.”

Smith adds other examples, like “bruises and bite marks are other telltale signs for some victims. So are tattoos that brand the women as the property of traffickers—crowns are popular images, as pimps often refer to themselves as “kings.” Photos with money being flashed around are other hallmarks of pimp showmanship.”

Until recently, researchers like Smith had to spot markers like these manually. Then, approximately a year ago, her research group, DeliverFund, received an offer from a computer vision startup called XIX to automate the process with the use of AI.

As explained, “the company’s software scrapes images from sites used by sex traffickers and labels objects in images so experts like Smith can quickly search for and review suspect ads. Each sex ad contains an average of three photos, and XIX can scrape and analyze about 4,000 ads per minute, which is about the rate that new ones are posted online.”

After a relatively slow start (in its first three years of operation it had only three operatives), DeliverFund was able to uncover four pimps. But after staffing up and starting its cooperation with XIX, in just the first nine months of 2019, “DeliverFund contributed to the arrests of 25 traffickers and 64 purchasers of underage sex. Over 50 victims were rescued in the process.” Among its accomplishments, it also provided assistance in the takedown of Backpage.com, “which had become the top place to advertise sex for hire—both by willing sex workers and by pimps trafficking victims.”

It is also noted that “XIX’s tool helps DeliverFund identify not only the victims of trafficking but also the traffickers. The online ads often feature personally identifiable information about the pimps themselves.”

The report explains that “XIX’s computer vision is a key tool in a digital workflow that DeliverFund uses to research abuse cases and compile what it calls intelligence reports.” Based on these reports, DeliverFund has provided intel to 63 different agencies across the U.S., and it also has a relationship with the attorney general’s offices of Montana, New Mexico, and Texas.

The organization also provides “free training to law officers on how to recognize and research abuse cases and use its digital tools. Participating agencies can research cases on their own and collaborate with other agencies, using a DeliverFund system called PATH (Platform for the Analysis and Targeting of Human Traffickers).”

According to the Human Trafficking Institute, about half of trafficking victims worldwide are minors, and Smith adds that “the overwhelming majority of sex trafficking victims are U.S. citizens.”

 
