Last week, the New York Times reported the first known case in the United States of a wrongful arrest caused by a flawed algorithm. The incident took place in Detroit, where Robert Julian-Borchak Williams, an African-American man, was arrested after being falsely matched to security footage of an individual shoplifting.
The American Civil Liberties Union (ACLU) quickly filed a complaint against the Detroit police. After the ACLU pushed for Williams’ case to be dismissed and for his information to be removed from Detroit’s criminal databases, prosecutors did both.
This development is the first of its kind in the United States, and it highlights some serious concerns that are beginning to arise throughout the globe with the use of facial recognition technology by the state.
Problems With Facial Recognition Systems
Facial recognition systems have long been controversial, and they have become an increasingly prominent point of debate among those concerned about privacy and false accusations.
With the recent protests throughout the country and in many parts of the world against police brutality and discrimination, that scrutiny has only increased.
These algorithms have introduced an entirely new dimension to law enforcement, and they remain prone to error, particularly misidentification.
The False Accusation
The shoplifting incident that Williams was falsely identified as having committed took place in October 2018. The surveillance video was then uploaded to Michigan’s statewide facial recognition database in March 2019.
Williams’ photo was then included in a photo lineup, and a security guard identified him as the person who committed the crime. According to the ACLU, that guard never actually witnessed the incident firsthand.
In January, Williams received a phone call from the Detroit Police Department informing him that he was under arrest. When he dismissed the call as a prank, the police arrived at his home just an hour later.
Williams was then taken to a detention center, where police took his mugshot, fingerprints, and a DNA sample, and he was held at the station overnight. What followed was an interrogation for a crime he never committed, all because of a faulty facial recognition system.
Williams’ case was dismissed two weeks after his arrest, but the incident has far larger implications for privacy. As governments and law enforcement agencies adopt facial recognition software more widely, this case could mark the beginning of serious violations of the kind already unfolding in nations like China but which, as far as the public knows, have yet to arrive in the U.S.
One such consequence is that Williams’ DNA sample, mugshot, and fingerprints are now all on file as a direct result of the technology. Not only that, but his arrest is on the record.
Private Companies and Law Enforcement
Williams’ case comes as major companies like IBM, Microsoft, and Amazon have stopped or suspended providing their facial recognition technology to law enforcement.
IBM was the first major company to do so: CEO Arvind Krishna sent a letter to Congress announcing that the company would no longer offer general-purpose facial recognition or analysis software. On top of that, IBM halted its research and development of the technology.
“IBM firmly opposes and will not condone uses of any [facial recognition] technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and Principles of Trust and Transparency,” according to the letter. “We believe now is the time to begin a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies.”
Amazon followed suit just days after IBM’s decision, announcing a one-year moratorium on police use of its Rekognition facial recognition platform.
One of the most influential pieces of work on discrimination in facial recognition technology is “Gender Shades,” a 2018 paper co-authored by Joy Buolamwini, a researcher at the MIT Media Lab, and Timnit Gebru, then a researcher at Microsoft Research.
The paper found, among other things, that “machine learning algorithms can discriminate based on classes like race and gender.”
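To make the idea of such disparities concrete, here is a minimal sketch, not the paper’s actual code or data, of how an audit might compare a model’s error rates across demographic groups. The function name, group labels, and toy records are all hypothetical.

```python
# Hypothetical illustration of a demographic error-rate audit: a model that
# performs equally well overall can still err far more often on one subgroup.
from collections import defaultdict

def error_rates_by_group(records):
    """records: iterable of (group, true_label, predicted_label) tuples."""
    totals = defaultdict(int)
    errors = defaultdict(int)
    for group, truth, pred in records:
        totals[group] += 1
        if truth != pred:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Toy data (invented for illustration): zero errors on group_a,
# two errors out of four predictions on group_b.
audit = [
    ("group_a", "match", "match"),
    ("group_a", "no_match", "no_match"),
    ("group_a", "match", "match"),
    ("group_a", "no_match", "no_match"),
    ("group_b", "match", "no_match"),
    ("group_b", "no_match", "match"),
    ("group_b", "match", "match"),
    ("group_b", "no_match", "no_match"),
]

print(error_rates_by_group(audit))
# {'group_a': 0.0, 'group_b': 0.5}
```

An aggregate accuracy figure would hide this gap entirely, which is why per-group breakdowns of the kind the paper reported matter for evaluating these systems.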
The case of Robert Julian-Borchak Williams is deeply concerning for many who live in the United States, but it is also an indicator of what is taking place around the globe. The use of facial recognition technology by governments and law enforcement agencies is only beginning, and there is very little in place to prevent it from being used unethically. Whether it is China’s widespread use of the technology for surveillance or Williams’ false identification in the United States, facial recognition exposes people worldwide to a set of privacy and human rights violations that previously did not exist.