22 June 2020

Inherent bias is a bigger threat than privacy in facial recognition

The Black Lives Matter movement has pushed facial recognition technology into the spotlight once more, with privacy concerns being overshadowed by a renewed focus on its inherent bias. Nick Finegold, Founder & CEO of Curation, explores the situation and the actions of the tech giants.

IBM will stop selling facial recognition services and has called for a national debate on the technology’s future use. In a letter, CEO Arvind Krishna wrote that the company opposes, and will not condone, uses of the technology, including versions offered by other vendors, that could lead to “violations” of human rights, and that a “national dialogue” is needed. AI systems, especially those used in law enforcement, should be tested for bias in a transparent way, he added.

Why does this matter?

IBM had been attempting to solve this issue, last year adding one million diverse faces to its dataset to broaden its system and better train its AI. Its pledge to stop selling the technology could suggest that this effort did not result in significant improvements or, as TechCrunch points out, that the technology simply was not making IBM money.

Amazon’s Rekognition facial recognition technology is perhaps more notorious and controversial because of its association with law enforcement. This week Amazon also moved to curb its use in policing, for at least one year, saying it would do so until stronger regulations are developed and amid criticism of the technology’s disproportionate effect on minorities.

A study by Comparitech found Rekognition wrongly matched police mugshots with 105 politicians in the US and the UK, and MIT research has previously shown that it performs more poorly on darker-skinned women.

Such issues led AI researchers to write an open letter to Amazon last year calling on it to stop selling Rekognition to police agencies. Amazon’s shareholders subsequently rejected a proposal to limit sales of the technology to government agencies and law enforcement.

It’s worth noting that police body cameras in the US were introduced with the aim of increasing the accountability of officers, and their adoption accelerated over the years in response to Black Lives Matter protests. Their effectiveness has since been questioned, so regulators seeking to tackle US policing issues may have to address both human behaviour and machine learning if they are to improve police transparency and reduce bias.

Microsoft also announced this week that it will wait until federal laws are in place in the US before selling its facial recognition technology to the police. The move prompted angst from President Donald Trump, who took to social media to endorse banning the tech giant from federal contracts.

Google does not currently sell facial recognition products, but last year it employed contractors who were reported to have duped black homeless people in Atlanta into having their likenesses captured for facial-recognition research.

It’s not just facial recognition systems that display inherent bias. Speech recognition systems from Amazon, Apple, Google, IBM and Microsoft have been shown to make fewer mistakes with white users than with black users. AI hiring systems have also been argued to be biased by default, and research is now underway to tackle this issue.

Nick Finegold is the Founder & CEO of Curation, an emerging and peripheral risks monitoring service.
