Bias in Facial Recognition Technology

Neha Bhardwaj. 07/10/2021


Facial recognition technology works by detecting and matching key points on a person’s face, as shown in the picture. However, people of color in particular face much higher error rates with this software. (ACLU)


For years, bias in policing and law enforcement has been a fiercely debated issue. Last year, the death of George Floyd provoked immense outcry, fueling a resurgence of the Black Lives Matter movement and calls for radical reform of the criminal justice system. However, while there has been extensive discussion of the implicit biases and discriminatory actions of human police officers, few recognize another law enforcement tool in desperate need of reform: facial recognition technology.

Most of us interact with facial recognition on a daily basis in relatively benign ways, such as unlocking our phones or tagging our friends in a post on social media. However, on a broader scale, this software is used in everything from recognizing known criminals to diagnosing certain diseases. As the use of AI technology has expanded, so too have concerns about the potential biases in the software.
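To make the mechanics concrete, here is a minimal sketch of that pipeline (detect a face, encode its key points as a numeric vector, and compare vectors by distance) using the open-source face_recognition library. The image filenames are illustrative placeholders; 0.6 is simply the library’s conventional default match threshold.

```python
# Minimal sketch of the standard face recognition pipeline using the
# open-source `face_recognition` library (built on dlib).
# The image filenames below are hypothetical placeholders, and the code
# assumes exactly one face is found in each image.
import face_recognition

# 1. Load a known reference photo and an unknown photo to check.
known_image = face_recognition.load_image_file("known_person.jpg")
unknown_image = face_recognition.load_image_file("unknown_person.jpg")

# 2. Encode each face as a 128-dimensional vector summarizing its key points.
known_encoding = face_recognition.face_encodings(known_image)[0]
unknown_encoding = face_recognition.face_encodings(unknown_image)[0]

# 3. Compare encodings: a smaller Euclidean distance means a likelier match.
distance = face_recognition.face_distance([known_encoding], unknown_encoding)[0]
is_match = distance < 0.6  # the library's conventional default threshold

print(f"distance={distance:.3f}, match={is_match}")
```

Everything from phone unlocking to photo tagging is a variation on this compare-by-distance step; what changes is how many faces are in the database being compared against and what happens after a match is declared.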

According to Harvard University’s Science in the News, while facial recognition boasts an impressive accuracy rate for certain groups, the technology is plagued by a far higher rate of errors among young, Black, and female subjects. In 2018, the “Gender Shades” project offered strong experimental evidence of this bias. The algorithms tested in the experiment showed error rates up to 34 percentage points higher for darker-skinned female subjects than for lighter-skinned males, putting a global spotlight on the implications of using facial recognition technology. A year later, a federal study by the National Institute of Standards and Technology (NIST) documented further egregious disparities on the basis of race, gender, and age. For instance, “Asian and African American people were up to 100 times more likely to be misidentified than white men,” according to the Washington Post. Other minority groups, women, the elderly, and children were all subject to drastically higher rates of inaccuracy than middle-aged white men.
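Disparities like these are measured by breaking errors out per demographic group. The sketch below uses made-up records purely for illustration (none of these numbers come from Gender Shades or NIST), but it shows the basic arithmetic behind such audits.

```python
# Illustrative audit: compute per-group error rates from labeled predictions.
# All records below are synthetic placeholders, not results from any study.
from collections import defaultdict

# Each record: (demographic group, true identity, predicted identity)
predictions = [
    ("lighter_male",  "A", "A"), ("lighter_male",  "B", "B"),
    ("darker_female", "A", "B"), ("darker_female", "B", "B"),
    ("darker_female", "C", "A"),
]

totals, errors = defaultdict(int), defaultdict(int)
for group, truth, predicted in predictions:
    totals[group] += 1
    if predicted != truth:
        errors[group] += 1

for group in totals:
    rate = errors[group] / totals[group]
    print(f"{group}: error rate = {rate:.1%}")
# A gap between groups' rates is exactly the kind of disparity
# Gender Shades surfaced (up to ~34 percentage points).
```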

The implications of these biases are staggering. As pointed out by Jay Stanley of the American Civil Liberties Union, “One false match can lead to missed flights, lengthy interrogations, tense police encounters, false arrests, or worse.” In the realm of law enforcement in particular, not only is the technology itself biased, but several compounding factors further disadvantage Black and Brown people. Many jurisdictions in the U.S. build facial recognition systems around mugshot databases, both to train the algorithms and as the galleries that searches run against, subjecting the technology to the prejudices of years past. As the American Civil Liberties Union explains, “Since Black people are more likely to be arrested than white people for minor crimes… their faces and personal data are more likely to be in mugshot databases. Therefore, the use of face recognition technology tied into mugshot databases exacerbates racism in the criminal legal system that already disproportionately polices and criminalizes Black people.” Moreover, surveillance cameras are installed at higher rates in predominantly Black and Brown neighborhoods, so residents of those neighborhoods are disproportionately exposed to facial recognition searches in the first place. Ultimately, these realities boil down to an undeniable truth: facial recognition is contributing to inequality in everything from suspect identification to alibi corroboration.
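The compounding effect of a skewed database is easy to see in a toy simulation. In the sketch below, every parameter is invented for illustration; the point is that even a matcher whose per-comparison false-match rate is identical across groups will implicate members of an overrepresented group more often, simply because that group supplies more candidates in the gallery being searched.

```python
# Toy simulation: a gallery (mugshot database) that overrepresents one
# group yields more false matches against that group, even though the
# per-comparison false-match probability is identical for everyone.
# All parameters below are invented for illustration.
import random

random.seed(0)
FALSE_MATCH_PROB = 0.001  # same for every comparison: no model bias at all
gallery = ["group_a"] * 8000 + ["group_b"] * 2000  # skewed 80/20 gallery

false_matches = {"group_a": 0, "group_b": 0}
for _ in range(100):  # 100 innocent probe searches against the gallery
    for entry_group in gallery:
        if random.random() < FALSE_MATCH_PROB:
            false_matches[entry_group] += 1

print(false_matches)  # roughly 4x more false matches land on group_a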

This naturally raises the question: how can the world of facial recognition be improved? The primary solution is improving training datasets. When facial recognition algorithms are created, they are “trained” on large databases of various people’s faces. However, these datasets often lack diversity: the underrepresentation of Black, Brown, female, and young subjects makes it harder for the algorithms to accurately distinguish between faces within those groups. As a result, many are calling for algorithms to be trained on more representative and comprehensive datasets. Other solutions include establishing image quality standards that work across all skin tones, regular auditing, and new legislation to regulate how mass surveillance is conducted and how law enforcement officers use the technology.
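Auditing a training set for the underrepresentation described above is, at its core, a counting exercise. The sketch below assumes a hypothetical metadata file (training_set_metadata.csv, with a demographic_group column) and an arbitrary 10% threshold; both are stand-ins, not an established standard.

```python
# Sketch of a simple training-set representation audit. The CSV filename,
# column name, and 10% threshold are hypothetical choices for illustration.
import csv
from collections import Counter

MIN_SHARE = 0.10  # flag any group below 10% of the dataset

with open("training_set_metadata.csv", newline="") as f:
    groups = Counter(row["demographic_group"] for row in csv.DictReader(f))

total = sum(groups.values())
for group, count in sorted(groups.items()):
    share = count / total
    flag = "  <-- underrepresented" if share < MIN_SHARE else ""
    print(f"{group}: {count} images ({share:.1%}){flag}")
```

An audit like this only catches the gaps it is told to look for, which is why advocates pair it with the external, recurring audits and regulation mentioned above.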

Facial recognition software is undeniably a powerful tool with great potential, and the bias embedded in it today is not a reason to scrap the technology altogether. Rather, it is a reason to reshape how facial recognition is built, audited, and deployed, so that it becomes a tool for equality and justice instead of a source of further inequity.

Cover Photo: (IEEE Spectrum)

