
Bans on facial recognition are naïve — hold law enforcement accountable for its abuse


The use of facial recognition technology has become a new target in the fight against racism and brutality in law enforcement. Last week, IBM announced it was leaving the facial recognition business altogether, and its CEO questioned whether it was an appropriate tool for law enforcement. Within days, Microsoft and Amazon each announced that they would – at least temporarily – prohibit law enforcement agencies from using their facial recognition software. The Justice in Policing Act, also introduced in Congress last week, would specifically ban facial recognition analysis of officers’ body camera footage.

Facial recognition technologies – with the assumptions of their developers embedded in their code – often perform poorly at recognizing women, older people and those with darker skin. There’s little question that these flaws exist. But banning facial recognition isn’t necessarily the best response.

We do not blind ourselves just because our eyes are imperfect. We learn to calibrate our trust in our vision – or we buy glasses.

Technology is not so different. Even systems with known weaknesses remain important for scaling up public services. Many of us file taxes or apply for benefits on the internet, for example, even though we know such sites are vulnerable to inadvertent or malicious disruptions. Facial recognition has useful government applications as well, including airport security screening, contact tracing and identifying missing children or trafficked people.

The current controversy over facial recognition purports to be about bias – inaccurate results related to race or gender. Those inaccuracies could be fixed in the near future, but that wouldn’t repair the underlying dilemma: the imbalance of power between citizens and law enforcement. On this, facial recognition ups the ante. These tools can strip individuals of their privacy and enable mass surveillance. Civil libertarians may argue that facial recognition violates due process, but there is no guarantee that courts will agree.

Likewise, can police departments be trusted to monitor how their facial recognition tools are used or abused? If harms are discovered, will the criminal justice system be responsive to those whose lives hang in the balance? The massive protests following the killings of George Floyd and Breonna Taylor at the hands of law enforcement suggest that many Americans believe the answer to these questions is no.

If people distrust police officers in their everyday human interactions, how can they be expected to trust those same officers to deploy an imperfect but potentially valuable tool like facial recognition? As a start, we need mechanisms to help independent stakeholders – regulators and the community – detect defects and hold institutions accountable.

For instance, facial recognition tools could be made open to the public for independent review. Algorithmic decisions, at either an individual or systemic level, could be open to challenge from the community. Complaint procedures would need to be fair and efficient, without placing undue burden on those reporting suspected abuse.

Public service institutions also could be discouraged from deploying unexplainable (“black box”) algorithms and encouraged to seek independent evaluations of the equity of their products’ outcomes.

The genie of facial recognition is not going back in the bottle. If IBM, Microsoft and Amazon won’t sell facial recognition technology to police agencies, someone else will. In a free-market society, strong governance is the way to provide a robust defense against its improper use.

Although such broader police reform may prove more difficult to achieve, in the long run it will be more effective than any specific technology ban.

Osonde A. Osoba is an information scientist at the nonprofit, nonpartisan RAND Corporation, co-directs the RAND Center for Scalable Computing and Analysis and is on the faculty of the Pardee RAND Graduate School. Douglas C. Yeung is a behavioral scientist at the nonprofit, nonpartisan RAND Corporation and on the faculty of the Pardee RAND Graduate School.