Don’t ban facial recognition

At approximately 2:50 p.m. on April 15, 2013, two bombs exploded close to the finish line of the Boston Marathon, killing three people and wounding another 250.

The authorities were able to obtain images of the terrorists but were frantically seeking to establish their identities, to catch them before they planted more bombs, which turned out not to be an idle fear. Unable to identify the terrorists over three days, the FBI released their photographs, asking the public to help. Using current versions of facial recognition (FR), law enforcement could have identified them within three minutes.

This astonishing number is not a rhetorical flourish; it is based on a six-month study by a Canadian robbery-investigation unit. In New York City, FR has been used in a variety of ways, including the arrest of a suspected rapist, the arrest of a person who pushed another onto the subway tracks, the identification of a hospitalized woman suffering from Alzheimer’s and the identification of a child sex trafficker sought by the FBI. Over the course of 2018, New York City detectives requested 7,024 FR searches, resulting in 1,851 possible matches and 998 arrests.

Like other new surveillance technologies, FR encounters a storm of concerns. Some critics sound alarms based largely on what they fear FR will lead to, not on observations of what it does. Thus, Jay Stanley, a senior policy analyst at the American Civil Liberties Union, warns that “[t]he ultimate nightmare is that we lose all anonymity when we step outside our homes. If you know where everyone is, you know where they work, live, pray. You know the doctors they visit, political meetings they attend, their hobbies, the sexual activities they engage in and who they’re associating with.” Representative Jim Jordan (R-Ohio) likened FR to George Orwell’s Big Brother from “1984.” Representative Alexandria Ocasio-Cortez (D-N.Y.) believes that “[w]e have a technology that was created and designed by one demographic, that is mostly effective on one demographic, and they’re trying to sell it and impose it on the entirety of the country.”

In response, some localities, namely San Francisco, California, and Somerville, Massachusetts, have already banned the use of FR by their local government agencies. In the nation’s capital, legislators on both sides of the aisle are calling on Congress to ban FR or at least impose a moratorium on its use. Representative Jordan advocates “a timeout.” Representative Elijah Cummings (D-Md.), the chairman of the House Oversight and Reform Committee, is weighing both options.

There may be no way to address the overarching fears all new surveillance technologies raise. But major concerns can be addressed without slowing the use of FR. First, lawmakers should ensure that police guidelines indicate what in effect is already often the case: FR should be used only to identify suspects, not by itself as sufficient cause for arrest or conviction. As Police Commissioner James O’Neill explained of the New York Police Department’s use of FR, the Facial Identification Section is a separate unit of the Detective Bureau. Its investigators examine the matches found by the FR software and, if they identify a strong match, search social media and other publicly available databases to gather more information before passing along their findings. Furthermore, in his words: “the facial identification team will provide only a single such lead to the case detective.”

Another problem arises because some FR software poorly identifies people with dark pigmentation, leading to a high number of false positives and charges of racial discrimination. Other FR software is better at identifying people with dark pigmentation but has more trouble with those of light complexion. Combining the two, or continued development of FR, may well solve the problem. It surely is not a reason to ban FR, given its high value for numerous public goods.

The claim that people have an expectation of privacy when they show their faces in public, and hence that it is unconstitutional to use FR, fails the reasonable person test. Surely most Americans would agree that a police officer holding the picture of a criminal should not be prohibited from scrutinizing people on the street to see whether they are that criminal.

Further, the use of driver’s license pictures, and of pictures posted on those parts of Facebook that are meant to be seen by others (as distinct from family albums and “closed” sites), is consistent with the third-party doctrine, which holds that once a person voluntarily releases information to a third party, that party is free to share it with law enforcement authorities.

If the use of FR is properly supervised, as described above, it fully meets the requirements of the Fourth Amendment, which protects people from “unreasonable searches and seizures.” That is, the amendment recognizes on its face that some searches are reasonable and, thus, fully constitutional.

The courts have repeatedly established that, when the public interest is high and the intrusion into people’s personal lives is small, a search is reasonable. FR, if properly supervised, clearly meets this constitutional test. Indeed, the rate of unsolved crime is very high: 54 percent of violent crimes and 82 percent of other crimes went unsolved, according to 2017 data. Given these high rates of unsolved crime, the threat of terrorism and the waves of human trafficking, and given that FR uses information voluntarily shared by the public and that it is far less invasive than DNA tests or even the routine screening conducted at airports, Congress should ensure that FR is used properly, but it should neither ban nor slow it.

Amitai Etzioni is a university professor and professor of international affairs at The George Washington University. His next book, “Reclaiming Patriotism,” will be published by the University of Virginia Press on September 10.