
A better prescription for algorithms


Last week, doctors and staff at Stanford University’s hospital protested against an algorithm used to determine priority for COVID-19 vaccines. The algorithm skipped front-line doctors in favor of high-level administrators and doctors who are not likely to treat COVID-19 patients. Out of 5,000 scheduled doses, only seven were reserved for medical residents and fellows.

Protestors were captured on video in their white coats and blue scrubs, shouting “algorithms suck” — among other, unprintable slogans — while Stanford officials apologized and blamed “unintended missteps.”

Given that algorithms will be used nationwide to apportion vaccines, the time is right to understand how they work. As a clinical law professor who researches data privacy and represents clients impacted by automated decision-making systems, I have seen firsthand how algorithms can reach unfair, incorrect and even discriminatory outcomes if they are not carefully designed and monitored.

An algorithm is a set of mathematical instructions that tells a computer how to complete a task. Algorithms can range from the very simple to the very complex. At the more complex level, some algorithms use machine learning — a form of artificial intelligence — to analyze large sets of data to recognize patterns or make predictions. 
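
To make that definition concrete, here is a minimal, purely hypothetical sketch of a simple algorithm written in Python. The function name and the dollar cutoff are invented for illustration and are not drawn from any real system:

```python
# A very simple algorithm: a fixed rule written as instructions a computer can follow.
# The $10,000 cutoff is invented for illustration.

def flag_for_review(transaction_amount: float) -> bool:
    """Flag any transaction over $10,000 for a human to review."""
    return transaction_amount > 10_000

print(flag_for_review(250.00))     # False
print(flag_for_review(15_000.00))  # True
```

A machine-learning system replaces that hand-written rule with patterns inferred from large amounts of historical data, which is part of what makes its decisions harder to inspect.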

Algorithms fuel the automated decision-making systems that govern modern life. Some uses are fairly benign, such as Netflix’s movie recommendations or romantic matches on a dating application. Others, however, are quite consequential. Algorithms determine your credit score, whether you can access housing and employment, what you will pay for insurance, and even whether the police will consider you a suspect.

Computer outputs have a veneer of objectivity and neutrality, giving rise to what New York University professor Meredith Broussard calls “techno-chauvinism,” or the misguided belief that technological solutions are inherently superior. The reality is that human beings impart their own conscious and implicit biases, along with human error, into the design of algorithms. 

At each stage of building an algorithm — and there are many — people must exercise their judgment and make decisions. Developers must determine which variables are linked to the desired outcome, and these decisions involve value judgments. This seems to have been the problem at Stanford, where the algorithm penalized younger people and required location data that medical residents could not provide because they rotate through multiple departments. And no one tested the algorithm for fairness before it was used.
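
To illustrate the kind of failure described above, here is a deliberately oversimplified, hypothetical Python sketch of a scoring rule that weights age and rewards a fixed department assignment. Every field, weight and record is invented for illustration; this is not Stanford’s actual algorithm:

```python
# Hypothetical vaccine-priority scoring rule; all names, weights and data are invented.

def vaccine_score(employee):
    """Higher score = earlier vaccination under this made-up rule."""
    score = employee["age"]                 # weighting raw age penalizes younger staff
    if employee["department"] is not None:  # credit for a fixed clinical location...
        score += 40                         # ...which rotating residents cannot claim
    return score

staff = [
    {"role": "Administrator working remotely", "age": 58, "department": "Finance"},
    {"role": "Resident treating COVID-19 patients", "age": 29, "department": None},
]

for person in sorted(staff, key=vaccine_score, reverse=True):
    print(person["role"], vaccine_score(person))
# Administrator working remotely 98
# Resident treating COVID-19 patients 29
```

Nothing in that code is malicious; the unfairness comes from the value judgments baked into the weights, which is why testing for fairness before deployment matters.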

Algorithms also can lead to unfair outcomes because the massive sets of data upon which they rely inevitably contain some inaccuracies and omissions. They also can lock in the disparities of the past, replicating biases against groups that are protected under civil rights laws, even when their makers have no intent to discriminate. More often than not, the people making design and data selection decisions do not reflect the diversity of society, given the demographics of Silicon Valley.
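
A tiny, hypothetical sketch shows how this happens mechanically: the “model” below simply learns each group’s historical hiring rate and then carries that disparity forward. All data and group labels are invented:

```python
# Hypothetical example of an algorithm "learning" historical bias; the data is invented.
from collections import defaultdict

# Past hiring records: (group, was_hired)
history = [("A", True), ("A", True), ("A", True), ("A", False),
           ("B", True), ("B", False), ("B", False), ("B", False)]

hired, total = defaultdict(int), defaultdict(int)
for group, was_hired in history:
    total[group] += 1
    hired[group] += was_hired

rates = {g: hired[g] / total[g] for g in total}   # {'A': 0.75, 'B': 0.25}

def recommend_interview(group, cutoff=0.5):
    """Recommend candidates only from groups with a high historical hire rate."""
    return rates[group] >= cutoff

print(recommend_interview("A"), recommend_interview("B"))  # True False
```

No one told the program to discriminate; it simply reproduced the pattern in the data it was given.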

Biased outcomes result. For example, Harvard professor Latanya Sweeney discovered that Google searches for her name and other names typically associated with Black people resulted in advertisements offering links to criminal records, while searches for “white-sounding names” did not.

Researchers Joy Buolamwini and Timnit Gebru uncovered that facial recognition technology has error rates as high as 34 percent for women of color, while it is 99 percent accurate for white men. The training data sets that the algorithms learned from were predominantly images of white men.

When Apple rolled out a new credit card, a prominent software developer tweeted that his credit limit was 20 times that of his wife, despite her higher credit score and their joint tax returns and accounts. Other customers reported similar gender disparities, and yet no explanation was forthcoming. New York’s Department of Financial Services is investigating.

These stories are the tip of the iceberg. Far more frequently, people have no idea that algorithms, rather than people, decided whether they got a loan, a job, or a home. Even with this knowledge, the internal operations of algorithms are a “black box,” shrouded in secrecy. And when challenged, many companies that deploy algorithmic systems claim trade secrecy protection, making it difficult to peer inside the black box to ensure it is fair and accurate. 

Meanwhile, there is little government oversight over the adoption and use of algorithms. Legal remedies are few because the United States, unlike our counterparts in Europe, lacks a comprehensive data privacy law that would establish a baseline of algorithmic accountability.

In protesting the algorithm, the Stanford residents had the right diagnosis: “It just doesn’t make sense.” They also had the right prescription: a voice in how the algorithm is developed. Indeed, the public has every right to know how vaccine algorithms — and all the algorithms impacting our lives — are being designed and deployed, and to demand a seat at the table when they are adopted.

Michele Gilman is the Venable Professor of Law at the University of Baltimore School of Law, where she directs the Civil Advocacy Clinic. She is also an affiliate at Data & Society Research Institute. Follow her on Twitter @profmgilman.