
Worried about the dangers of AI? They’re already here.


On May 16, the U.S. Senate Subcommittee on Privacy, Technology, and the Law held a hearing to discuss regulation of artificial intelligence (AI) algorithms. The subcommittee's chair, Sen. Richard Blumenthal (D-Conn.), said that "artificial intelligence urgently needs rules and safeguards to address its immense promise and pitfalls." During the hearing, OpenAI CEO Sam Altman stated, "If this technology goes wrong, it can go quite wrong."

As the capabilities of AI algorithms have become more advanced, some voices in Silicon Valley and beyond have been warning of the hypothetical threat of “superhuman” AI that could destroy human civilization. Think Skynet. But these vague concerns have received an outsized amount of airtime, while the very real, concrete but less “sci-fi” dangers of AI bias are largely ignored. These dangers are not hypothetical, and they’re not in the future: They’re here now.

I am an AI scientist and physician who has focused my career on understanding how AI algorithms could perpetuate biases in the medical system. In a recent publication, I showed that previously developed AI algorithms for identifying skin cancer perform worse on images of brown and Black skin, which could lead to misdiagnoses in patients of color. These dermatology algorithms aren't in clinical practice yet, but many companies are working on securing regulatory approval for AI in dermatology applications. In speaking to companies in this space as a researcher and adviser, I've found that many have continued to underrepresent diverse skin tones when building their algorithms, despite research that shows how this could lead to biased performance.

Outside of dermatology, medical algorithms that have already been deployed have the potential to cause significant harm. A 2019 paper published in Science analyzed the predictions of a proprietary algorithm already deployed on millions of patients. This algorithm was meant to help predict which patients have complex needs and should receive extra support, by assigning every patient a risk score. But the study found that for any given risk score, Black patients were actually much sicker than white patients. The algorithm was biased: When its recommendations were followed, fewer resources were allocated to Black patients who should have qualified for extra care.
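To make that finding concrete: the kind of bias the study describes can be surfaced by a simple audit that compares how sick patients in each group actually are at the same risk score. Below is a minimal sketch in Python of such a check; the record fields, group labels, and numbers are hypothetical illustrations, not data from the actual study or the proprietary algorithm.

```python
from collections import defaultdict

def mean_burden_by_score(records, score_key="risk_score",
                         group_key="group", burden_key="chronic_conditions"):
    """Average a health-burden measure for each (risk-score decile, group)
    pair. If one group is consistently sicker at the same score, the score
    understates that group's needs."""
    sums = defaultdict(lambda: [0.0, 0])  # (decile, group) -> [sum, count]
    for r in records:
        decile = min(int(r[score_key] * 10), 9)  # bucket scores in [0, 1]
        key = (decile, r[group_key])
        sums[key][0] += r[burden_key]
        sums[key][1] += 1
    return {k: total / n for k, (total, n) in sums.items()}

# Hypothetical records for illustration only.
records = [
    {"risk_score": 0.62, "group": "A", "chronic_conditions": 3},
    {"risk_score": 0.61, "group": "B", "chronic_conditions": 5},
    {"risk_score": 0.30, "group": "A", "chronic_conditions": 1},
    {"risk_score": 0.33, "group": "B", "chronic_conditions": 2},
]
for (decile, group), avg in sorted(mean_burden_by_score(records).items()):
    print(f"score decile {decile}, group {group}: avg burden {avg:.1f}")
```

In this toy data, group B carries a heavier illness burden than group A at the same risk score, which is the pattern the Science study reported for Black versus white patients.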

The risks of AI bias extend beyond medicine. In criminal justice, algorithms have been used to predict which individuals who have previously committed a crime are most at risk of re-offending within the next two years. While the inner workings of this algorithm are unknown, studies have found that it is racially biased: Black defendants who did not go on to re-offend were incorrectly labeled high risk at roughly double the rate of white defendants who did not re-offend. AI-based facial recognition technologies are known to perform worse on people of color, and yet they are already in use and have led to arrests and jail time for innocent people. For Michael Oliver, one of the men wrongfully arrested due to AI-based facial recognition, the false accusation caused him to lose his job and disrupted his life.
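That disparity is a gap in false positive rates between groups, which an outside auditor can measure without any access to the algorithm's inner workings, given only its predictions and the eventual outcomes. A minimal sketch, with hypothetical labels (True meaning flagged as high risk, or actually re-offended):

```python
def false_positive_rate(predictions, actuals):
    """Share of people who did NOT re-offend but were flagged high risk."""
    flagged = sum(1 for p, a in zip(predictions, actuals) if p and not a)
    negatives = sum(1 for a in actuals if not a)
    return flagged / negatives if negatives else float("nan")

# Hypothetical predictions and outcomes for two groups, illustration only.
for group, (preds, actuals) in {
    "group A": ([True, True, False, False], [False, True, False, False]),
    "group B": ([True, True, True, False], [False, True, False, False]),
}.items():
    print(group, f"FPR = {false_positive_rate(preds, actuals):.2f}")
```

Run on the toy data above, group B's false positive rate comes out double group A's: the same shape of disparity the recidivism studies documented.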

Some say that humans themselves are biased and that algorithms could provide more "objective" decision-making. But when these algorithms are trained on biased data, they reproduce the same biased outputs as human decision-makers in the best case, and further amplify those biases in the worst. Yes, society is already biased, but don't we want to build our technology to be better than the current broken reality?

As AI continues to permeate more avenues of society, it isn't the Terminator we have to worry about. It's us, and the models that reflect and entrench the most unfair aspects of our society. We need legislation and regulation that promote deliberate and thoughtful model development and testing, ensuring that technology leads to a better world rather than a more unfair one. As the Senate subcommittee continues to ponder the regulation of AI, I hope its members realize that the dangers of AI are already here. These biases, in algorithms already deployed and in those to come, must be addressed now.

Roxana Daneshjou, MD, Ph.D., is a board-certified dermatologist and a postdoctoral scholar in Biomedical Data Science at Stanford School of Medicine. She is a Paul and Daisy Soros fellow and a Public Voices fellow of The OpEd Project. Follow her on Twitter @RoxanaDaneshjou.
