
How loopholes and opt-outs can tear apart US AI policy

President Joe Biden signs an executive order on artificial intelligence in the East Room of the White House, Monday, Oct. 30, 2023, in Washington. Vice President Kamala Harris looks on at right. (AP Photo/Evan Vucci)

Last month, the White House published new rules establishing how the federal government uses artificial intelligence systems, including baseline protections for safety and civil rights.

Given AI’s well-documented potential to amplify discrimination and supercharge surveillance, among other harms, the rules are urgently needed as federal agencies race to adopt this technology.

The good news is that, for the most part, the new rules issued in a memo by the Office of Management and Budget are clear, sensible and strong. Unfortunately, they also give agencies far too much discretion to opt out of key safeguards, seriously undercutting their effectiveness. 

Before federal agencies head further down the AI road, the Biden administration must make changes to ensure that opt-outs are the exception, not the rule.

But let’s start with the good — and there’s a lot. The OMB memo sets up extensive “minimum risk mitigation practices” that federal agencies must implement. 

Before using AI that may affect people’s rights or safety, agencies must conduct an impact assessment, paying particular attention to the “potential risks to underserved communities,” such as wrongful arrests of racial minorities based on facial recognition errors or denials of benefits to low-income families because of faulty algorithms.

They must also evaluate whether AI is “better suited” than other means to accomplish the same goal — an important threshold, given that AI systems that are not a good fit for their tasks frequently also cause harm. And they must test for real-world performance and mitigate emerging risks through ongoing monitoring.

If agencies fail to implement these practices, or if testing reveals that the AI is unsafe or violates people’s rights, they are prohibited from using the technology. All this underscores the OMB memo’s main principle: When the government can’t guarantee that people are protected from algorithmic harms, AI is off the table.

But given how robust these new rules are, it’s all the more troubling that OMB grants agencies such wide latitude to bypass them.

One loophole permits agencies to waive the minimum practices if they — and they alone — deem that compliance would “increase risks to safety or rights overall,” or “create an unacceptable impediment to critical agency operations.” Such vague criteria are prone to abuse; moreover, it’s difficult to see how practices to mitigate risks could increase them.

Agencies are also given leeway to opt out if they decide the AI is not a “principal basis” of a given decision or action. A similar loophole under New York City’s law to counter AI bias in hiring has undermined its effectiveness. The law requires employers to audit their use of AI-powered hiring tools for racial and gender bias and to post the results, but only if these tools “substantially assist or replace” human decision-making. Few employers have posted audits as a result.

We don’t have to look far to see what broad regulatory exemptions like these can lead to. Government agencies have already integrated AI into a range of functions with few safeguards in place. The results have not been encouraging.

An app used by Customs and Border Protection to screen migrants and asylum seekers, for example, relies on a facial recognition feature that has proven less accurate at identifying people with darker skin tones. As a result, Black migrants have been disproportionately prevented from applying for asylum.

In 2021, the Department of Justice found that the algorithm it uses to assess whether someone should be granted early release from federal prison overpredicted that Black, Asian and Hispanic people would re-offend, making them less likely to qualify.

AI harms have also crept into programs jointly administered by the federal government and states, such as a Medicaid benefit that provides home care support to older people and people with disabilities. More than two dozen states are using algorithms that have been linked to arbitrary and unfair cuts in home care hours, with thousands of beneficiaries inappropriately denied care and some forced to skip medical appointments, forgo meals and sit in urine-soaked clothing.

To make matters worse, decisions to opt out of the OMB’s minimum practices rest solely with “Chief Artificial Intelligence Officers,” agency-designated leads responsible for overseeing the use of AI. These officers must report opt-out decisions to OMB and, in most circumstances, explain them to the public, though exceptions apply when, for example, a decision contains sensitive information. Their decisions, however, are final and not subject to appeal.

And longstanding weaknesses in how agencies police themselves could undermine the chief AI officers’ critical oversight role. To cite one example, the Department of Homeland Security’s privacy and civil rights watchdogs are chronically understaffed and isolated from operational decision-making. Under their watch, the department has shirked basic privacy obligations, and engaged in intrusive and biased surveillance practices of questionable intelligence value.

These flaws need not doom the OMB memo. Federal agencies should limit waivers and opt-outs to truly exceptional situations, ensuring that their exercise of discretion privileges public trust over expediency and secrecy. OMB should also carefully scrutinize such decisions and confirm they are clearly explained to the public. If it finds that waivers and opt-outs have been abused, it should reconsider whether they should be allowed at all.

Ultimately, however, the responsibility to enact comprehensive protections rests with Congress, which can codify these safeguards and establish independent oversight of how they are enforced. The stakes are too high, and the harms too great, to leave broad loopholes in place.

Amos Toh is senior counsel at the Brennan Center for Justice.
