
Runaway bureaucracy could make common uses of AI worse, even mail delivery

A letter carrier is bundled up in winter clothing while making afternoon deliveries during a cold weather snap, Friday, Feb. 3, 2023, in Portsmouth, N.H. (AP Photo/Charles Krupa, File)

You probably don’t think much about it, but it’s still a minor miracle. You write a letter, slide it in an envelope, affix a stamp and drop it in a mailbox. Within days, the letter finds its way to where it’s addressed, with a trivial error rate.

Other than waving to your mail carrier in the afternoon, you never see the exquisitely complex processes that allow the U.S. Postal Service to deliver more than 400 million pieces of mail every day. You just know that it’s cheap, reliable and easy. The stamp probably cost you just 66 cents.

But the White House’s new rules on artificial intelligence, unless clarified, could degrade the quality of government operations as basic — and uncontroversial — as delivering the mail.

President Biden’s executive order on artificial intelligence takes a light touch when it comes to AI in the private sector. Not so for the public sector. 

In rules newly proposed under the executive order, the White House is looking to place strict safeguards on how government agencies use AI.

On the one hand, there’s much to admire. Many potential government uses of artificial intelligence are risky: incorporating AI into air traffic control, for example, or using it to improve the nation’s electrical grid. The incautious use of AI can also infringe on people’s rights. In Michigan, for instance, broken computer code led to 34,000 people being falsely accused of defrauding the unemployment insurance system. Some victims faced a bill of $100,000.

But where the rules could go astray is in their breadth. The rules prohibit the use of AI in “government services” unless agencies comply with an extensive set of procedures. Those procedures include public consultation with outside groups, the creation of a mechanism to appeal the AI’s decision, and a requirement to allow individuals to opt out of AI review.

Imposing those requirements across the board — to all government services — would be a huge blunder.

To see why, keep in mind that categorization is a big part of what government does. The Internal Revenue Service has to decide which tax returns have math errors and require follow-up. The Social Security Administration must distinguish claimants who are disabled from those who are not. Veterans Affairs (VA) physicians have to figure out what medical tests to order for which patients. And so on.

All these tasks are a little like delivering the mail, in that they require the government to categorize things that, at first blush, look pretty similar. And the government makes millions of categorization decisions every year. As one VA official says, “Right now, each time you breathe out, the VA just produced an expert medical opinion on a claim.”
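To make the idea concrete, here is a toy sketch of one such categorization decision: an arithmetic check on a tax return. The data fields and tolerance are invented for illustration and do not describe any real IRS system.

```python
# A toy illustration of a government "categorization" task: flag tax
# returns whose reported total does not match the sum of the line items.
# The Return fields and tolerance are invented; no actual IRS process
# is described here.
from dataclasses import dataclass

@dataclass
class Return:
    taxpayer_id: str
    line_items: list[float]   # individual income entries
    reported_total: float     # the total the filer wrote down

def needs_follow_up(r: Return, tolerance: float = 0.01) -> bool:
    """Categorize a return: does its arithmetic check out?"""
    return abs(sum(r.line_items) - r.reported_total) > tolerance

returns = [
    Return("A-1", [30_000.0, 1_250.0], 31_250.0),  # consistent
    Return("B-2", [42_000.0, 800.0], 44_000.0),    # math error: follow up
]
flagged = [r.taxpayer_id for r in returns if needs_follow_up(r)]
print(flagged)  # ['B-2']
```

The point of the sketch is the shape of the task, not the rule itself: a simple, repeatable test applied to millions of superficially similar cases, which is exactly where automation pays off.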

Historically, humans have made those decisions. But humans are expensive, fallible and slow. Sometimes, AI can help do the job faster, more accurately, and at less taxpayer expense.

This is already happening. Veterans Affairs hospitals use AI to analyze video and quickly detect patient seizures. The Social Security Administration employs a tool to help its judges spot errors in draft decisions for disability benefits. And since 1965, the Postal Service has relied on a crude form of AI to read ZIP codes, routing letters and verifying postage.
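For a sense of what that “crude form of AI” involves, here is a minimal sketch of digit recognition in the same spirit, using scikit-learn’s toy digits dataset and a nearest-neighbor classifier. It is an illustration only; the Postal Service’s actual optical character readers are far more sophisticated and are not described here.

```python
# A minimal sketch of ZIP-code-style digit recognition, in the spirit of
# the optical character readers the Postal Service has used for decades.
# This uses scikit-learn's toy digits dataset, not USPS technology.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

digits = load_digits()  # 8x8 grayscale images of handwritten digits 0-9
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0
)

clf = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.3f}")  # roughly 0.99

# Reading a five-digit ZIP code is, at heart, five of these
# classifications in a row, repeated millions of times a day.
zip_digits = clf.predict(X_test[:5])  # five arbitrary test images
print("decoded digits:", "".join(map(str, zip_digits)))
```

Even this toy classifier gets most digits right, which hints at why machine sorting so decisively beat hand sorting on cost and speed.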

The new rules could put such modernization efforts in jeopardy. Read literally, the rules might require the Postal Service, for example, to conduct an “impact assessment,” launch a public outreach campaign to “affected groups,” set up an appeal process, provide explanations for the AI’s decisions and allow people to opt out of the computerized review of their letters.

The example may seem absurd, but that’s the point. The new rules could tie hundreds of agencies up in red tape for no obvious reason. While agencies can waive the application of the rules, the culture of risk aversion in the alphabet soup of agencies is infamous. The do-it-by-the-book mentality is why former U.S. Deputy Chief Technology Officer Jen Pahlka writes that “government is failing in the digital age.”

The White House’s approach assumes that the adoption of AI is inherently riskier than the status quo. But there are many benign and valuable uses of technology. Agencies need the freedom to experiment with those uses without getting snarled in bureaucracy.

The deepest irony is that the harms will fall hardest on the people the rules are supposed to protect. Roughly 1 in 5 Americans are on Social Security. Some 46 million people relied on unemployment insurance benefits during the pandemic. More than 5 million veterans get disability benefits for service-related conditions.

Most of these benefit programs rely on clunky, outdated computer systems. Backlogs and errors are common. Deploying AI could help millions of people promptly receive the benefits to which they are entitled.

Although the Biden administration gets that in the abstract, agencies will read these rules to send a very different message — that it’s worse to try and fail with AI than never to try at all.

Daniel E. Ho is a professor of law, political science, and, by courtesy, of computer science at Stanford University. Nicholas Bagley is a professor at the University of Michigan Law School.
