Rise of the tax machines: IRS algorithms are coming for you

Nominee for Commissioner of the Internal Revenue Service Daniel Werfel answers questions during his Senate Finance Committee nomination hearing on Wednesday, February 15, 2023. (Greg Nash)

New research has cast a harsh spotlight on digital discrimination stemming from the Internal Revenue Service’s (IRS) use of algorithms and artificial intelligence — a key factor in the disproportionate auditing of poor families and taxpayers of color. Earlier this week, the Senate questioned President Biden’s nominee for IRS commissioner, Danny Werfel, about how an IRS algorithm targeted Black taxpayers for audit up to 4.7 times as often as taxpayers of other races. Amid the controversy over billions of dollars in new IRS enforcement funding, Congress should recognize that letting computers decide whom to audit could destroy more lives than an army of newly hired IRS agents ever could.

Black taxpayers are between three and five times more likely to be audited than taxpayers of other races — a disparity that is largely driven by an IRS algorithm, according to a study published last month by the Stanford Institute for Economic Policy Research. The study, based on an analysis of IRS data, highlights an important development the agency would like to keep hidden: its escalating use of computer algorithms and artificial intelligence (AI) to drive its enforcement agenda without transparency or meaningful human supervision.

When confronted with this study at his confirmation hearing, Werfel committed to providing the Senate with a report on the offending algorithm within 60 days. That cannot be the end of congressional oversight of the IRS’s AI program. Congress should mandate that the IRS establish effective and transparent safeguards on its use of AI so that biased algorithms cannot disproportionately target certain taxpayers.

Last year, the IRS received $45.6 billion in new enforcement funding on top of its annual budget. The windfall has generated considerable uproar, most of it focused on whether the IRS will use the money to hire thousands of new revenue agents to harass individual taxpayers. That criticism misses the point: Like many businesses and governments, the IRS is more likely to spend its money on machine learning and data analytics than on thousands of human agents.

Artificial intelligence helps auditors analyze transactions and spot patterns that can indicate potential fraud, using technology to help maximize revenue. But the rise of AI-driven decision-making at the IRS creates far greater perils than staffing up with human capital. Algorithms are often “black boxes,” and the reasons for the decisions they make can be difficult or impossible to determine. What’s more, the IRS shrouds its algorithms in the cloak of bureaucratic secrecy. Taxpayers may never know whether their fates were determined by a computer with no effective human safeguards.
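
To make that opacity concrete, here is a minimal sketch in Python of the kind of black-box pattern-spotting described above, built on an off-the-shelf isolation forest. Every feature name, number and threshold is hypothetical; the IRS does not publish its actual models or inputs.

    # Hypothetical sketch: flag "anomalous" tax returns with a black-box model.
    # All features and data are invented; this is not the IRS's actual method.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)

    # Invented features per return: reported income, total deductions,
    # refundable credits claimed.
    returns = rng.normal(loc=[60_000, 8_000, 1_200],
                         scale=[25_000, 4_000, 900],
                         size=(10_000, 3))

    model = IsolationForest(contamination=0.05, random_state=0).fit(returns)
    flags = model.predict(returns)          # -1 = flagged for review, 1 = not
    scores = model.score_samples(returns)   # lower = more anomalous

    # The model emits a flag and a score but no human-readable reason --
    # the opacity described above.
    print(f"{(flags == -1).sum():,} of {len(returns):,} returns flagged")

The point of the sketch is what it lacks: nothing in the output tells a taxpayer, or even an IRS examiner, why a particular return was flagged.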

A simple example illustrates the point. In 2012, the Treasury Inspector General for Tax Administration (TIGTA) studied an IRS algorithm that determined whether taxpayers were entitled to relief from certain penalties. TIGTA’s report found that in 89 percent of the cases where a taxpayer should have had penalties removed, the IRS algorithm got it wrong and kept the penalties in place. This meant taxpayers were assessed severe penalties that the IRS never should have pursued. Even more disturbing, not a single one of these incorrect determinations was corrected by a human IRS employee — a 100 percent failure rate for human supervision of the algorithm’s decisions. But for the TIGTA investigation, nobody – least of all the affected taxpayers – would ever have known about the damage caused by the rogue IRS algorithm.

President Biden and other officials have promised that the extra IRS funding will not be used to audit taxpayers making under $400,000 a year. But what if an IRS algorithm decides that poorer taxpayers are a better target? For example, an algorithm reviewing IRS collection data may determine that low-income taxpayers are less likely to fight – and more likely to pay – tax assessments than wealthy taxpayers. It may then conclude that the IRS should audit more low-income taxpayers and fewer wealthy ones, to maximize the success rate.
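
A short, purely hypothetical sketch shows how quickly that logic produces skewed results. Assume, for illustration only, that historical collection data shows willingness to contest an assessment rising with income; a selector that simply maximizes the expected success rate will then concentrate audits on the poorest filers without ever using income as an explicit criterion.

    # Hypothetical sketch of the feedback loop described above.
    # All numbers are invented for illustration.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 100_000
    income = rng.lognormal(mean=10.8, sigma=0.8, size=n)  # synthetic incomes, median ~$49,000

    # Assumed pattern: the probability that an assessment is paid without a
    # fight falls as income (and access to representation) rises.
    p_success = np.clip(0.9 - 0.1 * np.log10(income / 20_000), 0.2, 0.99)

    # Naive policy: audit the 1 percent of returns with the highest expected
    # success probability. Income is never an explicit criterion.
    budget = n // 100
    selected = np.argsort(p_success)[-budget:]

    print(f"median income, all returns:    ${np.median(income):,.0f}")
    print(f"median income, audited subset: ${np.median(income[selected]):,.0f}")
    # The audited subset skews far poorer than the population, reproducing
    # exactly the bias described above.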

The perverse result would be increased IRS audits of poor American families, who are already audited at five times the rate of all other taxpayers, according to an analysis of 2021 internal IRS reports. But because the algorithm is a “black box,” nobody – perhaps not even the IRS itself – would know that low-income families were being targeted in spite of the $400,000 limitation until it was too late.

It is not enough to rely on assurances that the IRS is monitoring its AI program. Congress must exercise its oversight authority to ensure that the IRS does not rely on biased algorithms in deciding which taxpayers to target. Further, Congress must force the IRS to be transparent about how it uses AI in enforcement decisions so that taxpayers can defend themselves against harm from a rogue algorithm. Otherwise, thousands of new human IRS agents may turn out to be the least of our worries.

Robert J. Kovacev is a tax lawyer at Miller & Chevalier Chartered in Washington, D.C., and a former senior litigator at the U.S. Department of Justice, Tax Division.


