
Why a single AI regulator just won’t work

Artificial intelligence (AI) is transforming how we live and work. New generative AI systems have unlocked groundbreaking possibilities and captured the public's attention. At the same time, the rapid proliferation of AI capabilities has raised important questions about misuse, misinformation, ethical implications and more.

Those questions were central to last week’s Senate Judiciary subcommittee hearing on “Oversight of AI: Rules for Artificial Intelligence,” which featured calls to create a new U.S. agency to regulate AI. While a well-intentioned and natural response to a real societal need, a stand-alone AI regulator in the U.S. would not fulfill its intended purpose. The reasons are clear and pragmatic, grounded in historical precedent, not ideology or corporate self-interest.

Instead of building a new regulator from scratch, Congress should focus on making every agency an AI agency.

Today’s regulatory agencies have deep, invaluable expertise in their domains. The Federal Railroad Administration knows the ins and outs of the railroad industry. A new agency focused solely on AI would have none of this expertise, and therefore no clear understanding of how to regulate specific uses of this rapidly evolving technology on America’s railroads.

Further, a new AI regulator would be plagued by the same challenges that vex today’s regulators, from budget and resource constraints to redundancy and inefficiency. Creating such an agency would not only take years but also add another layer of red tape to decision-making and further complicate existing overlaps in regulatory authority.

In my testimony before the Senate subcommittee, I advocated for a precision regulation approach to AI, in which regulation focuses on specific use cases of AI, not the technology itself.

Ultimately, AI’s risk to society occurs at the point where it touches people. Regulating specific AI use cases, therefore, brings oversight and controls closer to the people they are meant to protect. And that underscores the logic of equipping every agency to provide necessary guardrails in the age of AI.

How should Congress enact precision regulation of AI? First, legislation to regulate AI must leverage agencies’ existing authority to tackle AI-related issues in their specific domains. AI for autonomous driving systems, for example, should be regulated by the National Highway Traffic Safety Administration, which already knows how to regulate cars on America’s highways.

Second, existing agencies need resources to boost their AI expertise, including understanding how the technology is applied in their domain and how its use impacts their work. Hiring AI experts is one way to build this understanding, but it is not the only way. Congress should empower agencies to partner with businesses and academia to understand how their work must evolve.

Third, Congress should charter an AI Center of Competence under the General Services Administration (GSA) or the White House Office of Science and Technology Policy (OSTP). This entity would not regulate, but rather provide expertise, drawing on and consulting with leading global AI thought leaders. It would assist agencies, help them stay current on the technology and its implications, and maintain their expertise to break the cycle in which innovation outpaces regulation.

Finally, no one should be tricked into interacting with an AI system, or have their data collected and used to train AI against their will. Congress should therefore pass a national privacy law or a stand-alone AI law that provides consumers with rights over their personal data. Such a law should ensure people are not subject to consequential decisions influenced by AI unless that use of AI is transparent and accompanied by basic information on how the AI model was trained, how it works, and how it performs in tests for bias.

Precision regulation does not mean no regulation. It means making the best use of current regulatory capacity and updating it for a new era of technology. It means creating a future-proof regulatory framework that can evolve as AI evolves and that preserves U.S. leadership in AI innovation. And it means creating clear guidelines and principles to ensure AI is built and used responsibly while also ensuring control of the technology is not locked in the hands of a few large companies.

America has a checkered history of creating new regulatory agencies. When it comes to AI, Congress should invest in and make smart use of what we already have, not create something new from scratch.

Christina Montgomery is IBM’s Chief Privacy and Trust Officer. She testified at the Senate Judiciary hearing on AI alongside OpenAI CEO Sam Altman and NYU professor Gary Marcus.