The views expressed by contributors are their own and not the view of The Hill

A tale of two AI futures 

(Photo by Lionel Bonaventure/AFP via Getty Images)
This picture taken on January 23, 2023 in Toulouse, southwestern France, shows screens displaying the logos of OpenAI and ChatGPT, a conversational artificial intelligence software application developed by OpenAI.

Two key U.S. Senate committees held simultaneous artificial intelligence (AI) hearings on May 16 that approached the technology from different angles and with varying degrees of partisanship. One focused on regulating private-sector use of AI, while the other examined the challenges of improving the federal government’s own use of AI. The contextual takeaway from both hearings is clear: if Congress wants to regulate AI, it must comprehend the complexities and potential pitfalls that come with oversight of this transformative technology. 

The Judiciary Subcommittee on Privacy, Technology, and the Law heard testimony from OpenAI CEO Sam Altman and others in a hearing titled “Oversight of A.I.: Rules for Artificial Intelligence.” The witnesses and senators discussed the promise and perils of AI with remarkable clarity. In a breath of bipartisan fresh air, senators from both sides of the aisle appeared to agree on the potential need for a federal regulatory response to mass-deployed AIs, such as OpenAI’s forthcoming GPT-5 and the like.  

Each senator who spoke recognized that, when it comes to AI, the tech industry and Congress have an opportunity to work together to avoid the regulatory failures that surrounded social media. They expressed that a laissez-faire approach to AI would risk an unprecedented invasion of privacy, manipulation of personal behavior and opinion, and even a destabilization of American democracy. 

There was a consensus among the senators that many members of Congress do not know enough about AI, and that the subject matter would be difficult to regulate outside of a delegated context. Thus, the answer could be a new independent agency with the authority to regulate AI via licensing and enforcement, or a newly empowered Federal Trade Commission (FTC) or Federal Communications Commission (FCC).  

At the same time, a very different conversation was taking place in the Homeland Security and Governmental Affairs Committee. In a more partisan tone, there was a robust discussion of government data collection practices, politicized pressure on private industry, and excesses in adjudicating what constitutes misinformation. One key takeaway from the hearing was the testimony of Stanford Law School Professor Daniel Ho, whose research team concluded that the federal government severely lags private industry in AI expertise and has a long way to go to achieve best practices in its own use of AI. 

These crucial Senate committee discourses give rise to a tremendously important question: How can an executive branch agency be expected to regulate AI if the federal government itself insufficiently understands the responsible use of AI? 

To answer this, let’s first unpack what a hypothetical AI oversight agency might look like. A degree of federal AI regulation is needed because the marketplace alone will not solve the societal problems that will emerge from unguided AI development. Any agency that prospectively regulates AI will need express delegations of authority, in light of the Supreme Court’s possible abrogation or elimination of the doctrine of Chevron deference to federal agencies’ gap-filling interpretations of their authorizing statutes.  

Congress tends to regulate infrequently on the same subject, owing to the challenge of fostering majorities on contentious issues and the adverse consequences that flow from ineffective policy choices. 

The concept that appears to have initial bipartisan support is to establish a powerful federal agency with general authorities in a branch of government that has very limited subject-matter expertise and an objectively poor record on both process legitimacy and its own use of the very technology it would be overseeing. This could be a recipe for severe politicization of AI.  

Granting additional power to the FTC, or to a lesser degree the FCC, would inject unnecessary partisan concerns into the discussion. Establishing a new agency would be more likely to secure legislative consensus.   

The best course is for Congress to keep its options open — to resist the impulse to delegate permanent authority to executive branch experts who simply do not exist right now. Instead, it should focus on maintaining structural constraints, such as a biennial reauthorization requirement for the new agency that regulates AI, along with robust reporting and congressional oversight. Congress must employ its political will to set clear guardrails for how such an agency will oversee, enforce, and report on the executive branch’s use of AI, in addition to AI use by the private sector. 

Congress can build on this moment of bipartisan AI policy, allowing innovation to flourish and America’s strategic advantage over global competitors to remain unhindered. If Congress chooses to continuously regulate AI through soft-touch and narrow legislation instead of passing an expansive statute and washing its hands of the details, we will all be better off for it.  

Aram A. Gavoor is associate dean for academic affairs and professorial lecturer in law at the George Washington University Law School. He previously served for more than a decade in the U.S. Department of Justice and is an internationally recognized U.S. public law expert. 


Copyright 2023 Nexstar Media Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.
