The views expressed by contributors are their own and not the view of The Hill

The time to regulate AI is now


Last month’s Senate Judiciary subcommittee hearing on oversight of AI offered glimmers of hope that policymakers are ready to tackle the regulatory challenge posed by a rapidly advancing frontier of AI capabilities.

We saw a remarkable degree of consensus among Democrats, Republicans, representatives of an industry stalwart (IBM) and a hot new trailblazer (OpenAI), and Gary Marcus, a prominent critic of AI hype. Microsoft and Google swiftly released their own overlapping policy recommendations, and an impressive array of academics, AI scientists and tech executives signed onto a statement that “mitigating the risk of extinction from AI” should be a global priority.

That’s not to say we didn’t see the usual theatrics play out — senators pressing witnesses for soundbites, veering toward pet topics, and occasionally stumbling when it came to technical details. But past the theatrics, three areas of emerging agreement were particularly notable.

First, the magnitude of AI’s regulatory challenge likely necessitates a new regulatory body. An urgent next step is delineating the exact remit of this proposed regulator.

OpenAI CEO Sam Altman proposed a focus on the most computationally intensive models, such as the largest “foundation” models that power systems like ChatGPT, which are trained with thousands to billions of times more computation than most other models. This approach has merit: it draws a practical line in the sand, captures the systems with the most unpredictable and potentially transformative capabilities, and leaves out the vast majority of AI systems, whose use and effects can likely be handled within existing regulatory structures. Altman also suggested increased scrutiny for models that demonstrate capabilities in national security-relevant domains, such as discovering and manufacturing chemical and biological agents.
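
To make the compute-threshold idea concrete, here is a minimal back-of-envelope sketch. It is purely illustrative: it relies on the common rule of thumb that training a dense transformer costs roughly six times the parameter count times the number of training tokens in floating-point operations, and the threshold value and model sizes below are hypothetical assumptions, not figures from the hearing or from any actual proposal.

    # A purely illustrative sketch of a compute-based regulatory threshold.
    # Assumptions: the common rule-of-thumb estimate of ~6 * parameters * tokens
    # floating-point operations for training a dense transformer, and a hypothetical
    # threshold value; none of these figures come from the hearing or any proposal.

    COMPUTE_THRESHOLD_FLOPS = 1e24  # hypothetical cutoff, for illustration only


    def estimated_training_flops(parameters: float, tokens: float) -> float:
        """Rough estimate of total training compute for a dense transformer."""
        return 6.0 * parameters * tokens


    def crosses_frontier_threshold(parameters: float, tokens: float) -> bool:
        """Would a model of this scale fall under the proposed regulator's remit?"""
        return estimated_training_flops(parameters, tokens) >= COMPUTE_THRESHOLD_FLOPS


    if __name__ == "__main__":
        # A hypothetical very large run (~175B parameters, ~2T tokens) crosses the line,
        print(crosses_frontier_threshold(175e9, 2e12))   # True  (~2.1e24 FLOPs)
        # while a typical smaller model (~100M parameters, ~10B tokens) does not.
        print(crosses_frontier_threshold(100e6, 10e9))   # False (~6e18 FLOPs)

A bright-line compute criterion like this is easy to state and, in principle, to verify; the hard part, as noted below, is choosing the actual number and keeping it current as hardware and training methods improve.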

Defining these thresholds will be challenging, yet the significant risks associated with the wide proliferation of such models justify regulatory attention before models are deployed and distributed. Once someone has the files that define an AI system, such as its model weights, they can be copied and distributed just like any other piece of software, making it effectively impossible to limit proliferation.

Second, it’s high time that policymakers examined how current liability rules apply to potential harms from AI and whether changes are needed to accommodate the unique difficulties posed by the current frontier of systems. These challenges include opaque internal logic; an expanding ecosystem of autonomous decision-making by cutting-edge, internet-connected models; wide availability to users who vary greatly in their ability to compensate any victims of AI-fueled harms; and a lack of consensus on what constitutes reasonable care in developing and deploying systems that are scarcely understood even by their creators.

Third, the notion of a blanket pause on scaling up AI systems has turned out to be a non-starter. Even Marcus, a signatory of the Future of Life Institute’s Pause Giant AI Experiments open letter, acknowledged greater support for its spirit than its letter. Instead, discussion quickly coalesced around establishing standards, auditing and licenses for responsible scaling up of future systems. (The letter also proposed many of these measures but was overshadowed by the call for a pause.) This approach would set the “rules of the road” for building the largest, most capable models and provide early warnings when it’s time to apply the brakes.

These are necessary steps, but they alone won’t guarantee that the benefits of advanced AI systems outweigh the risks. Democratic values such as transparency, privacy and fairness are essential components of responsible AI development, but current technical solutions to ensure them are insufficient. Licensing and auditing measures alone can’t ensure adherence to these principles without further development of effective technical approaches. Policymakers, industry and researchers need to work together to ensure that efforts to develop trustworthy and steerable AI keep pace with overall AI capabilities.

There are some signs the White House is beginning to grasp the magnitude of the challenge ahead. After the vice president’s meeting with Altman and other frontier lab CEOs, the administration announced that these labs had signed onto a public red-teaming of their systems and that the National Science Foundation had allocated $140 million to establish new AI research institutes.

But such efforts need to extend beyond merely nibbling at the edges of the research challenge; a significant portion should involve working with the largest and most capable systems, with the goal of laying the groundwork for powerful AI to eventually exhibit trustworthy characteristics with a high degree of confidence, along the lines of the NSF’s $20 million Safe Learning-Enabled Systems solicitation.  

Promisingly, among its many priorities, the new National AI R&D Strategic Plan acknowledged the need for “further research […] to enhance the validity[,] reliability[,] security and resilience of these large models,” and articulated the challenge of determining “what level of testing is sufficient to ensure the safety and security of non-deterministic and/or not fully explainable systems.” With billions of dollars flowing into the labs developing these systems, these priorities now need to be matched with proportionate focus and direction of the research ecosystem.

The emerging consensus around the need for regulation has not been accepted uncritically. Senators were amused to see a Silicon Valley executive all but pleading for more regulation. Some expressed concerns over the potential for regulatory capture or stifled innovation. Some commentators went further and characterized Altman’s pleas as a cynical attempt to erect barriers to potential competition. Policymakers who find themselves skeptical of Altman should call his bluff and, as he requested, focus the most stringent regulatory attention on the most advanced models. At present, these regulations would apply to only a few very well-resourced labs like OpenAI.

Policymakers should also be under no illusion that a light regulatory touch will somehow prevent a degree of concentration at AI’s frontier. The cost of training the most advanced models — now tens of millions of dollars in computing alone — has been rising rapidly, all but ensuring smaller players are priced out.

To be an effective regulator, the government will need to develop its expertise in understanding and stress-testing these cutting-edge models — along with an associated ecosystem of credible third-party evaluators and auditors — so that it can go toe-to-toe with these leading labs. Likewise, as Congress continues to grapple with the issues raised in last month’s hearing, it should maintain a similar level of bipartisanship and expert engagement, so it can swiftly get a coherent and effective regulatory framework in place for the most powerful and transformative AI systems.

Caleb Withers is a researcher at the Center for a New American Security, focusing on AI safety and stability.
