The views expressed by contributors are their own and not the view of The Hill

Biden is making strides in AI governance but still playing catch-up

President Joe Biden delivers remarks about government regulations on artificial intelligence systems during an event in the East Room of the White House, Monday, Oct. 30, 2023, in Washington. (AP Photo/Evan Vucci)

In a serious effort to catch up to the runaway train that is artificial intelligence, President Biden’s comprehensive executive order seeks to set rules and regulations for AI safety, security and trust without sacrificing innovation.

The executive order was, not coincidentally, released on the eve of the United Kingdom’s major international conference on AI. It seeks to leapfrog the U.S. to the forefront of efforts to devise global rules for AI use and to quell fears of the technology leading to human extinction.

Additionally, in an unprecedented move today during the U.K.’s AI Safety Summit, the U.S. and 27 other nations issued the Bletchley Declaration, an agreement to cooperate so that AI develops in a way that is “human-centric, trustworthy and responsible.”

Biden’s executive order requires that those developing cutting-edge AI systems share their safety test results with the government before launching new products, including results of “red-teaming,” in which dedicated teams probe technologies for flaws and vulnerabilities. It directs the National Institute of Standards and Technology to develop red-team standards.

This process is designed to prevent powerful new AI from helping non-experts to design or acquire biological or nuclear weapons or build powerful offensive cyber capabilities, and to ensure AI does not evade human control. These are only a few of the nightmare scenarios that this technology could produce.

The White House has increasingly stepped in as congressional dysfunction and inertia have left the U.S. behind much of the world in exerting control over Big Tech, particularly over any “foundation model that poses a risk to national security, national economic security, or national public health and safety.”

Last year’s release and widespread use of ChatGPT, a large language model generative AI that can simulate human conversation, answer questions, produce images and write stories or papers, raised dystopian warnings about the future from leading technologists. Microsoft and Google released their own versions in what appears to be an AI arms race among startups and tech giants.

In addition to measures such as requiring federal agencies to designate a chief AI officer, the executive order seeks to protect against AI-generated false content such as “deep fakes” by creating standards and requiring watermarks to verify authentic AI-generated content. It also protects privacy by setting guidelines for how data is collected and shared. Some of the executive order’s provisions are requests or guidelines, leaving ample wiggle room for AI developers to evade them, though Commerce Department licensing rules may constrain them.

Acknowledging that administrative steps are not enough, the executive order admonished Congress to pass needed legislation. As Senate Majority Leader Chuck Schumer (D-N.Y.) conceded: “There’s probably a limit to what you can do by executive order… everyone admits the only real answer is legislative.”

The U.S. lags behind other major tech players such as the European Union, China and Japan. 

The EU has produced the most comprehensive legal framework for AI on top of equally thorough digital privacy legislation to protect the public from unwanted algorithms. It also has a Digital Markets Act aimed at Big Tech.

In July, China published generative AI services regulations, following earlier restrictive digital commerce and data protection laws. For its part, Japan also has digital commerce laws that cover some AI services but is still in the process of devising comprehensive regulations. While there is overlap in many of the AI and data governance laws in leading countries, there remains a large global governance deficit on the issue.

In sharp contrast, Congress has yet to pass any comprehensive data privacy protection or AI legislation. As power abhors a vacuum, Big Tech and its army of lobbyists have shaped the debate on both topics. After meetings with seven Big Tech firms, the White House announced the companies’ agreement to abide by voluntary guidelines for AI.

To be fair, the pace of technology is exponential, while governance tends to be incremental. The imperative to commercialize AI has led Big Tech to push for regulations so that customers and the public have confidence that their products are safe. Both Microsoft and Google, for example, proposed ethical principles for AI interaction with humans in 2018 and 2019.

The challenge to Big Tech is to balance innovation with safety and accountability. Current large language models like ChatGPT can misinterpret the data fed into them, sometimes “hallucinating” false or nonsensical answers. Yet OpenAI, Microsoft, Google and Meta rolled out these products despite reservations from safety experts.

The challenge to government is to set rules and standards that safeguard the public interest without unduly setting back innovation from which the public would benefit. Today’s declaration issued at the U.K. summit is an encouraging sign. Vice President Kamala Harris’s participation in the AI conference also underscores Biden’s effort to play catch-up.

The White House effort is a belated but positive step. But getting a handle on AI requires legislation, and Congress remains dangerously delinquent in legislating data governance in general and AI in particular. That delinquency undermines the credibility of U.S. leadership in managing the tech revolution.

At stake is a larger risk of a race to the bottom if consensus on basic global rules and standards for using AI proves elusive.

Robert A. Manning is a distinguished fellow at the Stimson Center. He previously served as senior counselor to the undersecretary of State for global affairs, as a member of the U.S. secretary of state’s policy planning staff and on the National Intelligence Council Strategic Futures Group. Follow him on Twitter @Rmanning4.

Tags artificial intelligence regulation Biden executive order ChatGPT Chuck Schumer generative AI Joe Biden Politics of the United States

Copyright 2023 Nexstar Media Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.
