Technology

Schumer unveils groundwork for AI regulation

Senate Majority Leader Chuck Schumer (D-N.Y.) unveiled a framework for regulation of the booming artificial intelligence (AI) industry on Thursday. 

Schumer’s framework aims to increase the transparency and accountability of AI technologies, on the heels of experts’ warnings about the rapid rise of AI following the popularity of the ChatGPT chatbot.

“Given the AI industry’s consequential and fast moving impact on society, national security, and the global economy, I’ve worked with some of the leading AI practitioners and thought leaders to create a framework that outlines a new regulatory regime that would prevent potentially catastrophic damage to our country while simultaneously making sure the U.S. advances and leads in this transformative technology,” Schumer said in a statement. 

The framework would require companies to allow independent experts to review and test AI technologies ahead of public releases and updates. The required disclosures would be built around four guardrails: Who, Where, How, and Protect, according to Schumer’s announcement.

The guardrails seek to inform users and give the government data to regulate, as well as align systems with American values.


Schumer will work with stakeholders in academia, advocacy organizations, industry and the government in coming weeks to refine the proposal, according to the announcement. 

As majority leader, Schumer controls the legislative calendar, giving him a better shot at bringing his proposal to the floor, although it would still need to meet the 60-vote threshold to pass.

The senator’s proposal follows other voluntary guidance on AI issued by the federal government. 

The National Institute of Standards and Technology released an AI Risk Management Framework in January that aimed to improve the ability of companies to incorporate trustworthiness into the design, development and use of AI products and services. 

The White House released a Blueprint for an AI Bill of Rights in October that also laid out a voluntary framework.