The views expressed by contributors are their own and not the view of The Hill

The Senate’s failure on AI policy leaves legislation up to the states

A poster displayed behind Senate Majority Leader Chuck Schumer, a Democrat from New York, during a news conference at the U.S. Capitol in Washington, D.C., on Wednesday, May 15, 2024. The U.S. needs to shield Americans from the risks posed by artificial intelligence while promoting the emerging technology with at least $32 billion in annual government spending to stay ahead of rivals like China, according to a highly anticipated policy blueprint from a bipartisan group of senators. (Photographer: Graeme Sloan/Bloomberg via Getty Images)

Almost exactly a year ago, announcing his intention to create a set of guardrails that would protect the American people from the worst outcomes of AI systems, Senate Majority Leader Chuck Schumer (D-N.Y.) said, “There’s no time for waste or delay” and “we’ve got to move fast.”

But with the release of the Senate’s roadmap for AI policy earlier this month, it is clear that Schumer and his colleagues have decided that protecting industry is more urgent than protecting people.

“It’s very hard to do regulations because A.I. is changing too quickly,” Schumer told the New York Times. 

Tell that to the senator’s counterparts in Europe who, pending final European Union procedural steps, will have passed a landmark law this year that, while not perfect, creates standards and accountability for the AI industry.

Or to his colleagues in the Biden administration, who, following the president’s executive order, must mount a whole-of-government response to protect Americans’ rights and safety regarding AI, including mandatory evaluations ordered by the Office of Management and Budget before agencies use AI systems that could negatively impact people.

In fact, two days after the Senate AI roadmap dropped, Colorado Gov. Jared Polis signed the Colorado AI Act into law, despite a “tidal wave” of pressure from the tech industry urging him to veto the bill. The law takes important steps to protect Colorado residents against algorithmic discrimination by AI systems, requires notice when high-risk AI is in use and mandates basic accountability infrastructure like impact assessments.

If other governing bodies can enact basic protections like those laid out in the Blueprint for an AI Bill of Rights in October 2022, why can’t Congress?

Schumer’s statements also ignore the very real harms that automated systems are wreaking on everyday Americans right now — in the workplace and the housing market, at their doctors’ offices and in the courts.

Schumer’s rhetoric reflects the tech industry’s enormous influence on the policymaking process. The false choice between protecting people and fostering “innovation” (the senator’s self-described “north star” in AI regulation) is dangerous and wrong.  

As we’ve seen in many other industries, smart regulation can incentivize innovation that benefits society and protects people — think drug safety and air quality standards. Americans should expect the tech industry to operate with the same accountability. 

Further, what we need is legislation that mirrors President Biden’s executive order: an accountability ecosystem with a comprehensive vision for how, at the most basic level, civil and worker rights are protected when AI is in use. The Senate’s roadmap instead proposes a haphazard, piecemeal approach to AI regulation that all but ensures gaping holes and unfunded, performative mandates.

While the roadmap notes “that some have concerns about the potential for disparate impact, including the potential for unintended harmful bias,” it provides scant direction other than “exploration” to address this. It fails to acknowledge the overwhelming evidence base that warrants more funding and capacity for agencies’ civil rights divisions to map violations and enforce our existing laws.  

The Senate’s abdication of leadership on these important issues means state legislatures are likely the last remaining venue for meaningful legislation on AI in the near term. This is not an ideal solution; as we see with privacy, guns and abortion, a patchwork, state-by-state approach to life-impacting legislation leaves far too much in question. And the industry is already hard at work convincing state legislators that the false choice between regulation and innovation is real. 

In tech’s home state of California, where a package of bills addressing both harms and opportunities is making its way through the legislative process, some hope remains. But the tech industry is spending more money than ever to ensure its delay tactics succeed in the states too.

We’ve seen this movie before. 

We were both working on tech policy 20 years ago, at the dawn of the social web. Back then, we had a choice: the government could play a role in reining in the industry to ensure a safer, more trustworthy internet, or it could let the industry pursue unfettered innovation. In a decision that haunts us now, our government chose the latter path.

This time, as we come dangerously close to making the same mistake again, state leaders can draw on the lessons of the social media era to do three important things to protect their residents while encouraging innovation: 

First, ensure that people can understand AI’s role in their lives, decide whether, where and when they use AI and seek redress when they experience harm. 

Second, place the onus of responsibility for demonstrating the safety of these tools on the companies who build them, and enforce those safety mechanisms before tools are released to the public, much as we do with new drugs. 

Finally, leaders must address the concentration of corporate power that makes these companies too big to fail, raising the stakes of the fallout for the rest of us and concentrating the wealth and influence these new technologies will create in too few hands.

The outcomes of technological advancement are not inevitable. They are shaped by people, companies and, yes, governments. But the window of opportunity to shape AI is closing quickly.

If the Senate won’t put forward any real solutions to govern AI and protect people from algorithmic harms, state lawmakers will need to step up.

Catherine Bracy is the CEO of TechEquity. Janet Haven is the executive director of Data & Society.


