Schumer’s slow-walk on AI ‘regulation’ is nothing but a boon for Big Tech
Artificial intelligence poses serious risks to privacy and safety, and we urgently need protective federal regulation in the United States. The European Union and China have already taken steps to protect their citizens. But, yet again, U.S. efforts have been slow and, to be frank, embarrassing.
In an unfortunate turn of events, any effort to move forward with AI legislation in Congress requires the support of one man: Senate Majority Leader Chuck Schumer (D-N.Y.).
Last spring, Schumer made a series of high-profile announcements to show he was serious about tackling AI. Since then, the few concrete actions Schumer has taken have yet to produce results, and have been overly reliant on the very industry insiders who need to be reined in by regulation.
Now, at long last, we have the Schumer AI report, and it is a clown car of the obvious, vague and toothless.
Schumer himself has acknowledged the risks of letting regulation be driven by a “few big powerful companies” — deep-pocketed Big Tech CEOs — and publicly vowed not to let that happen. And yet he appears to value their opinion over anyone else’s.
So what has he done?
Schumer hosted a set of secretive, closed-door insight “forums”: invite-only gatherings featuring tech CEOs and others who are already banking big money off AI in a Wild West environment. The forums were unprecedented: he locked out reporters, the public, and numerous experts who have contributed to the large evidence base on the impacts of AI that regulation needs to address.
The most high-profile of these forums was held in September, when Schumer invited an exclusive set of the most influential tech CEOs, including Mark Zuckerberg, Sam Altman and Elon Musk. The CEOs delivered scripted remarks, while senators were barred from asking questions, leading to bipartisan frustration.
Schumer convened a working group almost a year ago. Its report was expected months ago. Finally, after much delay, secrecy and territorialism (and, in the interim, a steady stream of new AI tools, features and acquisitions from the industry), we have it. And, as predicted, it reflects the views and desires of Big Tech CEOs.
Nothing that advances the debate in terms of substance, prioritization or corporate accountability. Nothing about the implications of AI that doesn’t work or causes harm. Nothing but a single head nod to the environmental costs of AI systems — exactly what the industry wants. It’s almost all “consider, evaluate, review, encourage,” directed at a laundry list that conspicuously excludes anything other than industry talking points. And where government will “act” (or encourage itself to act?), it apparently will do so solely to enable an unchecked industry to continue to trample society with shoddy systems and little accountability.
If the likes of Mark Zuckerberg and Sam Altman must accept regulation at all, they’re determined to get an opaque and industry-friendly bill out of Congress that will hamstring enforcement, so they can continue to harvest enormous profits from the fertile AI fields they’re now farming. And Schumer is helping them.
“These tech billionaires want to lobby Congress behind closed doors with no questions asked,” said Sen. Elizabeth Warren (D-Mass.). “There’s no opportunity to be heard on any of this stuff,” complained Sen. John Thune (R-S.D.). Sen. Josh Hawley (R-Mo.) was blunter: “I think it’s ridiculous that all these monopolists are all here to tell senators how to shape the regulatory framework so they can make the maximum amount of money,” he said.
For example, AI policy advocate Sarah Myers West says that enforcing existing regulations barring Big Tech from privacy abuses and anticompetitive behavior needs to be a core element of any effective regulatory scheme. But only a few civil society groups and experts were brought into the framework discussions, which also omitted policy recommendations from the White House’s AI Bill of Rights blueprint. This selective inclusion smacks of tokenism; civil society, workers, labor, community organizations and the like should be the leading voices, not merely “consulted.”
Excluding the experts who understand the topic best, in favor of corporate lobbyists who have a substantial financial interest in obstruction and sabotage, is exactly the wrong way to make complex regulatory policy, especially given that AI will touch virtually every area of life in the coming decades.
We can’t allow Big Tech to rig the rules in their favor — and to the extent that these giant corporations are involved in shaping legislation, the public deserves full transparency.
The only way we will create and pass meaningful AI guardrails that serve the public interest is by allowing independent experts and concerned citizens to participate in the process. Lawmakers should be consulting actual experts on AI’s potential harms, particularly those who have no financial stake in the outcome of regulation.
In the meantime, Sen. Schumer has burned another year and wasted another opportunity to advance the conversation, as the dangers of AI overreach and huge corporate concentration have grown more grave.
Alix Dunn is founder and CEO of Computer Says Maybe, a sociotechnical firm supporting non-profits, foundations, and companies on a broad range of issues related to technology’s impact on society.