
We need legal protections from AI’s risks

The first mortgage-backed security trader I met in the summer of 2007 wore a rumpled T-shirt and a look of consternation while telling me that he’d just quit. I’d recently been working at a housing clinic, so I knew something strange was going on with risky mortgages, but he was adamant that I didn’t have a clue about the scale of the mess.

Surely, I asked, banking regulations would prevent the worst of it? He laughed.

We know what happened next. The financial markets collapsed, devastating individual lives and the global economy. We learned that without consequences for reckless behavior, the powerful have every incentive to chase massive profits, knowing that others will pay the price if things go wrong.

Now we’re embarrassingly on track to repeat the same mistakes with artificial intelligence. As in the run-up to 2008, we’ve let powerful systems shape our daily lives with little understanding of their workings and almost no say in how they’re used.

AI can now decide whether you get a mortgage, how long you go to prison and whether you’re evicted from public housing for minor rule breaches. These systems scan the transactions you make online, influence the products you buy and mediate the information you consume.

But this is just the start. AI chatbots weren’t widely used 18 months ago; now researchers can produce long-form videos from a prompt. AI agents, which operate without constant human oversight, already exist (your social media feed is one example), and their mass proliferation, the next frontier, is almost here.

At my company, we’re enthusiastic about what agentic AI can do, but we also understand firsthand how it can be misused and exacerbate the harms that we already see from less powerful AI systems.

A just society prepares for this. It doesn’t allow the powerful to take risks at our expense or exploit gaps in the law, the way banks and lenders did when they raked in wild profits while undermining financial markets. 

But we’re falling shamefully behind on meaningful accountability. Elon Musk’s Tesla can sell cars with a feature called “full self-driving” and yet avoid responsibility when that feature causes a crash. Imagine if airlines or aircraft manufacturers could deny liability when their planes crash. The same failure explains why courts can still use AI to decide prison sentences despite the demonstrated unreliability of such systems, and why law enforcement agencies use AI to predict crime, inaccurately and with racial bias, despite congressional scrutiny.

Most proposed AI laws ignore oversight and liability, instead trying to make the AI systems themselves safe. But this doesn’t make sense — you can’t make AI inherently safe, just as you can’t make power drills or cars or computers inherently safe. We need to use our laws and regulations to minimize long-term risks by addressing near-term harms.

To do this, we need to make much better use of our existing institutions to regulate AI. I see three main priorities.

First, banning harmful actions. Governments and agencies should not surveil citizens without explicit justification, just as police cannot invade your home without a warrant. 

Second, enshrining rights of explanation. In its 1970 ruling in Goldberg v. Kelly, the Supreme Court held that the government can’t cut off welfare benefits without due process, including an explanation of the decision and a chance to appeal it. As AI decision-making becomes more pervasive, we need to enshrine a similar right for the judgments that govern the most important areas of our lives.

Third, we need to bolster our liability doctrines. The legal principle that if you hurt someone you need to remedy the harm is centuries old — but we seem strangely reluctant to apply the same principle to AI companies. This is a mistake.

A simple but powerful idea is to make AI developers above a certain threshold strictly liable for the misuse of their products, just as manufacturers are liable for injuries caused by product defects. We can soften this with a safe harbor that lets companies register ambiguous uses, conditioned on accepting government oversight and guidelines. Combined with an outright ban on egregious applications, putting the cost of AI’s harms on the people and companies that cause them would shield us from the bulk of what can go wrong.

As builders of powerful AI systems, we reject the argument that laws governing AI will hold us back. It’s the opposite. Good rules level the playing field. They take the burden of fighting for the public good off individual companies and let us focus on building things that people find valuable, within clear parameters set through a democratic process.

The reason to chase the wild dream of AI is to create a world worth celebrating. Better laws will help ensure that future includes everyone — not just the handful of billionaires who control it today.

Matt Boulos is head of policy and safety at the AI research company Imbue, which is a member of NIST’s US Artificial Intelligence Safety Institute Consortium.