The views expressed by contributors are their own and not the view of The Hill

Don’t let Big AI fool you: Piracy isn’t a business model

OpenAI CEO Sam Altman answers questions during a Senate Subcommittee on Privacy, Technology, and the Law hearing to discuss oversight of artificial intelligence on Tuesday, May 16, 2023.

Ask Midjourney, a popular artificial intelligence image generator, to produce an image of a “cartoon sponge,” a “cartoon ’90s family with yellow skin” or a “video game plumber” and it will produce images of SpongeBob SquarePants, The Simpsons and Mario. 

These are the results found by AI scientist and author Gary Marcus and film industry concept artist Reid Southen, as well as others who have repeatedly demonstrated that image generators like Midjourney and OpenAI’s DALL-E can “regurgitate” near-perfect recreations of scenes from Marvel and Star Wars movies — even when given innocent prompts that do not reference these works explicitly. 

Is this copyright infringement? 

That’s the same question raised by the New York Times’s lawsuit against OpenAI and Microsoft, which alleges the unauthorized use of its journalistic content to train artificial intelligence models. A key fact in the Times’s complaint is that OpenAI’s chatbots are capable of reproducing text nearly verbatim from that publication. 

On Jan. 8, OpenAI responded, claiming that while its technology sometimes regurgitates article text, that behavior is a “rare bug” that it was working to solve. 

“Intentionally manipulating our models to regurgitate is not an appropriate use of our technology and is against our terms of use,” it continued. 

The lawsuit against OpenAI highlights a wider concern: When an artificial intelligence product like ChatGPT, DALL-E or Midjourney reproduces copyrighted content, it is part of a broader pattern of piracy, flouting of the law and “rules for thee but not for me” advocacy that has long been baked into the tech industry’s business model. We have more than two decades of experience with these companies, and the results are clear: Piracy benefits only the pirates in the long run, and they won’t stop until lawmakers make them. 

The Times has drawn parallels between OpenAI and the now two-decade-old story of Napster, but a better analogy is to the business models of early Uber, Amazon and Google News.

First, disrupt and reshape markets until consumer expectations evolve past a point of no return, using a combination of capital-intensive technology and evasion of existing laws and regulations, whether they be taxicab medallion regulations, sales tax collection laws or copyright. Second, settle lawsuits and use lobbying power to preserve the advantage that’s been obtained. 

OpenAI has already begun lobbying governments for an exemption to copyright enforcement, claiming that it would be impossible to create products like ChatGPT if it could not rely on copyrighted works going forward. 

Sam Altman, the OpenAI CEO, is essentially saying that he cannot make his product unless he steals from others. In making this argument, he is breaking one of our most fundamental moral principles: Thou shalt not steal. His excuse, that he “needs” to do this in order to innovate, is not a get-out-of-jail-free card. Theft is theft.

Altman has built his business model around the assumption that our desire for the latest shiny new tech product will override our basic moral and legal principles — at least long enough for his products to become so deeply entwined in our daily lives that undoing it all would be too hard and too expensive. 

Even regulatory skeptics like Sen. Ted Cruz (R-Texas) have said that Big Tech businesses “represent the greatest accumulation of power,” specifically, “market power and monopoly power . . . that the world has ever seen.” 

“They behave as if they are completely unaccountable,” Cruz added.  

OpenAI is well on its way to joining Amazon, Uber and Google in the club of companies that skirt the law while the rest of us live by rules that ensure markets function, that hard work has dignity and is paid for, and that businesses compete on a level playing field — rather than having legislators pick winners through their inaction.

In a world where the largest content creation businesses in the world — Disney, Nintendo, the New York Times — have been unable to stop their property from being stolen to power another company’s products, what power do individuals have? If SpongeBob isn’t safe, how can we be? 

The time is now for Congress and state legislatures — as Vermont is doing with recently introduced AI liability legislation — to act to make clear that this business model cannot continue, that these companies must be held to account for the harms their products create and that they must pay for what they take from others, just like any other business. Lawmakers must change the incentives these businesses operate under, for good.

Casey Mock is the chief policy & public affairs officer at the Center for Humane Technology and a lecturing fellow at Duke University.