Generative AI tools like ChatGPT could test bounds of tech liability shield

FILE - The OpenAI logo is seen on a mobile phone in front of a computer screen displaying the ChatGPT home screen, March 17, 2023, in Boston. Chinese police said they recently detained a ChatGPT user for allegedly using the AI-powered chatbot to create a fake news story about a nonexistent train crash. (AP Photo/Michael Dwyer, File)

Generative artificial intelligence (AI) tools are testing a provision that has protected the tech industry from lawsuits over third-party content for decades.

As applications like ChatGPT and rival products rise in popularity, experts and stakeholders are split on whether and how Section 230 of the Communications Decency Act — a liability shield for internet companies over third-party content — should apply to the new tools.

Ashley Johnson, a senior policy analyst at the Information Technology and Innovation Foundation, said the “most likely scenario” is that if generative AI is challenged in court, it “probably won’t be considered covered by Section 230.”

“It would be very difficult to argue it is content that the platform, or service, or whoever is being sued in this case had no hand in creating if it was their AI platform that generated the content,” Johnson said. 

Even Sam Altman, the CEO of OpenAI, the company behind ChatGPT, testified during a Senate hearing last week that Section 230 is not the “right framework” for tools like the ones his company has put out.

“We’re claiming we need to work together to find a totally new approach,” Altman said. 

The debate comes as lawmakers race to regulate the emerging AI technology, and as a recent Supreme Court decision kept the controversial Section 230 provision untouched. 

At the core of the debate is whether the content created by a generative AI tool is considered third-party content.

Tech industry groups argue that Section 230 could apply to the output of generative AI tools, shielding companies from liability for the content those tools produce.

“We should really slow down and see if generative AI technology provides an exceptional case where we would need to re-imagine or reconsider current existing intermediary liability law,” said Jess Miers, legal advocacy counsel at Chamber of Progress, a group that lists tech giants like Google, Meta and Amazon among its corporate partners. 

“At this current point in time, we don’t believe that generative AI tools and the companies that offer them demand the exceptionalism.”

The nearly three-decade-old Section 230 provision is already facing pushback from lawmakers on both sides of the aisle, and the influx of generative AI tools may be a trigger for new tech regulation.

“We now kind of have more concrete answers about algorithms, which is obviously a very relevant technology,” Johnson said, referring to the court’s decisions last week.

“If we get cases similar to that on generative AI, that would be one way to solve the issue, although that may take a while for cases like that to make their way up in the court system,” she added.

Congressional action will be even more crucial in deciding whether Section 230 is expanded or carved back after the Supreme Court punted any reform of the protection to the legislative branch in a decision issued last week that focused on how platforms recommend and serve content.

Samir Jain, vice president of policy at the Center for Democracy & Technology, said outcomes will depend on the facts of the specific cases that come forward, but that there are “almost certainly places in which generative AI isn’t going to be protected by Section 230.”

Jain said it is hard to envision how a service provider would not be responsible, at least in part, for creating and developing content when an AI tool authoritatively states false information — what the industry calls a “hallucination.”

Public Knowledge, an advocacy group that promotes an open internet, says Section 230 shouldn’t apply to generative AI.

Even so, the group’s legal director, John Bergmayer, acknowledged that tech companies shouldn’t be held accountable if users are deliberately prompting AI tools to generate objectionable content.

“If I say, ‘Hey ChatGPT, repeat after me,’ and I just tell it some libelous stuff, and then it repeats that libelous stuff back to me, of course, there’s no liability in that case,” he said. 

Miers added that generative AI tools do not work in a “vacuum” and rely on user inputs.

“It can’t create its own outputs without somebody teeing off the inquiry, teeing off the input,” Miers said.

“The only way in which you can get specific output is if you provide specific inputs. This is really no different than when we provide specific queries to Google search, and Google search comes back with a list of results that they’ve curated and contextualized, and all those results are third-party content.” 

Copyright 2023 Nexstar Media Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.