Social media algorithms are not protected speech

In late May, Surgeon General Vivek Murthy warned of the “profound risk of harm” that social media poses to young people, confirming the view long advanced by child advocates. Murthy called on platforms to impose minimum age limits and to create protective default settings, and he urged governments to apply new health and safety standards to social media platforms. The call for more intense regulation of the platforms will play out in Congress, state legislatures and the courts.

The issue of social media regulation will likely divide progressives. Some will support government measures to protect the vulnerable; others will see regulation as an impermissible limit on the exercise of free speech. The First Amendment will likely set the bounds of any regulatory response.

In many of its forms, social media builds on user-provided content. These third-party contributions (videos, comments, music, photographs) are protected speech under conventional First Amendment analysis, with the arguable exception of fakes generated by artificial intelligence (AI).

However, the manner in which that content is delivered to users is a major cause for concern. Social media platforms deploy AI-driven recommendation algorithms to maximize “user engagement,” which is social-media speak for keeping users glued to the platform for as long as possible, thereby driving up ad-generated platform revenue. Maximizing engagement can be a polite way of saying addicting users to a platform’s content.
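To make that objective concrete, here is a minimal sketch of an engagement-maximizing ranker. The scoring model, feature names and numbers are hypothetical illustrations under the assumption of a simple two-signal objective, not any platform’s actual system:

```python
# Hypothetical sketch of an engagement-maximizing ranker.
# Feature names and the scoring rule are illustrative assumptions,
# not any real platform's code.

from dataclasses import dataclass

@dataclass
class Candidate:
    video_id: str
    predicted_watch_seconds: float  # model's estimate of time-on-item
    predicted_ctr: float            # model's estimate of click probability

def engagement_score(c: Candidate) -> float:
    # The objective is expected time on platform, not quality or accuracy:
    # expected watch time = probability of click * watch time if clicked.
    return c.predicted_ctr * c.predicted_watch_seconds

def rank_feed(candidates: list[Candidate]) -> list[Candidate]:
    # Order the feed purely by predicted engagement, highest first.
    return sorted(candidates, key=engagement_score, reverse=True)

feed = rank_feed([
    Candidate("calm_tutorial", predicted_watch_seconds=40, predicted_ctr=0.02),
    Candidate("outrage_clip", predicted_watch_seconds=90, predicted_ctr=0.11),
])
print([c.video_id for c in feed])  # the more "engaging" item ranks first
```

Nothing in that objective asks whether the top-ranked item is true, healthy or age-appropriate; it asks only what is predicted to keep the user watching.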

If you thought that ChatGPT and similar AI tools were disruptive, just wait until they perfect intimacy with you, their human partner, through time-tested psychological techniques. That is how a recommendation algorithm can best achieve its goal of keeping you glued to the platform: not by simply “recommending” what you watch or do next, but by leading you to believe that the choice is your own reasoned decision.

Platforms claim the recommendations they deliver to users are a form of free speech protected by the First Amendment. That argument fails to distinguish between the videos posted to the platform and the output of the AI algorithms. The former typically do enjoy First Amendment protection, even where they promote harmful reactions. But the latter — the actual recommendations and their manner of delivery — are products of autonomous machines. And where that output causes addiction or other harm, as is often the case, it is neither “speech” nor otherwise exempt from consumer safety regulation.

Social media platforms typically employ two different AI systems. The first aids in content moderation, determining which user content to permit on the platform and which to reject. Platforms enjoy wide latitude in these efforts, whether automated or undertaken by human moderators, as well they should. Content moderation is the gateway to public discourse, and we rely on platforms to filter their content responsibly. Some may disagree with the choices a platform makes, but when undertaking a filtering function equivalent to traditional editorial functions, platforms enjoy protection under the First Amendment.

But the second AI mechanism, the one that drives a recommendation algorithm, does not serve any of the purposes that underlie the First Amendment. The inner workings of AI are merely computer code performing a function; they do not communicate in human-understandable terms. The output of those calculations (the “recommendations”) is similarly functional rather than expressive: its purpose is mainly to keep users “engaged.”

Even actual speech that bypasses a listener’s cognitive functions and elicits an immediate visceral response, as with threats, incitement, fighting words and “falsely shouting fire in a crowded theatre,” is categorically outside the First Amendment. When words are the trigger of action, they are treated as conduct, not speech.

But the biggest problem with treating AI recommendations as speech is that they are generated by autonomous machines. All platform operators do is give the AI a goal — typically to maximize time users spend on the platform. The AI learns on its own how to structure and deliver its recommendations in pursuit of that goal. There is no human discretion, judgment or editorial input into the “decisions” made by the AI. In short, your prompts to ChatGPT are protected speech; the AI’s output is not.
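Here is a hedged illustration of that division of labor, using a toy epsilon-greedy bandit as a stand-in for far more complex production systems. The only human-specified component is the reward (watch time); everything else the learner works out on its own from observed behavior:

```python
# Hedged illustration: the operator specifies only the reward (time on platform);
# an epsilon-greedy bandit then learns for itself what to recommend.
# A toy stand-in for far more complex production systems.

import random

categories = ["news", "gaming", "outrage", "music"]
estimated_reward = {c: 0.0 for c in categories}  # learned value per category
play_counts = {c: 0 for c in categories}

def reward(category: str) -> float:
    # The ONLY human-specified component: seconds of watch time.
    # Simulated here; a real system would measure actual user behavior.
    base = {"news": 30, "gaming": 55, "outrage": 80, "music": 45}[category]
    return random.gauss(base, 10)

def recommend(epsilon: float = 0.1) -> str:
    # The machine's "editorial decision": explore occasionally,
    # otherwise exploit whatever has maximized watch time so far.
    if random.random() < epsilon:
        return random.choice(categories)
    return max(categories, key=estimated_reward.get)

for _ in range(5000):
    choice = recommend()
    r = reward(choice)
    play_counts[choice] += 1
    # Incremental average: the estimate updates from observed behavior alone.
    estimated_reward[choice] += (r - estimated_reward[choice]) / play_counts[choice]

print(max(categories, key=estimated_reward.get))  # converges toward "outrage"
```

No human reviews any individual recommendation in that loop; the “editorial” choices emerge entirely from the reward definition and the data, which is precisely the absence of human discretion described above.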

Already, AI output is making “recommendations” across the spectrum of society: whether you get a loan, government benefits or a job; the terms of a defendant’s sentence; what news you read, what products you buy, what music you hear and what search results you see; and how better to defraud you, surveil you, discriminate against you and separate you from your money. Do we really want to say that all of these autonomous machine actions are protected by the First Amendment? If we treat AI outputs as speech, AI will be immune to regulation.

Jeffery Atik and Karl Manheim are, respectively, professor and emeritus professor at Loyola Law School in Los Angeles, where they have taught Artificial Intelligence and the Law.

Tags algorithms Artificial intelligence first amendment free speech Social media Vivek Murthy

Copyright 2023 Nexstar Media Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.
