
The unbearable weight of defining disinformation and misinformation on the internet


The past few weeks have been momentous for the regulation of internet disinformation and misinformation (D&M): Elon Musk agreed to purchase Twitter largely to change its approach to D&M; the U.S. government announced, and then suspended, a Disinformation Governance Board to oversee some D&M; the European Union completed historic new internet laws, some of which regulate D&M; and former President Obama abandoned his longstanding hands-off approach and called for government regulation of D&M.

No exchange better illustrates the difficulty of defining D&M than the recent one between President Biden and Amazon founder and Washington Post owner Jeff Bezos. After Biden tweeted, “You want to bring down inflation? Let’s make sure the wealthiest corporations pay their fair share,” Bezos replied: “The newly-created Disinformation Board should review this tweet, or maybe they need to form a new Non Sequitur Board instead.” An industry and a global regulatory structure are emerging to address internet D&M, but how difficult is the task?

Two features of D&M regulation are important to note. The first is that most advocacy for regulating D&M targets only very large platforms, usually defined as those with many millions of subscribers, leaving smaller platforms less regulated; this is the approach taken by the European Union, many countries and several U.S. states. The second is that regulation of D&M would be consistent with a range of pre-existing internet content regulations covering areas that have been regulated or prohibited both on and off the internet for centuries, including copyright infringement, child pornography, false advertising, slander, threats of immediate harm, obscenity, rebellion and more. These areas have a robust history of national definition, refinement, legislation and litigation.

The majority of content moderation that occurs on internet platforms today involves these existing forms of illegal/regulated content, and definitions tend to be similar among nations.

Regulating or prohibiting D&M breaks new ground by moving into previously less-defined categories such as politics, health and science, and by attempting to do so on a global scale.

When looking at something this complex, it is often best to start at the beginning, and the beginning is July 3, 1995: the day that everyone in an American supermarket checkout line encountered a stunning Time magazine cover showing a young boy at a computer keyboard, staring at the screen in obvious shock, under a huge headline blaring “CYBERPORN.”

An explosion of political concern about content on this new medium called the internet followed, leading to groundbreaking internet content laws, rules and regulations. The most important of these, Section 230 of the Communications Decency Act, insulated internet platforms from liability for content created by others and allowed platforms to edit content in any way they wished, virtually without oversight or liability.

As I explained in an earlier piece, nearly all of this early attention to internet content was focused on cyberporn, and it established a clear right for the government to oversee internet content. Previously, the government’s role in managing content on computer bulletin boards and in chat rooms was much less clear.

Twenty-seven years later, few talk about regulating cyberporn; the focus is almost entirely on D&M. But those initial cyberporn laws established the foundation for government regulation of D&M, and they raise some of the same difficult questions.

Most notably: If governments or platforms prohibit D&M, then they must define with some exactness what is, and is not, D&M, just as governments tried to define obscene pornography during the last century. Precisely defining D&M today is far more complicated than defining obscenity in the 1900s, because large internet platforms serve myriad nations, societies, religions, jurisdictions and languages. Accordingly, there is a temptation simply to hark back to Justice Potter Stewart’s 1964 test for obscene pornography, “I know it when I see it,” and to rely on “fact checkers” instead of justices to call out D&M “when they see it.”

Not surprisingly, there is no universally agreed-upon definition of either “disinformation” or “misinformation,” although many definitions of disinformation center on the concept of “false” and of misinformation on “misleading.” Merriam-Webster defines disinformation as “false information deliberately and often covertly spread (as by the planting of rumors) in order to influence public opinion or obscure the truth” and misinformation as “incorrect or misleading information.” Some of the time, establishing truth or falsity is straightforward, but we all know that many times it is not. My fourth-grade teacher illustrated this by showing us a partially filled glass and asking whether it was “half full” or “half empty”; we immediately divided into respective camps. By seventh grade, we had learned in debate club that advocates emphasize truthful facts that support their opinion and discredit truthful facts that do not.

In a far more sophisticated way, President Obama explained that “any rules we come up with to govern the distribution of content on the internet will involve value judgements. None of us are perfectly objective. What we consider unshakeable truth today may prove totally wrong tomorrow. But that doesn’t mean some things aren’t truer than others or that we can’t draw lines between opinions, facts, honest mistakes, intentional deceptions.” Sometimes, as Obama noted, what is considered true or false changes. As evidence of how internet D&M “truth” evolves, the Knight First Amendment Institute’s Evelyn Douek recently described in Wired magazine how multiple D&M classifications were later revised or even reversed.

Regardless, dozens of governments have criminalized or regulated internet D&M and made large platforms responsible for illegal D&M posts by third parties. According to the Poynter Institute, posting “false information” on internet platforms is a crime in many countries, and more are on the way. In these situations, governments, through their courts or bureaucracies, will decide what is and is not disinformation or misinformation. At the same time, public demands are increasing for internet platform executives to more actively regulate or prohibit D&M outside of (or in conflict with?) any government regulations.

Either way, governments and business executives are in for a very difficult task.

NOTE: This post has been updated from the original to correct the date of the Time magazine cover mentioned in the sixth paragraph.

Roger Cochetti provides consulting and advisory services in Washington, D.C.  He was a senior executive with Communications Satellite Corporation (COMSAT) from 1981 through 1994. He also directed internet public policy for IBM from 1994 through 2000 and later served as Senior Vice-President & Chief Policy Officer for VeriSign and Group Policy Director for CompTIA. He served on the State Department’s Advisory Committee on International Communications and Information Policy during the Bush and Obama administrations, has testified on internet policy issues numerous times and served on advisory committees to the FTC and various UN agencies. He is the author of the Mobile Satellite Communications Handbook.


