The views expressed by contributors are their own and not the view of The Hill

The left and right agree on one thing: We need more social media data

The Twitter application is seen on a digital device, April 25, 2022, in San Diego. (AP Photo/Gregory Bull, File)

We live in a moment where facts are drowning in an ocean of manipulation enabled by social media. Trust in information and institutions has plummeted everywhere.

While the public square has grown increasingly polarized and shrill, those on the left and the right share one perspective: Both believe tech platforms are biased against their views or, worse, censor them.

The reality is more complex, but we will never know whether their grievances are justified without a complete picture of social media moderation and curation systems. This includes a fuller understanding of the activities of those using social media to try to manipulate us, from hostile states like Russia and China to extremist groups of all ideological persuasions.

Russia continues to wage information war around the world, focused now on weakening opposition to its unbridled aggression in Ukraine. My organization, the Institute for Strategic Dialogue, an independent think-and-do tank, has documented extensive Russian information operations. We have also shown that denials of Russian war crimes in Bucha earned three times more engagement than factual accounts. We can expect Russia to launch substantial information operations targeting the 2024 elections, aiming to disrupt U.S. support for Ukraine, which is critical to a positive outcome in that conflict.

We have also documented the evolution of Chinese Communist Party (CCP) information operations in recent years, including efforts to amplify domestic divides in Western countries. Congress, in bipartisan fashion, is rightly concerned about the national security implications of Chinese ownership of TikTok, the data the platform has collected, and the information war that could be waged with that data. But CCP information operations aren't limited to TikTok.

Extremists of every ilk use online platforms to promote violence against our communities. It remains shockingly easy to find terrorist and violent extremist content online, such as videos celebrating the 2019 terrorist attack in Christchurch, New Zealand. The perpetrators of the tragedies in Buffalo, Pittsburgh and San Bernardino were all immersed in the hateful content and communities that we have tracked online. Since Jan. 6, we have documented the increasing normalization of calls to violence against elected and public officials and institutions. To be clear, the First Amendment is not carte blanche to incite insurrection or violence.

But the problem isn’t just that bad actors are using and abusing social media. It is that in their drive for profit, social media companies have ended up prioritizing engagement over safety. Products and algorithms serve users content that will keep them on their platforms as long as possible: that which engages or enrages.

Our own research has shown how platforms have served teenage boys material that encourages hate and violence toward women, suggested hashtags used to abuse female political candidates and algorithmically recommended white nationalist literature.

This is not a free speech environment; it is a curated speech environment, in which the major social media platforms determine who sees what.

Should bots have free speech rights? Should company algorithms, responsible for amplifying harmful content or one view over another, be protected from liability?

Whatever your answer to these questions, it should be clear that exposing the nature and scale of these activities is not censorship: It is critical, not least from a national security perspective. It can help save lives and protect our citizens and democratic processes from covert information operations. It is also the necessary basis for informed policy responses. 

More information is clearly in everybody’s interest. Yet access to platform data is currently severely circumscribed. In fact, many platforms have been shutting down that access over recent years.

So, how do we fix this?

Firstly, we need strong federal privacy laws, with enhanced protections for children, that give people greater control over the data they share online, with whom they share it, and how platforms use it.

Secondly, we need consumer protection legislation to ensure people can trust the products that social media companies provide. If a private company chooses to allow X and not Y, then users should be able to trust that this is the product they, or their children, will get.

Thirdly, we need improved transparency requirements for social media platforms. This includes access to data for independent researchers and regulators, allowing us to better evaluate everything from how platforms moderate content to the ways in which they boost — or de-boost — and target certain types of content. Public interest research into the scale, nature and extent of rights infringements online — including privacy, freedom of speech and protections against harassment, abuse and incitement — is vital.

With these policies in place, consumers would be able to make truly informed choices about the online platforms they use based on what those platforms do or do not allow, the data they collect, and independent assessments of whether they can back up their claims.  

To protect free speech online we need transparency around the invisible hands that guide our digital experiences.

This should be a common cause for both left and right.

Sasha Havlicek is the co-founder and CEO of the Institute for Strategic Dialogue.
