The views expressed by contributors are their own and not the view of The Hill

Disinformation’s next frontier: your texts and private messages

This election season, disinformation is once again spreading across platforms and in multiple languages. Democratic members of Congress are calling on social media companies like Twitter and Facebook to do more to combat election disinformation.

They are right to do so, but they are already missing the next trend: Disinformation is increasingly spreading via private messaging apps such as iMessage, Telegram and WhatsApp. The private nature of these exchanges poses a threat not present on more open platforms.

Corporations cannot manage this emerging threat without help. Instead of a reliance on tech companies to regulate election disinformation, our democracy needs public-private partnerships that put resources into community-led programs to counter disinformation.  

This new approach is necessary to combat a threat with heightened stakes. Disinformation sent via text or on encrypted messaging apps can do more harm than disinformation on other social media because people don't expect it, believe it more readily and find it harder to flag as suspicious. Unlike on Facebook, where users are broadly aware of disinformation, people are less likely to detect false or politically motivated information in a text. In texts and direct messages, information feels familiar and informal and is usually unrelated to news or politics.

Disinformation becomes especially dangerous when it spreads through close ties, as networks of family and friends appear much more trustworthy than one’s Facebook timeline. It can also spread much faster.

For example, WhatsApp, which is highly popular in Latino and Asian American communities and used by over 85 million Americans, allows family and friend group chats of up to 512 members, where one original message can spread disinformation to an entire private network. Moreover, it uses end-to-end encryption, meaning messages sent between users are unreadable to the platform itself or to any third party. This makes existing countermeasures, such as removing content that violates community standards, impossible to apply. Given these limited means for intervention and removal, disinformation can spread uninterrupted from chat to chat, with people relying particularly on the forwarding feature.
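
End-to-end encryption is the crux here. As a minimal sketch, and not WhatsApp's actual protocol (the company uses the more elaborate Signal protocol), the following Python example with the PyNaCl library shows why a relaying platform sees only unintelligible ciphertext; the names and the message are illustrative:

    from nacl.public import PrivateKey, Box

    # Each user generates a key pair; private keys never leave the device.
    alice = PrivateKey.generate()
    bob = PrivateKey.generate()

    # Alice encrypts a message that only Bob's private key can unlock.
    ciphertext = Box(alice, bob.public_key).encrypt(b"Forward this to everyone!")

    # The platform relays ciphertext it cannot read, scan or moderate.
    print(ciphertext.hex())

    # Only Bob, on his own device, can recover the plaintext.
    plaintext = Box(bob, alice.public_key).decrypt(ciphertext)
    print(plaintext.decode())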

This is especially harmful during elections. False information about voting procedures can do immediate damage, as in a recent campaign in Kansas where text messages misled voters on how to vote for abortion access.

At the Propaganda Research Lab at the University of Texas at Austin, we interviewed parents who said political and social justice policies are frequently discussed in parent group chats, since members are concerned with protecting their children's safety. Among these parents, especially in Asian American communities, that concern became a target for disinformation; one example is a false claim that gubernatorial candidate Beto O'Rourke would fire a majority of the police force if elected.

In our research, several community members told us that false information, combined with a general tendency not to fact-check information received via text, led people to make voting decisions based on falsehoods.

All of us should expect disinformation to intensify around elections. Fact-checking all information, even information received via text or seen on WhatsApp, is the first step to countering its negative effects. For what it's worth, there is preliminary evidence that fact-checking is more impactful on WhatsApp than on Facebook. The second step is building more sustainable countermeasures through public-private partnerships that follow a bottom-up approach.

Policymakers can get ahead by moving away from top-down models that mostly reflect the points of view of people in power. Legislative discussions about disinformation, tech-sector regulation and content moderation should include representatives from minority groups, which were disproportionately targeted with disinformation during the 2020 election, so that their experiences and opinions better inform these discussions. The Spanish Language Disinformation Coalition, a recently announced campaign in Texas, is right to emphasize the problem for Spanish speakers, but coalitions like this create real impact only if they play the long game, outside of election season.

The spread of disinformation will also be curbed if individuals understand its ubiquity across all platforms. Tech companies should continue to educate the public and propose new solutions, such as interventions based on metadata, which includes information such as forwarding patterns and frequency. WhatsApp has introduced metadata-based forwarding limits that appear to be slowing the spread of disinformation.
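
WhatsApp has not published how these limits work internally, but the underlying idea can be sketched: the platform tracks only an opaque message identifier and a forward count, never the content, and throttles a message once it looks viral. The numbers below match WhatsApp's publicly announced limits (forwarding to at most five chats, and to only one chat once a message is "highly forwarded"), while the threshold logic is an assumption for illustration:

    from collections import defaultdict

    FORWARD_CAP = 5       # ordinary messages: up to 5 chats per forward
    VIRAL_THRESHOLD = 5   # assumed count at which a message is "highly forwarded"
    VIRAL_CAP = 1         # highly forwarded messages: 1 chat at a time

    forward_counts = defaultdict(int)  # opaque message ID -> times forwarded

    def forward(message_id: str, requested_chats: int) -> int:
        # Decide from metadata alone; the encrypted content is never read.
        cap = VIRAL_CAP if forward_counts[message_id] >= VIRAL_THRESHOLD else FORWARD_CAP
        sent = min(requested_chats, cap)
        forward_counts[message_id] += sent
        return sent

    print(forward("msg-123", 5))  # 5: first forward reaches five chats
    print(forward("msg-123", 5))  # 1: now throttled as highly forwarded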

Tech companies will never be able to solve the disinformation problem alone, particularly as it finds new pathways to end users. More companies should acknowledge their responsibility to take down harmful content, but given the hidden nature of some disinformation, our democracy is in peril if legislators and citizens bet on nothing beyond corporations finding technological solutions.

Inga Kristina Trauthig is a senior research fellow with the Propaganda Research Lab at The University of Texas at Austin, where she leads research projects on disinformation on messaging apps. Katlyn Glover, a graduate researcher at the Propaganda Research Lab focused on election-related misinformation, contributed to this piece.
