The views expressed by contributors are their own and not the view of The Hill

India’s TikTok ban didn’t even slow disinformation. We need a better solution.

In this Sept. 27, 2015, file photo, Prime Minister of India Narendra Modi, left, speaks seated next to Facebook CEO Mark Zuckerberg at Facebook in Menlo Park, Calif. (AP Photo/Jeff Chiu, File)

2024 has been declared the “biggest election year in history.” Among the countries going to the polls are the world’s biggest democracy, India, and the world’s richest (if deeply unequal) country, the U.S.

Digital propaganda and online disinformation aiming to manipulate public opinion are proliferating in both countries. In India, however, TikTok has been banned since June 2020, a move driven by the country’s rivalry with China.

In the U.S., President Biden signed legislation that could ban TikTok — one of the reasons cited is to limit the spread of disinformation. But banning TikTok has not solved India’s disinformation problem. Instead, three main trends are hurting Indian democracy as voters are being duped and flooded with false and misleading content: influencers, WhatsApp and generative artificial intelligence.  

These are warning signs for the upcoming presidential elections in the U.S. Policymakers, journalists and the public would be well-advised to keep alert and jointly work towards solutions. 

Throughout my field research in India this year, the rise in influencers’ political importance was omnipresent. Prime Minister Narendra Modi is said to love influencers, and, as one seasoned political consultant explained to me, “the influencers also love him.”

Influencers are courted by Modi and his Bharatiya Janata Party because these accounts can reach parts of the electorate that have previously been outside campaigners’ reach. Further, influencers are seen as more authentic: their political content is mixed in with topics such as beauty or workout advice and hence perceived as less campaign-y. Finally, influencers are considered more controllable than trained journalists — even in countries like India, where critical journalism has severely diminished.

While TikTok was a beloved platform by influencers in India, Instagram Reels and YouTube Shorts managed to fill that gap after the app’s ban. Therefore, instead of focusing on specific platforms, we should rather address the underlying factors facilitating the spread of false and misleading information via influencers.

The first is establishing transparency for paid political advertising online. As in India, influencers in the U.S. are often paid by political stakeholders through a convoluted system of intermediaries without declaring the respective posts and videos as ad content.

More imminently, we should trust our strengths in the U.S., such as a free press.

In the U.S., broadcast news anchors can generally operate without intimidation, and reports from across the political spectrum reach millions of people with their interpretation of events. In India, Modi shuns journalists, interpretations that are not aligned with his are barely broadcast, and online outlets that pursue independent reporting are targeted with tax raids or other forms of suppression.

In order to push back against untrained and secretly paid influencers, journalists in the U.S. should trust their strengths and capitalize on their freedoms while recognizing that their profession is undergoing transformative changes. 

The importance of WhatsApp for Indian political campaigning was undisputed by everyone I spoke to. WhatsApp plays a dual role as a main organizing platform for all sorts of political groups — ranging from local party chapters to violent nationalist groups — as well as a broadcasting tool for political messaging. 

WhatsApp differs from other social media, as it uses end-to-end encryption for private chats and groups, meaning messages sent between users are unreadable to the platform itself or another third party. This makes existing countering mechanisms — like removals or content labeling — impossible.  

In 2024, WhatsApp has grown into a wide-reaching platform — far from the private one-to-one or small group chat app it once was. This election cycle is the first global one in which political actors can employ channels, utilize WhatsApp communities (which subsume several groups) and rely on groups of up to 1,024 members. It therefore allows for large family and friend group chats where disinformation can spread to entire private networks with one original message, all while avoiding scrutiny from corporate content moderators.

In the U.S., WhatsApp is highly popular in Latino and Asian American communities and used by over 85 million Americans. Previous research by the Propaganda Research Lab at The University of Texas, where I am head of research, has shown that disinformation via WhatsApp can harm those communities. 

If we want to avoid a storm of disinformation targeting minority communities in the U.S. in the lead-up to the presidential election, we need to focus on this now. Instead of targeting encryption, however, we need policy discussions that follow a bottom-up approach and aim for public-private partnerships, given the sensitivity of the issue.  

Several community-based interventions have arisen in the U.S. that deserve recognition, such as tiplines by Factchequeado or Indian American Impact. For what it’s worth, there is preliminary evidence that fact-checking is more impactful on WhatsApp than on Facebook. Finally, Meta, WhatsApp’s parent company, must be held accountable for continuing to crack down on inauthentic exploitation of its app, such as mass forwarding or the forced creation of groups.

However, tech companies will never solve the disinformation problem alone, particularly as actors find new pathways to reach end users. In this line, banning or divesting TikTok might alleviate concerns of policymakers fearful of Chinese influence, but it does not establish a better public sphere online. Our democracy is in peril if legislators and citizens bet on nothing but bans and corporations finding technological solutions. 

Finally, all actors in India are experimenting with generative AI, especially for content creation such as fabricated audio and video, so-called deepfakes. While its use is widespread, the propagandists I spoke to questioned its persuasive potential, mainly because it has no track record (yet) of manipulating public opinion.

Overall, generative AI adds another layer to the existing disinformation ecosystem. Already, however, content from generative AI influencers is circulating, alongside ubiquitous deepfake audio created by propagandists and spread on WhatsApp.

We are heading to a future online space where it’s unclear what is real, synthetic or something in between. This underlines the relevance of the influencer trend: If it’s harder (or impossible) to know what is real online, the importance of the messenger, rather than the message, rises. This highlights the necessity to counter election disinformation with a multi-stakeholder coalition that includes public-private partnerships and builds on strengths such as a free press and active civil society. 

One final truth is that threats to our democratic public sphere also come from the top. Prime Minister Modi in India as well as Republican frontrunner Donald Trump have utilized manipulative means to spread some false and misleading content (and they are not the only ones). Only a coalition of actors committed to liberal democracy across the political spectrum can counter these challenges. 

In India, several organizations are holding strong against increasing targeting by Modi, but their future is uncertain. In the U.S., we are not affected by democratic backsliding on the same scale and should take advantage of that.

Ultimately, we must recognize how technology affects our lives while asserting agency as democratic citizens — from policymakers to journalists to everyone else.

Inga Trauthig is head of research at the Propaganda Research Lab at the Center for Media Engagement at The University of Texas at Austin. Gabrielle Beacken, a graduate researcher working at the Propaganda Research Lab focused on democratic backsliding internationally, contributed to the research informing this piece.


Copyright 2023 Nexstar Media Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.
