
Why media literacy is key to tackling AI-powered misinformation


With elections in the United States, the United Kingdom, India, Taiwan and more than a dozen other European, Asian and African democracies, as well as elections to the European Parliament, all taking place in 2024, a huge share of the global population will be casting ballots next year.

But this surge of political activity arrives amid an explosion in online misinformation and growing concern about the potential impact of a wave of AI-generated content.

Recent advances in generative AI promise enormous benefits to society, from breakthroughs in healthcare and scientific research to education and beyond. At the same time, as the technology matures, it has the potential to amplify existing challenges around mis- and disinformation and to create new ones. Even the CEO of OpenAI, the company behind ChatGPT, has admitted to being “a little bit scared” of this threat.

And with so much political activity occurring globally next year, the stakes could not be higher.

Most citizens seem to agree. New research from Logically Facts, surveying more than 6,000 online users in the United States, the United Kingdom and India, found that almost three-quarters (72 percent) agree that society and politics are being undermined by inaccurate and false information circulating in the media and across social media channels.

This concern touches every sphere of public life: the climate crisis and environmental issues; public and personal safety during emergencies; and healthcare choices. But concern among online users is highest when it comes to elections and the circulation of false information across the internet.

A lack of trust

These concerns over the negative effects of misinformation and disinformation on public life appear to stem from a broader erosion of trust. The same research found a worrying lack of trust among most people, not just in the mainstream media and social media platforms but, in some cases, even in their own ability to parse fact from fiction.

Trust in the mainstream media, including newspapers and public service broadcasters, stands at just 13 percent, while only 9 percent of online users find platforms like Facebook trustworthy. In fact, when presented with more than 10 social media platforms, almost a quarter of respondents (22 percent) said they trust none of them. This lack of trust seems to be breeding self-doubt, with around one in six people saying they do not trust their own ability to sort fact from fiction on the internet.

Combating misinformation

The internet has greatly broadened access to information, and social media platforms have made it easy to share. These are undeniably positive developments. But separating truth from falsehood is a genuine challenge, so much so that almost a quarter of us now trust no sources at all, and one in six do not even trust our own judgment.

Countermeasures have been evolving, particularly at the platform level, whether through algorithmic adjustments or new legal frameworks. Other measures have also matured, notably fact-checking on platforms, which is not about suppressing or censoring opinions but is the methodical process of determining the veracity of public statements, images and news stories circulating on the web.

Our research shows that most people want and expect social media firms to do more to tackle misinformation. Six in ten people (61 percent) believe social media companies and media organizations could do more, particularly when it comes to fact-checking and verification. Only one in ten believes these companies should not be fact-checking anything. Indeed, the majority of online users (55 percent) are more likely to trust a social media platform that uses fact-checking.

Yet, while much progress has been made in this area, the growth in generative AI means that the threat landscape continues to evolve and scale. So what more can be done ahead of 2024?

The case for greater media literacy

One measure lies in fostering greater media literacy: equipping people to become more critical consumers of online content.

Improving media literacy is not about telling people what to think; it is about empowering them with the tools and skills to think critically. It means asking questions like “Where did this piece of information come from?” and “Why was this piece of content created?” This is vital because, however successful media and social media platforms are at filtering out misinformation, online users will still see and consume content, and they need the skills and knowledge to engage with it critically.

Most online users are confident in their own media literacy: in the same study, more than four in five (84 percent) trust their own ability to sort fact from fiction on the internet. Yet there is a gap. When unsure about the facts, consumers are still more likely to search the internet to verify information (47 percent) or to discuss it with friends (28 percent) than to rely on their own judgment alone, indicating a hesitancy to depend solely on their own skills and knowledge.

While progress has been made in literacy programs, there is still more work to be done to provide users with the tools they need, and to overcome the perception in some quarters that such programs are about teaching people what to think, rather than how to think.

It’s important to remember that media literacy and other measures to counter mis- and disinformation are no silver bullet. But taken together, they can not only bolster those faltering levels of trust but lead to a healthier, more vibrant and robust public discourse. That is surely worth striving for. I can’t think of a more important issue for the health of democracy next year and beyond.

Baybars Orsek is managing director at Logically Facts, a tech company combining AI and human expertise to tackle mis- and disinformation. Previously he was director of the International Fact-Checking Network at the Poynter Institute.
