
Artificial intelligence and the looming misinformation society


ChatGPT, a new artificial intelligence (AI) tool from OpenAI, has caused much amazement and apprehension. An exemplar of generative AI, ChatGPT combines powerful text generation with state-of-the-art conversational AI, to startling effect. This is made possible by advances in natural language processing, transformer neural networks, and reinforcement learning. ChatGPT has been trained on datasets containing about 500 billion words of text. For comparison, the English-language Wikipedia, one of the sources on which it was trained, totals about 4 billion words. The end result is the closest thing to a collective hive brain that we have, seemingly capable of an “ask me anything” on practically any topic. You do not have to take our word for it; try it for yourself.

ChatGPT has caught the imagination of the general public for its versatility and ease of use. One does not have to be a coder or possess technical competence to use the tool. Type a question and, presto, it can produce school and college essays, come up with cooking recipes and workout regimens, pass practice bar exams, write working software code, compose poetry, set examination questions, solve high school math problems, explain scientific concepts, and more. It can even edit all of the above based on user feedback. The list of things it can generate in a jiffy seems endless, limited only by one’s imagination. ChatGPT is so impressive that some experts speculate it may end Google’s reign as search king.

Granted, such a tool has many potential applications in AI-assisted learning, business, research, and more. However, there is a dark side to consider. One problem is the lack of factual accuracy in the generated text, which OpenAI itself acknowledges. Another is the generation of offensive content: even if the makers of AI tools install guardrails, as OpenAI has attempted to do, they are relatively easy to sidestep. A third is misinformation, a scourge that has assumed pandemic proportions across the world. Will tools like ChatGPT become weapons of mass deception?

We worry about deep-fake videos and images, but now we must also contend with deep-fake text. We are still searching for satisfactory ways to stop the spread of misinformation on social media, and now we face a potentially bigger problem. On existing social media, propagandists already control the means of distributing and disseminating (mis)information. Adding fuel to the fire, AI puts an efficient means of production in their hands to turbocharge their mischief. The result may well be malarkey at scale.

What complicates matters is that AI-generated content commingles fact and fiction, and the end result will often seem believable enough. How will we know whether the AI-written output is accurate? Some parts of the content make sense and some may not, and it is not always clear which is which. Well-meaning users can end up believing falsehoods and unwittingly spreading misinformation. Subject-matter experts may be needed to spot the logical and factual errors hidden amid the glib AI-generated prose.

How do we deal with a flood of intentional misinformation? Fact-checking is slow and expensive; debunking a lie takes an order of magnitude more effort than spreading one, and machine-generated falsehoods will stress-test our societies further. In short, when misinformation is created by machines that can provide automated “answers,” there are serious implications for our polity, which rests on a common agreement over facts.

And how do our institutions deal with not-so-friendly foreign powers deploying these smarter tools to sway public opinion? With rival countries developing their own generative AI tools, the risks of misinformation will multiply manifold. If we are drowning in misinformation, what safe harbors can the public seek? Will there be a “flight to quality” and a renewed trust in legacy media outlets and platforms with a reputation for objectivity?

Like it or not, the genie is out of the bottle. Our institutions cannot regulate their way out of this. Blanket rules for generative AI tools may not be feasible because of freedom of speech protections, restraint-of-trade concerns, national industrial policies, competing permissive jurisdictions, and other factors. Instead, we must devise creative and technological solutions, both individual and institutional, to cope with the misinformation tsunami headed our way.

Kashyap Kompella, CFA, is CEO of RPA2AI Research and visiting faculty at the Institute of Directors. James Cooper is a professor of law at California Western School of Law in San Diego and a research fellow at Singapore University of Social Sciences. 

