
AI advocates must address a massive 2024 risk: global elections

Today, the biggest challenge standing between us and the benefits of artificial intelligence is AI backlash. In the year since ChatGPT’s release, both AI’s capabilities and skepticism have only expanded. According to a September Gallup poll, only 10 percent of Americans believe AI does more good than harm.

Ten percent. This remarkable statistic suggests the backlash has barely begun. Few see AI’s promise, and many won’t flinch at regulating it with a harsh cudgel. 

Those who do see AI’s amazing promise to help humanity in tangible ways — including continued cheap and rapid drug, materials, and mathematical discoveries — must prove the tech to the skeptical 90 percent. That means we must be willing to put real work into analyzing and solving AI’s real, near-term challenges.

Entering 2024, one risk stands far above the rest: forthcoming elections.

2024 is set to be the biggest election year in history, with over half the world heading to the polls. This coincides with an AI-powered transformation of the online environment and already-rising levels of disinformation.

In the last year, generative AI has grown truly capable of producing believable synthetic content, and the political implications are serious. Since October, the Israel-Hamas War has provided a prelude: AI generators are widespread, actively propagandizing, misleading and deceiving on behalf of both sides. Truly horrible content has been generated and tensions inflamed.

Likewise, for elections, the world just experienced a taste of the AI “fake vs. real” problem.

Forty-eight hours before voters hit the polls for Slovakia’s September election, a slanderous viral deepfake audio track featured a leading candidate allegedly discussing election rigging. While the impact was uncertain, it may have contributed to the slandered candidate’s loss. At a minimum, it sowed uncertainty, distrust and democratic angst.

While “fake news” isn’t novel, AI represents a tectonic shift in scale, complexity and ease. Generated content is now practically free to produce at an industrial scale. The result has been a sudden deluge; as of August, the number of AI-generated images had eclipsed the number of photos taken in the first 150 years of photography. Even two years ago, generated content was limited by the few minutes it takes to craft a meme. That ceiling has collapsed.

Perhaps the biggest shift is content complexity. As Wharton professor Ethan Mollick recently demonstrated, using minimal data, AI can craft stunning audio-video replicas of real people. Such easy-to-produce, deeply convincing content wasn’t possible during past cycles and could shake the foundations of any election.

This could also fuel so much backlash that it puts the brakes on the necessary process of responsible AI diffusion. When it comes to realizing any technology’s promise and solving its safety problems, small experiments have limits. A handful of AI researchers cannot spot and fix everything. And no matter the backlash, AI is here to stay.

If we want AI safe and subject to the most effective scrutiny, it must be responsibly diffused so millions of individuals can identify, measure and mitigate its weaknesses. Only through society’s laboratory can we fully discover complementary inventions and novel uses to realize its potential. Reactionary regulations written amid an intense backlash risk halting this process, tying responsible, law-abiding hands looking for solutions while failing to stop the worst AI uses as the diffusion moves underground.

As the bedrock of liberal society, elections are worth unique, immediate attention. So how do we proceed?

The U.S. Cybersecurity and Infrastructure Security Agency has taken a solid first step by preparing existing election infrastructure for this AI election moment. For most problems, however, neither this nor the regulatory pen can stop underground adversaries from pumping out content from the hundreds of thousands of generators already out there. Innovating around the problem, therefore, is the best course.

Policymakers should consider further innovation prize challenges to incentivize the development of trust-building tech, including AI watermarking and AI forensics. Perhaps more importantly, both the public and private sectors should begin compiling data on this year’s election content. It’s unlikely we can completely solve the problem come November, but we can collect enough data to understand it and its sources and create testing and training tools for 2026.

The private sector’s actions will be pivotal. Innovators must recognize that AI progress proceeds on a razor’s edge; investments will fail if public trust fails. Just as tech backlash put the brakes on the once-world-leading U.S. nuclear sector, it could stall our leading AI position and create a vacuum for someone else to fill. 

The industry must take the lead and redirect focus from imagined sci-fi problems and “superalignment” to immediate problems like elections. While perhaps costly, it will pay dividends by laying the building blocks of trust and ensuring society tilts towards progress and true safety.

Matthew Mittelsteadt is a research fellow and technologist with the Mercatus Center at George Mason University.

