
How to safeguard democracy from AI disinformation — in 2024 and beyond 


The 2024 presidential election is officially one year away, which means voters will soon face a familiar flood of campaign ads seeking to influence their choices at the ballot box — that is, if they’re not seeing them already. This is a well-established routine in American politics, and voters are used to sifting through these messages while deciding how to vote. 

However, the 2024 election may well depart from this familiar terrain and feature the widespread use of a newer campaign tactic: political ads and communications generated through artificial intelligence (AI) technology.  

AI is already changing the game due to its unprecedented ability to create deceptively realistic false content. This could infringe on voters’ fundamental right to make informed decisions about the candidates on their ballot — a right already threatened by many groups’ ability to hide their donors’ identities and conceal who is spending to influence elections. 

As a former chairman of the Federal Election Commission (FEC) and the founder of a nonprofit that advocates for pro-democracy reforms, I have seen how new technologies change campaigns. Social media platforms like Facebook, TikTok and X (the platform formerly known as Twitter), as well as digital ads on streaming channels and mobile apps, are clear examples of innovations that have fundamentally altered how candidates, parties and political groups seek to persuade voters. 

Historically, our lawmakers and regulators have struggled to keep pace with technological changes, and the rules governing elections have lagged years behind what is actually happening on the airwaves and online. When it comes to AI, we cannot afford a delay in addressing its likely impact on elections. 

AI has the power to manipulate what voters are seeing and hearing in a way that is as convincing as it can be misleading. Although political disinformation is not new, the ease with which AI tools can make the false appear true is a unique challenge for our democracy. Unchecked, the deceptive use of AI could make it virtually impossible to determine who is truly speaking in a campaign ad, whether the message being communicated is authentic, or even whether something being depicted actually happened. 

Consider a few recent cases.  

Earlier this year, the presidential campaign of Florida Gov. Ron DeSantis shared a video on social media that contained AI-generated images showing former President Donald Trump hugging Dr. Anthony Fauci. This event never happened, but reasonable voters seeing the picture could easily conclude it had, and form political opinions based on a fabrication.  

Artificial intelligence can affect candidates on both sides of the aisle: an AI-generated deepfake video recently published on TikTok depicted Sen. Elizabeth Warren of Massachusetts saying Republicans should not be allowed to vote. She never said that, but the fake video garnered nearly 200,000 views in a week on Twitter alone. 

Alarmingly, recent reports indicate that suspected Chinese operatives have already used AI-generated fake images to spread disinformation among voters and create controversy along America’s cultural fault lines. This new avenue for foreign electoral interference is a real threat to both our elections and national security. 

These examples demonstrate AI’s potential to significantly increase misinformation, deception and distrust across the political spectrum. That is why we must approach this threat without regard for partisanship or political gain. 

I believe there are at least three concurrent paths for proactively addressing AI’s use in elections. 

First, Congress should pass a new law specifically prohibiting the deceptive use of AI to engage in electoral fraud or manipulation, an area where the government has a clear, compelling interest in protecting voters and the integrity of the electoral process. A bipartisan group of senators recently introduced legislation to achieve this specific goal, and Congress should pass it as soon as possible. 

Second, Congress should enhance the FEC’s authority to protect elections against fraud. Under current law, candidates are barred from fraudulently speaking for another candidate on a matter that is “damaging” to that candidate. In other words, it is illegal for Candidate A to put words in the mouth of Candidate B on a matter that is damaging to Candidate B’s campaign. The FEC should clarify that this existing ban applies to fraudulent uses of AI. Congress should then expand this provision to prohibit all fraudulent misrepresentation — regardless of who is speaking and whether it is damaging — including through the use of AI. 

Third, Congress should expand existing transparency requirements to include disclaimers on the face of any political ad using AI, which would inform viewers when electoral messages have been materially created or altered with the help of AI. This would at least ensure voters can treat such ads with appropriate skepticism. 

These proposals are neither mutually exclusive nor exhaustive. They are a starting point for what the government, including policymakers across the country, must do. 

The 2024 election stands to be one of the most contentious in our history. AI-based disinformation could add fuel to the fire if we do not act quickly to safeguard our democracy. 

Trevor Potter is president and founder of the Campaign Legal Center and a Republican former chairman of the Federal Election Commission. 
