Fears grow over AI’s impact on the 2024 election
The rapid rise of artificial intelligence (AI) is raising concerns about how the technology could impact next year’s election as the start of 2024 primary voting nears.
AI — advanced tech that can generate text, images and audio, and even build deepfake videos — could fuel misinformation in an already polarized political landscape and further erode voter confidence in the country’s election system.
“2024 will be an AI election, much the way that 2016 or 2020 was a social media election,” said Ethan Bueno de Mesquita, interim dean at the University of Chicago Harris School of Public Policy. “We will all be learning as a society about the ways in which this is changing our politics.”
Experts are sounding alarms that AI chatbots could generate misleading information for voters who use them to look up details on ballots, calendars or polling places — and that AI could be used more nefariously, to create and disseminate misinformation and disinformation against certain candidates or issues.
“I think it could get pretty dark,” said Lisa Bryant, chair of the Department of Political Science at California State University, Fresno, and an expert with MIT’s Election Lab.
Polling shows the concern about AI doesn’t just come from academics: Americans appear increasingly worried about how the tech could confuse or complicate things during the already contentious 2024 cycle.
A UChicago Harris/AP-NORC poll released in November found a bipartisan majority of U.S. adults are worried about the use of AI “increasing the spread of false information” in the 2024 election.
A Morning Consult-Axios survey found an uptick in recent months in the share of U.S. adults who said they think AI will negatively impact trust in candidate advertisements, as well as trust in the outcome of the elections overall.
Nearly 6 in 10 respondents said they think misinformation spread by AI will have an impact on who ultimately wins the 2024 presidential race.
“They are a very powerful tool for doing things like making fake videos, fake pictures, et cetera, that look extremely convincing and are extremely difficult to distinguish from reality — and that is going to be likely to be a tool in political campaigns, and already has been,” said Bueno de Mesquita, who worked on the UChicago poll.
“It’s very likely that that’s going to increase in the ’24 election — that we’ll have fake content created by AI, if not by political campaigns then at least by political action committees or other actors — and that will affect the voters’ information environment and make it hard to know what’s true and false,” he said.
Over the summer, the DeSantis-aligned super PAC Never Back Down reportedly used an AI-generated version of former President Trump’s voice in a television ad.
Just ahead of the third Republican presidential debate, Trump’s campaign released a video clip that appeared to imitate the voices of his fellow GOP candidates, introducing themselves with Trump’s favored nicknames.
And earlier this month, the Trump campaign posted an altered version of a report NBC News’s Garrett Haake gave before the third GOP debate. The clip begins with Haake’s unaltered report before a voiceover takes over, criticizing the former president’s Republican rivals.
“The danger is there, and I think it’s almost unimaginable that we won’t have deepfake videos or whatever as part of our politics going forward,” Bueno de Mesquita said.
The use of AI by political campaigns in particular has prompted tech companies and government officials to consider regulations on the tech.
Google earlier this year announced it would require verified election advertisers to “prominently disclose” when their ads had been digitally generated or altered.
Meta also plans to require disclosure when a political ad uses “photorealistic image or video, or realistic-sounding audio” that was generated or altered to, among other purposes, depict a real person doing or saying something they did not do.
President Biden issued an executive order on AI in October, including new standards for safety and plans for the Commerce Department to craft guidelines on content authentication and watermarking.
“President Biden believes that we have an obligation to harness the power of AI for good, while protecting people from its potentially profound risks,” a senior administration official said at the time.
But lawmakers have largely been left scrambling to try to regulate the industry as it charges ahead with new developments.
Shamaine Daniels, a Democratic candidate for Congress in Pennsylvania, is using an AI-powered voice tool from the startup Civox as a phone-banking tool for her campaign.
“I share everyone’s grave concerns about the possible nefarious uses of AI in politics and elsewhere. But we need to also understand and embrace the opportunities this technology represents,” Daniels said when she announced her campaign would roll out the tech.
Experts say AI could also be used for good in election cycles — for example, informing the public about which candidates they may agree with on the issues, and helping election officials clean up voter lists to flag duplicate registrations.
But they also warn the tech could worsen problems exposed during the 2016 and 2020 cycles.
Bryant said AI could help disinformation “micro-target” users even more precisely than social media already does. She said no one is immune, pointing to how ads on a platform like Instagram can already influence behavior.
“It really has helped to take this misinformation and really pinpoint what kinds of messages, based on past online behavior, really resonate and work with individuals,” she said.
Bueno de Mesquita said he is not as concerned about micro-targeting from campaigns to manipulate voters, because evidence has shown that social media targeting has not been effective enough to influence elections. Resources should be focused on educating the public about the “information environment” and pointing them to authoritative information, he said.
Nicole Schneidman, a technology policy advocate at the nonprofit watchdog group Protect Democracy, said the organization does not expect AI to produce “novel threats” for the 2024 election but rather potential acceleration of trends that are already affecting election integrity and democracy.
She said there is a risk of overemphasizing AI’s potential within the broader landscape of disinformation affecting the election.
“Certainly, the technology could be used in creative and novel ways, but what underlies those applications are all threats like disinformation campaigns or cyberattacks that we’ve seen before,” Schneidman said. “We should be focusing on mitigation strategies that we know are responsive to those threats that are amplified, as opposed to spending too much time trying to anticipate every use case of the technology.”
A key part of grappling with the rapidly developing technology could simply be getting users in front of it.
“The best way to become AI literate is to spend half an hour or an hour playing with the chatbots,” Bueno de Mesquita said.
Respondents in the UChicago Harris/AP-NORC poll who reported being more familiar with AI tools were also more likely to say the tech could increase the spread of misinformation, suggesting that firsthand familiarity with what the tech can do also raises awareness of its risks.
“I think the good news is that we have strategies both old and new to really bring to the fore here,” Schneidman said.
She said that as AI becomes more sophisticated, detection technology may have trouble keeping up despite investments in those tools. Instead, she said, “pre-bunking” by election officials can be effective at informing the public before they ever encounter AI-generated content.
Schneidman said she hopes election officials will also increasingly adopt digital signatures to show journalists and the public which information comes directly from an authoritative source and which might be fake. She said candidates could likewise attach these signatures to the photos and videos they post, to get ahead of deepfakes.
“Digital signatures are the proactive version of getting ahead of some of the challenges that synthetic content could pose to the caliber of the election information ecosystem,” she said.
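To make the mechanism concrete, here is a minimal sketch of how an election office could sign a statement so that anyone can verify its origin, assuming Python’s cryptography library and an Ed25519 key pair; the sample statement and workflow are hypothetical illustrations, not a description of any actual office’s system.

    # Hypothetical sketch: an election office signs a statement with an
    # Ed25519 private key; anyone holding the published public key can
    # verify the statement came from the office and was not altered.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # The office generates a key pair once and publishes the public key.
    private_key = Ed25519PrivateKey.generate()
    public_key = private_key.public_key()

    # Each official statement is signed before release.
    statement = b"Sample notice: polls are open 7 a.m. to 8 p.m. on Election Day."
    signature = private_key.sign(statement)

    # A journalist or voter verifies the statement against the signature.
    try:
        public_key.verify(signature, statement)
        print("Signature valid: the statement is authentic and unmodified.")
    except InvalidSignature:
        print("Signature invalid: the statement may be forged or altered.")

The appeal of this design is that verification requires only the widely published public key, so a newsroom or voter can check authenticity without contacting the office; the same primitive could be embedded in the metadata of photos and videos.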
She said election officials, political leaders and journalists can get people the information they need about when and how to vote, reducing confusion and limiting voter suppression. She added that narratives around election interference are not new, which gives those fighting AI-driven disinformation an advantage.
“The advantage that pre-bunking gives us is crafting effective counter messaging that anticipates recurring disinformation narratives and hopefully getting that in the hands and in front of the eyes of voters far in advance of the election, consistently ensuring that message is landing with voters so that they are getting the authoritative information that they need,” Schneidman said.
Copyright 2023 Nexstar Media Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.