Political consultant indicted in fake Biden robocall in New Hampshire
The Democratic political consultant who admitted to using a deepfake of President Biden’s voice in a New Hampshire primary robocall earlier this year was indicted Wednesday, the state attorney general announced.
Steve Kramer, who said he created the robocall to warn about the dangers of artificial intelligence (AI), was charged with 13 counts each of felony voter suppression and misdemeanor impersonating a candidate.
“New Hampshire remains committed to ensuring that our elections remain free from unlawful interference and our investigation into this matter remains ongoing,” Attorney General John Formella (R) said in a statement.
The call was the first known use of deepfake technology in U.S. politics, sparking a tidal wave of calls to regulate the use of AI in elections. The fake Biden voice in the call encouraged thousands of New Hampshire primary voters to stay home and “save” their votes.
“This is a way for me to make a difference, and I have,” Kramer told NBC News in February. “For $500, I got about $5 million worth of action, whether that be media attention or regulatory action.”
The consultant previously worked for Rep. Dean Phillips’s (D-Minn.) long-shot presidential campaign, which was suspended in March, though he said Phillips’s team was not connected to or aware of his robocall effort. He also backed efforts to regulate the technology.
“With a mere $500 investment, anyone could replicate my intentional call,” Kramer said in a statement in February. “Immediate action is needed across all regulatory bodies and platforms.”
The Federal Communications Commission (FCC) also announced its own enforcement action against Kramer on Thursday, fining him $6 million and levying a $2 million fine against the telecom carrier that operated the phone lines.
FCC Chair Jessica Rosenworcel called Kramer’s scheme “unnerving.”
“This is only a start,” she wrote in a statement. “Because we know AI technologies that make it cheap and easy to flood our networks with fake stuff are being used in so many ways here and abroad. It is especially chilling to see them used in elections.”
The FCC also announced Wednesday that it will consider requiring political advertisers to disclose the use of AI on television and radio.
“As artificial intelligence tools become more accessible, the commission wants to make sure consumers are fully informed when the technology is used,” Rosenworcel said in a separate statement Wednesday.
The FCC banned the use of AI in robocalls earlier this year after Kramer’s effort in New Hampshire.
AI is “supercharging” threats to the election system, technology policy strategist Nicole Schneidman told The Hill in March. “Disinformation, voter suppression — what generative AI is really doing is making it more efficient to be able to execute such threats.”
AI-generated political ads have already entered the 2024 election cycle. Last year, the Republican National Committee released an entirely AI-generated ad meant to show a dystopian future under a second Biden term. It employed fake but realistic photos showing boarded-up storefronts, armored military patrols in the streets and waves of immigrants creating panic.
In India’s elections, recent AI-generated videos misrepresenting Bollywood stars as criticizing the prime minister exemplify a trend tech experts say is cropping up in democratic elections around the world.
Sens. Amy Klobuchar (D-Minn.) and Lisa Murkowski (R-Alaska) also introduced a bill earlier this year that would require similar disclosures when AI is used in political advertisements.
Updated at 11:45 a.m. EDT
Copyright 2023 Nexstar Media Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.