Gaps in AI robocall ban boost pressure on Congress, election officials

FILE - A man uses a cellphone in New Orleans, Aug. 11, 2019. On Tuesday, May 23, 2023, attorneys general across the U.S. joined in a lawsuit against a telecommunications company accused of making more than 7.5 billion robocalls to people on the national Do Not Call Registry. (AP Photo/Jenny Kane, File)

Gaps in a federal ban on robocalls with voices generated by artificial intelligence (AI) are highlighting concerns about the lack of regulations on other digitally altered content and its use in campaigns.

The Federal Communications Commission (FCC) last week unanimously voted to recognize AI-generated voices as “artificial” under the Telephone Consumer Protection Act, banning them from use. The vote came shortly after a call with an AI-generated voice impersonating President Biden spread throughout New Hampshire ahead of the state’s primary.

Experts called the FCC ban a welcome first step toward curbing deceptive AI-generated content, but not nearly enough on its own.

“Of course, voice content is very, very important, but it’s just one kind,” said Julia Stoyanovich, an associate professor at New York University’s Tandon School of Engineering.

“We need to be thinking holistically about AI-generated media and how to regulate the use of such media and how to ban, or have accountability more generally, when these media are used in particular settings.” 

Under the Telephone Consumer Protection Act, which restricts the use of artificial or prerecorded voice messages in telemarketing calls, the FCC can fine robocallers and block calls from telephone carriers facilitating illegal robocalls. 

“AI-generated voice cloning and images are already sowing confusion by tricking consumers into thinking scams and frauds are legitimate,” FCC Chair Jessica Rosenworcel said in a statement. 

“No matter what celebrity or politician you favor, or what your relationship is with your kin when they call for help, it is possible we could all be a target of these faked calls.”

“That’s why the FCC is taking steps to recognize this emerging technology as illegal under existing law, giving our partners at State Attorneys General offices across the country new tools they can use to crack down on these scams and protect consumers,” she added.

The FCC and other federal regulators have faced steady pressure from the nonprofit group Public Citizen and other advocates calling for AI guardrails ahead of the 2024 election.

“This rule will meaningfully protect consumers from rapidly spreading AI scams and deception,” said Robert Weissman, president of Public Citizen.

“Unfortunately, through no fault of the FCC, this move is not enough to safeguard citizens and our elections,” he added.  

The FCC’s limited scope leaves AI-generated images and videos unregulated as political campaigns and their supporters increasingly use such materials ahead of the election.

“Here we’re talking about political advertising,” Stoyanovich added. “And this, of course, is probably the most important issue that we’re facing this year, being an election year.” 

Experts and advocates are now boosting pressure on the Federal Election Commission (FEC) to fill the gaps left by the FCC’s robocall ban and ramp up its efforts to regulate AI. Public Citizen, which petitioned the FEC to clarify its rules on the issue, has said the agency is moving too slowly.

The FEC voted in August to consider clarifying its rule against fraudulently misrepresenting candidates or political parties to explicitly cover deceptive AI in campaigns. 

The FEC has not updated the rule since the public comment period closed in October, and a spokesperson previously told The Hill the agency does not have an update on timing.

FEC Commissioner Sean J. Cooksey (R) told The Washington Post last month the commission and staff are “diligently reviewing the thousands of public comments submitted,” and he expects the commission to resolve the rule “by early summer.”  

Nick Penniman, founder and CEO of the nonpartisan political reform group Issue One, said in a statement the FCC’s rule is a “positive step, but it’s not enough.” Penniman called for Congress to take action to “prohibit bad actors from using deceptive AI to disrupt our elections” and for the FEC to clarify language to ban the use of deceptive AI in campaign communications.  

“The unregulated use of AI as a means to target, manipulate, and deceive voters is an existential threat to democracy and the integrity of our elections. This is not a future possibility, but a present reality that demands decisive action,” Penniman said.  

As the FEC and Congress weigh their next steps, the FCC may have trouble enforcing its new ban.

The challenge of identifying AI-generated content could thwart the effectiveness of the rule, said Jessica Furst Johnson, an election lawyer at Holtzman Vogel and general counsel to the Republican Governors Association.

Furst Johnson said that because the FCC rule relies on reports from robocall recipients, voters could be more likely to bring a complaint about supposed AI use based on a call from a party or candidate they don’t support.

Stoyanovich also warned that it will be “really difficult” to enforce the rule without some type of automation. 

“If this is something very difficult for a person to tell apart, whether it’s a robocall that is using a machine-generated voice or a human speaking, then it will also be very difficult for automated detection,” Stoyanovich told The Hill. 

“And if we can’t automate this, then it’s just going to be really difficult generally,” she continued.

Some social media companies are also taking steps to curb the spread of AI-generated political content ahead of the election.

Meta, the parent company of Facebook, Instagram and Threads, is ramping up its efforts to detect and label AI-generated images.


