The government can’t stop those AI robocalls, so stay skeptical 

A frame from a deepfake video of former President Barack Obama, showing the facial-mapping technology that lets anyone create footage of real people appearing to say things they never said. There is rising concern that U.S. adversaries will use such tools to make authentic-looking videos to influence political campaigns or jeopardize national security.

Anyone with a cell phone knows that robocalls are a common nuisance. But these days, that robocall might be generated by artificial intelligence (AI), and the voice on the other end could sound like the president of the United States.

As AI technology continues to improve and reshape our day-to-day lives, this type of scam will become more common, and its scope will expand beyond calls from an AI “Joe Biden.” In fact, something similar has already happened: a political action committee affiliated with Florida Gov. Ron DeSantis used AI to replicate Donald Trump’s voice in an attack ad.

In response to such incidents, many are calling for federal intervention, and the Federal Communications Commission (FCC) is the first agency to answer the call. 

The FCC confirmed in early February that using AI-generated voices in robocalls is illegal. The ruling applies to all forms of robocall scams, including those related to elections and campaigns. While much of the media coverage framed the decision as a move to “ban” or “outlaw” the use of AI in robocalls, the agency was simply confirming that existing federal regulations already apply to AI-generated phone calls.

However, as anyone with a phone knows, unwanted robocalls remain a common problem despite FCC regulation, and there is little evidence to suggest this ruling will reduce the volume of AI-assisted robocalls voters receive over the next year. In fact, unwanted robocalls have been illegal since 1991, when Congress passed the Telephone Consumer Protection Act (TCPA), which prohibits making calls using an “artificial or prerecorded voice” without the consent of the call recipient.

As a consequence, Americans must be prepared to protect themselves from bad actors who use robocalls to spread false information about the election.

It’s also a reminder that government regulation is all too often ineffective at stopping bad actors from taking advantage of the public. Over the past five years, Americans have received, on average, more than 50 billion robocalls per year that violate the TCPA. There are various explanations for why the ban falls short, ranging from a lack of enforcement authority to outdated definitions, but the larger point is that prohibitions imposed by the government rarely work as intended. That dynamic is unlikely to change as policymakers at all levels of government respond to public pressure to do something about AI in elections by proposing new bans and restrictions instead of making existing regulations enforceable.

If government regulation cannot stem the tide of robocalls this election season, the burden will fall on all of us to stay vigilant against attempts to deceive. The extensive media coverage of AI this election cycle is good for public awareness, and technology companies have committed to helping spread the word as part of their recently announced accord to combat deceptive uses of AI in elections.

It’s these light-touch efforts that empower individual voters and hold the most promise for combating AI-driven attempts to disrupt the 2024 election.

Thanks to AI, the next fake Joe Biden robocall or Donald Trump deepfake is just a keystroke away. For voters, the first line of defense against these deceptions is to assume that government efforts to stop them will fail.

Chris McIsaac is a fellow with the R Street Institute’s governance program.