AI can digitally clone anyone — laws and norms need to catch up

Scarlett Johansson attends the “Asteroid City” red carpet during the 76th annual Cannes Film Festival at the Palais des Festivals on May 23, 2023 in Cannes, France. (Photo by Mike Marsland/WireImage)

Last week, actor Scarlett Johansson accused OpenAI of creating a voice for its ChatGPT system that sounded “eerily similar” to hers. Johansson had reportedly rejected an offer from the company last year to voice its new system.

If OpenAI did clone Johansson’s voice, the process would have been remarkably simple. AI companies like ElevenLabs, HeyGen and Synthesia can make a convincing video or audio recreation of anyone based on just a few minutes of footage. You can also pick a preexisting avatar, input text, and out comes a human-like video saying whatever you want.

These digital clones are surprisingly convincing. When I cloned my voice and used it in a phone call with my sister, she didn’t even notice. In a recent set of studies, I compared viewers’ reactions to AI avatars and real humans leading a business training video and giving an entrepreneur’s pitch. More than half of the subjects believed the AI was a real person, and another 10 percent to 15 percent weren’t sure.

As the OpenAI controversy shows, this technology comes with major risks. There are concerns about misinformation, like a fake robocall to voters that used President Joe Biden’s voice. Sexual deepfakes have targeted celebrities like Taylor Swift and underage girls at a Beverly Hills middle school.

Scammers have also put the technology to quick use. A mother in Arizona received a call with her teenage daughter’s voice saying, “Mom, these bad men have me. Help me, help me.” (Fortunately, her daughter was fine.) A Hong Kong finance worker paid out $25 million in company funds after a video call with an AI deepfake of his boss.

Despite their dystopian applications, digital avatars can also do a lot of good, from onboarding employees to sharing public health information in dozens of languages. More than half of Fortune 100 companies already use AI avatars. Xerox, Zoom and Salesforce are deploying them in multiple languages for training and kickoff videos, and small businesses on a budget are using them for Instagram ads.

This is just the beginning. During a future pandemic, we could instantly distribute health guidance to communities in the language they speak, delivered by a person who looks like them — both factors that research shows are key to effective communication. Imagine if Khan Academy’s AI chatbot tutor had a voice and an avatar rather than just communicating through text — something at least one LSAT tutoring company is experimenting with. My research shows that avatars are still effective even when we know they aren’t real, and viewers can learn just as much from a digital avatar as from a human.

While there are many exciting applications of digital avatars, the OpenAI controversy — along with the many scams and schemes described above — makes it clear that we need new norms, legal protections and digital tools to prevent abuses.

An important first step is to guarantee individuals ironclad control over their digital likeness. A bipartisan bill that emerged after the Swift scandal to criminalize nonconsensual, sexualized AI-generated images is a good start, as is legislation that would crack down on AI robocalls.

But these don’t go far enough. It needs to be easy and quick to assert control over our digital likenesses in every context. If Johansson sues OpenAI, it may set some interesting legal precedents.

Legislation isn’t the only path. After its strike last year, the SAG-AFTRA union secured provisions that require actors to approve any use of their likeness via AI. Every worker’s contract should offer these protections. While I’ve experimented with creating AI clones of myself, I wouldn’t want my university using my digital clone to teach students without my permission.

AI companies are also building invisible fingerprints into their tools to track who abuses them. OpenAI’s Sora, which constructs realistic videos from simple text prompts, will feature a watermark at the bottom of each clip and embed metadata that can be traced back to its creator. And a major European Union law will require developers to label AI-generated content.

We also need new norms about when it’s acceptable to use an AI avatar and when to disclose such use. An AI clone of the CEO may be fine for an employee welcome message, but not for addressing a company controversy. Authenticity is important, especially for well-known figures — one reason altered photos of Kate Middleton went viral earlier this year. And mimicking someone without their permission is clearly out of bounds, as OpenAI quickly discovered.

AI avatars and clones offer a useful and cost-effective way to communicate information, but they aren’t suited to every situation. They must be used with caution, and we must create laws and norms to prevent abuses.

However we feel about it, this technology is not going away, and it’s only going to get better and faster. By taking the risks seriously, we can also take advantage of the possibilities.

Stephen Lind is an associate professor of Clinical Business Communication at the University of Southern California Marshall School of Business.


