Twitter solicits feedback on new ‘deepfakes’ policy

Twitter is soliciting feedback to inform its new policy limiting the reach of “deepfakes,” or video footage that has been altered in misleading ways. 

Twitter’s vice president of trust and safety, Del Harvey, wrote in a blog post that Twitter might begin labeling tweets that include “synthetic or manipulated media” or warning users when they’re sharing such content.

“We propose defining synthetic and manipulated media as any photo, audio, or video that has been significantly altered or fabricated in a way that intends to mislead people or changes its original meaning,” Harvey wrote. The proposed policy would cover both “deepfakes,” footage that has been doctored using artificial intelligence, and “shallowfakes,” video that has been significantly and selectively edited without advanced tools.

A controversial video of Speaker Nancy Pelosi (D-Calif.), which went viral across most of the top social media platforms in May, kicked off a larger conversation about how companies like Facebook, Twitter and YouTube plan to deal with the deluge of manipulated footage that will likely flood their networks ahead of the 2020 presidential election.

Facebook drew a wave of harsh criticism when it declined to take down a user-posted video of Pelosi that was slowed down and edited to make it appear as though she was slurring her words. Hundreds of comments on the video indicated that viewers believed the footage was unaltered. Shortly after, President Trump tweeted a video edited to make it seem like Pelosi was stumbling over her words.

Facebook, Twitter and YouTube are all now working to formulate new policies around dealing with deepfakes and other manipulated footage, the companies told Rep. Adam Schiff (D-Calif.) in letters earlier this year.

“The solutions we develop will need to protect the rights of people to engage in parody, satire and political commentary,” Twitter’s director of public policy, Carlos Monje, wrote in the July letter.

In the post on Monday, Twitter’s Harvey wrote that Twitter is considering removing manipulated footage when it could “threaten someone’s physical safety or lead to other serious harm.” But the bulk of the proposed policy would simply inform Twitter users when footage has been altered rather than remove it entirely.

Twitter is soliciting feedback through a public survey and online comments until Nov. 27, according to the blog post.

“At that point, we’ll review the input we’ve received, make adjustments, and begin the process of incorporating the policy into the Twitter Rules, as well as train our enforcement teams on how to handle this content,” Harvey wrote, pledging to make another announcement “30 days before the policy goes into effect.”