Meta action on explicit deepfakes under review by Oversight Board

FILE – The Facebook logo is seen on a cell phone, on Oct. 14, 2022, in Boston. (AP Photo/Michael Dwyer, File)

Meta’s Oversight Board will review two cases about how Facebook and Instagram handled content containing artificial intelligence (AI)-generated nude images of two famous women, the board announced Tuesday.

The board is soliciting public comments about concerns around AI deepfake pornography as part of its review of the cases.

One case concerns an AI-generated nude image made to look like an American public figure, which Facebook removed automatically after a previous post of the image was identified as violating Meta’s bullying and harassment policies.

The other case concerns an AI-generated nude image made to resemble a public figure from India, which Instagram did not initially remove after it was reported. The image was later removed after the board selected the case and Meta determined the content was left up “in error,” according to the board.

The board is not naming the individuals involved to prevent further harm or risk of gender-based harassment, a spokesperson for the Oversight Board said.

The board, which is run independently from Meta and funded by a grant provided by the company, can issue a binding decision about content, but policy recommendations are non-binding and Meta has final say about what it chooses to implement.

The board is seeking public comments on strategies Meta could use to address deepfake pornography, as well as on the challenges of relying on automated systems that can close appeals within 48 hours if no review has taken place.

The case in India, where a user reported the explicit deepfake, was automatically closed because it was not reviewed within 48 hours. When the same user appealed the decision, the case was also automatically closed and the content remained up. That user then appealed to the board.

A Meta spokesperson confirmed that both pieces of content chosen by the board have been taken down, and said the company will “implement the board’s decision once it has finished deliberating.”

Concerns about the spread of explicit deepfakes have grown in recent months as AI has become more advanced and pervasive.

In January, the spread of explicit AI-generated images of Taylor Swift prompted lawmakers and the White House to push for action to curb deepfake porn.


Copyright 2023 Nexstar Media Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.
