Palestinian groups accuse Meta of unfairly moderating speech amid Israel-Hamas conflict

Audience members show their support at a special session of the Oakland City Council for a resolution calling for an immediate cease-fire in Gaza, Monday, Nov. 27, 2023, in Oakland, Calif. (AP Photo/D. Ross Cameron)

Several Palestinian advocacy groups are calling on the parent company of Facebook, Instagram and WhatsApp to address long-standing content moderation issues they allege have unfairly restricted Palestinian speech since the outbreak of the Israel-Hamas war in October.

The “Meta: Let Palestine Speak” petition accuses the tech giant of unfairly removing content and suspending or “shadow banning” accounts from Palestinians, while failing to adequately address “incendiary Hebrew-language content.”

The complaints about Meta’s content moderation policies stretch back several years, said Nadim Nashif, the executive director and co-founder of the Palestinian digital rights group 7amleh-The Arab Center for the Advancement of Social Media. 7amleh is leading the Meta petition alongside the digital rights advocacy group Fight for the Future.

Following an earlier outbreak of violence in May 2021 that prompted similar accusations of unfair treatment of Palestinians on Meta’s platforms, the tech giant commissioned an independent due diligence report.

The report found that Meta’s actions during the period of unrest “appear to have had an adverse human rights impact on the rights of Palestinian users to freedom of expression, freedom of assembly, political participation, and non-discrimination.”

Meta agreed to implement many of the recommendations from the report, including developing and deploying “classifiers” for Hebrew “hostile speech.” The classifiers, which use machine learning to detect violating content, had previously existed for Arabic but not Hebrew.
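To give a rough sense of what such a classifier involves, here is a minimal sketch in Python of a text classifier of the kind the report describes. This is not Meta's actual system; the pipeline choice and the training examples are hypothetical, and a production system would be trained on large, human-labeled Hebrew corpora.

```python
# Minimal illustrative sketch of a "hostile speech" classifier.
# The labeled examples below are hypothetical placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled examples: 1 = hostile, 0 = benign.
texts = [
    "example of hostile language",
    "example of a violent threat",
    "a friendly greeting",
    "a neutral news update",
]
labels = [1, 1, 0, 0]

# TF-IDF features feeding a logistic regression is a common
# baseline architecture for moderation classifiers.
classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(texts, labels)

# The model outputs a probability that a new post violates policy;
# downstream systems act on that score.
probability = classifier.predict_proba(["another example post"])[0][1]
print(f"Estimated probability of hostile speech: {probability:.2f}")
```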

However, following the Oct. 7 attack on Israel by the Palestinian militant group Hamas and Israel’s subsequent airstrikes and ground invasion of Gaza, Nashif said Meta’s new Hebrew classifiers appear to have fallen short.

Meta acknowledged the shortcomings of its Hebrew classifiers internally last month, noting that the classifiers were not being used on Instagram comments because the machine learning-based tool did not have enough data to function properly, the Wall Street Journal reported.

Within the Palestinian territories, the tech giant also lowered the certainty threshold, from 80 percent to 25 percent, for an automated system that hides comments potentially violating the company's policies on hostile speech, according to the Journal.

Meta, which lowered the thresholds for several countries throughout the region to lesser extents, reportedly sought to address a surge in hateful content after the Oct. 7 attack.

However, Nashif argued that lowering the threshold produces a “very aggressive content moderation approach,” which results in “lots of false positives” and content “being taken down that should not be taken down.”
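A hedged sketch makes the trade-off concrete. This is not Meta's system, and the confidence scores below are invented for illustration, but it shows why a cutoff lowered from 0.80 to 0.25 hides more violating comments while also sweeping in benign ones:

```python
# Illustrative only: scores are made up to show the trade-off.
# Each tuple is (model confidence that a comment is hostile,
# whether the comment actually violates policy).
scored_comments = [
    (0.95, True),   # clear violation, caught at either threshold
    (0.60, True),   # violation the old 0.80 threshold would miss
    (0.40, False),  # benign comment hidden only under 0.25
    (0.30, False),  # another false positive at the lower cutoff
    (0.10, False),  # benign, left up at either threshold
]

def hidden_comments(threshold):
    """Return the comments an automated system would hide at this cutoff."""
    return [(score, is_violation)
            for score, is_violation in scored_comments
            if score >= threshold]

for threshold in (0.80, 0.25):
    hidden = hidden_comments(threshold)
    false_positives = sum(
        1 for _, is_violation in hidden if not is_violation)
    print(f"threshold={threshold:.2f}: {len(hidden)} hidden, "
          f"{false_positives} false positives")
```

At the 0.80 cutoff, this toy example hides one comment with no false positives; at 0.25, it hides four, two of them benign, the pattern behind the "false positives" Nashif describes.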

Jillian York, the director for international freedom of expression at the Electronic Frontier Foundation (EFF), similarly suggested that the current conflict has resulted in levels of content about Palestine and content removals that are "unprecedented" in scale.

In the wake of the present backlash, she argued that social media companies need to be “more transparent in everything that they’re doing, every step of the way” when it comes to moderating content.

“Basically, tell us what you’re taking down and tell us why you’re taking it down, who asked you to take it down and why,” York said.

She also emphasized the importance of making content moderation “culturally specific,” noting that Arabic is a complex language with multiple dialects across many different countries.

“Meta has not done its due diligence in ensuring that its moderation is culturally specific,” York said. “So the people who are moderating content for Palestine might be from Morocco, for example. They might not have the same dialect, they might not have the same understanding, they might not have the same words.”

“And that’s just the human moderators,” she continued. “Then you have the machine learning, automation and so on.”

Several recent problems with how artificial intelligence (AI)-powered tools on Meta platforms have represented Palestinians have also highlighted how human biases can affect such technology, Nashif noted.

Meta apologized last month after Instagram's auto-translation inserted the word "terrorist" into the bios of some Palestinian users that contained an Arabic phrase, and WhatsApp came under fire earlier this month when an AI-powered tool generated images containing guns in response to prompts such as "Palestinian," "Palestine" or "Muslim boy Palestinian."

“When people were speaking with the company, they were saying, ‘Oh, this is a technical error,’ or, ‘This is a glitch,’ and were not really tackling the issue,” Nashif said. “Now, obviously, this is not a technical error; this is just a reflection of the bias that is there.”

“At the end of the day, this is a machine learning process,” he added. “There are sets of data that they are feeding the machine with, and clearly if those sets of data are biased, clearly the end result will be biased.”
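A toy illustration of that point, again not Meta's pipeline: if the labeled training data disproportionately pairs an otherwise neutral term with the "violating" class, the trained model inherits that association. The data and term below are hypothetical.

```python
# Toy illustration of biased training data producing biased output.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Skewed hypothetical data: the neutral word "exampleterm" appears
# only in posts labeled as violations (label 1).
texts = [
    "exampleterm post one", "exampleterm post two",
    "exampleterm post three",
    "ordinary post one", "ordinary post two", "ordinary post three",
]
labels = [1, 1, 1, 0, 0, 0]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(texts, labels)

# A harmless post is now scored as likely violating purely because
# it contains the over-represented term.
score = model.predict_proba(["exampleterm harmless greeting"])[0][1]
print(f"Probability flagged as violating: {score:.2f}")
```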

In response to a request for comment, Meta pointed to an October statement about its ongoing efforts to respond to the Israel-Hamas war.

“Our policies are designed to keep people safe on our apps while giving everyone a voice,” the company said in the statement. “We apply these policies equally around the world and there is no truth to the suggestion that we are deliberately suppressing voice.”
