Technology

Facebook researchers say findings on racial bias were ignored by superiors

Several Facebook employees say their concerns about potential racial bias in the automated account removal system for Instagram were ignored by superiors in 2019.

The account removal system, designed to remove bullying and other problematic content from the company’s platforms, was introduced to Facebook-owned Instagram even though researchers noted that users whose activity suggested they were Black were about 50 percent more likely to have their accounts disabled than users the app perceived to be white, two current and one former employee told NBC News.

When their findings were reported to managers, the employees said they were told to cease any further research into the racial bias of the system and to not share the information with their colleagues.

More than half a dozen other current and former Facebook employees told NBC News that management at the company had repeatedly dismissed internal research regarding racial bias in moderation practices.

Facebook’s vice president of growth and analytics, Alex Schultz, pushed back, saying that the company hadn’t “ignored research.”

“In this specific case we have put additional standards to ensure we approach the work of analyzing bias in a rigorous and ethical way,” Schultz told The Hill in a statement.

“There will be people who are upset with the speed we are taking action,” Schultz said, adding that Facebook has “massively increased our investment” in comprehending hate speech and any algorithmic bias.

Facebook spokeswoman Carolyn Glanville told NBC: “We are actively investigating how to measure and analyze internet products along race and ethnic lines responsibly and in partnership with other companies.”

Facebook and its CEO, Mark Zuckerberg, have frequently been criticized over the amount of hate speech and misinformation on the platform and over the company’s efforts to moderate content.

Updated at 9:58 p.m.