Don’t let Facebook off the hook for Jan. 6

The Jan. 6 committee’s hearings over the past few weeks did crucial work illuminating the players and the plot behind efforts to overturn the 2020 election results. Now I hope lawmakers will take a close look at the technology platform that was at the center of events on Jan. 6: Facebook. 

Numerous reports have documented the pivotal role that Facebook played in spreading the conspiracy theories, calls for violence and far-right militia activity that fueled the mob attack on Congress. The company even ran ads for military gear next to election misinformation by extremist groups — essentially stoking and profiting off the insurrection.

For Facebook, this is a problem that has been building for years. The Tech Transparency Project collected and analyzed more than three years’ worth of news reports about content on Facebook that violated the company’s policies. We found that roughly 70 percent involved violence and incitement, including threats and physical assaults. Another large category included hate speech and other objectionable content.

Beyond showing how Facebook has systematically failed to police its platform, the analysis tracked the company’s responses to the news reports and found that Facebook issued the same cookie-cutter statements over and over again. In short, rather than fixing problems exposed by the media, Facebook resorted to recycled PR.

Few companies have been in hot water for as broad a range of societal harms as Facebook, from allegedly pushing drug content to teens, to fueling the migration crisis at the U.S. southern border, to purportedly fostering genocide in Myanmar. Facebook executives regularly trot out assurances that their artificial intelligence systems are getting better at finding and taking down violating content. But the vaunted A.I. systems have been failing to identify graphic violence for years, and recent events continue to lay bare the deficiencies of Facebook’s content moderation.

Why is it that a company that boasts sophisticated A.I. technology and more than 40,000 safety and security workers continually fails to stop dangerous content? Why does a company with these kinds of resources so often end up relying on small groups of experts to find this material?

It becomes clearer with each passing day that Facebook is unwilling to make the changes and investments required to keep its platform safe, and instead remains focused on growth — even after a calamity like Jan. 6.

When it comes to the insurrection, Facebook has much to answer for. Why didn’t the company immediately implement all the measures recommended by its own researchers, who identified misinformation and daily calls for violence in the platform’s top “civic” groups in the months leading up to the 2020 election? Why did Facebook disband its civic integrity team, which battled election disinformation and other harmful content, in December 2020, even as the “Stop the Steal” movement gained momentum? Facebook said it shared information with federal authorities about potential violence at the Capitol on Jan. 6. When did that happen and what did the company know?

Facebook Chief Operating Officer Sheryl Sandberg infamously tried to shift blame away from her company for Jan. 6, saying the Capitol riot was “largely organized” on other tech platforms. But the dubious spin attempt has only gotten more laughable as evidence has piled up about Facebook’s central role in the events of that day.

The Jan. 6 committee has subpoenaed Facebook parent company Meta and other tech companies as part of its investigation, but Facebook executives have yet to appear on the Jan. 6 hearing schedule. Don’t let Facebook CEO Mark Zuckerberg’s much-discussed pivot to the metaverse distract from the company’s role in turbocharging the violent attempted coup.

Facebook executives continue to mislead the public about the company’s role in the Capitol riot, and the company is undoubtedly hoping Congress will give it a pass. Lawmakers must eventually turn their attention to the role of social media in general – and Facebook in particular – in fomenting the insurrection.

Katie A. Paul is the director of the Tech Transparency Project (TTP), where she specializes in tracking extremism, disinformation and criminal activity on online platforms such as Facebook. Paul also serves as co-director and co-founder of the Antiquities Trafficking and Heritage Anthropology Research (ATHAR) Project and a founding member of the Alliance to Counter Crime Online (ACCO).