
Move slowly and fix things: A better way to combat social media posts that incite violence

It’s déjà vu all over again. Facebook this week announced that it will remove posts calling for armed protests over false claims of a rigged election.

Except it’s January 2023, not January 2021, and rioters are calling for the reinstatement of former Brazilian President Jair Bolsonaro, not former U.S. President Donald Trump. Unfortunately, according to our just-published research on Facebook’s content takedowns following the Jan. 6 attack, this approach – removing content days after it’s been posted – does little to stop the spread of harmful messaging inciting violence.

In our study, we analyzed more than 2 million posts from 2,551 U.S. news sources during and after the 2020 elections, including more than 10,000 posts that were eventually removed. We estimated that, when posts were removed, the removals came late enough in the posts’ lifecycles to deter only about 20 percent of their expected lifetime engagement. This is because posts tend to get the most engagement (people commenting, “liking” or otherwise interacting with information) soon after posting.
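
The arithmetic behind that estimate can be sketched in a few lines. The example below is our own illustration, not the study’s actual methodology: it assumes engagement arrives at an exponentially decaying rate with a hypothetical six-hour half-life and asks how much of a post’s expected lifetime engagement a removal at a given hour can still prevent.

```python
import math

def share_of_engagement_prevented(removal_hours, half_life_hours=6.0):
    """Fraction of a post's expected lifetime engagement that a removal
    at `removal_hours` after posting can still prevent, assuming engagement
    arrives at an exponentially decaying rate (a simplifying assumption)."""
    decay_rate = math.log(2) / half_life_hours
    # With rate r(t) = r0 * exp(-k t), the share of total engagement that
    # arrives after time T is exp(-k T).
    return math.exp(-decay_rate * removal_hours)

# Removing a post one day after posting (with a 6-hour engagement half-life)
# prevents only about 6 percent of its expected lifetime engagement;
# waiting a full week prevents essentially none of it.
for hours in (1, 6, 24, 7 * 24):
    print(f"removal after {hours:>3} h -> "
          f"{share_of_engagement_prevented(hours):.1%} prevented")
```

Under these illustrative assumptions, a removal roughly 14 hours after posting prevents only about 20 percent of a post’s expected lifetime engagement, and a removal a week later prevents essentially nothing.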

In Brazil, the day after the rioters broke into government buildings, a “Comunidade aguas lindas go” post of a Facebook live video that referred to them as “patriots” had already garnered more than 115,000 video views — and was still up on the site. Another video post, also still up, by “Albari Dias O intervencionista” proclaiming, “It was not the people, it was fraud, this time we will take Congress,” had more than 3,000 video views.

All told, we found that Facebook users engaged some 3.8 million times with news content that would later be removed. The most engaging content came from far-right news sources with a reputation for misinformation; the “Dan Bongino” page was the most popular page on the right to have content removed.

Our research also shows that much of the removed content did not come down until a full week after the Jan. 6 riot. By then, the public had already engaged with it almost as much as it ever would have; we estimated that less than 1 percent of future engagement was stopped by these interventions.

Removing content during a crisis, or just after, is like trying to stop a bomb mid-explosion. The charge has already been detonated, and most of the damage has already been done. In a crisis, the more effective approach is to slow everything down.

Algorithms keyed to engagement disproportionately favor harmful content; indeed, they tend to accelerate it. If Facebook put the brakes on all content during a crisis, the slowdown would disproportionately dampen posts that incite violence and other harms. It would also give human reviewers time to conduct expedited reviews and make sophisticated judgments about context and potential for harm, and create space to build and train algorithms that automatically detect content related to the crisis.
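
To see why a uniform slowdown is not neutral in effect, consider a toy model of our own (with made-up numbers, not a description of Facebook’s systems) in which each sharing “hop” multiplies a post’s audience and a platform-wide brake stretches the time each hop takes. Because fast-spreading posts compound over many more hops, the same brake removes far more of their reach within a fixed crisis window.

```python
def reach_after(window_hours, hours_per_hop, spread_factor, seed=100):
    """Toy model: each sharing 'hop' multiplies a post's audience by
    spread_factor. Slowing the platform stretches hours_per_hop, cutting
    the number of hops that fit inside a fixed crisis window."""
    hops = int(window_hours // hours_per_hop)
    return seed * spread_factor ** hops

WINDOW = 24  # hours of crisis coverage we care about (hypothetical)

for label, factor in (("viral incitement-style post", 2.0),
                      ("ordinary post", 1.2)):
    fast = reach_after(WINDOW, hours_per_hop=1, spread_factor=factor)
    slow = reach_after(WINDOW, hours_per_hop=3, spread_factor=factor)
    print(f"{label}: slowdown cuts reach by a factor of {fast / slow:,.0f}")
```

In this toy model, tripling the time per hop cuts the fast-spreading post’s reach by a factor of tens of thousands while trimming the ordinary post’s reach by a factor of less than twenty.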

In the past, Facebook has announced a “break glass” approach to stem the flow of harmful misinformation. Indeed, back in 2020, Facebook spokespeople said the platform had a plan to restrict content if civil unrest and violence erupted following elections. Our research shows that, whatever the good intentions, Facebook’s internal policies weren’t good enough.

We’re far past the point where we can trust social media platforms to police themselves. We need policymakers to create laws and regulations that make platforms accountable to the public. Mandates for increased data transparency would help independent researchers like us dig into the “black box” that is social media and develop strategies to make online spaces safer. Other approaches would give regulators more authority to hold social media platforms to their promises. We can’t afford to keep repeating the same mistakes.

Laura Edelson and Damon McCoy are co-directors of NYU Cybersecurity for Democracy, a research-based, nonpartisan and independent effort to expose online threats to our social fabric — and recommend how to counter them. We are part of the Center for Cybersecurity at the NYU Tandon School of Engineering.


