Tech giants pressed in House hearing on policing extremist content

Facebook, Twitter and Google defended their efforts to combat extremist content and misinformation online before House lawmakers on Wednesday, but members of the panel walked away unsatisfied with the tech giants’ efforts. 

The hearing came approximately three months after the deadly mosque shootings in Christchurch, New Zealand, which were livestreamed on Facebook and quickly went viral. Footage of the gruesome attack proliferated across all the top social media platforms, with users uploading versions of the video much faster than the companies could take them down. 

The incident, which saw millions of people uploading videos of an apparent white supremacist gunning down worshippers at a mosque, has drawn new attention to the issue of extremist content online. 

Representatives from Twitter, Google and Facebook on Wednesday said they are making efforts to invest more in technology and trained professionals who can deal with bad actors on their platforms. 

But members of the House Homeland Security Committee pushed back on those claims, accusing the platforms of failing to take the issue seriously. 

“They’re going to have to do more,” panel Chairman Bennie Thompson (D-Miss.) told reporters after the hearing, noting that he was dissatisfied with answers on a range of issues.

Rep. Max Rose (D-N.Y.) offered some of the sharpest criticism, saying the tech firms are offering “technocratic” explanations while “people are being killed.” 

Rose specifically criticized the Global Internet Forum to Combat Terrorism, the coalition of top tech companies aimed at preventing the spread of terrorist content. Through the effort, which began in 2017, companies including Facebook, Google and Twitter share information on violent or nefarious actors and content on their platforms.

Rose, who chairs the House Homeland Security Subcommittee on Intelligence and Counterterrorism, called it a “joke of an association.” 

But the hearing veered off track as the hours ticked on, with lawmakers raising concerns about a range of questionable content, including anti-vaccine misinformation and “deep fakes,” or videos that have been altered to make it appear that people are saying things they never said. Most Republicans on the committee brought up concerns that the companies are biased against conservatives, a claim that the representatives flatly denied. 

And the conversation returned multiple times to the issue of freedom of speech and whether the companies should censor voices on their platforms. 

During an exchange about anti-vaccine misinformation on Instagram, Facebook told lawmakers it has been working with top medical organizations to surface “authoritative” content about vaccines rather than remove the anti-vax content entirely. 

“Years ago, required reading I had was the book ‘1984,’” Rep. Debbie Lesko (R-Ariz.) said in response. “And this committee hearing is scaring the heck out of me.” 

“If somebody googles vaccines, the answer was, ‘Oh, we’re going to put over what the person is actually looking for, what we think is best,’” Lesko said. “Who are the people judging what’s best, what’s accurate? This is really scary stuff and really goes to the heart of our First Amendment rights.” 

After the hearing, Thompson told reporters that he is unsure whether the committee will pursue any legislative action, but said it remains an option if the tech giants continue to refuse to cooperate.


