Facebook, Google and Twitter tried to assure skeptical senators on Wednesday that they are improving their efforts to find and remove violent and hateful content on their platforms.
The social media companies have been sharply criticized over the issue after a spate of mass shootings this year that appeared to be inspired by online extremism and in some cases were even broadcast on the internet.
The three companies sent executives to testify before the Senate Commerce Committee on Wednesday for a hearing on “mass violence, extremism and digital responsibility.”
“In today’s internet-connected society, misinformation, fake news, deep fakes and viral online conspiracy theories have become the norm,” said Sen. Roger Wicker (R-Miss.), the committee’s chairman. “This hearing is an opportunity for witnesses to discuss how their platforms go about identifying content and material that threatens violence and poses a real and potentially immediate danger to the public.”
The executives told lawmakers that they were collaborating with one another and other tech companies on the issue and that they had made strides using artificial intelligence (AI) to detect hateful and violent content.
Facebook has “updated our proactive detection systems and reduced the average time it takes for our AI to find a violation on Facebook Live to 12 seconds — a 90 percent reduction in our average detection time from a few months ago,” said Monika Bickert, Facebook’s vice president of global policy management. “Being able to detect violations sooner means that in emergencies where every minute counts, we can assist faster.”
“Over 87 percent of the 9 million videos we removed [from YouTube] in the second quarter of 2019 were first flagged by our automated systems,” added Derek Slater, Google’s director of information policy. “More than 80 percent of those auto-flagged videos were removed before they received a single view. And overall, videos that violate our policies generate a fraction of a percent of the views on YouTube.”
And Twitter’s director of public policy strategy, Nick Pickles, told the committee that its “proactive measures” account for 90 percent of the suspensions it has carried out under its terrorism policies.
Lawmakers have taken more interest in online extremism in recent weeks following an uptick in high-profile mass shootings. Earlier this month, the owner of the controversial message board 8chan briefed staffers for the House Homeland Security Committee after a shooter killed 22 people at a Walmart in El Paso, Texas.
Police say the shooter posted a racist manifesto on 8chan. And the massacre at two mosques in Christchurch, New Zealand, was recorded by the attacker and broadcast to millions on social media.
Facebook, which has received the brunt of lawmaker criticism, has repeatedly touted changes it has made in recent months to its content policies. It’s also creating a new oversight board to review its enforcement decisions.
The company has also suggested that the government should be more proactive in setting expectations on speech and harmful content for social media companies.
“One of the things that we’re looking to with our dialogue with governments is clarity on actions that governments want us to take,” Bickert said during Wednesday’s hearing. “So we have our set of policies that lays out very clearly how we define things, but we don’t do that in a vacuum. We do that with a lot of input from civil society organizations and academics around the world, but we also like to hear the views from government so we can make sure we’re mindful of all the different safety perspectives.”
But one difficulty for Silicon Valley is that the industry finds itself caught in a partisan crossfire over what steps social media companies should take to clean up their platforms.
Democrats have pushed them to crack down more forcefully on white nationalist extremism, while Republicans have leveled unproven allegations that social media companies are censoring conservative voices.
And despite their promises of improvement in weeding out explicitly violent content, the tech giants are under constant pressure to do more.
Before the hearing, a coalition of civil rights groups released letters to each of the companies pushing them to take a hard line against racist extremism on their platforms.
“Each massacre makes clearer that, while each of your companies has taken some steps to address white nationalism and white supremacy online, those steps are not enough,” the letters read.
“Congress’ acute responsibility to pass common-sense gun safety laws does not excuse corporations from doing all in their power to prevent mass violence,” it continues.
Lawmakers made it clear they expected tech companies to follow through.
“I welcome that you’re doing more and trying to do it better,” said Sen. Richard Blumenthal (D-Conn.) during the hearing. “But I would suggest that even more needs to be done and it needs to be better.”