Dems slam ‘vague explanations’ by tech firms on extremist content

A pair of Democratic lawmakers on Thursday slammed the “vague explanations” offered by tech companies responding to questions from the House Homeland Security Committee about extremist content on their platforms. 

Rep. Bennie Thompson (D-Miss.), the chairman of the committee, and Rep. Max Rose (D-N.Y.) said Facebook, YouTube, Twitter and Microsoft have failed to “properly or fully” provide the committee with specifics about their efforts to remove online terrorist content. 

“The fact that some of the largest corporations in the world are unable to tell us what they are specifically doing to stop terrorist and extremist content is not acceptable,” Thompson and Rose said in a joint statement.

The statement criticizing the companies comes after committee Democrats last month asked Facebook, YouTube, Twitter and Microsoft — members of a counterterrorism effort by tech companies called the Global Internet Forum to Counter Terrorism (GIFCT) — to disclose how much they spend on their efforts to remove extremist content from their platforms.

The committee on Thursday released the full letters from Twitter and YouTube, neither of which included the total amount of money the companies spend on counterterrorism.

Twitter called the request “complex,” while Google-owned YouTube said it would be “difficult and possibly misleading” to assign a dollar amount to its counterterrorism efforts.  

Thompson and Rose pointed out that Facebook had not responded to their questions. 

“We have been and continue to work with the Committee on this issue, which is of utmost importance,” a Facebook spokesperson told The Hill. 

The Homeland Security Committee’s spokesman told The Hill that the committee did not expect a response from Microsoft because “they are not predominantly a social media/internet company.” 

Rose and others had asked for each tech company’s annual budget for counterterrorism-related programs, the number of personnel working solely on those programs and the number of staff who specialize in far-right extremism at each organization. 

Twitter, in its letter, said its global workforce numbers 4,100 people, “a substantial portion of whom are directly involved in reviewing and moderating content on Twitter,” but did not say how many work on counterterrorism specifically. The letter primarily explained Twitter’s policies on terrorism, extremist groups and hateful conduct. 

YouTube shared figures on its takedowns of terrorist content, noting that, in the first three months of 2019, the platform manually reviewed more than 1 million “suspected terrorist videos” and found that fewer than 10 percent of them violated its terrorism policy. 

“Even though the amount of content we remove for terrorism is low compared to the overall amount our users and algorithms flag, we invest in reviewing all of it out of an abundance of caution,” William McCants, Google’s global public policy lead for hate speech and terrorism, wrote. 

McCants said Google’s operating expenses, excluding the cost of revenues, were $50.9 billion in 2018, and that the company spends “hundreds of millions of dollars annually” and has “more than 10,000 people working across Google to address content that might violate our policies, which include our policies against promoting violence and terrorism.” 

Representatives with Google, Facebook, Twitter and Microsoft briefed the House panel on their efforts to take down extremist content online in a closed-door briefing at the end of March. 

Thompson invited the tech companies to come to Capitol Hill and discuss their efforts to crack down on violent extremists following the mass shooting at two New Zealand mosques earlier that month, an attack that was live-streamed online and went viral on all the major platforms. 

Rose and Thompson in their statement Thursday said “broad platitudes and vague explanations of safety procedures aren’t enough.”