
Recruiting terrorists: We’re losing the fight against online extremism – here’s why

The past few months have been challenging for social media companies, as a continuous stream of revelations of internet misuse has dominated the news – privacy scandals, special user-data deals with Chinese companies, antitrust violations, terrorists recruiting and inciting violence, fake news, and election meddling, to name a few.

Members of the U.K. Parliament recently called Google-owned YouTube “utterly hopeless” over its handling of extremist and violent videos. A frustrated European Commission insisted that internet companies like Google and Facebook must remove terrorist and other illegal content within an hour of being notified or face legislation forcing them to do so. The exasperated Germans recently began enforcing a tough hate speech law that can slap these offending multi-billion-dollar corporations with heavy fines.

Tech companies rolled out big numbers earlier this year to try to convince skeptical lawmakers and advertisers that they take the issue of extremist content seriously. Facebook announced that it had removed 1.9 million terrorist-related posts in the first quarter of 2018, having previously claimed that it was removing 83 percent of known terror content within one hour of upload. YouTube trumpeted the removal of more than 8 million videos during the last quarter of 2017 (the vast majority of which, however, were spam or adult content).

These seemingly large numbers, however, do not necessarily indicate that there are fewer people accessing extremist and radicalizing content.

During a recent three-month period, the Counter Extremism Project (CEP) conducted a limited study using its own video-matching technology and YouTube’s API to identify the presence on YouTube of a small sample of 229 previously identified ISIS-generated videos (just a fraction of the trove of extremist material available). The goal was to better understand the distribution and prevalence of extremist material online. Over the course of the three-month period, and based on CEP’s narrow research parameters, this is what we learned (a rough sketch of how this kind of matching can work follows the list):

  • No fewer than 1,348 ISIS videos were uploaded to YouTube in a three-month period, garnering more than 163,000 views.
  • 91 percent of the videos were uploaded more than once.
  • 76 percent of the videos remained on YouTube for less than two hours, but still generated a total of 14,801 views.
  • 278 different YouTube accounts were responsible for these 1,348 uploads; one account alone uploaded as many as 50 videos.
  • 60 percent of accounts remained active even after videos they posted had been removed for content violations.

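To make the methodology concrete, here is a minimal Python sketch of this kind of monitoring loop: poll YouTube’s public Data API for candidate uploads, then check each candidate against a library of fingerprints of known extremist videos. CEP’s actual video-matching technology is proprietary, so the API key, search query, and fingerprint function below are illustrative assumptions, not CEP’s implementation.

    import hashlib

    from googleapiclient.discovery import build  # pip install google-api-python-client

    API_KEY = "YOUR_API_KEY"  # assumption: a standard YouTube Data API v3 key

    # Fingerprints of the previously identified reference videos. A real
    # system would derive these from robust (perceptual) hashing of video
    # frames, so that re-encoded copies still match.
    KNOWN_FINGERPRINTS = set()

    def find_candidate_videos(query):
        """Yield (video_id, title, channel) for recent search results."""
        youtube = build("youtube", "v3", developerKey=API_KEY)
        response = youtube.search().list(
            q=query, part="id,snippet", type="video", maxResults=50
        ).execute()
        for item in response.get("items", []):
            yield (
                item["id"]["videoId"],
                item["snippet"]["title"],
                item["snippet"]["channelTitle"],
            )

    def fingerprint(video_id):
        """Stand-in fingerprint: hashing the ID only matches exact re-posts;
        a real matcher would hash the video content itself."""
        return hashlib.sha256(video_id.encode()).hexdigest()

    def scan(query):
        for video_id, title, channel in find_candidate_videos(query):
            if fingerprint(video_id) in KNOWN_FINGERPRINTS:
                print(f"match: {video_id} ({title!r}) uploaded by {channel}")

The key design choice is the fingerprinting step: matching on video IDs alone misses re-uploads, which is exactly the behavior the numbers above document, so any serious matcher has to hash the underlying content rather than the identifier.
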
Our findings reveal that YouTube’s combination of automated and manual review to identify and remove terrorist content is failing to effectively moderate hate speech on its platform. Additionally, YouTube’s promise to take action against accounts that repeatedly violate its terms of service is simply not being enforced.

Each day, too much dangerous material is uploaded; it remains online for too long; and even when it is eventually removed, it quickly reappears, meaning that this content is effectively always available as a pernicious radicalizing force.

Measuring the efficacy of countering online extremism by the number of takedowns and the time to takedown does not tell the complete story. We should measure efficacy by how many views violent or radicalizing content garners, how many times it is uploaded and allowed to remain online, and how aggressively accounts are removed after clear violations. ISIS material, like other propaganda, is posted to influence opinions and actions; a larger audience raises the possibility that someone will commit an act of terrorism. Setting standards for removal times is a good first step, but lawmakers should also consider regulating, and potentially fining, companies based on highly viewed terrorist material and on inaction in removing accounts that repeatedly upload banned content.
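
As a sketch of what such audience-centered measurement could look like, the short Python example below computes the metrics argued for here from a hypothetical log of upload records. The record fields and numbers are assumptions for illustration, not a real data feed.

    from collections import Counter
    from dataclasses import dataclass

    @dataclass
    class Upload:
        video_hash: str      # same hash => same underlying video, re-uploaded
        account: str
        views: int
        hours_online: float  # time between upload and removal

    def efficacy_metrics(uploads):
        """Audience- and persistence-oriented metrics, per the argument above."""
        per_video = Counter(u.video_hash for u in uploads)
        per_account = Counter(u.account for u in uploads)
        return {
            "total_views": sum(u.views for u in uploads),
            "share_reuploaded": sum(1 for n in per_video.values() if n > 1) / len(per_video),
            "mean_hours_online": sum(u.hours_online for u in uploads) / len(uploads),
            "repeat_offender_accounts": sum(1 for n in per_account.values() if n > 1),
        }

    demo = [
        Upload("h1", "acctA", 120, 1.5),   # same video re-uploaded twice
        Upload("h1", "acctA", 300, 26.0),
        Upload("h2", "acctB", 40, 0.5),
    ]
    print(efficacy_metrics(demo))

Regulators could set thresholds on exactly these quantities, total views of banned material and the share of repeat-offender accounts left standing, rather than on takedown counts alone.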

There’s no doubt that social media companies, through major lobbying and public-relations campaigns, now say the right things about the connection between extremist content and terrorist acts – something they previously denied. But when examples of terrorist content – including brutal executions and beheadings – remain pervasive and easily found online, it is time to question whether removal counts and takedown times are inflated metrics.

A study released by the public relations firm Edelman starkly shows that trust in social media companies is falling worldwide and that consumers want legislators and advertisers to push for industry reforms. Insisting on a considerably more aggressive approach to addressing online extremism would be a good next step.

Dr. Hany Farid is the Albert Bradley 1915 Third Century Professor of Computer Science at Dartmouth College and a senior adviser to the Counter Extremism Project.
