On the last day of Passover, an extremist attacked a synagogue in San Diego, murdered a woman, and posted a violent, anti-Semitic manifesto online just before the attack.
In March, the Christchurch mosque attacker published a white supremacist screed online and then live-streamed his attack on Facebook.
Sri Lanka on Easter Sunday; Pittsburgh six months ago. Heinous attacks on worshippers in their most sacred places on their most holy days. These and other acts of violence share common origins in hate. Their perpetrators also exploit all-too-common weaknesses in the technology platforms that are so ubiquitous in modern life.
Social media platforms have failed to address the threat of extremist content, proving at best complacent and at worst complicit as their services have become breeding grounds for reaching and radicalizing violent extremists. For too long, the internet has seemed at times like the Wild West, lawless and beyond control.
The tech industry has consistently demonstrated that it is unable or unwilling to quickly and consistently identify and remove extremist content. Sadly, the U.S. Congress has thus far been unwilling to use its authority to hold tech platforms accountable, letting extremist content flourish and endangering American lives in the process.
Meanwhile, governments around the world are recognizing their responsibility to stop the spread of hateful, violent ideologies. Last month, the European Parliament voted to require platforms to remove extremist content within one hour of notification or face hefty fines of $20 million per violation.
The UK has also introduced regulation that goes further and could hold technology executives personally liable if illegal content is not removed promptly.
In the wake of the Christchurch attack, New Zealand and Australia swiftly put in place measures making it illegal for tech companies to host extremist content. Australian legislation even threatens executives with jail time if their companies fail to act.
In a sign of growing international consensus on the issue, New Zealand Prime Minister Jacinda Ardern and French President Emmanuel Macron will co-chair a meeting of global leaders and technology executives later this month on curbing extremist content.
All this, as the United States stands idly by.
With hate crimes on the rise in the United States and extremist groups proliferating, U.S. political leaders are mired in shameful inflexibility and broken, tired partisanship. The Southern Poverty Law Center reports that the number of known hate groups has grown every year since 2014. FBI figures show hate crimes increased 30 percent from 2014 to 2017.
The technology to swiftly identify and remove extremist content is already available. Tech companies have failed to widely and transparently adopt this technology on their own, leaving a clear need for the U.S. Congress to hold executives’ feet to the fire.
Tech companies have had their opportunity to self-regulate and have been ineffective at policing their own networks. After years of resistance, even Facebook CEO Mark Zuckerberg admits there should be regulation.
Yet, recent congressional hearings on online hate speech descended into embarrassing political point-scoring instead of moving to meaningfully consider laws or regulations to crack down on the rise of hate speech on the internet. Americans urgently deserve better, and our elected officials should treat extremist online content for what it is: a national security threat.
Speaker Nancy Pelosi has called for a “new era” of “shared values” between big tech and government, but time is running out for action. Lawmakers face a simple challenge: find the added political will or risk the spread of extremist content that could ultimately put more lives at risk.
David Ibsen is executive director of the Counter Extremism Project, an international policy organization formed by former world leaders and diplomats to combat the growing threat from extremist ideologies. Follow him @dlibsen