Tech companies must act to stop horrific exploitation of their platforms
The recent New Zealand shootings were an immense tragedy, and our hearts and prayers go out to the family, friends and loved ones of the 50 people who lost their lives and to those who were severely injured. As a shocked nation tries to regain a sense of normalcy by making sense of the senseless, social media’s role during the massacre and in its aftermath must be closely scrutinized. It took Facebook 17 crucial minutes to take down the shooter’s live stream. It is taking Facebook, YouTube, Twitter and other social media platforms even longer to stop re-uploads of the attack video, which now total more than 1.5 million removed copies on Facebook alone.
It should come as a surprise to no one – least of all the tech companies – that an extremist would weaponize social media. Live-streaming capabilities have been used by other extremists to document their shootings, rapes and other unspeakable acts of violence. The Counter Extremism Project has also documented the many ways in which terrorists have used Facebook Live to spread propaganda, share recruitment methods and post hateful and threatening messages.
Extremists’ exploitation of social media is not a new problem, and tech companies must take responsibility for how little they have done to prevent it. The sooner they take seriously the removal of online extremism, the sooner their platforms will stop being viewed by extremists as fertile ground for broadcasting their ideology to the world.
Unverifiable claims about artificial intelligence and machine learning, such as those made at the recent House Judiciary Committee hearing, must be backed up by results. Certainly, not every action of a terrorist is predictable or even rational. But by allowing videos of the New Zealand shooting itself, and videos praising it, to go viral, tech companies have shown that their solutions are inadequate and unacceptable.
Proven technology to prevent such failures already exists. Robust hashing, now deployed as eGLYPH, is the technological evolution of PhotoDNA, which has been used by countless organizations to stop online child exploitation. It works by fingerprinting digital content that has already been removed – images, videos and audio – and is sophisticated enough to identify variations of the same content, stopping re-uploads from ever occurring.
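To make the idea concrete, here is a minimal sketch of perceptual ("robust") hashing in Python. It is not the eGLYPH or PhotoDNA algorithm, whose details are not public; it uses a simple difference hash, and the file names and match threshold are placeholder assumptions. The point is only to show how near-duplicate content yields near-identical fingerprints that can be matched against a blocklist of removed material.

```python
# Illustrative sketch of perceptual ("robust") hashing. NOT the eGLYPH or
# PhotoDNA algorithm; a simple difference hash (dHash) for demonstration only.
from PIL import Image  # pip install Pillow

def dhash(path: str, hash_size: int = 8) -> int:
    """Fingerprint an image by comparing the brightness of adjacent pixels."""
    # Shrink to a tiny grayscale grid so minor edits (re-encoding, small
    # crops, colour shifts) barely change the resulting bits.
    img = Image.open(path).convert("L").resize((hash_size + 1, hash_size))
    pixels = list(img.getdata())  # row-major brightness values
    bits = 0
    for row in range(hash_size):
        for col in range(hash_size):
            left = pixels[row * (hash_size + 1) + col]
            right = pixels[row * (hash_size + 1) + col + 1]
            bits = (bits << 1) | (1 if left > right else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Count the bits on which two fingerprints differ."""
    return bin(a ^ b).count("1")

# Hypothetical usage: the file names and the 10-bit threshold are
# illustrative assumptions, not values used by any real platform.
banned_hashes = {dhash("known_removed_frame.png")}
upload_hash = dhash("new_upload_frame.png")
if any(hamming(upload_hash, h) <= 10 for h in banned_hashes):
    print("Near-duplicate of already-removed content: block the re-upload.")
```

In practice, the same matching idea extends to video (hashing sampled frames) and audio, which is what allows re-uploads of already-removed content to be caught at upload time rather than after they spread.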
Further, tech companies must now look past their bottom lines and put their words into action by taking responsibility for what happens on their platforms. Facebook CEO Mark Zuckerberg has claimed that he wants his company to become a force for social good. But those words are meaningless as long as the company’s business model – data and online content – is directly affected by anything that filters or removes that content. While Facebook, YouTube and others claim to have removed millions of videos of the New Zealand shootings, they fail to acknowledge that they were so slow to act that law enforcement had to alert Facebook to what was occurring on its own site, which explains why it took so long to systematically remove the content.
Tech companies’ profits have allowed them to grow to an unprecedented scale, but in the absence of regulation, online extremism and exploitation have run rampant on their platforms. After controversy upon controversy, public officials have grown wise to tech companies’ inaction and rhetoric, signaling an end to the era of self-regulation. In February, a U.K. parliamentary committee released a report excoriating Facebook for its inability to prevent misuse of its platform. In 2018, Germany’s first-of-its-kind NetzDG law came into force, fining tech companies for systematic failures to delete illegal content. And more recently, regulators at the Federal Trade Commission created a task force to more closely monitor tech companies, mirroring European Union efforts to police misbehavior.
Identifying and removing extremist content online has been the tireless mission of the Counter Extremism Project. After years of work, in 2017 we convinced YouTube to take down the lectures and sermons of Anwar al-Awlaki, an American turned terrorist preacher whose addresses were used by Al Qaeda in the Arabian Peninsula to recruit followers.
We understand that the threat of online extremist ideology will never go away, but that does not mean incompetent or ineffective action is acceptable, or that more cannot be done to stop tragedies such as the New Zealand shooting from happening again. Tech companies must do more to stop these public safety and national security crises. It is a business imperative to do so, and to do so now – with each horrific exploitation of their platforms, the public outrage and calls for action will grow stronger and more decisive.
Hany Farid is the Albert Bradley 1915 Third Century Professor of Computer Science at Dartmouth College and a senior adviser to the Counter Extremism Project. Mark D. Wallace, a former U.S. ambassador to the United Nations for management and reform, is the CEO of the Counter Extremism Project.