The shocking hate-fueled violence and open displays of racism and anti-Semitism flaring up across the country have rightly shaken Americans in recent weeks. How could Nazi and white supremacist groups form, grow and erupt in this way in our neighborhoods and communities?
The internet's incredible power to connect these groups and amplify their reach is one reason they have surged more quickly today than in the 1990s, the last time domestic terrorist groups posed a significant threat. The Southern Poverty Law Center now tracks 917 domestic terror and hate groups, most of which use the internet to connect, organize, plot and operate.
In the wake of Charlottesville, the technology giants who provide these platforms have moved swiftly. Google and GoDaddy have cut off access to the Daily Stormer, the Nazi website that promoted the Charlottesville “Unite the Right” rally, while Facebook quickly pulled down the rally’s event page. Unfortunately, that same moral clarity and urgency did little to prevent such despicable activities before they occurred.
Likewise, it has taken too long for major players, particularly Google, to make the moves needed to prevent their platforms from being used to facilitate both domestic and international violent extremism. For years, counterterrorism experts, victim advocates and political leaders have urged Google to do more to identify and remove extremist propaganda and recruitment materials from its networks, with little to show for it.
Some progress was made this spring when major advertisers like AT&T, Johnson & Johnson, and Domino’s pulled their ads from Google’s YouTube service after they appeared alongside violent and extremist content. That move hit Google where it hurts — the advertising business that produces 90 percent of its revenue — and the company quickly rolled out new policies to ensure corporate ads didn’t end up placed on hate sites.
But managing ad placements doesn’t go nearly far enough to address the problem. The real issue is whether Google will finally scrub extremist materials from its products and services, to ensure they are not used as unwitting tools of radicalization and, ultimately, murder.
To its credit, Google seems to recognize this, and in the wake of recent terror attacks has pledged to do more to find and block access to extremist propaganda. But it is the company’s continued vigilance, once the spotlight of publicity and political pressure moves on, that is even more critical. This pledge cannot go the way of previous promises Google has made over the years.
For example, in 2011, the company paid a $500 million penalty for knowingly placing ads for illegal pharmacies that sold drugs like OxyContin and Ritalin without requiring a prescription. Apparently, even after Google learned these pharmacies weren’t following the law, it “continued to accept their money and assisted the pharmacies in placing ads and improving their [websites].”
Likewise, creative artists have complained for years that Google has failed to live up to its promises to battle piracy on YouTube, and the problem keeps growing, to the point where economists claim it costs artists a billion dollars a year. Rather than look for a comprehensive solution, Google has engaged in an endless game of “whack-a-mole” in which every time one pirate link is taken down, another pops up.
This is the exact same problem counterterrorism experts have been warning of for years: “An [extremist] website gets knocked down and it comes back exactly the same in a few days or hours” and even if a site is “flagged and removed [it’s too late] to stop countless copies popping up elsewhere.”
Perhaps events in Charlottesville can propel more urgent and timely action going forward. In the past, when the stakes have been high enough and the risk of liability for failing to act has been clear, the company has shown it can clean up its network. Google received wide praise for its efforts to combat child pornography, putting in place a wide range of tools to ensure its services aren’t used to facilitate this crime.
Countering online extremism requires the same depth of commitment and dedication of resources. The current approach simply shuttles the problem around the internet but does little to actually stop it. To end this cycle, Google must develop better tools to automatically block or delist terror sites, both foreign and domestic, as well as extremist propaganda.
While some remain skeptical that the company will prioritize this effort, or take any steps that shut off advertising revenue streams that are the heart of its business, we should recognize that Google’s leaders have taken an important first step: acknowledging the problem and promising to address it.
If they follow through, this is a great opportunity for one of the most important consumer brands in the world to earn back trust and win credibility. And if they don’t, it will fall to regulators, consumers and potentially even victims to hold those leaders, and the company itself, accountable.
Mark R. Jacobson (@MarkOnDefense) is an associate professor at the Walsh School of Foreign Service at Georgetown University and an expert on the use of propaganda and disinformation during the Cold War. He has served on the staff of the Senate Armed Services Committee and at the Pentagon as an advisor to the secretary of Defense and secretary of the Navy. The views he represents are his own.
The views expressed by contributors are their own and are not the views of The Hill.