Live video of New Zealand shooting puts tech on defensive


Social media giants, including YouTube, Facebook and Twitter, are facing new criticism after they struggled to block livestreamed footage of a gunman shooting worshippers at a mosque in New Zealand.

The episode saw users uploading and sharing clips from the disturbing 17-minute livestream faster than social media companies could remove them.

The companies were already under scrutiny over the rise in online extremist content, but Friday’s troubling incident also underscored big tech’s difficulties in rooting out violent content as crises unfold in real time.

In a live point-of-view video uploaded to Facebook on Friday, the killer shot dozens of worshippers, at one point returning to his car to stock up on weapons.

New Zealand police said the footage, which captured only part of the attack on two separate mosques that left 49 people dead and more than 40 injured, was “extremely distressing” and urged people not to share it.

Critics pounced on tech companies, accusing them of failing to get ahead of the violent video’s spread.

“Tech companies have a responsibility to do the morally right thing,” Sen. Cory Booker (D-N.J.), a 2020 contender, told reporters on Friday. “I don’t care about your profits.

“This is a case where you’re giving a platform to hate,” Booker continued. “That’s unacceptable and should have never happened, and it should have been taken down a lot more swiftly. The mechanisms should be in place to allow these companies to do that.” 

“The rapid and wide-scale dissemination of this hateful content — live-streamed on Facebook, uploaded on YouTube and amplified on Reddit — shows how easily the largest platforms can still be misused,” Sen. Mark Warner (D-Va.) said.

Facebook said it took down the video as soon as it was flagged by the New Zealand Police. But that response suggested artificial intelligence (AI) tools and human moderators had failed to catch the livestream, which went on for 17 minutes.

By the time Facebook suspended the account behind the video, an hour and a half after it was posted, the footage had already proliferated across the internet with thousands of uploads on Twitter, Instagram, YouTube, Reddit and other platforms. 

Critics said companies had failed to prepare for these issues.

“The reality is that Facebook and others have grown to their current monstrous scale without putting guard rails in place to deal with what was predictable harm,” Hany Farid, a computer science professor at Dartmouth College, said in a statement to The Hill. “Now they have the unbearably difficult problem of going back and trying to retrofit a system not designed to have guard rails to deal with what is a spectacular array of troubling content.” 

More than 10 hours after the attack, the video could still be found through searches on YouTube, Twitter and Facebook, even as those companies said they were working to prevent the footage from spreading.

YouTube by Friday evening had removed thousands of videos related to the incident. Facebook and Twitter did not share numbers, but both said they were working overtime to remove the content as it appeared. 

“Shocking, violent and graphic content has no place on our platforms, and we are employing our technology and human resources to quickly review and remove any and all such violative content on YouTube,” a YouTube spokesperson said. “As with any major tragedy, we will work cooperatively with the authorities.” 

Tech firms have been using AI tools to identify and remove subsequent uploads of the video, but the process has been complicated by users posting slightly altered versions. When users crop or otherwise manipulate the footage, the copies become harder for AI tools to detect.

“Since the attack happened, teams from across Facebook have been working around the clock to respond to reports and block content, proactively identify content which violates our standards and to support first responders and law enforcement,” Mia Garlick, Facebook’s director of policy for Australia and New Zealand, said in a statement. 

Friday’s attacks are not the first time violence has been livestreamed.

In February 2017, two people, including a 2-year-old boy, were shot and killed during a livestream on Facebook. Later that year, Facebook Live captured a gunman shooting a 74-year-old man in Cleveland. 

Facebook in May 2017 announced that it was hiring 3,000 more content moderators to deal with the issue of graphic video content, a move that Mary Anne Franks, a law professor at the University of Miami, said amounted to “kicking the can down the road.” 

She said Facebook will now have to answer questions about whether its Facebook Live product should have been rolled out in the first place, given the opportunities it posed for violent extremists. 

“We need to be having the conversation about whether or not Facebook Live should exist at all,” Franks told The Hill. “If they can’t get it to a place where it’s safe, then they shouldn’t have the product.” 

While Facebook has policies against violent video, tracking and removing content being uploaded in real time is complicated. The company relies on a mixture of AI and human content moderators to flag and remove live footage, and both methods come with a set of serious challenges. 

AI technology at this point is not sophisticated enough to flag all the live videos that depict violence and death, Farid said. And the thousands of human moderators must sort through billions of Facebook uploads per day, often being exposed to disturbing content.

“Human moderators are being exploited … monetarily and in terms of psychological impact,” Franks said. “They’re not an answer to address this problem.” 

The issue extends beyond live videos, raising questions about whether the platforms are doing enough to cut off extremist content as it emerges. 

“We’re seeing neo-Nazis deliberately using these platforms to spread messages of hate, to incite others to violence, and they’re using it with language that is deliberately designed to avoid content filters,” extremism researcher and data scientist Emily Gorcenski told The Hill.

“We see the major social media platforms and tech companies are very slow to moderate content, and they are overly permissive of the type of speech that is leading to the kinds of radicalization that we’re seeing,” she added. 

Hours before the shooting, the suspect apparently posted a manifesto on Twitter and announced his intention to livestream the mass shooting on 8chan, a fringe message board that he frequented. 

New Zealand police confirmed the suspected gunman had penned the white nationalist, anti-immigrant screed, which ran more than 70 pages.

Twitter deleted the account in question hours after the shooting took place, and it has been working to remove reuploads of the video from its service.

Facebook, Twitter, YouTube and other leading social media platforms have been grappling with how to handle extremist and white nationalist content for years, particularly as anti-immigrant sentiment has spiked in the U.S. and Europe. The companies have struggled to draw the line between freedom of speech and incendiary propaganda that has the potential to radicalize users.

In the U.S., because of Section 230 of the Communications Decency Act, the platforms are not held legally liable for what users post. Tech advocates credit that law with empowering the internet, but some lawmakers have questioned whether it should be changed.

“There’s no question that we are seeing the propagation of hate and extremism online, and what we’re also seeing is the social media companies twisting themselves into pretzels trying to figure out how to deal with some of this as it ensues,” Robert McKenzie, a director and senior fellow with New America, told The Hill. 

Gorcenski, the data scientist, said pressure on tech would only grow.

“It’s a very easy decision to censor a video that shows 50 people murdered,” she said.

“What is a much more difficult and bold decision is to actively and aggressively deplatform the people who are sharing those messages and promoting that level of violence and hatred,” she added. 
