
Buffalo gunman clips proliferate on social media following Twitch removal

Following Saturday’s horrific mass shooting in Buffalo, platforms like Facebook, TikTok and Twitter are seemingly struggling to prevent versions of the gunman’s livestream from spreading across their services. The shooter, an 18-year-old white male, attempted to broadcast the entire attack on Twitch using a GoPro Hero 7 Black. The company told Engadget it took his channel down within two minutes of the violence starting.

“Twitch has a zero-tolerance policy against violence of any kind and works swiftly to respond to all incidents,” a Twitch spokesperson said. “The user has been indefinitely suspended from our service, and we are taking all appropriate action, including monitoring for any accounts rebroadcasting this content.”

Twitch’s quick response hasn’t stopped the video from proliferating online, however. According to New York Times reporter Ryan Mac, one link to a screen-recorded copy of the livestream saw 43,000 interactions. Another Twitter user said they found a Facebook post linking to the video that had been viewed more than 1.8 million times, with an accompanying screenshot suggesting the post did not trigger Facebook’s automated safeguards.

A Meta spokesperson told Engadget the company has designated the shooting as a terrorist attack and added the gunman’s footage to a database it says will help it automatically detect and remove copies before they’re uploaded again. The spokesperson added the company’s moderation teams are working to catch bad actors who attempt to circumvent the blocks it has put in place.       

Responding to Mac’s Twitter thread, Washington Post reporter Taylor Lorenz said she found TikTok videos pointing to the accounts and search terms Twitter users could use to find the full video. “Clear the vid is all over Twitter,” she said. We’ve reached out to the company for comment.

“We believe the hateful and discriminatory views promoted in content produced by perpetrators are harmful for society and their dissemination should be limited in order to prevent perpetrators from publicizing their message,” a Twitter spokesperson told Engadget. They added the company was “proactively” working to identify and take action against tweets that violate its guidelines.   

Preventing terrorists and violent extremists from disseminating their content online is one of the commitments Facebook, Twitter and a handful of other tech companies made following the 2019 shooting in Christchurch, New Zealand. In the first 24 hours after that attack, Facebook said it removed 1.5 million videos, but clips of the shooting continued to circulate on the platform for more than a month after the event. The company blamed its automated moderation tools for the failure, noting they had a hard time detecting the footage because of the way it was filmed. “This was a first-person shooter video, one where we have someone using a GoPro helmet with a camera focused from their perspective of shooting,” Neil Potts, Facebook’s public policy director, told British lawmakers at the time.

Update 6:39PM ET: Added comment and additional information from Meta and Twitter.


Author: I. Bonifacic
Source: Engadget
