Facebook offers insight into how the NZ shooter video spread

[Image: Facebook icon displayed on a smartphone in front of a screen showing the full logo]

Facebook has released more statistics about how the New Zealand shooter video spread so prolifically across social media platforms, including its own.

While it cannot disclose every detail it holds, owing to its co-operation with New Zealand law enforcement's investigation, it has revealed some figures that shed light on how the video was redistributed.

Facebook said that fewer than 200 users viewed the shooter's live broadcast, and that no user reported it until 12 minutes after the broadcast ended. Including views of the live stream, the video was watched around 4,000 times in total before being removed from the site.

The original Facebook Live broadcast was removed and hashed, so that any further shares of it would be picked up by automatic content recognition technology and blocked at the point of upload.
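
Facebook hasn't published the specifics of that matching pipeline, but hash-based upload blocking generally works along these lines. The sketch below is a minimal illustration, not Facebook's actual system: it assumes per-frame perceptual hashes computed with the open-source imagehash library, and the threshold and function names are invented for the example.

    # Minimal sketch of hash-based upload blocking (illustrative only,
    # not Facebook's system). A banned video contributes per-frame
    # perceptual hashes to a blocklist; uploaded frames that match
    # closely are rejected.
    import imagehash                    # pip install imagehash pillow
    from PIL import Image

    HAMMING_THRESHOLD = 8               # max differing bits still counted as a match

    blocklist: set[str] = set()         # hex digests of banned frame hashes

    def register_banned_frame(frame: Image.Image) -> None:
        """Add one frame of a banned video to the blocklist."""
        blocklist.add(str(imagehash.phash(frame)))

    def is_blocked(frame: Image.Image) -> bool:
        """True if the frame is perceptually close to any banned frame."""
        h = imagehash.phash(frame)
        return any(h - imagehash.hex_to_hash(banned) <= HAMMING_THRESHOLD
                   for banned in blocklist)

Matching on a small Hamming distance rather than exact equality is what lets a filter like this catch re-encoded or recompressed copies of the same frames.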

"Some variants such as screen recordings were more difficult to detect, so we expanded to additional detection systems including the use of audio technology," said Chris Sonderby, vice president and deputy general counsel at Facebook.

Facebook said that the video was made available to download via a file-sharing link posted to the website 8chan before the social network was alerted to its presence on its own platform.

Facebook has since been in contact with the member organisations of the Global Internet Forum to Counter Terrorism (GIFCT) and has shared more than 800 visually distinct videos of the attack, as well as details of its enforcement approaches.

"We will continue to work around the clock on this and will provide further updates as relevant."

Google's AI tech strained by NZ shooter video - 18/03/2019:

Google's AI has been effective in the past, prolifically removing copyrighted and extremist material from YouTube, but the New Zealand mosque shooter's self-recorded video seems to have stymied its algorithms, just as it has those of other social media sites.

The video, first livestreamed on Facebook and then shared across YouTube and Twitter, was easily searchable using basic keywords such as the shooter's name.

Google has previously released statistics on the effectiveness of its machine learning-driven AI, first deployed on YouTube in 2017. At launch, 8% of the videos flagged for violent and extremist content were taken down with fewer than 10 views; by April 2018, more than half of such videos were removed before reaching 10 views.

Google declined to comment to IT Pro on how effective its AI had been this time around, but users of the YouTube platform are appalled by the number of re-uploads the site has let slip through its net.

"Our hearts go out to the victims of this terrible tragedy. Shocking, violent and graphic content has no place on our platforms, and is removed as soon as we become aware of it. As with any major tragedy, we will work cooperatively with the authorities," said a Google spokesperson.

AI is being adopted by businesses everywhere in the name of automation and operational efficiency. From voice assistants to warfare, its applications seem to know no bounds, but cases like this show how much work is still to be done to polish the technology that is so often touted as a replacement for vast numbers of human jobs.

Google encourages users to flag any videos that violate its guidelines, which, it said, should trigger a swift review of the flagged video.

"This video is still circulating online and I urge everyone to stop viewing and sharing this sick material. It is wrong and it is illegal," said Sajid Javid, home secretary in the Daily Express. "Tech companies who don't clean up their platforms should be prepared to face the force of the law.

"Online platforms have a responsibility not to do the terrorists' work for them. This terrorist filmed his shooting with the intention of spreading his ideology. Tech companies must do more to stop his messages being broadcast on their platforms," he added.

It's not difficult to alter a video so that it bypasses a website's content filter, according to Rasty Turek, chief executive officer of Pex, speaking to Bloomberg.

He said that minor changes, such as putting a frame around the video or mirroring it so that it appears to be different, can trick filters into letting it through.
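
That failure mode is easy to reproduce with the same kind of perceptual hash sketched above. The snippet below is a self-contained demonstration on a synthetic stand-in frame, not any platform's real filter: a horizontal mirror or an added border pushes the hash well away from the original, so an exact or tight-threshold match misses the copy.

    # Why the tricks Turek describes work: mirroring or framing a frame
    # shifts its perceptual hash past a tight matching threshold.
    import imagehash
    from PIL import Image, ImageOps

    original = Image.effect_noise((256, 256), 64)          # stand-in frame
    mirrored = ImageOps.mirror(original)                   # left-right flip
    framed = ImageOps.expand(original, border=16, fill=0)  # black frame

    h0 = imagehash.phash(original)
    print("identical copy:", h0 - imagehash.phash(original.copy()))  # 0
    print("mirrored copy: ", h0 - imagehash.phash(mirrored))         # typically large
    print("framed copy:   ", h0 - imagehash.phash(framed))           # typically large
    # Any distance above the filter's threshold lets the altered copy through.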

Training AI to recognise a blacklisted clip is difficult when many altered iterations of the video are re-uploaded. The AI must learn to spot the ways in which the clip is being modified and recognise that it's the same clip that was originally blocked; this, however, takes time.
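
One common mitigation, an assumption here rather than anything Facebook or Google has confirmed, is to pre-compute hashes for the transformations moderators keep seeing, so that cheap edits stop producing unseen hashes:

    # Sketch: widen the blocklist with hashes of predictable variants of
    # each banned frame (mirror, border, slight rotation). Illustrative
    # only; not a disclosed platform method.
    import imagehash
    from PIL import Image, ImageOps

    def variant_hashes(frame: Image.Image) -> set[str]:
        """Perceptual hashes of a banned frame and its cheap variants."""
        variants = [
            frame,
            ImageOps.mirror(frame),                      # horizontal flip
            ImageOps.expand(frame, border=16, fill=0),   # added frame
            frame.rotate(2, expand=True),                # slight tilt
        ]
        return {str(imagehash.phash(v)) for v in variants}

Every new evasion observed in the wild adds another entry to the variants list, so the defence always lags the attack slightly, which is exactly the delay described above.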

The shooter also livestreamed the attack, which meant that catching and removing the footage in real time came down to human moderators, a problem that companies such as Google and Facebook are having difficulty with.

Facebook, in particular, is known for a high turnover rate among its human content moderators, many of whom are diagnosed with mental health problems as a result of overexposure to disturbing images.

Zuckerberg's company also struggled to contain the spread of the disturbing shooter video. It said in a tweet that it removed 1.5 million videos of the terrorist attack, 1.2 million of which were blocked at the point of upload. Despite this, more videos were still on the site 12 hours after the attack, and the remaining 300,000 videos Facebook was aware of, those removed only after slipping past the automatic filtering, make it clear that a 20% failure rate isn't good enough.
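
The 20% figure follows directly from Facebook's own numbers, as a quick check shows:

    # Failure rate implied by the figures in Facebook's tweet.
    removed_total = 1_500_000       # videos removed in total
    blocked_at_upload = 1_200_000   # stopped before they ever appeared
    slipped_through = removed_total - blocked_at_upload        # 300,000
    print(f"failure rate: {slipped_through / removed_total:.0%}")  # 20%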

It also raises the question of how content filters are supposed to cope with the EU's Article 13, which looks increasingly likely to be approved. If the content filters used by the world's biggest tech companies can't reliably stop a terrorist's video from being shared, they may not be dependable enough for other organisations looking to protect themselves from the wrath of the new EU law.

Connor Jones
News and Analysis Editor

Connor Jones has been at the forefront of global cyber security news coverage for the past few years, breaking developments on major stories such as LockBit’s ransomware attack on Royal Mail International, and many others. He has also made sporadic appearances on the ITPro Podcast discussing topics from home desk setups all the way to hacking systems using prosthetic limbs. He has a master’s degree in Magazine Journalism from the University of Sheffield, and has previously written for the likes of Red Bull Esports and UNILAD tech in a career that started in 2015.