
How violent extremists use livestreams and manifestos to build an audience for hate

An expert explains how mass violence in Buffalo or Christchurch is produced and distributed for a vast global audience.

The man responsible for the mass homicide of Black Americans at a Buffalo, New York, supermarket didn’t enter the store alone. Through a head-mounted camera, the killer brought along an audience as he livestreamed his attack on Twitch, an online streaming platform owned by Amazon.

And it did. In the aftermath of the attack, users around the world shared imagery of the atrocity pulled from the video, along with details from the manifesto, on traditional social media platforms like Twitter and Facebook, as well as in the 4chan forums where the shooter said he had been radicalized over the previous two years.

“These extremist communities are very versed in how media exists and is shared on social media,” O’Connor told Grid. “They are also often determined to use these kinds of incidents as propaganda for further promotion of their ideologies, [and] to glorify the acts of different extremists.”

O’Connor focuses on how far-right content spreads across the internet. In an interview with Grid’s misinformation reporter, Anya van Wagtendonk, he spoke about the difficulty of stemming the flow of post-Buffalo imagery and about the social platforms’ responsibility to address violent, hateful content. He cautioned that the coming days offer fertile ground for online extremists seeking to exploit the attention paid to this mass shooting.

“The content from the day itself, but also the content that the shooter had created or shared online themselves, will be used by extremist communities to prolong interest in this attack, [and] also to amplify the motivations and the extremist ideologies underpinning that,” he said.

This interview has been edited for length and clarity.

The fact they choose livestreams is also likely because live content is very hard for platforms — be it mainstream platforms like Twitch, Facebook or YouTube or, even more so, the alternatives — to monitor and moderate. If you’re determined to … maximize the eyeballs that see your attack, then using something like a livestreaming platform offers you the opportunity of broadcasting a lot before it’s ultimately taken down. But it all comes back to maximizing the impact of the content, creating as much hurt and pain and outrage as possible, and trying to broaden the reach of the message you are trying to spread, and the ideologies and conspiracies underpinning that as well.

And that’s the immediate footage. These videos also get clipped and shared as shorter clips as well. In the immediate aftermath of the attack, there were two main clips that were being shared. … Saturday night and Sunday morning, I was finding versions of these clips being shared on Facebook and on Twitter that had tens of thousands and hundreds of thousands of views.

Judging how platforms performed — whether they succeeded or failed in taking action against the content — we’re still a couple of days away from really being able to inspect and ask that question. We’re still waiting for more information from Twitch about the livestream, the user, the people who viewed it and all these kinds of different things.

Essentially, it’s very difficult because these platforms not only lack guidelines, or the habit of enforcing the guidelines they do have, but they also seem ideologically determined that their spaces not be places where content like that is removed as quickly as possible. And that’s a big challenge for governments, or for researchers like ourselves. It’s also a challenge for the communities that are regularly exposed to and targeted by extremist communities on these websites.

We’ve kind of lived through a decade or more of allowing platforms — mainstream and more alt as well — to kind of set their own terms and find ways to [address] challenges and deal with this kind of problem. And it really hasn’t worked.

But it’s difficult, because this stuff travels in many different ways: as screenshots, or as a WhatsApp video or as a link that was posted on one mirror site that was then shared on Facebook.

Yes, livestreamed terrorist attacks are quite a new phenomenon. But footage of terrorist attacks being shared and disseminated online, and manifestos as well — that is not so new. When terrorists write these manifestos, they do it with the desire that they be shared, to broaden the reach of their attack or the ideologies that support it. They create these documents to be shared and discussed. And by [sharing them], you are doing their work for them.

The danger of these kinds of manifestos at a moment like this, when they have maximum attention, is that they may expose someone to beliefs that they weren’t previously exposed to. That might ... lead someone toward being more exposed to extremist ideologies. So it’s a challenge for society, but also for social media platforms and online spaces, too, to ensure that we don’t unwittingly expose people to extremist ideology.

It just comes back to what [mainstream] platforms can do to limit the exposure … to this kind of material. And then for the more alternative spaces, if they were to root out this kind of behavior and activity on their platforms, that would go a long way, too. But at the same time, removing one website used by extremists will not make extremism disappear. It’s a much wider societal problem that we’re dealing with.
