Check out our new paper to be presented at '23! We investigate the nature of hate raids on Twitch, how they impacted communities, and how different stakeholders responded: arxiv.org/pdf/2301.03946
Joint work with collaborators 🧵
A wave of "hate raids" hit Twitch streamers in summer 2021; mainstream media described them as highly targeted attacks, often persecuting the platform's Black and LGBTQ+ communities.
In our work, we paired interviews with streamers and bot developers from targeted communities with a large-scale measurement of hate raids across 9.6K popular Twitch channels.
These hate raids had serious personal impacts on the targeted streamers, requiring them to spend significant time recruiting new moderators, adding new moderation tools, and taking steps to protect their personal information and even physical safety.
Streamers largely experienced hate raids in a way that aligned with popular media portrayals. Hate raids share characteristics with what Marwick calls morally motivated networked harassment: they rely on identity conflicts and amplification.
Harassers justify these attacks with what they perceive to be a moral goal (e.g., fighting against “wokeness” on Twitch).
How, then, might attackers have chosen their targets? Among streams using tags (self-assigned categories for streams), we find those with Black, African American, and LGBTQ+ identity tags were disproportionately targeted by hate raids (p < 0.05).
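For readers curious what a disproportionality claim like this looks like in practice: a minimal sketch of one standard approach, a two-proportion z-test comparing raid rates between tagged and untagged channels. The function and the counts below are hypothetical illustrations, not the paper's actual analysis or data.

```python
import math

def two_proportion_z_test(raided_a, total_a, raided_b, total_b):
    """Return (z, two-sided p) for H0: equal raid rates in groups A and B."""
    p1, p2 = raided_a / total_a, raided_b / total_b
    pooled = (raided_a + raided_b) / (total_a + total_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / total_a + 1 / total_b))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF.
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p

# Toy numbers for illustration only: 40 of 200 channels with an identity tag
# raided, vs. 300 of 9,400 channels without one.
z, p = two_proportion_z_test(40, 200, 300, 9400)
print(f"z = {z:.2f}, p = {p:.4f}")
```

With counts this lopsided, the test rejects equal rates at p < 0.05; the paper's own analysis is on real channel data and may use a different test.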
Though we cannot prove this conclusively based on our data, this provides further evidence that attackers used tags as a targeting mechanism.
In our quantitative analysis, we found that 98% of hate raid messages consisted of identity-based attacks (most often anti-Black/antisemitic). However, a significant fraction of targeted streamers did not belong to the groups included in the raids’ hateful language.
Thus, beyond harassment of individuals, an additional goal was likely to garner as much visibility as possible. In many ways, this parallels Phillips' definition of subcultural trolling previously observed on 4chan and similar spaces.
We therefore conclude that hate raids on Twitch occupy a space between subcultural trolling and morally motivated networked harassment, with characteristics of each.
Additionally, we present narratives detailing how members of these target communities were able to rapidly respond to the threat of hate raids by aggregating and developing resources before the platform could, benefiting from an agility that companies often lack.
For example, users were willing to accept initially high false-positive rates on bans administered by community-created tools in exchange for low false-negative rates + speed of iteration, and these tools made a major positive impact!
When designing platforms/studying vulnerable communities, we encourage broader consideration of target communities’ lived experiences, the division of labor between moderators/tool builders/platforms, and different motivations for the perpetrators of hate-based attacks.
No single research method can fully describe a phenomenon as complex as hate raids, but complementary methods with attention to users’ experiences can provide a good start.
