On Sunday, I did a second talk – “Assume Worst Intent” – on how to design online services to reduce the risk of harassment and abuse.
Slides, a transcript, and a video: https://alexwlchan.net/2018/09/assume-worst-intent/ #PyConUK
Replying to @alexwlchan
I really enjoyed this. I really want to hear your thoughts on anti-harassment tools/features themselves used to harass. For example, we know trans women are banned from Twitter at higher rates because terfs (and their allies) report them in droves. What can be done in such cases?
Replying to @o_guest
I’ve been thinking about this for a couple of days. As Laurelai said, dethroning the abusers/white supremacists is pretty key – if you don’t have buy-in to prevent abuse of your tools, you’ve already lost.
Replying to @alexwlchan @o_guest
1) Have good moderation tools, and stick humans in the middle. If you can trip a suspension/ban automatically, somebody will work out how and do it behind your back. [See also: hiring moderators with cultural context and language fluency – Google Translate isn’t good enough.]
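The “humans in the middle” rule can be sketched in a few lines. This is a minimal illustration, not anyone’s real implementation – the names (`file_report`, `moderation_queue`) are invented. The key property is that a report can only ever enqueue work for a moderator; there is no code path from report volume to an automatic suspension:

```python
from collections import deque

# Illustrative sketch: reports only ever queue for human review.
moderation_queue = deque()

def file_report(reporter, reported, reason):
    # Deliberately no auto-ban path: even a flood of reports
    # just grows the queue for a human to work through.
    moderation_queue.append(
        {"reporter": reporter, "reported": reported, "reason": reason}
    )

def next_case():
    """Hand the oldest unreviewed report to a moderator."""
    return moderation_queue.popleft() if moderation_queue else None
```

If mass reporting can only ever slow down review (a real problem, but a different one), it can’t be weaponised to get a target banned outright.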
Replying to @alexwlchan @o_guest
This is an area where Mastodon has room to improve – judging by the description of the moderation UI in https://nolanlawson.com/2018/08/31/mastodon-and-the-challenges-of-abuse-in-a-federated-system/. If overwhelming the moderator with too many reports can trigger a ban, you still lose.
Replying to @alexwlchan @o_guest
2) Twitter makes it easy to spin up sockpuppet accounts and make spurious reports. If you restrict what you can do as a new user (e.g. disable some features for first 24 hours), that can reduce that effect. [Idea originally from
@tsunamino, who’s an A+ follow for this stuff.]
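As a sketch of the 24-hour restriction, assuming a hypothetical `User` record with a creation timestamp (the names and the window length are illustrative, not any platform’s actual policy):

```python
from datetime import datetime, timedelta, timezone

# Invented for illustration: hold back abuse-prone features
# (like reporting) until an account is at least this old.
NEW_ACCOUNT_WINDOW = timedelta(hours=24)

class User:
    def __init__(self, created_at):
        self.created_at = created_at  # timezone-aware datetime

def can_file_report(user, now=None):
    """Brand-new accounts can't file reports yet."""
    now = now or datetime.now(timezone.utc)
    return (now - user.created_at) >= NEW_ACCOUNT_WINDOW
```

The same gate could sit in front of any feature that sockpuppets lean on – reporting, mass-mentioning, DMs to strangers.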
Replying to @alexwlchan @o_guest
3) Stack Overflow has an interesting model for reporting – new users start with 10 “flags” a day, and can earn/lose flags based on the quality of their reports. Extra flags for helpful reports, or a temporary flagging ban for too many declined reports: https://meta.stackexchange.com/a/175405/226928
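The flag-quota idea can be sketched like this. The class name, quota cap, and decline threshold are invented for illustration – Stack Overflow’s real rules (see the link above) are more nuanced:

```python
# Illustrative sketch of a reputation-weighted flag quota.
class Flagger:
    def __init__(self):
        self.daily_quota = 10   # everyone starts with 10 flags/day
        self.declined_streak = 0
        self.banned = False     # temporary flagging ban

    def record_outcome(self, helpful: bool):
        """Adjust the user's quota when a moderator rules on a flag."""
        if helpful:
            self.daily_quota = min(self.daily_quota + 1, 20)
            self.declined_streak = 0
        else:
            self.daily_quota = max(self.daily_quota - 1, 0)
            self.declined_streak += 1
            if self.declined_streak >= 5:  # invented threshold
                self.banned = True
```

The nice property is that spurious mass-flagging is self-limiting: bad-faith reporters burn through their quota and then lose the feature entirely.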
Replying to @alexwlchan @o_guest
4) Look for patterns in the people who are being reported and the people doing the reporting. Is X somebody who consistently makes lots of reports? Maybe it’s time to tap them on the shoulder and ask them to slow down. Is Y somebody who is often reported? Double-check a flag if you’re about to sanction them.
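Even crude counting surfaces these patterns. A minimal sketch, with invented thresholds – a real system would tune them against its own data:

```python
from collections import Counter

def unusual_activity(reports, filed_threshold=3, received_threshold=2):
    """reports is a list of (reporter, reported) pairs.

    Returns users who file unusually many reports, and users who
    receive unusually many – both worth a closer human look.
    """
    by_reporter = Counter(reporter for reporter, _ in reports)
    by_target = Counter(reported for _, reported in reports)
    prolific_reporters = [u for u, n in by_reporter.items()
                          if n >= filed_threshold]
    often_reported = [u for u, n in by_target.items()
                      if n >= received_threshold]
    return prolific_reporters, often_reported
```

Neither list is proof of anything by itself – being reported a lot is exactly what a harassment target looks like – which is why these are signals for a human, not triggers for a sanction.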
Replying to @alexwlchan @o_guest
On small sites, your moderators will spot these patterns as they work through the reports. On larger sites, you want better metrics and moderator UI – for example, when I’m reviewing a report, show me how many other reports the reporter has made, and how many other reports are about the reported person. More context.
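That per-report context is cheap to compute. A sketch, with an invented `Report` record – the field names are illustrative:

```python
from collections import namedtuple

# Invented for illustration; field names are not from any real API.
Report = namedtuple("Report", ["reporter", "reported", "reason"])

def report_context(report, all_reports):
    """Counts a hypothetical moderator view could show next to a report."""
    return {
        "filed_by_reporter": sum(1 for r in all_reports
                                 if r.reporter == report.reporter),
        "about_reported_user": sum(1 for r in all_reports
                                   if r.reported == report.reported),
    }
```

A moderator who sees “this reporter has filed 40 reports today, all about the same person” reads the report very differently than one who sees it in isolation.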
Replying to @alexwlchan @o_guest
End thread, for now. Are those the sort of ideas you had in mind?
Great stuff! Yeah, that's really useful. Thank you. I will think about this myself too. 