The most basic thing you can do with incoming information is pattern recognition. Gut feelings, intuition, System 1. And the most basic kind of pattern recognition is classification into friendly and hostile. That’s the most natural kind of double standard. One for friends, one for threats.
Obviously this can go badly wrong very easily. We’re primed to map this standard to family vs not. My tribe vs yours. Generalized ingroup vs outgroup. These are what I call uncritical double standards. Ones you adopt unconsciously, often via imitation of authority figures.
But the dangers of uncritical double standards should not drive you to the opposite end of the spectrum: clueless single standards that involve perverse self-blinding to illegible context out of a desire to be “consistent” like you’re a court. You’re not a court.
A *critical* double standard is two things: a) A clear but illegible in/out sorting function. Legibility of a gut instinct is a clear sign of uncritical prejudice. b) Reserving the right to choose when you’ll make an effort to explain yourself and when you won’t bother.
On Twitter, for example, most people have a gut sense of concern trolling even if they haven’t heard the term. You know when someone is doing it (maliciously or unconsciously). So unless you’re dumb, you engage at your discretion and don’t explain yourself when you don’t.
Yes there are classification errors. Sometimes you apply good-faith rules of engagement to bad-faith people and vice-versa. Correct the error, use it to become more mindful of your critical pattern matching, move on. Don’t agonize over it.
The only real discipline you need is a sense of your own power to hurt others. Outside of institutional due process contexts, it is actually really hard to hurt someone by simply refusing to deal with them. If your “bad faith standard” is disengagement you’re probably fine.
If your bad-faith standard is active (like open hostility, or quietly working to get someone blacklisted or made persona non grata on some social graph), then you have to police it harder. But it’s okay to have standards for picking actual conflicts.
The idea of explainable or justifiable decisions is kinda dumb outside closed contexts. Which is why the explainable AI conversation is both interesting and tedious. When you want to use AI for due process contexts, it’s an interesting challenge to “blind” it to some things.
Blinding is not explainability. You could blind a hiring algorithm to gender, say, by doing statistical testing and removing inputs that provide a gender hint. That still won’t mean decisions are explainable. They’ll simply be demonstrably statistically agnostic to some variable.
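The statistical-testing idea above can be sketched in code. This is a minimal illustration, not any real hiring system: the feature names, the toy data, and the 0.8 correlation threshold are all invented for the example. The audit label is used only to test features for leakage, never as a model input.

```python
import math

# Hypothetical candidate data: each row is (features, gender label).
# Features: (years_experience, github_stars, hobby_signal) -- the last
# one is a stand-in for an input that leaks gender. Names are illustrative.
candidates = [
    ([5.0, 120.0, 0.90], 1),
    ([3.0,  80.0, 0.80], 1),
    ([6.0, 150.0, 0.10], 0),
    ([4.0,  90.0, 0.20], 0),
    ([7.0, 200.0, 0.85], 1),
    ([2.0,  60.0, 0.15], 0),
]

def correlation(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0

def blind_features(rows, labels, threshold=0.8):
    """Return indices of features whose correlation with the protected
    attribute stays below the threshold -- the ones safe to keep."""
    keep = []
    for j in range(len(rows[0])):
        col = [r[j] for r in rows]
        if abs(correlation(col, labels)) < threshold:
            keep.append(j)
    return keep

rows = [r for r, _ in candidates]
labels = [g for _, g in candidates]
kept = blind_features(rows, labels)
print(kept)  # -> [0, 1]: the hobby_signal column is dropped as a gender proxy
```

Note that the surviving features are statistically agnostic to the label in this sample, but nothing about the resulting decisions becomes explainable; that is exactly the thread's point.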
Demanding a clear logical account of a decision is silly for almost everything. You shouldn’t expect it of either humans or machines most of the time. A decision is 3 things: input blinding, intuitive classification into regimes, and application of regime-specific standards.
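The three-part decision model above can be sketched as a tiny pipeline. Everything here is an invented illustration: the regime names, the blinded field, and especially the keyword check, which is a crude legible proxy standing in for what the thread says is an illegible gut classifier.

```python
def blind(message):
    """Step 1: input blinding -- drop fields the decision shouldn't see.
    (The blinded field name is a made-up example.)"""
    return {k: v for k, v in message.items() if k != "sender_follower_count"}

def classify(message):
    """Step 2: intuitive classification into regimes. A real classifier is
    an illegible gut call; a keyword match stands in for it here."""
    return "bad_faith" if "just asking questions" in message["text"] else "good_faith"

def decide(message):
    """Step 3: apply the regime-specific standard."""
    visible = blind(message)
    regime = classify(visible)
    if regime == "bad_faith":
        return "disengage"          # no explanation owed
    return "engage_and_explain"     # good-faith rules of engagement

msg = {"text": "just asking questions here", "sender_follower_count": 12}
print(decide(msg))  # prints "disengage"
```

The structure makes the thread's earlier point concrete: only step 1 is auditable; steps 2 and 3 are discretionary, and the passive bad-faith standard (disengagement) is the one that needs the least policing.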
Replying to @vgr
If you haven't seen it, for hiring decisions, including blinding (down to voice modulation in coding interviews), take a look at the @interviewingio blog, amazing data there. Or ask @alinelernerLLC
-
Thanks for calling that out. Here's the post in question: https://blog.interviewing.io/we-built-voice-modulation-to-mask-gender-in-technical-interviews-heres-what-happened/