I do have double standards, and it doesn’t bother me
One standard for people I think are acting in good faith, one for people I think are acting in bad faith.
I’m not a court of law, or otherwise formally behind a veil-of-ignorance. I don’t owe the world a single standard.
The most basic thing you can do with illegible variables is pattern recognition. Gut feelings/intuition/System 1. And the most basic kind of pattern recognition is classification into friendly and hostile. That’s the most natural kind of double standard. One for friends, one for threats.
Obviously this can go badly wrong very easily. We’re primed to map this standard to family vs not. My tribe vs yours. Generalized ingroup vs outgroup. These are what I call uncritical double standards. Ones you adopt unconsciously, often via imitation of authority figures.
But the dangers of uncritical double standards should not drive you to the opposite end of the spectrum: clueless single standards that involve willful self-blinding to illegible context out of a perverse desire to be “consistent” like you’re a court. You’re not a court.
A *critical* double standard is two things: a) A clear but illegible in/out sorting function. Legibility of a gut instinct is a clear sign of uncritical prejudice. b) Reserving the right to choose when you’ll make an effort to explain yourself and when you won’t bother.
On Twitter for example, most people have a gut sense of concern trolling even if they haven’t heard the term. You know when someone is doing it (maliciously or unconsciously). So unless you’re dumb, you engage at your discretion and don’t explain yourself when you don’t.
Yes there are classification errors. Sometimes you apply good-faith rules of engagement to bad-faith people and vice-versa. Correct the error, use it to become more mindful of your critical pattern matching, move on. Don’t agonize over it.
The only real discipline you need is a sense of your own power to hurt others. Outside of institutional due process contexts, it is actually really hard to hurt someone by simply refusing to deal with them. If your “bad faith standard” is disengagement you’re probably fine.
If your bad-faith standard is active (like open hostility, or quietly working to get someone blacklisted or made persona non grata on some social graph) then you have to police it harder. But it’s okay to have standards for picking actual conflicts.
The idea of explainable or justifiable decisions is kinda dumb outside closed contexts. Which is why the explainable AI conversation is both interesting and tedious. When you want to use AI for due process contexts, it’s an interesting challenge to “blind” it to some things.
Blinding is not explainability. You could blind a hiring algorithm to gender, say, by doing statistical testing and removing inputs that provide a gender hint. That still won’t mean decisions are explainable. They’ll simply be demonstrably statistically agnostic to some variable.
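To make the distinction concrete, here’s a minimal sketch of that kind of statistical blinding. Everything here is illustrative and invented for the example (the feature names, the toy data, the correlation threshold); the point is just that you can drop inputs that leak a protected attribute without making the downstream decision any more explainable.

```python
# Hypothetical sketch: "blind" a feature set to a protected attribute by
# testing each input for statistical leakage and dropping the leaky ones.

def correlation(xs, ys):
    """Pearson correlation between two equal-length numeric lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

def blind_features(rows, protected, threshold=0.5):
    """Keep only the features whose correlation with the protected
    attribute stays under the threshold; the rest are dropped."""
    kept = []
    for name in rows[0].keys():
        values = [row[name] for row in rows]
        if abs(correlation(values, protected)) < threshold:
            kept.append(name)
    return kept

# Toy data: 'first_name_femininity' leaks gender; 'years_experience' does not.
rows = [
    {"years_experience": 3, "first_name_femininity": 0.9},
    {"years_experience": 7, "first_name_femininity": 0.1},
    {"years_experience": 7, "first_name_femininity": 0.8},
    {"years_experience": 3, "first_name_femininity": 0.2},
]
gender = [1, 0, 1, 0]  # toy binary encoding of the protected attribute

print(blind_features(rows, gender))  # → ['years_experience']
```

Note that whatever model you then train on the surviving features can still be a black box: you’ve demonstrated statistical agnosticism to one variable, not produced an account of any individual decision.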
Demanding a clear logical account of a decision is silly for almost everything. You shouldn’t expect it of either humans or machines most of the time. A decision is 3 things: input blinding, intuitive classification into regimes, and application of regime-specific standards.
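The three-step model above can be sketched as code. This is a toy rendering under invented assumptions: the blinded fields, the keyword heuristic standing in for an illegible gut call, and the regime standards are all placeholders, not anything prescribed in the thread.

```python
# Hypothetical sketch of the three-step decision model:
# 1) input blinding, 2) intuitive classification into regimes,
# 3) application of regime-specific standards.

BLINDED_INPUTS = {"employer", "follower_count"}  # inputs deliberately ignored

def blind(message):
    """Step 1: input blinding -- drop fields we refuse to condition on."""
    return {k: v for k, v in message.items() if k not in BLINDED_INPUTS}

def classify(message):
    """Step 2: sort into a regime. A crude keyword heuristic stands in
    for what would really be an illegible gut call."""
    text = message.get("text", "").lower()
    return "bad_faith" if "just asking questions" in text else "good_faith"

STANDARDS = {
    "good_faith": "engage and explain",
    "bad_faith": "disengage, no explanation owed",
}

def decide(message):
    """Step 3: apply the standard belonging to the chosen regime."""
    return STANDARDS[classify(blind(message))]

print(decide({"text": "Just asking questions...", "employer": "BigCo"}))
# → disengage, no explanation owed
```

The only step that is even candidate-explainable is the blinding; the regime sort is exactly the part you shouldn’t expect a clean logical account of.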