A single standard is a peculiar thing. It’s a feature of due processes that are purposely designed to be blind to context in specific ways for specific reasons. Context is a huge bucket of illegible things when judging good/bad faith — identity, trust, intentions, circumstances.
If your bad-faith standard is active (like open hostility, or quietly working to get someone blacklisted or made persona non grata on some social graph), then you have to police it harder. But it’s okay to have standards for picking actual conflicts.
The idea of explainable or justifiable decisions is kinda dumb outside closed contexts. Which is why the explainable AI conversation is both interesting and tedious. When you want to use AI for due process contexts, it’s an interesting challenge to “blind” it to some things.
Blinding is not explainability. You could blind a hiring algorithm to gender, say, by doing statistical testing and removing inputs that provide a gender hint. That still won’t mean decisions are explainable. They’ll simply be demonstrably statistically agnostic to some variable.
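The statistical-blinding idea above can be sketched in a few lines: screen each candidate input for correlation with the sensitive attribute and keep only the ones below a threshold. This is a minimal illustrative sketch, not a real fairness library; the function names, the data layout, and the 0.3 cutoff are all assumptions.

```python
def pearson(x, y):
    """Plain Pearson correlation between two equal-length numeric lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    if vx == 0 or vy == 0:
        return 0.0
    return cov / (vx * vy) ** 0.5

def blind_features(rows, sensitive_key, threshold=0.3):
    """Return the feature names whose absolute correlation with the
    sensitive attribute stays under `threshold` (a hypothetical cutoff)."""
    keys = [k for k in rows[0] if k != sensitive_key]
    s = [row[sensitive_key] for row in rows]
    kept = []
    for k in keys:
        x = [row[k] for row in rows]
        if abs(pearson(x, s)) < threshold:
            kept.append(k)  # weak gender signal: safe to keep
    return kept

# Toy usage: "proxy" leaks gender exactly, "noise" is independent of it.
rows = [{"gender": g, "proxy": g, "noise": n}
        for g, n in zip([0, 0, 1, 1] * 5, [0, 1, 0, 1] * 5)]
print(blind_features(rows, "gender"))  # → ['noise']
```

Note that this demonstrates the point in the tweet: the surviving model is *statistically agnostic* to gender, but nothing about the screening step makes any individual decision explainable.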
Demanding a clear logical account of a decision is silly for almost everything. You shouldn’t expect it of either humans or machines most of the time. A decision is 3 things: input blinding, intuitive classification into regimes, and application of regime-specific standards.
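That three-part structure can be written down as a tiny pipeline, which also shows where the illegible part lives: the classifier in the middle can be an opaque intuition, while the blinding and the per-regime standards stay inspectable. A minimal sketch, with all names and the example rules invented for illustration:

```python
def decide(case, blinded_keys, classify, standards):
    """Three-part decision: (1) blind inputs, (2) classify into a regime,
    (3) apply that regime's standard. `classify` may be opaque."""
    visible = {k: v for k, v in case.items() if k not in blinded_keys}  # 1. blinding
    regime = classify(visible)                                          # 2. regime classification
    return standards[regime](visible)                                   # 3. regime-specific standard

# Hypothetical usage: a bad-faith regime gets a harsher standard.
classify = lambda c: "bad_faith" if c["openly_hostile"] else "good_faith"
standards = {
    "bad_faith": lambda c: "police harder",
    "good_faith": lambda c: "engage on the merits",
}
case = {"gender": 1, "openly_hostile": True}
print(decide(case, {"gender"}, classify, standards))  # → police harder
```

Only step 2 needs to be intuitive or opaque; demanding a "clear logical account" of the whole decision conflates it with the two legible steps around it.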