You can see what ML systems do, but why they make a given decision can be opaque without a lot of investigation. E.g. search engines showing background-check ads for Black-sounding names like “Jamal” but not white-sounding names like “Bob” turned out to be driven by click-through rate, not an explicitly racist algorithm.
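A minimal sketch of that mechanism, assuming toy names and made-up click rates (not data from the actual ad system): an epsilon-greedy bandit that maximizes clicks ends up showing the background-check ad far more for one name, with no race feature anywhere in the model.

```python
import random

ARMS = ["background_check_ad", "generic_ad"]
# Assumed click probabilities, standing in for historical user behavior.
CLICK_RATE = {
    ("Jamal", "background_check_ad"): 0.08,
    ("Jamal", "generic_ad"): 0.05,
    ("Bob", "background_check_ad"): 0.03,
    ("Bob", "generic_ad"): 0.05,
}

def run(name, rounds=20000, eps=0.1, seed=0):
    """Epsilon-greedy bandit rewarded only by clicks for one queried name."""
    rng = random.Random(seed)
    clicks = {a: 0 for a in ARMS}
    shows = {a: 0 for a in ARMS}
    for _ in range(rounds):
        if rng.random() < eps:
            arm = rng.choice(ARMS)  # explore
        else:
            # exploit: pick the arm with the best observed click-through rate
            arm = max(ARMS, key=lambda a: clicks[a] / shows[a] if shows[a] else 0.0)
        shows[arm] += 1
        if rng.random() < CLICK_RATE[(name, arm)]:
            clicks[arm] += 1
    return {a: shows[a] / rounds for a in ARMS}

for name in ("Jamal", "Bob"):
    print(name, run(name))
# The learned policy shows the background-check ad overwhelmingly for "Jamal"
# purely because the reward signal (clicks) reflects biased user behavior.
```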
Hence most efforts amount to auditing outputs, probing for potential biases, or tweaking models to guess at spurious correlations... None of that is interpretation, and none of it rules out the issues I worry about.
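To make the distinction concrete, here is roughly what such an output audit looks like (a hypothetical `audit_name_swap` helper and a stand-in model, not any real tool): it can surface a disparity, but it says nothing about why the model behaves that way.

```python
def audit_name_swap(model, template, name_pairs):
    """Return output gaps between paired names in otherwise identical inputs."""
    gaps = {}
    for name_a, name_b in name_pairs:
        score_a = model(template.format(name=name_a))
        score_b = model(template.format(name=name_b))
        gaps[(name_a, name_b)] = score_a - score_b
    return gaps

if __name__ == "__main__":
    # Arbitrary scoring function standing in for an opaque model.
    fake_model = lambda text: 0.8 if "Jamal" in text else 0.3
    print(audit_name_swap(
        fake_model,
        "Score the ad relevance for a search about {name}",
        [("Jamal", "Bob")],
    ))
# This detects *that* outputs differ by name; it does not explain the
# internal mechanism -- which is the sense in which it isn't interpretation.
```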
-
This Tweet is unavailable.
-
It really isn’t controversial! But I’m honestly not sure how to convince you. That ML isn’t interpretable isn’t some grand claim of mine; it’s a mundane fact of the field.
