no, but I see people with far more expertise than me have been explaining it to you
you think debugging the system and having it explain decisions are the same? I think I understand where you're confused now ;)
Steve, nobody is saying that nothing is fixed. There's consensus in the field that ML is not interpretable; there's a research program trying to change that, but not much progress, and good arguments that it never will, because connectionist neural networks are different than symbolic coding.
Replying to @zeynep @marypcbuk and
Hence most programs try things like interpreting outputs, uncovering potential biases, tweaking models to guess at spurious correlations... None of those are interpretation, and none of them rule out the issues I worry about.
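The distinction being argued here can be made concrete. A minimal sketch (my own illustration, not from the thread, with hypothetical hand-picked weights): the same function written as symbolic code, where each line states a readable decision rule, and as a tiny neural network, where the "logic" is smeared across numeric parameters that you can debug but not read as an explanation.

```python
def symbolic_xor(a, b):
    # Symbolic code: the decision logic is legible line by line.
    return 1 if a != b else 0

def neural_xor(a, b):
    # A 2-2-1 network with step activations and hand-picked weights.
    # Any "explanation" of an output lives only in these numbers --
    # inspecting them (debugging) yields no human-readable reason.
    h1 = 1 if (a + b - 0.5) > 0 else 0   # hidden unit, roughly OR
    h2 = 1 if (a + b - 1.5) > 0 else 0   # hidden unit, roughly AND
    return 1 if (h1 - 2 * h2 - 0.5) > 0 else 0

# Both compute XOR, but only one explains itself.
for a in (0, 1):
    for b in (0, 1):
        assert symbolic_xor(a, b) == neural_xor(a, b)
```

In a real trained network the weights are learned rather than hand-picked, which makes the mapping from parameters to reasons even less recoverable; that gap is what the interpretability research program is trying, so far with limited success, to close.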
It really isn’t controversial! But I’m really not sure how to convince you. That ML is not interpretable isn’t a grand claim of mine. It’s a mundane part of the field.
It isn't controversial. James Mickens covers this in his USENIX Security keynote with humor and technical details: https://youtu.be/ajGX7odA87k?t=300
“Some people were wrong about some things.” Okay.