So the Q is: do you think we will have explainable/interpretable AI? I'm of the view that this doesn't seem any more likely than using brain scans to understand humans. ML is classification that works not via rules we wrote (not Symbolic/Minsky-style code). That really is different! +
One can also write a supra-program that looks at results and tags, and reports anomalies or things society deems to be biases. Frequently, the problem with ML may be the mirror it holds up to us.
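The "supra-program" idea could be sketched as a post-hoc audit that runs over a model's outputs. Everything here (the function name, the prediction and group data, the disparity threshold) is hypothetical and only illustrates the shape of such a check:

```python
# Illustrative sketch only: a hypothetical audit program that inspects a
# model's predictions after the fact and flags groups whose outcome rate
# deviates from the overall rate. All names and data are invented.
from collections import defaultdict

def audit_by_group(predictions, groups, favorable=1, threshold=0.1):
    """Flag groups whose favorable-outcome rate differs from the overall
    rate by more than `threshold` (a crude disparity check)."""
    by_group = defaultdict(list)
    for pred, group in zip(predictions, groups):
        by_group[group].append(1 if pred == favorable else 0)
    overall = sum(1 for p in predictions if p == favorable) / len(predictions)
    anomalies = {}
    for group, outcomes in by_group.items():
        rate = sum(outcomes) / len(outcomes)
        if abs(rate - overall) > threshold:
            anomalies[group] = rate
    return overall, anomalies

# Toy example: the model favors 3 of 4 in group "A" but 1 of 4 in group "B".
overall, flagged = audit_by_group(
    predictions=[1, 1, 1, 0, 1, 0, 0, 0],
    groups=["A", "A", "A", "A", "B", "B", "B", "B"],
)
print(overall, flagged)  # → 0.5 {'A': 0.75, 'B': 0.25}
```

As the next reply points out, a check like this only works for group labels we already know to enumerate.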
Only for obvious variables like race/gender, which we know to look for. That's why there is so much reporting on that: familiar ground. But ML will detect and discriminate on things we could not previously detect, and will not even think to check for. No variable list to run against.
I am not going to convince you over Twitter that ML is not adding more variables, in the classic sense.
It's just not the same as, say, adding more variables like heart rate, blood pressure, this and that measurement, and running a regression or applying a formula.
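For contrast, the "classic sense" being described, a hand-picked variable list fed to a regression with readable coefficients, might look like the sketch below. The data and variable names are invented for illustration:

```python
# Illustrative contrast: variables chosen by hand, fit with a linear
# regression, where every coefficient can be read off and interpreted.
# All data here is made up.
import numpy as np

# Columns: heart rate, blood pressure (hypothetical measurements).
X = np.array([[70.0, 120.0],
              [85.0, 140.0],
              [60.0, 110.0],
              [90.0, 150.0]])
y = np.array([1.2, 2.1, 0.9, 2.5])

# Add an intercept column and solve by least squares.
A = np.hstack([np.ones((len(X), 1)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Each coefficient maps to a named variable the modeler chose --
# an explicit, auditable variable list of the kind a deep model lacks.
for name, c in zip(["intercept", "heart_rate", "blood_pressure"], coef):
    print(name, round(float(c), 3))
```

The point of the contrast: here the variable list exists up front and each weight is inspectable, whereas a learned representation has no such list to run against.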
Perhaps. But simply asserting something and then ignoring any possible objection does not make for a strong position.
So that I'm clear: is the assertion you are objecting to this: "ML is different from trad programs or databases, and it creates (for computation) unique challenges to transparency and auditing"? It's good for me to understand because, honestly, I'd have ranked that as mundane.
That’s a truism. The problem is that you can also say this about any new technology. Everything is always different in some important way, and so, really, it’s not a new problem at all. There are now a dozen or two replies to you pointing this out in various ways.
I actually don't think so, but thanks for the clarification. Yes, I'm asserting that there is something qualitatively different about ML than any other new technology—it's not just that it's opaque (or seemingly-magical) to non-experts. It's intrinsically opaque to its experts.+
zeynep tufekci Retweeted Where the Tweets have no name
One counter (1) is that we'll eventually crack this and have interpretable ML (see: https://twitter.com/andrewthesmart/status/1064341779816767488 …). The other counter (2) is that we have used black-box technologies before (they produced behavior we wanted, but we did not know how). (1) maybe. (2) not too many.
Replying to @zeynep @benedictevans
But thanks, it's clearer to me what the disagreement is. I'll see if anyone has a long-form version of what I'm asserting (obviously, I think correctly) and what other historical black-box examples could compare (and also set out their limits).