One can also write a supervisory ("supra") program that looks at results and tags, and reports anomalies, or things society deems to be biases. Frequently, the problem with ML may be the mirror it holds up to us.
This Tweet is unavailable.
Only for obvious variables like race/gender, which we know to look for. That's why there is so much reporting on that: familiar ground. But ML will detect and discriminate on things we could not previously detect and would not even think to check for. There is no variable list to run against.
This Tweet is unavailable.
I am not going to convince you over Twitter that ML is not adding more variables, in the classic sense.
It's just not the same as, say, adding more variables like heart rate, blood pressure, and this or that measurement, and then running a regression or applying a formula.
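To make the contrast concrete, here is a minimal sketch of the "classic" approach the tweet describes (the data and the `fit_two_var` helper are invented for illustration, not from the thread): fit a two-variable linear regression and read the coefficients directly. Each coefficient has a plain per-variable meaning, which is exactly what a deep model's millions of weights lack.

```python
# Classic statistics: regress an outcome on named variables (heart rate,
# blood pressure) and read off one coefficient per variable.
# Toy numbers, no external libraries; illustrative only.

def fit_two_var(x1, x2, y):
    """Least-squares fit of y ~ a*x1 + b*x2 (no intercept) via normal equations."""
    s11 = sum(v * v for v in x1)
    s22 = sum(v * v for v in x2)
    s12 = sum(u * v for u, v in zip(x1, x2))
    t1 = sum(u * v for u, v in zip(x1, y))
    t2 = sum(u * v for u, v in zip(x2, y))
    det = s11 * s22 - s12 * s12
    a = (t1 * s22 - t2 * s12) / det
    b = (s11 * t2 - s12 * t1) / det
    return a, b

heart_rate = [60, 70, 80, 90]
blood_pressure = [110, 120, 130, 125]
# Synthetic outcome so the fit is exact: risk = 2*hr + 0.5*bp.
risk_score = [2 * hr + 0.5 * bp for hr, bp in zip(heart_rate, blood_pressure)]

a, b = fit_two_var(heart_rate, blood_pressure, risk_score)
# Each coefficient is directly inspectable: "one beat/min adds `a` to the score."
print(f"risk = {a:.2f} * heart_rate + {b:.2f} * blood_pressure")
```

A trained neural network, by contrast, spreads its fit across thousands or millions of weights with no such per-variable reading, which is the qualitative difference being argued about.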
Perhaps. But simply asserting something and then ignoring any possible objection does not make for a strong position.
So that I'm clear: is the assertion you are objecting to this: "ML is different than trad programs or databases and that it creates (for computation) unique challenges to transparency and auditing"? It's good for me to understand because, honestly, I'd have ranked that as mundane.
That’s a truism. The problem is that you can also say this about any new technology. Everything is always different in some important way, and so, really, it’s not a new problem at all. There are now a dozen or two replies to you pointing this out in various ways.
I actually don't think so, but thanks for the clarification. Yes, I'm asserting that there is something qualitatively different about ML than any other new technology: it's not just that it's opaque (or seemingly magical) to non-experts. It's intrinsically opaque to its experts.
Replying to @zeynep @benedictevans
zeynep tufekci Retweeted Where the Tweets have no name
One counter (1) is that we'll eventually crack this and have interpretable ML (see: https://twitter.com/andrewthesmart/status/1064341779816767488). The other counter (2) is that we have used black-box technologies before (they produced behavior we wanted, but we did not know how). (1) Maybe. (2) Not too many.
zeynep tufekci added,
Where the Tweets have no name @andrewthesmart
Replying to @stevesi @zeynep and 2 others
There is an entire new sub-field of machine learning research called "interpretable machine learning" which tries to develop techniques for interpreting the mathematical structures models use to make predictions **because the models are not comprehensible**
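The quoted tweet can be illustrated with one of the simplest techniques from that sub-field: permutation importance, which probes a model purely from the outside. This is a hypothetical sketch (the `opaque_model` and data are invented for illustration): shuffle one input column, and if predictions degrade, that feature mattered, all without ever reading the model's internals.

```python
import random

def mse(model, rows, targets):
    """Mean squared error of the model's predictions on the given rows."""
    return sum((model(r) - t) ** 2 for r, t in zip(rows, targets)) / len(rows)

def permutation_importance(model, rows, targets, col, repeats=5):
    """Average increase in MSE when column `col` is shuffled across rows.

    Treats `model` as a black box: we only call it, never inspect it.
    """
    rng = random.Random(0)  # fixed seed for reproducibility
    baseline = mse(model, rows, targets)
    increases = []
    for _ in range(repeats):
        shuffled_col = [r[col] for r in rows]
        rng.shuffle(shuffled_col)
        shuffled_rows = [list(r) for r in rows]
        for r, v in zip(shuffled_rows, shuffled_col):
            r[col] = v
        increases.append(mse(model, shuffled_rows, targets) - baseline)
    return sum(increases) / repeats

# A stand-in "black box": imagine this is an opaque trained model.
# (Here it secretly uses only feature 0 and ignores feature 1.)
opaque_model = lambda row: 3 * row[0]

rows = [[x, random.Random(x).random()] for x in range(8)]
targets = [3 * r[0] for r in rows]

imp0 = permutation_importance(opaque_model, rows, targets, col=0)
imp1 = permutation_importance(opaque_model, rows, targets, col=1)
print(f"feature 0 importance: {imp0:.2f}, feature 1 importance: {imp1:.2f}")
```

Shuffling the ignored feature leaves the error unchanged, so its importance comes out zero; shuffling the feature the model actually uses inflates the error. The point of the sub-field is that this kind of external probing is needed precisely because the model's internals cannot be read directly.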
But thanks, it's clearer to me what the disagreement is. I'll see if anyone has a long-form version of what I'm asserting (obviously, I think correctly) and what other historical black-box examples could compare (and also set out their limits).
This Tweet is unavailable.
Replying to @stevesi @benedictevans
Perhaps @zeynep needs to be more specific: does she mean neural networks rather than ML? Because as I understand it, ML encompasses a broad range of techniques, many of them explainable, whereas NNs are considered not explainable even by the experts in the field.
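As a sketch of what "explainable" means for some of those other ML techniques (toy data invented for illustration): a one-level decision stump learns its entire model as a single threshold, so the fitted result is a human-readable if/else rule rather than an opaque weight matrix.

```python
def fit_stump(xs, ys):
    """Learn the threshold on one feature that best separates two classes.

    The learned model is just the rule "predict 1 if x > threshold else 0",
    fully inspectable by a human.
    Returns (threshold, training accuracy).
    """
    best = (None, -1.0)
    pts = sorted(set(xs))
    # Candidate thresholds: midpoints between adjacent observed values.
    candidates = [(a + b) / 2 for a, b in zip(pts, pts[1:])]
    for t in candidates:
        preds = [1 if x > t else 0 for x in xs]
        acc = sum(p == y for p, y in zip(preds, ys)) / len(ys)
        if acc > best[1]:
            best = (t, acc)
    return best

# Toy data: low values are class 0, high values are class 1.
xs = [1, 2, 3, 10, 11, 12]
ys = [0, 0, 0, 1, 1, 1]
threshold, accuracy = fit_stump(xs, ys)
print(f"learned rule: predict 1 if x > {threshold} (accuracy {accuracy:.0%})")
```

The whole fitted model here is one number, and the rule it encodes can be stated in a sentence; the disagreement in the thread is over techniques, like deep NNs, where no such statement exists.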