Only for obvious variables like race/gender, which we know to look for. That's why there is so much reporting on those: familiar ground. But ML will detect and discriminate on things we could not previously detect and would not even think to check for. There's no variable list to run against.
This Tweet was deleted by the Tweet author.
I am not going to convince you over Twitter that ML is not adding more variables, in the classic sense. 🤷♀️ It's just not the same as, say, adding more variables like heart rate, blood pressure, and this or that measurement, then running a regression or applying a formula.
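To illustrate the distinction being drawn here (a minimal sketch on synthetic data; the feature names are hypothetical, not from this thread): a classic regression yields one inspectable coefficient per named variable, whereas a neural net's learned weights map to no named variable at all.

```python
# Sketch: a named-variable regression vs. a neural net's anonymous weights.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))          # columns: heart_rate, blood_pressure
y = 0.5 * X[:, 0] + 1.5 * X[:, 1] + rng.normal(scale=0.1, size=200)

linear = LinearRegression().fit(X, y)
# Each coefficient answers "what does this named variable contribute?"
print(dict(zip(["heart_rate", "blood_pressure"], linear.coef_)))

net = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000).fit(X, y)
# Layers of weight matrices; no per-variable story to read off.
print([w.shape for w in net.coefs_])
```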
So that I'm clear: is the assertion you are objecting to this: "ML is different from traditional programs or databases, and it creates (for computation) unique challenges to transparency and auditing"? It's good for me to understand because, honestly, I'd have ranked that as mundane.
That’s a truism. The problem is that you can also say this about any new technology. Everything is always different in some important way, and so, really, it’s not a new problem at all. There are now a dozen or two replies to you pointing this out in various ways. 🤷🏻♂️
I actually don't think so, but thanks for the clarification. Yes, I'm asserting that there is something qualitatively different about ML compared with any other new technology: it's not just that it's opaque (or seemingly magical) to non-experts. It's intrinsically opaque to its own experts. +
One counter (1) is that we'll eventually crack this and have interpretable ML (see: twitter.com/andrewthesmart). The other counter (2) is that we have used black-box technologies before (ones that produced behavior we wanted without our knowing how). On (1): maybe. On (2): there aren't too many.
Quote Tweet
Replying to @stevesi @zeynep and 2 others
There is an entire new sub-field of machine learning research called "interpretable machine learning" which tries to develop techniques for interpreting the mathematical structures models use to make predictions **because the models are not comprehensible**
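As one illustration of what that sub-field does (a hedged sketch, not any specific method from this thread): permutation feature importance probes a fitted black-box model from the outside, shuffling one input column at a time and measuring how much the score drops, precisely because the model's internals are not readable directly.

```python
# Sketch: probing a black-box model from the outside (synthetic data).
from sklearn.datasets import make_classification
from sklearn.inspection import permutation_importance
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, n_features=6, n_informative=3,
                           random_state=0)
model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000,
                      random_state=0).fit(X, y)

# Shuffle each feature in turn; a large accuracy drop means the model
# leaned on that feature, even though we can't read its weights.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature {i}: importance {imp:.3f}")
```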
But thanks, it's clearer to me what the disagreement is. I'll see if anyone has a long-form version of what I'm asserting (correctly, I obviously think) and which historical black-box examples might compare (and also set out their limits).
This Tweet was deleted by the Tweet author.
Perhaps she needs to be more specific: does she mean neural networks rather than ML? Because as I understand it, ML encompasses a broad range of techniques, many of them explainable.
Whereas NNs are considered not explainable even by the experts in the field.
NNs are much harder to interpret, but that doesn't mean other techniques are easily interpretable right away. Nevertheless, it's better to focus on NNs than on "algorithms" in general.
NNs can sometimes be converted to decision trees, to some degree, for interpretability with acceptable loss. Yet feature selection is not trivial, and there's a tradeoff between interpretability and accuracy. I believe this tradeoff is what leads to her arguments.
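A sketch of that conversion idea, assuming a standard "global surrogate" setup (the depth and layer sizes here are illustrative): train a small decision tree to mimic the net's predictions, then measure how faithfully it agrees, which makes the interpretability/accuracy tradeoff concrete.

```python
# Sketch: distilling a neural net into a shallow decision-tree surrogate.
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=2000, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

net = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=1000,
                    random_state=0).fit(X_train, y_train)

# Train the tree on the NET'S labels, not the true ones: it learns to
# imitate the black box while staying small enough to read.
tree = DecisionTreeClassifier(max_depth=4, random_state=0)
tree.fit(X_train, net.predict(X_train))

print("net accuracy: ", accuracy_score(y_test, net.predict(X_test)))
print("tree fidelity:", accuracy_score(net.predict(X_test),
                                       tree.predict(X_test)))
print(export_text(tree))  # the human-readable approximation of the net
```

A deeper tree tracks the net more faithfully but stops being readable; that gap between fidelity and readability is the tradeoff described above.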