Or, ML algorithms will, without anyone understanding what's going on, weed out people prone to depression or women likely to become pregnant in the next two years. It's ML, not just bureaucratic rules codified into software.
-
Notice that my argument isn't (1) humans are great; (2) AI is biased (that's a problem, but it's almost the easier problem because it's mainly a political one). I'm saying that ML, the current form of AI, will work, and work well, at scale and cheaply, and in ways we don't understand.
-
An interesting paper on auditing deep learning systems. tl;dr: testing and guessing. https://scholarship.tricolib.brynmawr.edu/handle/10066/18664
New conversation
You keep making this assertion. I keep pointing out why it’s flawed. This would be a more productive conversation if you could respond to that.
-
When we take formal rules and put them in a database (say the CA immigration system: it adds up points, and it doesn't matter whether that's done by hand or by computer), or even when we automate a fairly well-understood system (flying), we have ways of debugging and troubleshooting that we don't have for ML.
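The contrast drawn above can be made concrete. Below is a minimal sketch of a points-based scorer of the kind the tweet describes; the rules and point values are hypothetical, not the actual CA system. The point is that every score is traceable to a named, inspectable rule, which is exactly the debuggability an ML model lacks.

```python
# Hypothetical points-based scorer: each rule is an explicit, named
# function, so any decision can be audited rule by rule.
RULES = {
    "has_degree": lambda a: 25 if a.get("degree") else 0,
    "language_test": lambda a: 20 if a.get("language_test_passed") else 0,
    "work_experience": lambda a: min(a.get("years_experience", 0), 5) * 3,
}

def score(applicant):
    """Return the total score plus a per-rule breakdown for auditing."""
    breakdown = {name: rule(applicant) for name, rule in RULES.items()}
    return sum(breakdown.values()), breakdown

total, why = score({"degree": True, "years_experience": 10})
print(total, why)  # every point is attributable to a named rule
```

If an applicant disputes a decision, you can point at the exact rule that fired; with a trained model there is no analogous breakdown to point at.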
New conversation
-
This Tweet is unavailable.
-
There is an entire new sub-field of machine learning research called "interpretable machine learning," which tries to develop techniques for interpreting the mathematical structures models use to make predictions **because the models are not comprehensible**.
End of conversation