Why is an institution implementing an ML system without understanding how it could be wrong any different to an institution implementing a database without having that understanding?
So the Q is: do you think we will have explainable/interpretable AI? I'm on the side that this doesn't seem any more likely than using brain scans to understand humans. ML is classification that works not via rules we wrote (not Symbolic/Minsky-style code). That really is different!
We use ML because it can do things that we cannot hand-code by writing rules or instructions. It's exactly because it's so different that it's so powerful and useful, and spread so widely in just a few years (once it had data to eat).