I made a typo, last clause should be "and uses model states to make decisions"
Even if it could make explanations for its models, they might be obviously wrong
-
-
For any sequence of observations, there are infinitely many wrong explanations that nevertheless incorporate all data points
-
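The "infinitely many wrong explanations" point can be made concrete with a minimal sketch (my illustration, not from the thread): treat observations as (x, y) pairs and explanations as polynomials. Adding any multiple of a polynomial that vanishes on the observed x-values gives a new hypothesis that still matches every data point. All names here (`base`, `vanishing`, `explanation`) are hypothetical.

```python
# Illustration (not from the thread): for any finite set of observations,
# infinitely many distinct hypotheses fit every data point exactly.
points = [(0, 1), (1, 3), (2, 7)]  # hypothetical observations

def base(x):
    # One explanation that fits all three points: y = x^2 + x + 1
    return x * x + x + 1

def vanishing(x):
    # A polynomial that is zero at every observed x: (x-0)(x-1)(x-2)
    prod = 1
    for px, _ in points:
        prod *= (x - px)
    return prod

def explanation(k):
    # A whole family of "explanations", one per k, all agreeing on the data
    return lambda x: base(x) + k * vanishing(x)

# Every member of the family reproduces all observations...
for k in [0, 1, -5, 100]:
    assert all(explanation(k)(x) == y for x, y in points)

# ...yet the family diverges arbitrarily away from the data:
print([explanation(k)(3) for k in [0, 1, -5, 100]])  # → [13, 19, -17, 613]
```

Since k ranges over all numbers, the data alone can never single out one explanation.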
Adding learning helps somewhat, though I'd argue "learning" is... underspecified?
-
E.g. "Alright, but how quickly does it learn?" "Is it incentivized to seek out novel situations or ideas?" etc.
-
If we could ask AlphaGo to explain how to play Go based on its model, it would seem really dumb