Not sure I understand the distinction you're making. Specifically, what is the distinction between a model and an explanation?
For any sequence of observations, there are infinitely many wrong explanations that nevertheless incorporate all the data points.
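That underdetermination point can be made concrete with a toy sketch (my illustration, not from the thread): take five observations generated by y = x², then build a whole family of rival rules whose correction term vanishes at every observed point. Each member "incorporates all the data points," yet they disagree arbitrarily off the data.

```python
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [x ** 2 for x in xs]  # observations generated by the "true" rule y = x**2

def explanation(c):
    """One member of an infinite family of rival explanations.

    The correction term is a product of (x - xi) factors, so it is
    exactly zero at every observed x: each member fits all the data.
    """
    def f(x):
        bump = 1.0
        for xi in xs:
            bump *= (x - xi)
        return x ** 2 + c * bump
    return f

# Every member incorporates all five data points...
for c in (0.0, 1.0, -3.5, 100.0):
    f = explanation(c)
    assert all(f(x) == y for x, y in zip(xs, ys))

# ...yet members disagree wildly about an unobserved input.
print([explanation(c)(5.0) for c in (0.0, 1.0, 100.0)])  # → [25.0, 145.0, 12025.0]
```

Any real-valued choice of `c` gives another explanation consistent with the same observations, which is the sense in which "infinitely many" is literal here.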
-
-
Adding learning does something to help this situation, though I might argue that "learning" is... underspecified?
-
E.g.: "Alright, but how quickly does it learn?" or "Is it incentivized to seek out novel situations or ideas?" etc.
-
If we could ask AlphaGo to explain how to play Go based on its model, it would seem really dumb.