really though, the only reason I could say that of the former is that it is "under-specified" compared to the latter
Replying to @BagelDaughter @ReferentOfSelf
so the former *might* be useful for explanations, or maybe not. but the latter seems optimized for something else
Replying to @BagelDaughter @ReferentOfSelf
(also, everything I'm expressing here is based on my present contending with Popperian epistemology)
Replying to @BagelDaughter
what if I specify that the latter learns; that its models continue to make good predictions in new situations?
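(A minimal sketch of what "continues to make good predictions in new situations" could mean operationally; this is my own Python illustration, and the data and model choices are assumptions rather than anything stated in the thread: fit two models to the same observed situations, then score them on situations outside that range.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Situations the agent has already encountered, plus genuinely new ones.
seen_x = rng.uniform(0.0, 5.0, size=50)
new_x = rng.uniform(5.0, 10.0, size=50)                  # outside anything seen
truth = lambda x: 2.0 * x + 1.0
seen_y = truth(seen_x) + rng.normal(0.0, 0.5, size=50)   # noisy observations
new_y = truth(new_x)

# Two models that both fit the seen situations well: a simple line, and a
# high-degree polynomial that also chases the noise.
line = np.polynomial.Polynomial.fit(seen_x, seen_y, 1)
wiggly = np.polynomial.Polynomial.fit(seen_x, seen_y, 15)

def mse(model, x, y):
    return float(np.mean((model(x) - y) ** 2))

print(mse(line, seen_x, seen_y), mse(wiggly, seen_x, seen_y))  # both look fine on seen data
print(mse(line, new_x, new_y), mse(wiggly, new_x, new_y))      # only one keeps predicting well
```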
Replying to @ReferentOfSelf @BagelDaughter
what you're saying is rather abstract to me; can you give an example (tweet size permitting)?
Replying to @ReferentOfSelf
I've been going off the idea that an agent can make all correct predictions but have a very poor ability to "explain"
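(A toy illustration of that gap, again my own assumption rather than anything in the thread: a predictor that is exactly right on every stored observation, whose "model" is just the observations themselves and offers nothing that could be called an explanation.)

```python
# A "predictor" that is perfectly correct on everything it has seen,
# yet whose model is just raw memory -- nothing that compresses,
# generalizes, or could be offered as an explanation.
class LookupPredictor:
    def __init__(self):
        self.memory = {}

    def observe(self, situation, outcome):
        self.memory[situation] = outcome

    def predict(self, situation):
        # Perfect on anything already seen; helpless on anything novel.
        return self.memory.get(situation)

p = LookupPredictor()
for x in range(100):
    p.observe(x, 2 * x + 1)

print(p.predict(42))    # 85 -- all correct predictions on seen cases
print(p.predict(1000))  # None -- and nothing to extrapolate or explain with
```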
Replying to @BagelDaughter @ReferentOfSelf
Even if it could make explanations for its models, they might be obviously wrong
Replying to @BagelDaughter @ReferentOfSelf
For any sequence of observations, there are infinitely many wrong explanations that nevertheless incorporate all data points
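(That point can be made concrete with a small sketch; the specific data points and coefficients here are illustrative assumptions of mine: through any finite set of observations you can thread arbitrarily many curves that agree on every observed point and disagree about everything else.)

```python
import numpy as np

# Five observations, all consistent with the simple rule y = 2x.
xs = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
ys = 2.0 * xs

def rival_explanation(c):
    """A family of 'explanations': every member reproduces all five observations,
    because the extra term vanishes at each observed x, yet each member predicts
    something different for every unobserved situation."""
    def predict(x):
        wiggle = c * (x - 0) * (x - 1) * (x - 2) * (x - 3) * (x - 4)
        return 2.0 * x + wiggle
    return predict

for c in (0.0, 1.0, -3.0):           # any real c works, hence infinitely many rivals
    f = rival_explanation(c)
    print(np.allclose(f(xs), ys),    # True for every c: all data points incorporated
          f(5.0))                    # 10.0, 130.0, -350.0: wildly different predictions
```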
Replying to @BagelDaughter @ReferentOfSelf
Adding learning does something to help this situation, though I might argue "learning" is... underspecified?
Replying to @BagelDaughter @ReferentOfSelf
E.g. "Alright, but how quickly does it learn?" "Is it incentivized to seek out novel situations or ideas?" etc.
If we could ask AlphaGo to explain how to play Go based on its model, it would seem really dumb