How do you tell the difference between an agent that reasons about "the external world" and one whose models merely make good predictions?
So the former *might* be useful for explanations, or maybe not. But the latter seems optimized for something else.
-
(Also, everything I'm expressing here is based on my current grappling with Popperian epistemology.)
-
What if I specify that the latter learns, i.e. that its models continue to make good predictions in new situations?