How do you tell the difference between an agent that reasons about "the external world" and
-
by "explanation" I mean a thing that acts on existing models to change/improve them
-
really though, the only reason I could say that of the former is that it is "under-specified" compared to the latter
-
so the former *might* be useful for explanations, or maybe not. but the latter seems optimized for something else
-
(also, everything I'm expressing here is based on my present contending with Popperian epistemology)