Like, to me it mostly seems to say "there are things we haven't figured out yet", but this discussion was in the context of the submind model, and the connection between "we haven't figured this out" and "that's why submind theory assumes a homunculus" isn't clear to me?
-
Unless we're talking about subminds that aren't minds in any typical sense of the word. Kind of like adding epicycles? Maybe it's wrong, but it can still be useful.
-
Well, Minsky’s Society of Mind (SOM) project was to understand intelligence by breaking it into successively smaller, less-intelligent pieces, until you got to pieces that don’t need to be intelligent at all. That didn’t work then, but there’s no a priori reason it couldn’t work (afawk).
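A toy sketch of that decomposition idea, purely for illustration: a higher-level "agency" is nothing but a wiring-together of dumber sub-agents, bottoming out in fixed stimulus-response rules that aren't intelligent at all. Every name here (grasp, reach, agency, the percept strings) is made up; this is not Minsky's actual system.

```python
from typing import Callable, Optional

# An "agent" is anything that maps a percept to an action suggestion (or None).
Agent = Callable[[str], Optional[str]]

# Bottom-level agents: trivially dumb, fixed stimulus-response rules.
def grasp(percept: str) -> Optional[str]:
    return "close-hand" if "block-touching-palm" in percept else None

def reach(percept: str) -> Optional[str]:
    return "extend-arm" if "block-in-view" in percept else None

def agency(*subagents: Agent) -> Agent:
    """Compose sub-agents into a higher-level agent: the first sub-agent
    with an opinion wins. The 'intelligence' of the whole is nothing over
    and above this wiring."""
    def run(percept: str) -> Optional[str]:
        for agent in subagents:
            action = agent(percept)
            if action is not None:
                return action
        return None
    return run

builder = agency(grasp, reach)   # a slightly-less-dumb "builder" agency
print(builder("block-in-view"))                       # -> extend-arm
print(builder("block-in-view block-touching-palm"))   # -> close-hand
```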
- 7 more replies
New conversation -
-
So had a three-hour conversation with an enactivist today, which clarified some of this, and I found much I could agree with, e.g. intentionality being grounded in action. But I still found myself puzzled by the claim that enactivism proves cognitivism/representationalism *wrong*.
-
I used the example of AlphaGo being fed a board position, estimating a win probability, and using it to guide a tree search of future positions. I asked, is the estimated win probability not a representation?
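For concreteness, here is a minimal runnable sketch of the pattern that tweet describes: a stand-in "value network" maps a position to an estimated win probability, and a depth-limited search consumes that scalar to rank future positions. The game (a tiny take-1-or-2 Nim variant) and every function here are toy assumptions, not AlphaGo's actual architecture or API.

```python
def value_estimate(pile: int) -> float:
    """Stand-in for a trained value network: maps a position (the pile
    size) to an estimated win probability for the player to move. In
    take-{1,2} Nim, piles divisible by 3 are losing, so this fake 'net'
    just encodes that with some softness."""
    return 0.1 if pile % 3 == 0 else 0.9

def search(pile: int, depth: int) -> float:
    """Depth-limited search over future positions, guided by the value
    estimate. The scalar returned by value_estimate is the candidate
    'representation' at issue: a quantity that stands in for the
    position's value and steers the search."""
    if pile == 0:
        return 0.0                     # no stones left: player to move has lost
    if depth == 0:
        return value_estimate(pile)    # leaf: fall back on the estimate
    # Try each legal move; the opponent moves next, so our win
    # probability is 1 minus theirs in the resulting position.
    return max(1 - search(pile - take, depth - 1)
               for take in (1, 2) if take <= pile)

if __name__ == "__main__":
    for pile in range(1, 8):
        print(pile, round(search(pile, depth=3), 2))
```

The point of the sketch is just that the win-probability estimate is a freestanding quantity the search reads and acts on, which is why it looks like a representation in the ordinary sense.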
- 26 more replies
New conversation -