Eliminativism solves the logical problem, but hardly anyone buys it. It also seems to make the substantive part of cogsci's job much harder, because you can no longer use mental entities in your explanations.
Um, I'm interweaving writing this too-long thread with other things and losing track of the point. Doing it properly would take a very long blog post or a short book. Maybe it's best to stop here, and we can discuss further if it prompts thoughts.
-
So I've read this thread and your "tiny spooks" page, and heard a bit from people who seem to share your position, but I must admit that I still can't quite get a grip on the whole argument. (This might be entirely my own failing, of course.)
-
Like, to me it mostly seems to say "there are things we haven't figured out yet", but this discussion was in the context of the submind model, and the connection between "we haven't figured this out" and "that's why submind theory assumes a homunculus" isn't clear to me?