(“Seemingly” because when I last read his stuff, which was like 30 years ago, he waffled a bit. He may have clarified or changed his position since, but I haven’t heard so.)
Eliminativism solves the logical problem, but hardly anyone else buys it. Also it seems to make the substantive part of the job of cogsci much harder, because you can no longer use mental entities in your explanations.
Also, Eliminativist: You don’t have subjective experiences. Anyone else: Yes I do! E: That’s just an illusion. A: An illusion is a mistaken subjective experience, and you just said I don’t have them.
E: Well, I was being polite. Actually, you are just wired up to say you have experiences. And beliefs. You don’t actually believe you have experiences, or anything else. No one is home; you are a low-quality robot. A: [Punches him]
So say we admit there is subjective experience and want to explain it. Generally experience is experience *of* something; it is “intentional” in the technical sense of *about* something. So how does it get its aboutness?
The usual cognitivist move is to make the intentionality of experience dependent on the intentionality of representations. That’s because for a while they thought they had an explanation for the intentionality of representations.
Or, actually, they thought the AI guys did. We in AI thought the cognitivists did, so both fields proceeded on the assumption that intentionality was understood, each leaving the hard part to the other. Once both sides realized this, the whole thing imploded.
If there were an explanation of intentionality, that wouldn’t be an explanation of subjectivity. One could imagine an AI with genuinely-referring representations that has no subjectivity. In fact it is commonly (though mistakenly) believed that programs routinely do just that.
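To make that concrete, here is a minimal sketch (in Python, with every name invented for illustration) of the kind of program people have in mind when they say software already has referring representations. The variable causally co-varies with a (here, simulated) sensor, which is all that "reference" amounts to on the causal story, and there is presumably no subjectivity anywhere in it.

```python
import random

def read_sensor():
    """Stand-in for a physical thermometer; simulated here."""
    return 20.0 + random.uniform(-5.0, 5.0)

# `room_temp` "refers to" the room's temperature only in the thin,
# causal-covariation sense: the sensor drives its value.
room_temp = read_sensor()

# An internal state "about" the room being cold, with no one home
# to feel the cold.
heater_on = room_temp < 18.0

print(f"temp={room_temp:.1f}C, heater {'on' if heater_on else 'off'}")
```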
The prototype for the physical theory of mental representations is sentences written on paper. The difficulty is that they have meaning (intentionality) only for a reader. Who reads the sentences in our heads?
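For a concrete picture of "sentences in the head," here is a toy sketch (Python again, names invented) of the symbolic belief store a GOFAI program might contain. The point to notice is that the tuples mean anything only to us, the readers of the code; the program itself merely matches uninterpreted tokens.

```python
# A toy symbolic "belief store": sentence-like structures in the machine.
beliefs = [
    ("unicorn", "horn_count", 1),          # "unicorns have one horn"
    ("twitter_window", "color", "black"),  # "the twitter window is black"
]

def lookup(subject, attribute):
    """Answer a query by matching symbols against stored beliefs."""
    for subj, attr, value in beliefs:
        if subj == subject and attr == attribute:
            return value
    return None

# The program "knows" unicorns have one horn only by token-matching:
print(lookup("unicorn", "horn_count"))  # -> 1
```

The token "unicorn" is about unicorns for us; inside the machine it is just a string, which is the force of the question above.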
Cognitivism has to say "no one," but then we need a different explanation of what gives them meaning, and all attempts to devise one have failed. You can tell a causal story for "the twitter window I am typing at is black," since the window itself helps cause the representation; but "unicorns have one horn" is out of reach, because there are no unicorns for the representation to stand in any causal relation to.
We can tell a (vague, but plausible) mechanistic story about how perception works. That story may involve "representations," but they are dissimilar to (and apparently function quite differently from) propositional representations such as "unicorns have one horn."
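A hedged sketch of that contrast (Python once more, everything invented for illustration): a perception-style representation is something like an activation vector produced causally from input, while a propositional one is a discrete symbolic assertion. They are structurally very different objects.

```python
import numpy as np

# Perception-style "representation": activations produced causally from
# sensor input. It co-varies with the world but has no sentence-like
# parts: no subject, no predicate, no quantifier.
pixels = np.random.rand(64)        # stand-in for camera input
features = np.random.rand(10, 64)  # stand-in for learned feature detectors
percept = features @ pixels        # a 10-dimensional activation vector

# Proposition-style representation: a discrete symbolic assertion with
# constituent structure but no causal tie to anything in the world
# (there are no unicorns to cause it).
proposition = ("unicorn", "horn_count", 1)
```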
Um, I’m interweaving writing this too-long thread with other things and losing track of the point. It would be a very long blog post or a short book. Maybe it’s best to stop now, and we can discuss if it prompts thoughts.
A reply from the other participant in the thread:
So I've read this thread, your "tiny spooks" page, and heard a bit from people who seem to share your position, but I must admit that I still seem to fail to get a grip on the whole argument. (This might be entirely my own failing, of course.)