The hard question is “original intentionality”: under what circumstances (if any) is something inherently representational, & how does that work? E.g., is a chess program’s representation of board states inherently that, or is it like a stop sign, requiring human interpretation?
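For concreteness, here's a minimal sketch (mine, not from the thread) of the kind of data structure at issue. Nothing in it makes the symbol 'R' be *about* a rook; that mapping is supplied by the programmers and users who interpret it:

```python
# A chess program's "representation" of a board state is typically just an
# 8x8 array of symbols. The data structure itself carries no aboutness;
# lowercase = black and uppercase = white by convention only.

EMPTY = "."

initial_board = [
    list("rnbqkbnr"),
    list("pppppppp"),
    [EMPTY] * 8,
    [EMPTY] * 8,
    [EMPTY] * 8,
    [EMPTY] * 8,
    list("PPPPPPPP"),
    list("RNBQKBNR"),
]

def show(board):
    """Print the board, one rank per line."""
    for rank in board:
        print(" ".join(rank))

show(initial_board)
```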
If the program controls a robot and machine vision system that interact with a physical board, then it is more plausible to say that representations are inherently of board states (although even this turns out to be surprisingly tricky).
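To make the coupling idea concrete, here's a hedged sketch (my own; `capture_image` and `classify_square` are hypothetical stand-ins for a camera driver and a vision model, not any real library's API). The point is that the internal board array is rebuilt from camera input, so its contents causally covary with the physical board:

```python
def capture_image():
    # Hypothetical stand-in for a camera driver; returns a dummy frame.
    return [[0] * 64 for _ in range(64)]

def classify_square(image, file, rank):
    # Hypothetical stand-in for a vision model that maps the pixels of one
    # board square to a piece label; this dummy always answers "empty".
    return "."

def sense_board():
    """Rebuild the internal board state from the camera image.

    Because every entry is recomputed from light reflected off the
    physical board, the array's contents causally covary with the
    world -- the sense in which the representation claim becomes
    more plausible.
    """
    image = capture_image()
    return [[classify_square(image, f, r) for f in range(8)] for r in range(8)]
```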
So, e.g., some neurons in human V1 cortex can reasonably be said to represent edge angles, because they’re causally coupled to edges at those angles. The question is whether you can extend a representational story to cognition in general (and if so, how).
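As an illustration (a toy of my own, not a claim about actual V1 circuitry): a "simple cell" can be cartooned as an oriented linear filter whose output covaries lawfully with the angle of an edge in its receptive field, and that lawful coupling is what licenses the representation talk:

```python
# Toy "edge detector" units: each is an oriented linear filter. The unit
# whose preferred angle is closest to the stimulus edge responds most,
# so its activity covaries with (and in that sense represents) edge angle.
import numpy as np

def oriented_filter(theta, size=9):
    """A crude oriented filter: positive on one side of a line at angle
    theta, negative on the other (a cartoon receptive field, not a
    fitted Gabor)."""
    half = size // 2
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    d = xs * np.sin(theta) - ys * np.cos(theta)  # signed distance to the line
    return np.sign(d) * np.exp(-(xs**2 + ys**2) / (2 * (size / 4) ** 2))

def edge_patch(theta, size=9):
    """An image patch containing a light/dark edge at angle theta."""
    half = size // 2
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    return np.where(xs * np.sin(theta) - ys * np.cos(theta) > 0, 1.0, -1.0)

angles = np.linspace(0, np.pi, 8, endpoint=False)  # a bank of tuned units
stimulus = edge_patch(np.pi / 3)                   # an edge at ~60 degrees
responses = [float(np.sum(oriented_filter(a) * stimulus)) for a in angles]
best = angles[int(np.argmax(responses))]
print(f"best-responding unit is tuned to {np.degrees(best):.0f} degrees")
```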
E.g., there’s no direct coupling of my knowledge that Ouagadougou is the capital of Burkina Faso to Ouagadougou. I have no idea what it looks like, how to get there, etc. This *can’t* be grounded in my personal perception or action (so some versions of “enactivism” are wrong).
In that case, we need a social account of distributed knowledge: I can regurgitate the phrase “Ouagadougou is the capital of Burkina Faso,” but it is meaningful only in virtue of *other* people being able to interact with Ouagadougou itself, and thereby meaningfully interpret my “knowledge.”
Additionally, much of what we know does not, as far as anyone can tell, involve representations at all. A standard example is bicycle riding. Cognitivists have to say that this ability is represented unconsciously, but there’s zero evidence for that, and good arguments against it.
So the representationalist story is that ALL mental activity, by definition, consists of computations over representations that are intrinsically meaningful. This runs into a slew of problems, is just not credible, and was abandoned by all serious philosophers around 1992.
One can imagine weakening the cognitivist story so that only certain sorts of mental activity are like that, or something, but I don’t know of any serious proposals along those lines.
Instead, cognitivists just agreed to carefully avoid talking about anything that would make the difficulties obvious. Unfortunately that was almost everything, so cognitive science has been basically sterile and at a standstill since the 1992 implosion.
The first fMRI study was in 1992, just when AI-based cognitivism died (I helped kill it). So the cognitivists all transferred their hopes to “neuroscience will eventually explain how representation works.”
But fMRI is not nearly fine-grained enough to see representations, even if they existed, & fMRI basically doesn’t work at all anyway, so thirty years’ work from a lot of otherwise seemingly intelligent people has been wasted.