Of course, I might just be entirely misunderstanding the whole thing. :-) Feel free to let me know in that case.
Hmm, yes, this does need more explanation than is feasible on Twitter!
I guess I can try to clarify one thing. The observation is not that submind theory requires a homunculus more than some other similar theory. It’s that it doesn’t need spookiness any less. That is: sharding the ghost in the machine doesn’t help; you just get lots of ghosts.
So I had a three-hour conversation with an enactivist today and it clarified some of this, and I found much that I could agree with, e.g. intentionality being grounded in action. But I still found myself puzzled over the claim that enactivism proves cognitivism/representationalism *wrong*.
I used the example of AlphaGo being fed a board position, estimating a win probability, and using it to guide a tree search of future positions. I asked, is the estimated win probability not a representation?
It refers to the external world, can be decoupled from the current situation (the future board positions being searched over have their own probabilities), has a causal role in the system's decision-making, and has been explicitly designed to play such a role by the system's developers.
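To make the AlphaGo example concrete, here is a minimal sketch of a win-probability estimate guiding a tree search. This is a toy illustration, not AlphaGo's actual algorithm (the real system uses Monte Carlo tree search with learned policy and value networks), and every helper name in it (`estimate_win_probability`, `legal_moves`, `apply_move`) is a hypothetical stand-in:

```python
# Minimal sketch: a win-probability estimate guiding a tree search.
# Not AlphaGo's actual algorithm; all helper names are hypothetical.

def estimate_win_probability(position):
    """Stand-in for a learned value model: maps a board position to an
    estimated probability that we win from that position."""
    raise NotImplementedError  # assume a trained model is plugged in here

def legal_moves(position):
    """Stand-in for the game rules: all moves playable from `position`."""
    raise NotImplementedError

def apply_move(position, move):
    """Stand-in for the game rules: the position after playing `move`."""
    raise NotImplementedError

def search_value(position, depth, our_turn):
    """Depth-limited lookahead. At the horizon, use the model's direct
    estimate; otherwise the value of a position is the best (for whoever
    is to move) value among the future positions searched over. Each
    future position gets its own estimate, decoupled from the current
    board, which is the decoupling point made above."""
    if depth == 0:
        return estimate_win_probability(position)
    children = [search_value(apply_move(position, m), depth - 1, not our_turn)
                for m in legal_moves(position)]
    if not children:  # no legal moves: fall back to the direct estimate
        return estimate_win_probability(position)
    return max(children) if our_turn else min(children)

def choose_move(position, depth=2):
    """Play the move whose resulting position has the highest estimated
    win probability for us (the opponent moves next, so our_turn=False)."""
    return max(legal_moves(position),
               key=lambda m: search_value(apply_move(position, m),
                                          depth - 1, our_turn=False))
```

The point of the sketch is just that one number plays both roles discussed here: it is produced about a (possibly hypothetical) board state, and it is consumed to drive action.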
We ended up looking at a list of tenets of enactivism, one of which said something like "the meaning of cognitive contents comes from their role in action, not in virtue of their representing an external circumstance or containing a miniature world".
And I was like, okay, you could interpret the probability in this light, where its meaning comes from its role in the action. Or you could take a more representationalist view, where its meaning comes from that *and* the representational content that allows it to do its job.
And these seemed to me like two equally valid lenses for analyzing it, one of which emphasized the action and downplayed the representational aspect while the other didn't, but neither could be said to be more right or wrong than the other.
We have found that it is difficult to do philosophy over Twitter. However, I’ll give it a bit of a go… First, everyone agrees that artifacts can be representations; a stop sign, for example. They are representational in virtue of our treating them as such, not inherently.
Most representations in a computer also obviously only have this sort of “derived intentionality.” If you read a news report on your screen, what it means is not intrinsic to the pixels; the meaning depends on the human ability to understand it.
The hard question is “original intentionality”: under what circumstances (if any) is something inherently representational, & how does that work? E.g., is a chess program’s representation of board states inherently that, or is it like a stop sign, requiring human interpretation?
If the program controls a robot and machine vision system that interact with a physical board, then it is more plausible to say that its representations are inherently of board states (although even this turns out to be surprisingly tricky).