TBH, intentionality seems to me like the kind of issue that philosophers will spend a lot of ink showing to be impossible, only for an engineer to show up and solve it, shrugging at the indignant philosopher who insists that's just a hack, not a real solution. :-)
Replying to @xuenay @Meaningness and others
Not that there are no important issues about intentionality... but e.g. this seems to hinge on whether logical relationships are physical? That kind of thing makes me skeptical about whether the argument has any real-world relevance rather than just playing with words. pic.twitter.com/OIJXn7xUz8
Replying to @xuenay @Meaningness and others
If logical relationships being non-physical is a problem for some overly strict version of physicalism, so much the worse for that theory. But why is that interesting? It makes no real-world predictions, AFAICT.
Replying to @xuenay @Meaningness and others
(Not that a philosophical argument would always need to, but if you claim this is an important reason why our model of mind is wrong, you need to cash out the argument in real-world terms, or it's impossible for me to evaluate.)
Replying to @xuenay @Meaningness and others
Of course, I might just be entirely misunderstanding the whole thing. :-) Feel free to let me know in that case.
Hmm, yes, this does need more explanation than is feasible on Twitter!
Replying to @Meaningness @xuenay and others
I guess I can try to clarify one thing. The observation is not that submind theory requires a homunculus more than some other similar theory does. It's that it doesn't need spookiness any less. That is: sharding the ghost in the machine doesn't help; you just get lots of ghosts.
Replying to @Meaningness @xuenay and others
Unless we're talking about subminds that aren't minds in any typical sense of the word. Kind of like adding epicycles? May be wrong, but still can be useful.
Replying to @garybasin @xuenay and others
Well, Minsky's SOM project was to understand intelligence by breaking it into successively smaller, less-intelligent pieces, until you got to pieces that don't need to be intelligent at all. That didn't work then, but there's no a priori reason it couldn't work (AFAWK).
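[A minimal sketch of the decomposition idea being described, assuming nothing about Minsky's actual architecture; the Rule and Society classes and the stimulus strings are purely hypothetical illustrations. The point it shows: each "intelligent" node is just routing among simpler parts, and the leaves are dumb lookup tables that need no intelligence at all.]

# Toy illustration of Society-of-Mind-style decomposition (not Minsky's
# actual architecture): an "agent" is either a trivial rule that needs no
# intelligence, or a dispatcher that delegates to simpler sub-agents.

class Rule:
    """A leaf: a dumb stimulus-response lookup that no one would call intelligent."""
    def __init__(self, table):
        self.table = table

    def act(self, stimulus):
        return self.table.get(stimulus, "no-op")

class Society:
    """A node: its 'intelligence' is nothing but routing among simpler parts."""
    def __init__(self, subagents):
        self.subagents = subagents

    def act(self, stimulus):
        # Ask each sub-agent in turn; take the first non-trivial answer.
        for agent in self.subagents:
            response = agent.act(stimulus)
            if response != "no-op":
                return response
        return "no-op"

# The whole "mind" bottoms out in tables, with no homunculus at any level.
grasp = Rule({"cup": "close fingers"})
avoid = Rule({"flame": "withdraw hand"})
hand = Society([grasp, avoid])
print(hand.act("flame"))  # -> "withdraw hand"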
Replying to @Meaningness @garybasin and others
It doesn’t seem that you can apply the same approach to intentionality or subjectivity, though. There’s no concept of “simpler and therefore somewhat less referential” or “simpler and somewhat less aware.”
Specifically wrt submind theory, the subminds are taken as having beliefs, desires, and intentions, which are no less spooky than those of the person as a whole.
Replying to @Meaningness @garybasin and others
As Rin’dzin mentioned in the podcast, I find the submind approach *majorly* valuable in understanding myself, but I regard it as a heuristically useful metaphor, rather than as an actual explanation.
Replying to @Meaningness @garybasin and others
My experience of stuff arising in mind is that it's arbitrary and random - I find it hard to ascribe intentionality/agency, even when points of reference for the content of the thought (or whatever is arising) are apparent. Didn't think to mention that in the podcast.