Like, are we assuming that a submind model presupposes a cognitivist theory? Does it have to? How exactly are we defining a cognitivist theory? (this is the kind of term that everyone defines slightly differently)
Replying to @xuenay @Meaningness
You say that there may be something like representations but they are different from the unicorn-propositional ones (UPOs), but why would they need to be UPOs in the first place? Does the submind model assume that? I don't think it does?
Replying to @xuenay @Meaningness
TBH, intentionality seems to me like the kind of issue that philosophers will spend a lot of ink on showing to be impossible, only for an engineer to show up and solve it while shrugging at the indignant philosopher insisting that that's just a hack, not a real solution. :-)
Replying to @xuenay @Meaningness
Not that there are no important issues about intentionality... but e.g. this seems to hinge on whether logical relationships are physical? That kind of thing makes me skeptical about whether the argument has any real-world relevance, rather than just playing with words. pic.twitter.com/OIJXn7xUz8
Replying to @xuenay @Meaningness
If logical relationships being non-physical is a problem for some overly strict version of physicalism, so much the worse for that theory. But why is that interesting? It makes no real-world predictions, AFAICT.
Replying to @xuenay @Meaningness
(Not that a philosophical argument would always need to, but if you claim this to be an important reason why our model of mind is wrong, you need to cash out the argument in real-world terms, or it's impossible for me to evaluate.)
Replying to @xuenay @Meaningness
Of course, I might just be entirely misunderstanding the whole thing. :-) Feel free to let me know in that case.
Hmm, yes, this does need more explanation than is feasible on Twitter!
Replying to @Meaningness @xuenay
I guess I can try to clarify one thing. The observation is not that submind theory requires a homunculus more than some other similar theory. It’s that it doesn’t need spookiness any less. That is: sharding the ghost in the machine doesn’t help; you just get lots of ghosts.
Replying to @Meaningness @xuenay
Unless we're talking about subminds that aren't minds in any typical sense of the word. Kind of like adding epicycles? May be wrong, but can still be useful.
Well, Minsky’s SOM project was to understand intelligence by breaking it into successively smaller, less-intelligent pieces, until you got to pieces that don’t need to be intelligent at all. That didn’t work then, but there’s no a priori reason it might not work (AFAWK).
Replying to @Meaningness @garybasin
It doesn’t seem that you can apply the same approach to intentionality or subjectivity, though. There’s no concept of “simpler and therefore somewhat less referential” or “simpler and somewhat less aware.”
Replying to @Meaningness @garybasin
Specifically wrt submind theory, the subminds are taken as having beliefs, desires, and intentions, which are no less spooky than those of the person as a whole.
(5 more replies)