Or maybe I should say that some people seem to learn rationality and meta-rationality in an intertwined way, so that meta-rationality never looks counterintuitive because it's been part of the lesson all along.
An excess focus on rationality, neglecting the meta-rational aspects, looks to me more like something produced by tribal signaling ("our thinking is better than that of emotion-only hippies"), psychological insecurities, or autism-spectrum traits than like a logical necessity.
This ties back to what @everytstudies said in the quoted thread - "it would be way easier to absorb [the constructivist lesson] from people who didn't come off as hostile". Meta-rationality wouldn't be so counterintuitive if its ideas weren't associated with a hostile tribe.
Thanks for the replies! Yeah, I've read the first bit of @everytstudies' podcast transcript and liked the social constructivist example too (also reminds me I should read some of those people). I agree that the transition could be made easier...
e.g. by showing what's on the other side of pomo ideas. I think the transition would still be fairly difficult even without the hostility though, so maybe we disagree on that.
Maybe? My main grasp of rationalism is LW rationalism, which I realize is slightly different from David's main critique - but still, when I started reading the Sequences, a lot of it felt like a scientific explanation of that pomo stuff my humanities friend talked about.
Hm, yes OK that is surprising to me, I didn't get anything like that out of the LW sequences (I've only read parts, and relatively recently). Which bits are you thinking of, out of interest?
It's been a long time since I read them, but lots of stuff that seemed to be saying things like "judgments are in the mind of the interpreter rather than being objective facts of the world", e.g. https://www.readthesequences.com/Mind-Projection-Fallacy.
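(As an illustration of that "in the mind of the interpreter" point, here's a minimal sketch of my own; the coin and the numbers are invented for the example, not taken from the Sequences post.)

```python
# Hypothetical sketch: the same coin, two observers. The probabilities they assign
# differ because a probability describes an observer's state of knowledge, not a
# property of the coin itself -- the point of the Mind Projection Fallacy post.

TRUE_BIAS = 0.7  # a physical fact about this (imaginary) coin

# Observer A has inspected the coin and knows the bias.
p_heads_according_to_a = TRUE_BIAS   # 0.7

# Observer B knows nothing about the coin and so treats it as fair.
p_heads_according_to_b = 0.5

# Same coin, same next flip, two different -- and both reasonable -- probabilities.
print(f"A: P(heads) = {p_heads_according_to_a}")
print(f"B: P(heads) = {p_heads_according_to_b}")
# Neither 0.7 nor 0.5 lives "in" the coin; each describes an observer's information.
```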
Ah, OK, thanks. This probably gets into more than we can really go into on Twitter, but this seems to be mostly about probabilistic uncertainty, which is pretty well covered by what-I'd-call-rationality already...
What-I'd-call-metarationality also includes the question of how you pick your state space in the first place, so that you can even start to apply probabilistic methods... which LW rarely goes into as far as I can see (apart from stuff that's unusable in practice, like Solomonoff induction)
This post gets into the distinction in a better way than I'm going to manage here: https://jaydaigle.net/blog/paradigms-and-priors/
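(To make the state-space point concrete, a rough sketch of my own; the coin, the two hypotheses, and the numbers are all invented for illustration, not taken from the thread or the linked post.)

```python
# Bayes tells you how to move probability around *within* a hypothesis space, but
# says nothing about how that space was chosen in the first place.
import random

# Model a coin with only two hypotheses about its heads-probability:
p_heads_given_h = {"fair (0.5)": 0.5, "heads-biased (0.9)": 0.9}
posterior = {"fair (0.5)": 0.5, "heads-biased (0.9)": 0.5}  # uniform prior

def bayes_update(posterior, p_heads_given_h, saw_heads):
    """One step of Bayes' rule over the fixed hypothesis space."""
    unnormalised = {
        h: posterior[h] * (p if saw_heads else 1 - p)
        for h, p in p_heads_given_h.items()
    }
    total = sum(unnormalised.values())
    return {h: v / total for h, v in unnormalised.items()}

# Observe 100 flips of a coin that is actually tails-biased: P(heads) = 0.1.
random.seed(0)
for _ in range(100):
    posterior = bayes_update(posterior, p_heads_given_h, random.random() < 0.1)

print(posterior)
# The posterior piles onto "fair", the least-wrong option available; "tails-biased"
# can never win because it was never in the state space. Picking (or revising) the
# space is the step the probabilistic machinery itself doesn't cover.
```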
I agree that things like picking your state space aren't discussed much on LW, but probabilistic uncertainty seems different from what I thought Mind Projection was talking about. I thought it (and many other posts) was more about "all classifications are value-laden".
Yes, when he's talking in general, non-technical terms he can be quite good on theory-ladenness of observation. Another nice example is here: https://www.lesswrong.com/posts/GKfPL6LQFgB49FEnv/replace-the-symbol-with-the-substance. But then on the next page he'll talk about Bayes in a way that strongly implies that it solves the problem.