Thanks for the replies! Yeah, I've read the first bit of @everytstudies' podcast transcript and liked the social constructivist example too (it also reminds me I should read some of those people). I agree that the transition could be made easier...
e.g. by showing what's on the other side of pomo ideas. I think the transition would still be fairly difficult even without the hostility though, so maybe we disagree on that.
Maybe? My main grasp of rationalism is LW rationalism, which I realize is slightly different from David's main critique - but still, when I started reading the Sequences, a lot of it felt like a scientific explanation of that pomo stuff my humanities friend talked about.
Hm, yes OK that is surprising to me, I didn't get anything like that out of the LW sequences (I've only read parts, and relatively recently). Which bits are you thinking of, out of interest?
It's been a long time since I read them, but lots of stuff that seemed to be saying things like "judgments are in the mind of the interpreter rather than being objective facts of the world", e.g. https://www.readthesequences.com/Mind-Projection-Fallacy
Ah, OK, thanks. This probably gets into more than we can really go into on Twitter, but this seems to be mostly about probabilistic uncertainty, which is pretty well covered by what-I'd-call-rationality already...
What-I'd-call-metarationality also includes the question of how you pick your state space in the first place, so that you can even start to apply probabilistic methods... which LW rarely goes into as far as I can see (apart from unusable-in-practice stuff like Solomonoff induction).
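To make that distinction concrete, here's a minimal sketch (Python, with made-up hypotheses and data) of the part that is well covered: once the state space is written down, Bayesian updating is mechanical. Where the list of hypotheses comes from is the prior, harder question.

```python
# A toy Bayes update over a FIXED hypothesis space. All names and numbers
# here are illustrative, not from the thread. The updating below is the
# well-understood part; choosing `hypotheses` in the first place is the
# state-space question that mostly goes undiscussed.

hypotheses = {           # each hypothesis: the coin's P(heads)
    "fair": 0.5,
    "biased_heads": 0.8,
    "biased_tails": 0.2,
}
prior = {h: 1.0 / len(hypotheses) for h in hypotheses}

def update(belief, heads):
    """One Bayesian update on a single observed coin flip."""
    unnorm = {h: belief[h] * (p if heads else 1 - p)
              for h, p in hypotheses.items()}
    total = sum(unnorm.values())
    return {h: v / total for h, v in unnorm.items()}

belief = prior
for flip in [True, True, False, True, True]:   # observed data
    belief = update(belief, flip)
print(belief)   # posterior concentrates on "biased_heads"

# If the true process isn't in `hypotheses` at all (say the coin strictly
# alternates), no amount of updating can ever propose it: that's the
# "picking your state space" problem.
```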
This post gets into the distinction in a better way than I'm going to manage here: https://jaydaigle.net/blog/paradigms-and-priors/
I agree that things like picking your state space aren't discussed much on LW, but probabilistic uncertainty seems different from what I thought Mind Projection was talking about. I thought it (and many other posts) was more about "all classifications are value-laden".
Yes, when he's talking in general, non-technical terms he can be quite good on the theory-ladenness of observation. Another nice example is here: https://www.lesswrong.com/posts/GKfPL6LQFgB49FEnv/replace-the-symbol-with-the-substance But then on the next page he'll talk about Bayes, in a way that strongly implies that Bayes solves the problem.
E.g. the one you posted looks promising, but on the very next page we're straight back into the Bayes stuff. He even got the 'Mind Projection Fallacy' tag from Jaynes! https://www.readthesequences.com/Probability-Is-In-The-Mind
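For contrast, here's a rough sketch of the Jaynes "probability is in the mind" point being referenced (the 90%-reliable sensor is invented for illustration): two agents assign different probabilities to the same already-settled coin flip, because the probability tracks each agent's information rather than the coin itself.

```python
import random

# The coin has already landed; its face is a fact about the world. Each
# agent's "probability of heads" reflects only that agent's information.

random.seed(0)
N = 100_000
flips = [random.random() < 0.5 for _ in range(N)]   # True = heads

# Agent A knows only that the coin is fair:
p_A = 0.5

# Agent B also sees a sensor that reports the true face 90% of the time.
def sensor(heads):
    return heads if random.random() < 0.9 else not heads

readings = [(h, sensor(h)) for h in flips]

# By Bayes: P(heads | sensor says heads) = 0.9*0.5 / (0.9*0.5 + 0.1*0.5) = 0.9.
# Check that empirically:
outcomes = [h for h, s in readings if s]
p_B = sum(outcomes) / len(outcomes)

print(f"A: {p_A:.2f}   B: {p_B:.2f}")   # same coin, different probabilities
```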
The interpretation I had was that he focused on Bayes because that's what you could usefully tackle with formal methods, with things like generating the original ideas to test being left to less-understood machinery in the brain.
Something like "we don't understand hypothesis generation / picking the state space very well so we'll mostly treat it as a black box & take it as a given, and focus on the things that we do understand how to reason about". (with some exceptions such as https://www.lesswrong.com/posts/X2AD2LgtKgkRNPj2a/privileging-the-hypothesis)
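A back-of-envelope illustration of why that black box carries so much weight (the numbers are arbitrary, chosen only to make the point): singling out one hypothesis from a large space costs far more bits than any single observation provides, so most of the selection work happens before explicit updating starts.

```python
import math

# Rough arithmetic behind "privileging the hypothesis". Treat a hypothesis
# as any distinct 100-bit description; the number 100 is arbitrary.
n_hypotheses = 2 ** 100
bits_to_locate = math.log2(n_hypotheses)   # = 100 bits

# A maximally informative yes/no observation supplies at most 1 bit, so
# merely raising one specific hypothesis to attention corresponds to
# ~100 observations' worth of selection, done by the "black box" before
# any explicit Bayesian updating begins.
print(f"~{bits_to_locate:.0f} bits needed just to single out one hypothesis")
```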