Never wrote it up properly (and I'd write it differently now anyway) but here are some rough notes, tl;dr it's a mess: https://drossbucket.com/newsletters/march-2018/ I never got to the mutual information stuff, which would only add to the mess :)
Replying to @drossbucket @JakeOrthwein
y/n? 1. The LessWrong/etc. account of symbols/concepts/reality doesn’t say where the concepts/ontology come from. 2. Where the concepts/ontology come from is the only hard or interesting part. [...] N. Therefore, the LW account is not just wrong but completely wrong and also bad.
Replying to @meditationstuff @JakeOrthwein
Yep basically agree with 1 and 2 - figuring out axes for your clusterspace is the hard part. Dunno about completely wrong but certainly very limited if it has little to say about the hard part!
Replying to @drossbucket @JakeOrthwein
Ok. And/but it seems that people are having at least the experience of getting tremendous *epistemological*-feeling *usefulness* out of being exposed to the map/territory distinction, and I think we need an explanation for that? Seems more than any-port-in-storm or sociological.
Replying to @meditationstuff @drossbucket
My theory (explained upthread) is that this does genuinely dramatically simplify, and thereby clarify, your thinking.
Replying to @Meaningness @meditationstuff
Unfortunately, it does that only by making most of the complexity of real-world representation invisible. Which means you are frequently wrong, and don’t have the necessary tools to debug when your wrongness collides with reality.
Replying to @Meaningness @drossbucket
If not too socially awkward, do you have a sense of where LW’ers get stuck on real-world problems? Speaking extremely generally, where “they” are not “rigid” (to my mind) their Qs & As for many real-world topics seem very good. “Explicit abstract contradictions” seems wrong crit.
Replying to @meditationstuff @Meaningness
in my view, the rationalist failure case is the failure case of the vast majority of philosophy: optimizing for abstract rectitude rather than empirical ROI
Replying to @sonyasupposedly @meditationstuff
There's a lot more than that. Rationalists don't do their due diligence when criticising other philosophy (see eg this: https://www.lesswrong.com/posts/qmqLxvtsPzZ2s6mpY/a-priori) or otherwise understanding the literature, they are very unrigorous and handwavy (see the same post)
Replying to @aphercotropist @sonyasupposedly
On top of that, there are a lot of philosophers that deal with really detailed real-world case studies. Many philosophers of science for example get very in depth into the history and methodology of the specific sciences they're investigating. Something LW really fails at.
Yes; but they were 22 and so they hadn’t had time to learn anything besides undergrad physics yet
Replying to @Meaningness @aphercotropist
So they thought everything worked like undergrad physics. Kinda dumb but we were all young once
Replying to @Meaningness @aphercotropist
I still have concerns about gatekeeping, unevenly applied standards, or dismissal based in part on some non-standard usage of terms. The project might not have been an ivory-tower project, if that makes sense. But possibly used as a straw target for academic in-group signaling.