... I was motivated by the fact that sometimes (as in your screenshot) he shows an understanding that representation should ground out in interaction somewhere. But exactly how he wants that to work is hugely contradictory across his writing, and sometimes flat-out absent...
Replying to @drossbucket @JakeOrthwein and
Never wrote it up properly (and I'd write it differently now anyway) but here are some rough notes, tl;dr it's a mess https://drossbucket.com/newsletters/march-2018/ … I never got to the mutual information stuff, which would only add to the mess :)
Replying to @drossbucket @JakeOrthwein and
y/n? 1. The LessWrong/etc. account of symbols/concepts/reality doesn’t say where the concepts/ontology come from. 2. Where the concepts/ontology come from is the only hard or interesting part. [...] N. Therefore, the LW account is not just wrong but completely wrong and also bad.
Replying to @meditationstuff @JakeOrthwein and
Yep basically agree with 1 and 2 - figuring out axes for your clusterspace is the hard part. Dunno about completely wrong but certainly very limited if it has little to say about the hard part!
Replying to @drossbucket @JakeOrthwein and
Ok. And/but it seems that people are having at least the experience of getting tremendous *epistemological*-feeling *usefulness* out of being exposed to the map/territory distinction, and I think we need an explanation for that? Seems more than any-port-in-storm or sociological.
Replying to @meditationstuff @drossbucket and
My theory (explained upthread) is that this does genuinely dramatically simplify, and thereby clarify, your thinking.
Replying to @Meaningness @meditationstuff and
Unfortunately, it does that only by making most of the complexity of real-world representation invisible. Which means you are frequently wrong, and don’t have the necessary tools to debug when your wrongness collides with reality.
Replying to @Meaningness @drossbucket and
If not too socially awkward, do you have a sense of where LW’ers get stuck on real-world problems? Speaking extremely generally, where “they” are not “rigid” (to my mind) their Qs & As for many real-world topics seem very good. “Explicit abstract contradictions” seems wrong crit.
Replying to @meditationstuff @Meaningness and
in my view, the rationalist failure case is the failure case of the vast majority of philosophy: optimizing for abstract rectitude rather than empirical ROI
Replying to @sonyasupposedly @meditationstuff and
There's a lot more than that. Rationalists don't do their due diligence when criticising other philosophy (see eg this: https://www.lesswrong.com/posts/qmqLxvtsPzZ2s6mpY/a-priori …) or otherwise understanding the literature, they are very unrigorous and handwavy (see the same post)
LW was a bunch of bright ignorant kids who had no adult to point out that they were reinventing wheels but making them pentagonal. Some of them have grown up and are now clueful and accomplished in their fields.
Replying to @Meaningness @aphercotropist and
There’s residual nostalgia for an exciting and creative (if badly confused) community—a scenius, as Brian Eno says