I think this is semi-deliberate: they found that framing things as “maps” instead of “representations” clarified their thinking considerably, so they went with it. Indeed, it does make the story much more precise & tractable, at the cost of making it much more wrong.
Replying to @Meaningness @drossbucket
The essay undermines this by pointing out that even literal maps don’t work anything like the way LW uses the word. There’s tons of nebulosity in there, not just uncertainty or imprecision. (But less nebulosity than with most representations.)
Replying to @Meaningness @drossbucket
Maybe this idea about “entanglement” and “mutual information” could focus the criticism a bit? This seems to underpin Yudkowsky’s general conception of representation. pic.twitter.com/i0uqS3McCh
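For concreteness, here is a minimal sketch of the formal idea behind “entanglement”: Shannon mutual information between a world state and a map state. This is my own illustration, not from the thread; the joint distribution and all names below are invented, and it assumes Yudkowsky’s usage matches the standard information-theoretic definition.

```python
import math

# Toy joint distribution p(t, m): world state T in {rain, dry},
# map/belief state M in {rain, dry}. The map agrees with the
# world 90% of the time. (Numbers are invented for illustration.)
joint = {
    ("rain", "rain"): 0.45,
    ("rain", "dry"):  0.05,
    ("dry",  "rain"): 0.05,
    ("dry",  "dry"):  0.45,
}

def marginal(joint, axis):
    """Sum the joint distribution over the other variable."""
    out = {}
    for pair, p in joint.items():
        out[pair[axis]] = out.get(pair[axis], 0.0) + p
    return out

p_t = marginal(joint, 0)  # p(t)
p_m = marginal(joint, 1)  # p(m)

# I(T; M) = sum over (t, m) of p(t, m) * log2(p(t, m) / (p(t) * p(m)))
mi = sum(
    p * math.log2(p / (p_t[t] * p_m[m]))
    for (t, m), p in joint.items()
    if p > 0
)
print(f"I(T; M) = {mi:.3f} bits")  # about 0.531 of a possible 1 bit
```

On this picture, a “map” represents the territory exactly to the extent that the two variables are statistically entangled; the thread’s criticism is that real representation involves much more than this one number captures.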
Replying to @JakeOrthwein @Meaningness
I only vaguely know this particular post, but 3 years ago I got the idea that EY had a coherent story on representation and I just had to work out what it was. So, god help me, I ended up reading a pile of Sequences posts, Arbital pages, and ancient PDFs...
Replying to @drossbucket @JakeOrthwein
... I was motivated by the fact that sometimes (as in your screenshot) he shows an understanding that representation should ground out in interaction somewhere. But exactly how he wants that to work is hugely contradictory across his writing, and sometimes flat-out absent...
Replying to @drossbucket @JakeOrthwein
Never wrote it up properly (and I'd write it differently now anyway), but here are some rough notes; tl;dr, it's a mess: https://drossbucket.com/newsletters/march-2018/ I never got to the mutual information stuff, which would only add to the mess :)
Replying to @drossbucket @JakeOrthwein
y/n? 1. The LessWrong/etc. account of symbols/concepts/reality doesn’t say where the concepts/ontology come from. 2. Where the concepts/ontology come from is the only hard or interesting part. [...] N. Therefore, the LW account is not just wrong but completely wrong and also bad.
Replying to @meditationstuff @JakeOrthwein
Yep basically agree with 1 and 2 - figuring out axes for your clusterspace is the hard part. Dunno about completely wrong but certainly very limited if it has little to say about the hard part!
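As a toy illustration of why axis-finding is the hard part: the clusters you see depend entirely on which feature axes you adopt, and nothing in the formalism tells you where those axes come from. A minimal sketch (invented data; PCA chosen here only as the simplest off-the-shelf axis-finding method, not as anything the thread endorses):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two "natural kinds" as Gaussian blobs in a 2-D feature space.
# (Data is invented for illustration.)
blob_a = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(100, 2))
blob_b = rng.normal(loc=[3.0, 3.0], scale=0.5, size=(100, 2))
points = np.vstack([blob_a, blob_b])

# Principal axes: eigenvectors of the covariance matrix.
centered = points - points.mean(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.cov(centered, rowvar=False))

# Projecting onto the top axis separates the blobs cleanly...
top_axis = eigvecs[:, np.argmax(eigvals)]
proj = centered @ top_axis
print("means along top axis:", proj[:100].mean(), proj[100:].mean())

# ...but PCA only found this axis because the blobs were already
# separable in the features we chose. Choosing the features (the
# "axes for your clusterspace") is the step the account leaves out.
```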
Replying to @drossbucket @JakeOrthwein
Ok. And/but it seems that people are having at least the experience of getting tremendous *epistemological*-feeling *usefulness* out of being exposed to the map/territory distinction, and I think we need an explanation for that? Seems more than any-port-in-a-storm or sociological.
Replying to @meditationstuff @drossbucket
My theory (explained upthread) is that this does genuinely dramatically simplify, and thereby clarify, your thinking.
Unfortunately, it does that only by making most of the complexity of real-world representation invisible, which means you are frequently wrong and don’t have the necessary tools to debug when your wrongness collides with reality.
Replying to @Meaningness @drossbucket
If not too socially awkward, do you have a sense of where LW’ers get stuck on real-world problems? Speaking extremely generally, where “they” are not “rigid” (to my mind), their Qs & As for many real-world topics seem very good. “Explicit abstract contradictions” seems like the wrong crit.
Replying to @meditationstuff @Meaningness
In my view, the rationalist failure case is the failure case of the vast majority of philosophy: optimizing for abstract rectitude rather than empirical ROI.