one thing i will always be grateful to rationalism (among other groups) for is that regardless of the extent to which i agree with them about various things, they took "how to think clearly" to be a core and vital challenge, and it was a joy to partake in that learning https://twitter.com/eigenrobot/status/1442435972470169606
-
EA, HPMOR, AI risk, group houses, polyamory, gimmicky bayesianism, all that was later. maybe there's an extent to which that took up oxygen from the really useful development of a lexicon and culture of trying to think about things better, which felt like the core of things to me
-
Replying to @eigenrobot
I’m confused by the chronology (again). I thought AI risk came first, and the rationality movement spun out from it. EA may have developed around the same time on a parallel track that later merged with the core AI rats? I dunno. I need a postrat to draw me a timeline.
-
Replying to @SandrewFinance @eigenrobot
I think Yudkowsky says he wrote the sequences with the specific intent of arguing why AI risk mattered and was hard. But I don't know that Robin Hanson was ever particularly motivated by that. So some, but not all, early proponents cared a lot about AI risk?
-
i definitely came in via the hanson route so my views may not be representative