"[I]f I think this objective stance is not only possible but desirable-- then what I am teaching him to do is to self-observation, I am training him to examine his own actions and thoughts as if he were a neutral person inside his own mind. But that other person would be me."
sometimes a person writes a sentence that just lodges itself permanently inside of me and that's one of them. that's one of the harpoons sticking out of this little baby right here. "but that other person would be me"
this happened in january 2015 which means i had been hanging around the bay area rationalists for about 2.5 years at that point. 2.5 years and i was writing about one of the most meaningful experiences of my life like an entomologist taking notes on a particularly interesting bug
some people understand immediately when i try to explain what it was like to be fully in the grip of the yudkowskian AI risk stuff, and for some people it doesn't seem to land at all, which is probably good for them and i wish i had been so lucky
when i came across lesswrong as a senior in college i was in some sense an empty husk waiting to be filled by something. i had not thought, ever, about who i was, what i wanted, what was important to me, what my values were, what was worth doing. just basic basic stuff
the Sequences were the single most interesting thing i'd ever read. eliezer yudkowsky was saying things that made more sense and captivated me more than i'd ever experienced. this is, iirc, where i was first exposed to the concept of a *cognitive bias*
i remember being horrified by the idea that my brain could be *systematically wrong* about something. i needed my brain a lot! i depended on it for everything! so whatever "cognitive biases" were, they were obviously the most important possible thing to understand
"but wait, isn't yud the AI guy? what's all this stuff about cognitive biases?"
the reason this whole fucking thing exists is that yud tried to talk to people about AI and they disagreed with him and he concluded they were insane and needed to learn how to think better
so he wrote a ton of blog posts and organized them and put them on a website and started a whole little subculture whose goal was - as coy as everyone wanted to be about this - *thinking better because we were all insane and our insanity was going to kill us*
it would take me a long time to identify this as a kind of "original sin" meme. one of the most compelling things a cult can have is a story about why everyone else is insane / evil and why they are the only source of sanity / goodness
a cult needs you to stop trusting yourself. this isn't a statement about what any particular person wants. the cult itself, as its own aggregate entity, separate from its human hosts, in order to keep existing, needs you to stop trusting yourself
yud's writing was screaming from the rooftops in a very specific way: whatever you're doing by default, it's bad and wrong and you need to stop doing it and do something better hurry hurry you idiots we don't have time we don't have TIME we need to THINK
i had no defenses against something like this. i'd never encountered such a coherent memeplex laid out in so much excruciating detail, and - in retrospect - tailored so perfectly to invade my soul in particular. (he knew *math*! he explained *quantum mechanics* in the Sequences!)
an egg was laid inside of me and when it hatched the first song from its lips was a song of utter destruction, of the entire universe consumed in flames, because some careless humans hadn't thought hard enough before they summoned gods from the platonic deeps to do their bidding
yud has said quite explicitly in writing multiple times that as far as he's concerned AI safety and AI risk are *the only* important stories of our lifetimes, and everything else is noise in comparison. so what does that make me, in comparison? less than noise?
an "NPC" in the human story - unless, unless i could be persuaded to join the war in heaven myself? to personally contribute to the heroic effort to make artificial intelligence safe for everyone, forever, with *the entire lightcone* at stake and up for grabs?
i didn't think of myself as knowing or wanting to know anything about computer science or artificial intelligence, but eliezer didn't really talk in CS terms. he talked *math*. he wanted *proofs*. he wanted *provable safety*. and that was a language i deeply respected
yud wrote harry potter and the methods of rationality on purpose as a recruitment tool. he is explicit about this, and it worked. many very smart people who were very good at math were attracted into his orbit by what was in retrospect a masterful act of hyperspecific seduction
i, a poor fool unlucky in love, whose only enduring solace in life had been occasionally being good at math competitions, was being told that i could be a hero by being good at exactly the right kind of math. like i said, could not have been tailored better to hit me
i don't know how all this sounds to you but this was the air i breathed starting from before i graduated college. i have lived almost my entire adult life inside of this story, and later in the wreckage it formed as it slowly collapsed
the whole concept of an "infohazard" comes from lesswrong as far as i know. eliezer was very clear on the existence of *dangerous information*. already by the time i showed up on the scene there were taboos. *we did not speak of roko's basilisk*
(in retrospect another part of the cult attractor, the need to regulate the flow of information, who was allowed to know what when, who was smart enough to decide who was allowed to know what when, and so on and so on. i am still trying to undo some of this bullshit)
traditionally a cult tries to isolate you on purpose from everyone you ever knew before them, but when it came to the rationalists i simply found that i no longer had anything to say to people from my old life. none of them had the *context* about what *mattered* to me anymore
i didn't even move to the bay on purpose to hang out with the rationalists; i went to UC berkeley for math grad school because i was excited about their research. the fact that eliezer yudkowsky would be *living in the same city as me* was just an absurd coincidence
the rationalists put on a meet-and-greet event at UC berkeley. i met anna salamon, then the director (iirc) of the Center for Applied Rationality, and talked to her. she invited me to their rationality workshop in january 2013. and from that point i was hooked
(it's so hard to tell this story. i've gone this whole time without telling you that during this entire period i was dealing with a breakup that, if i stood still too long, would completely overwhelm me with feelings i gradually learned how to constantly suppress)
(this is deeply embarrassing to admit but one reason i always found the simulation hypothesis strikingly plausible is that it would explain how i just happened to find myself hanging around the people i considered to be plausibly the most important in human history)
(because you'd think if our far future descendants were running ancestor simulations then they'd be especially likely to be running simulations of pivotal historical moments they wanted to learn more about, right? and what could be more pivotal than the birth of AI safety?)
god i feel like writing this all out is explaining something that's always felt weird to me about the whole concept of stories and science fiction stories in particular. *i have been living inside a science fiction story written by eliezer yudkowsky*
it didn't happen all at once like this, exactly. the whole memeplex sunk in over time through exposure. the more i drifted away from grad school the more my entire social life consisted of hanging out with other rationalists exclusively. their collective water was my water
it would take me years to learn how to articulate what was in the water. i would feel like i was going insane trying to explain how somehow hanging out with these dorks in the bay area made some part of me want to extinguish my entire soul in service of the greater good
i have been accused of focusing too much on feelings in general and my feelings in particular, or something like that. and what i want to convey here is that my feelings are the only thing that got me out of this extremely specific trap i had found myself in
i had to go back and relearn the most basic human capacities - the capacity to notice and say "i feel bad right now, this feels bad, i'm going to leave" - in order to fight *this*. in order to somehow tunnel out of this story into a better one that had room for more of me
a fun fact about the rationality and effective altruism communities is that they attract a lot of ex-evangelicals. they have this whole thing about losing their faith but still retaining all of the guilt and sin machinery looking for something more... rational... to latch onto
(that's a very irresponsible psychoanalysis of a whole bunch of people i just did there but i've asked some of them about stuff like this, enough that i think i'm not completely making this up)
i really feel like i get it though. i too find that once i've had a taste of what it's like to feel cosmically significant i don't want to give it up. i don't know how to live completely outside a story. i've never had to. i just want a better one and i'm still looking
i was really really hoping i would never have to think about AI ever again, y'know, after all this. seeing AI discourse turn up here was like running into an ex i was hoping never to see again
leaving the rationalists was on some level one of the hardest things i've ever done. it was like breaking up with someone in a world where you'd never heard anyone even describe a romantic relationship to you before. i had so little context to understand what had happened to me
i'm finally getting around to reading your leverage post, i was too scared to read it when it first came out. thank you for writing this 🙏 i don't think any of my experiences were near this intense but there's a family resemblance
i had very little direct interaction with leverage but i knew that they were around. geoff anders taught at my first CFAR workshop. at one point i signed up for belief reporting sessions and signed an NDA saying i wasn't allowed to teach belief reporting