Rob Bensinger
@robbensinger
Comms czar. RT = increased vague psychological association between myself and the tweet.
Berkeley, California · nothingismere.com · Joined November 2008

Rob Bensinger's Tweets

Pinned Tweet
"Bad take" bingo cards are terrible, because they never actually say what's wrong with any of the arguments they're making fun of. So here's the "bad AI alignment take bingo" meme that's been going around... but with actual responses to the "bad takes"!
Image
Is there any extant novel philosophy text from after Boethius (AD 524) and before the Carolingians (AD ~800)?
Quote Tweet
they didn't exist and weren't important twitter.com/bad_histories/…
Acknowledging the issue's importance, raising the alarm, and getting to work solving AGI alignment and nonproliferation, all while continuing to speak frankly and normally about the topic. Because we don't have to choose one or the other, and the 12D chess options have failed us.
Not slipping into weird guarded language because it feels like an important geopolitics topic. Not getting hyper-conservative (or unrigorous) about what you're willing to think about because AI sounds too sci-fi to you.
I think that if humanity treated smarter-than-human AI the way we treat ordinary everyday topics, we'd be in a pretty good place. One of my top goals is just to encourage more ordinary thinking and talking about AI. (A more ambitious goal than it maybe sounds.)
Basic orientation to what is and isn't easy about AI alignment requires the first list. I didn't skip this step; others tried to skim my notes; and the ones who don't skim my notes seem to have no clue whatsoever. This by itself seems to me to explain most disagreement.
Quote Tweet
Replying to @IAmTimNguyen
Evolutionary psychology integrates biology, anthropology, primatology, genetics, neuroscience, cognitive science, economics, decision theory, and game theory, with connections to most of the social sciences, humanities, law, medicine, and policy. FWIW.
"Georgetown Uni's Center for Security and Emerging Technology is accepting applications for AI Safety / AI Assurance research grants. They offer up to $750k per project accepted, expended over 6-24 months. 1-2 page expression of interest due Aug 1." "
๐—ง๐—ต๐—ฒ ๐—ฝ๐—ฎ๐˜€๐˜ ๐Ÿฒ ๐—บ๐—ผ๐—ป๐˜๐—ต๐˜€: โ€œOf course, we wonโ€™t give the AI internet accessโ€ ๐˜”๐˜ช๐˜ค๐˜ณ๐˜ฐ๐˜ด๐˜ฐ๐˜ง๐˜ต ๐˜‰๐˜ช๐˜ฏ๐˜จ: ๐Ÿคช โ€œOf course, weโ€™ll keep it in a boxโ€ ๐˜๐˜ข๐˜ค๐˜ฆ๐˜ฃ๐˜ฐ๐˜ฐ๐˜ฌ: ๐Ÿ˜œ โ€œOf course, we wonโ€™t build autonomous weaponsโ€ ๐˜—๐˜ข๐˜ญ๐˜ข๐˜ฏ๐˜ต๐˜ช๐˜ณ: ๐Ÿ˜š โ€œOf course, weโ€™ll coordinate andโ€ฆย Show more
I personally know a good percentage of the ~300 people working on AI x-risk, and whether or not their beliefs are correct, ~everyone I know is authentically motivated. Many have sacrificed happiness and relationships to work on this problem for what they expect to be the rest of their lives.
"Having more ideologically pro-safety AI designers win an โ€˜arms raceโ€™ against less concerned teams ๐—ถ๐˜€ ๐—ณ๐˜‚๐˜๐—ถ๐—น๐—ฒ ๐—ถ๐—ณ ๐˜†๐—ผ๐˜‚ ๐—ฑ๐—ผ๐—ปโ€™๐˜ ๐—ต๐—ฎ๐˜ƒ๐—ฒ ๐—ฎ ๐˜„๐—ฎ๐˜† ๐—ณ๐—ผ๐—ฟ ๐˜€๐˜‚๐—ฐ๐—ต ๐—ฝ๐—ฒ๐—ผ๐—ฝ๐—น๐—ฒ ๐˜๐—ผ ๐—ถ๐—บ๐—ฝ๐—น๐—ฒ๐—บ๐—ฒ๐—ป๐˜ ๐—ฒ๐—ป๐—ผ๐˜‚๐—ด๐—ต ๐˜€๐—ฎ๐—ณ๐—ฒ๐˜๐˜† ๐˜๐—ผ ๐—ฎ๐—ฐ๐˜๐˜‚๐—ฎ๐—น๐—น๐˜† ๐—ป๐—ผ๐˜ ๐—ฑ๐—ถ๐—ฒ" -โ€ฆย Show more
Image
I'm happy to start out by providing a lot of technical reasons to expect that we can't align AI built on anything remotely like current techniques, and reasonable arguments for why unaligned superintelligence would be expected to kill us. I don't accept an infinite and…
Ever wonder why it's harder and harder to get an appointment with a doctor? It's because some folks decided to create a shortage. (I don't know why this started making the rounds again, but it's still very relevant.)
I think this long list of signatories, including the leadership teams of all the biggest labs, the founders of the field, and hundreds of academics and researchers, shows that to be false
My sense is that a lot of people previously thought that "seriously worrying that AI may destroy the entire human race" was more niche than this. But I haven't seen many people voice their surprise on Twitter. Were you surprised?
Quote Tweet
Today many of the key people in AI came together to make a one-sentence statement on AI risk: 1/n safe.ai/statement-on-a
Image
There is so much dissonance in AI right now:
😕 We're risking human extinction...
🤠 So pleased to release our new paper, many numbers going up! 📈🔥
🌎 World leaders: we need to treat it like nuclear nonproliferation ☢️
🚨 New model and code release! Woop! 🎉