Steven Byrnes
@steve47285
Researching Artificial General Intelligence Safety, mostly via thinking about neuroscience and algorithms. Also @SteveByrnes@sigmoid.social
Boston, USA · sjbyrnes.com/agi.html · Joined October 2013

Steven Byrnes’s Tweets

Lots of people talk a big talk about building tools to accelerate AI alignment research, but these folks have been successfully doing just that for years. Dunno much about their other projects, but I’m very happy with how they run alignmentforum.org & lesswrong. Donated!
Quote Tweet
Lightcone Infrastructure is looking for funding. We run LessWrong.com and the AI Alignment Forum, wrote a lot of the code behind the EA Forum, and recently built a lot of in-person infrastructure for people working on reducing existential risk lesswrong.com/posts/9iDw6ugM 🧵
If you think that this statement will lead to so much global focus on AI extinction risk that we entirely forget about racism & climate change & doing the dishes… then I have some bad news for you about how things are going on the “pandemic & nuclear war prevention” front 😂🤦
Hey I’d love to sign your statement but none of these options work for my situation, please add a third option “I fancy myself a notable figure & AI scientist”, thanks
Here are two random examples (one from each side of the debate) of people unhelpfully arguing about policy, in a context where the real argument is whether there’s any problem in the first place. (5/5) twitter.com/steve47285/sta
Quote Tweet
Replying to @GaryMarcus
this is basically a version of Pascal's wager... we don't know the probability of going to Hell if you don't believe in God, but the outcome is so bad that the rational thing to do is to believe in God. What's the counterargument?
For my part, I think AI will probably kill everyone, and I also think that every proposed policy is awful, or inadequate, or (most often) both. Ditto for not passing any policy. If I ever figure out which of many bad options is the LEAST bad, I will advocate for it. 🤷 (4/5)
A very stupid version of this dynamic is when Y says “X’s policy is terrible, therefore AI won’t kill everyone.” An even stupider version is when Y says “If X believed that, they’d support policy Z, which is awful, therefore AI won’t kill everyone.” (3/5)
Quote Tweet
if you really believe AI timelines are so short why don't you do [insane thing that doesn't make any sense]?
2. Mixing up disagreement about the problem, with disagreement about policy solutions. If X thinks future AI will probably kill everyone, and Y thinks it almost definitely won’t, then obviously X & Y will disagree about what to do about that! (2/5)
Two unhelpful dynamics in public discussion of AI x-risk: 1. Mixing up “risk from LLMs” with “risk from future AI”. Reasonable people disagree on whether future LLMs could kill every human, but there’s a much stronger case that SOME future AI could. (1/5)
I keep a compose key shortcut list pinned to my wall and occasionally update it. The changes tell a funny story where I keep cutting out technical symbols in favor of emojis 😂 Weirdly, I only just realized I could add shortcuts for my email & phone number!! sjbyrnes.com/unicode.html
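For anyone curious what such a list looks like under the hood: on Linux/X11, custom compose sequences live in `~/.XCompose`. Here is a minimal sketch with hypothetical entries — the email address and symbols below are placeholder examples, not the actual shortcuts from the linked page:

```
# ~/.XCompose — hypothetical example entries (placeholders, not the real list)
include "%L"                       # keep the system-default compose sequences

<Multi_key> <t> <m>     : "™"      # Compose, t, m  → trademark symbol
<Multi_key> <colon> <D> : "😃"     # Compose, :, D  → emoji shortcut
<Multi_key> <e> <m> <l> : "me@example.com"   # expand to a (placeholder) email
```

Text-expansion entries like the last line are what makes email/phone shortcuts possible: a compose sequence can emit an arbitrary string, not just a single character.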
If anyone has seen any interesting / bold theorizing about what’s going on in the brain in Cluster B Personality Disorders (borderline, narcissistic, antisocial), please share. Even if you think it’s wrong—share it anyway. Thanks in advance!
I don’t have much forecasting track record, but I did once win a college dorm March Madness bracket contest in 2005. I don’t follow sports so I found an algorithmic prediction online & copied its answers. As cosmic justice, my prize was basketball tickets that I didn’t want. 😛
🤦 So, I made a meme 2 wks ago. I was gonna tweet it, but I said to myself: “Now Steve: This kind of snarky crap will impress the people who already agree with you, & annoy the people who don’t. You’re better than that! Don’t strawman the people you’re trying to gently win over…
Quote Tweet
Summary of argument against AI safety
FYI: When neuroscientists talk about the “nucleus accumbens shell”, it’s “shell as in hard taco shell”, not “shell as in egg shell”. It doesn’t wrap all the way around. I guess the internal capsule is the shredded cheese? 🤔 Thx 🌮🌮🌮
RE the FLI letter: (1) I think very dangerous AGI will come eventually, and that we’re extremely not ready for it, and that we’re making slow but steady progress right now on getting ready, and so I’d much rather it come later than sooner. (2) It’s hard to be super-confident,…
I’m gearing up to mourn the (likely) upcoming dilution-to-meaninglessness of the previously-useful term “AGI” 😠
I was *slightly* early on the LLM bandwagon — e.g. here’s Baby Steve in Aug 2019 lesswrong.com/posts/EMZeJ7vp — but then I jumped right back off the LLM bandwagon shortly thereafter, and right now I seem to have less credence in “LLM→AGI” than like 90% of AGI alignment people 😛
Quote Tweet
who saw LLMs coming? e.g. decades (or even 5+ years) ago, X said: when machine learning systems have enough compute and data to learn to predict text well, this will be a primary path to near-human-level AI.
Show this thread
USA invented nuclear weapons on the false belief that they were in a close race with Germany, and massively stockpiled them on the false belief that they were in a close race with Russia. Now I hear USA is in a close race with China on AI. Are we SURE??
Quote Tweet
Whether or not this concrete example is true, it's crazy to me that people have somehow memed themselves into believing one of the most authoritarian and conformist countries in the world will radically race ahead to society upending technology. twitter.com/blader/status/…
I find it a nice illustration that life on earth has probably barely scratched the surface of a much much larger space of all possible nanotech replicators. [pic is from the book "Transformer" by Nick Lane] (2/2)
Pretty wild that (according to one theory) there were iron-sulfur minerals catalyzing reactions in hydrothermal vents as life began billions of yrs ago; and to this day similar iron-sulfur structures—now inside a protein scaffold—are catalyzing similar reactions in my body (1/2)