Lots of people talk a big talk about building tools to accelerate AI alignment research, but these folks have been successfully doing just that for years.
Dunno much about their other projects, but I’m very happy with how they run http://alignmentforum.org & LessWrong.
Donated!
Lightcone Infrastructure is looking for funding. We run http://LessWrong.com and the AI Alignment Forum, wrote a lot of the code behind the EA Forum, and recently built a lot of in-person infrastructure for people working on reducing existential risk https://lesswrong.com/posts/9iDw6ugMPk7pmXuyW/lightcone-infrastructure-lesswrong-is-looking-for-funding
If you think that this statement will lead to so much global focus on AI extinction risk that we entirely forget about racism & climate change & doing the dishes…
then I have some bad news for you about how things are going on the “pandemic & nuclear war prevention” front 😂🤦
I’d love to sign your statement but none of these options work for my situation, please add a third option “I fancy myself a notable figure & AI scientist”, thanks
Here are two random examples (one from each side of the debate) of people unhelpfully arguing about policy, in a context where the real argument is whether there’s any problem in the first place. (5/5) https://twitter.com/steve47285/status/1644006861560070146?s=20
this is basically a version of Pascal's wager… we don't know the probability of going to Hell if you don't believe in God, but the outcome is so bad that the rational thing to do is to believe in God.
What's the counterargument?
For my part, I think AI will probably kill everyone, and I also think that every proposed policy is awful, or inadequate, or (most often) both. Ditto for not passing any policy. If I ever figure out which of many bad options is the LEAST bad, I will advocate for it. 🤷 (4/5)
A very stupid version of this dynamic is when Y says “X’s policy is terrible, therefore AI won’t kill everyone”
An even stupider version is when Y says “If X believed that, they’d support policy Z, which is awful, therefore AI won’t kill everyone” (3/5)
2. Mixing up disagreement about the problem with disagreement about policy solutions. If X thinks future AI will probably kill everyone, and Y thinks it almost definitely won’t, then obviously X & Y will disagree about what to do about that! (2/5)
Two unhelpful dynamics in public discussion of AI x-risk
1. Mixing up “risk from LLMs” with “risk from future AI”. Reasonable people disagree on whether future LLMs could kill every human, but there’s a much stronger case that SOME future AI could. (1/5)
I keep a compose key shortcut list pinned to my wall and occasionally update it. The changes tell a funny story where I keep cutting out technical symbols in favor of emojis 😂
Weirdly, I only just realized I could add shortcuts for my email & phone # !! https://sjbyrnes.com/unicode.html
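For anyone curious how this works: on Linux/X11, compose-key shortcuts live in a plain-text config file, typically ~/.XCompose. Here’s a minimal sketch, assuming a Compose key is already enabled in your keyboard settings; all the key sequences and contact strings below are made-up examples, not my actual list:

```
# ~/.XCompose — minimal sketch (assumes X11 with a Compose key enabled)
# Every sequence and string here is a hypothetical example.
include "%L"                              # keep the system-wide defaults

<Multi_key> <f> <p> : "🤦"                # Compose, f, p → facepalm emoji
<Multi_key> <j> <o> : "😂"                # Compose, j, o → joy emoji
<Multi_key> <e> <m> : "me@example.com"    # Compose, e, m → email address
<Multi_key> <p> <n> : "+1 555 555 5555"   # Compose, p, n → phone number
```

Most X11 applications pick up changes after you restart them; each entry maps a short Compose sequence to an arbitrary output string, which is why it works for emails and phone numbers just as well as for symbols.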
If anyone has seen any interesting / bold theorizing about what’s going on in the brain in Cluster B Personality Disorders (borderline, narcissistic, antisocial), please share. Even if you think it’s wrong—share it anyway. Thanks in advance!
I don’t have much forecasting track record, but I did once win a college dorm March Madness bracket contest in 2005. I don’t follow sports so I found an algorithmic prediction online & copied its answers. As cosmic justice, my prize was basketball tickets that I didn’t want. 😛
🤦 So, I made a meme 2 wks ago. I was gonna tweet it, but I said to myself: “Now Steve: This kind of snarky crap will impress the people who already agree with you, & annoy the people who don’t. You’re better than that! Don’t strawman the people you’re trying to gently win over…
FYI: When neuroscientists talk about the “nucleus accumbens shell”, it’s “shell as in hard taco shell”, not “shell as in egg shell”. It doesn’t wrap all the way around.
I guess the internal capsule is the shredded cheese? 🤔
Thx
RE the FLI letter:
(1) I think very dangerous AGI will come eventually, and that we’re extremely not ready for it, and that we’re making slow but steady progress right now on getting ready, and so I’d much rather it come later than sooner.
(2) It’s hard to be super-confident,…
A couple months ago, EAI started an internal speaker series for alignment researchers. Check out the transcript and slides of the first talk, featuring Steve Byrnes, here:
who saw LLMs coming?
e.g. decades (or even 5+ years) ago, X said: when machine learning systems have enough compute and data to learn to predict text well, this will be a primary path to near-human-level AI.
USA invented nuclear weapons on the false belief that they were in a close race with Germany, and massively stockpiled them on the false belief that they were in a close race with Russia. Now I hear USA is in a close race with China on AI. Are we SURE??
Whether or not this concrete example is true, it's crazy to me that people have somehow memed themselves into believing one of the most authoritarian and conformist countries in the world will radically race ahead to society-upending technology. twitter.com/blader/status/…
I find it a nice illustration that life on earth has probably barely scratched the surface of a much much larger space of all possible nanotech replicators.
[pic is from the book "Transformer" by Nick Lane] (2/2)
Pretty wild that (according to one theory) there were iron-sulfur minerals catalyzing reactions in hydrothermal vents as life began billions of yrs ago; and to this day similar iron-sulfur structures—now inside a protein scaffold—are catalyzing similar reactions in my body (1/2)
TIL a standard test of whether a drug is psychedelic is to give it to mice and see if they twitch their head from side to side.
And apparently people trust this test so much that they’ll describe it as “X has no psychedelic activity” without hedging 🤔 https://en.wikipedia.org/wiki/Head-twitch_response