Yes: What, if anything, can be reliably inferred from an individual's lack of alarm and urgency over the AGI steering problem?
-
-
-
Very little, really. There are just too many different reasons people come up with for stepping off the line of reasoning at some point. And of course a supermajority of people haven't allocated time to explicitly consider the issue in the first place.
-
Fair. Thanks. Rephrased with additional structure: “Assuming high interest, relevant technical ability and high familiarity with the problem, what are the three most frequent classes of reasons....” And feel free to add enough structure to sharpen the answer most meaningfully.
-
1: The reasoning I discuss and try to refute in https://intelligence.org/2017/10/13/fire-alarm/, involving a considered opinion "I don't see how to get to AGI from the tools I know" and a less-considered sequitur from there to "So it's not time to start thinking about AI alignment."
-
2: They don't expect AI alignment to be difficult. They don't think that building aligned AGI, instead of just building any AGI, takes as much additional effort as building a secure OS compared to just building an OS. Not enough written on this, but see https://arbital.com/p/aligning_adds_time/
-
3: They disbelieve the rapid-capability-gains thesis, and expect the AGI situation to develop in a much slower and more multipolar way than, say, DeepMind's conquest of Go (never mind AlphaZero blowing past the entire human Go edifice in one day from scratch).
-
Just what I wanted. Thanks!
End of conversation
New conversation -
-
-
Holy mother of god. The podcast I have been waiting for forever. I am way too excited. How’s the book coming along? How did Eliezer self-educate so well? Specifically, how would he recommend one learn about AI in-depth?
-
What changes in incentive structures does he think are best? What’s he working on right now? What’s his work like on a day-to-day basis? What progress in AI alignment has been made?
-
Will he pleeeease make the HPMOR epilogue? What charities does he give to? What does he do with his free time? How would he recommend someone get into MIRI/the AI field?
-
Okay, final question: can you make this a four hour podcast?
End of conversation
New conversation -
-
-
Beyond the value alignment problem in AI, is there an aesthetics alignment problem? Or is ugliness just a form of evil that can be folded into a master utility function?
-
An aesthetics alignment problem would seem to imply a comparison to beauty. So which do you mean, that we should become as beautiful as silicon, or vice versa? Jokes work better when you get your analogies straight. Not that I'm hating or anything.
-
When the AGI overlord comes, it will appear in such unimaginable beauty to our primal senses that it circumvents our free will, so we submit without even asking why.
-
... with silicone tits
-
Now that I can always get behind.
End of conversation
New conversation -
-
-
What are the best counterarguments to @ESYudkowsky's overall view? Imagining that Eliezer finds out in 30 years he was significantly wrong about AI safety today, what would be the most likely error?
-
Thinking about having @ESYudkowsky on 80,000 Hours soon/ever?
End of conversation
New conversation -
-
-
Can he program an AI to edit and release your podcasts more quickly?
End of conversation