Looking forward to speaking with @ESYudkowsky today. Any questions for him?
-
-
Fair. Thanks. Rephrased with additional structure: “Assuming high interest, relevant technical ability and high familiarity with the problem, what are the three most frequent classes of reasons....” And feel free to add enough structure to sharpen the answer most meaningfully.
-
1: The reasoning I discuss and try to refute in https://intelligence.org/2017/10/13/fire-alarm/, involving a considered opinion "I don't see how to get to AGI from the tools I know" and a less-considered sequitur from there to "So it's not time to start thinking about AI alignment."
-
2: They don't expect AI alignment to be difficult. They don't think that building aligned AGI, instead of just building any AGI, takes as much additional effort as building a secure OS compared to just building an OS. Not enough written on this, but see https://arbital.com/p/aligning_adds_time/
-
3: They disbelieve the rapid-capability-gains thesis, and expect the AGI situation to develop in a much slower and more multipolar way than, say, DeepMind's conquest of Go (never mind AlphaZero blowing past the entire human Go edifice in one day from scratch).
-
Just what I wanted. Thanks!
End of conversation
New conversation
-
-
Are you going to be co-authoring an AI book with Sam, Eliezer?