Looking forward to speaking with @ESYudkowsky today. Any questions for him?
-
-
2: They don't expect AI alignment to be difficult. They don't think that building aligned AGI, instead of just building any AGI, takes as much additional effort as building a secure OS compared to just building an OS. Not enough written on this, but see https://arbital.com/p/aligning_adds_time/
-
3: They disbelieve the rapid-capability-gains thesis, and expect the AGI situation to develop in a much slower and more multipolar way than, say, DeepMind's conquest of Go (never mind AlphaZero blowing past the entire human Go edifice in one day from scratch).
-
Just what I wanted. Thanks!