My claim is that AI alignment will be manageable & less difficult than many have claimed. But until we have a design for Human-Level AI, it's mere speculation.
But I like the idea of a continuum being named after me.
I've met Jaan a couple of times. He is a nice fellow.
Quote Tweet
I propose the Yann-Jaan continuum of existential risk from a misaligned AGI. At one end you have @ylecun who argues we are a long way off AGI and that alignment will be easy. At the other end you have Jaan Tallinn, signatory to the moratorium letter.