If goal-alignment means that AI should value my goals, then the future safety-guaranteed AGI won't be able to find a better goal for me.
I agree. But there is an assumption behind parental love: parents age, so they need to teach their children enough skills and knowledge before their demise that the children can live well independently. That is not the case for AGI; it is always there. So I think a