That AGI will help with alignment in ways that are non-obvious to us now, but before it becomes extremely dangerous (vastly superintelligent).
One is that it's predicated on reward/goal-oriented optimizers, which don't seem to be the tech path we're going down. The properties of the things we've actually got are different and less predictable.