Top-level takeaway of tonight's catch-up reading on ML...
I still find it hard to take AI Risk seriously as a special problem here. There are real risks, but they don't seem qualitatively different from other kinds of engineering risk.
Even the notionally beyond-good-and-evil ideas (kill all humans as an apathetic side effect of paperclip maximization) fall prey to the trap of means-ends nihilism.
