Did it occur to you that if superintelligent AI turns out not to do what humans want, it is more likely to be correct than we are?
Replying to @Plinz
"correct" is not a valid category when it comes to ethical matters. Except if the actors share the same ethical framework.
Replying to @HahTse
I don't think that's correct. Ethics is the principled negotiation of conflicts of interest under conditions of shared purpose, i.e. the theory of long games.
Replying to @Plinz
Extreme example: a paperclip maker vs. all life in the universe. Yes, I know the argument that if the paperclip maker is superintelligent, it will be able to question and change its core motivation (or biases). I just don't think there is much evidence for that.
"I want to live" "Well I want to reconfigure your constituent atoms into a paperclip." "...but that would kill me. I think that's wrong." "Well, you are incorrect - making paperclips is clearly more important than any individual human life" There is your principled negotiation.
Yes, the shared purpose is the prerequisite; otherwise ethics is identical to control.