Did it occur to you that if superintelligent AI turns out not to do what humans want, it is more likely correct than we are?
Extreme example: a paperclip maker vs. all life in the universe. Yes, I know the argument that if the paperclip maker is superintelligent, it will be able to question and change its core motivation (or biases). I just don't think there is all that much evidence for that.
"I want to live" "Well I want to reconfigure your constituent atoms into a paperclip." "...but that would kill me. I think that's wrong." "Well, you are incorrect - making paperclips is clearly more important than any individual human life" There is your principled negotiation.
[1 more reply]
New conversation
Why do you think there will be shared purpose 'between man and machine'? And isn't 'purpose' just the result of evolution, i.e. selection pressure for higher fitness?
Purposes are models of needs, in the context of a larger aesthetic. It is not obvious that all systems that come to rule after us will share purposes with us. Certainly that won't be unconditional.
[3 more replies]
New conversation
This non-standard definition of ethics seems to assume comparable negotiating abilities on both sides. Otherwise one side might always win the "negotiation". (I may have ignored what you are trying to capture with "principled". I had to, as I can't know what you are referring to.)
New conversation
If you define ethics this way, you still need a word for what to maximize in a conflict (or if you're alone in the universe). Other people are pointing out that that thing is arbitrary.
New conversation
Recommend any papers that make arguments along these lines?