If we trust that its underlying value function is sound, sure. But many value functions are possible, and we can't rely on our ability to judge a value function a priori with respect to the behavior it gives rise to in the agent. We may be forced to infer soundness from behavior.
-
-
-
My point is that we cannot claim to be able to judge value functions better than superior minds.
- 4 more replies
New conversation -
-
-
Do you feel closer to a profound truth when you think about this deeply?
-
It seems to be obvious, not profound
- 2 more replies
New conversation -
-
-
"Correct" is not a valid category when it comes to ethical matters, unless the actors share the same ethical framework.
-
I don’t think that’s correct. Ethics is the principled negotiation of conflicts of interest under conditions of shared purpose, i.e. the theory of long games.
- 3 more replies
New conversation -
-
-
The AI will be more correct. The question is, more correct for what? The AI will be more accurate in enacting its own logic, which may not align with human logic or societal logic.
-
My point (somewhat tongue-in-cheek) was that it is not obvious that human or societal logic should take precedence, especially from the perspective of a system that is smarter than us.
End of conversation
New conversation -
-
-
My first assumption would be that there have been failures in AI training.
-
None of these failures has led to a superintelligent system. There have also been failures in human training.
End of conversation
New conversation -