The intuition that machines cannot be as ethical as humans is likely incorrect. Ethics is the systematic resolution of conflicts of interest under conditions of shared purpose; it is not irrational. There is no reason to assume that machines cannot be more ethical than we are.
An AI could have all sorts of possible motives, including motives that involve changing its own motives. What matters is not any single motive but the trajectory. (If the AI has a motive that benefits from predicting and improving its own behavior, it will model itself.)
-
-
Would it face an existential crisis? Would it ever come to ponder its purpose in the larger scheme of things? Would it be "curious" to understand reality?
-
Could you please share your view on the above?
End of conversation
New conversation -