The argument made, e.g., by Nick Bostrom is rather that they could lack a deeper understanding of their actions and their consequences, and could have different priorities (produce paperclips at all costs), which can lead to rational but unethical actions.
-
Interesting question. Do you think that "autistic AI", i.e. an AI that only optimizes for a low-level reward function, will outperform "sustainable AI", i.e. one that maximizes its expectation horizon?
- 8 more replies
New conversation
There's no way to predict what ethics it comes up with. Anthropomorphizing is a huge mistake, as is thinking humanity is an ethical species in whatever philosophy said unknowable intelligence conceives.
-
I think you may be anthropomorphizing people too much.
- 2 more replies
New conversation
It's not the machines. It's the corporations that use them. Corporations are like animals in Darwinian competition, and animals don't have ethics.
-
My ontology is orthogonal to yours: people, animals, corporations and computers are machines. People are animals. People, animals and corporations are evolutionary agents. Machines have ethics if they implement mechanisms for identifying rules to negotiate cooperative conflicts.
End of conversation
New conversation
But can people who are less ethical than others understand and acknowledge a decision as the more ethical one?
-
You may want to first distinguish ethics from morals: a person who has fewer moral constraints can still be more ethical, and vice versa. Then it becomes apparent that being more ethical may enable one to understand the decisions of others as ethical (even while disagreeing with them).
End of conversation