possibly worth revisiting the idea of an outcome-based ethics system (instead of an intent-based one)
Fun to read Bostrom for an exploration of potential pitfalls in codifying ethics in machines, e.g. https://www.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/0199678111?ie=UTF8&redirect=true
New conversation -
seems morality is largely an emergent property of theory of mind, much as deception is; "decide" and "itself" seem appropriate.
-
disagree on "decide/itself": it keeps the participants from seeing the problem
-
but with machines increasingly taking safety-critical actions in human contexts (e.g., driving), it's necessary to discuss morality
-
if nothing else, it can be seen as the codification/automated scaled application of a human's (the engineer's) decisions
End of conversation
New conversation -
Society needs to decide whose morality gets encoded into these systems... How do we determine who decides?
-
Initially, insurance underwriters and liability lawyers are going to be the primary authors of what gets encoded, UNLESS...
End of conversation
New conversation -