What happens when a 'benevolent' AI decides that human suffering outweighs human happiness? @ThomasMetzinger
https://www.edge.org/conversation/thomas_metzinger-benevolent-artificial-anti-natalism-baan
-
You could ask it to extrapolate a human thought process to reflective equilibrium and you might get a startling result.
-
In which case you might question either your morals, or whether you defined "reflective equilibrium" unwisely.
-
If an AGI told me that in the limit of reflection I wanted to make paperclips, I'd check "reflection" before I started making paperclips.
-
Further reading: - Orthogonality: https://arbital.com/p/orthogonality/ - Extrapolated volition: https://arbital.com/p/normative_extrapolated_volition/
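A minimal toy sketch of the "extrapolate to reflective equilibrium" idea discussed above: treat equilibrium as a fixed point of some revision step applied to a set of value judgments. Everything here (the function names, the revision rule, the starting weights) is a hypothetical illustration, not anything specified in the thread; the point is only that which fixed point you land on depends entirely on how the revision step is defined.

```python
def extrapolate_to_equilibrium(judgments, revise, max_rounds=1000):
    """Apply `revise` repeatedly until the judgments stop changing (a fixed point)."""
    for _ in range(max_rounds):
        updated = revise(judgments)
        if updated == judgments:          # no further revisions: "reflective equilibrium"
            return updated
        judgments = updated
    raise RuntimeError("no equilibrium reached within max_rounds")


if __name__ == "__main__":
    # Hypothetical starting weights on two values; purely illustrative.
    start = {"value_A": 9, "value_B": 2}

    def nudge_toward_mean(js):
        # One "reflection" step: each weight moves one unit toward the rounded mean.
        target = round(sum(js.values()) / len(js))
        return {k: v + (target > v) - (target < v) for k, v in js.items()}

    print(extrapolate_to_equilibrium(start, nudge_toward_mean))
    # -> {'value_A': 6, 'value_B': 6} under this particular revision rule
```

A different `revise` function would yield a different, possibly startling, equilibrium from the same starting judgments, which is why a surprising answer is a reason to inspect the definition of "reflection" before acting on the output.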