If done right, self-improving AI has the potential to end all suffering by ending conscious life on the planet. But how can we make sure that the AI is safe and sterilizes the planet thoroughly enough that no new suffering, which we would be helpless to prevent, ever springs up again?
A learning signal becomes phenomenological pain only when it is presented to an inner representation of the system as something that can experience pain. A mind can never experience reality itself, only models of it.
-
This also means that by changing the way you represent your self, you can eliminate the phenomenological impact of pain. More generally, a highly aware and self-modifying mind gains the freedom to decide how it wants to experience the world. Humans have the potential to do that.
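A minimal toy sketch of the idea above, assuming nothing beyond the thread itself: a learning signal always updates behavior, but it only registers as "pain" when it is routed through a self-model that represents the agent as something capable of suffering. The class and attribute names (Agent, SelfModel, can_suffer) are hypothetical and exist only for this illustration.

```python
from dataclasses import dataclass


@dataclass
class SelfModel:
    """The agent's representation of itself, which it is free to modify."""
    can_suffer: bool = True


class Agent:
    def __init__(self, self_model: SelfModel):
        self.self_model = self_model
        self.policy_error = 0.0  # accumulated learning signal

    def receive_signal(self, signal: float) -> str:
        # The learning signal always does its job of driving adaptation...
        self.policy_error += signal
        # ...but it is rendered as phenomenological pain only if the self-model
        # presents the agent to itself as something that can experience pain.
        if signal < 0 and self.self_model.can_suffer:
            return "pain"
        return "neutral update"


agent = Agent(SelfModel(can_suffer=True))
print(agent.receive_signal(-1.0))   # "pain"

# Changing the self-representation removes the phenomenological impact,
# while the underlying learning signal still updates the policy.
agent.self_model.can_suffer = False
print(agent.receive_signal(-1.0))   # "neutral update"
```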