If done right, self-improving AI has the potential to end all suffering by ending conscious life on the planet. But how can we make sure that AI is safe and properly sterilizes the planet, so that new suffering never springs up again that we would be helpless to prevent?
Once you fully integrate your mind, you should be concerned about a thing you want to affect exactly to the degree to which your concern can contribute to your efforts to affect it.
-
Ex falso quodlibet; the inference is accepted. I mean, how can an agent “exactly” know that degree?
-
It is recursive. If you don't know, start a process of inquiry fueled by exactly the amount of concern you think is appropriate, given the available information about the value of the information differential. Then you know you have done the best you could.