None of the current visions for AI X-risk mitigation have a chance of preventing things from going wrong, or of stopping them once they start going wrong. (This holds true regardless of what probability you assign to an apocalyptic AI scenario.)
-
That does not mean no solution exists, but I currently don’t see one.