None of the current visions for AI x-risk mitigation stands a chance of preventing things from going wrong, or of stopping them once they start going wrong. (This holds true regardless of what probability you assign to an apocalyptic AI scenario.)
-
Agreed, there already exist powerful self-modifying computer worms (Stuxnet, discovered in 2010, is one example). These tools, although not sentient in any way, still pose a significant threat in themselves, with the world relying ever more on connected systems.
-
This is really not the kind of thing that I am concerned about in this context.