None of the current visions for AI x-risk mitigation have a chance of preventing things from going wrong, or of stopping them once they do. (This holds true regardless of what probability you assign to an apocalyptic AI scenario.)
Replying to @Plinz
Do you have a stance on whether an effective strategy can exist? And if so, whether it can be arrived at by humans?
Replying to @sableRaph
Even if Max Tegmark writes more resolutions (which I support!) and MIRI creates a hit team that assassinates all AGI researchers who appear to be close to releasing Armageddon, and EA funds it, it won't stop the search.
Replying to @Plinz @sableRaph
That does not mean no solution exists, but I currently don't see one.
3:38 PM - 14 Jun 2018