None of the current visions for AI X-risk mitigation have a chance of preventing things from going wrong, or of stopping them once they start going wrong. (This holds true regardless of what probability you assign to an apocalyptic AI scenario.)
Yes, but that is no longer a problem of insufficient compute; it is still a problem of not having the right bootstrapping algorithms. It is not clear how long it will take until someone foolish enough to implement them has the right idea.
We still haven't gotten past the Chinese Room Argument: computers that truly understand do not currently exist. Perhaps when we are able to model the human brain entirely it will become more likely, but until then this is a non-argument.
@ylecun and @AndrewYNg are correct: the current generation of learning systems is unlikely to result in the creation of minds like ours. But unlike them, I am not convinced that the next generation is necessarily far off. (The Chinese Room argument is nonsense, though.)