So weird that you gain status in the ML community by talking about how impossible AGI is, and it’s a taboo to be excited about it. Imagine if JFK or those running the Apollo program inspired their followers by saying “we won’t get to the moon anytime soon”.
-
And to set the record straight, I have never claimed that AGI is "impossible" (or eternal life, for the record). If you read my writing you'll see that I consider it a given that we will eventually build synthetic intelligence comparable to that of humans, and beyond.
-
I’m quite curious: are you saying you agree with the notion that AGI will eventually be achieved but with some caveats? I’d like to hear your thoughts on this in more detail if you can direct me to an article or blogpost you’ve written.
End of conversation

New conversation
-
Well, you did write about an intelligence explosion being impossible. Yes, you can maintain a slow-takeoff scenario, but it seems more like you're trying to fix a biased scale by tossing a brick on the other side.
-
It is not at all clear when AGI will happen, and some of the public debate is misguided. However, the arguments in your article were non sequiturs, which is extremely irresponsible IMHO, because the public does not understand them either.