Q. Could any of the people worried about AGI possibly be worried about the future?
A. No. They all definitely believe that we have AGI right now based on current technology, which is why I, as a serious AI scientist, can tell you that they’re definitely wrong.
#thelaststrawman https://twitter.com/Medium/status/1081507580365680645
-
-
What do you think of Moravec's paradox? https://en.m.wikipedia.org/wiki/Moravec%27s_paradox
-
Especially about this point: "We should expect the difficulty of reverse-engineering any human skill to be roughly proportional to the amount of time that skill has been evolving in animals."
-
This seems to represent a fundamental misunderstanding of how evolution works. Complexity is certainly not linear in total time spent evolving phenotypical characteristics, even ignoring the problems one has in choosing the "start" of the metric.
-
Do you assume that the complexity of such an artificial system will grow linearly over time?
-
My comment was directed at the word "proportional". But as a separate question, no. There are several expected sources of nonlinearity for the complexity of artificial systems. E.g. hardware improvements changing the search costs for finding design improvements, to name one.
-
Oh, got it. I think the whole of Moravec's paradox says that the difficulty of building an AI system, while proportional to time, is not proportional to its complexity. For example, we can invent abstractions to deal easily with increasing complexity.
-
I think it's best understood as an insight into human psychology: most of what our brains do isn't mentally accessible. Just one more way I expect AI to kick the pants off us, because their self-model can be bit-for-bit perfect. Yay quining!
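The quining remark can be made concrete: a quine is a program whose output is a bit-for-bit copy of its own source, i.e. a perfect self-model. A minimal Python sketch (the two executable lines are the quine; the comments are not part of what it reproduces):

```python
# Classic Python quine: the string s is a template that, when formatted
# with its own repr, yields the full two-line source of the program.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

Running it prints exactly those two lines, so the program's output can be fed back in as a program that again prints itself.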
End of conversation
New conversation -
-
-
Which is why I'm not afraid that the world has been taken over by AI and humans have been eradicated. Except to the extent that I doubt the many reasons physics gives for time travel being impossible. If further discoveries in physics do allow it, then I should be afraid for the present.
-
-
-
"Why should we be worried about 2020 anyway? Check the date, idiots."
-
-
-
Malicious humans are always a danger. The extent to which malevolent humans use technology has to be constantly monitored. They have the ability to advance the state of the art for their own agenda.
-
-
-
So that means AGI for 2025?
-