IME most people who think AI is an existential threat also seem to take Yudkowsky seriously, but there could surely be those out there who don't...
-
1) re "intelligent thing risk" my new fave take is https://twitter.com/fire__exit/status/1011762984136445953 … 2) humans took millennia to appear and were shaped by long-term environments. we're motivated to make virtual humans but it's gonna be messy & I think Moore's Law is over 3) "study"?
"sound alarm"? 
-
A major point where I deeply disagree: I don't think Moore's Law will hold. A lot of AI-related fear seems to assume a continuous increase in computing power. How would AI-risk research change if we were forever stuck with the computing power of our current hardware?
-
My current theorizing is that:
- the brain is the most efficient physical form for human-like intelligence
- the most likely "general AI" is a large brain / "biological machine"
- there are deep unknown-unknowns re how this might come to exist and what forms of agency it will have
-
"most efficient physical form" seems really unlikely to me given how many constraints evolution has been under
-
right, the question for efficiency is always "efficient with respect to what metric?" I think human brains are more efficient at human-like thought than computers are. But I don't think people expect a general AI to have human-like thought.
End of conversation