Important thread explaining why AI safety isn't a concern: https://twitter.com/SimonDeDeo/status/1011255347095523328
-
I'd be interested if you have a specific example of an AI safety argument that depends on knowing the AI's architecture.
-
As for the "aliens below the moon" thing: artificial general intelligence seems possible because humans are possible.
-
And it would help achieve a lot of different goals, which means people will be motivated to try to invent it.
-
Also, sometimes when you don't know what a thing will be like, there are a few possibilities and you can study each of them.
-
1) Re "intelligent thing risk", my new fave take is https://twitter.com/fire__exit/status/1011762984136445953
2) Humans took millennia to appear and were shaped by long-term environments. We're motivated to make virtual humans, but it's gonna be messy, and I think Moore's Law is over.
3) "Study"? "Sound alarm"?
-
I think a major point I deeply disagree with is that I don't think Moore's Law will hold. A lot of AI-related fear seems to assume a continuous creep in computing power. How would AI-risk research change if we were forever stuck with the computing power of our current hardware?
-
My current theorizing is that:
- the brain is the most efficient physical form for human-like intelligence
- the most likely "general AI" is a large brain / "biological machine"
- there are deep unknown-unknowns re how this might come to exist and what forms of agency it will have
-
"Most efficient physical form" seems really unlikely to me, given how many constraints evolution has been under.