1) re "intelligent thing risk" my new fave take is https://twitter.com/fire__exit/status/1011762984136445953 …
2) humans took millennia to appear and were shaped by long-term environments. we're motivated to make virtual humans but it's gonna be messy & I think Moore's Law is over
3) "study"?
"sound alarm"? 
-
-
I think the major point where I deeply disagree is Moore's Law: I don't think it will hold. A lot of AI-related fear seems to assume a continuous creep in computing power. How would AI-risk research change if we were forever stuck with the computing power of our current hardware?
-
exponential hardware power growth probably only matters for hard takeoff
-
but hard takeoff is "the big x-risk", no?
-
it's the big one if we want humans around 50-100 yrs from now, but even much slower development is bad news on long time scales imo
-
Replying to @fire__exit @simpolism
and this is assuming there's no equivalent scaling mechanism for distributed computing (which can also improve dramatically)
-
I'm not even particularly convinced that "general intelligence" is something we're at all close to knowing how to build, regardless of whether it takes 20 years or 10 seconds to train
-
i'm not particularly convinced either, but i'm not particularly convinced of the opposite, which seems enough to sound the alarm
-
i'm opposed to existential alarms on principle
-
Replying to @simpolism @VesselOfSpirit
even then, i think AI alarm is more of a pascal's wager type alarm than a nuclear weapon type alarm
-
pascal's wager is problematic because it involves very unlikely events. i'm arguing agi isn't very unlikely