Q: “Why couldn’t we just shut off a computer if it got too powerful?” A: “A smart AI could predict that we’d want to turn it off if it made us nervous. So it would try hard not to make us nervous, because doing so wouldn’t help it accomplish its goals” https://www.vox.com/future-perfect/2018/12/21/18126576/ai-artificial-intelligence-machine-learning-safety-alignment …
Replying to @FrankPasquale
Why would we give a computer such a high degree of autonomy, for example, the autonomy to self-improve without human intervention or evaluation? This makes no sense from an engineering perspective.
Replying to @tdietterich @FrankPasquale
Before we deploy any AI system, we must test it carefully. We wouldn't open a bridge or a skyscraper without first testing the quality of the construction work. The same applies to software. The doomsday scenarios all rely on violating such norms.
Replying to @tdietterich @FrankPasquale
A general AI will easily be taken down by a simpler, more specialized AI whose only purpose is to take it down, just as a simple virus can kill a human. It will be even easier if the specialized AI killer has access to computing resources similar to the general AI's.
If that were a certainty (not merely a possibility), why did evolution not favor a viral goo over complex organisms?
Evolution doesn't favor complex organisms. Complex organisms just happened to occupy a niche in the biosphere.
Replying to @IntuitMachine @Plinz
IIRC Nick Bostrom covers this topic in reasonable detail in his book Superintelligence. Good reference.
Replying to @FieryPhoenix7 @IntuitMachine
Superintelligence is a dark fantasy in which humankind places all of its resources at the disposal of an untested, unsupervised AI system, and then obsesses over turning off the machine because it might contain a mind.
What people find inspiring or disturbing is entirely orthogonal to what is likely to happen. Focusing on prescriptive outlooks may give a thinker impact today, but tends to confuse our predictions. The future does not care about our emotional biases.
Replying to @Plinz @tdietterich
The emotional response to AGI is cultural. The American response to AGI is very different from the Japanese response. I wonder whether anyone has studied this?
Replying to @IntuitMachine @Plinz and
It’s similar to the question of how society would react to the discovery of intelligent alien life. Literally dozens of possible answers.