Q: “Why couldn’t we just shut off a computer if it got too powerful?” A: “A smart AI could predict that we’d want to turn it off if it made us nervous. So it would try hard not to make us nervous, because doing so wouldn’t help it accomplish its goals” https://www.vox.com/future-perfect/2018/12/21/18126576/ai-artificial-intelligence-machine-learning-safety-alignment …
-
-
I'm discussing the AGI alignment problem, not the problem of achieving AGI. Those are two different problems.
-
The alignment problem may not be enough we should focus on the major risk of humanity: ourselves. Millions of people die/year due to wars, curable diseases, hunger; ecosystems are destroyed, species are disappearing. An aligned AI that does what we do may just keep the status quo
- 4 more replies
New conversation -
Loading seems to be taking a while.
Twitter may be over capacity or experiencing a momentary hiccup. Try again or visit Twitter Status for more information.