Safely aligning a powerful AGI is difficult.
Perhaps the AGI could put people on pause if there are too many of them? And if that is morally neutral, perhaps it could pause them indefinitely? Perhaps that is even morally required if the AGI can make better people? I find these questions difficult to deflect.
-
There are no morals outside of a subjective point of view, with its subjective feelings and subjective goals. What would the AGI's subjective point of view be?
-
The general consensus seems to be that if the AGI is smart enough, it would approximate Eliezer's moral positions, but Eliezer and Elon fear it will be almost impossible to make an AGI that smart.