An 'aligned' powerful AGI is one that can be pointed in any direction at all, even what seems like a simple task that isn't morally fascinating. E.g. "Place, onto this particular plate here, two strawberries identical down to the cellular but not molecular level."
A 'safely' aligned powerful AI is one that doesn't kill everyone on Earth as a side effect of its operation; or as a somewhat more stringent requirement, one that has less than a 50% chance of killing more than a billion people.
Replying to @ESYudkowsky
It may be a bit harder to define than that. If you want to preserve human esthetics over more than a few decades or centuries, it may be necessary to limit the number of people living on the planet at any one time.
Replying to @ESYudkowsky @Plinz
And yet you have to rely on people to voluntarily use it
If you can make cellularly-but-not-molecularly identical strawberries, you probably don't need to. Ignoring the demographic transition, and assuming that "rely on people to voluntarily use it" isn't viable, forcing birth control is probably not as bad as murder.
Perhaps the AGI could put people on pause if there are too many of them? And if that is morally neutral, perhaps it could pause them indefinitely? Perhaps that is even morally required if the AGI can make better people? I find these questions difficult to deflect.
There are no morals outside of a subjective point of view, with subjective feelings and subjective goals. What would an AGI's subjective point of view be?
The general consensus seems to be that if the AGI is smart enough it would approximate Eliezer's moral positions, but Eliezer and Elon fear it will be almost impossible to make the AGI that smart.
Why is that the general consensus? Seems completely improbable, tbh.
I was kidding, but part of the deeper issue is that whatever moral preference any of us hold is likely going to change if our minds change, and is going to be irrelevant if they do not. Do we want to align AGI with what we are in 2018?
Moral preference is always tied to a subjective viewpoint, which is always tied to the relative power balance. As soon as we assume that AGI is vastly more powerful than us, it means we have no possible way to align it to our interests, by definition.
I think that it is tied to what you attempt to regulate. The military has more power than the polis, yet the polis does not serve the military. I have more power than my children, yet I serve them.