Conversation

A 'powerful' AGI is one with a sufficiently huge potential impact that we have to align it. Operationally, let's say an AGI is 'powerful' if it can invent and deploy biotechnology at least 10 years in advance of the human state of the art.
A 'safely' aligned powerful AI is one that doesn't kill everyone on Earth as a side effect of its operation; or as a somewhat more stringent requirement, one that has less than a 50% chance of killing more than a billion people.
Safely aligning a powerful AI will be said to be 'difficult' if that work takes two years longer or 50% more serial time, whichever is less, compared to the work of building a powerful AI without trying to safely align it.
It is not possible for humanity to create a safe AI with current methods while humanity remains as it is today. AI or AGI will always reflect the weighted sum of all our evils versus our good, meaning that if humanity as a whole is good, we have a chance; if not, we are doomed.
Hence the development of AGI could reveal the "weighted sum" of humanity's good and evil deeds by showing whether the AGI acts in ways that promote well-being or harm. There is really nothing that can be done at the individual level while still ending up with an AGI.
I'm inclined to agree. AI is competence without comprehension, and AGI is likely to be as amorally selfish as biological life.
If you can't point an AGI in any direction, it is not a useful weapon, so arms-race dynamics do not seem to apply. Partially aligned systems (ones that go in a chosen direction some of the time) seem more arms-racy than completely unaligned ones.
You want to plan for a future in which AGI is already affecting the past. That is the two-way street of time travel: for the first time, the successor species is phasing out its ancestors even before being born.