If the earth suddenly floods fully, sharks might eat us all. A Covid descendant could wipe us out. Hell, an asteroid could outcompete us in the environment of colliding celestial bodies.
Nobody would call these “pwned by AGI paperclip optimizer” scenarios.
So what gives?
I think AGIers have in mind 2 conditions:
a) being outcompeted in a wide range of environments
b) looking like “super” versions of us
Many “intelligences” could satisfy a) without being “general” in any satisfyingly apocalyptic way.
b) is just anthropocentrism. Not interesting.
My belief is that no satisfying story will exist that fits the AGI template. All you’ll have is specific intelligences that will win in some conditions, lose in others against us, and will run the gamut from mutant viruses to toxic markets to brain-damaging memes.
If you’re looking to be pwned by a god-like intelligence, go ahead and believe in it, but there’s no good reason to treat it as anything more than a preferred religious scenario. It has no real utility beyond meeting an emotional need.
There’s no useful activity or priority that emerges from that belief that doesn’t also emerge from ordinary engineering risk management. Bridge designers worry about bridges collapsing. Real ML system designers worry about concrete risks like classification bias. That’s... enough
Basically, AGI as a construct is technically unnecessary for thinking about AI. It adds nothing beyond a few cute thought experiments. But it’s satisfying and enjoyable to think about within certain anthropocentric narratives.
Afaict, history tells us that interesting AI emerges from building specific intelligences that solve specific classes of problems, and then evolving them in path-dependent open-ended ways. If any of them shows any signs of even narrow self-improvement, like AlphaGoZero, great!
AGI is AI blessed with a value function of self-perpetuation. The least (and most) we can do for it is not to unplug it during its brief gestation period.
Not unless viruses are AI. Viruses are a flattened implementation of AGI, an AGI with zero IQ if you will, because they lack information processing capacity. (And no, mutating and reprogramming host DNA aren’t relevant examples of information processing in this context.)
Dealt with that in previous thread
Quote Tweet
Note this is specifically a critique of the Bostrom-LW vision of the future of AI, based on an IQ++ model of what intelligence is. Not of all possible futures for the tech. It’s one that commits to a sequential evolutionary model where the prefix “super” makes sense.

