Humans are good at some things (opposable-thumb use, jokes), bad at others (inverting large matrices, computing large primes), and effectively unable to do some things at all due to limited lifespan × speed. These limits are more than a "finite tape" constraint on intelligence.
"Finite tape" maps to things like the number of neurons in a baby's brain, or the number of gates in an FPGA: "size of blank canvas" measures. It's general, but in a trivial, featureless way, like "kg of steel" or "mAh of battery." It's disingenuous to promote that into an intelligence qualifier.
I.e., you can't go from a specific to a general intelligence by gradually increasing blank-canvas size. It's like a non-constructive existence proof: presumably GIs would use large canvases, but you can't infer the existence of GIs from the existence of large canvases.
Von Neumann to the rescue again. There's a lower limit on cellular-automaton size below which self-reproduction is not possible, and that's a nice *specific* threshold for a kind of universality, since self-reproduction + noise means many *specific* intelligences are evolvable.
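To make the "self-reproduction + noise" point concrete, here's a minimal toy sketch (every name and parameter here is my own illustrative choice, not von Neumann's construction): bit-string replicators copy themselves with noisy mutation, and selection against one fixed environment evolves a *specific* competence, with nothing "general" emerging.

```python
import random

# Arbitrary fixed "environment": fitness is just how many bits match it.
TARGET = [1, 0, 1, 1, 0, 0, 1, 0]

def fitness(genome):
    # A *specific* competence: performance against this one environment only.
    return sum(g == t for g, t in zip(genome, TARGET))

def replicate(genome, noise=0.05):
    # Self-reproduction with noise: each bit may flip during copying.
    return [1 - b if random.random() < noise else b for b in genome]

def evolve(pop_size=50, generations=100):
    population = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: the fitter half replicates (noisily) twice into the next generation.
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        population = [replicate(g) for g in survivors for _ in range(2)]
    return max(population, key=fitness)

if __name__ == "__main__":
    best = evolve()
    print(best, fitness(best))  # converges toward TARGET: specific fit, not general smarts
```

Swap in a different TARGET and the same loop evolves a different specific competence; nothing about the mechanism scales into generality on its own.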
Now this means, obviously, that intelligences are plausible which will outcompete humans in *specific* classes of evolutionary environments. Does that mean we have a constructive path to AGI?
Not so fast! Many intelligences can already outcompete us if you limit the range of environments!
If the earth suddenly floods fully, sharks might eat us all. A Covid descendant could wipe us out. Hell, an asteroid could outcompete us in the environment of colliding celestial bodies.
Nobody would call these "pwned by AGI paperclip optimizer" scenarios.
So what gives?
I think AGIers have in mind 2 conditions:
a) being outcompeted in a wide range of environments
b) looking like "super" versions of us
Many "intelligences" could satisfy a) without being "general" in any satisfyingly apocalyptic way.
b) is just anthropocentrism. Not interesting.
My belief is that no satisfying story will exist that fits the AGI template. All you'll have is specific intelligences that win under some conditions, lose to us under others, and run the gamut from mutant viruses to toxic markets to brain-damaging memes.
If you're looking to be pwned by a god-like intelligence, go ahead and believe in the scenario, but there's no good reason to treat it as anything more than a preferred religious scenario. It has no real utility beyond meeting an emotional need.
Replying to
Except a human augmented by AI. That would be a superhuman intelligence (by definition). Once we figure out the brain better and implants work well, it will happen.