Humans are good at some things (opposable-thumb use, jokes), bad at others (inverting large matrices, computing large primes), and effectively unable to do some things at all due to limited lifespan × speed. These limits are more than a "finite tape" constraint on intelligence.
"Finite tape" maps to things like the number of neurons in a baby's brain or the number of gates in an FPGA: "size of blank canvas" measures. It's general, but in a trivial, featureless way, like "kg of steel" or "mAh of battery." It's disingenuous to migrate that into a qualifier of intelligence.
I.e., you can't go from a specific to a general intelligence by gradually increasing the size of the blank canvas. It's like a non-constructive existence proof: presumably GIs would use large canvases, but you can't infer the existence of GIs from the existence of large canvases.
Von Neumann to the rescue again. There's a lower limit on cellular-automaton size below which self-reproduction is not possible, which is a nice *specific* threshold for a kind of universality, since self-reproduction + noise means many *specific* intelligences are evolvable.
Now this means, obviously, that intelligences are plausible which will outcompete humans in *specific* classes of evolutionary environments. Does that mean we have a constructive path to AGI? Not so fast! Many intelligences can already outcompete us if you limit the range of environments!
If the earth suddenly floods completely, sharks might eat us all. A Covid descendant could wipe us out. Hell, an asteroid could outcompete us in the environment of colliding celestial bodies. Nobody would call these "pwned by an AGI paperclip optimizer" scenarios. So what gives?
I think AGIers have in mind two conditions: a) being outcompeted in a wide range of environments, and b) looking like "super" versions of us. Many "intelligences" could satisfy a) without being "general" in any satisfyingly apocalyptic way, and b) is just anthropocentrism. Not interesting.
My belief is that no satisfying story will exist that fits the AGI template. All you'll have are specific intelligences that will win against us in some conditions, lose in others, and run the gamut from mutant viruses to toxic markets to brain-damaging memes.
If you're looking to be pwned by a god-like intelligence, go ahead and believe in it, but there's no good reason to treat it as anything more than a preferred religious scenario. It has no real utility beyond meeting an emotional need.