Yes, but... The greatest failure of AI discourse has probably been the failure to distinguish clearly between Turing completeness and universal function approximation on the one hand, and intelligence on the other. Minecraft, though...
Quote tweet (replying to @vgr):
“Humans are general intelligences in the same sense that Minecraft is Turing-complete.”
In the universal constructor (UC) model, it is known that random inputs from outside the boundary are required for open-ended evolution, which neatly ties interior and boundary intelligence together in a way that doesn’t garble mental models. In UC/Minecraft-type metaphors, this is just mutation.
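
A toy sketch of that point in Python (illustrative only, not the actual UC formalism; the bit-string genomes, population size, and rates are invented): a closed, deterministic copying loop produces no novelty at all, while the same loop with random inputs injected from outside the boundary keeps generating new genotypes.

```python
import random

# Toy model: a population of bit-string "creatures" that copy themselves
# each generation. With mutation_rate=0 the system is closed and
# deterministic: no new genotype ever appears. With a small mutation
# rate -- randomness from "outside the boundary" -- novelty is open-ended.

def distinct_genotypes(generations, mutation_rate, seed=0):
    rng = random.Random(seed)
    population = [(0,) * 16 for _ in range(32)]  # identical starting genomes
    seen = set(population)
    for _ in range(generations):
        population = [
            tuple(bit ^ 1 if rng.random() < mutation_rate else bit
                  for bit in genome)
            for genome in population
        ]
        seen.update(population)
    return len(seen)  # number of distinct genotypes ever observed

print(distinct_genotypes(100, mutation_rate=0.0))   # -> 1 (closed system)
print(distinct_genotypes(100, mutation_rate=0.01))  # -> many (open-ended)
```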
UTMs (universal Turing machines) in the infinite-tape sense are more legible from a programming point of view, but less useful from a thinking-about-AI point of view. In a UC sense, intelligence is embodied not by the universal “containing” metaphor but by the creatures that evolve within it, down specific but open-ended paths.
Humans are good at some things (opposable-thumb use, jokes), bad at others (inverting large matrices, computing large primes), and effectively unable to do some things at all due to limited lifespan × speed. These limits are more than a “finite tape” constraint on intelligence.
“Finite tape” maps to things like the number of neurons in a baby’s brain or the number of gates in an FPGA: “size of blank canvas” measures. It’s general but in a trivial, featureless way, like “kg of steel” or “mAh of battery.” It’s disingenuous to migrate that to an intelligence qualifier.
I.e., you can’t go from a specific to a general intelligence by gradually increasing blank-canvas size. It’s like a non-constructive existence proof: presumably GIs would use large canvases, but you can’t infer the existence of GIs from the existence of large canvases.
Von Neumann to the rescue again. There’s a lower limit on cellular automaton size below which self-reproduction is not possible, which is a nice *specific* threshold for a kind of universality, since self-reproduction + noise means many *specific* intelligences are evolvable.
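
A minimal sketch of “self-reproduction + noise” in Python (a Dawkins-style “weasel” selection loop, not von Neumann’s actual CA construction; the target string and parameters are made up): imperfect self-copies under selection adapt to one *specific* environment, and only that one.

```python
import random

# Replicators copy themselves with errors (noise) and are selected
# against ONE specific environment: a fixed target string. The lineage
# evolves competence for that environment and nothing else --
# specific, path-dependent intelligence, not generality.

ALPHABET = "abcdefghijklmnopqrstuvwxyz "
TARGET = "methinks it is like a weasel"  # the "specific environment"

def fitness(genome):
    return sum(a == b for a, b in zip(genome, TARGET))

def reproduce(genome, rng, error_rate=0.02):
    return "".join(rng.choice(ALPHABET) if rng.random() < error_rate else c
                   for c in genome)

rng = random.Random(1)
best = "".join(rng.choice(ALPHABET) for _ in TARGET)  # random ancestor
for generation in range(1000):
    # parent included so fitness never regresses between generations
    offspring = [best] + [reproduce(best, rng) for _ in range(200)]
    best = max(offspring, key=fitness)
    if fitness(best) == len(TARGET):
        break
print(generation, repr(best))
```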
Now this means, obviously, that intelligences which will outcompete humans in *specific* classes of evolutionary environments are plausible. Does that mean we have a constructive path to AGI? Not so fast! Many intelligences can already outcompete us if you limit the environment range!
If the earth suddenly floods fully, sharks might eat us all. A Covid descendant could wipe us out. Hell, an asteroid could outcompete us in the environment of colliding celestial bodies. Nobody would call these “pwned by an AGI paperclip optimizer” scenarios. So what gives?
I think AGIers have in mind two conditions: a) being outcompeted in a wide range of environments, and b) looking like “super” versions of us. Many “intelligences” could satisfy a) without being “general” in any satisfyingly apocalyptic way, and b) is just anthropocentrism. Not interesting.
My belief is that no satisfying story will exist that fits the AGI template. All you’ll have is specific intelligences that win against us in some conditions, lose in others, and run the gamut from mutant viruses to toxic markets to brain-damaging memes.
If you’re looking to be pwned by a god-like intelligence, go ahead and believe in it, but there’s no good reason to treat it as anything more than a preferred religious scenario. It has no real utility beyond meeting an emotional need.
There’s no useful activity or priority that emerges from that belief that doesn’t also emerge from ordinary engineering risk management. Bridge designers worry about bridges collapsing. Real ML system designers worry about concrete risks like classification bias. That’s... enough.
Basically, AGIs as a construct are technically unnecessary for thinking about AI. They add nothing beyond a few cute thought experiments. But they’re satisfying and enjoyable to think about for certain anthropocentric narratives.
Afaict, history tells us that interesting AI emerges from building specific intelligences that solve specific classes of problems, and then evolving them in path-dependent, open-ended ways. If any of them shows signs of even narrow self-improvement, like AlphaGo Zero, great!