But note what it takes to correct a zero error. A very high-embodiment instrument, a meatbag ape, often working with additional embodiment extensions like magnifiers or spectrum shifters, arbitrarily decides what “zero” is, and TELLS the lower-dimensional instrument.
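A minimal sketch of that dependency, with every name hypothetical: the instrument can only report along the one dimension it measures, and the offset that defines its zero is written in from outside, by the ape.

```python
# Toy sketch (all names hypothetical): a low-embodiment instrument whose
# "zero" is an external decision it cannot sense, let alone correct, itself.

class Instrument:
    """Reports along a single dimension; blind to its own drift."""

    def __init__(self, zero_offset: float = 0.0):
        self.zero_offset = zero_offset  # set from outside; opaque to the instrument

    def read(self, raw_signal: float) -> float:
        # All the instrument can do is subtract the offset it was given.
        return raw_signal - self.zero_offset


def human_calibrates(instrument: Instrument, reading_at_true_zero: float) -> None:
    """The high-embodiment ape, using eyes, magnifiers, or spectrum shifters,
    decides what counts as zero and TELLS the instrument."""
    instrument.zero_offset += reading_at_true_zero


meter = Instrument()
# Drift has crept in: the raw signal reads 0.3 when the true quantity is zero.
print(meter.read(0.3))                     # 0.3: a zero error, invisible to the meter
human_calibrates(meter, meter.read(0.3))   # correction arrives from unseen dimensions
print(meter.read(0.3))                     # 0.0: corrected, but only by external fiat
```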
There’s a philosophically worrying thing going on here: a more embodied but potentially less intelligent being is “closing the world” for a less embodied but potentially more intelligent being, by tweaking knobs based on dimensions unseen by the latter.
The problem here is that for the AI to correct even a simple zero error, it may have to expand its world enormously, along data-scarce dimensions. This is not the type of problem that will be fixed by more Moore’s law. It’s not a weak-compute problem; it’s a weak-boundary problem.
The problem with AI as a concept is that disembodiment is part of the definition for no good reason. It’s just a path-dependent effect of early computers being disembodied and cheap embodiment (sensing and actuating in rich rather than impoverished ways) being a younger technology.
There’s also a sort of anthropocentrism: people with big brains think big brains are the important component in the “intelligence stack,” to the point that the two got conflated and nobody noticed for decades till Moravec did.
I’m not even sure an “intelligence stack” is the right model. Still too anthropomorphic. Human cognition happens to be a stack from nerve endings to brain stem to cerebellum to cerebrum. But this is still a “stock” view of intelligence. What if it is better understood as a flow?
Stock view as in: an entity “has” experiences that it processes and compresses into “memory models” (stock) that enable it to process more efficiently (predictively) in the future, becoming increasingly indifferent to situatedness, substituting (memory) maps for territorial presence.
This is roughly Schmidhuber’s compression-progress model. It’s focused on increasing the agency/presence ratio: maximal agency for minimal presence. Schmidhuber gestures at “curiosity” and interestingness-seeking as a way to stop this process from converging to disembodied godhood.
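A minimal sketch of the compression-progress idea (my own toy rendering under crude assumptions, not Schmidhuber’s actual formulation): intrinsic “curiosity” reward is the number of bits the agent’s updated model saves when re-encoding its own history.

```python
# Toy rendering of compression-progress curiosity (not Schmidhuber's code):
# intrinsic reward = how many bits the updated world-model saves when
# re-compressing the same history. Reward fades as the data becomes boring.

import math
from collections import Counter

ALPHABET = 256  # assumed symbol space; a crude stand-in for a real world-model

def description_length(history: str, model: Counter) -> float:
    """Approximate code length (bits) of `history` under a symbol-frequency
    model with add-one smoothing: -sum(log2 p(symbol))."""
    total = sum(model.values())
    return sum(-math.log2((model[ch] + 1) / (total + ALPHABET)) for ch in history)

def curiosity_reward(history: str, before: Counter, after: Counter) -> float:
    """Compression progress: bits the updated model saves on the same history."""
    return description_length(history, before) - description_length(history, after)

history = "abababababab"
naive = Counter()           # ignorant prior model
trained = Counter(history)  # model after updating on the data
print(curiosity_reward(history, naive, trained))    # positive: learning happened
print(curiosity_reward(history, trained, trained))  # 0.0: nothing new left here
```

The reward going to zero once the history is fully learned is exactly the convergence-to-indifference worry above.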
Interestingly, Boyd came up with a folk version of the idea in Destruction and Creation, defining intelligence in terms of agents trying to increase their capacity for autonomous action. But by situating the idea in a context of competitive survival, he avoids solipsistic regress.
The OODA loop suggests a potential “flow stack” structure for intelligence by way of continuous reorientation. I’m trying to work this out as a system of internal and external flows separated by a semipermeable adaptive boundary.
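For what it’s worth, here is a purely speculative toy rendering of that half-formed idea; every name and rule in it is my own assumption, not a worked-out model.

```python
# Speculative toy of the "flow stack": external flow arrives, a semipermeable
# boundary decides what gets through, and reorientation tunes its permeability.

import random

class AdaptiveBoundary:
    """Semipermeable membrane between external and internal flows."""

    def __init__(self, permeability: float = 0.5):
        self.permeability = permeability  # fraction of external signal admitted

    def filter(self, external_signal: float) -> float:
        return external_signal * self.permeability

    def reorient(self, surprise: float, rate: float = 0.1) -> None:
        # Crude OODA-flavored rule: surprise opens the boundary to more of the
        # external flow; boredom closes it, deferring to internal models.
        self.permeability = min(1.0, max(0.0, self.permeability + rate * (surprise - 0.5)))

boundary = AdaptiveBoundary()
internal_state = 0.0
for step in range(5):
    external = random.gauss(0.0, 1.0)           # external flow (the world)
    admitted = boundary.filter(external)        # what crosses the boundary
    surprise = abs(admitted - internal_state)   # mismatch with the internal flow
    internal_state += 0.3 * (admitted - internal_state)  # internal flow adapts
    boundary.reorient(min(surprise, 1.0))       # continuous reorientation
    print(f"step {step}: permeability={boundary.permeability:.2f}")
```

The only point of the sketch is the shape: cognition as an ongoing negotiation of permeability between two flows, rather than a stack accumulating stock.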
