Like bazillions of photos of moving objects can't make up for the felt weight of holding an object or the sense of force generated by throwing it. So learning to throw objects entirely from vision data will have weird uncanny effects, I suspect.
Adding weight and effort sensors to “complete the embodiment” will probably fix things. Otherwise you’ll get strange artifacts.
I’m also simultaneously impressed and underwhelmed by the effects of combining low-dimensional maps. Text and images are 2 low-dimensional maps of the world. Combining them seems to produce interestingly surreal and quasi-coherent effects but somehow misses the mark.
The errors seem to me like extremely rich super-versions of banal zero errors. In basic instrumentation, zero error is what an instrument actually reads when it should read zero. You correct it by simply adding a compensating offset to zero it out. It’s basic calibration.
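
As a concrete anchor, here's a minimal Python sketch of zero-error calibration; read_raw and sensor are hypothetical stand-ins for whatever actually reads the instrument, not any particular library's API:

```python
# Minimal sketch of zero-error calibration. read_raw() is a hypothetical
# function returning the instrument's raw reading; the true input must be
# held at zero while calibrating.

def calibrate_zero(read_raw, n_samples=100):
    """Estimate the zero error: the mean reading when the input should be 0."""
    return sum(read_raw() for _ in range(n_samples)) / n_samples

def make_corrected_reader(read_raw, zero_error):
    """Return a reader that applies the compensating offset."""
    return lambda: read_raw() - zero_error

# Usage, with the input held at a known zero (sensor is hypothetical):
#   offset = calibrate_zero(sensor.read)
#   read = make_corrected_reader(sensor.read, offset)
```
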
But note what it takes to correct a zero error. A very high-embodiment instrument, a meatbag ape, often working with additional embodiment extensions like magnifiers or spectrum shifters, arbitrarily decides what "zero" is, and TELLS the lower-dimensional instrument.
There’s a philosophically worrying thing going on here: a more embodied but potentially less intelligent being is “closing the world” for a less embodied but potentially more intelligent being, by tweaking knobs based on dimensions unseen by the latter.
The problem here is that for the AI to correct even a simple zero error, it may have to expand its world enormously, along data-scarce dimensions. This is not a type of problem that will be fixed by more Moore’s law. It’s not a weak-compute problem, it’s a weak-boundary problem.
The problem with AI as a concept is that disembodiment is part of the definition for no good reason. It’s just a path-dependent effect of early computers being disembodied and cheap embodiment (sensing and actuating in rich rather than impoverished ways) being a younger technology.
There’s also a sort of anthropocentrism of people with big brains thinking big brains are the important component in the “intelligence stack,” to the point that brains and intelligence got conflated and nobody noticed for decades till Moravec did.
I’m not even sure an “intelligence stack” is the right model. Still too anthropomorphic. Human cognition happens to be a stack from nerve endings to brain stem to cerebellum to cerebrum. But this is still a “stock” view of intelligence. What if it is better understood as a flow?
Stock view as in: an entity “has” experiences that it processes and compresses into “memory models” (stock) that enable it to process more efficiently (predictively) in the future, becoming increasingly indifferent to situatedness, substituting (memory) maps for territorial presence.
This is roughly Schmidhuber’s compression-progress model. It’s focused on increasing the agency/presence ratio: maximal agency for minimal presence. Schmidhuber gestures at “curiosity” and interestingness-seeking as a way to stop this process from converging to disembodied godhood.
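
For flavor, a toy sketch of the compression-progress reward, under my own simplifying assumptions: a trivial adaptive byte-frequency model stands in for whatever compressor the agent actually runs, and all names here are hypothetical:

```python
# Toy rendering of compression progress: the curiosity reward for an
# observation is the drop in its code length caused by letting the model
# learn from it. Structured data yields far more progress than noise.

import math
from collections import Counter

class AdaptiveModel:
    def __init__(self):
        self.counts = Counter()
        self.total = 0

    def code_length(self, data: bytes) -> float:
        """Bits to encode data under the current Laplace-smoothed byte model."""
        return sum(
            -math.log2((self.counts[b] + 1) / (self.total + 256)) for b in data
        )

    def update(self, data: bytes) -> None:
        self.counts.update(data)
        self.total += len(data)

def curiosity_reward(model: AdaptiveModel, observation: bytes) -> float:
    """Compression progress: bits saved on the observation by learning it."""
    before = model.code_length(observation)
    model.update(observation)
    after = model.code_length(observation)
    return before - after

# e.g. curiosity_reward(AdaptiveModel(), b"abababababab") is large;
# the same call on incompressible random bytes is noticeably smaller.
```
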
Interestingly, Boyd came up with a folk version of the idea in Destruction and Creation, defining intelligence in terms of agents trying to increase their capacity for autonomous action. But by situating the idea in a context of competitive survival, he avoids solipsistic regress.
The OODA loop suggests a potential “flow stack” structure for intelligence by way of continuous reorientation. I’m trying to work this out as a system of internal and external flows separated by a semipermeable adaptive boundary.
Sometimes the boundary is sealed tight and the intelligence inside is producing low-entropy compression via raw processing. Other times, the boundary is very open, with data flows mixing significantly, and the entity acquiring entropy.
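
A speculative sketch of what such a semipermeable boundary might look like in code; this is purely my own toy framing of the two tweets above, every name is hypothetical, and the “compression” step is a crude stand-in:

```python
# Hypothetical semipermeable adaptive boundary: a permeability knob controls
# how much of the external flow mixes into internal state each tick.

import random

class BoundedAgent:
    def __init__(self, permeability=0.5):
        self.permeability = permeability  # 0.0 = sealed, 1.0 = fully open
        self.state = []

    def tick(self, external_flow):
        # Open phase: admit a permeability-weighted sample of the outside
        # flow (entropy acquisition).
        admitted = [x for x in external_flow if random.random() < self.permeability]
        self.state.extend(admitted)
        # Sealed phase: when mostly closed, spend the cycle compressing what
        # is already inside (here, crudely, deduplicating near-equal values).
        if self.permeability < 0.5:
            self.state = sorted(set(round(x, 2) for x in self.state))

    def reorient(self, surprise):
        # Continuous reorientation: surprise opens the boundary, routine seals it.
        self.permeability = min(1.0, max(0.0, self.permeability + 0.1 * (surprise - 0.5)))
```
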
It seems not entirely unlike the way the neocortex processes visual information in the left hemisphere, reducing detailed observations to a sort of symbology, which is why many people can't draw better than stick figures. The right brain can see these details, but the left is dominant.