Adding weight and effort sensors to “complete the embodiment” will probably fix things. Otherwise you’ll get strange artifacts.
I’m simultaneously impressed and underwhelmed by the effects of combining low-dimensional maps. Text and images are two low-dimensional maps of the world. Combining them seems to produce interestingly surreal and quasi-coherent effects but somehow misses the mark.
The errors seem to me like extremely rich super-versions of banal zero errors. In basic instrumentation, zero error is what an instrument actually reads when it should read zero. You correct it by simply adding a compensating offset to zero it out. It’s basic calibration.
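The offset correction described above can be sketched in a few lines. This is a minimal illustration with made-up readings and function names, not any real instrument API:

```python
# Minimal sketch of zero-error calibration. All names and values here are
# illustrative: "calibrate_zero" and the sample readings are invented.

def calibrate_zero(readings_at_zero):
    """Estimate the zero error as the mean reading when the true value is zero."""
    return sum(readings_at_zero) / len(readings_at_zero)

def corrected(raw_reading, zero_error):
    """Apply the compensating offset: subtract the zero error from a raw reading."""
    return raw_reading - zero_error

# A scale that reads ~0.3 g with nothing on it has a zero error of ~0.3 g.
offset = calibrate_zero([0.31, 0.29, 0.30])
print(corrected(10.30, offset))  # a 10 g mass now reads approximately 10.0
```

The point of the surrounding thread is what this sketch hides: deciding that those three readings were taken "at zero" happens entirely outside the instrument.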
But note what it takes to correct a zero error. A very high-embodiment instrument, a meatbag ape, often working with additional embodiment extensions like magnifiers or spectrum shifters, arbitrarily decides what “zero” is, and TELLS the lower-dimensional instrument.
There’s a philosophically worrying thing going on here: a more embodied but potentially less intelligent being is “closing the world” for a less embodied but potentially more intelligent being, by tweaking knobs along dimensions unseen by the latter.
The problem here is that for the AI to correct even a simple zero error, it may have to expand its world enormously, along data-scarce dimensions. This is not a type of problem that will be fixed by more Moore’s law. It’s not a weak-compute problem, it’s a weak-boundary problem.
I’m turning into an AI contrarian of sorts. I think AGI is ill-posed bullshit and embodiment and information boundaries matter a lot more than raw processing power. Most AI people currently believe the exact opposite.
I suspect the “moving goalposts” syndrome will reverse within the decade. Instead of humans moving goalposts when AI beats them, we’ll have to move them to make 99% go to 100% by compensating for uncanny embodiment-related failures, because for 0-or-1 things, 99% is no better than 0.
I think there’s a fix though… you could build very high-embodiment, highly situated AIs where almost all salient data paths are closed loop and real time. There’s a word for this — “robot” 🤣
The thing people don't seem to appreciate about the brain-in-a-vat or Nozick experience-machine thought experiments is... all the "intelligence" is really in the "vat" or "machine hookups".
The embodiment-first understanding of intelligence is basically the view that "the vat is the interesting thing".
Replying to
Have you read about his embodied, robot-utilising “artificial consciousness” project, laid out in the last few chapters of The Hidden Spring? youtu.be/L9h8-HFmcjE Damasio is also pursuing this
This Tweet was deleted by the Tweet author.
Replying to
I like Maturana/Varela's work, but I don't think a focus on library/info science is particularly useful. The enacted/recorded relationship is in fact what led GOFAI astray, because so little is formally recorded at all, especially in sophisticated symbolic forms.
Replying to
Have you read David Chapman's thing on embodiment and dancing robots? Possibly relevant
Replying to
AI is artificial vat.
The embodiment-first understanding of life/living is basically the view that "intelligence is vat is intelligence".



