Adding weight and effort sensors to “complete the embodiment” will probably fix things. Otherwise you’ll get strange artifacts.
I’m also simultaneously impressed and underwhelmed by the effects of combining low-dimensional maps. Text and images are two low-dimensional maps of the world. Combining them seems to produce interestingly surreal and quasi-coherent effects but somehow misses the mark.
The errors seem to me like extremely rich super-versions of banal zero errors. In basic instrumentation, zero error is what an instrument actually reads when it should read zero. You correct it by simply adding a compensating offset to zero it out. It’s basic calibration.
But note what it takes to correct a zero error. A very high-embodiment instrument, a meatbag ape, often working with additional embodiment extensions like magnifiers or spectrum shifters, arbitrarily decides what “zero” is, and TELLS the lower-dimensional instrument.
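A minimal sketch of that calibration move in code, with a simulated sensor standing in for the instrument; every name here (read_raw, calibrate_zero, the bias value) is illustrative, not from any real library. The point to notice is where the offset comes from: the step only works because an agent outside the instrument’s world decides what true zero is and tells it so.

```python
import random

# Toy zero-error calibration with a simulated sensor. In a real setup,
# read_raw() would wrap an actual instrument interface.

TRUE_ZERO_ERROR = 0.37  # bias the instrument doesn't know it has

def read_raw(true_value: float = 0.0) -> float:
    """Raw reading: true value plus zero error plus a little noise."""
    return true_value + TRUE_ZERO_ERROR + random.gauss(0, 0.01)

def calibrate_zero(n_samples: int = 1000) -> float:
    """The embodied agent holds the instrument at what IT decides is
    true zero, then has it average what it reads in that state."""
    return sum(read_raw(0.0) for _ in range(n_samples)) / n_samples

def read_corrected(true_value: float, offset: float) -> float:
    """Compensated reading: raw value minus the externally supplied offset."""
    return read_raw(true_value) - offset

offset = calibrate_zero()
print(f"estimated zero error: {offset:+.3f}")                          # ~ +0.370
print(f"corrected reading at 2.0: {read_corrected(2.0, offset):.3f}")  # ~ 2.000
```

The instrument ends up with the right offset without ever learning why that number is right; it was simply told.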
There’s a philosophically worrying thing going on here: a more embodied but potentially less intelligent being is “closing the world” for a less embodied but potentially more intelligent being, by tweaking knobs based on dimensions unseen by the latter.
The problem here is that for the AI to correct even a simple zero error, it may have to expand its world enormously, along data-scarce dimensions. This is not the type of problem that will be fixed by more Moore’s law. It’s not a weak-compute problem; it’s a weak-boundary problem.
I’m turning into an AI contrarian of sorts. I think AGI is ill-posed bullshit, and that embodiment and information boundaries matter a lot more than raw processing power. Most AI people currently believe the exact opposite.
I suspect the “moving goalposts” syndrome will reverse within the decade. Instead of humans moving goalposts when AI beats them, we’ll have to move them to get 99% to 100%, compensating for uncanny embodiment-related failures, because for 0-1 things, 99% is no better than 0.
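One hedged way to make “99% is no better than 0” concrete: if a task only counts when every step lands, per-step reliability compounds. The step count below is an arbitrary illustrative assumption, not a figure from this thread.

```python
# All-or-nothing tasks: per-step reliability compounds multiplicatively.
per_step = 0.99
steps = 200  # illustrative; pick any long dependent chain
print(f"P(all {steps} steps succeed) = {per_step ** steps:.3f}")  # ~ 0.134
```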
I think there’s a fix though… you could build very high-embodiment, highly situated AIs where almost all salient data paths are closed loop and real time. There’s a word for this: “robot” 🤣
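A minimal sketch of what “almost all salient data paths closed loop and real time” means in control terms, assuming a toy plant: sensing and acting happen in the same tight cycle, so corrections never leave the loop. The dynamics and every name here are hypothetical stand-ins for real robot hardware.

```python
# Minimal closed-loop proportional controller on a toy integrator plant.
# In a real robot, sense() and act() would be hardware interfaces.

state = 0.0  # the plant's actual state, e.g. a joint position

def sense() -> float:
    """Read the state. A real sensor would add noise, delay, quantization."""
    return state

def act(command: float, dt: float) -> None:
    """Apply a velocity command for one tick (integrator dynamics)."""
    global state
    state += command * dt

def control_loop(setpoint: float, kp: float = 2.0,
                 dt: float = 0.01, steps: int = 500) -> None:
    """Sense and act every tick; the error never leaves the loop."""
    for _ in range(steps):
        error = setpoint - sense()  # sense...
        act(kp * error, dt)         # ...and act, in the same cycle
        # a real-time loop would also pace itself to wall-clock time here

control_loop(setpoint=1.0)
print(f"final state: {state:.3f}")  # settles at the setpoint, ~1.000
```

With kp = 2 and dt = 0.01 the state settles at the setpoint within a few hundred ticks; the design point is that nothing salient happens outside the sense-act cycle.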
In principle it’s nothing that humans can’t get either... so it’s not a limit so much as a priority
Is that a rhetorical question? There are ways to pose and approach it if you mean it literally, but I don’t think you do

