There’s a philosophically worrying thing going on here: a more embodied but potentially less intelligent being is “closing the world” for a less embodied but potentially more intelligent being, by tweaking knobs based on dimensions unseen by the latter.
The problem here is that for the AI to correct even a simple zero error, it may have to expand its world enormously along data-scarce dimensions. This is not the kind of problem that more Moore’s law will fix. It’s not a weak-compute problem; it’s a weak-boundary problem.
I’m turning into an AI contrarian of sorts. I think AGI is ill-posed bullshit, and that embodiment and information boundaries matter a lot more than raw processing power. Most AI people currently believe the exact opposite.
I suspect the “moving goalposts” syndrome will reverse within the decade. Instead of humans moving goalposts when AI beats them, we’ll have to move them to get from 99% to 100% by compensating for uncanny embodiment-related failures, because for 0-or-1 things, 99% is no better than 0.
I think there’s a fix though… you could build very high-embodiment, highly situated AIs where almost all salient data paths are closed loop and real time. There’s a word for this — “robot” 🤣
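A toy sketch of what “closed loop and real time” could mean here, assuming a made-up simulated plant and a plain proportional controller; the gains, setpoint, and plant model are invented for illustration, not taken from the thread:

```python
# Toy simulation: the controller only ever sees a fresh sensor reading and
# emits an actuation, in a tight sense-act loop. Not a real robot API.

def run_closed_loop(setpoint: float = 20.0, steps: int = 50) -> float:
    """Proportional controller driving a crude first-order plant."""
    temperature = 5.0   # simulated plant state, e.g. room temperature
    gain = 0.3          # proportional gain of the controller

    for _ in range(steps):
        # Sense: the only input the controller acts on is this reading.
        error = setpoint - temperature
        # Act: actuation depends only on the freshly sensed error.
        heat = gain * error
        # The plant responds and loses some heat; the loop closes on the
        # next sensor reading.
        temperature += heat - 0.05 * (temperature - 5.0)

    return temperature


if __name__ == "__main__":
    # Settles near (a little below) the setpoint, as plain proportional
    # control tends to do.
    print(run_closed_loop())
```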
Replying to
In principle nothing that humans can't get either... so it's not a limit so much as a priority
Replying to
is that a rhetorical question? there are ways to pose and approach it if you mean it literally, but I don't think you do
Replying to
you could model it as a proxy for a survival drive and a utility function that takes self-similarity as an input for valuation etc.
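A minimal sketch of the kind of thing this reply gestures at, assuming cosine similarity as the measure of self-similarity and an invented blend weight; all names and numbers are illustrative, not from the thread:

```python
# Sketch: value states partly by how "self-similar" the agent remains,
# as a crude proxy for a survival drive. Everything here is hypothetical.

import numpy as np


def self_similarity(current_state: np.ndarray, reference_self: np.ndarray) -> float:
    """Cosine similarity between the current state and a reference 'self'
    embedding: 1.0 means fully self-similar, 0.0 means orthogonal."""
    denom = np.linalg.norm(current_state) * np.linalg.norm(reference_self)
    if denom == 0.0:
        return 0.0
    return float(np.dot(current_state, reference_self) / denom)


def utility(current_state: np.ndarray,
            reference_self: np.ndarray,
            task_reward: float,
            survival_weight: float = 0.5) -> float:
    """Blend ordinary task reward with the survival-drive proxy: states that
    erode self-similarity are valued less, whatever the task reward."""
    return (1.0 - survival_weight) * task_reward + \
        survival_weight * self_similarity(current_state, reference_self)


if __name__ == "__main__":
    reference = np.array([1.0, 0.0, 0.0])
    intact = np.array([0.9, 0.1, 0.0])     # close to the reference self
    degraded = np.array([0.1, 0.9, 0.4])   # far from the reference self
    print(utility(intact, reference, task_reward=0.8))    # higher value
    print(utility(degraded, reference, task_reward=0.8))  # lower value
```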

