The errors seem to me like extremely rich super-versions of banal zero errors. In basic instrumentation, zero error is what an instrument actually reads when it should read zero. You correct it by simply adding a compensating offset to zero it out. It’s basic calibration.
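A minimal sketch of what that compensating offset amounts to, assuming a hypothetical sensor whose raw reading carries a constant zero error; the function names and readings are illustrative, not from any real instrument API:

```python
def calibrate_zero(raw_readings_at_zero):
    """Estimate the zero error: what the instrument reads when the true value is zero."""
    return sum(raw_readings_at_zero) / len(raw_readings_at_zero)

def corrected(raw_reading, zero_error):
    """Apply the compensating offset decided by the external calibrator."""
    return raw_reading - zero_error

# The calibrator supplies the ground truth: these readings were taken while the
# true input was known (by the calibrator, not by the instrument) to be zero.
zero_error = calibrate_zero([0.12, 0.11, 0.13])
print(corrected(5.12, zero_error))  # ~5.0
```

The point of the sketch is where the knowledge lives: the instrument never learns that its reading was wrong; the offset is decided outside it.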
But note what it takes to correct a zero error. A very high-embodiment instrument, a meatbag ape, often working with additional embodiment extensions like magnifiers or spectrum shifters, arbitrarily decides what “zero” is, and TELLS the lower-dimensional instrument.
There’s a philosophically worrying thing going on here: a more embodied but potentially less intelligent being is “closing the world” for a less embodied, but potentially more intelligent, being by tweaking knobs based on dimensions unseen by the latter.
The problem here is that for the AI to correct even a simple zero error, it may have to expand its world enormously, along data-scarce dimensions. This is not a type of problem that will be fixed by more Moore’s law. It’s not a weak-compute problem; it’s a weak-boundary problem.
I’m turning into an AI contrarian of sorts. I think AGI is ill-posed bullshit and embodiment and information boundaries matter a lot more than raw processing power. Most AI people currently believe the exact opposite.
I suspect the “moving goalposts” syndrome will reverse within the decade. Instead of humans moving goalposts when AI beats them, we’ll have to move them to make 99% go to 100% by compensating for uncanny embodiment-related failures, because for 0-or-1 things, 99% is no better than 0.
I think there’s a fix though… you could build very high-embodiment, highly situated AIs where almost all salient data paths are closed-loop and real-time. There’s a word for this: “robot” 🤣
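For concreteness, a minimal sketch of the closed-loop, real-time pattern the thread gestures at: sense, compare against a setpoint, act, repeat. All names here are hypothetical, and this is a toy proportional loop, not any particular robot stack:

```python
import time

def control_loop(sense, act, setpoint, gain=0.5, dt=0.01, steps=100):
    """Simple proportional controller: the correction comes from the loop itself,
    not from an external calibrator telling the system what zero is."""
    for _ in range(steps):
        error = setpoint - sense()
        act(gain * error)
        time.sleep(dt)

# Toy plant standing in for a real sensor/actuator pair.
state = {"x": 0.0}
control_loop(sense=lambda: state["x"],
             act=lambda u: state.update(x=state["x"] + u),
             setpoint=1.0)
print(state["x"])  # converges toward 1.0
```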