The errors seem to me like extremely rich super-versions of banal zero errors. In basic instrumentation, a zero error is the reading an instrument gives when the true value is zero. You correct it by adding a compensating offset that zeroes it out. It’s basic calibration.
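(A minimal sketch of this correction loop, in Python; read_raw and the bias value are hypothetical stand-ins. The point to notice is that the instrument cannot derive its own offset: the known-zero condition is arranged, and the offset supplied, from outside.)

```python
import random

def calibrate_zero(read_raw, samples=100):
    """Average readings taken under a known-zero condition,
    arranged by the external calibrator, to estimate the zero error."""
    return sum(read_raw() for _ in range(samples)) / samples

def read_corrected(read_raw, offset):
    """Compensate: subtract the offset so a true zero reads as zero."""
    return read_raw() - offset

# Hypothetical biased sensor: reads about +0.37 when the true value is zero.
biased_sensor = lambda: random.gauss(0.37, 0.05)

offset = calibrate_zero(biased_sensor)        # the calibrator decides this is "zero"
print(read_corrected(biased_sensor, offset))  # ~0.0 on average
```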
But note what it takes to correct a zero error. A very high-embodiment instrument, a meatbag ape, often working with additional embodiment extensions like magnifiers or spectrum shifters, arbitrarily decides what “zero” is, and TELLS the lower-dimensional instrument.
There’s a philosophically worrying thing going on here: a more embodied but potentially less intelligent being is “closing the world” for a less embodied but potentially more intelligent being, by tweaking knobs based on dimensions unseen by the latter.
The problem here is that for the AI to correct even a simple zero error, it may have to expand its world enormously, along data-scarce dimensions. This is not the type of problem that more Moore’s law will fix. It’s not a weak-compute problem; it’s a weak-boundary problem.
I’m turning into an AI contrarian of sorts. I think AGI is ill-posed bullshit and embodiment and information boundaries matter a lot more than raw processing power. Most AI people currently believe the exact opposite.
