Uncanny valleys are created by entire missing data dimensions, or something 🤔🤔 Maybe whole long tails of dimensions with low data weight but high salience for constructing models complete enough for reasonable inference.
Like bazillions of photos of moving objects can’t make up for the feeling of weight when holding an object, or the sense of force generated by throwing it. So learning to throw objects entirely from vision data will have weird uncanny effects, I suspect.
Adding weight and effort sensors to “complete the embodiment” will probably fix things. Otherwise you’ll get strange artifacts.
I’m also simultaneously impressed and underwhelmed by the effects of combining low-dimensional maps. Text and images are 2 low-dimensional maps of the world. Combining them seems to produce interestingly surreal and quasi-coherent effects but somehow misses the mark.
The errors seem to me like extremely rich super-versions of banal zero errors. In basic instrumentation, zero error is what an instrument actually reads when it should read zero. You correct it by simply adding a compensating offset to zero it out. It’s basic calibration.
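A minimal sketch of what that correction amounts to, assuming a simple additive offset model (the function names are illustrative, not from the thread):

```python
# Zero-error calibration, illustrative only.
# The instrument can apply the offset mechanically, but the reference "true zero"
# has to be supplied from outside, by a calibrator who can see what the instrument can't.

def zero_offset(reading_at_true_zero: float) -> float:
    """Offset to add so the instrument reads 0 when it should read 0."""
    return -reading_at_true_zero

def corrected(raw_reading: float, offset: float) -> float:
    """Apply the compensating offset to every subsequent reading."""
    return raw_reading + offset

# A scale shows 0.3 g with nothing on the pan; the calibrator declares that state "zero".
offset = zero_offset(0.3)
print(corrected(5.3, offset))  # -> 5.0
```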
But note what it takes to correct a zero error. A very high-embodiment instrument, a meatbag ape, often working with additional embodiment extensions like magnifiers or spectrum shifters, arbitrarily decides what “zero” is, and TELLS the lower-dimensional instrument.
There’s a philosophically worrying thing going on here: a more embodied but potentially less intelligent being is “closing the world” for a less embodied but potentially more intelligent being, by tweaking knobs based on dimensions unseen by the latter.
The problem here is that for the AI to correct even a simple zero error, it may have to expand its world enormously along data-scarce dimensions. This is not the type of problem that will be fixed by more Moore’s law. It’s not a weak-compute problem; it’s a weak-boundary problem.
I’m turning into an AI contrarian of sorts. I think AGI is ill-posed bullshit, and that embodiment and information boundaries matter a lot more than raw processing power. Most AI people currently believe the exact opposite.
I suspect the “moving goalposts” syndrome will reverse within the decade. Instead of humans moving goalposts when AI beats them, we’ll have to move them to get from 99% to 100% by compensating for uncanny embodiment-related failures, because for 0-to-1 things, 99% is no better than 0.
The thing people don't seem to appreciate about the brain-in-a-vat or Nozick experience-machine thought experiments is... all the "intelligence" is really in the "vat" or "machine hookups".
The embodiment-first understanding of intelligence is basically the view that "the vat is the interesting thing".