Persistence of situated subjective agent identity

Moravec’s paradox is really a statement about closed vs. open worlds, and intelligence-based versus survival-based reward functions. Open-world rewards are illegible because survival is illegible.
In the spirit of the “there is something it is like to be X” definition of subjective consciousness, a coherent definition of survival is: “there is something it is like to have persisted across environmental discontinuity X.” This does not imply consciousness, though. It is narrower.
Note that this is a situated definition that includes the discontinuity X. Survival is (a) finding a way to continue playing past X, (b) while maintaining a continuous player identity. The second condition might mean continuity of memory and learning: the new you is not a tabula rasa reboot.
Oddly enough, “winning once and for all” is an event X that an absolute winner does not survive. They have to find a way to continue playing, but they can’t continue the old game because they’ve exhausted it. On the other hand, continuing the game in a new way involves expanding the world.
What would it mean for DeepMind-type AIs to solve “survival” problems? I don’t think it would be that hard. In fact, very simple computer viruses arguably already survive in the wild if they persist past X = patch/update events by hiding in computers with sloppy security policies.
I think the problem is that we humans don’t know how to think about true survival competitions. It’s closer to battling cancer or fighting epidemics of mutating viruses than to a battle of intelligences. There is no end, and no “point” being proved. Just living to fight another day.
Asking “can humans beat AIs?” in an open world context is like asking, “can humans beat the common cold?” We can manage and contain it, but cold viruses already represent a non-human general intelligence that can survive us.
Confusion in interpreting what AIs have been achieving often stems from the fact that intelligence and survivability are coupled but not the same thing. When intelligence can be made legible (human brains, as opposed to ant swarms), we are tempted to mistake it for survivability.
You might say that any construction of “intelligence” is like a set of retrieval habits over a compressed map of a particular history of survival. Legibilizing the map yields a formalization. Grid : map of NY : NY :: IQ : intelligence : survival.
Intelligence correlates positively with survival when the map is more correct than not, especially where mortal threats are concerned. The “skill” aspect of intelligence is secondary; it’s mostly accuracy of map recall. And remember, the map is a metaphor here.
If you eliminate the “future is like the past” presumption, intelligence and survival become decoupled, or worse, negatively coupled. If you try efficient Manhattan-grid navigation in the right kind of perversely laid-out city, even a random walk will out-survive you.
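The random-walk claim can be made concrete with a toy simulation. This sketch is my own illustration, not from the thread: the hazard layout, grid size, and step rules are all assumptions. A “perverse” city places hazards exactly along the path a greedy Manhattan navigator predictably takes, so the legible strategy dies immediately while an illegible random walk often survives.

```python
import random

SIZE = 10
START, GOAL = (0, 0), (SIZE - 1, SIZE - 1)

# Adversarial city (assumed layout): hazards sit exactly on the greedy
# navigator's predictable route (right along row 0, then up the last column).
HAZARDS = {(x, 0) for x in range(1, SIZE)} | {(SIZE - 1, y) for y in range(1, SIZE - 1)}

def greedy_step(pos):
    # "Intelligent" navigation: always reduce Manhattan distance to the goal.
    x, y = pos
    return (x + 1, y) if x < GOAL[0] else (x, y + 1)

def random_step(pos):
    # Random walk: pick any in-bounds neighbor uniformly.
    x, y = pos
    moves = [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))]
    return random.choice([(mx, my) for mx, my in moves
                          if 0 <= mx < SIZE and 0 <= my < SIZE])

def survives(step_fn, max_steps=100):
    # Survival = still playing after max_steps (or reaching the goal)
    # without ever stepping on a hazard.
    pos = START
    for _ in range(max_steps):
        pos = step_fn(pos)
        if pos in HAZARDS:
            return False
        if pos == GOAL:
            return True
    return True

random.seed(0)
trials = 500
greedy_rate = sum(survives(greedy_step) for _ in range(trials)) / trials
random_rate = sum(survives(random_step) for _ in range(trials)) / trials
print(f"greedy survival: {greedy_rate:.2f}, random-walk survival: {random_rate:.2f}")
```

The greedy navigator is deterministic, so in this rigged city it dies on its first step every time; the random walker survives some fraction of trials simply by being unpredictable. The point is not that random walks are smart, but that a strategy optimized against a map fails badly when the territory is chosen adversarially against that map.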
