Mine is: “the brain is a 100-billion-neuron system that from the inside (“mind”) doesn’t *feel* like it has 100 billion elements, but more like dozens to hundreds of high-level, salient emergent phenomena operating on a rich narrative and verbal memory... what else looks like that?”
The answers are things like markets, ecosystems, and weather systems: billions of atomic moving parts, but quasi-stable macro-phenomenology. There may be nothing it is “like” to be a “market,” but, setting aside the hard problem of consciousness, it is in the brain-class of things.
The most interesting and salient thing about these systems is that they are coherent and stable in a thermodynamic sense, maintaining boundary integrity and continuity of internal structural identity for periods ranging from tens to thousands of years.
The general ability to do that is the superset class that includes what we point to with words like “intelligence.” The word isn’t quite appropriate for markets or weather, but it helps calibrate the analogy: brains : intelligence : mind :: markets : ?? : ?? :: weather : climate? : Gaia?
The foundation of this way of thinking is a complex-systems analogue to what physicists have lately been calling ontic structural realism. It’s above my pay grade to explain the physics, but Kenneth Shinozuka wrote a great guest post for me about it.
The central salient aspect of intelligence in this view is *continuity of identity*: a smoothness in what it means to be something in structural terms. Ken explained it by reading the Ship of Theseus fable in terms of physical symmetry preservation, etc.
Let me relate this to the IQ++ way of thinking, which has its utility. In this view, the idea of a “g factor” that correlates robustly with certain abilities for the human form-factor of “intelligence” is something like “latitude” for a planet’s weather. An ℓ-factor.
Is the ℓ-factor important in understanding weather/climate? It does correlate strongly with weather patterns. If intelligence were “snow-math ability,” then northern latitudes would be “smarter,” etc. But there’s something fundamentally beside-the-point about that as a starting point.
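To make “g factor” concrete: it’s usually operationalized as the first principal component of a battery of correlated test scores, a single coordinate that correlates with many observables without explaining the underlying dynamics. A minimal sketch with made-up synthetic data (my illustration, not anything from the thread):

```python
# Minimal sketch: "g" as the first principal component of correlated test scores.
# Entirely synthetic data; illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n_people = 1000

# Hypothetical model: one latent ability plus test-specific noise drives 5 scores.
latent = rng.normal(size=n_people)
loadings = np.array([0.9, 0.8, 0.7, 0.6, 0.5])
scores = np.outer(latent, loadings) + rng.normal(scale=0.5, size=(n_people, 5))

# The top eigenvector of the correlation matrix plays the role of "g".
corr = np.corrcoef(scores, rowvar=False)
eigvals, _ = np.linalg.eigh(corr)       # eigenvalues in ascending order
g_share = eigvals[-1] / eigvals.sum()   # variance explained by the top factor
print(f"top factor explains {g_share:.0%} of score variance")
```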
This whole track of AI, btw, came from a whole different place: people trying to use GPUs for parallel computing, Moore’s law raising the ceiling, etc. It did not come from the pursuit of abstract science-fiction concerns. So those frames are likely to mislead.
I suspect that to do well with this stuff, you have to kinda toss all that aside and focus on the real existing things: build mental models of what they actually are, down at the sparse matrix multiplication level (a sketch of which follows), and build up situated abstractions application by application.
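To anchor “the sparse matrix multiplication level”: a minimal sketch of a sparse matrix-vector multiply in CSR (compressed sparse row) form, the kind of primitive these systems actually bottom out in. The toy matrix is made up for illustration:

```python
# Minimal CSR (compressed sparse row) matrix-vector multiply.
import numpy as np

def csr_matvec(data, indices, indptr, x):
    """Compute y = A @ x where A is stored as CSR arrays."""
    n_rows = len(indptr) - 1
    y = np.zeros(n_rows)
    for row in range(n_rows):
        # Nonzeros of this row live at data[indptr[row]:indptr[row+1]].
        for k in range(indptr[row], indptr[row + 1]):
            y[row] += data[k] * x[indices[k]]
    return y

# Toy 3x3 matrix with 4 nonzeros: [[1,0,2],[0,3,0],[0,0,4]]
data    = np.array([1.0, 2.0, 3.0, 4.0])
indices = np.array([0, 2, 1, 2])   # column index of each nonzero
indptr  = np.array([0, 2, 3, 4])   # where each row starts in data/indices
x = np.array([1.0, 1.0, 1.0])
print(csr_matvec(data, indices, indptr, x))  # -> [3. 3. 4.]
```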
Divergent rather than convergent understandings. An anthropological understanding. Software 2.0 is a better term than AI, since it has less baggage, but it unfortunately makes the same linear-evolution framing error, suggesting a Software ∞.0 as the evolutionary asymptote. Still.