Replying to
To make my own biases clear: I started grad school in classical controls and had landed in about a 40-30-30 mix of classical controls/robotics, GOFAI, and OR by the time I was done with my postdoc and out of research.
That was 2006, a few years before deep learning took off. My more recent POV has been informed by ~10y consulting for semiconductor companies. Plus tracking the robotics side closely. So I have situated-cognition, hardware-first biases. Specific rather than general intelligence.
Starting from the control theory end of things creates barbell biases. On the one hand you deal with problems like “motor speed control.” On the other hand you end up dabbling in “system dynamics,” which is the same technical apparatus applied to complex systems like economies.
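
Since the same feedback/integration machinery really does serve both ends of that barbell, here is a minimal sketch (mine, not from the thread; every model form and constant is an illustrative assumption): one forward-Euler loop driving both a proportional motor-speed controller and a toy Limits-to-Growth-style stock-flow model.

```python
# Minimal sketch: one integrator, two ends of the barbell.
# All constants and model forms are illustrative assumptions.

def simulate(deriv, state, steps, dt):
    """Shared apparatus: integrate dx/dt = deriv(x) by forward Euler."""
    for _ in range(steps):
        state = [x + dt * dx for x, dx in zip(state, deriv(state))]
    return state

# One end: motor speed control (first-order motor + proportional feedback).
SETPOINT, KP, TAU = 100.0, 50.0, 0.5        # target rad/s, gain, time constant

def motor(state):
    (omega,) = state
    torque = KP * (SETPOINT - omega)        # controller: error -> torque
    return [(torque - omega) / TAU]         # d(omega)/dt

# Other end: a toy "system dynamics" stock-flow model of overshoot-and-collapse.
BIRTH, DEATH, USE, R0 = 0.04, 0.02, 0.001, 1000.0

def economy(state):
    pop, res = state
    births = BIRTH * pop * max(res, 0.0) / R0   # growth throttled by the resource stock
    return [births - DEATH * pop,               # d(pop)/dt
            -USE * pop if res > 0 else 0.0]     # d(res)/dt: extraction depletes the stock

print(simulate(motor, [0.0], steps=2000, dt=0.005))        # settles near SETPOINT
print(simulate(economy, [10.0, R0], steps=20000, dt=0.1))  # boom, then collapse
```

Same solver, same stocks-plus-flows ontology; only the interpretation of the variables changes.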
My rant about “system dynamics” stuff I’ll save for another day. It shares many features of Singularitarianism. The OG system dynamics Limits to Growth report rhymes closely with “runaway AGI” type thinking.
My most basic commitment might be this: there have been models of universal computers and universal function approximators since Leibniz, but that does NOT mean “general intelligence” is a well-posed concept. I don’t think general intelligences exist, basically.
An intelligence is NOT a powerful universal function approximator wrapped in a “context.” An intelligence is a stable and continuous ontic-structural history for a specific starter lump of mass-energy. The primary way to “measure” it is in terms of how long it lives.
“Death” is dissolution of ontic-structural integrity for a *physical system*, and this destroys it as an existing intelligence. Ideas like uploads and mind-state-transfer are both ill-posed and uninteresting for anything complex enough to be called “intelligent.”
Unless of course you invent exact quantum-state cloning for macro-scale things. In which case teleporting to Alpha Centauri would be more interesting, and it wouldn’t be a way to cheat death.
Another way to think of it: intelligence is the whole territory of the physical system that embodies it. No reductive model-based state transfer preserving ontic-structural integrity and continuity will be possible. Cloning an intelligence is not like copying software code.
Replying to
I’ll link to this 2017 thread I did on my idea of boundary intelligence. I need to revisit and update it. Again, obvious biases from control theory (of course I model boundaries as being maintained by a sensor-actuator feedback loop); a toy sketch of that loop follows after the quoted tweet.
Quote Tweet
1/ I'd like to make up a theory of intelligence based on a 2-element ontology: boundary and interior intelligence
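
As a gloss on “boundaries maintained by a sensor-actuator feedback loop,” here is a toy homeostat sketch (my illustration, not the author’s model; the setpoint, gains, and noise levels are all made-up assumptions): the “boundary” is just the gradient the loop keeps regulated between an interior state and a drifting environment.

```python
# Toy "boundary intelligence" homeostat: a sensor-actuator loop holding
# an interior variable against a drifting environment. All names and
# constants are illustrative assumptions.

import random

TARGET, GAIN, LEAK = 37.0, 0.5, 0.02   # interior setpoint, actuator gain, leakage across the boundary

def boundary_loop(steps=500, seed=0):
    rng = random.Random(seed)
    interior, env = TARGET, 20.0
    for _ in range(steps):
        env += rng.gauss(0.0, 0.5)                    # environment drifts (random walk)
        sensed = interior + rng.gauss(0.0, 0.1)       # noisy sensor reading
        effort = GAIN * (TARGET - sensed)             # actuator pushes toward the setpoint
        interior += effort + LEAK * (env - interior)  # leakage: the boundary is imperfect
    return interior, env

print(boundary_loop())   # interior holds near TARGET while env wanders
```

The point of the toy: the “intelligence” here is not the controller equation but the persistence of the interior/environment distinction it maintains.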
Replying to
I disagree pretty strongly with you on both counts. The hard problem is only hard because of the way it is posed. It's the same basic error as Searle's artificial separation of syntax and semantics (obviating pragmatics):
Replying to
Put philosophically: intelligence is a lens that focuses information in the environment. Contemporary statistical-inference AIs live in the informational equivalent of an industrial farming monoculture. Yes, they have access to infinite corn, but not to the variety of a real environment.