
Afaict while the ensemble and society-of-mind approaches are super influential in *AI in general* (and beyond), they are marginal and strongly underindexed in the Bostrom-LW school of AI, because they don’t point cleanly to AGI-like futures but to much messier ones.
Quote Tweet
Replying to @vgr and @Aelkus
nah man this is off base. The person is arguing with a straw man in this article. your g Factor theory is as well. The society of mind model is super influential and ensemble intelligences are already state-of-the-art in most environments. it's not about some IQ obsession
My intent with this thread was to try to broaden the public AI convo of the west coast tech scene. There is a weird divergence between what’s happening at the bleeding edge of the tech itself and the 2013-ish vintage, eschatologically oriented “humans vs AI race” conversation frames.
IOW the private conversations around AI tech inside companies look very different from the conversation in public fora. A broadening would be helpful.
To make my own biases clear: I started out in classical controls in grad school and had landed in about a 40-30-30 mix of classical controls/robotics, GOFAI, and OR by the time I was done with my postdoc and out of research.
That was 2006, a few years before deep learning took off. My more recent POV has been informed by ~10y consulting for semiconductor companies. Plus tracking the robotics side closely. So I have situated-cognition, hardware-first biases. Specific rather than general intelligence.
Starting from the control theory end of things creates barbell biases. On the one hand you deal with problems like “motor speed control.” On the other hand you end up dabbling in “system dynamics” which is the same technical apparatus applied to complex systems like economies.
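The “motor speed control” end of the barbell can be sketched in a few lines: a discrete PI controller driving a first-order motor model toward a speed setpoint. All gains, time constants, and the plant model below are illustrative assumptions for the sketch, not anything specific from the thread.

```python
def simulate_pi_speed_control(setpoint=100.0, kp=0.5, ki=0.8,
                              tau=0.2, dt=0.01, steps=2000):
    """Simulate a PI speed controller on a first-order motor model."""
    speed, integral, history = 0.0, 0.0, []
    for _ in range(steps):
        error = setpoint - speed
        integral += error * dt
        command = kp * error + ki * integral        # PI control action
        # first-order plant: d(speed)/dt = (command - speed) / tau
        speed += (command - speed) / tau * dt
        history.append(speed)
    return history

trajectory = simulate_pi_speed_control()
print(abs(trajectory[-1] - 100.0) < 0.5)  # speed settles at the setpoint
```

The integral term is what removes steady-state error; pure proportional control would settle short of the setpoint.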
My rant about “system dynamics” stuff I’ll save for another day. It shares many features of Singularitarianism. The OG system dynamics Limits to Growth report rhymes closely with “runaway AGI” type thinking.
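The rhyme is easy to see in a toy stock-and-flow model: exponential growth drawing down a finite stock, then collapse once the stock is gone. The numbers below are invented for the sketch; this shows the overshoot-and-collapse shape, not the actual World3 model behind Limits to Growth.

```python
def run_overshoot(pop=1.0, resource=1000.0, growth=0.05,
                  consumption=0.1, decay=0.08, steps=200):
    """Toy stock-and-flow run: growth while the resource lasts, then collapse."""
    history = []
    for _ in range(steps):
        demand = consumption * pop
        if resource > demand:
            resource -= demand
            pop *= 1 + growth          # resources ample: exponential growth
        else:
            resource = 0.0
            pop *= 1 - decay           # resources exhausted: collapse
        history.append(pop)
    return history

traj = run_overshoot()
peak = max(traj)
print(peak > traj[-1])  # overshoot: the peak far exceeds the final state
```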
My most basic commitment might be this: there have been models of universal computers and universal function approximators since Leibniz, but that does NOT mean “general intelligence” is a well-posed concept. I don’t think general intelligences exist, basically.
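The universal-approximation point can be made concrete: a one-hidden-layer network with random tanh features and a least-squares readout fits a smooth function to high accuracy, and yet that fact on its own says nothing about “general intelligence.” The width, seed, and target function are arbitrary choices for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, np.pi, 200)[:, None]
target = np.sin(x).ravel()

# Random hidden layer (tanh features); only the linear readout is "trained",
# here in closed form by least squares.
w = rng.normal(size=(1, 50))
b = rng.normal(size=50)
features = np.tanh(x @ w + b)
readout, *_ = np.linalg.lstsq(features, target, rcond=None)

max_error = np.max(np.abs(features @ readout - target))
print(max_error < 0.05)  # 50 random features already fit sin(x) closely
```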
An intelligence is NOT a powerful universal function approximator wrapped in a “context.” An intelligence is a stable and continuous ontic-structural history for a specific starter lump of mass-energy. The primary way to “measure” it is in terms of how long it lives.
“Death” is dissolution of ontic-structural integrity for a *physical system*, and this destroys it as an existing intelligence. Ideas like uploads and mind-state-transfer are both ill-posed and uninteresting for anything complex enough to be called “intelligent.”
Another way to think of it: intelligence is the whole territory of the physical system that embodies it. No reductive model-based state transfer preserving ontic-structural integrity and continuity will be possible. Cloning an intelligence is not like copying software code.
I’m not saying this quite right. An intelligence exists within a thermodynamic boundary that separates it from the environment but does not *isolate* it. The nature of the intelligence is entangled with the specific environment, and the boundary actually embodies much of it.
I’ll link to this 2017 thread I did on my idea of boundary intelligence. I need to revisit and update it. Again, obvious biases from control theory (of course I model boundaries as being maintained by a sensor-actuator feedback loop).
Quote Tweet
1/ I'd like to make up a theory of intelligence based on a 2-element ontology: boundary and interior intelligence
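A minimal sketch of that two-element picture, on the stated assumption that the boundary is a sensor-actuator feedback loop: an interior variable is regulated near a setpoint, while a leak term keeps it coupled to, but not determined by, a drifting environment. All dynamics and gains are invented for illustration.

```python
import math

def maintain_boundary(setpoint=37.0, steps=400, dt=0.1,
                      gain=2.0, leak=0.1):
    """Hold an interior state near a setpoint across a leaky boundary."""
    interior = 20.0
    trace = []
    for t in range(steps):
        env = 15.0 + 5.0 * math.sin(0.05 * t)   # slowly drifting environment
        sensed = interior                        # sensor: read interior state
        actuation = gain * (setpoint - sensed)   # actuator: push to setpoint
        # leak term: the boundary couples, but does not isolate, the interior
        interior += dt * (actuation + leak * (env - interior))
        trace.append((interior, env))
    return trace

trace = maintain_boundary()
final_interior, final_env = trace[-1]
print(abs(final_interior - 37.0) < 2.0)   # interior regulated near setpoint
print(abs(final_env - 37.0) > 10.0)       # environment stays far from it
```

The feedback loop here is the “boundary intelligence”: remove it (set `gain` to zero) and the interior simply relaxes toward the environment.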