
The foundation of this way of thinking is a complex-systems analogue to what physicists have lately been calling ontic-structural realism. Above my paygrade to explain the physics, but Kenneth Shinozuka wrote a great guest post for me about it.
The central salient aspect of intelligence in this view is the *continuity of identity*: a smoothness in what it means to be something in structural terms. Ken explained it via a reading of the Ship of Theseus fable in terms of physics symmetry preservation, etc.
Let me relate this to the IQ++ way of thinking, which has its utility. In this view, the idea of a “g factor” that correlates robustly with certain abilities for the human-form-factor of “intelligence” is something like “latitude” for a planet’s weather. An ℓ-factor.
Is the ℓ-factor important in understanding weather/climate? It does correlate strongly with weather patterns. If “snow-math ability” is your measure, then northern latitudes are “smarter,” etc. But there’s something fundamentally besides-the-point about that as a starting point.
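The latitude analogy can be made concrete with a toy simulation (my illustration, not from the thread; the loadings and noise levels are arbitrary assumptions): one shared latent variable drives several observed “abilities,” and the first principal component recovers it — which is roughly what a g/ℓ-factor is.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# One latent variable (the "ell-factor" / latitude) drives three observed
# "abilities", each with its own independent noise. Loadings are made up.
ell = rng.normal(size=n)
abilities = np.column_stack([
    0.8 * ell + 0.6 * rng.normal(size=n),  # "snow-math"
    0.7 * ell + 0.7 * rng.normal(size=n),
    0.6 * ell + 0.8 * rng.normal(size=n),
])

# The first principal component recovers the shared factor well...
centered = abilities - abilities.mean(axis=0)
u, s, vt = np.linalg.svd(centered, full_matrices=False)
pc1 = u[:, 0] * s[0]
r = abs(np.corrcoef(pc1, ell)[0, 1])
print(f"corr(PC1, latent factor) = {r:.2f}")

# ...but PC1 is a summary statistic of the correlations, not a mechanism:
# knowing "latitude" tells you little about how weather actually works.
```

The correlation comes out high, which is exactly the point: a strong correlate can still be besides-the-point as an explanation.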
See also Chris Lattner’s commentary, which raises some more ideas. This is an AI conversation I actually can sink my teeth into and enjoy. I haven’t felt that way since the Dennett/Hofstadter era of philosophizing in The Mind’s I, which I read in 1996
In a way, just as there was an AI winter technologically between ~1990-2002, there was a philosophical dry spell. Moravec’s paradox had been identified in the 80s, but we didn’t have the tech to attack it till like 2009-10, or new phenomenology to think about till like 2015.
I do think the Singularity crowd helped keep the conversation going during the extended winter, and it’s important to acknowledge their institution-building contributions esp via founding influence on OpenAI, DeepMind etc. But both the tech and the conversation are MUCH bigger.
Reminds me of something similar in early computing history: for some California-obsessed people, the influence of the hippie counterculture on early computing in 1960-1985 via SRI, PARC, Stanford is the whole story, but objectively it’s like 1/5th of the story.
This is by now well known to historians of computing. Somebody with a deeper understanding of AI history should do a similar “thick” version of the AI story. Both dismissing the Singularity crowd as amateur entryists and treating them as the whole story are bad historiography.
They mattered less than they believe, but more than critics are willing to give them credit for. Anyhow... back to the topic at hand. AI futures. What does the AI future look like?
I think:
1. General-purpose post-GPU hardware
2. Application-specific hardware optimization
3. An end to going faster than the Moore’s law ceiling
4. A software 2.0 stack that will evolve faster than people realize
5. Rapidly falling costs of AI compute
6. Smaller form factors
Ugh, broke threading further up, but this sub-thread of 3 tweets fits better here anyway.
Quote Tweet
This whole track of AI, btw, came from a whole different place... people trying to use GPUs for parallel computing, Moore’s law raising the ceiling, etc. It did not come from the pursuit of abstract science-fiction concerns. So those frames are likely to misguide.
What kind of a) tech trends and b) philosophical conversations can we expect on top of this basic outlook (which I know many agree with)? Key prelim question: are we due for another AI winter from hitting a new hardware ceiling and/or the paradigm limits of deep learning?
Afaict while the ensemble and society-of-mind approaches are super influential in *AI in general* (and beyond), they are marginal and strongly underindexed in the Bostrom-LW school of AI because they don’t point cleanly to AGI-like futures but much messier ones.
Quote Tweet
Replying to @vgr and @Aelkus
nah man this is off base. The person is arguing with a straw man in this article. your g Factor theory is as well. The society of mind model is super influential and ensemble intelligences are already state-of-the-art in most environments. it's not about some IQ obsession
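The claim that ensemble intelligences dominate in practice has a textbook-simple core, which can be sketched in a few lines (my toy, with made-up numbers): independent weak agents that are each right 60% of the time become a strong system under majority vote.

```python
import random

random.seed(0)

# Purely illustrative: each weak "agent" labels a binary input correctly
# with probability 0.6, independently. A majority vote over 51 such agents
# is right far more often than any single one of them.
def agent_correct(p=0.6):
    return random.random() < p

def majority_correct(n_agents=51, p=0.6):
    votes = sum(agent_correct(p) for _ in range(n_agents))
    return votes > n_agents // 2

trials = 2000
solo = sum(agent_correct() for _ in range(trials)) / trials
ensemble = sum(majority_correct() for _ in range(trials)) / trials
print(f"single agent ~ {solo:.2f}, 51-agent majority vote ~ {ensemble:.2f}")
```

The independence assumption is doing the heavy lifting, which is also why real ensembles are messier than this sketch.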
My intent with this thread was to try and broaden the public AI convo of the west coast tech scene. There is a weird divergence between what’s happening at the bleeding edge of the tech itself and the 2013-ish-vintage, eschatologically oriented “humans vs. AI race” conversation frames.
IOW the private conversations around AI tech inside companies look very different from the conversation in public fora. A broadening would be helpful.
To make my own biases clear: I started out in classical controls when I began grad school, and had landed in about a 40-30-30 mix of classical controls/robotics, GOFAI, and OR by the time I was done with my postdoc and out of research.
That was 2006, a few years before deep learning took off. My more recent POV has been informed by ~10y consulting for semiconductor companies. Plus tracking the robotics side closely. So I have situated-cognition, hardware-first biases. Specific rather than general intelligence.
Starting from the control theory end of things creates barbell biases. On the one hand you deal with problems like “motor speed control.” On the other hand you end up dabbling in “system dynamics” which is the same technical apparatus applied to complex systems like economies.
My rant about “system dynamics” stuff I’ll save for another day. It shares many features of Singularitarianism. The OG system dynamics Limits to Growth report rhymes closely with “runaway AGI” type thinking.
My most basic commitment might be this: there have been models of universal computers and universal function approximators since Leibniz, but that does NOT mean “general intelligence” is a well-posed concept. I don’t think general intelligences exist basically.
An intelligence is NOT a powerful universal function approximator wrapped in a “context.” An intelligence is a stable and continuous ontic-structural history for a specific starter lump of mass-energy. The primary way to “measure” it is in terms of how long it lives.
“Death” is dissolution of ontic-structural integrity for a *physical system*, and this destroys it as an existing intelligence. Ideas like uploads and mind-state-transfer are both ill-posed and uninteresting for anything complex enough to be called “intelligent.”
Unless of course you invent exact quantum-state cloning for macro-scale things. In which case teleporting to Alpha Centauri would be more interesting, and it wouldn’t be a way to cheat death.
Another way to think of it: intelligence is the whole territory of the physical system that embodies it. No reductive model-based state transfer preserving ontic-structural integrity and continuity will be possible. Cloning an intelligence is not like copying software code.
I’m not saying this quite right. An intelligence exists within a thermodynamic boundary that separates it from the environment but does not *isolate* it. The nature of the intelligence is entangled with the specific environment, and the boundary actually embodies much of it.
I’ll link to this 2017 thread I did on my idea of boundary intelligence. I need to revisit and update it. Again, obvious biases from control theory (of course I model boundaries as being maintained by a sensor-actuator feedback loop).
Quote Tweet
1/ I'd like to make up a theory of intelligence based on a 2-element ontology: boundary and interior intelligence
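The sensor-actuator reading of a boundary can be caricatured in a few lines (a hypothetical toy of mine, not the 2017 thread’s model): a system spends actuation to hold an internal state inside a viability band against environmental noise, and its “lifespan” — the measure of intelligence suggested above — is how long the boundary holds.

```python
import random

random.seed(1)

# Hypothetical toy: a system keeps an internal variable inside a viability
# band [-1, 1] against random environmental pushes, via a sensor-actuator
# loop at its boundary. "Death" = the state leaving the band.
def lifespan(gain, max_steps=2000):
    state = 0.0
    for t in range(1, max_steps + 1):
        disturbance = random.gauss(0, 0.3)   # environment leaks across the boundary
        state += disturbance - gain * state  # sense the drift, actuate against it
        if abs(state) > 1.0:                 # ontic-structural integrity lost
            return t
    return max_steps

def avg_lifespan(gain, runs=50):
    return sum(lifespan(gain) for _ in range(runs)) / runs

no_fb = avg_lifespan(0.0)
with_fb = avg_lifespan(0.8)
print("avg lifespan, no feedback  :", no_fb)
print("avg lifespan, with feedback:", with_fb)
```

Without feedback the state random-walks out of the band almost immediately; with even a crude proportional correction at the boundary it persists orders of magnitude longer.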