Conversation

Deleted previous version of the tweet where I mistakenly attributed it to Bret Victor rather than Maciej Cegłowski. That makes much more sense. I was surprised to find myself agreeing with what I thought was Victor. In my head “idlewords” somehow sounds close to “worrydream”
My diagnosis was always a kind of anti-projection: a) You think in a totalizing (INTJish) way and are impressed by its power. b) You see a machine that thinks in analogous ways and looks like it lacks your limits. c) You extrapolate its future as yourself minus biological limits.
Note this is specifically a critique of the Bostrom-LW vision of the future of AI, based on an IQ++ model of what intelligence is. Not of all possible futures for the tech. It’s one that commits to a sequential evolutionary model where the prefix “super” makes sense.
The reason I don’t bother engaging with this conversation is that my starting point is ontologically at the opposite pole from IQ++. I don’t find “entrance tests for bureaucratic industrial orgs to test aptitude for their legible functions” to be an interesting place to start.
Mine is: “the brain is a 100 billion neuron system that from the inside (“mind”) doesn’t *feel* like it has 100 billion elements, but more like dozens to 100s of high level salient emergent phenomena operating on a rich narrative and verbal memory... what else looks like that?”
The answers are things like markets, ecosystems, weather systems. Billions of atomic moving parts, but quasi-stable macro-phenomenology. There may be nothing it is “like” to be a “market” but setting aside the hard problem of consciousness it is in the brain-class of things.
The most interesting and salient thing about these systems is that they are coherent and stable in a thermodynamic sense, maintaining boundary integrity and internal structural identity continuity for periods of time ranging from tens to thousands of years.
The general ability to do that is the superset class that includes what we point to with words like “intelligence.” It’s not quite appropriate to apply to markets or weather, but it helps calibrate: Brains : intelligence : mind :: markets : ?? : ?? :: weather : climate? : Gaia?
The foundation of this way of thinking is a complex-systems analogue to what physicists have lately been calling ontic-structural realism. Above my paygrade to explain the physics but Kenneth Shinozuka wrote a great guest post for me about it.
The central salient aspect of intelligence in this view is *continuity of identity*: a smoothness in what it means to be something in structural terms. Ken explained it by reading the Ship of Theseus fable in terms of physical symmetry preservation, etc.
Let me relate this to the IQ++ way of thinking, which has its utility. In this view, the idea of a “g factor” that correlates robustly with certain abilities for the human-form-factor of “intelligence” is something like “latitude” for a planet’s weather. An ℓ-factor.
Is the ℓ-factor important in understanding weather/climate? It does correlate strongly with weather patterns. If intelligence were “snow-math ability,” then northern latitudes would be “smarter,” etc. But there’s something fundamentally beside-the-point about that as a starting point.
See also Chris Lattner’s commentary, which raises some more ideas. This is an AI conversation I actually can sink my teeth into and enjoy. I haven’t felt that way since the Dennett/Hofstadter era of philosophizing in The Mind’s I, which I read in 1996
In a way, just as there was an AI winter technologically between ~1990-2002, there was a philosophical dry spell. Moravec’s paradox had been identified in the 80s, but we didn’t have the tech to attack it till like 2009-10, or new phenomenology to think about till like 2015.
I do think the Singularity crowd helped keep the conversation going during the extended winter, and it’s important to acknowledge their institution-building contributions esp via founding influence on OpenAI, DeepMind etc. But both the tech and the conversation are MUCH bigger.
Reminds me of something similar in early computing history: for some California-obsessed people, the influence of the hippie counterculture on early computing in 1960-1985 via SRI, PARC, Stanford is the whole story, but objectively it’s like 1/5th of the story.
In brief, if you want to look it up, there are like 5-6 strands to the story: 1. Semiconductors/Bell labs/Noyce... 2. IAS machine/von Neumann track 3. California track 4. DoD track 5. MIT track 6. Control and cybernetics
This is by now well known to historians of computing. Somebody with a deeper understanding of AI history should do a similar “thick” version of the AI story. Both dismissing the Singularity crowd as amateur entryists and treating them as the whole story is bad historiography.
They mattered less than they believe, but more than critics are willing to give them credit for. Anyhow... back to the topic at hand. AI futures. What does the AI future look like?
I think: 1. General purpose post-GPU hardware 2. Application-specific hardware optimization 3. An end to going faster than the Moore’s law ceiling 4. A software 2.0 stack that will evolve faster than people realize 5. Rapidly falling costs of AI compute 6. Smaller form factors
Ugh broke threading further up but this sub thread of 3 tweets fits better here anyway
Quote Tweet
This whole track of AI btw, came from a whole different place... people trying to use GPUs for parallel computing, Moore’s law raising the ceiling, etc. It did not come from pursuit of abstract science-fiction concerns. So those frames are likely to misguide.
What kind of a) tech trends and b) philosophical conversations can we expect on top of this basic outlook (which I know many agree with)? Key prelim question: are we due for another AI winter due to hitting a new hardware ceiling and/or paradigm-limits of deep learning?
Afaict while the ensemble and society-of-mind approaches are super influential in *AI in general* (and beyond), they are marginal and strongly underindexed in the Bostrom-LW school of AI because they don’t point cleanly to AGI-like futures but much messier ones.
Quote Tweet
Replying to @vgr and @Aelkus
nah man this is off base. The person is arguing with a straw man in this article. your g Factor theory is as well. The society of mind model is super influential and ensemble intelligences are already state-of-the-art in most environments. it's not about some IQ obsession
My intent with this thread was to try and broaden the public AI convo of the west coast tech scene. There is a weird divergence between what’s happening at the bleeding edge of the tech itself and the 2013-ish vintage, eschatologically oriented “humans vs AI race” conversation frames.
IOW the private conversations around AI tech inside companies look very different from the conversation in public fora. A broadening would be helpful.
To make my own biases clear, I started out in classical controls when I started grad school and had landed in about a 40-30-30 mix of classical controls/robotics, GOFAI, and OR by the time I was done with my postdoc and out of research.
That was 2006, a few years before deep learning took off. My more recent POV has been informed by ~10y consulting for semiconductor companies. Plus tracking the robotics side closely. So I have situated-cognition, hardware-first biases. Specific rather than general intelligence.
Starting from the control theory end of things creates barbell biases. On the one hand you deal with problems like “motor speed control.” On the other hand you end up dabbling in “system dynamics” which is the same technical apparatus applied to complex systems like economies.
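A minimal sketch of what “same technical apparatus” means here, with made-up parameter values (none of this is from the thread): a motor speed-control loop and a toy “system dynamics” stock-and-flow model both reduce to integrating a first-order ODE; only the labels on the state change.

```python
import numpy as np

def simulate(deriv, x0, dt=0.01, steps=1000):
    """Forward-Euler integration of dx/dt = deriv(x)."""
    x = np.empty(steps + 1)
    x[0] = x0
    for k in range(steps):
        x[k + 1] = x[k] + dt * deriv(x[k])
    return x

# 1) Motor speed control under proportional feedback:
#    dw/dt = (-w + Kp*(w_ref - w)) / tau   (hypothetical parameters)
tau, Kp, w_ref = 0.5, 4.0, 100.0
motor = simulate(lambda w: (-w + Kp * (w_ref - w)) / tau, x0=0.0)

# 2) Toy system-dynamics stock: dS/dt = growth*S - depletion*S**2
#    (illustrative rates, not calibrated to any real economy)
growth, depletion = 0.03, 1e-4
economy = simulate(lambda S: growth * S - depletion * S**2,
                   x0=10.0, dt=1.0, steps=500)

print(f"motor speed settles near {motor[-1]:.1f}, "
      f"stock settles near {economy[-1]:.1f}")
```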
My rant about “system dynamics” stuff I’ll save for another day. It shares many features of Singularitarianism. The OG system dynamics Limits to Growth report rhymes closely with “runaway AGI” type thinking.