
My own initial reaction to gpt2 helps me understand why pre-modern tribes reacted to being filmed and shown the footage as though the camera were stealing their souls. Except I *like* the idea instead of being horrified by it. Unfortunately, it’s as untrue for AIs as for film cameras.
In fact, gpt2 has helped me clarify exactly why I think the moral/metaphysical panic around AI in general, and AGI in particular, is easily the silliest thing I’ll see in my lifetime. It’s the angels-on-a-pinhead concern of our time. Not even wrong.
AI won’t steal our souls
AI won’t terk ehr jerbs
AI won’t create new kinds of risk
AI isn’t A or I
I’d call it “cognitive optics”... it’s a bunch of lenses and mirrors that reflect, refract, and amplify human cognition. AIs think in the same way telescopes see. Ie, they don’t.
“AIs reflect/reproduce our biases” misses the point by suggesting you could prevent that. That’s ALL they (deep learning algos) do. Biases are the building blocks. Take that out and there’s nothing left. Eg, gpt2 has picked up on my bias towards words like “embody” and “archetype”.
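To make “biases are the building blocks” concrete, here’s a toy sketch (invented mini-corpus, nothing to do with gpt2’s internals): a bigram model is nothing but a table of co-occurrence counts, and “removing the bias” degenerates it into uniform noise.

    import random
    from collections import defaultdict, Counter

    # Toy corpus standing in for an author's writing; the repeated
    # words ("embody", "archetype") are the "bias" the model picks up.
    corpus = ("we embody the archetype and the archetype shapes us "
              "to embody an archetype is to embody a pattern").split()

    # A bigram model is nothing but counted co-occurrence biases.
    bigrams = defaultdict(Counter)
    for prev, word in zip(corpus, corpus[1:]):
        bigrams[prev][word] += 1

    def sample(prev, debias=False):
        counts = bigrams[prev]
        if not counts:                         # dead end: restart anywhere
            return random.choice(corpus)
        if debias:
            # "Remove the bias": every continuation equally likely.
            return random.choice(list(counts))
        words, weights = zip(*counts.items())
        return random.choices(words, weights=weights)[0]

    # Biased sampling reproduces the corpus's word preferences;
    # debiased sampling is uniform noise over the vocabulary graph.
    word, out = "to", ["to"]
    for _ in range(10):
        word = sample(word)
        out.append(word)
    print(" ".join(out))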
Taking the bias out of AI is like taking the economy out of a planned economy. You’re left with regulators and planners with nothing left to regulate and plan. Or the lenses and mirrors out of a telescope. You’re left with a bunch of tubes with distortion-free 1x magnification.
None of this is to say a useful and powerful new class of tools isn’t emerging. It definitely is. But you can’t just slap a sensor-actuator loop onto a tool and install a utility function into it and define that to be sentience. That’s just regular automation.
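To see why that’s just a thermostat: here’s a minimal sensor-actuator loop with a “utility function” bolted on (everything in this sketch is invented for illustration).

    def utility(temp, setpoint=20.0):
        # The "installed utility function": distance from a setpoint.
        return -abs(temp - setpoint)

    def control_step(temp, setpoint=20.0):
        # Sensor-actuator loop: heat if too cold, cool if too hot.
        return 0.5 if temp < setpoint else -0.5

    temp = 12.0
    for _ in range(20):
        temp += control_step(temp)
    print(temp, utility(temp))  # settles near the setpoint; still a thermostat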
I’m tired of this so-called “debate”. It’s a spectacular nothingburger attempt to narrativize a natural, smooth evolution in computing power as a phase shift into a qualitatively different tech regime. There *are* real phase shifts underway, this just ain’t one of them.
Phase transitions in tech evolution do not usually have human-meaningful interpretations. When we went from steam power to electric, big changes resulted. That was a phase transition. But we didn’t decide to call electricity “artificial glucose” or motors “artificial muscles” 😶
The output of gpt2 is at once deeply interesting and breathtakingly disappointing. I really was hoping to replace myself with a very small shell script but sadly that outcome ain’t on this vector of evolution, no matter how much compute you throw at it.
If you take out the breathless narratives, I honestly can’t tell how an apathetic AGI that squashes us on the way to paperclip maximization is any different from Australian wildfires or an asteroid headed at us. Yes it could destroy us. No there’s nothing “intelligent” about it.
Anthropomorphic design has good justifications (Asimov saw this in 1950). Designing swap-in functional replacements for ourselves cheaply evolves legacy infrastructure. Driverless cars are valuable because driver-cars are a big sunk cost. Without them, we’d automate differently.
But the possibility of anthropomorphic design shouldn’t lead us to reductively misread a tech evolution anthropomorphically (let alone anthropocentrically). We call it pottery, not container-statues, because molding clay ain’t about us. It’s about the properties of clay.
Key diff between AlphaGo Zero and gpt2 worth mulling: AGZ discarded human training data and conquered Go playing against itself. That can’t happen with gpt2, because there are no competition rules or goals for language outside of closed-world subsets like crosswords.
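For concreteness, a skeleton of what an AGZ-style self-play loop looks like (all names invented for this sketch; the real learning update and Go scorer are elided as placeholders). The loop exists only because score() exists: Go’s rules supply a win/loss signal with no humans involved, and there is no analogous score() for open-ended language.

    import random

    def score(game):
        # The crux: the rules of Go alone decide who won. Placeholder here.
        return random.choice([+1, -1])

    def play_game(policy):
        # Self-play: the same policy generates both sides' moves.
        return [policy() for _ in range(10)]

    def improve(policy, game, result):
        # Placeholder for the real update toward winning play.
        return policy

    policy = lambda: random.choice(["move_a", "move_b"])
    for _ in range(1000):      # the closed loop: play yourself, learn, repeat
        game = play_game(policy)
        policy = improve(policy, game, score(game))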
Now if Boston Dynamics robots evolved an internal language in the process of learning how to survive in an open environment, that would at least be comparable to how I use language (in a survival closed loop in a social milieu).
But that example suggests the link between survival, intelligence, and computation is much subtler. If you wanted to build tech that simply solved for survival in an open environment, you’d be more likely to draw inspiration from bacteria than from dogs or apes.
The only reasons to cast silicon-based computation into human-like form are a) to replace ourselves cheaply in legacy infrastructure, or b) to scare ourselves for no good reason.
This is easy to see with arms and legs. Harder to see with mental limbs like human language. Asimov had this all figured out 50 years ago. The only reason AI is a “threat” is that the benefits of anthropomorphic computation to some will outweigh its costs to many, which is fine.
Non-anthropomorphic computation otoh is not usefully illuminated by unmotivated comparisons to human capability.
AlphaGo can beat us at Go.
A steam engine can outrun us.
Same story. More geewhiz essentialism, that’s it.
Same steady assault on human egocentricity that’s been going on since Copernicus.
Not being the center is not the same as being “replaced.”
Risks are not the same as malevolence.
Apathetic harm from a complex system is not the same as intelligence at work.
“Intelligence” is a geocentrism kinda idea. The belief that “our” thoughts “revolve around” us, like the skies seem to revolve around the earth. “AI” is merely the ego-assaulting discovery that intelligence is just an illusion caused by low-entropy computation flows passing through us.
What annoys me about “AI” understandings of statistical algorithms is that they obscure genuinely fascinating questions about computation. For example, it appears any Universal Turing Machine (UTM) can recover the state of any other UTM given enough sample output and memory.
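A finite-state toy of that claim, far weaker than the full UTM version but concrete (the two-state generator here is invented): one process sees only the output tape of a hidden machine and reconstructs its transition table from sample statistics alone.

    import random
    from collections import defaultdict, Counter

    # Hidden "machine": a tiny two-state generator standing in for the
    # much stronger UTM claim above.
    TRANSITIONS = {"A": [("A", 0.9), ("B", 0.1)],
                   "B": [("A", 0.5), ("B", 0.5)]}

    def run(n, state="A"):
        tape = []
        for _ in range(n):
            states, probs = zip(*TRANSITIONS[state])
            state = random.choices(states, weights=probs)[0]
            tape.append(state)
        return tape

    # Observer: never sees the machine, only its output tape, and
    # rebuilds the transition table from sample statistics.
    def recover(tape):
        counts = defaultdict(Counter)
        for prev, nxt in zip(tape, tape[1:]):
            counts[prev][nxt] += 1
        return {s: {t: c / sum(cs.values()) for t, c in cs.items()}
                for s, cs in counts.items()}

    print(recover(run(100_000)))  # converges to TRANSITIONS as n grows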
This strikes me as more analogous to a heat engine locally reversing entropy than to “intelligence”. But nobody studies things like gpt2 in such terms. Can we draw a Carnot-cycle-type diagram for it? What efficiency is possible?
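For reference, the two textbook quantities such an analysis would presumably start from (standard thermodynamics, nothing specific to gpt2): Carnot efficiency between two reservoirs, and the Landauer bound on the energy cost of erasing one bit.

    % Maximum efficiency of any heat engine run between reservoirs T_h > T_c:
    \eta_{\mathrm{Carnot}} = 1 - \frac{T_c}{T_h}

    % Landauer bound: minimum dissipation to erase one bit at temperature T
    % (k_B is Boltzmann's constant):
    E_{\min} \ge k_B T \ln 2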
The tedious anthropocentric lens (technically the aspie-hedgehog-rationalist projective lens) stifles other creative perspectives because of the appeal of angels-on-a-pinhead bs thought experiments like simulationism. Heat engines, swarms, black holes, fluid flows...
Most AI watchers recognize that the economy and complex bureaucratic orgs are also AIs in the same ontological sense as the silicon-based ones, but we don’t see the same moral panic there. When in fact both have even gone through paperclip-maximizer-type phases. Why?
I’ll tell you why. Because they don’t lend themselves as easily to anthropomorphic projection, or to being recognizably deployed in contests like beating humans at Go. Markets beat humans at Go via prizes. Bureaucracies do it via medals and training.