In the past I have given credit where it was due to Eliezer Yudkowsky for not explicitly advocating violent solutions to the problems with AI development that by his own admission only he and a few other (mostly nontechnical) people see on the horizon
He crossed that line today
Since he has graduated from begging the question to openly stating that blowing things and people up is a reasonable thing to do, I think we should dispense with the politeness that has been the norm around these discussions in the past
EY does not know what he's talking about
He and the other hardline anti-AI cultists are out of their depth, both in terms of command of basic technical elements of this field but also in terms of their emotional states
This is a multidecade anxious fixation calling for air strikes in Time, not a rational person
The people working on AI systems are not stupid and are not reckless, in fact they have been astonishingly and excessively conservative in deploying these systems given how much potential transformer models have to help people
Many things are coming, Skynet is not one of them
Every day we delay deployment of powerful tools for augmenting human cognition robs kids of tutor systems, deprives scientists and engineers of an interactive natural language index atop our collective knowledge, and ultimately kills people with soon-to-be-curable diseases
We have seen this movie before
Myopic cowards who have never worked a real problem seriously in their lives assume that all problems are unworkable and ban people who can actually solve problems from doing so
Applying this to nuclear energy directly caused global warming
Btw since we're pulling out emotionally manipulative stuff like "my daughter lost a tooth and that gave me a panic attack about GPT-4"...
A few of my family members have punishing inflammatory disorders
GPT-4 is demonstrably helpful with drug discovery
Banning it hurts them
If you're tempted to take Yudkowsky seriously go engage with his work on the specifics of existential AI risks for a bit, it's ridiculous on its face
This was the sort of thing that was entertaining to read if you're into niche scifi, less so when it involves calls for bombings
The scenario that Eliezer most often cites as a plausible minimum-effort strategy that an emerging superintelligence could use to kill everybody (it convinces a human to synthesize grey goo nanotech) involves lots and lots of highly implausible leaps
Drexlerian nanotech may be possible, grey goo scenarios are concerning, but much like magically emergent volitional superintelligence they are *highly speculative*
Saying this stuff will appear out of nowhere amounts to crying wolf and distracts from actual safety work
Time and energy spent dealing with Eliezer's high IQ version of a paranoid delusional complex is time and energy not spent constraining model read-write capabilities with code that is typechecked such that we have mathematical proofs that it cannot go out of bounds, for example
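To make the capability-constraint idea concrete, here is a toy Python sketch (hypothetical names, runtime enforcement only): the model process is handed a capability object instead of open(), so a write capability simply doesn't exist in its interface. Python cannot deliver the mathematical out-of-bounds proofs described above; that would take a real type system or OS-level enforcement. This is the shape of the idea, not the proof.

```python
from pathlib import Path


class ReadOnlySandbox:
    """Hypothetical capability handle: the only file operation the
    holder can express is a bounded read under an allowlisted root.
    """

    def __init__(self, root: Path) -> None:
        self._root = root.resolve()

    def read(self, relative: str) -> bytes:
        # Resolve and check containment so "../" tricks cannot escape
        target = (self._root / relative).resolve()
        if not target.is_relative_to(self._root):  # Python 3.9+
            raise PermissionError(f"{relative!r} escapes the sandbox")
        return target.read_bytes()

    # No write method exists: write capability is absent by construction.
```

The design choice is that safety comes from what the interface cannot say, not from a filter bolted on afterward.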
There are concrete things we can do to make AI systems safer, and there is plenty of unimplemented low-hanging fruit from other, more established areas of software engineering
If you're that worried about runaway AI go build a killswitch FPGA that triggers on network traffic
If you don't want an AI system copying its weights all over the place hash the weights out of band, compute a hash on any outbound network traffic, and then shut it off if any of those hashes show up
Afaik nobody is doing this right now, easy win, no Predator drones required
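The hash-the-weights scheme above can be prototyped in a few lines. A minimal sketch, with made-up names, assuming fixed-size chunk fingerprints and a brute-force sliding-window scan of outbound traffic; a real deployment would use rsync-style rolling hashes instead of rehashing every offset:

```python
import hashlib

CHUNK = 64  # demo-sized chunk; a real filter would hash larger blocks


def fingerprint_weights(weights: bytes) -> set[bytes]:
    """Out-of-band step: hash fixed-size chunks of the checkpoint."""
    return {
        hashlib.sha256(weights[i:i + CHUNK]).digest()
        for i in range(0, len(weights) - CHUNK + 1, CHUNK)
    }


def should_kill(outbound: bytes, fingerprints: set[bytes]) -> bool:
    """Trip the killswitch if any window of outbound traffic matches a
    known weight chunk. The byte-by-byte scan here is O(n * CHUNK);
    production would use rolling hashes to make it cheap."""
    return any(
        hashlib.sha256(outbound[i:i + CHUNK]).digest() in fingerprints
        for i in range(len(outbound) - CHUNK + 1)
    )


# Stand-ins for a real checkpoint and real network traffic
weights = bytes(range(256)) * 8
fingerprints = fingerprint_weights(weights)
benign_traffic = b"perfectly ordinary api response " * 8
exfil_attempt = b"header" + weights[128:512] + b"footer"
```

Because the fingerprints are computed out of band, the model never needs access to the monitor, which is the whole point.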
There is a telling lack of this sort of practical, implementable safety feature in the AI safety discourse, largely because people like Eliezer will barge into discussions and proclaim that the thing they're worried about will be too smart for it to apply
Quote Tweet (a reply to @mattparlmer): "That would require engineering and engineering is hard Matt."
If you refuse to engage with the systems as currently implemented by ML researchers and instead demand that we center discourse around magic beings with omniscience and omnipotence it gives the impression that your model is based in esoteric metaphysics and not actual physics
This is all well and good if such discourse is the sort of thing that happens on niche rationalist fora, but when it moves to the arena of public policy, and even beyond that to the most public possible exhortations to do drastic and violent things, it is no longer acceptable
It's tragicomic that one of the great proponents of not fooling yourself into mistaken thinking with cognitive biases, somebody from whom I learned so much, is making a fool of himself with extraordinary claims backed by an extraordinarily minimal body of evidence


