Has anyone written a case against AI x-risk where (a) it is clear they understand the case *for* x-risk superbly well; (b) enough so that proponents of x-risk would agree, "yep, they really understand the argument"?
One reason I admire Rob's thinking about the foundations of quantum mechanics: Rob can often explain a point of view he strongly disagrees with much more convincingly than proponents of that pov can
This is a superpower, and it'd be great for thinking about AI risk
Years ago, I saw Michael Kirby - a High Court Justice in Australia - produce a list of reasons Australia *should* have a Bill of Rights. I agreed! Then he produced a list of reasons Australia *shouldn't* have a BoR. I agreed! Then he...
... synthesized the two points of view. The main effect of the synthesis was to make me think the High Court was in good hands
One curiosity: I've been sent a bunch of stuff written by people who clearly don't understand the case for x-risk at all
I feel like I asked for a non-alcoholic drink in a bar and they gave me a tequila because "that's a good drink, I like that one!"
I'm asking for a case written by people who understand the case *very well*, not "not at all" or "a little bit" or "sort of"
Basically, I want something 100x as strong as this:
(ChatGPT cut itself off for some reason, but the point is hopefully clear. Note that it's already better than at least some accounts...)
Katja Grace's "Counterarguments to the Basic AI Risk Case" is the best I know of on this:
Thanks! I feel silly for not having read this before! I don't understand why I haven't!
What does AI x-risk mean to you? There are plenty of such cases written against full-on Yudkowskyian doom, but it's hard to imagine a case for "there's a 100% chance that bio humans still exist a billion years from now".
Read what I wrote. I'd say "carefully", but frankly just reading it would be a good start