Thanks to a very instructive exchange with @ESYudkowsky (see a retweet of it), my overall impression of the AI risk crowd has gone from a group of interesting but misled oddballs to a dangerous group of monomaniacs who are as much of a threat as a benefit.
I am going to paraphrase here, so @ESYudkowsky should feel free to correct me if I get his views wrong... I have no interest in straw vulcaning.
He said, roughly, that there is a 98% chance of something called "Artificial General Intelligence" being developed, roughly a 95% chance it wipes out humanity and roughly a 1-2% chance it falls into the hands of some narrow elite.
Replying to @glenweyl
You literally did not understand the numbers I used. I at no point said it had a 95% chance of wiping out humanity. I gave no number whatsoever there. What I said was that the chance of bad actors gaining control of AGI was 1.5 orders of magnitude lower than the risk of... 1/
Replying to @ESYudkowsky @glenweyl
...everyone dying because nobody was in control of it. Your grasp of the rest of these ideas is at a similar level, you do not have any idea of what we believe, and I don't think that asking if you could manage to repeat just one sentence back literally was a bad response... 2/
Replying to @ESYudkowsky @glenweyl
...and I don't think you *can* understand the actual content of our words unless you can shift to a frame of mind where you can separate value-free empirical questions about AI from their political implications. But it's clear that I am not the right person to... 3/
Replying to @ESYudkowsky
There is nothing value-free in anything you said, any more than there is in “empirical facts” about racial differences in intelligence. I also find your epistemology deeply problematic and confused. I don’t think this means you are incapable of parsing English.
Replying to @glenweyl
"There is nothing value-free in what you said" makes me think I literally do not understand how you are using words. I would not be offended if you asked me to repeat something back to you.
Replying to @ESYudkowsky @glenweyl
Empirical questions are not wholly value-free because one is free to make choices in how one frames them, and those choices depend on one's values. Probabilities of events are only defined relative to a sample space, for instance. See https://srconstantin.wordpress.com/2015/04/30/choice-of-ontology/comment-page-1/
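The "probabilities are only defined relative to a sample space" point can be made concrete with the classic two-children puzzle. Here is a minimal Python sketch (an illustration added here, not part of the original thread): the same event, "both children are boys, given we know there is at least one boy," gets two different probabilities under two equally reasonable sample spaces, depending on how the information is framed as having arrived.

```python
from fractions import Fraction

# Two framings of: "a family with two children has at least one boy;
# what is the chance both are boys?" The event is the same, but the
# choice of sample space changes the answer.

families = [("B", "B"), ("B", "G"), ("G", "B"), ("G", "G")]

# Framing 1: sample space is families; condition on "at least one boy".
with_boy = [f for f in families if "B" in f]
p1 = Fraction(sum(f == ("B", "B") for f in with_boy), len(with_boy))

# Framing 2: we met one child at random and it was a boy; sample space
# is (family, observed child) pairs, conditioned on seeing a boy.
pairs = [(f, i) for f in families for i in (0, 1)]
boy_seen = [(f, i) for f, i in pairs if f[i] == "B"]
p2 = Fraction(sum(f == ("B", "B") for f, _ in boy_seen), len(boy_seen))

print(p1, p2)  # 1/3 vs 1/2
```

Neither answer is "wrong": each is correct relative to its sample space, and choosing between the framings is exactly the kind of choice the tweet above is pointing at.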
This insight is kind of an interpretive gloss on the No Free Lunch Theorem. You can't make a model that's really totally a "value-free" improvement over another: more accurate, for *all* possible minds, in *all* possible situations.
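That NFL gloss can be checked numerically in a toy setting (a sketch under the standard supervised-learning setup, not a proof): averaged over *every* possible boolean target function, a learner that uses the training data and a learner that ignores it score identically on inputs outside the training set.

```python
from itertools import product

# Toy No Free Lunch check: average off-training-set accuracy over all
# 2^8 = 256 boolean functions on 3-bit inputs, for two learners.

inputs = list(product([0, 1], repeat=3))   # the 8 possible inputs
train, test = inputs[:4], inputs[4:]       # fixed train/test split

def majority(train_labels, x):
    # Predict the most common training label (ties -> 1).
    return 1 if sum(train_labels) * 2 >= len(train_labels) else 0

def always_zero(train_labels, x):
    # Ignore the data entirely.
    return 0

def avg_test_accuracy(learner):
    total = hits = 0
    # Enumerate every f: inputs -> {0, 1} as a tuple of 8 labels.
    for labels in product([0, 1], repeat=8):
        f = dict(zip(inputs, labels))
        train_labels = [f[x] for x in train]
        for x in test:
            hits += learner(train_labels, x) == f[x]
            total += 1
    return hits / total

print(avg_test_accuracy(majority), avg_test_accuracy(always_zero))  # both 0.5
```

For any fixed training labeling, the test labels range uniformly over all possibilities across the enumerated functions, so every prediction rule hits exactly half of them; an improvement "for all possible situations" is impossible, and preferring one learner means assuming something about which functions matter.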
Lots of other ways to frame this, of course; it's a Nietzschean insight, and it seems related to some postmodernist insights (though I'm less familiar with those).
Of course in practice you are not talking to an arbitrary conceivable mind. You are talking to a human being, in fact a human who has many things in common with yourself, and you can justifiably say "C'mon, we share all the relevant values implicit in the statement I'm making."
So the interesting question is not "is this or isn't this a value-free discussion" but "ok, so *which* premises implicit in this framing do you not share?" "ok, there are some broad assumptions you might call political that I'm starting with; I'm ok with those; now what?"