https://xkcd.com/1968/ is #TheLastDerail in a nutshell. I can try to imagine hypotheses, but I'm not sure why Randall Munroe thinks this is a clever thing to say. If I wrote a fictional character saying this straight out, I'd be accused of writing a 2D straw character.
-
-
I’m in fact a card-carrying nerd myself; I’m not one to shy away from nerdiness. But if you really think that weaponized drone swarms coordinated by machines processing massive amounts of information in real time don’t fall under “AI risk”... well, then that’s what you think
-
Then I'm uninterested in AI risk. I specialize in the mostly unrelated issues of AGI. Of course they're both made of computers, and likewise computers and assault rifles are both made of matter; but AI and AGI and assault rifles all three have few problems or solutions in common.
-
(*Rarely* there is work on ML robustness general enough that it might genuinely scale up to AGI or to components of AGI. "Adversarial examples" is one honest and unforced example that comes to mind.)
-
I disagree with you on most of that, but I’m happy to have you focus entirely on AGI, which you do very well, while Randall and I fret about lesser machine intelligences that might nonetheless be civilizational game changers
End of conversation
New conversation
-
-
People have a mental model of AI risk where they say certain things have X amount of intelligence in them, then say that in order to solve the problems of intelligence level X+1 you have to solve them for level X first. That's why drones get placed on the same scale as AGI.
-
Maybe ppl do, but that’s not what I’m suggesting and I would challenge anyone who argued it’s all connected linearly like that. What we have here is just disagreement on what falls into the category “AI risk”, plus concern that AGI talk is being derailed, which I just don’t see
-
Think sets, with AGI being a subset of AI, not a linear succession of it. Make sense?
-
I would be ok with that if there were no confusion about AGI overlapping with other problems in the set of AI problems, just as we don't consider gun safety to overlap with AGI safety. The issue seems to be the belief that we've made progress on AGI by working on drone problems.
-
No one here is confused about that. No one made that claim. You’re debating other ppl who aren’t in the room right now. If they happen to show up then yeah, I’m with you on that one
-
If xkcd isn't making that claim, then his argument is identical to throwing global warming onto the timeline in place of killer drones, but I don't think that's quite the argument he's going for.
End of conversation