this is patently ridiculous. Nobody knows what AGI risk is because nobody ever built anything like that. One may as well put vacuum catastrophe, contagious verbal basilisks and minimum viable ET invasion fleet on there. https://twitter.com/primalpoly/status/1058022980716920834 …
They all rely on a ton of entirely speculative assumptions (just like "minimally viable alien invaders" relies on the assumption that a reasonably dangerous alien capable of starflight-grade cryptobiosis can exist). Speculating about risk in poorly studied domains is a waste of time

(or a really cute grift - incidentally, someone should fund HEIDAC (hostile extraterrestrial intelligence detection and countermeasures) initiatives, because speculating about imaginary aliens is the job I was born for (unless sex work is on the table, in which case that and HEIDAC))
End of conversation
New conversation
The difference is that bioweapons (1) are much better grounded in empirical evidence (2) advances in machine learning will likely make it easier to engineer them (3) the bioterror threat might come sooner than AGI.
Well, my pet bioweapon spook is less "human-targeted superplague" and more "death of grass" (more realistically limited, perhaps, to staple crops, which frankly makes it massively easier, because genetic diversity among those is piss poor). BAM! Easily 3-4 billion humans dead.
New conversation
That doesn't mean AI is the greatest threat to humanity, because it's not.