Spot on.
Fussing over the existential threat of AI, as @elonmusk does, is at best distracting and at worst dangerous.
There exist clear and present issues now... and not just WRT AI but computing in general. https://twitter.com/randal_olson/status/983744669182976000
We could compare which problem eradicates how many total hours of quality life experience (= damage) with what probability (= risk). Could you offer an informed opinion on the damage and risk of narrow AI vs. a bad AGI takeoff?
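(A minimal sketch of the comparison being proposed here, in Python: expected damage = hours of quality life lost * probability. Every number below is a made-up placeholder for illustration, not an estimate from this thread.)

    def expected_damage(hours_lost, probability):
        """Expected hours of quality life lost for one scenario."""
        return hours_lost * probability

    # Placeholder inputs only -- illustrative, not informed estimates.
    narrow_ai = expected_damage(hours_lost=1e9, probability=0.9)      # ongoing, near-certain harms
    agi_takeoff = expected_damage(hours_lost=1e13, probability=0.01)  # rare but catastrophic
    print(f"narrow AI: {narrow_ai:.2e} hours, AGI takeoff: {agi_takeoff:.2e} hours")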
-
Now you are asking a question akin to how one might approach the Drake equation WRT the probability of sentient life elsewhere in the cosmos: how does one meaningfully estimate the factors of your equation?
-
What about the probability that we live in a universe where "AGI will be built" * "AGI cannot be stopped from self-improving" * "AGI cannot be made safe", divided by "something else gets us first"? I'd get to something vaguely like: 0.95 * 0.7 * 0.9 / 0.8. What are your numbers?
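(Plugging the quoted guesses into that expression exactly as written, including the division by "something else gets us first"; the factor values are the author's own guesses, so this is only a sketch of the arithmetic.)

    # The tweet's back-of-envelope, Drake-style estimate with its own numbers.
    p_agi_built      = 0.95  # "AGI will be built"
    p_cant_stop      = 0.70  # "AGI cannot be stopped from self-improving"
    p_cant_make_safe = 0.90  # "AGI cannot be made safe"
    p_something_else = 0.80  # "something else gets us first"

    estimate = p_agi_built * p_cant_stop * p_cant_make_safe / p_something_else
    print(round(estimate, 3))  # 0.748 with these figures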