Spot on.
Fussing over the existential threat of AI, as @elonmusk does, is at best distracting and at worst dangerous.
There exist clear and present issues now... and not just WRT AI but computing in general. https://twitter.com/randal_olson/status/983744669182976000
Replying to @Grady_Booch
I agree that rising unemployment, algorithmic bias, mass surveillance, and citizen scores are important issues. But I don't understand why existential risk is not an important one, and why worrying about it is "at best distracting and at worst dangerous".
Replying to @Plinz
It is a relevant one. I have never denied that. The heat death of the cosmos is also a relevant issue. If everything is important then nothing is important: given limited time and resources, one has to prioritize.
Replying to @Grady_Booch
What is your confidence that AGI is not endangering us sooner than heat death? What is your confidence that your level of confidence is correct and everybody else who thinks differently must be wrong? How do you justify that we must have the same judgement and priorities as you?
Replying to @Plinz
No need for hyperbole. I have never represented that “everybody else” is wrong. I express an opinion based on experience, an understanding of history, of being involved in the creation of such systems, and of engaging in and encouraging evidence-based discourse.
Replying to @Grady_Booch
I realize that for most people, "worrying about" means "having negative emotions about" and is just an important part of their identification with the social environment. But when we argue about whether @elonmusk should allocate (his!) resources to it, should not reason apply?
I respect what @elonmusk is doing, and I celebrate that he even pays attention to the topic. But I can also offer an informed opinion therein: for example, muddling the role of AI in this (as does the documentary he sponsored) does not help yield actionable results.
Replying to @Grady_Booch @elonmusk
We could compare which problem eradicates how many total hours of quality life experience (= damage) with what probability (= risk). Could you offer an informed opinion to evaluate the damage and risk of narrow AI vs. bad AGI takeoff?
Now you are asking a question akin to how one might approach the Drake equation WRT the probability of sentient life elsewhere in the cosmos: how does one meaningfully estimate the factors of your equation?
What about probabilities that we live in a universe where "AGI will be built" * "AGI cannot be stopped from self-improving" * "AGI cannot be made safe" divided by "something else gets us first"? I'd get to something vaguely like: 0.95 * 0.7 * 0.9 / 0.8. What are your numbers?
We exist in a universe where OGI (organic general intelligence) exists, and so for an OGI I would assert (1.0 * ~1.0 * ~1.0) for the numerator; as for the denominator, am I measuring "the demise of human life" (in which case I'd choose something approaching 0) or something else?
With that as a baseline, pivoting to AGI, I'd choose some numbers < 1 (but the denominator would not change).
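For readers who want to see the arithmetic, here is a minimal sketch of the back-of-the-envelope ratio traded above. It assumes the terms are treated as independent probabilities (the thread does not pin that down); the function name and structure are illustrative and come from neither participant, and the only numbers used are the ones quoted in the tweets.

```python
# A hypothetical rendering of the Fermi-style ratio discussed in the thread:
# P(AGI built) * P(cannot be stopped from self-improving) * P(cannot be made safe)
# divided by P(something else gets us first).

def agi_risk_ratio(p_built, p_unstoppable, p_unsafe, p_something_else_first):
    """Joint probability of a bad AGI takeoff, scaled against the chance
    that some other catastrophe arrives first."""
    return (p_built * p_unstoppable * p_unsafe) / p_something_else_first

# @Plinz's numbers from the thread: 0.95 * 0.7 * 0.9 / 0.8
print(agi_risk_ratio(0.95, 0.7, 0.9, 0.8))  # ≈ 0.748

# @Grady_Booch's OGI baseline: numerator terms of roughly 1.0 with a denominator
# "approaching 0", so the ratio diverges; pivoting to AGI he would shrink the
# numerator terms below 1 while keeping the same denominator.
```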