Most safety engineering seems to focus on one (or a small, enumerable set) of standard operating modes, and verifies that all parts are rated for that load (e.g. torque). AI safety looks more like ensuring humanity is rated for a function space over the reals
Climate risk and nuclear arms control are just two examples that also look like that. More so, in fact.
- 3 more replies
New conversation
I don't think I've seen this argument. You should tweetstorm it.
Yup. I reached basically this conclusion a few years ago. Most conceptualizations of a hostile AI are a "how *I* might be evil" projection. I can guess how the Golem allegory is used now. Looks like I might have saved myself some time if I'd persisted with Wiener beyond a quick gloss.
Even the notionally beyond-good-and-evil ideas (kill all humans as an apathetic side effect of paperclip maximization) fall prey to the trap of means-ends nihilism
If you limit your definition of AI to ML and other narrowly scoped methods applied to similarly narrow domains, then yes, AI risks probably aren't categorically different from other engineering risks. But are you saying that AGI is also in this category?
I don't consider AGI a well-posed idea
End of conversation
New conversation
Disagree for two specific reasons. Other engineered products do not hold the potential to 1) become independently self-replicating, or 2) autonomously deviate from their initial design.
Computer viruses already have that ability in a limited way (independently self-replicating, but only within specific computer networks, not the open world). Mutating memes in general qualify if you treat human brains as a replication substrate with low agency.
End of conversation
New conversation
I think one should also consider the grey goo scenarios of AI risk, e.g. personally targeted advertising (weak AI) convincing people to vote badly (scaled social engineering)
Yeah, I take this kind more seriously, but it seems closer to a disease epidemic
- 1 more reply