I don't think I've seen this argument. You should tweetstorm it.
-
It seems like an interesting failure mode for systems with embedded AIs: no mechanical failure, just "the ship-mind made a bad choice", which is not particularly different from the bad choices humans with executive control of systems make.
-
Discussion of AIs that have mysteriously far-reaching powers and metastasize like cancer across the internet still looks like pulpy sci-fi nightmare fuel to me, though.
New conversation -
Yup. I reached basically this conclusion a few years ago. Most conceptualizations of a hostile AI are a "how *I* might be evil" projection. I can guess how the Golem allegory is used now. Looks like I might have saved myself some time if I'd persisted with Wiener beyond a quick gloss.
-
Even the notionally beyond-good-and-evil ideas (kill all humans as an apathetic side effect of paperclip maximization) fall prey to the trap of means-ends nihilism.
-
You might be missing the people who do recognize this. One thought experiment about this is so well known that it's been turned into a game: https://www.theverge.com/tldr/2017/10/11/16457742/ai-paperclips-thought-experiment-game-frank-lantz
-
The self-building, self-programming aspects seem different. And the claimed impossibility of explaining outcomes seems different.
-
Most safety engineering seems to focus on one standard functioning mode (or a small enumerable set of them) and verifies that every part is rated for that load (e.g. torque). AI safety looks more like ensuring humanity is rated for a function space over the reals.
-
Climate risk and nuclear arms control are just two examples that also look like that. More so, in fact.
New conversation -
I think one should also consider the grey-goo scenarios of AI risk, e.g. personally targeted advertising (weak AI) convincing people to vote badly (scaled social engineering).
-
Yeah, I take this kind more seriously, but it seems closer to a disease epidemic.