I would ask @erichorvitz directly:
When *should* we start to worry about the risk?
How confident are you in your estimated time frame for AI advances?
What about your time frame for convincing everyone to actually do what is needed to mitigate risk?
-
-
-
David/Eliezer: Thx. Journalists simplify--they dropped "idle" from "idle worry," versus "getting active--and working to address." Folks know me as quite engaged on potential challenges of superintelligence, e.g., organized 2008 Asil. symp., recent mtgs. https://bloom.bg/2mwNdJ5 Happy to chat.
-
I guess I'd start by asking your take on: 1. Any coherent utility function can hook up to powerful decision-making 2. Most utility functions imply using atoms not for humans 3. Future AI (not present!) can gain greatly in capability 4. Alignment probably possible, but not easy
-
More detail on why those 4 points, what I mean by them here: http://econlog.econlib.org/archives/2016/03/so_far_unfriend.html
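(A minimal sketch of point 1, with hypothetical names and a trivial one-step world model: the same decision procedure maximizes whatever utility function it is handed, so optimization power and goals come apart.)

```python
# Toy illustration: any utility function plugs into the same optimizer.
# Names and the one-step "world model" are hypothetical, for illustration only.

from typing import Callable, Iterable

def best_action(actions: Iterable[str],
                outcome: Callable[[str], str],
                utility: Callable[[str], float]) -> str:
    """Pick the action whose predicted outcome the utility function rates highest."""
    return max(actions, key=lambda a: utility(outcome(a)))

actions = ["make_paperclips", "cure_disease", "do_nothing"]
outcome = lambda a: a  # trivial world model: each action yields its own outcome

human_friendly = lambda o: {"cure_disease": 1.0}.get(o, 0.0)
paperclipper   = lambda o: {"make_paperclips": 1.0}.get(o, 0.0)

print(best_action(actions, outcome, human_friendly))  # -> cure_disease
print(best_action(actions, outcome, paperclipper))    # -> make_paperclips
```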
-
I'll defer to the AI people for the more technical discussion. But on the policy side, it's worth noting that *if* national, or coordinated international, action is important, public pressure over years is critical. (Look at the Nuclear Test Ban Treaty, for example.)
-
(On the test-ban treaty: it took a decade, from 1954 to 1963, for even a partial ban to be enacted. The comprehensive treaty, which also banned underground testing, took another 30 years--and still isn't universally adopted. I'm not sure we have 4 decades until AI safety needs a strong response.)
End of conversation
New conversation -
-
-
That was "would necessarily be" malevolent--a simplification.
-
Ah. My apologies for having assumed that the journalistic quotes were even remotely like the semantic meaning of what you actually said.
End of conversation
New conversation -
-
-
"I see no reason...." as he gets ripped apart and reassembled into a pile of paper clips.
-
-
-
Orthogonality Thesis is wrong: https://entirelyuseless.wordpress.com/2017/10/08/embodiment-and-orthogonality/
-
-
-
You think that's bad? Try this for raising your blood pressure beyond safe levels: https://aeon.co/essays/true-ai-is-both-logically-possible-and-utterly-implausible
-
-
-
My take is that Horvitz/LeCun/Schmidt are not quite as clueless as they pretend, but they genuinely think government regulation would be counterproductive. Their intended audience is politicians and voters.
-
-
-
"Engineers would build a stop button" has now become the instant derail response I've seen from otherwise smart folks. Meanwhile most bombs in ballistic trajectory do not have such features if you catch my drift.
-
-
-
or maybe he just dismisses the threat instead
-
-
-
Between this and the "machines are worthy of love" angle being pushed all over these days, that thought does seem less and less ridiculous. https://darctimes.tumblr.com/post/166400258827/how-i-learned-to-stop-worrying-and-love-machines
-