"Understanding does not obey Moore’s Law: Knowledge is acquired by formulating explanations and testing them against reality, not by running an algorithm faster and faster." -Pinker. Refute this part, 30 seconds, go.
If AlphaZero can outrun all of human civilization at deducing the consequences of the rules of Go in one day, it shows that we have a very poor idea of the consequences of our hypotheses, and hence make very inefficient use of evidence. Sure, a superintelligence has to learn. It learns FASTER.
And if I'm allowed to link, of course, then I just link: http://lesswrong.com/lw/qk/that_alien_message/
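A minimal sketch of the self-play point above, assuming a toy game (Nim) in place of Go; the game choice, hyperparameters, and names are illustrative assumptions, not anything posted in the thread. The agent improves using nothing but the fixed rules and its own play, i.e. by deducing the consequences of its hypotheses rather than gathering new outside evidence.

```python
# Toy self-play learner (illustrative sketch only, not AlphaZero).
# Nim rules assumed: remove 1-3 stones per turn; whoever takes the last stone wins.
import random
from collections import defaultdict

TAKE_OPTIONS = (1, 2, 3)     # legal move sizes
START_STONES = 10
ALPHA, EPSILON = 0.1, 0.2    # learning rate and exploration rate (arbitrary choices)

values = defaultdict(float)  # stones left -> estimated win chance for the player to move

def legal_moves(stones):
    return [t for t in TAKE_OPTIONS if t <= stones]

def choose(stones):
    # Epsilon-greedy: usually pick the move that leaves the *opponent* in the lowest-value state.
    if random.random() < EPSILON:
        return random.choice(legal_moves(stones))
    return min(legal_moves(stones), key=lambda t: values[stones - t])

def self_play_episode():
    stones, trajectory = START_STONES, []
    while stones > 0:
        trajectory.append(stones)
        stones -= choose(stones)
    # The player who moved last took the final stone and won; credit alternates going backwards.
    for i, state in enumerate(reversed(trajectory)):
        outcome = 1.0 if i % 2 == 0 else 0.0
        values[state] += ALPHA * (outcome - values[state])

for _ in range(20000):
    self_play_episode()

# Positions that are multiples of 4 should end up with low value, everything else high.
print({s: round(values[s], 2) for s in sorted(values) if s > 0})
```

After roughly 20,000 self-play games the table assigns low value to positions that are multiples of 4 and high value to the rest, which is the known optimal strategy for this game, learned from the rules alone with no external data.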
- 1 more reply
New conversation
Please tear it down! PRETTY PLEASE!!!
I'll once again post http://arbital.com/p/orthogonality in case anyone looks at this thread and wants to check the more advanced presentation of the ideas.
Pinker actually uses the Orthogonality Thesis as a counter-argument in his article. What he misses is the convergent instrumental goals thesis.
End of conversation
New conversation
I don't get it either. I'd like to see some well argued analysis of why we shouldn't be worrying about this now. But I don't think I've seen it.
Any idea of what we should do @jeremyphoward? (fwiw I do think that it's worth "worrying", or rather, "getting our heads screwed on straight"). I have some vaguely-formed ideas of what might help, gleaned from playing lots of ideological go-between, but would love to chat more.
Good to differentiate between humanities people vaguely familiar with AI forming public opinions vs. leading technologists doing so. I'm more worried re: the latter, i.e. obvious short-term business interests stifling black-swan thinking, coming from black-swan companies, without in-depth argumentation.
Not sure I follow... you're saying you're more worried about tech CEOs publicly denouncing safety concerns than you are about psych professors, because their business interests actively incentivize opposition to black-swan thinking? What's the bit about black-swan companies?
OK sorry, breaking it down: (1) psych/phil profs from non-relevant spaces are less harmful; (2) tech CEOs with relevant knowledge who are dismissive re: unlikely events are very harmful; (3) said tech CEOs' success was itself statistically unlikely, so they should doubly know better; (4) profs do this because tech CEOs did?
My view, FWIW, is that worrying about existential threat of out-of-control AI is a distraction right now, since before that becomes an issue, we'll need to deal with the human problems (inequality etc). Failure to do so means our society won't be around anyway!
But my particular view doesn't matter really - what matters is the lack of actual informed, thoughtful, respectful debate. As @ESYudkowsky said, even smart folks who should know better resort to straw-man arguments and do little real research before commenting on this.
- 6 more replies
New conversation
I think it’s because he has established an identity for himself as an optimist, which is valuable in a variety of areas...but definitely not in the realm of the control problem.
This is also my current best guess for the biggest factor. I think Pinker just had an optimism paintbrush in hand, and painted the value alignment problem with it before taking a hard enough look at the best arguments.
End of conversation
New conversation
I'd be really interested in a concise, line-by-line response from you to his excerpt. There seems to be a lot of sweeping dismissal on both sides.
(Not saying sweeping dismissal isn't warranted. I just think it's time for a more detailed back-and-forth.)
End of conversation
New conversation
I was wondering what you think of the arguments outlined in this article written by the AI researcher François Chollet? https://medium.com/@francois.chollet/the-impossibility-of-intelligence-explosion-5be4a9eda6ec
Oh fantastic. Thanks for the response. Reading it now!
End of conversation