If you post an argument online, and the only opposition you get is braindead arguments and insults, does it confirm you were right? Or is it just self-selection of those who argue online?
-
-
I'll check it out.
-
I disagree with your points, and I think you are misinterpreting (or over-interpreting) some of mine. Regardless, thanks for taking the time to write this reply, discussion is always good. I'll see if I have the time to write a follow-up in the future.
-
Thanks for taking the time to write the original!
End of conversation
New conversation -
-
-
IMO the answer is "self-selection". eg I hoped to find the post convincing, because the super-intelligence thing is distracting from more pressing issues like inequality. However I thought it was hand-wavy, & agree with Eliezer's critique. But I didn't have an incentive to debate
-
I agree entirely that inequality is a more pressing issue. I also found the reply (if it was meant as such?) kind of condescending:
@fchollet has published papers on using deep learning for theorem proving. Do you think he doesn't know basic Bayesian results? -
I'll edit my reply (though the edit may take a while to push) to make clear that I'm not saying Chollet doesn't already know Bayes; and that I'm discussing Laplace's Rule of Succession to establish common language and for the benefit of other readers following along.
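For readers following along: Laplace's Rule of Succession says that after observing s successes in n independent trials, under a uniform prior on the success probability, the posterior probability that the next trial succeeds is (s+1)/(n+2). A minimal sketch of the rule (the function name here is mine, for illustration only):

```python
def rule_of_succession(successes: int, trials: int) -> float:
    """Laplace's Rule of Succession: posterior probability that the
    next trial succeeds, given `successes` out of `trials` so far,
    assuming a uniform prior on the unknown success probability."""
    return (successes + 1) / (trials + 2)

# With no observations at all, the rule gives the uniform-prior answer:
print(rule_of_succession(0, 0))    # 0.5
# After 10 successes in 10 trials, the estimate is 11/12, not 1:
print(rule_of_succession(10, 10))  # ≈ 0.9167
```

Note that the estimate never reaches 0 or 1 from finite data, which is the point usually made with it in forecasting arguments.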
-
That makes a lot more sense. I think it's just sort of ambiguous when you write something as a reply, but also addressing a wider audience.
-
Agreed. Your point was fair.
End of conversation
New conversation -
-
-
It seems like you don't appreciate the difference between environment simulation for AlphaGo vs. for humans. The Go board is small and the rules of Go are simple; on the other hand, simulating reality is a hard problem. Simulating reality is harder than training the model-based RL agent on top of it.
-
When human civilization takes centuries to see, poorly, those implications of simple rules that Alpha Zero sees in hours, how can we possibly know how hard the environment really is to learn? Except that it can only be easier for smart things than it is for humans.
End of conversation
New conversation -
-
-
It is so enriching to read careful, good faith (and polite) argumentation about problems of significant global concern. Thank you.
-
-
-
This conversation is highly valuable and I would like it to continue.
-
-
-
@rplevy this covers all my bases better than my still-in-progress rebuttal, and much more. I will discard my (5th, I think?) draft now. Read this instead.
-
-
-
Related to this discussion: - I studied rates of improvement of many technologies here, and some do follow exponential or superexponential trends https://www.google.co.uk/amp/s/nintil.com/2016/04/25/no-great-technological-stagnation/amp/ - IQ and mental illness: Terman found something along those lines. But modern research does not
-
find that, or at least not unanimously, e.g. http://www.sciencedirect.com/science/article/pii/S0165032716315658
End of conversation
New conversation -
-
-
Interesting. It seems much of the difference boils down to the individual/collective divide. In the central chimpanzee example: the reason humans are more capable than chimps is that we humans have built a civilization to amplify our abilities.
-
This makes us feel like we're generally intelligent, while we're actually incredibly specialized. I'm not sure you can meaningfully say humans are much more generally intelligent than chimps. But we slowly built this great civilization.
-
For an AI to be recursively self-improving, it would either have to draw on the resources of the civilization that created it, or replicate these resources itself.
-
In the first case, the AI is bound by its need to collaborate with civilization, and to effectively be part of it. This will incur some large limits on self-improvement, and also externalization of its intelligence. In effect, we become the AI. This happens all the time.
-
In the second case, the resources that need to be replicated - in terms of computation, facts/data, and physical manufacturing capacity - are immense. Large parts of civilization would need to be replicated. Maybe this could be done (by humans), but why?
-
New conversation -