It seems to imply that we're just missing the compute power. An optimistic prediction, but the cynic in me says there is more to it than brute force. When true AGI is developed, I suspect it will be through novel techniques/algorithms we may or may not have discovered yet.
Replying to @FieryPhoenix7 @IntuitMachine
The argument is based on the insight that neural networks can represent any computable function, and the hypothesis that RL can approximate all the functions we need. Even if our RL algorithms are not very efficient, we may be able to derive better ones with RL.
Replying to @Plinz @IntuitMachine
I understand that now. Was not my original takeaway. Apologies for the confusion.
Replying to @FieryPhoenix7 @IntuitMachine
I think that your objection is possibly valid; it is not clear to me whether current RL can efficiently reach every solution we need, even though it does not seem impossible.
Replying to @Plinz @FieryPhoenix7
Isn't that the crux of the argument? Namely, that DRL scales as well as DL, and therefore there are no more obstacles other than compute? I believe there is a conceptual obstacle, but I don't think it's a big hurdle!
Replying to @IntuitMachine @Plinz
There is a good chance we're witnessing a second incarnation of the Church-Turing thesis. My main concern is unsupervised learning: would extra compute solve that particular problem? I don't know, but on the face of it, it appears to be a primarily conceptual obstacle.
Replying to @FieryPhoenix7 @IntuitMachine
What second incarnation do you have in mind? I suspect that it may be a thesis that concerns universal function approximators. Is there a class of computable functions that can effectively approximate all computable functions, and how big is the subset of efficient approximators?
Replying to @Plinz @IntuitMachine
I was actually referring to the idea that neural networks can approximate any computable function, which as far as we know is sound.
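The universal-approximation idea being debated above can be illustrated with a minimal numpy sketch: a one-hidden-layer tanh network fit to sin(x) by plain gradient descent. This is not from the thread; the network size, learning rate, and step count are arbitrary illustrative choices, and this only demonstrates approximation of one smooth function, not the general claim.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: approximate y = sin(x) on [-pi, pi].
x = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
y = np.sin(x)

# One hidden layer of 32 tanh units (arbitrary choice).
hidden = 32
W1 = rng.normal(0.0, 1.0, (1, hidden))
b1 = np.zeros(hidden)
W2 = rng.normal(0.0, 0.1, (hidden, 1))
b2 = np.zeros(1)

lr = 0.1
for _ in range(5000):
    h = np.tanh(x @ W1 + b1)      # hidden activations
    pred = h @ W2 + b2            # network output
    err = pred - y
    # Gradients of (half) mean squared error, by backpropagation.
    gW2 = h.T @ err / len(x)
    gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1.0 - h ** 2)   # tanh' = 1 - tanh^2
    gW1 = x.T @ dh / len(x)
    gb1 = dh.mean(axis=0)
    W2 -= lr * gW2
    b2 -= lr * gb2
    W1 -= lr * gW1
    b1 -= lr * gb1

mse = float(np.mean((np.tanh(x @ W1 + b1) @ W2 + b2 - y) ** 2))
print(mse)
```

The fit converges to a small error on this interval; the open question in the thread is whether such training-by-gradient-search scales to high-dimensional, discontinuous targets, not whether the representation exists.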
Replying to @FieryPhoenix7 @IntuitMachine
Are you sure? What if the search space is very high dimensional and discontinuous?
Replying to @Plinz @IntuitMachine
Not really sure of anything. My understanding is that neural networks are essentially Turing-complete. I read a paper not too long ago showing that a neural net can successfully simulate a nondeterministic Turing machine. I can dig it up for you if you're interested.
I know that paper. The question is not what a neural network can represent, but how you can train it to reach that representation.