It seems to imply that we're just missing the compute power. An optimistic prediction, but the cynic in me says there is more to it than brute force. When true AGI is developed, I suspect it will be through novel techniques/algorithms we may or may not have discovered yet.
The argument is based on the insight that neural networks can represent any computable function, and the hypothesis that RL can approximate all the functions we need. Even if our current RL algorithms are not very efficient, we may be able to derive better ones with RL itself.
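The universal-approximation claim above can be illustrated in a few lines: a one-hidden-layer network trained by plain gradient descent learns to fit sin(x). This is only a hedged sketch of the representational point, not of RL itself; the hidden size, learning rate, and step count are arbitrary illustrative choices.

```python
# Minimal sketch: a one-hidden-layer tanh network fit to sin(x) on [-pi, pi]
# with hand-rolled gradient descent. Hyperparameters are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-np.pi, np.pi, 256).reshape(-1, 1)
y = np.sin(x)

H = 32                                   # hidden units (arbitrary choice)
W1 = rng.normal(0, 1.0, (1, H))
b1 = np.zeros(H)
W2 = rng.normal(0, 0.1, (H, 1))
b2 = np.zeros(1)
lr = 0.1

def forward(x):
    h = np.tanh(x @ W1 + b1)             # hidden activations
    return h, h @ W2 + b2                # prediction

_, pred0 = forward(x)
loss0 = np.mean((pred0 - y) ** 2)        # MSE before training

for _ in range(2000):
    h, pred = forward(x)
    err = (pred - y) / len(x)            # scaled output error
    gW2 = h.T @ err
    gb2 = err.sum(0)
    dh = (err @ W2.T) * (1 - h ** 2)     # backprop through tanh
    gW1 = x.T @ dh
    gb1 = dh.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, pred = forward(x)
loss = np.mean((pred - y) ** 2)          # MSE after training
print(f"MSE before: {loss0:.4f}, after: {loss:.4f}")
```

The point is only that a generic function class plus a generic optimizer recovers the target; whether RL can play the optimizer's role at scale is exactly the hypothesis being debated in this thread.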
New conversation
So basically he is talking about the opposite of intelligence: brute force.
IMHO, part of the question is whether you can brute-force meta-learning. It is not obvious that you cannot.
End of conversation
New conversation
Reminder: it's not my prediction, it's @ilyasut's. I'm just saying that it is not outside the realm of possibility.
Fun fact from
@ilyasut speaking at NVIDIA: TD-Gammon training — the massively compute-intensive 'crazy' RL experiment of its day (1992) — would take only 5 seconds on a Volta GPU. https://www.youtube.com/watch?v=w3ues-NayAs&feature=youtu.be&t=2013 pic.twitter.com/3QenW4s8wn
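For context on what that 1992 experiment was computing: TD-Gammon learned backgammon values with TD(λ) and a neural network. A hedged, much-simplified sketch of the underlying TD(0) update, applied to a tabular 5-state random walk rather than backgammon (the states, rewards, and learning rate here are illustrative assumptions, not TD-Gammon's actual setup):

```python
# Tabular TD(0) on a 5-state random walk: reward 1 for exiting right,
# 0 for exiting left. Illustrates the bootstrapped value update only;
# TD-Gammon used TD(lambda) with a neural network over backgammon positions.
import random

random.seed(0)
N = 5                       # non-terminal states 0..4
V = [0.5] * N               # value estimates, initialized at 0.5
alpha = 0.1                 # learning rate (illustrative)

for _ in range(5000):       # episodes
    s = N // 2              # start in the middle state
    while True:
        s2 = s + random.choice((-1, 1))
        if s2 < 0:                            # left exit: reward 0
            V[s] += alpha * (0 - V[s]); break
        if s2 >= N:                           # right exit: reward 1
            V[s] += alpha * (1 - V[s]); break
        V[s] += alpha * (V[s2] - V[s])        # TD(0): bootstrap from V(s')
        s = s2

print([round(v, 2) for v in V])  # true values are 1/6, 2/6, ..., 5/6
```

Each update nudges a state's value toward the reward plus the estimated value of the next state, which is the "crazy" self-bootstrapping idea that looked compute-hungry in 1992 and is trivial on a modern GPU.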
The great fear of non-connectionist researchers is that in this narrative, their work becomes irrelevant.
End of conversation
New conversation
Fun thought experiment