For the record, RNNs are Turing-complete and can therefore implement any computable function: https://stats.stackexchange.com/questions/220907/meaning-and-proof-of-rnn-can-approximate-any-algorithm and https://twitter.com/fchollet/status/1010278944782708737
-
I think the key distinction between recurrent and feedforward networks is that the former are Turing-complete (they can run arbitrary programs), while the latter are not.
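A toy illustration of that distinction (not from the thread; the weights are hand-set, not learned): a single recurrent cell built from threshold units can compute the parity of a bit stream of any length, because the recurrence carries state across time steps. A feedforward net of fixed depth only ever sees a fixed-size input.

```python
def heaviside(z):
    """Threshold activation: 1.0 if z > 0, else 0.0."""
    return float(z > 0)

def xor_cell(h, x):
    # Hidden-state update h' = XOR(h, x), wired from three threshold units.
    or_unit = heaviside(h + x - 0.5)
    and_unit = heaviside(h + x - 1.5)
    return heaviside(or_unit - and_unit - 0.5)

def parity_rnn(bits):
    # The same fixed-size cell is applied at every step, so the network
    # handles inputs of arbitrary length -- the essence of "running a program".
    h = 0.0
    for x in bits:
        h = xor_cell(h, x)
    return int(h)

print(parity_rnn([1, 0, 1, 1]))  # odd number of ones -> 1
```

Note that this says nothing about whether such weights are *learnable* by gradient descent, which is the point raised downthread.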
-
And of course, we cannot learn every function with current optimization techniques.
-
I think it's helpful to emphasize that NN models learn only a small subset of all representable programs, because of optimization (not representation) issues.
-
It's also profoundly misleading to draw an artificial distinction between "representation" & "optimization". The choice of an appropriate search space & encoding/structure for that space is the most important part of any search process. DL does not have an optimization problem.
-
-
We've had a good track record of coming up with better optimization methods and expanding the space of programs that can be learned.