The gist of it is this: neural nets do *pattern recognition*, which achieves *local generalization* (and works great for supervised perception). But many simple problems require some (small) amount of abstract modeling, which modern neural nets can't learn.
I was surprised to see how hard it is to teach Fizz Buzz if the goal is to generalize so well that it can predict any number correctly, with 100% accuracy.
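To make the "local generalization" failure concrete, here is a sketch of my own (none of these names or choices come from the thread): a purely local learner, 1-nearest-neighbour over binary encodings, memorizes Fizz Buzz labels for 1..100 and is then asked about numbers it has never seen. Hamming-nearby numbers share no divisibility structure, so locality buys nothing off the training range.

```python
def label(n):
    # Ground-truth Fizz Buzz rule the learner is supposed to recover.
    if n % 15 == 0: return "fizzbuzz"
    if n % 3 == 0:  return "fizz"
    if n % 5 == 0:  return "buzz"
    return "number"

def bits(n, width=10):
    # Binary encoding, least-significant bit first.
    return [(n >> i) & 1 for i in range(width)]

# "Training" is pure memorization of numbers 1..100.
train = {n: label(n) for n in range(1, 101)}

def predict(n):
    # 1-nearest-neighbour under Hamming distance: answer with the label
    # of the most similar *memorized* number -- local generalization only.
    x = bits(n)
    nearest = min(train, key=lambda m: sum(a != b for a, b in zip(bits(m), x)))
    return train[nearest]

train_acc = sum(predict(n) == label(n) for n in range(1, 101)) / 100
test_acc  = sum(predict(n) == label(n) for n in range(101, 201)) / 100
# Perfect on the memorized range; accuracy drops on unseen numbers,
# because divisibility by 3 and 5 is not a local property of bit patterns.
```

A model that had extracted the abstract rule (the `%` tests above) would score 100% on any range; a local generalizer cannot.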
How can an ANN with activation functions like sigmoid, ReLU, or tanh approximate functions with high local variation in value? Take the function f(x) = exp(x) and an ANN with any of the mentioned activation functions. How could it work?
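One way to see both the possibility and the limitation: a one-hidden-layer ReLU network is exactly a piecewise-linear function, so it can interpolate exp on a bounded interval with enough pieces, but it can only extrapolate linearly outside that interval. A minimal hand-built sketch (the knot placement and interval are my choices for illustration, not from the thread):

```python
import math

relu = lambda z: max(z, 0.0)

# Knots where the network is pinned to exp(x); between knots it is linear.
knots = [0.5 * i for i in range(7)]          # 0.0, 0.5, ..., 3.0
vals  = [math.exp(k) for k in knots]

# Slope of each linear piece, then the *change* in slope at each knot:
# net(x) = exp(0) + sum_k c_k * relu(x - k) is a one-hidden-layer ReLU net.
slopes = [(vals[i + 1] - vals[i]) / 0.5 for i in range(6)]
coeffs = [slopes[0]] + [slopes[i] - slopes[i - 1] for i in range(1, 6)]

def net(x):
    return vals[0] + sum(c * relu(x - k) for c, k in zip(coeffs, knots))

# Inside [0, 3] the fit is decent; outside, the net keeps growing with its
# last linear slope while exp grows exponentially, so the error explodes.
in_err  = abs(net(1.25) - math.exp(1.25))
out_err = abs(net(10.0) - math.exp(10.0))
```

High local variation is handled by spending more hidden units (more knots) where the function bends; what no number of units fixes is behavior outside the region the training data covers.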
New conversation
Could it be that Deep Learning is just brute-force memorization, as Zhang et al. (2017) conclude? That is, the networks are not learning a generalization but just memorizing a statistical pattern that happens to generalize well?
I thought of it as taking a standardized test: you can practice a certain type of question over and over until you basically memorize the problem and its solution (i.e., recognize a simple question pattern and its answer), and so do well on that type of question and similar ones.
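The memorization-vs-generalization distinction can be made mechanical. A toy sketch of my own construction (not Zhang et al.'s actual experimental setup, which used deep nets on image data): fit random labels perfectly by pure lookup. When there is no pattern to extract, perfect training accuracy says nothing about test accuracy.

```python
import random

random.seed(0)

# Random labels: there is no underlying rule, so any model that fits the
# training set perfectly can only be memorizing (cf. Zhang et al. 2017,
# who showed deep nets fit random labels about as easily as real ones).
data  = {x: random.choice([0, 1]) for x in range(200)}
train = {x: y for x, y in data.items() if x < 100}
test  = {x: y for x, y in data.items() if x >= 100}

def memorizer(x):
    # Perfect lookup on anything seen; a coin flip otherwise.
    return train[x] if x in train else random.choice([0, 1])

train_acc = sum(memorizer(x) == y for x, y in train.items()) / len(train)
test_acc  = sum(memorizer(x) == y for x, y in test.items())  / len(test)
# train_acc is 1.0 by construction; test_acc hovers around chance (0.5).
```

The interesting question the thread raises is whether, on *real* data, a net's internal "pattern table" is closer to this lookup than to an abstract rule.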
New conversation
Among all 2^(2^n) Boolean functions of n variables, almost none are "learnable" or susceptible to generalization, regardless of the method: any family of functions can only learn an exponentially small number of functions from a less-than-exponentially-large number of samples.
Technically, every _one_ of the 2^(2^n) Boolean functions is learnable; it's classes of functions that are not learnable. Most of the 2^(2^(2^n)) subsets of the space of Boolean functions are not learnable. In particular, the class of all Boolean functions is not learnable.
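The counting argument behind both tweets can be made explicit. A sketch (the sample bound here is the standard information-theoretic one, stated informally):

```python
# With n binary inputs there are 2**(2**n) distinct Boolean functions:
# a truth table has 2**n rows, each filled with a 0 or a 1.
def num_boolean_functions(n):
    return 2 ** (2 ** n)

# m labelled examples reveal at most m entries of the truth table, so they
# can distinguish at most 2**m candidate functions.
def max_distinguishable(m):
    return 2 ** m

# Unless m is close to 2**n (i.e., exponential in n), almost every function
# remains indistinguishable from exponentially many others -- so no method
# can generalize over the class of *all* Boolean functions.
assert num_boolean_functions(2) == 16     # all 16 two-input gates
assert max_distinguishable(3) < num_boolean_functions(3)   # 8 < 256
```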
End of conversation
New conversation
Any ideas off the top of your head? I have been experimenting with simple problems from number theory that neural nets fail to solve.
A paper I saw recently that looks at an interesting, difficult toy problem: https://arxiv.org/abs/1711.02301
End of conversation