How do you know that cavemen didn't use something like symbolic structures? My guess is that symbolic thought preceded language; otherwise one needs a strong Whorfian perspective.
Google Neural Machine Translation threw out decades of work in computational linguistics and replaced it with an end-to-end trained network. Do you suggest it would become better if they added a manual parser back into it?
I am not suggesting that the current ML methods are going to carry us to AGI (but I think hardly anybody does). I doubt, though, that the solution is the integration of symbolic AI and feedforward networks. We may need metalearning that discovers the best model for each context.
The integration of symbolic AI and feedforward networks is a method that leads to a lot of real-world applications. But this isn't how the brain works. Rather, it is networks (TBD) that learn how to do symbolic AI. There is no native symbolic mechanism.
This is exactly what I'm saying. This is the empirical evidence that DL brings: many hand-engineered methods in image processing, NLP, etc. are being replaced by systems that learn these methods. The future will of course use an evolved variant of DL.