Symbolic reasoning will, at the very least, require an ability to reason about sequences. DL systems like RNNs can handle sequences, and interestingly enough, CNNs can do this quite well too. You don't need task-specific primitives to perform a task.
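[Editor's note: a minimal sketch of the claim that convolutions can process sequences. This causal 1D convolution in plain NumPy is illustrative only; real sequence CNNs stack many such layers with learned kernels.]

```python
import numpy as np

def causal_conv1d(x, kernel):
    """Causal 1D convolution: the output at time t depends only on x[:t+1]."""
    k = len(kernel)
    # left-pad so each output position sees only current and past inputs
    padded = np.concatenate([np.zeros(k - 1), x])
    return np.array([padded[t:t + k] @ kernel[::-1] for t in range(len(x))])

# a moving-average kernel makes this a simple causal sequence smoother
x = np.array([1.0, 2.0, 3.0, 4.0])
kernel = np.array([0.5, 0.5])
y = causal_conv1d(x, kernel)  # one output per input timestep
```

Because the output length matches the input length and no future positions are read, stacking such layers gives a sequence model without any recurrence.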
-
I don't quite follow the argument you're making here, which seems to be: CNNs can do sequencing, therefore they can do the entire class of symbolic reasoning. Is that right? In any case, I'm not as confident as you seem to be that this issue is a non-issue.
-
Yes, that is the argument. These networks are general enough that they can be repurposed for different tasks. Will they be as efficient as hand-engineered algorithms? Not every time. At best you can make an argument about efficiency. You can't argue that it won't work.
-
Sure, but I think that efficiency is the key argument when dealing with very high-dimensional, continuous spaces.
-
Well, you would think that hand-crafted algorithms are better than machine-crafted algorithms at searching high-dimensional spaces. Unfortunately, there is plenty of evidence that you can meta-learn better algorithms. I think everyone is ignoring the evidence that is out there.
-
A hand-crafted algorithm is usually the result of a search process in a space with a few thousand dimensions. Why anyone would think that people will continue to outperform machines at this is beyond me.
-
This discussion keeps devolving into man vs. machine, which is not really the point. Marcus didn't say or even imply that we should abandon automated tools; rather, he was suggesting we broaden our toolkit to include other kinds of computation.
-
That was not my point at all. I just think that symbolic AI was doomed mostly because it made assumptions about the nature of reasoning that were too narrow. We have to move to a more general modeling layer and let machines learn how to reason instead of trying to handcraft it again.
-
I don't disagree that symbolic AI was too narrow; I cut my teeth on ABSTRIPS back in the day, and it was obvious that it wasn't able to cope with real-world complexity. However, there's a lot to be said for an integration of approaches.
-
Google Neural Machine Translation threw out decades of work in computational linguistics and replaced it with an end-to-end trained network. Do you suggest it would become better if they added a manually crafted parser back into it?
-
I am not suggesting that current ML methods are going to carry us to AGI (but I think hardly anybody does). But I doubt that the solution is an integration of symbolic AI and feedforward networks. We may need meta-learning that discovers the best model for each context.
-
The integration of symbolic AI and feedforward networks is a method that leads to a lot of real-world applications, but this isn't how the brain works. Rather, it is networks (TBD) that learn how to do symbolic AI. There is no native symbolic mechanism.
-
I agree, and I'm not as pro-symbol as this discussion paints me. But I am pro hand-engineering in conjunction with other forms of modelling, like DNs.
-
End of conversation