But that's a well-defined, discrete state space with a clear objective function. It's possible that such simple domains don't benefit from symbolic reasoning.
-
-
A caveman's environment doesn't require symbolic reasoning. So how can
@GaryMarcus claim that such primitives exist as a consequence of evolution? Homo sapiens haven't existed long enough to have evolved a computer embedded in their brain.
-
How do you know that cavemen didn't use something like symbolic structures? My guess is that symbolic thought preceded language; otherwise one needs a strongly Whorfian perspective.
-
Symbolic reasoning will, at the very least, require an ability to reason about sequences. DL systems like RNNs are able to handle sequences. Interestingly enough, CNNs are also able to do this quite well. You don't need task-specific primitives to perform a task.
-
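To make the "CNNs can handle sequences" point concrete, here is a minimal sketch in plain NumPy (not any specific DL framework, and not code from this thread) of a 1-D convolution sliding over a sequence — the same operation a temporal CNN layer stacks and learns; the step sequence and edge-detector kernel are illustrative assumptions:

```python
import numpy as np

def conv1d(seq, kernel):
    """Slide a kernel over a 1-D sequence (valid padding, stride 1)."""
    n, k = len(seq), len(kernel)
    return np.array([np.dot(seq[i:i + k], kernel) for i in range(n - k + 1)])

# A rising-edge detector applied to a step sequence: the filter only
# responds where consecutive elements increase, i.e. it extracts a
# temporal feature without any recurrence.
seq = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])
kernel = np.array([-1.0, 1.0])
out = conv1d(seq, kernel)
print(out)  # → [0. 0. 1. 0. 0.]
```

A learned temporal CNN is just many such kernels, with the kernel weights fitted by gradient descent rather than hand-picked.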
I don't quite follow the argument you're making here, which seems to be: CNNs can do sequencing, therefore they can do the entire class of symbolic reasoning. Is that right? In any case, I'm not as confident as you seem to be that this issue is a non-issue.
-
Yes, that is the argument. These networks are general enough that they can be repurposed for different tasks. Will they be as efficient as hand-engineered algorithms? Not every time. At best you can make an argument about efficiency; you can't argue that it won't work.
-
Sure, but I think that efficiency is the key argument when dealing with very high-dimensional, continuous spaces.
-
Well, you would think that hand-crafted algorithms are better than machine-crafted algorithms at searching high-dimensional spaces. Unfortunately, there is plenty of evidence that you can meta-learn better algorithms. I think everyone is ignoring the evidence that is out there.
-
A handcrafted algorithm is usually the result of a search process in a space with a few thousand dimensions. Why anyone would think that people will continue to outperform machines at this is beyond me.
-
This discussion keeps devolving into man vs. machine, which is not really the point. Marcus didn't say or even imply that we should abandon automated tools; rather, he was suggesting we broaden our toolkit to involve other kinds of computation.
-
That was not my point at all. I just think that symbolic AI was doomed mostly because it made overly narrow assumptions about the nature of reasoning. We have to move to a more general modeling layer and let machines learn how to reason instead of trying to handcraft it again.
-
-
I don't disagree that symbolic AI was too narrow; I cut my teeth on ABSTRIPS back in the day, and it was obvious that it couldn't cope with real-world complexity. However, there's a lot to be said for an integration of approaches.
-
Google Neural Machine Translation threw out decades of work in computational linguistics and replaced it with an end-to-end trained network. Do you suggest it would become better if they added a manual parser back into it?