Many researchers argue over the nuances of symbolic AI vs. connectionism, and only talk about how things should be done. Why not just do it? Code up your algorithm to show what your approach can do, outline what it can't do yet, and write a paper. Isn't that more constructive?
Replying to @hardmaru
The critical next step isn't an algorithm per se, but a vast array of prior, abstract knowledge, some of it easier to evolve than to induce, AND a set of algorithms that can leverage that knowledge. Those need to evolve together, and that requires a field, not just an individual.
Replying to @GaryMarcus @hardmaru
There are ~3-4 implicit hypotheses in this response that should be supported by empirical data. Is it actually better to evolve, learn, or hard-code this abstract knowledge? What should usefully comprise its content? How should it be incorporated into our algorithms?
Agreed. This is the kind of work we should be doing, and that @ylecun once did when he effectively argued for the innateness of convolution in the working paper that introduced the idea.
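As a minimal sketch of what "innateness of convolution" means architecturally (this example is mine, not from the thread, and the layer sizes are hypothetical): a convolutional layer hard-codes two priors, locality and translation-invariant weight sharing, that a fully connected layer would otherwise have to discover from data. One visible consequence is a drastically smaller parameter count for the same output size.

```python
# Illustrative sketch only: a naive "valid" convolution (deep-learning
# convention, i.e. cross-correlation) to show that one small kernel is
# reused at every spatial position -- the built-in prior.
import numpy as np

def conv2d_valid(image, kernel):
    """Apply the same small kernel at every position (locality + weight sharing)."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

rng = np.random.default_rng(0)
image = rng.standard_normal((28, 28))   # hypothetical MNIST-sized input
kernel = rng.standard_normal((3, 3))    # the entire learnable state: 9 weights

feature_map = conv2d_valid(image, kernel)
print(feature_map.shape)                # (26, 26)

# A dense layer mapping the same 28x28 input to a 26x26 output would need
# 28*28 * 26*26 weights, with no built-in locality or shift structure.
print("conv parameters :", kernel.size)        # 9
print("dense parameters:", 28 * 28 * 26 * 26)  # 529984
```

Under these assumed sizes, the convolutional prior replaces roughly half a million free parameters with nine, which is one concrete sense in which the knowledge is "innate" in the architecture rather than learned or evolved.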