Many researchers argue over the nuances of symbolic AI vs. connectionism, and only talk about how things should be done. Why not just do it? Code up your algorithm to show what your approach can do, also outline what it can't do yet, and write a paper. Isn't that more constructive?
Replying to @hardmaru
The critical next step isn’t an algorithm per se, but a vast array of prior, abstract knowledge, some easier to evolve than to induce, AND a set of algorithms that can leverage that. Those need to evolve together, and that requires a field, not just an individual.
Replying to @GaryMarcus @hardmaru
There are ~3-4 implicit hypotheses in this response that should be supported by empirical data. Is it actually better to evolve, learn, or hard-code this abstract knowledge? What should usefully comprise its content? How should it be incorporated into our algorithms?
To me, at least, the answers to these are empirical; as @ylecun and others argue, the proof is in the pudding. We can only do so much armchair philosophizing; at some point we find out the answers by showing what works.
Absolutely, but there is empirical work by me (Cognitive Psychology 1998, further explicated in my 2001 book), recently extended by @LakeBrenden, that is absolutely crucial and being largely ignored, to the peril of the field.