Many researchers argue over the nuances of symbolic AI vs. connectionism, and only talk about how things should be done. Why not just do it? Code up your algorithm to show what your approach can do, also outline what it can't do yet, and write a paper. Isn't that more constructive?
Replying to @hardmaru
Critical next step isn’t an algorithm per se, but a vast array of prior, abstract knowledge, some easier to evolve than to induce, AND a set of algorithms that can leverage that. Those need to evolve together, and that requires a field, and not just an individual.
Replying to @GaryMarcus
hardmaru Retweeted Gary Marcus
Thanks for the reply! All I’m asking is for more scientific experiments to be conducted, not AGI-in-a-day. I believe that well-conducted research attracts more people to explore an area & also facilitates collaboration, which is in line with what you want, no? https://twitter.com/garymarcus/status/1066021994431270912?s=21 …
hardmaru added,
Gary Marcus @GaryMarcus: Why not just code AGI myself in an afternoon? Because critical next step isn’t an algorithm per se, but a vast array of prior, abstract knowledge, AND a set of algorithms to leverage that. Those need to evolve together, and that requires a field, and not just an individual. https://twitter.com/hardmaru/status/1065778997978484736 …
Absolutely, though one has to be thoughtful about the perils of incrementalism and local minima, and recognize the role that rich human knowledge (much of which is not yet available in machine-interpretable form) plays in cognition.