advocates of #machinelearning, I am told that you all know that (current) #ML is limited. fair enough. but which limits are you willing to *publicly* acknowledge? https://twitter.com/NotSimplicio/status/1173373706674085888
-
Replying to @GaryMarcus
Watch any one of my talks from the last 4 years. Many even have the title "The power and limits of deep learning." More obvious answer: the limitations are pretty much everything everyone in the field is doing research on: self-supervised learning, model-based control, reasoning, causality...
-
Replying to @ylecun
agree with @ylecun on much of this list, and tried (among other things) to explain many of them (sometimes with slightly different terms) to the general public in http://rebooting.ai, because few seem to be widely understood or appreciated.
-
Replying to @GaryMarcus @ylecun
@yudapearl has of course done a great job bringing causality to the forefront of public attention. but overall i think there is a mistaken impression that all we need is more data and more compute, when in fact bolder steps are required.
-
There is a subtle but important difference between a) all we need is more data and more compute (which I don't know if any expert believes) and b) all we need are general methods like search and learning that leverage data and compute (which is what Richard Sutton argues).
-
There are theoretical impediments that even "general methods" cannot circumvent. You can have the best search-and-learning method in the world, but if you search for a common point of two parallel lines, you won't find one. ML confronts such impediments on the way to AGI.
#Bookofwhy
-
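To make the parallel-lines point concrete, here is a minimal worked version of the claim (an illustration added here, not a quote from the thread):

\[
L_1:\ y = x, \qquad L_2:\ y = x + 1.
\]

A common point \((x, y)\) would have to satisfy both equations at once, giving \(x = x + 1\) and hence \(0 = 1\), a contradiction. No search-and-learning procedure over candidate points can return a common point, because none exists to be found.
-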
That's the whole debate, isn't it? Is the human ability to learn things like mathematics, geometry, formal logic, informal reasoning, and science the result of some ensemble of general search and/or learning processes operating in the brain, or something else?
-
nobody doubts search and learning are important, but those are so broad as to encompass almost everything that has been or will be tried. shocking as it may sound, http://rebooting.ai strongly endorses learning, though of a form that is more compatible with abstract knowledge.
-
Wouldn't the disagreement between your Rebooting AI thesis and Richard Sutton's Bitter Lesson thesis be that Sutton believes the answer is to keep working on general learning methods like reinforcement learning and that you believe the answer is to pivot to symbol manipulation?
-
well, no, symbol-manipulation is just one of over a dozen things we point to in http://rebooting.ai in the chapters on common sense and cognitive science.