advocates of #machinelearning, I am told that you all know that (current) #ML is limited. fair enough. but which limits are you willing to *publicly* acknowledge? https://twitter.com/NotSimplicio/status/1173373706674085888
-
No, the reason why the "DL community" is largely focused on supervised is because it works. Self-supervised/unsupervised is harder. Today, self-sup works *really* well in NLP. Not so much in vision. Yet.
-
it works well for some things in nlp, poorly for others, as i explained here: https://www.wired.com/story/adaptation-if-computers-are-so-smart-how-come-they-cant-read/
End of conversation

New conversation
This point has always been known. But again, it is a limitation of supervised learning, not of the architecture (deep or not). Geoff Hinton's focus on unsupervised learning for the last 40 years (and me for the last 20) stems from this.
[10 more replies]

New conversation
What you see as a flaw of an architecture, we see as a flaw of the learning paradigm. You want more structure in the architecture (and we don't necessarily disagree with that). We want a new learning paradigm that extracts more knowledge from raw data (unsupervised/self-supervised).
-
on all that we actually agree
@ylecun. i just want richer ways of integrating abstract knowledge into the mix.

[3 more replies]
New conversation
I know, which is why I am enjoying your book, but I get to the same conclusion via my roots as a system architect.