Deep learning isn't magic, and it won't work at all on the specific problems you want it to work on, but it is capable of more than you know when it comes to problems where the manifold hypothesis applies (and that's a lot of problems).
This is a bit like watching a bunch of people trying to make steam-powered airplanes and rockets work in the year 1800 (it won't) while largely missing the world-changing potential of applying steam power to trains and large industrial machinery
"Meh, it's just a factory, boring... but look at my cool model rocket!" No, the boring factory is going to change the face of the world, and meanwhile your GPT model rocket won't scale past the toy stage
End of conversation
New conversation
Extracting information that isn't there is a current deep learning mania.
Here's hoping we leave that phase behind with the past decade.
End of conversation
New conversation
Sometimes clients come to us with expectations of deep learning as if we were genies from a lamp.
Is that because of a limitation in training or architecture or both? Is it a lack of recurrence? Obviously there are neural networks in the world capable of arbitrary tasks…
Paraphrasing his tweets: DL is just another heuristic for manifold learning. So if the underlying manifold that DL is supposed to approximate is not low-dimensional enough for your problem, then nothing else matters (not the training, nor the architecture).
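The intrinsic-dimension point can be made concrete with a minimal NumPy sketch (the dimensions and random linear embedding here are illustrative choices, not anything from the thread): data that looks 50-dimensional but actually lies on a 2-dimensional manifold, which a simple SVD immediately reveals.

```python
import numpy as np

# Hypothetical illustration of the manifold hypothesis: samples that look
# high-dimensional (50-D ambient space) but lie on a 2-D linear manifold.
rng = np.random.default_rng(0)

latent = rng.normal(size=(1000, 2))   # 2 intrinsic coordinates per sample
embed = rng.normal(size=(2, 50))      # fixed linear embedding into 50-D
data = latent @ embed                 # shape (1000, 50): ambient dim is 50

# PCA via SVD of the centered data: singular values show how many
# directions actually carry variance.
centered = data - data.mean(axis=0)
singular_values = np.linalg.svd(centered, compute_uv=False)
explained = singular_values**2 / np.sum(singular_values**2)

# Essentially all variance sits in the first 2 components; the remaining
# 48 ambient dimensions contain no information.
print(explained[:3])
```

A linear embedding plus PCA is of course the simplest possible case; real data manifolds are nonlinear. The point the sketch supports is the one above: what matters is the intrinsic dimension of the manifold, not the ambient dimension of the inputs.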
New conversation
This is insightful, and I'm thankful you provided me with material on DL.