About 10,000 deep learning papers have been written showing that hard-coding priors about a specific task into a NN architecture works better than having no prior at all -- but they're typically passed off as "architecture XYZ offers superior performance for [overly generic task category]".
-
Fitting parametric models via gradient descent, unsurprisingly, works best when what you are fitting is already a template of the solution. Of course, convnets are an instance of this (but in a good way, since their assumptions generalize to all visual data).
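A minimal sketch of the kind of prior a convnet hard-codes (pure NumPy, toy 1D example, all names hypothetical): weight sharing applies the same small filter at every position, which makes the layer translation-equivariant -- shifting the input shifts the output.

```python
import numpy as np

def conv1d(x, w):
    """Valid 1D convolution (cross-correlation): the same weights w
    are reused at every position -- that reuse is the hard-coded prior.
    A dense layer has no such constraint and must learn each position
    separately."""
    n = len(x) - len(w) + 1
    return np.array([x[i:i + len(w)] @ w for i in range(n)])

rng = np.random.default_rng(0)
x = rng.normal(size=16)
w = rng.normal(size=3)

# Translation equivariance: convolving a shifted input gives a shifted
# output (comparing only positions unaffected by np.roll's wrap-around).
a = conv1d(np.roll(x, 2), w)
b = conv1d(x, w)
print(np.allclose(a[2:], b[:-2]))  # True
```

This assumption (the statistics of visual data are translation-invariant) holds for essentially all images, which is why the convnet prior generalizes rather than overfitting to one task.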
-
This is often done in Bayesian machine learning. Step 1: sample data from the generative model. Step 2: use your learning algorithm to see how well it learns from this data. If it works well, then you start looking at "real world" datasets.
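A minimal sketch of that recoverability check (NumPy; the model, parameter values, and tolerances are all hypothetical): sample data from a known linear-Gaussian generative model, fit it by least-squares maximum likelihood, and verify the known parameters are recovered before touching real data.

```python
import numpy as np

rng = np.random.default_rng(42)

# Step 1: sample data from the generative model y = X @ w_true + noise.
w_true = np.array([2.0, -1.0, 0.5])
X = rng.normal(size=(500, 3))
y = X @ w_true + rng.normal(scale=0.1, size=500)

# Step 2: run the learning algorithm (here, least-squares MLE) on the
# synthetic data and check how well it recovers the known parameters.
w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print(w_hat)                                  # ~ [ 2.0, -1.0,  0.5]
assert np.allclose(w_hat, w_true, atol=0.05)  # recovery sanity check

# Only once this passes would you move on to "real world" datasets.
```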