About 10,000 deep learning papers have been written showing that "hard-coding priors about a specific task into a NN architecture works better than having no prior" -- but they're typically passed off as "architecture XYZ offers superior performance for [overly generic task category]".
-
But how can you avoid doing this? Other than a "data type prior" (e.g. a CNN), in the end you need a task-dependent signal & something (ops, architecture) that forces your model to learn the useful correlations between your data and your signal (among other things).
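For concreteness, a minimal sketch (assumed PyTorch, not from the thread) of what a "data type prior" looks like in practice: a conv layer hard-codes translation equivariance into the architecture via locality and weight sharing, while a plain linear layer on the same pixels encodes no such assumption.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Prior baked into the architecture: locality + weight sharing (translation equivariance).
conv = nn.Conv2d(1, 4, kernel_size=3, padding=1, bias=False)
# No such prior: every pixel is connected to every output unit.
mlp = nn.Linear(28 * 28, 4 * 28 * 28, bias=False)

x = torch.randn(1, 1, 28, 28)
x_shifted = torch.roll(x, shifts=2, dims=-1)  # translate the image 2 pixels to the right

# The conv output shifts along with the input (comparing away from the borders)...
out, out_shifted = conv(x), conv(x_shifted)
print(torch.allclose(torch.roll(out, shifts=2, dims=-1)[..., 4:-4],
                     out_shifted[..., 4:-4], atol=1e-5))      # True: equivariance

# ...while the linear layer's output changes arbitrarily under the same shift.
print(torch.allclose(mlp(x.flatten(1)), mlp(x_shifted.flatten(1)), atol=1e-5))  # False
```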
-
It's hard to find a paper that doesn't at least do a variation of this play.