About 10,000 deep learning papers have been written about "hard-coding priors about a specific task into an NN architecture works better than a lack of prior" -- but they're typically passed off as "architecture XYZ offers superior performance for [overly generic task category]"
Replying to @fchollet
Can you give an example of the kind of thing you are describing? What is the "prior" knowledge? I think there is a huge gap between the language of architectural structures and typical languages for prior knowledge.
Replying to @tdietterich
I mentioned the example of bAbI. If you already know how to go from the input data to the answer, you can express the process as a parametric formula and have your network learn the parameters. Most of the work was done by you, not by your network.
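A minimal sketch of that idea, using an invented toy task rather than bAbI itself (the template y = 3*x1 - 2*x2 and all numbers below are made up for illustration): if you already know the data-generation template, the "architecture" can hard-code it as a parametric formula, leaving only two scalars for gradient descent to recover.

```python
# Sketch (hypothetical toy task, not from the thread): hard-coding the known
# data-generation template and letting the "network" learn only its parameters.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1]          # hidden data-generation template

# "Architecture" that bakes in the template: y_hat = a*x1 + b*x2,
# with only a and b left to fit.
a, b = 0.0, 0.0
lr = 0.1
for _ in range(200):
    y_hat = a * X[:, 0] + b * X[:, 1]
    err = y_hat - y
    # gradients of mean squared error w.r.t. a and b
    a -= lr * 2 * np.mean(err * X[:, 0])
    b -= lr * 2 * np.mean(err * X[:, 1])

print(f"learned a={a:.3f}, b={b:.3f}")     # converges to ~3 and ~-2
```

A generic architecture with no such prior would have to discover the mapping from scratch; here, almost all of the work was done by whoever chose the formula, not by the learner.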
Replying to @fchollet
Are you referring to the original bAbI paper, or a later one?
You can rank every bAbI-solving paper by how closely the solution architecture maps to the data-generation template. It's basically the same as ranking by accuracy. This is true for most tasks outside of perception -- bAbI is just an extreme example.