Ohhh ok
Deep learning tends to underemphasize priors, and often dismisses symbols (see my last tweet, with footage from Hinton earlier this week) that may be required for the right priors. But we can all agree that we learn a lot.
-
Plenty of priors in DL. E.g., in ConvNets: (1) texture filters, (2) their sizes, (3) max pooling (discard information), (4) training on still images (as if temporal context were irrelevant to vision), (5) banning feedback from higher layers (as if spatial context were irrelevant to vision), etc.
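A minimal sketch (in PyTorch, my choice; the model and all names are illustrative, not from the thread) of how each of these hard-coded priors shows up in an ordinary ConvNet:

```python
import torch
import torch.nn as nn

class TinyConvNet(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        # Priors (1)+(2): learnable local filters with a hand-chosen 3x3 size.
        self.conv1 = nn.Conv2d(3, 16, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(16, 32, kernel_size=3, padding=1)
        # Prior (3): max pooling keeps only the strongest activation per window,
        # deliberately discarding the rest.
        self.pool = nn.MaxPool2d(kernel_size=2)
        self.head = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Prior (4): x is a batch of single still images (N, 3, 32, 32); no time axis.
        # Prior (5): computation is strictly bottom-up; no layer ever sees
        # activations from the layers above it.
        x = self.pool(torch.relu(self.conv1(x)))
        x = self.pool(torch.relu(self.conv2(x)))
        return self.head(x.flatten(1))

logits = TinyConvNet()(torch.randn(4, 3, 32, 32))  # -> shape (4, 10)
```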
-
Two points: (1) These hard-coded priors in deep convnets "evolve" (are copied & changed) from previous generations of DL models that succeeded and were published. (2) These very priors in fact prevent robust performance of convnets on real-world problems. #ai
New conversation
-
I agree that DL views priors as something to be avoided when possible, but I think that’s sensible when we don’t have full knowledge of the right solution. Example: I don’t actually think it’s easy to determine when symbols are the right prior in most real-world applications.
-
In practice, DL uses priors a lot. That's why the NLP folks use neural embeddings like Word2Vec, ELMo, and BERT. The point is that priors are NOT hand-generated but rather learned. Which makes sense, since exaptation in evolution employs the same kind of reuse.
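A hedged sketch of that reuse pattern (PyTorch assumed; `pretrained` below is a random stand-in, where in practice it would be loaded from a trained Word2Vec/ELMo/BERT checkpoint):

```python
import torch
import torch.nn as nn

vocab_size, dim = 10_000, 300
# Stand-in for vectors learned on a large corpus; the learned prior.
pretrained = torch.randn(vocab_size, dim)

# Initialize the new model's embedding table from the pretrained vectors
# instead of hand-coding features; freeze=False lets fine-tuning adapt them,
# reusing old structure for a new task, much like exaptation.
embedding = nn.Embedding.from_pretrained(pretrained, freeze=False)

token_ids = torch.tensor([[1, 5, 42]])
vectors = embedding(token_ids)  # -> shape (1, 3, 300)
```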