I’m flattered, but—while I don’t want to put words in Yann’s or Geoff’s respective mouths—I don’t think anyone is trying to strike symbols and concepts from the repertoire of our machine learning methods, even within DL. It’s just that they may become implicit...
“Needed to replace”? “Aetherial symbols” (title of Hinton talk I linked earlier)? I am trying to pin down what they actually mean, but likening symbols to aether that must be replaced is not just saying they are unconscious, it’s saying they are misguided and unnecessary.
I mean, a lot of DL research involves trying to produce models which generalise better by encoding priors about the domain, or inductive biases, into architectures and learning methods. From ConvNets to Capsules, there are plenty of examples of this...
Convolution is a great innate prior — defined in advance to work over all instances of a variable.
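To make the "convolution as an innate prior" point concrete, here is a minimal illustrative sketch (not from the thread; plain NumPy, with a made-up edge-detector kernel): because one small kernel is reused at every position, the same detector applies to any instance of the input variable, and translation equivariance is built into the architecture rather than learned from data.

import numpy as np

def conv1d(signal, kernel):
    # Valid 1-D convolution (DL-style cross-correlation) with a single
    # shared kernel: the weight-sharing prior.
    k = len(kernel)
    return np.array([np.dot(signal[i:i + k], kernel)
                     for i in range(len(signal) - k + 1)])

kernel = np.array([1.0, -1.0])            # a fixed, hand-designed edge detector
x = np.array([0, 0, 1, 1, 1, 0, 0], dtype=float)
x_shifted = np.roll(x, 1)                 # the same pattern, translated one step

y = conv1d(x, kernel)                     # [ 0. -1.  0.  0.  1.  0.]
y_shifted = conv1d(x_shifted, kernel)     # [ 0.  0. -1.  0.  0.  1.]

# The response to the shifted input is just the shifted response (up to the
# border): the prior handles every position without extra parameters.
print(y)
print(y_shifted)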
So in a sense, I’ve always thought people like Yann, Geoff, Yoshua, etc. are more aligned with you in their research than you seem to think. But then again, I might be projecting my own views onto how I interpret theirs, and they will be better placed to respond to you.
Anyway, I’m at least glad you think what I’ve been doing, in particular with Richard Evans (@LittleBimble), has been a move somewhat in the right direction.