IMO, what's "naive" isn't the DNNs. It's the task. When we give a neural net a set of input/output pairs that can be "solved" with a solution we think is "naive", the fault isn't the DNN's. We need to focus a LOT more on the task, and worry less about architecture! https://twitter.com/GaryMarcus/status/1068279223612129281
-
-
My understanding is that the current mindset focuses on finding what modifications need to be applied to a DNN to solve a given task. DNNs are expected to be a universal tool. What if they are not?!
-
-
-
I also think it's more than just OOD: a typical training setup won't really teach the concept of 3D (addressing that would yield a much harder problem). A fixed, flat set of labels is problematic, and so is data augmentation (a bus and an upside-down bus are not always the same).
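The augmentation point above can be sketched in code. This is a hypothetical illustration (the class names and flip policy are made up, not from any real dataset): a blanket vertical flip injects label noise for orientation-sensitive classes, so the augmentation has to be label-aware.

```python
import numpy as np

# Hypothetical example classes where a vertical flip changes the meaning
# (an upside-down bus is not just "a bus"). Illustrative only.
ORIENTATION_SENSITIVE = {"bus", "traffic_light"}

def augment(image, label, rng):
    """Vertically flip only when the class is orientation-invariant.

    A blanket `np.flipud` over all classes would silently pair
    flipped images with unchanged labels for the sensitive classes.
    """
    if label not in ORIENTATION_SENSITIVE and rng.random() < 0.5:
        image = np.flipud(image)  # flip rows (vertical flip)
    return image, label

rng = np.random.default_rng(0)
img = np.arange(6).reshape(3, 2)
out, lab = augment(img, "bus", rng)
# "bus" is orientation-sensitive, so it is never flipped:
assert np.array_equal(out, img) and lab == "bus"
```

The same issue applies to any transform that is only *sometimes* semantics-preserving; a flat label set gives the pipeline no way to know which case it is in.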
-
But it also seems that nearly all neural nets (save a few exceptions that can learn programs) are only interpolative learners. The favored scheme of supervised training cements this further. This puts a hard ceiling on the learning capability of basically all common neural net setups.
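The "interpolative learner" claim can be seen in a toy experiment. The following is a minimal sketch (tiny hand-written ReLU net, arbitrary width and learning rate, all chosen for illustration): fit y = x² on [-1, 1], then query outside the training range. A ReLU net is piecewise linear, so beyond its data it extrapolates along a straight line and misses the curvature.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.linspace(-1, 1, 200).reshape(-1, 1)  # training inputs, all in [-1, 1]
y = X ** 2

H = 32                                       # hidden width (arbitrary)
W1 = rng.normal(0, 1.0, (1, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.5, (H, 1)); b2 = np.zeros(1)

lr = 0.05
for _ in range(5000):                        # full-batch gradient descent on MSE
    Z = X @ W1 + b1
    A = np.maximum(Z, 0)                     # ReLU
    pred = A @ W2 + b2
    err = pred - y
    gW2 = A.T @ err / len(X); gb2 = err.mean(0)
    dA = (err @ W2.T) * (Z > 0)              # backprop through ReLU
    gW1 = X.T @ dA / len(X); gb1 = dA.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

def f(x):
    x = np.asarray(x, float).reshape(-1, 1)
    return (np.maximum(x @ W1 + b1, 0) @ W2 + b2).ravel()

in_err = abs(f([0.5])[0] - 0.25)   # inside the training interval
out_err = abs(f([3.0])[0] - 9.0)   # outside it: piecewise-linear extrapolation
print(in_err, out_err)
```

Inside the interval the fit is close; at x = 3 the net continues its last linear piece while the target curves away, so the error is much larger. More data from [-1, 1] wouldn't fix this, which is the "hard ceiling" point.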