Did I not say above “sure [this particular blunder] can be remedied”? Question is how; please see discussion in my recent Medium piece re extrapolation and symbol-manipulation.
You cite yourself as "Marcus 2001", even in your tweets? Wow
referring to a specific work
End of conversation
New conversation
Thanks for sharing Brandon's work, Gary! As you may know, @_shrdlu_ and @ChrisKirov are getting impressive results with seq-to-seq modeling of Albright and Hayes' past tense data. So, as elsewhere, the solution is likely to modify NNs rather than abandon them.
Don’t know the results or the architectures; send links please
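For context on what that approach looks like, here is a minimal, purely illustrative sketch of framing English past-tense inflection as character-level sequence-to-sequence learning. The toy verb list, the GRU encoder-decoder, and all hyperparameters are assumptions for illustration; this is not the architecture or data setup used in the work referenced above.

```python
# Hypothetical sketch: past-tense inflection as character-level seq-to-seq.
# The verb pairs and hyperparameters below are illustrative assumptions.
import torch
import torch.nn as nn

pairs = [("walk", "walked"), ("jump", "jumped"), ("need", "needed"),
         ("sing", "sang"), ("ring", "rang"), ("cling", "clung")]

PAD, SOS, EOS = 0, 1, 2
chars = sorted({c for stem, past in pairs for c in stem + past})
stoi = {c: i + 3 for i, c in enumerate(chars)}
vocab = len(stoi) + 3

def encode(word, add_eos=True):
    ids = [stoi[c] for c in word]
    return ids + [EOS] if add_eos else ids

class Seq2Seq(nn.Module):
    def __init__(self, vocab, hidden=64):
        super().__init__()
        self.emb = nn.Embedding(vocab, hidden, padding_idx=PAD)
        self.encoder = nn.GRU(hidden, hidden, batch_first=True)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab)

    def forward(self, stem, past_in):
        # Encode the stem, then decode the inflected form character by
        # character, conditioned on the encoder's final hidden state.
        _, h = self.encoder(self.emb(stem))
        dec, _ = self.decoder(self.emb(past_in), h)
        return self.out(dec)

model = Seq2Seq(vocab)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss(ignore_index=PAD)

for epoch in range(200):
    for stem, past in pairs:
        src = torch.tensor([encode(stem)])
        tgt_in = torch.tensor([[SOS] + encode(past, add_eos=False)])
        tgt_out = torch.tensor([encode(past)])
        logits = model(src, tgt_in)  # teacher forcing
        loss = loss_fn(logits.view(-1, vocab), tgt_out.view(-1))
        opt.zero_grad()
        loss.backward()
        opt.step()
```

The open empirical question in the thread is how well such a model generalizes from memorized pairs to novel stems, which is what the Albright and Hayes materials are designed to probe.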
New conversation
This is a red herring: "The results presented... are from a model with... ten nodes in each hidden layer." Neural networks with a small number of units do not work. A small number of units => a high chance of getting stuck in local minima with first-order methods.
You are welcome to try with more hidden units on this text; for my original results, increasing the number of hidden units didn't help (even though that was a common suggestion).
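As a rough illustration of the kind of comparison being suggested here, the sketch below trains the same one-hidden-layer network at several widths on a toy regression task and reports held-out error. The task, activation, widths, and training settings are assumptions for illustration, not the model from the work under discussion.

```python
# Hypothetical sketch: does increasing the hidden-layer width change the
# outcome? Everything below (task, widths, hyperparameters) is illustrative.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy task: fit y = sin(x) on [-3, 3]; evaluate on held-out points.
x_train = torch.linspace(-3, 3, 200).unsqueeze(1)
y_train = torch.sin(x_train)
x_test = torch.linspace(-2.9, 2.9, 97).unsqueeze(1)
y_test = torch.sin(x_test)

def run(width, epochs=2000):
    # One hidden layer of the given width, trained with a first-order method.
    model = nn.Sequential(nn.Linear(1, width), nn.Tanh(), nn.Linear(width, 1))
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(x_train), y_train)
        loss.backward()
        opt.step()
    with torch.no_grad():
        return nn.functional.mse_loss(model(x_test), y_test).item()

for width in (10, 50, 250):
    print(f"hidden units = {width:4d}  test MSE = {run(width):.5f}")
```

Whether width matters will depend on the task; the point is only that the objection is cheap to test directly.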
New conversation
The question is, how much innate structure is too much? Maybe, in this case, what is needed is not additional structure but a different approach. I have a serious problem with calls in some quarters for adding more mathematical complexity to AGI. I don't think much math is needed.
I should add that math in #DeepLearning (optimising an objective function) has given us a problem that brains do not have: overfitting. It is obvious that brains learn without optimisation. How do they do it? Neurons are too slow and there is little energy in the brain for math.