You can handle arbitrarily complex tasks with large parametric models trained with SGD. The problem is that doing it well requires a *dense sampling* of the input/output space you're learning, because the generalization power of these models is extremely weak. That's expensive. https://t.co/Lsc7zlQBFE
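A toy sketch (mine, not from the tweet) of the dense-sampling point: fit a small MLP on sparse vs. dense samples of a simple 1-D function and measure error inside and outside the sampled interval. The target function, sample counts, and intervals are all illustrative assumptions.

```python
# Toy illustration (not from the tweet): parametric models trained with SGD
# need dense coverage of the input space; they interpolate between samples
# and fail outside them.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
f = lambda x: np.sin(3 * x)  # assumed target function

def fit_and_score(n_train):
    x_train = rng.uniform(-2, 2, size=(n_train, 1))
    y_train = f(x_train).ravel()
    model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000,
                         random_state=0).fit(x_train, y_train)
    x_interp = np.linspace(-2, 2, 200).reshape(-1, 1)  # inside training range
    x_extrap = np.linspace(3, 5, 200).reshape(-1, 1)   # outside training range
    mse = lambda x: float(np.mean((model.predict(x) - f(x).ravel()) ** 2))
    return mse(x_interp), mse(x_extrap)

for n in (10, 100, 1000):
    interp, extrap = fit_and_score(n)
    print(f"n={n:5d}  interpolation MSE={interp:.4f}  extrapolation MSE={extrap:.4f}")
```

You should see interpolation error fall as sampling densifies, while extrapolation error typically stays high regardless: more data helps only where it covers the space.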
-
Who do you think is attacking the generalization problem correctly, @fchollet? Maybe @vicariousai?
-
I know it isn't anyone using datasets.
-
The problem is focusing on the individual mind (whether human or AI). Humans not only know stuff; we also know who knows the stuff we don't, and we can ask them. Our collective knowledge far exceeds our subjective knowledge; human intelligence is collaborative and social.
-
Human capabilities do come from our civilization -- externalized intelligence -- but even an individual human, though just a clever ape, is still many orders of magnitude more intelligent than today's AI.
-
This Tweet is unavailable.
-
Yup, and several hundred million years of evolution has no doubt helped. Animals such as deer, which can stand and walk almost immediately after birth, are a clear example of this. Our firmware is just really well trained to learn and adapt to new situations.
-
Earlier AI researchers, Douglas Hofstadter specifically, claimed analogy is the core of cognition. Our powerful generalisation ability is pattern-matching composite abstractions and tweaking one tiny detail to make them apply to the current situation. So *MORE* experience.
-
I haven't finished GEB yet, but Hofstadter seems to be heavy on the loops theory. I think optimism bias has a large role to play: humans get loads of stuff wrong, but we just ignore it and keep on looping! Kahneman shows how fallible human "intuition" is in Thinking, Fast and Slow.
-
Don’t remember who said “glorified nearest neighbor...”
-
@fchollet Mr. Francois Chollet, I just wanted to know: how does a model embed the context when we input text and image vectors, as in, say, image caption generation?
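Not an authoritative answer, but a minimal sketch of one common pattern (the "merge" architecture for captioning): embed the image features and the partial caption into the same space, fuse them, and predict the next word. All sizes (VOCAB_SIZE, MAX_LEN, IMG_FEAT) are assumed placeholders.

```python
# Minimal "merge"-style captioning sketch: image features and the caption
# prefix are embedded separately, added together, and used to predict the
# next word.
from tensorflow import keras
from tensorflow.keras import layers

VOCAB_SIZE = 10000  # assumed vocabulary size
MAX_LEN = 20        # assumed max caption length
IMG_FEAT = 2048     # assumed pooled CNN features (e.g. a ResNet backbone)
EMBED_DIM = 256

# Image branch: project CNN features into the shared embedding space.
img_in = keras.Input(shape=(IMG_FEAT,), name="image_features")
img_emb = layers.Dense(EMBED_DIM, activation="relu")(img_in)

# Text branch: embed and encode the caption tokens generated so far.
txt_in = keras.Input(shape=(MAX_LEN,), name="caption_tokens")
txt_emb = layers.Embedding(VOCAB_SIZE, EMBED_DIM, mask_zero=True)(txt_in)
txt_enc = layers.LSTM(EMBED_DIM)(txt_emb)

# Fuse the two modalities and predict the next word.
merged = layers.add([img_emb, txt_enc])
hidden = layers.Dense(EMBED_DIM, activation="relu")(merged)
next_word = layers.Dense(VOCAB_SIZE, activation="softmax")(hidden)

model = keras.Model(inputs=[img_in, txt_in], outputs=next_word)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```

At inference you'd feed the image features plus the caption so far, take the argmax (or sample), append it, and repeat until an end token.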
-