of course, a lot rests on what you mean by the right architecture, and whether you include symbol-manipulation (which we argue is a key component) in that scope. you and i probably ultimately agree on the value of *hybrid* architecture.
Replying to @GaryMarcus @mpshanahan and
I think we can agree that the brain has specialized components that work in concert to achieve human cognition. We can furthermore agree that each of these components is built of specialized neural networks. Hybrid architectures are a stepping stone towards more general kinds.
Replying to @IntuitMachine @mpshanahan and
if you have specialized components that operate under different principles, you have hybrid systems.
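To make "specialized components operating under different principles" concrete, here is a minimal sketch of one possible hybrid pipeline: a stand-in for a trained neural perception module emits graded detections, and a hand-written symbolic rule engine reasons over the resulting discrete facts. Every name, rule, and number below is invented for illustration; this is not anyone's actual architecture from the thread.

# Minimal illustrative sketch of a "hybrid" system: a learned (here, stubbed)
# perception component and a symbolic rule-based component that operate under
# different principles but work in concert. All names are hypothetical.

from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # symbolic category emitted by the perceptual component
    confidence: float  # graded, sub-symbolic signal

def neural_perception(image) -> list[Detection]:
    """Stand-in for a trained neural network (pattern recognition)."""
    # In a real system this would be a forward pass through a trained model.
    return [Detection("dog", 0.92), Detection("leash", 0.81)]

# Symbolic component: explicit rules over discrete facts (variables,
# composition, logic rather than weighted sums).
RULES = {
    ("dog", "leash"): "someone is probably walking a dog",
}

def symbolic_reasoner(detections: list[Detection], threshold: float = 0.5) -> list[str]:
    facts = {d.label for d in detections if d.confidence >= threshold}
    return [conclusion for premises, conclusion in RULES.items()
            if set(premises) <= facts]

if __name__ == "__main__":
    detections = neural_perception(image=None)   # stubbed input
    print(symbolic_reasoner(detections))          # -> ['someone is probably walking a dog']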
Replying to @GaryMarcus @mpshanahan and
Neuroscientists will unanimously agree that every specialized component of the brain is made up of neural networks.
Replying to @IntuitMachine @mpshanahan and
of course. as i have written numerous times, the question is not whether the brain is a neural network, but rather what *kinds* of neural networks the brain uses, whether they have anything to do with the tools in ML, and what sorts of computations they implement.
Replying to @GaryMarcus @IntuitMachine and
Indeed. E.g., we cannot solve intelligence under the feedforward-network paradigm, which seems particularly suited for motor control but not for reasoning and general problem solving. NNs with supervised learning based on 'output' values are nowhere to be seen in the brain.
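For concreteness, "supervised learning based on 'output' values" means learning driven by an explicit target supplied for the system's output, as in the textbook delta-rule sketch below. The numbers are made up and the example makes no claim about how the brain might implement anything.

# Generic illustration of supervised learning: the learner is given an explicit
# target for its output and adjusts parameters to reduce the error against that
# target (delta rule / gradient descent on squared error). Purely illustrative.

def train_linear_unit(samples, lr=0.1, epochs=50):
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, target in samples:   # target = externally supplied 'output' value
            y = w * x + b           # prediction
            error = y - target      # signed error against the target
            w -= lr * error * x     # gradient step on squared error
            b -= lr * error
    return w, b

# Learn y = 2x + 1 from labelled (input, target) pairs.
samples = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]
w, b = train_linear_unit(samples)
print(round(w, 2), round(b, 2))  # approximately 2.0 and 1.0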
Replying to @Abel_TorresM @IntuitMachine and
oh, i think the brain does some supervised learning, even if we aren’t sure yet of the mechanisms.
Replying to @GaryMarcus @IntuitMachine and
Agree, supervision is present, but how? Kids don't create a 'dog' representation by hearing the word in the presence of the animal: they associate the label with a representation they had *already created* by other mechanisms; plus, supervised information also comes in the *input*.
Replying to @Abel_TorresM @IntuitMachine and
ostensive learning through labels is a small part of language but it certainly does happen. and people can learn all kinds of things if they get direct feedback on their errors. supervised learning exists; but it’s just one mechanism of learning among many.
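One way to picture the distinction drawn in the last two tweets: representations are formed first by some other (here, unsupervised) mechanism, and an ostensive label is then attached to an existing representation rather than creating it. The sketch below is purely illustrative; the clusters, data points, and names are all invented.

# Illustrative sketch: clusters stand in for representations formed by an
# unsupervised process; a label is then attached to an existing cluster by a
# single ostensive example, and generalises to new members of that cluster.

def nearest_cluster(point, centroids):
    return min(centroids,
               key=lambda name: sum((p - c) ** 2 for p, c in zip(point, centroids[name])))

# Step 1: pretend an unsupervised process already produced these centroids
# (e.g., clustering over unlabelled perceptual features).
centroids = {
    "cluster_A": (0.9, 0.1),
    "cluster_B": (0.1, 0.9),
}

# Step 2: ostensive labelling -- one co-occurrence of a word with an exemplar
# attaches the label to whichever existing representation the exemplar falls into.
labels = {}
labels[nearest_cluster((0.85, 0.15), centroids)] = "dog"   # "dog!" said while pointing
labels[nearest_cluster((0.20, 0.80), centroids)] = "cat"

# Step 3: the label now generalises to new members of the same cluster.
print(labels[nearest_cluster((0.95, 0.05), centroids)])  # -> dog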
Replying to @GaryMarcus @Abel_TorresM and
The kind of learning that humans employ may be entirely different from the supervised learning we find in deep learning (DL). There is no compelling evidence that DL is how even simple animals learn. DL is an alien kind of intuition that isn't found in biology.
certainly.