And they all have The Algebraic Mind on their desks and whatever, I swear!
Given that there are tons of people with those interests, and many more papers being published that reflect that, I think the field of AI is actually decently healthy as a whole from the perspective of incorporating cog sci ideas...
Including variable binding and compositionality! So why does something like GPT-2 seem to get more attention? Well, many reasons. First, it is something everyone can understand, a very visceral demo, and one with potentially serious immediate application to deep fakes.
But also because it represents a real advance -- among other things, it uses Transformer architectures, which are not conventional neural nets and arguably have much more symbolic inductive biases.
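For concreteness, here is a minimal numpy sketch of the scaled dot-product attention at the core of a Transformer (the function name, variable names, and toy shapes are illustrative, not from the thread). The query/key/value lookup is the kind of content-addressable retrieval people tend to point to when they describe the Transformer's inductive bias as more symbolic or binding-like.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Each query retrieves a softmax-weighted blend of the values whose keys
    match it -- a soft, content-addressable lookup rather than fixed wiring."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # query-key similarities
    scores -= scores.max(axis=-1, keepdims=True)     # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over keys
    return weights @ V                               # blend the matching values

# Toy example: 2 queries attending over 3 key/value "slots" of dimension 4.
rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(2, 4)), rng.normal(size=(3, 4)), rng.normal(size=(3, 4))
print(scaled_dot_product_attention(Q, K, V).shape)   # (2, 4)
```

The structural point is just that which value gets retrieved is decided at runtime by content, which is roughly the flavor of variable binding being gestured at.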
Oddly enough, though, Transformer came out of trying to improve performance on relatively conventional NLP tasks / language modeling, which neglects commonsense grounding and world model building.
Likewise, many important unsupervised learning methods evolved from training on ImageNet. Indeed, we got out of an AI winter because of ImageNet.
I think what you want is a diversity of approaches. Some are taking the kinds of things Gary wants very literally, others less so in their specific foci. As a full ecosystem, and due to innovative cross-disciplinary groups like DeepMind, I think the field is in pretty good shape!
This is not to say there couldn't be more diverse approaches being taken, nor that even more cross-disciplinarity wouldn't be even better.
But let us not forget that actually getting deep learning models to work on hard benchmarks got us out of the winter, and that is what is now allowing many flowers to bloom, including variable-binding and neuro-symbolic-type flowers.
As Gary has argued, something like AlphaGo is arguably a neuro-symbolic approach of a kind already. As those things work more and outcompete simpler / less structured approaches (if/when they do), there is no real obstacle to that becoming recognized and scaled...
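A rough sketch of why AlphaGo-style systems read as neuro-symbolic: a learned network supplies move priors and evaluations, and an explicit search procedure composes them into a decision. The sketch below is a heavily simplified, one-ply PUCT-style selection with made-up placeholder functions (neural_prior, evaluate), a toy illustration rather than DeepMind's actual implementation.

```python
import math, random

def neural_prior(state, moves):
    """Stand-in for the learned policy network: a probability per candidate move.
    (Uniform placeholder here; in AlphaGo this is a trained deep net.)"""
    return {m: 1.0 / len(moves) for m in moves}

def select_move(state, moves, evaluate, simulations=200, c_puct=1.0):
    """One-ply PUCT-style search: explicit visit statistics and an argmax over an
    exploration formula (the symbolic part), guided by the neural prior above."""
    prior = neural_prior(state, moves)
    visits = {m: 0 for m in moves}
    value_sum = {m: 0.0 for m in moves}
    for _ in range(simulations):
        total_visits = sum(visits.values()) + 1
        def puct(m):
            q = value_sum[m] / visits[m] if visits[m] else 0.0
            u = c_puct * prior[m] * math.sqrt(total_visits) / (1 + visits[m])
            return q + u
        m = max(moves, key=puct)                 # pick the most promising move
        value_sum[m] += evaluate(state, m)       # stand-in for value net / rollout
        visits[m] += 1
    return max(moves, key=lambda m: visits[m])   # most-visited move wins

# Toy usage: move "b" has the highest (noisy) evaluation, so it gets selected.
noisy_eval = lambda s, m: {"a": 0.2, "b": 0.8, "c": 0.1}[m] + 0.05 * random.random()
print(select_move("root", ["a", "b", "c"], noisy_eval))
```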
I would be more worried in a pure academic environment, actually, which can be vulnerable to fads / monoculture. In a cross-disciplinary, AGI-focused pure & applied research org, and a huge, diverse field, if we challenge ourselves on hard + naturalistic problems we'll keep progressing.