I see Val posted the piece. But I would love to see op-eds and articles on the need for transdisciplinary science, and to have this call become part of scientific conferences, like @SPSPnews and med research confs. This really needs to change.
Replying to @amyalkon @Lester_Domes and
A major if quiet theme of http://Rebooting.AI is that AI can't be solved by ML alone and needs lots of interdisciplinary collaboration. I would be happy to bring out that theme more strongly for some forum. (It's also implicit here: https://www.nytimes.com/2019/09/06/opinion/ai-explainability.html?smid=nytcore-ios-share )
Replying to @patrick_s_smart @GaryMarcus and
Does the DeepMind team include biologists, psychologists, and cognitive scientists? I know their team includes neuroscientists, although I haven't really seen them pursue neuromorphic AI.
Replying to @connectedregio1 @patrick_s_smart and
Matt Botvinick, for one, assuming he is still there.
@AdamMarblestone has very broad interests.
Replying to @GaryMarcus @connectedregio1 and
Many cognitive people. Just from a two-second scan of Matt's Scholar profile you see tons of people working on the innate and compositionality-focused inductive biases Gary and others have emphasized: https://arxiv.org/abs/1901.08162 https://arxiv.org/abs/1901.11390 https://deepmind.com/research/publications/deep-reinforcement-learning-relational-inductive-biases https://arxiv.org/abs/1806.01261
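(A purely illustrative aside, not code from any of the papers linked above: a "relational" or compositionality-focused inductive bias usually amounts to something like the toy NumPy sketch below, where one shared relation function is applied to every pair of entities, so whatever it learns about one pair transfers to all pairs. All names and sizes here are invented.)

import numpy as np

def relation_layer(entities, W1, b1, W2, b2):
    # entities: (n, d). Apply one shared two-layer MLP to every ordered pair
    # of entities, then sum -- the weight sharing across pairs is the bias.
    n, d = entities.shape
    pairs = np.concatenate(
        [np.repeat(entities, n, axis=0),       # first member of each pair
         np.tile(entities, (n, 1))], axis=1)   # second member of each pair
    hidden = np.maximum(0.0, pairs @ W1 + b1)  # shared ReLU layer
    relations = hidden @ W2 + b2
    return relations.sum(axis=0)               # aggregate over all pairs

rng = np.random.default_rng(1)
entities = rng.normal(size=(4, 3))             # 4 toy objects, 3 features each
W1, b1 = rng.normal(size=(6, 16)), np.zeros(16)
W2, b2 = rng.normal(size=(16, 8)), np.zeros(8)
print(relation_layer(entities, W1, b1, W2, b2).shape)  # (8,)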
Replying to @AdamMarblestone @GaryMarcus and
And they all have The Algebraic Mind on their desks and whatever, I swear!
Replying to @AdamMarblestone @GaryMarcus and
Given that there are tons of people with those interests, and many more papers being published that reflect that, I think the field of AI is actually decently healthy as a whole from the perspective of incorporating cog sci ideas...
Replying to @AdamMarblestone @GaryMarcus and
Including variable binding and compositionality! So why does something like GPT-2 seem to get more attention? Well, many reasons. First, it is something everyone can understand, a very visceral demo, and one with potentially serious immediate application to deep fakes.
Replying to @AdamMarblestone @GaryMarcus and
But also because it represents a real advance: among other things, it uses Transformer architectures, which are not conventional neural nets and arguably have much more symbolic inductive biases.
Oddly enough, though, the Transformer came out of trying to improve performance on relatively conventional NLP tasks / language modeling, which neglects commonsense grounding and world-model building.
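(Again a toy illustration rather than anyone's actual code: the content-based query/key matching at the heart of a Transformer looks roughly like the single-head self-attention sketch below in NumPy, and that lookup-by-content behavior is what the "more symbolic inductive bias" argument points to. Dimensions and variable names are made up.)

import numpy as np

def self_attention(x, Wq, Wk, Wv):
    # x: (seq_len, d_model); Wq/Wk/Wv: (d_model, d_head) projections.
    q, k, v = x @ Wq, x @ Wk, x @ Wv                 # queries, keys, values
    scores = q @ k.T / np.sqrt(k.shape[-1])          # pairwise content matching
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over positions
    return weights @ v                               # each token reads a blend of values

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 8))                          # 5 tokens, 8-dim embeddings
Wq, Wk, Wv = (rng.normal(size=(8, 4)) for _ in range(3))
print(self_attention(x, Wq, Wk, Wv).shape)           # (5, 4)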
Replying to @AdamMarblestone @GaryMarcus and
Likewise, many important unsupervised learning methods evolved from training on ImageNet. Indeed, we got out of an AI winter because of ImageNet.
Replying to @AdamMarblestone @GaryMarcus and
I think what you want is a diversity of approaches. Some are taking the kinds of things Gary wants very literally, others less so in their specific foci. As a full ecosystem, and due to innovative cross-disciplinary groups like DeepMind, I think the field is in pretty good shape!