There are many examples of this in the DL community, including work by me and colleagues on e.g. program induction https://arxiv.org/abs/1506.02516 and synthesis https://www.jair.org/index.php/jair/article/view/11172 … (aka neurosymbolic ML). Many great papers by other DL/ML peeps on these topics.
Replying to @egrefen @timkindberg and others
Don't get me wrong: it's great that people are pointing these things out and testing the limits of DL. But it's good to form a balanced view. Things aren't as black and white as "omg neural nets can do everything" vs "omg here's one failure mode let's ditch it all"
Agree w @egrefen here, too, but find that no matter how hard I advocate for hybrid models, people always think I am arguing against all of ML... I think ML will play a huge role in AGI, but only w proper biases, including representations of operations over variables.
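[Editorial aside, not code from anyone in the thread: one toy way to read "representations of operations over variables" is a model in which a small vocabulary of arithmetic primitives is built in (the "innate" part) and only a soft choice among them is learned, rather than asking the network to rediscover arithmetic from scratch. The sketch below is hypothetical and written in PyTorch; the OpSelector name and all details are made up for illustration.]

```python
# Illustrative sketch only: the arithmetic primitives are hard-coded ("innate"),
# and the model learns just a soft preference over them. All names are made up.
import torch
import torch.nn as nn

class OpSelector(nn.Module):
    def __init__(self):
        super().__init__()
        # Built-in operations over the variables x and y.
        self.primitives = [
            lambda x, y: x + y,
            lambda x, y: x - y,
            lambda x, y: x * y,
        ]
        # The only learned parameters: logits over the primitives.
        self.logits = nn.Parameter(torch.zeros(len(self.primitives)))

    def forward(self, x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        weights = torch.softmax(self.logits, dim=0)                              # (3,)
        candidates = torch.stack([op(x, y) for op in self.primitives], dim=-1)   # (batch, 3)
        return candidates @ weights                                              # soft selection

# Tiny usage example: recover "x * y" from data by adjusting only the selector.
model = OpSelector()
opt = torch.optim.Adam(model.parameters(), lr=0.1)
x, y = torch.randn(512), torch.randn(512)
target = x * y
for _ in range(200):
    opt.zero_grad()
    loss = ((model(x, y) - target) ** 2).mean()
    loss.backward()
    opt.step()
```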
Replying to @GaryMarcus @egrefen and others
Properly understood, #MachineLearning is the study of the relationship of biases to what can be learned with each (and how to implement them computationally). It's bias, all the way down.
Replying to @ShlomoArgamon @egrefen and others
and yet innateness is a dirty word in ML. amazing.
Replying to @gchrupala @GaryMarcus and others
Gary, are you really arguing that anything you can't get your ML algorithm to do, you can just build in and declare to be "innate"? It is totally ad hoc science to rely on innateness to solve your machine learning problems.
Replying to @tdietterich @gchrupala and others
Where are the principles that tell us what should be innate vs learned?
Replying to @tdietterich @gchrupala and others
That's why some of us study the brain also, in order to learn those principles from a successful example.
Replying to @bradpwyble @gchrupala and others
I agree that studying the brain is very important, but it is not easy to figure out what is innate and what is learned in biological systems. Furthermore, in AI we often seek to build non-human-like AI systems where we will need non-human forms of innateness. We need principles!
You should really read some of Spelke’s efforts to understand the principles that guide human children, as a source of possible insight.