The problem with #machinelearning in a nutshell, via extensional vs intensional logic:
A: {2, 4, 6, 8, 10, ... X}
B: {2x: x ∈ N}
There is no way for a statistical, asymbolic machine to arrive at B from A, no matter how large you choose X. @GaryMarcus @filippie509 #ai
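A minimal sketch (not from the thread) of why the finite extension underdetermines the intension: the hypothetical `impostor` function below agrees with 2x on every observed point {2, 4, 6, 8, 10} yet diverges immediately afterwards, so the data alone cannot single out "2x".

```python
def double(x):
    return 2 * x

def impostor(x):
    # 2x plus a term that vanishes exactly at the observed points x = 1..5
    extra = 1
    for k in range(1, 6):
        extra *= (x - k)
    return 2 * x + extra

print([impostor(x) for x in range(1, 6)])  # [2, 4, 6, 8, 10] — matches A exactly
print(double(6), impostor(6))              # 12 vs 132 — the rules disagree off the data
```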
This isn't a problem with machine learning, but rather with low-bias approximators. They fit the data well and can interpolate, but they have no mathematical reason to extrapolate. A lot of work, including in deep learning, aims at better extrapolation by introducing bias.
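The interpolate-but-not-extrapolate point can be made concrete with a deliberately low-bias learner (a sketch, not anyone's actual model): a 1-nearest-neighbour predictor memorizes the training pairs perfectly, yet off the training range it just repeats the nearest memorized label.

```python
# Training data: the extension {1: 2, 2: 4, 3: 6, 4: 8, 5: 10}
train = {x: 2 * x for x in range(1, 6)}

def nearest_neighbour(x):
    # 1-NN: predict the label of the closest training input.
    closest = min(train, key=lambda t: abs(t - x))
    return train[closest]

print([nearest_neighbour(x) for x in range(1, 6)])  # [2, 4, 6, 8, 10] — perfect fit
print(nearest_neighbour(100))                        # 10, not 200 — no reason to extrapolate
```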
Replying to @egrefen @timkindberg and others
There are many examples of this in the DL community, including work by me and colleagues on e.g. program induction https://arxiv.org/abs/1506.02516 and synthesis https://www.jair.org/index.php/jair/article/view/11172 (aka neurosymbolic ML). Many great papers by other DL/ML peeps on these topics.
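As a toy illustration of the program-synthesis idea (a hypothetical enumerative sketch, far simpler than the linked papers): search a small symbolic hypothesis space for programs consistent with the examples. A symbolic program that fits extrapolates by construction.

```python
# Input/output examples drawn from the rule f(x) = 2x
examples = [(x, 2 * x) for x in range(1, 6)]

# Hypothesis space: all programs of the form f(x) = a*x + b, small integers a, b
hypotheses = [(a, b) for a in range(-5, 6) for b in range(-5, 6)]

# Keep every program consistent with all examples
consistent = [(a, b) for (a, b) in hypotheses
              if all(a * x + b == y for x, y in examples)]

print(consistent)       # [(2, 0)] — the unique consistent program, f(x) = 2x
a, b = consistent[0]
print(a * 1000 + b)     # 2000 — extrapolates far beyond the examples
```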
Replying to @egrefen @timkindberg and others
Don't get me wrong: it's great that people are pointing these things out and testing the limits of DL. But it's good to form a balanced view. Things aren't as black and white as "omg neural nets can do everything" vs "omg here's one failure mode let's ditch it all"
Agree w @egrefen here, too, but find that no matter how hard I advocate for hybrid models, people always think I am arguing against all of ML... I think ML will play a huge role in AGI, but only w proper biases, including representations of operations over variables.
Replying to @GaryMarcus @egrefen and others
Properly understood, #MachineLearning is the study of the relationship between biases and what can be learned under each (and how to implement them computationally). It's bias, all the way down.
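A sketch of the "it's bias, all the way down" point (my illustration, not from the thread): restrict the hypothesis class to f(x) = w·x, i.e. a linear-through-the-origin bias. Under that bias, plain least squares on the same five points recovers the rule and extrapolates correctly.

```python
# The same data as before: the extension of f(x) = 2x on x = 1..5
data = [(x, 2 * x) for x in range(1, 6)]

# Closed-form least squares for the biased class f(x) = w*x:
#   w = Σ(x*y) / Σ(x*x)
w = sum(x * y for x, y in data) / sum(x * x for x, _ in data)

print(w)          # 2.0 — the rule is recovered
print(w * 1000)   # 2000.0 — the bias licenses extrapolation
```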
Replying to @ShlomoArgamon @egrefen and others
and yet innateness is a dirty word in ML. amazing.
Replying to @gchrupala @GaryMarcus and others
Gary, are you really arguing that anything you can't get your ML algorithm to do, you can just build in and declare "innate"? It is totally ad hoc science to rely on innateness to solve your machine learning problems.
Replying to @tdietterich @gchrupala and others
Where are the principles that tell us what should be innate vs learned?
That’s what we need to figure out; one model is to look (as a starting point) at what evolution endowed biological creatures with. Another would be to look at the nature of what is to be learned.
Replying to @GaryMarcus @tdietterich and others
I think this is crucial. We just want to make way too big a leap with our models, e.g. image -> semantic label. Then we face the age-old lack-of-common-sense problem. There are tons of other things to be learned which we skip. E.g. here: https://wp.me/p8hj6n-ff