The problem with #machinelearning in a nutshell, via extensional vs intensional logic:
A: {2, 4, 6, 8, 10, ... X}
B: {2x: x ∈ N}
There is no way for a statistical, asymbolic machine to arrive at B from A, no matter how large you choose X @GaryMarcus @filippie509 #ai
This isn't a problem with machine learning, but rather with low bias approximators. They fit the data well and can interpolate but have no mathematical reason to extrapolate. A lot of work, including in deep learning, deals with yielding better extrapolation by introducing bias.
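The point about low-bias approximators can be made concrete with a toy sketch (my own hypothetical example, not from the thread): a pure-memorization learner fits the extensional sample A perfectly but has no reason to extrapolate, while the intensional rule B does.

```python
# A pure-memorization "low-bias" learner vs. the intensional rule B = {2x : x ∈ N}.
# The learner stores training pairs and predicts via nearest neighbour,
# so it fits the data perfectly but has no reason to extrapolate.

train = [(x, 2 * x) for x in range(1, 11)]  # extensional sample of A

def nn_predict(x):
    """Predict by copying the output of the nearest stored input."""
    nearest_x, nearest_y = min(train, key=lambda pair: abs(pair[0] - x))
    return nearest_y

def rule_b(x):
    """The intensional rule: f(x) = 2x."""
    return 2 * x

print(nn_predict(5))    # 10  -- perfect fit inside the training range
print(nn_predict(100))  # 20  -- extrapolation fails: nearest stored input is 10
print(rule_b(100))      # 200 -- the symbolic rule generalises
```

The memorizer interpolates fine; outside the training range it can only repeat the nearest stored answer, which is exactly the failure mode described above.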
Replying to @egrefen @timkindberg and others
There are many examples of this in the DL community, including work by me and colleagues on e.g. program induction https://arxiv.org/abs/1506.02516 and synthesis https://www.jair.org/index.php/jair/article/view/11172 (aka neurosymbolic ML). Many great papers by other DL/ML peeps on these topics.
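The flavour of program synthesis mentioned here can be sketched in a few lines (a toy stand-in of my own devising, not the method from the linked papers): enumerate a small space of candidate programs, keep one consistent with the examples, and note that the recovered *program* then extrapolates exactly.

```python
# Toy enumerative program synthesis over a tiny DSL of programs a*x + b.
# The DSL and search are illustrative assumptions, not the linked work.
import itertools

examples = [(1, 2), (2, 4), (3, 6), (4, 8), (5, 10)]

def programs():
    """Enumerate candidate programs as (description, function) pairs."""
    for a, b in itertools.product(range(-5, 6), repeat=2):
        yield (f"{a}*x + {b}", lambda x, a=a, b=b: a * x + b)

def synthesize(examples):
    """Return the first enumerated program consistent with all examples."""
    for desc, f in programs():
        if all(f(x) == y for x, y in examples):
            return desc, f
    return None

desc, f = synthesize(examples)
print(desc)     # 2*x + 0
print(f(1000))  # 2000 -- exact extrapolation far beyond the examples
```

Because the hypothesis space is symbolic, the learner lands on the intensional rule rather than a curve that merely passes through the sample.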
Replying to @egrefen @timkindberg and others
Don't get me wrong: it's great that people are pointing these things out and testing the limits of DL. But it's good to form a balanced view. Things aren't as black and white as "omg neural nets can do everything" vs "omg here's one failure mode let's ditch it all"
Agree w/ @egrefen here, too, but find that no matter how hard I advocate for hybrid models, people always think I am arguing against all of ML... I think ML will play a huge role in AGI, but only with proper biases, including representations of operations over variables.
Replying to @GaryMarcus @egrefen and others
Properly understood, #MachineLearning is the study of the relationship between biases and what can be learned under each (and of how to implement those biases computationally). It's bias, all the way down.
Replying to @ShlomoArgamon @egrefen and others
and yet innateness is a dirty word in ML. amazing.
Replying to @gchrupala @GaryMarcus and others
Gary, are you really arguing that anything you can't get your ML algorithm to do, you can just build in and declare to be "innate"? It is totally ad hoc science to rely on innateness to solve your machine learning problems.
Gary seems to think that nature's evolutionary search has resulted in parameters that cannot be found by any artificial search. The idea is so obviously flawed that it may be driven by motivated reasoning, i.e. an identification with a particular perspective rather than insight.
Replying to @Plinz @tdietterich and others
Parameters? Evolution has built a *system*. In what space does one 'artificially search' for one of those?
Replying to @timkindberg @tdietterich and others
A system is a machine that can be described by a single global transition function (if the function changes, you have a different system). Building a system amounts to traversing the implementation space of transition functions. There is no magic boundary between biology and computation.
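"Traversing the implementation space of transition functions" can itself be sketched concretely (a toy construction of my own, not from the thread): identify a system with its step function State → State, then search over candidate step functions for one that reproduces an observed trajectory.

```python
# A system as a global transition function, recovered by artificial search.
# The family of candidates (s -> (s + k) % m) is an illustrative assumption.

observed = [0, 3, 6, 9, 2, 5]  # trajectory of some unknown system

def make_step(k, m):
    """Candidate transition function: s -> (s + k) % m."""
    return lambda s: (s + k) % m

def matches(step, trajectory):
    """Does this step function reproduce every observed transition?"""
    return all(step(a) == b for a, b in zip(trajectory, trajectory[1:]))

# Traverse the (tiny) implementation space for a consistent system.
found = next((k, m) for k in range(10) for m in range(1, 11)
             if matches(make_step(k, m), observed))
print(found)  # (3, 10): the system s -> (s + 3) % 10
```

The search space here is trivially small; the point is only that "find a system" and "search a space of transition functions" are the same kind of operation.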