This. I'd like to add: bias can often be learned, and that is what humans did. I don't think humans were born with hard-coded reasoning capabilities; we learned them from data (observations and interactions). When we humans deal with the problem above, we draw on years of experience.
Replying to @chriswolfvision @egrefen
you might want to read Bonatti's recent Science paper or Gervain's newborn work that follows up my infant rule work. Or think about what evolution does to brains over time (see my book The Birth of the Mind).
Replying to @GaryMarcus @egrefen
Ok, but how does that change the argument? Evolution is machine learning with a longer time span, i.e. an optimization process. It is still machine learning. The bias is learned by evolution.
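A toy sketch of the claim above, that evolution is itself an optimization process. This minimal genetic algorithm (a hypothetical setup, not from the thread; `TARGET` stands in for a "useful innate bias") improves a population purely by selection and mutation, with no gradients involved:

```python
import random

# Toy illustration: evolution as optimization. A population of "genomes"
# (real-valued vectors, standing in for innate biases) is improved
# purely by selection + mutation.

TARGET = [0.5, -1.0, 2.0]  # the "useful bias" evolution converges toward

def fitness(genome):
    # Higher is better: negative squared distance to the target bias.
    return -sum((g - t) ** 2 for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.1):
    # Small Gaussian perturbation of every gene.
    return [g + random.gauss(0, rate) for g in genome]

def evolve(pop_size=50, generations=200, seed=0):
    random.seed(seed)
    population = [[random.uniform(-3, 3) for _ in TARGET]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]           # selection
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in survivors]     # reproduction
    return max(population, key=fitness)

best = evolve()
print(best)  # close to TARGET after enough generations
```

Because the top half of the population survives unchanged each generation, the best genome never gets worse, so the population drifts toward the target bias. In that sense the bias is "learned", just by a different search procedure than gradient descent.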
Replying to @chriswolfvision @egrefen
i addressed this in my arxiv on alphago and innateness
Replying to @GaryMarcus @egrefen
If I am not mistaken, your paper responds to my first reply (we are not born blank), but not to my second reply, which says that "innateness" is not a prior appearing from nowhere. What you call "innate" is the result of an additional learning process, performed by evolution.
Replying to @chriswolfvision @egrefen
actually, i address exactly that
Replying to @GaryMarcus @chriswolfvision
IIRC your paper points out that such arguments dilute the meaning of learning. I don't agree with this* but I do agree that arguments that evolution learned it are non sequiturs and completely irrelevant. Humans don't start from blank slates; it doesn't matter what installed those biases
Replying to @sir_deenicus @GaryMarcus
The question was whether ML can address the problem above. If evolution was able to learn these biases, why wouldn't ML be able to learn them? In any case, humans have them, and if you rule out that a higher being put them there, some form of learning created them.
Replying to @chriswolfvision @GaryMarcus
I don't think anyone is saying ML can't learn these biases; they're saying human proficiency comes from cutting many hypotheses by leveraging certain inductive biases. Better, then, to seed certain biases than to search over a vast space before being able to do anything
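A toy sketch (hypothetical, not from the thread) of the point about seeding inductive biases: restricting the hypothesis space, here to monotone boolean functions, leaves far fewer candidates to sift through before finding one consistent with the observations. The concept, data, and "monotone" bias below are all illustrative choices:

```python
import itertools

# Toy illustration: an inductive bias as a restriction on the
# hypothesis space. We search boolean functions of 3 inputs for one
# consistent with a few observations.

inputs = list(itertools.product([0, 1], repeat=3))

# All 2^8 = 256 boolean functions on 3 bits, each encoded as the
# tuple of outputs over the 8 possible inputs.
all_hypotheses = list(itertools.product([0, 1], repeat=len(inputs)))

# The bias: keep only "monotone" functions (flipping an input from 0
# to 1 can never flip the output from 1 to 0), a much smaller space.
def monotone(h):
    table = dict(zip(inputs, h))
    return all(table[x] <= table[y]
               for x in inputs for y in inputs
               if all(a <= b for a, b in zip(x, y)))

biased_hypotheses = [h for h in all_hypotheses if monotone(h)]

# A few observations of the true concept (here: x0 AND x1).
data = [((0, 0, 0), 0), ((1, 1, 0), 1), ((1, 0, 0), 0)]

def consistent(h):
    table = dict(zip(inputs, h))
    return all(table[x] == y for x, y in data)

print(len(all_hypotheses), len(biased_hypotheses))
print(sum(map(consistent, all_hypotheses)),
      sum(map(consistent, biased_hypotheses)))
```

The biased learner considers a fraction of the full space yet still contains a hypothesis consistent with the data, which is the sense in which a seeded bias lets you "do something" before exhaustively searching everything.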
Replying to @sir_deenicus @GaryMarcus
Ok, then we all agree, but Gary seemed to indicate the opposite in his earlier replies.
Agree w @sir_deenicus here.