Or let's say not vulnerable to the leaks that come from using any soft similarity metric, assuming "distributed" implicitly means acting on vectors. At some point you need to explain how humans can reason crisply enough to prove things about Riemann sums, measures and Gaussian integrals.
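A minimal sketch of the "soft similarity leak" point, not from the thread itself: with distributed (vector) representations, two symbols that must stay distinct can sit close together in embedding space, so a threshold on any soft similarity metric can conflate them, while exact symbolic comparison cannot. The embeddings and threshold here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical embeddings for two symbols that a crisp proof must keep distinct.
dx = rng.normal(size=64)
dt = dx + 0.05 * rng.normal(size=64)   # nearly collinear with dx

def cosine(u, v):
    # standard cosine similarity between two vectors
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

soft_match = cosine(dx, dt) > 0.95     # soft metric: True here, the "leak"
hard_match = ("dx" == "dt")            # symbolic equality: always False

print(soft_match, hard_match)
```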
Replying to @sir_deenicus @neurograce
Sure, but you do not understand Riemann sums innately: you learned about them. Humans clearly have some innate ability to learn about abstract systems that other species may not, but what is innate is the ability to learn, not the concepts and relations themselves.
Replying to @tyrell_turing @neurograce
That's irrelevant though. Humans didn't evolve to learn those things, so there's unique innate machinery that's been repurposed. E.g. permission schemas for abduction, machinery for reasoning spatially with a graph-based representation (easily confused for grids) for symbolic reasoning,
Replying to @sir_deenicus @tyrell_turing
coupled with whatever allows us to learn recursive grammars and combinatorially compose atomic concepts. Once you have that, getting to a system that can derive Newton's Laws by looking at how things fall is a comparatively short step.
Replying to @sir_deenicus @tyrell_turing
I think, for example, the fact that the cerebellum and motor cortex are recruited even for mathematical reasoning (and one may quibble over how innate is defined) suggests that at least the basis and initial stages work by analogy with innate capabilities.
Replying to @sir_deenicus @tyrell_turing
I'm saying: the brain does things it didn't evolve to do, such as math or sewing. It's much more likely that existing nearby machinery was repurposed and then optimized in humans to be able to do, or pick up, recursive abstract reasoning than that the capability is learned from scratch.
Replying to @sir_deenicus @neurograce
Uh huh... I don't see how anything I said in this thread contradicts anything you've said here. My point is only: (1) there are some structural priors, but (2) we learn a lot, (3) we use distributed representations, and (4) this all fits with the general DL research program.
Replying to @tyrell_turing @sir_deenicus
Deep learning tends to underemphasize priors, and often to dismiss symbols (see my last tweet with footage from Hinton earlier this week), which may be required for the right priors. But we can all agree we learn a lot.
Replying to @GaryMarcus @tyrell_turing
Plenty of priors in DL. E.g., in ConvNets: (1) local texture filters, (2) their fixed sizes, (3) max pooling (discard information), (4) training on still images (as if temporal context were irrelevant for vision), (5) no feedback from higher layers (as if spatial context were irrelevant for vision), etc.
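A minimal sketch, not from the thread, of where those listed priors show up in a typical ConvNet definition, assuming PyTorch; the toy architecture and its layer sizes are illustrative, not any specific published model.

```python
import torch
import torch.nn as nn

# Toy ConvNet illustrating the hard-coded priors listed above:
# small local filters of fixed size, max pooling that discards detail,
# a purely feedforward stack (no feedback from higher layers), and an
# input that is a single still image (no temporal context).
class ToyConvNet(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # (1)/(2) local filters, fixed size
            nn.ReLU(),
            nn.MaxPool2d(2),                             # (3) max pooling discards information
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x):                                # (5) strictly feedforward, no feedback
        x = self.features(x)
        return self.classifier(x.flatten(1))

# (4) a single still 3x32x32 image, with no temporal dimension
logits = ToyConvNet()(torch.randn(1, 3, 32, 32))
```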
Replying to @paklnet @GaryMarcus
Two points: (1) These hard-coded priors in deep convnets "evolve" (are copied & changed) from previous generations of DL models that succeeded and were published. (2) These very priors in fact prevent robust performance of convnets on real-world problems. #ai
@paklnet btw I'm curious about your evidence wrt point 2: inadequacy vs. interference