Who’s to say it wouldn’t be good at QR codes with the right data? It’s good at all kinds of arbitrary symbol mapping.
Replying to @tyrell_turing @dileeplearning:
@tyrell_turing the point @dileeplearning is making here is an appeal to physics: statistics of natural signals, the rigidity of 3D structure, etc. All of those constrain the problem heavily, and your brain would do well to exploit this structure.
Replying to @tarinziyaee @tyrell_turing:
To clarify -- this doesn't even need hardcoding the idea of space -- it can emerge from sensorimotor interactions (see Science and Hypothesis by Poincaré). But it would still require some general assumptions about physics and natural signals.
Replying to @tarinziyaee @dileeplearning:
With all due respect, guys, you're not understanding my point. Yes, the statistics of the environment are key to understanding the computations in the brain, but, as you note, we probably learn a lot of that.
Replying to @tyrell_turing @tarinziyaee:
So, the point I return to is: we should be trying to understand the learning algorithms in the brain, not simply cataloguing relationships between activity patterns and stimuli/behavior. That would provide more information for AI.
Replying to @tyrell_turing @dileeplearning:
@tyrell_turing Sure, the learning algorithms are the ultimate goal. However, the "representations to stimuli," as you say, can help us understand the algorithms being run. I'm not sure they can be cleanly decoupled, especially in the beginning. I'm also not sure why this is controversial.
Replying to @tarinziyaee @tyrell_turing:
I agree. It looks to me like Richard is looking to neuroscientists and saying 'just tell us how the brain implements back-prop and we'll take it from there, thank you very much' :).
Replying to @dileeplearning @tarinziyaee:
No, you're not getting my point... I'm not saying representations or learning can be fully decoupled, nor telling neuroscientists (of which I am one) 'just tell us how the brain does backprop.'
Replying to @tyrell_turing @dileeplearning:
I'm saying: neuroscience needs to start thinking in terms of cost functions and optimization, rather than tuning curves and Hebbian plasticity.
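The contrast in that last tweet can be made concrete with a toy example (entirely illustrative, not from the thread): a Hebbian correlation rule has no error signal and no fixed point, while gradient descent on an explicit cost converges to the target weights. A minimal NumPy sketch, assuming a linear neuron, a squared-error cost, and a teacher signal standing in for postsynaptic activity:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear neuron y = w @ x, with a target mapping t = w_true @ x.
# All values here are illustrative assumptions.
w_true = np.array([2.0, -1.0])
X = rng.normal(size=(200, 2))
targets = X @ w_true

def hebbian_update(w, x, post, lr=0.01):
    # Classic Hebbian rule: strengthen weights when pre- and postsynaptic
    # activity co-occur. No cost function, no error signal -- the weights
    # grow in the correlated direction without ever settling.
    return w + lr * post * x

def gradient_update(w, x, t, lr=0.01):
    # Gradient descent on the cost C = 0.5 * (t - y)^2:
    # dC/dw = -(t - y) * x, so we step along the negative gradient.
    y = w @ x
    return w + lr * (t - y) * x

w_hebb = np.zeros(2)
w_grad = np.zeros(2)
for x, t in zip(X, targets):
    w_hebb = hebbian_update(w_hebb, x, t)  # teacher used as postsynaptic activity
    w_grad = gradient_update(w_grad, x, t)

print(w_grad)  # converges toward w_true
print(w_hebb)  # points the right way but keeps growing with more data
```

The gradient rule has a fixed point exactly where the cost is minimized; the Hebbian rule, lacking any notion of a cost, does not.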
These both seem like good research programs at our present state of understanding. 1) If the signals in the world have a particular structure, it may be possible to use probabilistic graphical models, with pre-built formats reflecting that structure, to learn to model those world signals, and it may be possible to do so more efficiently than one could by backprop from even a well-chosen cost (tweets 1-3 of 14; 15 more replies in the thread)
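As a toy illustration of that first program (all structure and numbers below are assumptions, not from the thread): when the generative structure is pre-built into a probabilistic graphical model, inferring the latent state is plain probability propagation, with no cost function optimized and no backprop. A minimal sketch using a two-state hidden Markov model and the forward algorithm:

```python
import numpy as np

# Toy 2-state hidden Markov model: the "pre-built format" is the known
# chain structure plus its transition and emission tables.
T = np.array([[0.9, 0.1],   # P(state_t | state_{t-1})
              [0.2, 0.8]])
E = np.array([[0.8, 0.2],   # P(obs | state): rows = states, cols = symbols
              [0.3, 0.7]])
prior = np.array([0.5, 0.5])

def forward_filter(obs):
    """Exact posterior P(state_t | obs_1..t) via the forward algorithm.

    Because the model structure is fixed in advance, this is pure
    probability propagation -- nothing is trained.
    """
    belief = prior * E[:, obs[0]]
    belief /= belief.sum()
    beliefs = [belief]
    for o in obs[1:]:
        belief = (T.T @ belief) * E[:, o]   # predict, then weight by evidence
        belief /= belief.sum()
        beliefs.append(belief)
    return np.array(beliefs)

posteriors = forward_filter([0, 0, 0, 1, 1])
print(posteriors[-1])  # belief shifts toward state 1 after the two 1-symbols
```

The point of the sketch is the division of labor: the structure (chain, tables) is supplied up front, so extracting latent state from the signal requires only inference, not gradient-based learning.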