So, the point I return to is this: we should be trying to understand the learning algorithms in the brain, not simply cataloguing relationships between activity patterns and stimuli/behavior. That would provide more useful information for AI.
Replying to @tyrell_turing @dileeplearning
@tyrell_turing Sure, the learning algorithms are the ultimate goal. However, the mapping of "representations to stimuli," as you say, can help us understand the algorithms being run. I'm not sure they can be cleanly decoupled, especially in the beginning. I'm also not sure why this is controversial.
Replying to @tarinziyaee @tyrell_turing
I agree. It looks to me like Richard is looking to neuroscientists and saying, "Just tell us how the brain implements backprop and we'll take it from there, thank you very much." :)
Replying to @dileeplearning @tarinziyaee
No, you're not getting my point... I'm not saying representations and learning can be fully decoupled, nor am I telling neuroscientists (of which I am one), "Just tell us how the brain does backprop."
Replying to @tyrell_turing @dileeplearning
I'm saying: neuroscience needs to start thinking in terms of cost functions and optimization, rather than tuning curves and Hebbian plasticity.
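The contrast between these two framings can be sketched in code. The snippet below is a toy illustration only: the data, learning rates, and variable names are invented for this example, not taken from the thread. It fits the same linear mapping two ways: with a Hebbian correlation rule, which references no error signal and whose weights simply grow with repeated exposure, and by gradient descent on an explicit cost function, which converges to the generating weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy world: targets generated by a "true" linear mapping (illustrative only).
w_true = np.array([2.0, -1.0])
X = rng.normal(size=(100, 2))
y = X @ w_true

# 1) Hebbian rule: dw = eta * post * pre. No cost function is referenced;
#    correlated activity alone drives the update, so repeated presentations
#    make the weights grow without bound.
w_hebb = np.zeros(2)
for _ in range(5):                       # five passes over the data
    for x_i, y_i in zip(X, y):
        w_hebb += 0.01 * y_i * x_i

# 2) Optimization framing: define a cost L(w) = mean squared error and
#    follow its gradient downhill.
w_opt = np.zeros(2)
for _ in range(200):
    grad = 2 * X.T @ (X @ w_opt - y) / len(X)   # dL/dw
    w_opt -= 0.1 * grad

print(np.round(w_opt, 2))                        # recovers w_true
print(round(float(np.linalg.norm(w_hebb)), 2))   # grows with each pass
```

The design point of the sketch: the Hebbian update can capture input-output correlations, but without a cost function there is nothing that tells it when to stop or what "good" weights are; the optimization view makes that criterion explicit.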
Replying to @tyrell_turing @dileeplearning
These both seem like good research programs at our present state of understanding.

1) If the signals in the world have a particular structure, it may be possible to use probabilistic graphical models, with pre-built formats reflecting that structure, to learn to model those world signals, and it may be possible to do so more efficiently than one could by backprop from even a well-chosen cost function. It would be very interesting if the brain did something like that, and if so, it would be reflected in the micro-circuitry.

2) Figuring out a) if and b) how the brain implements a learning algorithm as powerful and general as backprop/gradient descent, and if so what cost functions it optimizes, is very important regardless of whether a) turns out in the affirmative or negative.

Both of you do look closely at the neuroscience literature in these research programs. Why don't *others* in the ML field look so closely? I think because it is really hard, and the literature is still murky: we need much better (8/14)
[10 more replies]
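Point 1) can be illustrated with a minimal sketch. Assuming a hypothetical world whose generative structure is known in advance (a binary class variable emitting independent binary features, i.e. a naive-Bayes graphical model; all parameter values and names below are invented for illustration), maximum-likelihood fitting reduces to a single counting pass over the data, with no gradient steps at all. That is the sense in which a pre-built structural format can be more efficient than generic gradient-based learning.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical structured world: a binary class c emits three conditionally
# independent binary features (a naive-Bayes model). The generating
# probabilities below are invented for illustration.
n = 2000
c = rng.integers(0, 2, size=n)
p_feat = np.where(c[:, None] == 1, [0.9, 0.2, 0.7], [0.1, 0.8, 0.3])
X = (rng.random((n, 3)) < p_feat).astype(int)

# Because the generative structure is assumed up front, maximum-likelihood
# "learning" is just counting co-occurrences in one pass over the data.
prior = c.mean()                    # estimate of P(c = 1)
theta1 = X[c == 1].mean(axis=0)     # estimate of P(feature_j = 1 | c = 1)
theta0 = X[c == 0].mean(axis=0)     # estimate of P(feature_j = 1 | c = 0)

print(np.round(theta1, 1))          # close to the generating [0.9, 0.2, 0.7]
```

A generic model trained by gradient descent could fit the same data, but would need many iterative updates to approximate what the structured model reads off directly from counts.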