An equivalent of the Michelson–Morley experiment for the brain + backprop would help.
Replying to @marcosalvi @pfau
I don't think anyone actually believes the brain backpropagates; I think the closest is Hinton saying that the error signal is very useful and the brain uses it somehow.
Replying to @F_Vaggi @marcosalvi
Maybe not *exactly* backprop, but many people think the brain does something similar to it. Feels very similar to research on "how does the brain do belief propagation?" A normative framework desperately searching for experimental validation.
There's more logic in this field than you're recognizing, David. No one has ever demonstrated how effective learning can work without some decent form of credit assignment. Until that day, it is 100% reasonable to investigate how the brain solves the credit assignment problem.
Replying to @tyrell_turing @pfau
To be clear, we may at some point realize that credit assignment is not key. But at this point, that's total pie-in-the-sky. It's not reasonable for scientists to reject an area of study because of an unsubstantiated belief that some future discovery will render it obsolete.
Replying to @tyrell_turing @pfau
Stupidly late to these comments but very well articulated, Blake.
Replying to @SussilloDavid @tyrell_turing
I think it's perfectly reasonable for a scientist to choose to avoid a certain field of theoretical research because they don't believe the tools are there to falsify it experimentally.
Replying to @pfau @SussilloDavid
Yes, totally. But that's precisely what we have to change (and what a few groups are working on, including mine): our credit assignment models need to start making physiological predictions that can be falsified.
Replying to @tyrell_turing @pfau
Wait, credit assignment = backprop?
Replying to @xaqlab @tyrell_turing
The brain does credit assignment by whatever was cool at NeurIPS 5 years ago.
Let’s add some meat to this, then. What are good options for credit assignment that *don’t* require efficient access to an estimate of the 1st order gradient of an objective function w.r.t. a given synaptic weight deep in a network? Honest question.
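For concreteness, here is a minimal numpy sketch (not from the thread; the toy network, shapes, and variable names are all assumed for illustration) of the quantity being asked about: the first-order gradient of an objective with respect to a weight matrix deep in a network, which backprop obtains by carrying an error signal back through every downstream layer.

```python
# Illustrative sketch: dL/dW1, the gradient of a loss w.r.t. weights deep in the
# network, computed the backprop way by chaining error signals from the output.
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(10)            # input
W1 = rng.standard_normal((20, 10))     # the "deep" weights we want credit for
W2 = rng.standard_normal((5, 20))      # downstream weights
y_target = rng.standard_normal(5)

h = np.tanh(W1 @ x)                    # hidden layer
y = W2 @ h                             # output
loss = 0.5 * np.sum((y - y_target) ** 2)

# Backprop: the output error must be carried back through W2 and the nonlinearity.
delta_out = y - y_target                        # dL/dy
delta_hid = (W2.T @ delta_out) * (1 - h ** 2)   # dL/d(pre-activation of h)
grad_W1 = np.outer(delta_hid, x)                # dL/dW1
```

The question in the thread is whether effective learning can be had without efficient access to something like `grad_W1`.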
Replying to @AdamMarblestone @pfau
(I realize many answers will restrict the architecture or objective fxn greatly, to allow specialized non-backprop ways to get such gradients, which wouldn’t work for general fxn approx in arbitrary net topology. What can you really do with those? What’s the best alternative?)
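One family of alternatives that places no constraints on architecture or objective is perturbation-based learning, sketched below (again an editorial illustration with assumed names and hyperparameters, not something proposed in the thread): perturb the weights, observe how the loss changes, and correlate the two. It never touches the gradient explicitly, but the estimate is noisy enough that it scales poorly with network size.

```python
# Illustrative sketch: weight perturbation. Credit assignment via the change in
# loss alone, with no explicit gradient computation and no assumptions on topology.
import numpy as np

rng = np.random.default_rng(0)

def loss(W, x, y_target):
    return 0.5 * np.sum((np.tanh(W @ x) - y_target) ** 2)

W = rng.standard_normal((5, 10))
x = rng.standard_normal(10)
y_target = rng.standard_normal(5)

sigma, lr = 0.01, 0.1
for _ in range(1000):
    noise = rng.standard_normal(W.shape)
    dL = loss(W + sigma * noise, x, y_target) - loss(W, x, y_target)
    W -= lr * (dL / sigma) * noise     # update in the perturbation direction, scaled by the loss change
```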
Replying to @AdamMarblestone @pfau
I guess it breaks into 2 cases: 1) you don’t need the gradient, or 2) you do but you have a way to get it that is structurally very different from backprop. For instance, this paper gets the gradient w/ either backprop or EM (perhaps an example of case #2): https://arxiv.org/abs/1202.3732
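As one illustration of case #2 (this is not the linked paper, and the network, shapes, and hyperparameters are assumed): feedback alignment delivers an approximate error signal through a fixed random feedback matrix instead of the transpose of the forward weights, so the machinery is structurally different from backprop even though the resulting updates still roughly track the gradient.

```python
# Illustrative sketch: feedback alignment. The error is sent back through a fixed
# random matrix B rather than W2.T, yet learning still proceeds.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.standard_normal((20, 10)) * 0.1
W2 = rng.standard_normal((5, 20)) * 0.1
B = rng.standard_normal((20, 5))       # fixed random feedback weights, never learned

x = rng.standard_normal(10)
y_target = rng.standard_normal(5)
lr = 0.01

for _ in range(500):
    h = np.tanh(W1 @ x)
    y = W2 @ h
    e = y - y_target                    # output error
    delta_hid = (B @ e) * (1 - h ** 2)  # backprop would use W2.T here instead of B
    W2 -= lr * np.outer(e, h)
    W1 -= lr * np.outer(delta_hid, x)
```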