Counterpoint: I believe that all parameters, including channel densities etc., are learned. Reason? Learning a channel is computationally identical to learning a synapse with constant input.
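The opening claim can be made concrete with a toy sketch (NumPy; the names and numbers are hypothetical illustrations, not anyone's actual model): a channel parameter enters the same gradient-descent update rule as a synaptic weight whose presynaptic input is fixed at a constant, so the learning rule cannot tell the two apart.

```python
import numpy as np

# A "channel density" parameter behaves like a synaptic weight whose
# presynaptic input is constant: the same update rule applies to both.
rng = np.random.default_rng(0)

x = rng.normal(size=100)       # varying synaptic input
const = np.ones(100)           # constant "input" standing in for a channel
target = 2.0 * x + 0.5         # noiseless toy target: weight 2.0, offset 0.5

w, c = 0.0, 0.0                # synaptic weight and channel parameter
lr = 0.1
for _ in range(500):
    pred = w * x + c * const
    err = pred - target
    w -= lr * np.mean(err * x)      # same gradient step...
    c -= lr * np.mean(err * const)  # ...applied to the channel parameter

print(w, c)  # both converge: w ≈ 2.0, c ≈ 0.5
```

In other words, a channel parameter is formally a bias term, and a bias is just a weight on a constant input.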
Replying to @KordingLab @blake_camp_1 and others
Sure, I don't disagree with changes in channel densities, spine growth, etc. But some aspects probably can't be modified within a lifetime, e.g. the molecular structure of a channel, or the gross architecture of the neuron. I.e., there is a distribution of constancy values.
1 reply 0 retweets 4 likes -
Replying to @bradpwyble @blake_camp_1 and others
Fully agree. Although, just to be contrarian, I bet that molecular configuration (but obviously not the amino acid sequence itself) and gross neuron architecture are both learned.
3 replies 0 retweets 4 likes -
Replying to @KordingLab @blake_camp_1 and others
Do you mean within a lifetime or across generations? Obviously true for the latter, less clear for the former.
1 reply 0 retweets 2 likes -
Replying to @bradpwyble @blake_camp_1 and others
Within a lifetime. Would not be contrarian otherwise, I think ;)
2 replies 0 retweets 1 like -
Replying to @KordingLab @blake_camp_1 and others
I agree that we don't know, given current methods, if this is true or not. I'm inclined to believe that having too many dimensions of plasticity can impede learning, so there is perhaps some virtue in having some more rigid parameters, esp. those that allow for homeostasis. >
2 replies 0 retweets 1 like -
Replying to @bradpwyble @blake_camp_1 and others
But isn't the main thing we learn from DL that giving more parameters tends to always help?
1 reply 0 retweets 4 likes -
Replying to @KordingLab @blake_camp_1 and others
Only if the training set is able to scale up too, right? Given a fixed amount of experience, it's not clear that more parameters are always better.
3 replies 0 retweets 4 likes -
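The fixed-experience worry can be illustrated with a classical toy example (NumPy; the function, noise level, and polynomial degrees are hypothetical choices for illustration, and modern deep learning complicates this picture via phenomena like double descent): with a fixed, small dataset, a model with more parameters can fit the training data perfectly yet generalise worse.

```python
import numpy as np

# Fixed data budget: 10 noisy samples of a smooth function.
rng = np.random.default_rng(1)
x_train = np.linspace(-1, 1, 10)
x_test = np.linspace(-1, 1, 50)

def f(x):
    return np.sin(3 * x)

y_train = f(x_train) + 0.3 * rng.normal(size=10)

def errors(degree):
    """Train and test mean-squared error for a polynomial fit."""
    coeffs = np.polyfit(x_train, y_train, degree)
    train = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test = np.mean((np.polyval(coeffs, x_test) - f(x_test)) ** 2)
    return train, test

# A degree-9 polynomial (10 parameters) interpolates all 10 points,
# driving train error to ~0 while fitting the noise; the smaller
# degree-3 model typically generalises better here.
print("degree 3:", errors(3))
print("degree 9:", errors(9))
```

This is the classical bias-variance picture; the ensemble argument in the next reply is one reason it does not straightforwardly carry over to large deep networks.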
Replying to @bradpwyble @blake_camp_1 and others
You can view large networks as ensembles of smaller networks. So, weirdly, DL systems often have surprisingly good performance at low N.
1 reply 0 retweets 2 likes -
This Tweet is unavailable.
Does an increased number of parameters always help if the input data remains constant? Wouldn't increased parameterisation eventually yield only marginal gains and thus become energetically unfavourable?
Replying to @azhir_io @KordingLab and others
I think Dan (@neuralreckoning) can probably weigh in on the parametrisation problem.
2 replies 0 retweets 1 like -
Replying to @azhir_io @KordingLab and others
Yes, one might have to weigh each parameter's utility against its metabolic cost. I.e., it presumably costs less energy to have a genetically fixed parameter than one that is flexible but has to be held stable at a learned value.
0 replies 0 retweets 1 like
End of conversation