Yea, emphasizing that we should have our eye on learning rules that can actually lead to better performance in a large system doesn't mean having nothing to say about what that looks like on a cellular level. Learning rules are implemented through molecular mechanisms.
Right. But wouldn’t one want to be way more explicit about how one actually thinks these architectures and their structure are brought about? Simply labeling large parts of neuroscience as stamp collecting seems unhelpful at best.
-
I do prefer this to come with a large dose of circuit-hypothesis detail. Refactor much of neuroscience as looking for cost functions, optimizers, conserved developmental programs, plus what doesn’t fit that — as Blake’s dendritic DL work does. And yes, “tuning” gives insight into it. PyTorch code is a result, not the sole input.
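As a toy illustration of the decomposition this tweet proposes (not from the thread itself), here is a minimal pure-Python sketch of the "cost function + optimizer + architecture" framing; the function names and the linear toy model are hypothetical choices for illustration, and in practice one would write this in PyTorch, as the tweet suggests.

```python
# Hypothetical sketch: separating the three ingredients the tweet names.

def architecture(w, x):
    # "Architecture": a single linear unit (stand-in for a circuit model).
    return w * x

def cost(w, data):
    # "Cost function": mean squared error over observed input/output pairs.
    return sum((architecture(w, x) - y) ** 2 for x, y in data) / len(data)

def optimizer_step(w, data, lr=0.1):
    # "Optimizer": one step of gradient descent, gradient computed by hand.
    grad = sum(2 * (architecture(w, x) - y) * x for x, y in data) / len(data)
    return w - lr * grad

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # generated by y = 2x
w = 0.0
for _ in range(100):
    w = optimizer_step(w, data)
# w converges toward 2.0, the parameter that generated the data
```

The point of the decomposition is that each piece is a separately testable hypothesis about a circuit: what objective it optimizes, what update rule it runs, and what structure it runs on.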
-
I wholeheartedly agree. There are, however, some important points to be made before that can happen. Part of that is going to be PyTorch-like output. Concepts might be easier to keep from AI, but I think to really get broad acceptance they need to be anchored in the language we use today.
With sequencing you have to have a pretty good idea what you are looking for; e.g. antisense lncRNA and promoter choice of protocadherins.