A possibility, much discussed of late: you don't need, and there may not be, understandable mechanisms for every computation. You only need to know the fundamental mechanisms that *generate* those diverse computations, e.g., cost function optimization with good credit assignment. I doubt we can crack those with just recording!
Replying to @AdamMarblestone
I don't disagree this is a useful line of inquiry. But with all we're learning about cell types and cell/compartment-specific interactions in the brain, it seems likely there is a mechanistic basis of neural computation yet to be discovered.
Replying to @neurowitz
Yep, and I personally doubt we'll see either the optimization mechanism (if any) or the more specific computational built-ins you're referring to, with pure recording minus perturbation and connectomics.
Replying to @AdamMarblestone
If that's true, and cell type-specific perturbation during behavior is required, this will be a very long slog in rodents and near impossible in primates. (I'm more optimistic about connectomics becoming easy across the board.)
Replying to @AdamMarblestone
It seems likely we can use ML to discover candidate mechanistic models that explain neural data even without constraints from optogenetics or connectomics. Something like a GNN trained to reproduce an animal's task performance and regularized with neural data.
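The proposal above can be illustrated with a toy sketch: a small network trained to reproduce a synthetic "behavior," with its hidden units regularized toward synthetic "recorded" neural activity. Everything here (the single-hidden-layer architecture, the data, and the loss weighting `lam`) is an illustrative assumption, not the actual GNN setup being proposed:

```python
import numpy as np

# Toy sketch (illustrative assumptions throughout): fit a one-hidden-layer
# network to a task while regularizing its hidden units toward "recorded"
# neural activity. lam trades off task fit vs. neural fit.
rng = np.random.default_rng(0)

n_samples, n_in, n_hidden = 200, 5, 8
X = rng.normal(size=(n_samples, n_in))        # stimuli
W_true = rng.normal(size=(n_in, n_hidden))
H_neural = np.tanh(X @ W_true)                # stand-in for recorded activity
y = (H_neural.sum(axis=1) > 0).astype(float)  # stand-in for behavioral choices

W1 = 0.1 * rng.normal(size=(n_in, n_hidden))
w2 = 0.1 * rng.normal(size=(n_hidden,))
lam, lr = 0.1, 0.1

mse0 = ((np.tanh(X @ W1) - H_neural) ** 2).mean()  # neural fit before training

for step in range(2000):
    H = np.tanh(X @ W1)                   # model "neurons"
    p = 1.0 / (1.0 + np.exp(-(H @ w2)))   # predicted choice probability
    # Gradients of: mean cross-entropy (task) + lam * squared error (H vs. H_neural)
    d_logits = (p - y) / n_samples
    g_w2 = H.T @ d_logits
    dH = np.outer(d_logits, w2) + 2 * lam * (H - H_neural) / n_samples
    g_W1 = X.T @ (dH * (1 - H ** 2))      # backprop through tanh
    W1 -= lr * g_W1
    w2 -= lr * g_w2

task_acc = ((p > 0.5) == y).mean()
neural_mse = ((np.tanh(X @ W1) - H_neural) ** 2).mean()
print(f"task accuracy: {task_acc:.2f}  neural MSE: {mse0:.3f} -> {neural_mse:.3f}")
```

The point of the sketch: the neural-data term pulls the model's internal activity toward the recordings, so the fitted network is constrained by both behavior and physiology, which is the sense in which such a model might land in the right "equivalence class" of mechanism.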
Replying to @neurowitz
I guess I’m very sympathetic to that as an approach to improve AI (see recent Tolias paper) but not necessarily as heavily constraining “mechanism”... though one *could* get lucky if one can converge on the right equivalence class of mechanisms by fitting enough data...
Replying to @AdamMarblestone
How much data would be enough? (As if anyone knows!) My motivating question assumed access to activity from every neuron in a region of the brain under many different behavioral conditions. Data from multiple regions could only help, obviously.
Replying to @neurowitz
A probably under-appreciated aspect of our present predicament. Yes, DL is just fitting. But if you fit *enough* real neurons, behavior, functional performance, and architectural constraints... you’ll at least get some interesting partial clone of the computations that evolution & culture built.
Replying to @AdamMarblestone @neurowitz
True of C. elegans and true for You.
Computational universality FTW