Can you imagine a situation in which the model is "perfect" like you describe and yet tells us nothing, or very, very little? Like https://en.wikipedia.org/wiki/OpenWorm , for example? I don't think OpenWorm is a blip. I think it might be what happens with a lot of reductionist models.
Replying to @o_guest @seymiotics
OpenWorm is far from a perfect model. There are obviously things going on in C. elegans that we don't know about yet. But we are a lot closer to being able to describe the behavior of worms in terms of neural activity than we are for humans, in part due to efforts like OpenWorm.
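[Since OpenWorm is the running example of a reductionist model here, a toy illustration of what "describing behavior in terms of neural activity" means at the lowest level may help. This is a minimal sketch, not OpenWorm's actual code or API: it assumes a made-up 5-neuron "connectome" and a standard leaky-integrator rate equation, with one hypothetical sensory neuron driving two hypothetical motor neurons. Every name and number is illustrative.]

```python
# Toy sketch: a connectome -> neural dynamics -> behavioral readout pipeline.
# NOT OpenWorm; all weights, sizes, and labels are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_neurons = 5
W = rng.normal(scale=0.5, size=(n_neurons, n_neurons))  # hypothetical synaptic weights
tau = 0.1    # membrane time constant (s), assumed
dt = 0.001   # integration step (s)
steps = 1000

r = np.zeros(n_neurons)        # firing rates
stimulus = np.zeros(n_neurons)
stimulus[0] = 1.0              # drive one "sensory" neuron

trace = np.empty((steps, n_neurons))
for t in range(steps):
    # Standard rate equation: dr/dt = (-r + tanh(W r + input)) / tau,
    # integrated with forward Euler.
    r += dt / tau * (-r + np.tanh(W @ r + stimulus))
    trace[t] = r

# A crude "behavioral" readout: the mean rate of the two "motor" neurons.
print("motor output:", trace[-1, -2:].mean())
```

[The point of the sketch is the shape of the pipeline, connectome to dynamics to behavioral readout, which is essentially the input-output mapping debated in the rest of this thread.]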
Replying to @neurograce @o_guest
Also, can someone just give me a clear example of something that has counted as an explanation to them in psych/neuroscience? Because I honestly think a lot of this is just having a different set of questions we consider interesting, and thus different acceptable answers.
Replying to @seymiotics @o_guest
Thanks, that's helpful. I can see how that is a satisfying explanation if you want to map "inputs to outputs" in a sense, but without digging into the actual neural systems in between. I do think this comes down to a difference in interests, perhaps.
Replying to @seymiotics @o_guest
I agree with that too, though. I'd just want to do the neural work after the input-output mapping has been handed to me by someone else.
I am not sure whether you disagree on the concept of levels/layers of analysis/abstraction or not. Because if you agree it's a useful concept, then we probably don't disagree fundamentally.