This goes beyond my knowledge, so just a question: would having specific models at various layers of abstraction lead to a specific output at each level?
If so, DL would stop being a black box.
Replying to @twitemp1
Mostly agree. I think understanding (i.e., shining a light into a black box) is linked to predicting but not so straightforwardly.
Replying to @o_guest
Probably not. I would guess that formal analysis is much needed.
Replying to @twitemp1
Yeah, exactly. You can fail to understand something (ergo, still a black box) and yet still memorise and even duplicate all the input-output mappings.
This is why I take a stand when people say cog sci is reverse engineering the brain or mind. It's so much more than that!
A while ago, somebody on Twitter insisted that I needed experience in reverse engineering because it's so important to cog sci and science in general. It was painful.
Also, just in case this blows up again: just because I said reverse engineering doesn't open up a black box by definition doesn't mean that a human reverse engineering something won't end up opening up the black box. It's just not inherently true.
You can reverse engineer a chip and create an equivalent chip that does the same stuff but in a different way. Or you can end up fully understanding the original chip. But reverse engineering doesn't 100% imply you understand anything more than the input-output mappings.
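To make that point concrete, here is a minimal Python sketch (mine, not from the thread; the black_box function and the probed domain are made up purely for illustration): a clone built by memorising input-output pairs matches the original on every probed input, yet reveals nothing about how the original works.

    # A sketch of "duplicating input-output mappings without understanding":
    # the clone reproduces the black box's behaviour over the probed domain
    # purely by memorisation, telling us nothing about the mechanism inside.

    def black_box(x: int) -> int:
        # Stand-in for some opaque system; imagine we cannot read this source.
        return (x * x + 3 * x) % 17

    def reverse_engineer_by_memorisation(box, domain):
        """Probe the box over a finite domain and record every input-output pair."""
        return {x: box(x) for x in domain}

    # "Reverse engineer" the chip by exhaustively probing it.
    lookup_table = reverse_engineer_by_memorisation(black_box, range(100))

    def cloned_box(x: int) -> int:
        # Behaviourally equivalent on the probed domain, but implemented as a
        # raw table: no insight into why these outputs arise.
        return lookup_table[x]

    # The clone matches the original on every probed input...
    assert all(cloned_box(x) == black_box(x) for x in range(100))
    # ...yet inspecting lookup_table explains nothing about the original's
    # internal structure, and the clone fails outside the probed domain.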
Replying to @o_guest
Agreed. It is difficult for me to believe that a single approach would suffice to unravel cognition, but then I'm just a learning theorist ;)