To understand how a neural network or nervous system categorizes, it is crucial to look not just at which features contributed how much to the outcome, but at what change to the total input would have led to a different classification. The space between things is significant, too.
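A minimal sketch of this counterfactual view, assuming a differentiable PyTorch classifier (the `model`, learning rate, and step count are illustrative placeholders, not from the tweet): nudge the input toward another class until the prediction flips, and the size of the nudge measures the "space between things" for that input.

```python
import torch
import torch.nn.functional as F

def counterfactual_probe(model, x, target_class, lr=0.01, steps=200):
    """Gradient-descend input x toward target_class until the model's
    prediction flips; return the perturbed input and how far it moved.
    Assumes a batch of size 1 and a differentiable model."""
    x_cf = x.clone().detach().requires_grad_(True)
    for _ in range(steps):
        logits = model(x_cf)
        if logits.argmax(dim=1).item() == target_class:
            break  # crossed the decision boundary
        loss = F.cross_entropy(logits, torch.tensor([target_class]))
        loss.backward()
        with torch.no_grad():
            x_cf -= lr * x_cf.grad  # step toward the target class
        x_cf.grad.zero_()
    return x_cf.detach(), (x_cf - x).norm().item()
```

The returned distance is a crude proxy for how close the input sits to the decision boundary; feature attributions alone would not reveal it.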
Replying to @Plinz
If you knew the details of a human’s neural connections, do you think it would be possible to design a subtle “pixel attack” to fool our classification of objects, the way a convolutional neural net can be fooled? Or is our visual intelligence more robust than ConvNets?
Replying to @DigPhysics
Of course it is. First of all, we interpret everything we see as being part of an object in 3-space, with a temporal history. That mapping is a huge constraint. Also, our brain is not differentiable and is less deterministic, which makes attacks harder.
9:51 AM - 19 May 2018
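For context, a minimal sketch of the kind of gradient-based pixel attack the thread refers to, in the style of the fast gradient sign method (FGSM, Goodfellow et al., ICLR 2015). It works precisely because a ConvNet is differentiable and deterministic, which is the contrast the reply draws; the `model`, `epsilon`, and the [0, 1] pixel range are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, label, epsilon=0.03):
    """One-step pixel attack: shift x by epsilon in the gradient-sign
    direction that increases the loss, which is often enough to change
    the predicted class while staying visually imperceptible."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()                       # gradient of loss w.r.t. pixels
    x_adv = x + epsilon * x.grad.sign()   # tiny, structured pixel shift
    return x_adv.clamp(0, 1).detach()     # assumes pixels live in [0, 1]
```

A non-differentiable, noisy system offers no such gradient to follow, which is why attacks on it would have to be far less direct.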