It has simply not yet been shown to be necessary, or even useful. It actually seemed like a more attractive avenue when we knew less and our models performed worse.
-
ML models don't attempt to emulate human cognition, and they're solving a different problem than embodied cognition in the first place, with different constraints and different degrees of freedom.
-
If your input is a static image that you're trying to classify, that's a very different setup than being an embodied agent immersed in a dynamic world subject to cause and effect. In the former case, processing all the information available in one go is actually more effective.
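As a rough illustration of that contrast (everything below — the toy "classifier", the toy world dynamics, the shapes and numbers — is invented for the example, not drawn from any real system): the static classifier gets the whole input at once and processes it in a single pass, while the embodied agent only ever sees the consequence of its last action, one step at a time.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Static setup: the full image is available in one go. ---
image = rng.normal(size=(32, 32, 3))          # a complete snapshot of the scene
weights = rng.normal(size=(32 * 32 * 3, 10))  # a toy linear "classifier"
logits = image.reshape(-1) @ weights          # one pass over all the pixels at once
predicted_class = int(np.argmax(logits))

# --- Embodied setup: observations arrive one step at a time, and each
# action changes what will be observed next (cause and effect). ---
state = 0.0
observations = []
for t in range(5):
    action = 1.0 if state < 0.5 else -1.0                  # act on the current belief
    state = state + 0.1 * action + rng.normal(scale=0.01)  # the world responds to the action
    observations.append(state + rng.normal(scale=0.1))     # only a partial, noisy view is seen

print(predicted_class, observations)
```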
-
How does that work with ambiguity in the signal, though? Even static images can be ambiguous and require active inference. For example, see this figure from Bar (2004), where the same blob can be seen as a hairdryer or a drill depending on active interpretation of the scene. pic.twitter.com/v4ZWzmCanP
-
You can take context into account without active perception. Active perception only becomes really useful in a dynamic world where it's possible to formulate & test hypotheses (which requires a time component).
-
The current deep learning standard for implementing context-awareness is "neural attention" (cf. Transformers), perhaps you know about it. It has very little in common with active perception, though.
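For reference, here is a minimal sketch of the scaled dot-product attention at the core of Transformer-style neural attention (the function and variable names are illustrative, not taken from any particular library): every query is compared to every key, and each output is a softmax-weighted average of the values. All of the input is already available to the model at once; nothing is actively sampled or tested over time, which is the sense in which it differs from active perception.

```python
import numpy as np

def scaled_dot_product_attention(queries, keys, values):
    """queries: (n_q, d), keys: (n_k, d), values: (n_k, d_v)."""
    d = queries.shape[-1]
    # Similarity of every query to every key, scaled to keep magnitudes stable.
    scores = queries @ keys.T / np.sqrt(d)
    # Softmax turns the scores into a weighting over all positions at once.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output is a soft re-weighting of information that is already
    # fully available, not a sequential decision about where to look next.
    return weights @ values

# Tiny usage example with made-up shapes.
q = np.random.rand(4, 8)    # 4 query positions, dim 8
k = np.random.rand(6, 8)    # 6 key positions, dim 8
v = np.random.rand(6, 16)   # values aligned with the keys
out = scaled_dot_product_attention(q, k, v)  # shape (4, 16)
```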
-
The fact is that hardly any ML model takes "the world" as an input (complete with time, cause & effect); they only see static snapshots of it. Ultimately this is why active perception hasn't taken off. If all of AI were cognitive developmental robotics, it would be a different story.
-
Thanks. This is fascinating. I need to brush up on neural attention from an AI perspective, especially since I study human attention & am curious about the overlap. I'm also curious about what challenges lie ahead for ML models & whether they will entail a need to better emulate human cognition.
-
Curious to hear your thoughts (and happy to explain neural attention if you need). I strongly suspect that neural attention doesn't actually implement "attention" in the human sense (though almost all DL folks do believe that neural attention is in fact a model of attention).
-
Would love to read up on it. Can you recommend any introductions?
-
This is funny because I was writing one a couple of months ago for the 2nd edition of my textbook. Happy to send you the draft over email.
-
That would be awesome, thanks. I've sent you an email.