I'm sorry, I wasn't aware I was only allowed to tweet novel ideas. Of course this is old and well-known. Yet many folks still haven't fully internalized it. Does everyone in your field behave like this (in public, at that) or is it just you guys?
I love the fact that you recognize what the interesting questions are in our field, as I'm sure I couldn't even figure out what separates the interesting from mundane questions in yours. It's exciting any time someone with different expertise shares our awe in what we study.
Cool, and I appreciate your Neisser reference. Active perception is still not a hot topic in AI today. I used to do some research in that area (active vision with an anticipative eye-saccade model) back in 2012, and these ideas had very little traction then. It's trending up, though.
That's actually exciting. The fact that active perception is not a hot topic in AI is amazing to me, as it's almost taken for granted in our field. Is that because it's hard to build into AI, or does it represent a fertile avenue for collaboration between cognitive & AI researchers?
It has simply not yet been shown to be necessary, or even useful. It actually seemed like a more attractive avenue when we knew less and our models performed worse.
ML models don't attempt to emulate human cognition, and they're solving a different problem than embodied cognition in the first place, with different constraints and different degrees of freedom.
If your input is a static image that you're trying to classify, that's a very different setup from being an embodied agent immersed in a dynamic world subject to cause and effect. In the former case, processing all the information available in one go is actually more effective.
How does that work with ambiguity in the signal, though? Even static images can be ambiguous and require active inference. For example, see this figure from Bar (2004), where the same blob can be seen as a hairdryer or a drill depending on active interpretation of the scene. pic.twitter.com/v4ZWzmCanP
You can take context into account without active perception. Active perception only becomes really useful in a dynamic world where it's possible to formulate & test hypotheses (which requires a time component).
The current deep learning standard for implementing context-awareness is "neural attention" (cf Transformers), perhaps you know about it. It has very little in common with active perception though.
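[Editor's note: a minimal sketch of the scaled dot-product attention mechanism used in Transformers, to illustrate the point above. NumPy only; shapes and names are illustrative, not from the thread. Note how it processes all positions in a single static pass, with no perceive-act loop:]

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    # Every query attends to every key at once: a single static pass
    # over the whole input, not a sequential hypothesis-testing loop.
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # query-key similarities
    weights = softmax(scores, axis=-1)   # each row sums to 1
    return weights @ V                   # weighted mix of the values

rng = np.random.default_rng(0)
Q = rng.standard_normal((4, 8))   # 4 query positions, dimension 8
K = rng.standard_normal((6, 8))   # 6 key/value positions
V = rng.standard_normal((6, 8))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (4, 8): one context-aware vector per query
```

This is "attention" only in the sense of a differentiable weighted average over a fixed input, which is why it has little in common with active perception.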
The fact is that hardly any ML model takes "the world" as an input (complete with time, cause & effect); they take only static snapshots of it. Ultimately this is why active perception hasn't taken off. If all of AI were cognitive developmental robotics, it would be a different story.
Thanks. This is fascinating. I need to brush up on neural attention from an AI perspective, especially since I study human attention & am curious about the overlap. I wonder what challenges lie ahead for ML models & whether they will entail a need to better emulate human cognition.
Curious to hear your thoughts (and happy to explain neural attention if you need). I strongly suspect that neural attention doesn't actually implement "attention" in the human sense (though almost all DL folks do believe that neural attention is in fact a model of attention).