Still vastly underused in image-to-image to this day, and I don't understand why: U-Nets and self-attention. I keep seeing papers come out with obvious problems that could be solved using these two simple things!
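(For anyone following along: a rough sketch of the kind of spatial self-attention block being talked about here, SAGAN-style, in PyTorch. The module name, reduction factor, and usage example are illustrative choices, not taken from any repo in this thread.)

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfAttention2d(nn.Module):
    """Spatial self-attention over a CNN feature map (SAGAN-style sketch).

    Every spatial position attends to every other position, so the layer can
    model long-range dependencies that plain convolutions miss.
    """
    def __init__(self, in_channels):
        super().__init__()
        self.query = nn.Conv2d(in_channels, in_channels // 8, kernel_size=1)
        self.key   = nn.Conv2d(in_channels, in_channels // 8, kernel_size=1)
        self.value = nn.Conv2d(in_channels, in_channels, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned weight on the attention branch

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)   # (B, HW, C//8)
        k = self.key(x).flatten(2)                     # (B, C//8, HW)
        v = self.value(x).flatten(2)                   # (B, C, HW)
        attn = F.softmax(q @ k, dim=-1)                # (B, HW, HW): each position over all others
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x                    # residual; starts as identity (gamma = 0)

# Drop it between conv blocks, e.g. at the bottleneck of a U-Net:
x = torch.randn(2, 64, 32, 32)
print(SelfAttention2d(64)(x).shape)  # torch.Size([2, 64, 32, 32])
```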
Replying to @citnaj
Hey @citnaj, I recently had a go at implementing self-attention for images. I'm not sure if my implementation is correct. Would you mind taking a look? GitHub repo => https://github.com/theairbend3r/how-to-train-your-neural-net/blob/master/pytorch/basics/attention_in_computervision.ipynb …
Replying to @theairbend3r
Sure, will take a look tomorrow; quite late here now and I'm being super bad. But did it work for you?
Replying to @citnaj
Sure thing! :) I'm not sure. I tried CNN + self-attention for image classification as a trial. The code runs, but MNIST was the wrong choice (hehe). I'll soon try it on a tougher dataset. I'm also trying to use Grad-CAM to visualize the activation maps. Repo => https://github.com/theairbend3r/how-to-train-your-neural-net/blob/master/pytorch/computer_vision/attention/self_attention.ipynb …
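(Rough sketch of the Grad-CAM idea mentioned above, assuming a hook-based setup: grab a conv layer's activations and the gradients of the class score with respect to them, then weight, pool, and normalise. The tiny placeholder CNN and the hooked layer are made up for illustration; this is not the code in the linked notebook.)

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradCAM:
    """Minimal Grad-CAM: weight each channel of a target feature map by the
    mean gradient of the class score w.r.t. that channel, then ReLU and normalise."""
    def __init__(self, model, target_layer):
        self.model = model
        self.activations = None
        self.gradients = None
        target_layer.register_forward_hook(self._save_activation)
        target_layer.register_full_backward_hook(self._save_gradient)

    def _save_activation(self, module, inputs, output):
        self.activations = output.detach()

    def _save_gradient(self, module, grad_input, grad_output):
        self.gradients = grad_output[0].detach()

    def __call__(self, x, class_idx=None):
        self.model.zero_grad()
        logits = self.model(x)                        # (B, num_classes)
        if class_idx is None:
            class_idx = logits.argmax(dim=1)          # explain the predicted class
        logits.gather(1, class_idx.view(-1, 1)).sum().backward()
        weights = self.gradients.mean(dim=(2, 3), keepdim=True)   # (B, C, 1, 1)
        cam = F.relu((weights * self.activations).sum(dim=1))     # (B, H, W)
        return cam / (cam.amax(dim=(1, 2), keepdim=True) + 1e-8)  # normalise to [0, 1]

# Placeholder CNN just to show the wiring; hook the last conv layer.
model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 10),
)
cam = GradCAM(model, target_layer=model[2])
print(cam(torch.randn(4, 1, 28, 28)).shape)  # torch.Size([4, 28, 28])
```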
Replying to @theairbend3r @citnaj
Thanks for the feedback @citnaj. Appreciate it! :) Do you know of any other interesting attention mechanisms that are used on images (trainable by backprop)?
ResNeSt just came out: https://arxiv.org/abs/2004.08955