Still vastly underused in image-to-image to this day, and I don't understand why: U-Nets and self-attention. I keep seeing papers come out that have obvious problems that could be solved using these two simple things!
Replying to @citnaj
Hey
@citnaj, I recently had a go at implementing self-attention for images. I'm not sure if my implementation is correct. Would you mind taking a look? Github repo => https://github.com/theairbend3r/how-to-train-your-neural-net/blob/master/pytorch/basics/attention_in_computervision.ipynb
Replying to @theairbend3r
Sure, I'll take a look tomorrow; quite late here now and I'm being super bad. But did it work for you?
Replying to @citnaj
Sure thing! :) I'm not sure. I tried CNN + self-attention for image classification as a trial. The code runs, but MNIST was a wrong choice (hehe). I'll soon try it on a tougher dataset. I'm also trying to use GradCam to visualize the activation maps. Repo => https://github.com/theairbend3r/how-to-train-your-neural-net/blob/master/pytorch/computer_vision/attention/self_attention.ipynb
1/ I took a look. I really think you did a great job of making a clear and simple implementation of self-attention for educational purposes. I don't see anything wrong, though you could add a residual connection around the attention output, scaled by a learnable gamma parameter initialized to zero, so the network starts from the plain convolutional signal and eases attention in during training.
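A minimal sketch of what that suggestion could look like in PyTorch. This is not the repo's code; the module name `SelfAttention2d` and the channel-reduction factor of 8 for the query/key projections are my assumptions, following the common SAGAN-style formulation:

```python
import torch
import torch.nn as nn


class SelfAttention2d(nn.Module):
    """Self-attention over spatial positions of a feature map,
    with a residual connection and a learnable gamma initialized
    to zero, so the module starts out as the identity."""

    def __init__(self, in_channels: int):
        super().__init__()
        # 1x1 convs project features to query/key/value spaces.
        self.query = nn.Conv2d(in_channels, in_channels // 8, kernel_size=1)
        self.key = nn.Conv2d(in_channels, in_channels // 8, kernel_size=1)
        self.value = nn.Conv2d(in_channels, in_channels, kernel_size=1)
        # gamma == 0 at init => output == input, attention eases in.
        self.gamma = nn.Parameter(torch.zeros(1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)  # (B, HW, C//8)
        k = self.key(x).flatten(2)                    # (B, C//8, HW)
        v = self.value(x).flatten(2)                  # (B, C, HW)
        # Attention weights over all spatial positions.
        attn = torch.softmax(q @ k, dim=-1)           # (B, HW, HW)
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        # Residual connection scaled by the learnable gamma.
        return self.gamma * out + x


x = torch.randn(2, 16, 8, 8)
y = SelfAttention2d(16)(x)
print(y.shape)                 # torch.Size([2, 16, 8, 8])
print(torch.allclose(y, x))    # True at init, since gamma starts at 0
```

Because gamma starts at zero, gradients initially flow through the untouched residual path, and the network only uses attention to the extent that training pushes gamma away from zero.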