Still vastly underused in image-to-image work to this day, and I don't understand why: U-Nets and self-attention. I keep seeing papers come out with obvious problems that could be solved using these two simple things!
Replying to @citnaj
Hey
@citnaj, I recently had a go at implementing self-attention for images. I'm not sure if my implementation is correct. Would you mind taking a look? GitHub repo => https://github.com/theairbend3r/how-to-train-your-neural-net/blob/master/pytorch/basics/attention_in_computervision.ipynb
Replying to @theairbend3r
Sure, will take a look tomorrow. Quite late here now and I'm being super bad. But did it work for you?
Replying to @citnaj
Sure thing! :) I'm not sure. I tried CNN + self-attention for image classification as a trial. The code runs, but MNIST was a wrong choice (hehe). I'll soon try it on a tougher dataset. I'm also trying to use Grad-CAM to visualize the activation maps. Repo => https://github.com/theairbend3r/how-to-train-your-neural-net/blob/master/pytorch/computer_vision/attention/self_attention.ipynb
2/ To be clear, the idea behind the gamma parameter is that the conv features are easier to learn, so they get learned first, before attention is weighted in and learned. See SelfAttention here as an example: https://github.com/fastai/fastai/blob/master/fastai/layers.py
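The gating described above can be sketched as a SAGAN-style self-attention layer with a learnable `gamma` initialized to zero, so the layer starts out as an identity and attention is blended in as `gamma` grows. This is a minimal illustrative sketch modeled on that idea, not a copy of the fastai implementation; the class and variable names here are my own.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfAttention(nn.Module):
    """SAGAN-style self-attention with a zero-initialized gamma gate."""
    def __init__(self, n_channels: int):
        super().__init__()
        # 1x1 convs project to query/key (reduced channels) and value.
        self.query = nn.Conv1d(n_channels, n_channels // 8, 1, bias=False)
        self.key   = nn.Conv1d(n_channels, n_channels // 8, 1, bias=False)
        self.value = nn.Conv1d(n_channels, n_channels, 1, bias=False)
        # gamma starts at 0: the layer is an identity at init, so the
        # conv features get learned first and attention is weighted in
        # gradually as gamma moves away from zero.
        self.gamma = nn.Parameter(torch.zeros(1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        flat = x.view(b, c, h * w)                 # (B, C, N) with N = H*W
        q, k, v = self.query(flat), self.key(flat), self.value(flat)
        # Attention over spatial positions: (B, N, N).
        attn = F.softmax(torch.bmm(q.transpose(1, 2), k), dim=1)
        out = torch.bmm(v, attn)                   # (B, C, N)
        return (self.gamma * out + flat).view(b, c, h, w)
```

Because `gamma` is exactly zero at initialization, the residual term dominates and the output equals the input, which is what lets the network train its convolutional pathway before relying on attention.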