Still vastly underused in image-to-image work to this day, and I don't understand why: U-Nets and self-attention. I keep seeing papers come out with obvious problems that could be solved using these two simple things!
-
-
Sure thing! :) I'm not sure. I tried CNN+Self Attention for image classification as a trial. The code runs, but MNIST was the wrong choice (hehe). I'll soon try it on a tougher dataset. I'm also trying to use GradCAM to visualize the activation maps. repo => https://github.com/theairbend3r/how-to-train-your-neural-net/blob/master/pytorch/computer_vision/attention/self_attention.ipynb
-
1/ I took a look. I really think you did a great job of making a clear and simple implementation of self-attention for educational purposes. I don't see anything wrong, though you could add a residual connection around the attention output with a learnable gamma parameter to ease the attention in during learning.
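A minimal sketch of what that suggestion could look like, assuming a SAGAN-style layer: the attention output is scaled by a learnable `gamma` initialized to zero, so the block starts as an identity map and attention is eased in as training progresses. The module name `SelfAttention2d` is hypothetical, not from the linked repo.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfAttention2d(nn.Module):
    """Self-attention with a gamma-scaled residual connection (hypothetical sketch).

    gamma starts at 0, so initially forward(x) == x and the network can
    learn to blend in attention gradually.
    """
    def __init__(self, channels):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.key = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learnable residual weight

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)  # (b, h*w, c//8)
        k = self.key(x).flatten(2)                    # (b, c//8, h*w)
        v = self.value(x).flatten(2)                  # (b, c, h*w)
        attn = F.softmax(q @ k, dim=-1)               # (b, h*w, h*w)
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        # residual connection: gamma == 0 at init, so this is the identity
        return self.gamma * out + x
```

Because `gamma` starts at zero, the layer can be dropped into an existing network without disturbing its initial behavior.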