Still vastly underused in image-to-image tasks to this day, and I don't understand why: U-Nets and self-attention. I keep seeing papers come out with obvious problems that could be solved using these two simple things!
Replying to @MaxLenormand
3/ If you start from scratch, you're really putting your network at a needless disadvantage. Stand on the shoulders of giants!
Replying to @citnaj
That sounds like what transfer learning helps with, but I'm guessing it's something different? My lack of knowledge of self-attention probably doesn't help me wrap my head around how they could help. One more thing to check out, thanks!
Yeah, it's transfer learning, with the added bonus of a thoughtful design: extracting features from key slices of the vision model and processing them to use that transfer learning more effectively. Self-attention should be considered a key component, much like fully connected and convolutional layers.
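To make the self-attention part concrete, here is a minimal NumPy sketch of a single-head self-attention operation applied to a flattened conv feature map, treating each spatial position as a token. This is an illustration of the general mechanism, not the specific architecture discussed above; the function name and the toy shapes (an 8x8 map with 16 channels) are assumptions for the example.

```python
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """Single-head self-attention over a flattened feature map.

    x: (n, c) array -- n spatial positions, c channels (e.g. an
       (h, w, c) conv feature map reshaped to (h*w, c)).
    w_q, w_k, w_v: (c, d) projection matrices (toy values here).
    """
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / np.sqrt(k.shape[1])       # (n, n) pairwise affinities
    scores -= scores.max(axis=1, keepdims=True)  # for numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)      # softmax over positions
    return attn @ v                              # (n, d) attended features

# Toy usage: an 8x8 "feature map" with 16 channels, flattened to 64 tokens
rng = np.random.default_rng(0)
x = rng.normal(size=(64, 16))
w_q, w_k, w_v = (rng.normal(size=(16, 16)) * 0.1 for _ in range(3))
out = self_attention(x, w_q, w_k, w_v)
print(out.shape)  # (64, 16)
```

The point of plugging this in next to conv layers is that each output position becomes a weighted mix of every other position, so long-range dependencies the local conv kernels miss can still propagate.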