1/ In putting together rough ideas for a DeOldify talk tonight, I was reminded of a key insight/approach that has really paid off over the years. And that is: People do research. And people- even the smartest people- are prone to less-than-optimal problem-solving behaviors.
2/ e.g. Why weren't pretrained U-Nets or transfer learning employed in GANs in late 2018? Probably simply because nobody else was doing it- lack of social validation. It certainly wasn't hard to come up with a proof of concept, especially for a fresh MOOC grad like me.
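A minimal sketch of that idea, for anyone unfamiliar with it (illustrative PyTorch only, not DeOldify's actual code; class and variable names here are made up): use an ImageNet-pretrained ResNet34 as the encoder of a U-Net, so the GAN generator starts from transferred features instead of random weights.

```python
# Illustrative sketch only - not DeOldify's actual implementation.
# Idea: a U-Net GAN generator whose encoder is an ImageNet-pretrained
# ResNet34, so training starts from transferred features.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet34, ResNet34_Weights


class UpBlock(nn.Module):
    """Upsample, concatenate the matching encoder feature map, convolve."""

    def __init__(self, in_ch: int, skip_ch: int, out_ch: int):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch + skip_ch, out_ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, x, skip):
        x = F.interpolate(x, size=skip.shape[-2:], mode="nearest")
        return self.conv(torch.cat([x, skip], dim=1))


class PretrainedUnetGenerator(nn.Module):
    """U-Net generator with an ImageNet-pretrained ResNet34 encoder."""

    def __init__(self, out_channels: int = 3):
        super().__init__()
        enc = resnet34(weights=ResNet34_Weights.IMAGENET1K_V1)
        self.stem = nn.Sequential(enc.conv1, enc.bn1, enc.relu)  # 64ch, H/2
        self.pool = enc.maxpool                                  #       H/4
        self.e1, self.e2 = enc.layer1, enc.layer2  # 64ch  H/4,  128ch H/8
        self.e3, self.e4 = enc.layer3, enc.layer4  # 256ch H/16, 512ch H/32
        self.d4 = UpBlock(512, 256, 256)
        self.d3 = UpBlock(256, 128, 128)
        self.d2 = UpBlock(128, 64, 64)
        self.d1 = UpBlock(64, 64, 64)
        self.out = nn.Conv2d(64, out_channels, kernel_size=1)

    def forward(self, x):
        # For colorization, a 1-channel input would first be repeated to
        # 3 channels, since the pretrained stem expects RGB.
        s0 = self.stem(x)
        s1 = self.e1(self.pool(s0))
        s2 = self.e2(s1)
        s3 = self.e3(s2)
        s4 = self.e4(s3)
        d = self.d4(s4, s3)
        d = self.d3(d, s2)
        d = self.d2(d, s1)
        d = self.d1(d, s0)
        d = F.interpolate(d, size=x.shape[-2:], mode="nearest")
        return torch.tanh(self.out(d))


# Quick shape check: the generator maps an image to one of the same size.
g = PretrainedUnetGenerator()
print(g(torch.randn(1, 3, 224, 224)).shape)  # torch.Size([1, 3, 224, 224])
```

This generator then slots into an ordinary GAN training loop in place of a randomly initialized one; the transferred encoder is the whole trick.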
3/ The same thing seems to have happened with NLP and transfer learning- see ULMFiT and the resulting NLP revolution. You could even say the same of neural networks themselves: they weren't taught at all when I took my computer vision and AI university classes in 2004-2005.
4/ Since then, time and time again, it has paid off for me, big time, to simply remember to look where others aren't looking, and to prioritize actual observation over trying uncritically assume that smarter people have already covered the territory and are right.
"over trying uncritically assume" Oh that's another thing I find interesting- how brain glitches happen. This one is hardcore bizarre. I swear I try to proofread 
