1/ In putting together rough ideas for a DeOldify talk tonight, I was reminded of a key insight/approach that has really paid off over the years. And that is: People do research. And people - even the smartest people - are prone to less-than-optimal problem-solving behaviors.
-
2/ e.g. Why weren't pretrained U-Nets or transfer learning employed in GANs in late 2018? Probably simply because nobody else was doing it - lack of social validation. It certainly wasn't hard to come up with a proof of concept, especially for a fresh MOOC grad like me.
-
3/ Same thing seems to have happened with NLP and transfer learning- see ULMFiT and the resulting NLP revolution. You can even say the same thing for neural networks themselves. They didn't even teach them when I took my computer vision and AI university classes in 2004-2005.
-
Replying to @citnaj
I took a class on NNs in 2004. They were interesting but kind of useless, as computers weren't powerful enough yet.
Yeah, I get that - it was even before CUDA. But there was already reason to believe at that point that they were powerful, and there were already practical demonstrations of this.