PS: I believe that @ylecun coined the term "self-supervised learning". I like it!
-
The best SSL approaches to learning visual features today use Siamese networks to learn embeddings. Examples: two recent papers from FAIR, "Pretext-Invariant Representation Learning" (Misra et al.) and "MoCo" (He et al.).
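To make the Siamese idea concrete: one shared encoder maps two augmented views of the same image to embeddings, trained so matching views land close together against other images in the batch (an InfoNCE-style contrastive loss, roughly the objective family MoCo and PIRL use). A minimal numpy sketch, with a toy linear encoder standing in for a real network:

```python
import numpy as np

def encode(x, W):
    """Shared encoder (a single linear layer here); both views use the same W."""
    z = x @ W
    return z / np.linalg.norm(z, axis=1, keepdims=True)  # L2-normalize embeddings

def info_nce(z1, z2, tau=0.1):
    """InfoNCE-style loss: z1[i] should match z2[i] against all other z2[j]."""
    logits = z1 @ z2.T / tau                   # pairwise cosine similarities
    logits -= logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))        # positive pairs sit on the diagonal

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 4))                    # toy encoder weights
x = rng.normal(size=(16, 8))                   # a batch of "images"
view1 = x + 0.05 * rng.normal(size=x.shape)    # two cheap "augmentations"
view2 = x + 0.05 * rng.normal(size=x.shape)
loss = info_nce(encode(view1, W), encode(view2, W))
```

This is just the loss geometry, not either paper's method: MoCo adds a momentum encoder and a queue of negatives, and PIRL gets its views from a pretext transform like jigsaw.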
-
Thanks a lot - I hadn't seen those papers. It's good that PIRL can be added to any pretext task; the jigsaw and rotation tasks they test on aren't great for medical imaging (which often has little macro structure and can be rotation invariant).
- 5 more replies
New conversation -
-
-
Nicely written! Also, denoising seems to be an easier pretext task to start with...
-
I think it's a less useful pretext task though. I'd expect inpainting to outperform it in most cases.
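The difference between the two pretext tasks is just the corruption function; the target is the clean input in both cases. A small numpy sketch of the two training pairs (the image here is random data standing in for a real one):

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((28, 28))                  # stand-in for a clean image

# Denoising pretext task: corrupt every pixel, predict the clean image.
noisy = img + 0.1 * rng.normal(size=img.shape)

# Inpainting pretext task: zero out a patch, predict the missing region.
masked = img.copy()
masked[10:18, 10:18] = 0.0

# In both cases the training pair is (corrupted, clean). Inpainting forces
# the model to reconstruct content purely from surrounding context, which
# is the intuition for why it tends to be the more useful task.
```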
- 2 more replies
New conversation -
-
-
Amazing how many good ideas Hinton covered in his old Coursera lectures. I'm thinking of things like pretraining with an autoencoder, dropout at test time, spectral norm regularization, etc.
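"Dropout at test time" here refers to keeping dropout active at inference and averaging many stochastic forward passes, so the spread of predictions gives a rough uncertainty estimate (what's now usually called Monte Carlo dropout). A minimal sketch with a toy one-hidden-layer network:

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(x, W, drop_rate=0.5, train=True):
    """One hidden layer with dropout. Passing train=True at test time
    gives a stochastic prediction (Monte Carlo dropout)."""
    h = np.maximum(x @ W, 0.0)
    if train:
        mask = rng.random(h.shape) > drop_rate
        h = h * mask / (1.0 - drop_rate)    # inverted-dropout scaling
    return h.sum(axis=1)

W = rng.normal(size=(8, 16))                # toy weights
x = rng.normal(size=(4, 8))                 # a batch of 4 inputs

# Average many stochastic passes; the std across passes estimates uncertainty.
samples = np.stack([forward(x, W, train=True) for _ in range(100)])
mean, std = samples.mean(axis=0), samples.std(axis=0)
```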
-
Yeah it's a gem
End of conversation
New conversation -
-
-
Really liked this post I read on the same subject late last year: https://lilianweng.github.io/lil-log/2019/11/10/self-supervised-learning.html
-
Very nice!
End of conversation
New conversation -
-
-
@jeremyphoward this is such an amazing summary! A 5-minute read, 100% knowledge update! Really looking forward to your fastai book! Here for the people who don't know it yet: https://www.amazon.com/-/de/dp/1492045527/ref=mp_s_a_1_fkmr0_1?keywords=for+light+fastai&qid=1579028414&sr=8-1-fkmr0
Glad you liked it :)
End of conversation