Neat paper! It shows that inner-loop adaptation is not necessary at meta-test time for MAML: removing the final layer and computing cosine similarities (similar to prototypical networks) is sufficient. https://twitter.com/maithra_raghu/status/1176181071727095808
This is on the original set of MAML tasks, and the results would most likely differ if the tasks required more adaptation.
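For concreteness, the no-adaptation meta-test procedure described above can be sketched as follows. This is a hypothetical illustration, not the paper's code: it assumes we already have penultimate-layer embeddings from the meta-trained network, drops the final linear layer, and classifies query examples by cosine similarity to per-class mean embeddings (prototypes), in the style of prototypical networks.

```python
import numpy as np

def cosine_prototype_predict(support_emb, support_labels, query_emb):
    """Classify query embeddings by cosine similarity to class prototypes.

    support_emb:    (n_support, d) penultimate-layer features
    support_labels: (n_support,) integer class labels
    query_emb:      (n_query, d) penultimate-layer features
    """
    classes = np.unique(support_labels)
    # Class prototype = mean embedding of that class's support examples.
    protos = np.stack([support_emb[support_labels == c].mean(axis=0)
                       for c in classes])
    # Normalize both sides so the dot product is cosine similarity.
    protos = protos / np.linalg.norm(protos, axis=1, keepdims=True)
    q = query_emb / np.linalg.norm(query_emb, axis=1, keepdims=True)
    sims = q @ protos.T  # (n_query, n_classes) similarity matrix
    return classes[np.argmax(sims, axis=1)]

# Toy 2-way example with hand-made, well-separated embeddings.
support_emb = np.array([[1.0, 0.1], [0.9, 0.0], [0.0, 1.0], [0.1, 0.9]])
support_labels = np.array([0, 0, 1, 1])
query_emb = np.array([[1.0, 0.0], [0.0, 1.0]])
preds = cosine_prototype_predict(support_emb, support_labels, query_emb)
```

No gradient steps are taken at meta-test time; the only per-task computation is averaging support embeddings into prototypes, which is the point of the result.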
-
-
I'm really looking forward to digging into this soon. It seems like a really informative look at adaptation that may be relevant to broader transfer learning problems, as Maithra suggested.