This paper is so, so good. Massive implications. Meta-Learning with Implicit Gradients: https://arxiv.org/abs/1909.04630
"Second, implicit MAML is agnostic to the inner optimization method used, as long as it can find an approximate solution to the inner-level optimization problem."
-
-
"We show that an ε-approximate meta-gradient can be computed via implicit MAML using Õ(log(1/ε)) gradient evaluations and Õ(1) memory, meaning the memory required does not grow with number of gradient steps."
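The memory claim comes from the implicit-gradient trick: instead of backpropagating through the inner optimization trajectory, the meta-gradient is obtained by solving a linear system of the form (I + H/λ)v = g, where H is the Hessian of the inner task loss at the adapted parameters, g the outer-loss gradient, and λ the proximal regularization strength. A minimal sketch (all numbers here are hypothetical toy values, not from the paper) using conjugate gradient, which only needs Hessian-vector products:

```python
import numpy as np

def conjugate_gradient(matvec, b, iters=20, tol=1e-10):
    """Solve A x = b using only matrix-vector products with A."""
    x = np.zeros_like(b)
    r = b - matvec(x)
    p = r.copy()
    rs = r @ r
    for _ in range(iters):
        Ap = matvec(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if rs_new < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Toy stand-ins (hypothetical): H = inner-loss Hessian at the adapted
# parameters, g = outer-loss gradient, lam = proximal regularizer strength.
lam = 1.0
H = np.array([[2.0, 0.5],
              [0.5, 1.0]])
g = np.array([1.0, -1.0])

# Implicit meta-gradient: solve (I + H/lam) v = g. No inner-loop
# trajectory is stored, which is why memory stays O(1) in the number
# of inner gradient steps.
A = np.eye(2) + H / lam
v = conjugate_gradient(lambda x: A @ x, g)
```

In practice `A @ x` would be replaced by a Hessian-vector product (e.g. via double backprop), so the full Hessian is never materialized either.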