Many of us have tried using adversarial examples as data augmentation and observed a drop in accuracy. It seems that simply using two BatchNorms (one for clean examples, one for adversarial ones) overcomes this mysterious drop in accuracy.
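A minimal numpy sketch of the two-BatchNorm idea, to make the routing concrete. This is my own illustration, not the authors' code: `BatchNorm1d` and `DualBatchNorm` are hypothetical helper classes, and the key point is only that clean and adversarial mini-batches keep separate normalization statistics while every other weight is shared.

```python
import numpy as np

class BatchNorm1d:
    """Minimal training-mode batch norm over axis 0 (the batch axis)."""
    def __init__(self, num_features, momentum=0.1, eps=1e-5):
        self.gamma = np.ones(num_features)
        self.beta = np.zeros(num_features)
        self.running_mean = np.zeros(num_features)
        self.running_var = np.ones(num_features)
        self.momentum = momentum
        self.eps = eps

    def __call__(self, x):
        mean, var = x.mean(axis=0), x.var(axis=0)
        # Update running statistics, as in standard BN training.
        self.running_mean = (1 - self.momentum) * self.running_mean + self.momentum * mean
        self.running_var = (1 - self.momentum) * self.running_var + self.momentum * var
        return self.gamma * (x - mean) / np.sqrt(var + self.eps) + self.beta

class DualBatchNorm:
    """AdvProp-style auxiliary BN: clean and adversarial batches are
    normalized with separate statistics; the surrounding weights are shared."""
    def __init__(self, num_features):
        self.bn_clean = BatchNorm1d(num_features)
        self.bn_adv = BatchNorm1d(num_features)

    def __call__(self, x, adversarial=False):
        return (self.bn_adv if adversarial else self.bn_clean)(x)

dbn = DualBatchNorm(4)
clean = np.random.default_rng(0).normal(0.0, 1.0, size=(32, 4))
adv = clean + 0.5  # stand-in for an adversarial perturbation
dbn(clean, adversarial=False)
dbn(adv, adversarial=True)
# The two BNs now hold different running means, so adversarial
# batches no longer distort the statistics used for clean images.
```

The design choice this illustrates: the accuracy drop is attributed to adversarial and clean images having different feature statistics, so giving each distribution its own BN removes the mismatch without changing the rest of the network.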
As a data augmentation method, adversarial examples are more general than other image processing techniques. So I expect AdvProp to be useful everywhere (language, structured data etc.), not just image recognition.
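For readers unfamiliar with adversarial examples as augmentation: one common way to generate them is the Fast Gradient Sign Method (FGSM). A toy numpy sketch on a logistic-regression model (my own illustration; the paper uses stronger attacks on deep networks, and `fgsm`/`loss` here are hypothetical helpers):

```python
import numpy as np

def fgsm(x, y, w, b, eps=0.1):
    """FGSM: perturb inputs in the direction that increases the loss."""
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))  # sigmoid prediction
    grad_x = np.outer(p - y, w)             # d(cross-entropy)/dx per sample
    return x + eps * np.sign(grad_x)

def loss(x, y, w, b):
    """Mean binary cross-entropy of the logistic model on (x, y)."""
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

rng = np.random.default_rng(0)
w, b = np.array([1.0, -2.0]), 0.0
x = rng.normal(size=(8, 2))
y = (x @ w + b > 0).astype(float)  # labels the model already gets right
x_adv = fgsm(x, y, w, b, eps=0.25)
print(loss(x_adv, y, w, b) > loss(x, y, w, b))  # True: loss went up
```

Because the perturbation depends only on gradients, not on pixels, the same recipe applies to any differentiable model over any input modality, which is the sense in which it is more general than image-specific augmentations.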
AdvProp improves accuracy for a wide range of image models, from small to large. But the improvement seems bigger when the model is larger. pic.twitter.com/13scFaoQzB
Pretrained checkpoints in PyTorch: https://github.com/rwightman/gen-efficientnet-pytorch … h/t to @wightmanr
This is becoming ridiculous.
@quocleix you are the Sergey Bubka of ImageNet, breaking your own records every 2nd week!
Next week, you will combine Noisy Student (data) and AdvProp (compute) to beat ImageNet again. Go Sergey! The "compute/data tradeoff for structure" story just keeps on giving.
Nice job, have you tried other normalization techniques, like layer normalization or weight normalization? I am just curious here.
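For context on that question: batch norm and layer norm differ mainly in which axis they normalize over, which is why swapping them changes how clean and adversarial statistics mix. A quick numpy illustration (not tied to the AdvProp code):

```python
import numpy as np

x = np.arange(12, dtype=float).reshape(4, 3)  # (batch, features)

# Batch norm: statistics per feature, computed across the batch axis.
bn = (x - x.mean(axis=0)) / np.sqrt(x.var(axis=0) + 1e-5)

# Layer norm: statistics per sample, computed across the feature axis.
ln = (x - x.mean(axis=1, keepdims=True)) / np.sqrt(x.var(axis=1, keepdims=True) + 1e-5)

print(bn.mean(axis=0).round(6))  # each feature centered, ~[0, 0, 0]
print(ln.mean(axis=1).round(6))  # each sample centered, ~[0, 0, 0, 0]
```

Since layer norm has no cross-batch statistics at all, the "two sets of statistics" trick is specific to batch-norm-style layers; that is presumably what motivates the question.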