2/ Summary: Changing the initialization to a uniformly random point within the perturbation region is the main contributor to successful FGSM adversarial training. The generated adversarial examples need to actually span the entire threat model, but otherwise don't need to be especially strong for training.
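The random-initialization step described above can be sketched as follows. This is a minimal NumPy sketch with an analytic gradient for a toy linear loss; the function and parameter names are illustrative, not taken from any released code.

```python
import numpy as np

def fgsm_random_init(x, grad_fn, eps=8/255, alpha=10/255, rng=None):
    """One FGSM attack step with uniformly random initialization.

    x       : clean input, values in [0, 1]
    grad_fn : callable returning dLoss/dInput at a given point
    eps     : L-inf radius of the threat model
    alpha   : FGSM step size
    """
    rng = rng or np.random.default_rng(0)
    # Key change vs. plain FGSM: start from a uniformly random point
    # inside the eps-ball, so training examples span the whole threat model.
    delta = rng.uniform(-eps, eps, size=x.shape)
    # Single signed-gradient step taken from the random start.
    delta = delta + alpha * np.sign(grad_fn(x + delta))
    # Project back into the eps-ball, and keep the input in valid range.
    delta = np.clip(delta, -eps, eps)
    return np.clip(x + delta, 0.0, 1.0)

# Toy example: loss = w . x, so the gradient w.r.t. x is just w.
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.5, 0.5, 0.5])
x_adv = fgsm_random_init(x, grad_fn=lambda z: w)
```

Note the perturbation always stays inside the eps-ball, so the resulting examples are valid for the threat model even though only one gradient step is taken.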
3/ Save your valuable time with cyclic learning rates and mixed precision! These techniques let FGSM adversarial training produce robust CIFAR10 and ImageNet models in 6 min and 12 hrs respectively. Super easy to incorporate (just 3-4 lines of code), and they can accelerate any training method.
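A cyclic schedule of the kind mentioned here can be written in a few lines. The sketch below is a plain-Python triangular one-cycle schedule; the peak learning rate is an illustrative value, not a recommendation from the thread.

```python
def cyclic_lr(step, total_steps, lr_max=0.2):
    """Triangular one-cycle schedule: linearly ramp the learning rate up
    to lr_max at the midpoint of training, then linearly back to zero.

    lr_max=0.2 is an illustrative choice, not a tuned value.
    """
    mid = total_steps / 2
    if step <= mid:
        return lr_max * step / mid
    return lr_max * (total_steps - step) / (total_steps - mid)
```

In a framework like PyTorch the same effect is typically a one-liner via a built-in scheduler, and mixed precision is similarly a couple of lines with an automatic-mixed-precision library, which is presumably why the whole addition fits in 3-4 lines of code.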
4/ Did you try FGSM before and it didn't work? It probably failed due to "catastrophic overfitting": plotting the learning curves reveals that, if done incorrectly, FGSM adversarial training learns a robust classifier up until it suddenly and rapidly deteriorates within a single epoch.
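The failure mode above is easy to spot programmatically from the learning curve. A minimal sketch, where the drop threshold of 20 accuracy points is an illustrative choice rather than a value from the thread:

```python
def detect_catastrophic_overfitting(robust_acc_history, drop_threshold=0.2):
    """Return the epoch index where robust accuracy collapses, or None.

    Catastrophic overfitting shows up as robust accuracy falling off a
    cliff between consecutive epochs; drop_threshold is illustrative.
    """
    for epoch in range(1, len(robust_acc_history)):
        if robust_acc_history[epoch - 1] - robust_acc_history[epoch] > drop_threshold:
            return epoch
    return None

# Robust accuracy climbs steadily, then collapses within one epoch:
history = [0.10, 0.25, 0.38, 0.43, 0.45, 0.02]
collapse_epoch = detect_catastrophic_overfitting(history)
```

A check like this could be used to trigger early stopping or a checkpoint rollback the moment the collapse begins.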
What if "catastrophic overfitting" is the thing making your clean predictions more accurate?