Looking forward to when this works as well on NLP!
In NLP, there is a back-translation method that works quite well for data augmentation. You can see it used in UDA: https://arxiv.org/abs/1904.12848 Link to code: https://github.com/google-research/uda#run-back-translation-data-augmentation-for-your-dataset
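The back-translation idea mentioned above can be sketched in a few lines: translate a sentence into a pivot language and back, and the round trip yields a paraphrase to use as augmented training data. The `translate` function below is a hypothetical stand-in for a real MT system (in UDA it is a trained en↔fr model pair); here a toy dictionary makes the example self-contained.

```python
# Toy stand-in translation tables; a real setup would use a trained
# machine-translation model pair (e.g. en->fr and fr->en).
EN_TO_FR = {"the": "le", "cat": "chat", "sat": "assis"}
FR_TO_EN = {"le": "the", "chat": "cat", "assis": "sat"}

def translate(text, table):
    # Word-by-word lookup; unknown words pass through unchanged.
    return " ".join(table.get(word, word) for word in text.split())

def back_translate(sentence):
    """Translate to a pivot language and back to produce a paraphrase."""
    pivot = translate(sentence, EN_TO_FR)
    return translate(pivot, FR_TO_EN)
```

With a real MT model the round trip rarely reproduces the input exactly, which is what makes the output a useful paraphrase for augmentation.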
- 1 more reply
New conversation -
Ever try using style transfer for augmentation? I heard it helps encourage the model to learn shapes instead of textures. Wonder if it would make sense to use it with AutoAugment?
Another fantastic work product from Le and team!
In the paper you say: "The baseline RetinaNet architecture used ... standard data augmentation techniques... This consists of doing horizontal flipping with 50% probability and multi-scale jittering..." Were rotations used? What other techniques?
The baseline only used flips and multi-scale jittering. Our learned augmentation (AutoAugment) did use other ops, including Rotate. Table 6 lists all the ops we considered for the learned augmentation policy.
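The learned policy described in this reply can be sketched as follows: each sub-policy is a list of (op, probability, magnitude) entries, and at train time each op is applied to the image with its learned probability. This is a hedged, minimal sketch; the op shown is a string-labeling placeholder, not a real image transform, and the actual op set is the one listed in Table 6 of the paper.

```python
import random

def apply_sub_policy(image, sub_policy, rng=random.random):
    """Apply each (op, probability, magnitude) entry stochastically."""
    for op, prob, magnitude in sub_policy:
        if rng() < prob:
            image = op(image, magnitude)
    return image

# Illustrative placeholder op: here "image" is just a label string so
# the sketch runs self-contained; a real op would transform pixels.
rotate = lambda img, mag: f"rotate({img}, {mag} deg)"

# Example sub-policy: Rotate with probability 0.6 and magnitude 30.
sub_policy = [(rotate, 0.6, 30)]
```

The `rng` parameter is injected only so the stochastic behavior can be tested deterministically; training code would just use `random.random`.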
End of conversation
