Qizhe Xie

@QizheXie

PhD Student @ CMU, Student Researcher @ Google Brain

Joined November 2012

Tweets


  1. Retweeted
    25 Nov 2019

    AdvProp: One weird trick to use adversarial examples to reduce overfitting. Key idea is to use two BatchNorms, one for normal examples and another one for adversarial examples. Significant gains on ImageNet and other test sets.

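    A minimal sketch of the two-BatchNorm idea above, assuming a PyTorch-style model (not the authors' released code; DualBNBlock and its arguments are hypothetical names):

        # Sketch of AdvProp's two-BatchNorm trick: shared weights, separate BN stats.
        import torch
        import torch.nn as nn

        class DualBNBlock(nn.Module):
            """Conv block keeping separate BN statistics for clean vs. adversarial inputs."""
            def __init__(self, in_ch, out_ch):
                super().__init__()
                self.conv = nn.Conv2d(in_ch, out_ch, 3, padding=1, bias=False)
                self.bn_clean = nn.BatchNorm2d(out_ch)  # statistics for normal examples
                self.bn_adv = nn.BatchNorm2d(out_ch)    # statistics for adversarial examples
                self.relu = nn.ReLU(inplace=True)

            def forward(self, x, adversarial=False):
                x = self.conv(x)  # convolution weights are shared by both branches
                x = self.bn_adv(x) if adversarial else self.bn_clean(x)
                return self.relu(x)

    During training the clean and adversarial losses are summed; at test time only the clean BNs are used, so inference cost is unchanged.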
  2. Retweeted
    25 Nov 2019

    Can adversarial examples improve image recognition? Check out our recent work: AdvProp, achieving ImageNet top-1 accuracy 85.5% (no extra data) with adversarial examples! arXiv: Checkpoints:

  3. Retweeted
    21 Nov 2019

    EfficientDet: a new family of efficient object detectors. It is based on EfficientNet, and many times more efficient than state-of-the-art models. Link: Code: coming soon

  4. Retweeted
    18 Nov 2019

    *New paper* RandAugment: a new data augmentation method. Better & simpler than AutoAugment. The main idea is to select transformations at random and tune their magnitude. It achieves 85.0% top-1 on ImageNet. Paper: Code:

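    A hedged sketch of the RandAugment recipe, assuming PIL (not the released code; the op list is a small illustrative subset of the paper's ~14 operations):

        # RandAugment-style sketch: apply n random ops at one shared magnitude m.
        import random
        from PIL import Image, ImageEnhance, ImageOps

        def rand_augment(img: Image.Image, n: int = 2, m: int = 9) -> Image.Image:
            """Apply n randomly chosen ops, each scaled by magnitude m in [0, 10]."""
            level = m / 10.0
            ops = [
                lambda im: ImageOps.autocontrast(im),
                lambda im: ImageOps.solarize(im, int(256 * (1 - level))),
                lambda im: ImageEnhance.Contrast(im).enhance(1.0 + level),
                lambda im: ImageEnhance.Sharpness(im).enhance(1.0 + level),
                lambda im: im.rotate(30 * level),
            ]
            for op in random.choices(ops, k=n):  # uniform sampling, with replacement
                img = op(img)
            return img

    Because only two scalars (n, m) remain to tune, a simple grid search replaces AutoAugment's expensive learned policy search.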
  5. Retweeted
    12 Nov 2019

    Nice new results from researchers on improving the state-of-the-art on ImageNet! "We...train a...model on...ImageNet...& use it as a teacher to generate pseudo labels on 300M unlabeled images. We then train a larger...model on the...labeled & pseudo labeled images."

  6. Retweeted
    12 Nov 2019
    Replying to

    So grateful for all the really useful work you've been releasing, Quoc - I don't know how you do it! :) I love this focus on making the most of data augmentation, pseudo-labeling, and other practically important techniques.

  7. Retweeted
    12 Nov 2019

    Another view of Noisy Student: semi-supervised learning is great even when labeled data is plentiful! 130M unlabeled images yield a 1% gain over the previous ImageNet SOTA, which uses 3.5B weakly labeled examples! Joint work w/ , Ed Hovy,

  8. Retweeted
    11 Nov 2019

    "Self-training with Noisy Student improves ImageNet classification" achieves 87.4% top-1 accuracy. 1 Train a model on ImageNet 2 Generate pseudo labels on unlabeled extra dataset 3 Train a student model using all the data and make it a new teacher ->2

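    The three steps above as a schematic loop (a sketch only: train_fn, predict_fn, and the dataset arguments are user-supplied placeholders, not the released implementation):

        # Schematic Noisy Student self-training loop.
        def noisy_student(labeled, unlabeled, train_fn, predict_fn, iterations=3):
            teacher = train_fn(labeled, noisy=False)  # (1) train teacher on ImageNet
            for _ in range(iterations):
                # (2) teacher generates pseudo labels on the unlabeled extra dataset
                pseudo = [(x, predict_fn(teacher, x)) for x in unlabeled]
                # (3) an equal-or-larger student is trained *with noise* (dropout,
                # stochastic depth, data augmentation) on labeled + pseudo-labeled
                # data, then becomes the next teacher -> back to (2)
                teacher = train_fn(labeled + pseudo, noisy=True)
            return teacher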
  9. Retweeted
    12 Nov 2019

    Amazing unsupervised learning results:

  10. Retweeted
    12 Nov 2019

    Full comparison against state-of-the-art on ImageNet. Noisy Student is our method. Noisy Student + EfficientNet is 11% better than your favorite ResNet-50 😉

  11. Retweeted
    12 Nov 2019

    Example predictions on the robustness benchmarks ImageNet-A, C, and P. Black text shows correct predictions made by our model; red text shows incorrect predictions by our baseline model.

  12. Retweeted
    12 Nov 2019

    Want to improve accuracy and robustness of your model? Use unlabeled data! Our new work uses self-training on unlabeled data to achieve 87.4% top-1 on ImageNet, 1% better than SOTA. Huge gains are seen on harder benchmarks (ImageNet-A, C and P). Link:

  13. Retweeted
    10 Jul 2019

    Glad to see semi-supervised curves surpass or approach their supervised counterparts with much less labeled data! Check out the blog post on our work on "Unsupervised Data Augmentation (UDA) for consistency training", with a code release!

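    A minimal sketch of the consistency-training objective behind UDA, assuming a PyTorch classifier (not the released code; augment_fn is a user-supplied placeholder, and the real method adds refinements such as confidence masking and training-signal annealing):

        # UDA-style loss: supervised cross-entropy + consistency on unlabeled data.
        import torch
        import torch.nn.functional as F

        def uda_loss(model, x_sup, y_sup, x_unsup, augment_fn, lam=1.0):
            sup = F.cross_entropy(model(x_sup), y_sup)
            with torch.no_grad():  # target distribution on the original input is held fixed
                target = F.softmax(model(x_unsup), dim=-1)
            log_pred = F.log_softmax(model(augment_fn(x_unsup)), dim=-1)
            consistency = F.kl_div(log_pred, target, reduction="batchmean")
            return sup + lam * consistency

    The thread's point is that the quality of augment_fn (e.g., RandAugment for images, back-translation for text) largely determines how well the consistency term works.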
  14. Retweeted
    10 Jul 2019

    This work was conducted by , , , Eduard Hovy, and . Get the code here:

  15. Retweeted
    10 Jul 2019

    Recent work on "Unsupervised Data Augmentation" (UDA) reveals that better data augmentation leads to better semi-supervised learning, with state-of-the-art results on various language and vision benchmarks, using one or two orders of magnitude less data.

  16. 9 Jul 2019

    Very nice presentation on Unsupervised Data Augmentation (UDA): . Looking forward to seeing more work applying UDA to achieve SOTA performance on more tasks! Thank you and for taking the time to chat with me and present it!

  17. Retweeted
    26 Jun 2019

    We open-sourced the AutoAugment strategy for object detection. This strategy significantly improves detection models in our benchmarks. Please try it on your problems. Code: Paper: More details & results 👇

  18. Retweeted
    19 Jun 2019

    XLNet: a new pretraining method for NLP that significantly improves upon BERT on 20 tasks (e.g., SQuAD, GLUE, RACE). arXiv: GitHub (code + pretrained models): with Zhilin Yang, , Yiming Yang, Jaime Carbonell,

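    A hedged sketch of using the released pretrained models via the Hugging Face transformers library (an assumption; the tweet links the original GitHub release, which ships its own TensorFlow loading code):

        # Load a pretrained XLNet and extract contextual representations.
        import torch
        from transformers import XLNetTokenizer, XLNetModel

        tokenizer = XLNetTokenizer.from_pretrained("xlnet-large-cased")
        model = XLNetModel.from_pretrained("xlnet-large-cased")

        inputs = tokenizer("XLNet improves upon BERT on 20 tasks.", return_tensors="pt")
        with torch.no_grad():
            outputs = model(**inputs)
        print(outputs.last_hidden_state.shape)  # (batch, seq_len, hidden_size)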
  19. Retweeted
    18 May 2019

    Nice blog post titled "The Quiet Semi-Supervised Revolution" by Vincent Vanhoucke. It discusses two related works by the Google Brain team: Unsupervised Data Augmentation and MixMatch.

  20. Retweeted
    29 Apr 2019

    Data augmentation is often associated with supervised learning. We find *unsupervised* data augmentation works better. It combines well with transfer learning (e.g. BERT) and improves everything when datasets have a small number of labeled examples. Link:


