Quoc Le

@quocleix

Principal Scientist, Google Brain Team.

Joined April 2018

Media

  1. Jan 30

    I had another conversation with Meena just now. It's not as funny and I don't understand the first answer. But the replies to the next two questions are quite funny.

  2. Jan 29

    My favorite conversation is below. The Hayvard pun was funny but I totally missed the steer joke at the end until it was pointed out today by

  3. Jan 28

New paper: Towards a Human-like Open-Domain Chatbot. Key takeaways: 1. "Perplexity is all a chatbot needs" ;) 2. We're getting closer to a high-quality chatbot that can chat about anything. Paper: Blog:

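The "perplexity is all a chatbot needs" quip refers to perplexity, the exponential of the average per-token negative log-likelihood, as the training objective that tracks conversation quality. A minimal sketch of the metric itself (not Meena's actual evaluation code), in plain numpy:

```python
import numpy as np

def perplexity(token_probs):
    """Perplexity of a sequence, given the model's probability for each
    observed token: exp(mean negative log-likelihood per token)."""
    token_probs = np.asarray(token_probs, dtype=float)
    nll = -np.mean(np.log(token_probs))
    return float(np.exp(nll))
```

For example, a model that assigns probability 0.25 to every token has perplexity 4: it is as uncertain as a uniform choice among four tokens.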
  4. Nov 25, 2019

    AdvProp improves accuracy for a wide range of image models, from small to large. But the improvement seems bigger when the model is larger.

  5. Nov 21, 2019

    And latency on CPU and GPU:

  6. Nov 21, 2019

    Architecture of EfficientDet

  7. Nov 21, 2019

EfficientDet: a new family of efficient object detectors. It is based on EfficientNet and is many times more efficient than state-of-the-art models. Link: Code: coming soon

  8. Nov 12, 2019

    I also highly recommend this nice video that explains the paper very well:

  9. Nov 12, 2019

    Full comparison against state-of-the-art on ImageNet. Noisy Student is our method. Noisy Student + EfficientNet is 11% better than your favorite ResNet-50 😉

  10. Nov 12, 2019

Example predictions on the robustness benchmarks ImageNet-A, C, and P. Black text marks correct predictions made by our model; red text marks incorrect predictions by our baseline model.

  11. Nov 12, 2019

    Want to improve accuracy and robustness of your model? Use unlabeled data! Our new work uses self-training on unlabeled data to achieve 87.4% top-1 on ImageNet, 1% better than SOTA. Huge gains are seen on harder benchmarks (ImageNet-A, C and P). Link:

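The self-training loop described here can be sketched in a few lines. This is a toy illustration, not the paper's implementation: a nearest-centroid classifier on synthetic 2-D blobs stands in for EfficientNet, and Gaussian input noise stands in for the paper's stronger noising (RandAugment, dropout, stochastic depth).

```python
import numpy as np

def fit_centroids(X, y, n_classes):
    # "Training": one centroid per class.
    return np.stack([X[y == c].mean(axis=0) for c in range(n_classes)])

def predict(centroids, X):
    # Assign each point to the nearest class centroid.
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=-1)
    return d.argmin(axis=1)

rng = np.random.default_rng(0)
# Two well-separated blobs: a few labeled points, many unlabeled ones.
X_lab = np.vstack([rng.normal(0, 1, (5, 2)), rng.normal(5, 1, (5, 2))])
y_lab = np.array([0] * 5 + [1] * 5)
X_unlab = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(5, 1, (100, 2))])

# 1. Train a teacher on labeled data only.
teacher = fit_centroids(X_lab, y_lab, n_classes=2)
# 2. Pseudo-label the unlabeled data with the teacher.
pseudo = predict(teacher, X_unlab)
# 3. Train a noised student on labeled + pseudo-labeled data
#    (the paper noises the student, then iterates with the student
#    as the new teacher).
X_all = np.vstack([X_lab, X_unlab + rng.normal(0, 0.1, X_unlab.shape)])
y_all = np.concatenate([y_lab, pseudo])
student = fit_centroids(X_all, y_all, n_classes=2)
```

The student ends up fit on 21x more data than the teacher saw labels for, which is the source of the accuracy and robustness gains.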
  12. Jul 30, 2019

We released all checkpoints and training recipes for EfficientNets, including the best model, EfficientNet-B7, which achieves 84.5% top-1 accuracy on ImageNet. Link:

  13. Jun 19, 2019

XLNet: a new pretraining method for NLP that significantly improves upon BERT on 20 tasks (e.g., SQuAD, GLUE, RACE). arxiv: github (code + pretrained models): With Zhilin Yang, Yiming Yang, Jaime Carbonell, and others.

  14. May 29, 2019

    EfficientNets: a family of more efficient & accurate image classification models. Found by architecture search and scaled up by one weird trick. Link: Github: Blog:
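The "one weird trick" is compound scaling: depth, width, and input resolution are scaled jointly by a single coefficient phi, rather than scaling any one axis alone. A sketch using the paper's constants (alpha = 1.2, beta = 1.1, gamma = 1.15, chosen so alpha * beta^2 * gamma^2 is roughly 2, i.e., each unit increase in phi roughly doubles FLOPs):

```python
# EfficientNet compound scaling constants from the paper.
ALPHA, BETA, GAMMA = 1.2, 1.1, 1.15

def compound_scale(phi, base_res=224):
    """Return (depth_mult, width_mult, resolution) for scaling coefficient phi:
    depth scales as ALPHA**phi, width as BETA**phi, resolution as GAMMA**phi."""
    return ALPHA ** phi, BETA ** phi, round(base_res * GAMMA ** phi)
```

With phi = 0 this recovers the baseline (B0-like) shape; larger phi values produce progressively deeper, wider, higher-resolution variants.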

  15. May 19, 2019

The key idea in these two papers is to ensure prediction(x) = prediction(x + noise), where x is an unlabeled example. People have tried all kinds of noise, e.g., Gaussian noise, adversarial noise, etc. But it looks like data augmentation noise is the real winner.

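The prediction(x) = prediction(x + noise) constraint is enforced as a consistency loss on unlabeled data. A minimal sketch with a toy linear softmax classifier and Gaussian noise (UDA uses data-augmentation noise and a real network; the model and shapes here are illustrative):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def model(x, W):
    return softmax(x @ W)  # toy linear classifier standing in for a network

def consistency_loss(x, W, noise_scale=0.1, rng=None):
    """KL(prediction(x) || prediction(x + noise)), averaged over the batch."""
    rng = rng or np.random.default_rng(0)
    p_clean = model(x, W)                             # prediction(x)
    p_noisy = model(x + rng.normal(0, noise_scale, x.shape), W)
    return np.mean(np.sum(p_clean * (np.log(p_clean) - np.log(p_noisy)), axis=-1))

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 4))   # a batch of unlabeled examples
W = rng.normal(size=(4, 3))
loss = consistency_loss(x, W)
```

No labels appear anywhere: minimizing this term only pushes the model to give the same answer on an example and its noised copy.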
  16. May 18, 2019

    Nice blog post titled "The Quiet Semi-Supervised Revolution" by Vincent Vanhoucke. It discusses two related works by the Google Brain team: Unsupervised Data Augmentation and MixMatch.

  17. May 7, 2019

Introducing MobileNetV3: based on MNASNet, found by architecture search, we applied additional methods to go even further (quantization-friendly SqueezeExcite & Swish + NetAdapt + compact layers). Result: 2x faster and more accurate than MobileNetV2. Link:
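For the quantization-friendly Swish mentioned here: swish(x) = x * sigmoid(x), and MobileNetV3 replaces the sigmoid with the piecewise-linear ReLU6(x + 3) / 6, giving hard-swish. A sketch of both:

```python
import numpy as np

def swish(x):
    # swish(x) = x * sigmoid(x)
    return x / (1.0 + np.exp(-x))

def relu6(x):
    return np.minimum(np.maximum(x, 0.0), 6.0)

def hard_swish(x):
    # h-swish(x) = x * ReLU6(x + 3) / 6: piecewise linear, so it is cheap
    # and friendly to fixed-point (int8) quantization.
    return x * relu6(x + 3.0) / 6.0
```

For x >= 3 hard-swish is exactly x, and for x <= -3 it is exactly 0, which avoids the precision loss a quantized sigmoid would incur in the tails.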

  18. Apr 30, 2019
    Replying to

I don't understand your question well, but the method is used in semi-supervised learning, where you have two losses: a supervised loss and an unsupervised consistency loss. The figure below may help.
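The two-loss setup can be sketched as follows: cross-entropy on the labeled batch plus a weighted consistency term on the unlabeled batch. This is an illustrative toy (linear softmax model, Gaussian noise, MSE consistency), not the paper's training code:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(probs, y):
    return -np.mean(np.log(probs[np.arange(len(y)), y]))

def total_loss(W, x_lab, y_lab, x_unlab, lam=1.0, noise=0.1, rng=None):
    rng = rng or np.random.default_rng(0)
    # Supervised loss: cross-entropy on the labeled batch.
    sup = cross_entropy(softmax(x_lab @ W), y_lab)
    # Unsupervised consistency loss: predictions on an unlabeled batch
    # should match predictions on its noised copy.
    p = softmax(x_unlab @ W)
    p_noisy = softmax((x_unlab + rng.normal(0, noise, x_unlab.shape)) @ W)
    unsup = np.mean((p - p_noisy) ** 2)
    return sup + lam * unsup

rng = np.random.default_rng(1)
W = rng.normal(size=(4, 3))
x_lab, y_lab = rng.normal(size=(6, 4)), rng.integers(0, 3, 6)
x_unlab = rng.normal(size=(32, 4))
loss = total_loss(W, x_lab, y_lab, x_unlab)
```

The weight lam trades off the two terms; with lam = 0 this degenerates to ordinary supervised training.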

  19. Apr 22, 2019

Exciting new work on replacing convolutions with self-attention for vision. Our paper shows that full attention is good but loses a few percent in accuracy, and a middle ground that combines convolutions and self-attention is better. Link:

  20. Apr 16, 2019

We used architecture search to find a better architecture for object detection. Results: better and faster architectures than Mask-RCNN, FPN, and SSD. The architecture also looks unexpected and pretty funky. Link:
