Tweets

You have blocked @tanmingxing

Are you sure you want to view these Tweets? Viewing Tweets won't unblock @tanmingxing

  1. Retweeted
    Jan 9

    What I did over my winter break! It gives me great pleasure to share this summary of some of our work in 2019, on behalf of all my colleagues at & .

  2. Dec 3, 2019
  3. Retweeted
    Nov 23, 2019

    The TLDR of the paper; use adversarial examples as training data augmentation, maintain separate BatchNorm for normal vs adversarial examples. Neat. As usual I've ported & tested weights

  4. Nov 25, 2019

    Can adversarial examples improve image recognition? Check out our recent work: AdvProp, achieving ImageNet top-1 accuracy 85.5% (no extra data) with adversarial examples! Arxiv: Checkpoints:

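The AdvProp recipe summarized in the two tweets above (adversarial examples as extra training data, with a separate BatchNorm for the adversarial mini-batches) can be sketched in a few lines. This is a toy NumPy illustration of the dual-BatchNorm routing only, not the authors' EfficientNet code; the PGD attack is replaced by a fixed sign perturbation, and the layer shapes are made up for the demo.

```python
import numpy as np

class BatchNorm:
    """Minimal batch normalization over the batch axis (illustrative)."""
    def __init__(self, dim, eps=1e-5):
        self.gamma = np.ones(dim)
        self.beta = np.zeros(dim)
        self.eps = eps

    def __call__(self, x):
        mu = x.mean(axis=0)
        var = x.var(axis=0)
        return self.gamma * (x - mu) / np.sqrt(var + self.eps) + self.beta

class DualBNBlock:
    """AdvProp-style block: one set of shared weights, but separate
    BatchNorm statistics for clean vs. adversarial mini-batches
    (a sketch, not the official implementation)."""
    def __init__(self, dim, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.normal(scale=0.1, size=(dim, dim))  # shared weights
        self.bn_clean = BatchNorm(dim)  # main BN, used at test time
        self.bn_adv = BatchNorm(dim)    # auxiliary BN for adversarial batches

    def forward(self, x, adversarial=False):
        h = x @ self.w
        bn = self.bn_adv if adversarial else self.bn_clean
        return np.maximum(bn(h), 0.0)  # ReLU

# Each training step forwards the clean batch through bn_clean and the
# perturbed batch through bn_adv; both losses update the shared weights.
rng = np.random.default_rng(1)
block = DualBNBlock(8)
clean = rng.normal(size=(32, 8))
adv = clean + 0.01 * np.sign(rng.normal(size=(32, 8)))  # stand-in for PGD
out_clean = block.forward(clean, adversarial=False)
out_adv = block.forward(adv, adversarial=True)
```

At test time only the clean-branch BatchNorm is used, which is how the auxiliary statistics stay out of inference.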
  5. Nov 21, 2019

    Excited to share our work on efficient neural architectures for object detection! New state-of-the-art accuracy (51 mAP on COCO for single-model single-scale), with an order-of-magnitude better efficiency! Collaborated with and .

  6. Retweeted
    Nov 12, 2019

    Full comparison against state-of-the-art on ImageNet. Noisy Student is our method. Noisy Student + EfficientNet is 11% better than your favorite ResNet-50 😉

  7. Retweeted
    Nov 12, 2019

    Want to improve accuracy and robustness of your model? Use unlabeled data! Our new work uses self-training on unlabeled data to achieve 87.4% top-1 on ImageNet, 1% better than SOTA. Huge gains are seen on harder benchmarks (ImageNet-A, C and P). Link:

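The self-training loop described in the Noisy Student tweet above — a teacher pseudo-labels unlabeled data, a student is trained on labeled plus noised pseudo-labeled data, and the student becomes the next teacher — can be illustrated on a deliberately tiny 1-D problem. Everything here (the threshold "model", the Gaussian input jitter standing in for the paper's noise, the data) is an illustrative assumption, not the EfficientNet setup.

```python
import random

random.seed(0)

def fit_threshold(data):
    """Toy stand-in for model training: pick the decision threshold that
    best separates the labeled pairs (true boundary here is x > 0.5)."""
    best_t, best_acc = 0.0, -1.0
    for t in [i / 100 for i in range(101)]:
        acc = sum((x > t) == y for x, y in data) / len(data)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

# Small labeled set, large unlabeled pool (the paper's regime).
labeled = [(x, x > 0.5) for x in (random.random() for _ in range(20))]
unlabeled = [random.random() for _ in range(500)]

teacher = fit_threshold(labeled)
for _ in range(3):  # iterate: the student becomes the next teacher
    pseudo = [(x, x > teacher) for x in unlabeled]  # teacher pseudo-labels
    # The student sees *noised* inputs; the teacher labeled clean ones.
    noisy = [(x + random.gauss(0, 0.05), y) for x, y in pseudo]
    student = fit_threshold(labeled + noisy)
    teacher = student
```

The asymmetry — clean inputs for the teacher's labels, noised inputs for the student's training — is the "noisy" part of Noisy Student.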
  8. Retweeted
    Oct 25, 2019

    Great to see this collaboration between Google researchers & engineers launch, with major improvement to search quality! The work brings together many things we've been working on over the last few years: Transformers, BERT, , TPU pods, ...

  9. Oct 18, 2019

    AutoML for video neural architecture design. Results are quite promising!

  10. Aug 6, 2019

    Introducing EfficientNet-EdgeTPU: customized for mobile accelerators, with higher accuracy and 10x faster inference speed. blog post: Code and pretrained models:

  11. Retweeted
    Jul 30, 2019

    We released all checkpoints and training recipes of EfficientNets, including the best model EfficientNet-B7 that achieves accuracy of 84.5% top-1 on ImageNet. Link:

  12. Jul 24, 2019

    Introducing MixNet: AutoML + a new mixed depthwise conv (MDConv). SOTA results for mobile: 78.9% ImageNet top-1 accuracy under typical mobile settings (<600M FLOPS). Paper: Code & models:

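The mixed depthwise convolution (MDConv) announced above splits the channels into groups and gives each group its own depthwise kernel size, then concatenates the results. A minimal 1-D NumPy sketch of that split-convolve-concatenate idea, assuming a fixed averaging kernel per size (the real layer learns its kernels, and MixNet operates on 2-D feature maps):

```python
import numpy as np

def depthwise_conv1d(x, k):
    """Per-channel ('depthwise') 1-D convolution with a uniform kernel of
    odd size k and 'same' padding. x has shape (channels, length)."""
    pad = k // 2
    xp = np.pad(x, ((0, 0), (pad, pad)))
    kernel = np.full(k, 1.0 / k)  # illustrative fixed kernel, not learned
    return np.stack([np.convolve(c, kernel, mode="valid") for c in xp])

def mixed_depthwise_conv(x, kernel_sizes=(3, 5, 7)):
    """MDConv sketch: split channels into len(kernel_sizes) groups, run a
    depthwise conv with a different kernel size on each group, concat
    (an assumption-level re-implementation, not the MixNet code)."""
    groups = np.array_split(x, len(kernel_sizes), axis=0)
    outs = [depthwise_conv1d(g, k) for g, k in zip(groups, kernel_sizes)]
    return np.concatenate(outs, axis=0)

x = np.random.default_rng(0).normal(size=(12, 32))  # 12 channels, length 32
y = mixed_depthwise_conv(x)
```

The appeal is that small kernels stay cheap on most channels while a few channels get a large receptive field, at the same cost as an ordinary depthwise conv.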
  13. Jun 27, 2019

    AutoAugment works pretty well on detection as well.

  14. Retweeted
    May 30, 2019

    New work by Mingxing Tan and of on automatically designing much more efficient-and-highly-accurate computer vision models. This will enable more sophisticated uses of computer vision on mobile devices, et al. Graph below highlights cost v. accuracy tradeoff.

  15. Retweeted
    May 29, 2019

    EfficientNets: a family of more efficient & accurate image classification models. Found by architecture search and scaled up by one weird trick. Link: Github: Blog:

  16. May 29, 2019

    EfficientNet: surpass state-of-the-art accuracy with 10x better efficiency! If you are still using ResNet or Inception, please give it a try: EfficientNets are up to 16x more efficient than ResNet, and up to 13x more efficient than Inception on ImageNet.

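The "one weird trick" in the EfficientNet tweets above is compound scaling: depth, width, and input resolution are scaled together by a single coefficient φ, using per-dimension coefficients α, β, γ found by grid search with α·β²·γ² ≈ 2, so each increment of φ roughly doubles FLOPS. The α, β, γ values below are the ones reported in the paper; the B0 baseline numbers are illustrative round figures, not the exact published architecture.

```python
# Compound scaling sketch: depth *= alpha**phi, width *= beta**phi,
# resolution *= gamma**phi, with alpha * beta**2 * gamma**2 ~= 2.
alpha, beta, gamma = 1.2, 1.1, 1.15  # grid-searched coefficients from the paper

def compound_scale(phi, base_depth=16, base_width=1.0, base_resolution=224):
    """Return (layers, width multiplier, input resolution) for scale phi.
    Baseline values are illustrative, not the exact EfficientNet-B0 spec."""
    return (round(base_depth * alpha ** phi),        # more layers
            round(base_width * beta ** phi, 2),      # more channels
            round(base_resolution * gamma ** phi))   # larger input images

for phi in range(4):  # roughly B0 through B3
    print(phi, compound_scale(phi))
```

Scaling all three dimensions jointly is what lets a single small searched architecture stretch into a whole family of models.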
  17. Retweeted
    May 11, 2019

    As a pro tip provided by the baker, it turns out you can cut tiramisu with dental floss.

  18. Retweeted
    May 7, 2019

    Introducing MobileNetV3: Based on MNASNet, found by architecture search, we applied additional methods to go even further (quantization friendly SqueezeExcite & Swish + NetAdapt + Compact layers). Result: 2x faster and more accurate than MobileNetV2. Link:

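The "quantization friendly SqueezeExcite & Swish" in the MobileNetV3 tweet above refers to the hard variants of those activations: the sigmoid inside swish (and inside squeeze-and-excite gating) is replaced by a piecewise-linear ReLU6 form, which quantizes cleanly on integer hardware. A minimal sketch of the two functions as defined in the MobileNetV3 paper:

```python
def relu6(x: float) -> float:
    """ReLU clipped at 6, the standard quantization-friendly activation."""
    return min(max(x, 0.0), 6.0)

def hard_sigmoid(x: float) -> float:
    """Piecewise-linear replacement for sigmoid: ReLU6(x + 3) / 6."""
    return relu6(x + 3.0) / 6.0

def hard_swish(x: float) -> float:
    """Hard-swish from MobileNetV3: x * ReLU6(x + 3) / 6,
    replacing swish's x * sigmoid(x)."""
    return x * hard_sigmoid(x)
```

Both functions are exact at the extremes (0 below -3, identity/1 above +3) and linear in between, so they cost only adds, multiplies, and clamps.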
  19. Retweeted
    Jan 15, 2019

    On behalf of the whole Google Research & community, I was excited to put together a post describing some of the work that we collectively did in 2018. I hope you enjoy it! Thanks to everyone who helped make this work possible!

  20. Retweeted
    Aug 7, 2018

    Inspired by recent progress in neural architecture search, Google researchers explore an automated approach for designing mobile models with both high accuracy and speed, with results that outperform current state-of-the-art mobile models.

