Tweets


  1. Retweeted
    22 hours ago

    About data (asked by & others): 1. Only one occurrence of "Hayvard" in the training data. 2. The sentence that contains "Hayvard" has meaning similar to "Guess what, I obtained my bachelor from Hayvard." 3. No occurrence of "cow*" in the same conversation.

  2. Retweeted
    Jan 28

    Enabling people to converse with chatbots about anything has been a passion of a lifetime for me, and I'm sure of others as well. So I'm very thankful to be able to finally share our results with you all. Hopefully, this will help inform efforts in the area. (1/4)

  3. Retweeted
    Jan 29

    This video explains 's amazing new Meena chatbot! An Evolved Transformer with 2.6B parameters trained on 341 GB / 40B words of conversation data achieves remarkable chatbot performance! "Horses go to Hayvard!"

  4. Jan 30

    I had another conversation with Meena just now. It's not as funny and I don't understand the first answer. But the replies to the next two questions are quite funny.

  5. Jan 29

    My favorite conversation is below. The Hayvard pun was funny but I totally missed the steer joke at the end until it was pointed out today by

  6. Jan 28

    You can find some sample conversations with the bot here:

  7. Jan 28

    New paper: Towards a Human-like Open-Domain Chatbot. Key takeaways: 1. "Perplexity is all a chatbot needs" ;) 2. We're getting closer to a high-quality chatbot that can chat about anything. Paper: Blog:

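The "Perplexity is all a chatbot needs" takeaway refers to the paper's finding that lower perplexity tracks more human-like conversation. A minimal sketch of how perplexity is computed, assuming the model yields one probability per observed target token:

```python
import math

def perplexity(token_probs):
    """Exp of the average negative log-likelihood the model assigns
    to the observed tokens; lower means the model is less 'surprised'."""
    nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(nll)

# A model that spreads probability uniformly over 4 choices is
# exactly as 'perplexed' as a fair 4-way die:
assert abs(perplexity([0.25, 0.25, 0.25, 0.25]) - 4.0) < 1e-9
```

Because it is a simple, automatic function of the model's likelihood, perplexity is far cheaper to measure than human evaluation, which is what makes the correlation useful.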
  8. Retweeted
    Dec 3, 2019

    This video explains AdvProp from ! This technique leverages Adversarial Examples for ImageNet classification by using separate Batch Normalization layers for clean and adversarial mini-batches.

  9. Retweeted
    Nov 26, 2019

    Some nice case studies about how 's AutoML products can help tackle real-world problems in visual inspection across a number of different manufacturing domains, being used by companies like Global Foundries and Siemens.

  10. Nov 25, 2019
  11. Nov 25, 2019

    AdvProp improves accuracy for a wide range of image models, from small to large. But the improvement seems bigger when the model is larger.

  12. Nov 25, 2019

    As a data augmentation method, adversarial examples are more general than other image processing techniques. So I expect AdvProp to be useful everywhere (language, structured data etc.), not just image recognition.

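The adversarial examples used as augmentation here are typically produced by gradient-based attacks. A toy sketch of the classic Fast Gradient Sign Method on a two-feature logistic model (an illustration of the general idea, not the paper's setup):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fgsm(x, grad, eps=0.1):
    """Fast Gradient Sign Method: perturb each input feature by eps in
    the direction (sign of the input gradient) that increases the loss."""
    return [xi + eps * (1.0 if g > 0 else -1.0) for xi, g in zip(x, grad)]

# Toy logistic model (illustrative values only)
w, x, y = [1.0, -2.0], [0.5, 0.5], 1.0
p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))   # prediction for class 1
grad = [(p - y) * wi for wi in w]                   # d(log-loss)/d(input)
x_adv = fgsm(x, grad, eps=0.1)                      # harder "augmented" input
```

Unlike crops or color jitter, this perturbation is derived from the model and the loss itself, which is why it transfers to non-image domains where hand-designed augmentations don't exist.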
  13. Nov 25, 2019

    Many of us tried to use adversarial examples as data augmentation and observed a drop in accuracy. And it seems that simply using two BatchNorms overcomes this mysterious drop in accuracy.

  14. Nov 25, 2019

    AdvProp: One weird trick to use adversarial examples to reduce overfitting. Key idea is to use two BatchNorms, one for normal examples and another one for adversarial examples. Significant gains on ImageNet and other test sets.

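The two-BatchNorm idea can be sketched in a few lines: keep two independent sets of normalization statistics and route each mini-batch to the set matching its type, so the differing feature distributions of clean and adversarial inputs don't contaminate each other. A minimal single-feature sketch (a simplification, not the paper's implementation):

```python
class DualBatchNorm:
    """AdvProp-style batch norm sketch: one set of running statistics
    for clean mini-batches, a separate auxiliary set for adversarial
    ones. Scale/shift parameters are omitted for brevity."""

    def __init__(self, momentum=0.9, eps=1e-5):
        self.momentum = momentum
        self.eps = eps
        # index 0: clean statistics, index 1: adversarial statistics
        self.mean = [0.0, 0.0]
        self.var = [1.0, 1.0]

    def __call__(self, batch, adversarial=False):
        i = 1 if adversarial else 0
        m = sum(batch) / len(batch)
        v = sum((x - m) ** 2 for x in batch) / len(batch)
        # update only the running stats matching this batch type
        self.mean[i] = self.momentum * self.mean[i] + (1 - self.momentum) * m
        self.var[i] = self.momentum * self.var[i] + (1 - self.momentum) * v
        return [(x - m) / (v + self.eps) ** 0.5 for x in batch]
```

At inference time only the clean-batch statistics are used; the auxiliary adversarial BN exists purely to keep training stable.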
  15. Nov 21, 2019

    And latency on CPU and GPU:

  16. Nov 21, 2019

    Architecture of EfficientDet

  17. Nov 21, 2019

    EfficientDet: a new family of efficient object detectors. It is based on EfficientNet, and many times more efficient than state-of-the-art models. Link: Code: coming soon

  18. Nov 18, 2019

    RandAugment was one of the secret sauces behind Noisy Student that I tweeted last week. Code for RandAugment is now open-sourced.

  19. Nov 12, 2019

    I also highly recommend this nice video that explains the paper very well:

  20. Nov 12, 2019

    Method is also super simple: 1) Train a classifier on ImageNet 2) Infer labels on a much larger unlabeled dataset 3) Train a larger classifier on the combined set 4) Iterate the process, adding noise

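The four steps above form a simple self-training loop. A runnable sketch with a deliberately tiny stand-in model (all names and data here are placeholders for illustration, not the paper's code):

```python
def train_fn(data, noise=False):
    """Toy stand-in 'model': threshold at the midpoint of the two class
    means. `noise` is a placeholder for the dropout/augmentation the
    paper applies when training the student."""
    xs0 = [x for x, y in data if y == 0]
    xs1 = [x for x, y in data if y == 1]
    t = (sum(xs0) / len(xs0) + sum(xs1) / len(xs1)) / 2
    return lambda x: 0 if x < t else 1

def noisy_student(train, labeled, unlabeled, rounds=3):
    """The four steps from the tweet as a loop."""
    teacher = train(labeled)                              # 1) train a classifier
    for _ in range(rounds):                               # 4) iterate the process
        pseudo = [(x, teacher(x)) for x in unlabeled]     # 2) infer labels on unlabeled data
        teacher = train(labeled + pseudo, noise=True)     # 3) train a noised student on the union
    return teacher

labeled = [(0.0, 0), (1.0, 0), (9.0, 1), (10.0, 1)]
unlabeled = [0.5, 2.0, 8.0, 9.5]
model = noisy_student(train_fn, labeled, unlabeled)
```

In the paper the student is larger than the teacher and is noised via dropout, stochastic depth, and RandAugment; this sketch only shows the data flow of the iteration.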
