Raphael Shu

@raphaelshu

Natural language understanding / machine translation / book reading

New York
Joined December 2011

Tweets

  1. Retweeted
    Jan 24

    Some representative references on each group with more exciting approaches missed and more to come soon!

  2. Retweeted

    The "Deep Learning Toolbox" has greatly expanded in the last decade thanks to our wonderful research community. Also, important progress has been made to make our community more inclusive and less toxic. Still, there's LOTS to do, and I plan to keep focusing on advancing both.

  3. Retweeted
    Dec 31, 2019

    At the very last day of this decade our paper w Omar, Seokho and on molecular geometry prediction with generative graph neural nets has been published to . Check it out here . Happy New Year everyone!

  4. Dec 31, 2019

    Happy New Year and 2020!

  5. Retweeted
    Dec 19, 2019

    Decisions released 🎉 Congratulations to accepted papers; to those who we could not accommodate, we wish you success in your ongoing research. See our blog for the first of our reflections. See you soon in Ethiopia. 🇪🇹🌍

  6. Dec 5, 2019

    Finally submitted my doctoral thesis. Let's grab a meal around the University of Tokyo for the first time in a while. Too bad it almost overlapped with the ACL deadline.

  7. Retweeted
    Nov 21, 2019

    What does a pruned deep neural network "forget"? Very excited to share our recent work w Aaron Courville, Yann Dauphin and

  8. Retweeted
    Nov 19, 2019

    Sam Bowman is giving an invited talk at ! Check out the live video at

  9. Oct 21, 2019

    Reviewing an ICLR paper with more than 35 pages of content. So tough.

  10. Retweeted
    Oct 17, 2019

    desperate plea for help from AAAI program chairs -- everyone please stop trying to see your reviews in CMT so we can fix the problem -- CMT can't handle the load at all and we can't get in anymore ourselves!

  11. Oct 12, 2019

    The shock of the typhoon felt like an earthquake, and power outages are continuing in the surrounding area.

  12. Retweeted
    Oct 4, 2019

    Talk I gave explaining strategies to supercharge your PyTorch code (16-bit, multi+single node parallelization, dataloaders, accumulated grads, and the flags to turn these features on).

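One of the strategies that talk mentions, gradient accumulation, can be sketched in a few lines. This is a minimal, framework-free illustration (a toy 1-D linear model with a hand-derived gradient, not the actual PyTorch API; all names here are hypothetical): instead of updating after every micro-batch, gradients are summed over `accum_steps` micro-batches and applied as one averaged step, which mimics a larger effective batch size.

```python
# Minimal sketch of gradient accumulation, assuming a toy 1-D linear
# model y = w * x with squared-error loss. Purely illustrative.

def grad(w, x, y):
    """Gradient of the squared error (w*x - y)**2 with respect to w."""
    return 2.0 * (w * x - y) * x

def train(data, w=0.0, lr=0.1, accum_steps=4):
    accum = 0.0
    for i, (x, y) in enumerate(data, start=1):
        accum += grad(w, x, y)            # accumulate instead of stepping
        if i % accum_steps == 0:          # step once per accum_steps batches
            w -= lr * accum / accum_steps # apply one averaged update
            accum = 0.0
    return w

# Fit w toward 2.0 on samples drawn from y = 2 * x.
data = [(x, 2.0 * x) for x in (1.0, 2.0, 0.5, 1.5)] * 50
w = train(data)
```

In a real framework the same pattern is usually a flag or a modulo check around `optimizer.step()`; the point is that the parameter is held fixed while gradients from several micro-batches are summed.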
  13. proslijedio/la je Tweet

    The community got treated to 2 outstanding talks in the last 2 days! on “Massive Scale Analytics” and on “A Generalized Framework of Sequence Generation”

  14. Oct 4, 2019

    Code release for our recent work on non-autoregressive neural machine translation using latent variables and deterministic inference, a collaboration with , and . Check it out . All comments welcomed.

  15. Retweeted
    Oct 2, 2019

    "The traditional declarative programming model of building a graph and executing it via a tf.Session is discouraged" the end of an era

  16. Retweeted
    Oct 1, 2019

    ICML 2021 in SEOUL, KOREA!

  17. Oct 1, 2019

    Releasing the code for my ACL'19 paper for learning discrete syntactic codes for diverse translation, check it out here

  18. Retweeted
    Sep 6, 2019

    Today, we’re happy to release two new natural language dialog datasets, which capture the richness of natural dialog, for use in training more effective digital assistants that can understand complex language. Learn more and grab the data at ↓

  19. Retweeted
    Sep 6, 2019

    Thrilled to be able to share what I've been working on for the last year - solving the fundamental equations of quantum mechanics with deep learning!

  20. Retweeted
    Sep 5, 2019

    Our @emnlp2019 paper on context-aware NMT is out! After our ACL paper, we go further in using less document-level data and propose an approach that uses only monolingual document-level data. , ,

