Khoa Duong

@dnanhkhoa

Nothing to say

Joined January 2014

Tweets


  1. Retweeted
    Sep 15, 2019

    The slides of all talks from the NLP session at are now online, featuring talks from & me

  2. Retweeted
    Sep 8, 2019

    1/ The blog post is a nice demonstration of some phenomena which, luckily, we have a deep literature on. So allow me to relay some of my current knowledge. Here's a graphical summary again.

  3. Retweeted
    Aug 28, 2019

    There is a trend for huge Transformers. We went the other way: decreasing the size! 🤗 Introducing DistilBERT: a smaller, faster, cheaper, lighter BERT trained w/ distillation! 95% of BERT's GLUE perf w/ 66M parameters. 📃: 💻:

  4. Retweeted
    May 7, 2019

    New blog post and ICLR paper on the *Universal Transformer*. Generalizing the Transformer to give it a recurrent inductive bias and make it computationally universal. Work of Amsterdam's Mostafa Dehghani () while at Google Brain.

  5. Retweeted
    May 3, 2019

    Interesting developments happened in 2018/2019 for natural language generation decoding algorithms: here's a thread with some papers & code. So, the two most common decoders for language generation used to be greedy-decoding (GD) and beam-search (BS). [1/9]

  6. Retweeted
    Apr 16, 2019

    We used architecture search to find a better architecture for object detection. Results: better and faster architectures than Mask-RCNN, FPN, and SSD. The architecture also looks unexpected and pretty funky. Link:

  7. Retweeted
    May 1, 2018

    NEWSROOM -- a corpus of 1.3M (1,321,995) article-summary pairs for automated summarization. It's big, it's diverse, and it's an open challenge. Oh, and we are pretty excited about it! Joint work with Max Grusky and

  8. Retweeted
    Apr 30, 2018

    I'm really loving this article on "Rules of Machine Learning" by Martin Zinkevich ... some favorites:

  9. Retweeted
    Apr 25, 2018

    Happy to announce our QANet models, #1 on question answering dataset (SQuAD). 3 ideas: deep & fast arch (130+ layers), data augmentation, transfer learning. Joint work /w , Quoc Le, et al. See our paper

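The DistilBERT retweet (item 3) credits knowledge distillation for keeping 95% of BERT's GLUE performance with 66M parameters. A minimal sketch of the core distillation objective — a temperature-softened KL divergence between teacher and student output distributions; the function names and toy logits here are invented for illustration, and DistilBERT's full training loss additionally combines masked-LM and cosine-embedding terms:

```python
import math

def softmax(logits, T=1.0):
    """Temperature-scaled softmax over a list of logits."""
    exps = [math.exp(z / T) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) on temperature-softened distributions,
    scaled by T^2 so gradients keep a comparable magnitude."""
    p = softmax(teacher_logits, T)  # soft targets from the teacher
    q = softmax(student_logits, T)  # student predictions
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
    return T * T * kl

# A student that matches the teacher exactly incurs zero loss:
print(distillation_loss([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))  # → 0.0
```

The temperature T > 1 softens both distributions so the student also learns from the teacher's small "dark knowledge" probabilities on wrong classes, not just the argmax.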
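The decoding-algorithms thread (item 5) contrasts greedy decoding and beam search. A toy sketch of the two on a hand-built bigram model — the vocabulary and probabilities are invented here, and the thread itself goes on to survey newer sampling-based decoders beyond these two:

```python
import math

# Toy bigram "language model": log-probability of the next token
# given the previous one. Purely illustrative.
LM = {
    "<s>": {"the": math.log(0.6), "a": math.log(0.4)},
    "the": {"cat": math.log(0.5), "dog": math.log(0.5)},
    "a":   {"cat": math.log(0.9), "dog": math.log(0.1)},
    "cat": {"</s>": 0.0},
    "dog": {"</s>": 0.0},
}

def greedy_decode(start="<s>"):
    """Pick the single most likely next token at every step."""
    seq = [start]
    while seq[-1] != "</s>":
        seq.append(max(LM[seq[-1]], key=LM[seq[-1]].get))
    return seq

def beam_decode(k=2, start="<s>"):
    """Keep the k highest-scoring partial sequences at every step."""
    beams = [([start], 0.0)]
    while any(seq[-1] != "</s>" for seq, _ in beams):
        candidates = []
        for seq, score in beams:
            if seq[-1] == "</s>":          # finished hypotheses carry over
                candidates.append((seq, score))
                continue
            for tok, lp in LM[seq[-1]].items():
                candidates.append((seq + [tok], score + lp))
        beams = sorted(candidates, key=lambda b: b[1], reverse=True)[:k]
    return beams[0][0]

print(greedy_decode())  # ['<s>', 'the', 'cat', '</s>']
print(beam_decode())    # ['<s>', 'a', 'cat', '</s>']
```

Greedy commits to "the" (p=0.6) and ends with total probability 0.3, while the beam keeps "a" alive and finds "a cat" with probability 0.36 — the classic case where locally optimal choices miss the globally better sequence.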
