Tweets


  1. Pinned Tweet
    Nov 3, 2019

    Excited to share new work!!! “Generalization through Memorization: Nearest Neighbor Language Models” We introduce kNN-LMs, which extend LMs with nearest neighbor search in embedding space, achieving a new state-of-the-art perplexity on Wikitext-103, without additional training!

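    (A minimal sketch of the kNN-LM interpolation described here follows the list of tweets below.)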
  2. Retweeted
    Jan 22

    Enjoyed the kNN-LM paper by Khandelwal and Levy et al. (2019). Using an interpolated non-parametric and parametric model, they set a SOTA on Wikitext, reducing perplexity by 2.9 points. This approach helps with long-tail language predictions.

  3. Retweeted
    Jan 6

    . people’s papers #1— and colleagues (incl. at ) show the power of neural nets learning a context similarity function for kNN in LM prediction—almost 3 PPL gain on WikiText-103—maybe most useful for domain transfer

  4. Dec 20, 2019

    Our work on nearest neighbor language models has been accepted. Woohoo!! Code coming in the new year!

  5. Retweeted
    Dec 17, 2019

    Update: I could reproduce all NER results from the XLM-RoBERTa paper🤗

  6. Retweeted

    📆Monday 11/18: & of will share 2 lectures: “Generalization through Memorization: Nearest Neighbor Language Models” & “Probing Neural NLP: Ideas and Problems” 🧠 Join us!

  7. Nov 7, 2019

    Gains on cross-lingual benchmarks are amazing!!! And the 100 languages are clearly identified in the paper. Nice work and co!!

  8. Retweeted
    Nov 3, 2019

    Super awesome work! Really nice results, especially on Domain Adaptation!

  9. Retweeted
    Nov 3, 2019

    Generalization through Memorization: Nearest Neighbor Language Models. Reduces perplexity from 18.27 to 15.79 (SOTA) on Wikitext-103 using kNN and a pretrained Wikitext LM without further training.

  10. Retweeted
    Nov 3, 2019

    Improve your language model by converting it into a deep nearest neighbour classifier! The amazing pushes SOTA on Wikitext-103 by nearly 3 points, without any additional training (and gets a few other surprising results too).

  11. Nov 3, 2019

    Work done at with amazing collaborators , and as well as my advisor !! Paper: Code available soon!

  12. Nov 3, 2019

    We also show that kNN-LM can efficiently scale up LMs to larger training sets and allows for effective domain adaptation, by simply varying the nearest neighbor datastore without further training. It seems to be helpful in predicting long tail patterns, such as factual knowledge!

  13. Retweeted
    Oct 31, 2019

    we talk about "interpretation methods" for neural models, and want our interpretations to be "faithful", but what does it really mean? attempts to clear the mess and points to where we are and where we should be going.

  14. Retweeted
    Oct 30, 2019

    Excited to share our work on BART, a method for pre-training seq2seq models by de-noising text. BART outperforms previous work on a bunch of generation tasks (summarization/dialogue/QA), while getting similar performance to RoBERTa on SQuAD/GLUE

  15. Retweeted
    Oct 17, 2019

    A story about neural networks and language understanding with quotes from yours truly, , , , and others, plus an amazing illustration of BERT teaching neural networks to other BERTs

  16. Retweeted
    Oct 17, 2019

    "What's the Aquaman actor's next movie?" Complex questions are common in daily comms, but current open-domain QA systems struggle with finding all supporting facts needed. We present a system in paper that answers them efficiently & explainably:

  17. Retweeted
    Oct 14, 2019

    Depends on the number of “positive” examples in the dataset? Ok, I’m done.

  18. Retweeted
    Oct 10, 2019

    Happening today! I am speaking about StanfordNLP, our new toolkit at Dev Conference. Come talk to me if you are around and interested!

  19. Retweeted
    Sep 9, 2019

    in "Show Your Work," we look at the status quo in experimental reporting in NLP -- it's abysmal -- and propose concrete ways to do better. to appear at EMNLP, by , , , , &

  20. Retweeted
    Sep 9, 2019

    New paper alert: So, there are quite a few methods for trying to uncover what an NN model _knows_ about some task. If you ask the same question several different ways, will you get the same qualitative conclusion? (1/N)

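The pinned tweet and item 12 describe the kNN-LM mechanism: store an embedding of every training context together with the token that followed it, retrieve the nearest stored contexts at test time, turn their distances into a distribution over next tokens, and interpolate that distribution with the base LM's. Domain adaptation then amounts to swapping in a datastore built from the new domain, with no further training. The sketch below is only illustrative, not the authors' released code: it assumes a generic embedding model, uses a brute-force L2 search where a real system would use an approximate index such as FAISS, and every name (build_datastore, knn_lm_prob, lambda_, temperature) is made up for the example.

```python
# Minimal kNN-LM sketch (illustrative; names and hyperparameters are hypothetical).
import numpy as np


def build_datastore(context_embeddings: np.ndarray, next_tokens: np.ndarray):
    """Datastore: one (context embedding, observed next token) pair per training position."""
    return context_embeddings.astype(np.float32), next_tokens.astype(np.int64)


def knn_lm_prob(p_lm, query, keys, values, k=8, temperature=1.0, lambda_=0.25):
    """Return lambda * p_kNN + (1 - lambda) * p_LM for one test context."""
    # Brute-force squared-L2 search; a real system would use an ANN index (e.g. FAISS).
    dists = np.sum((keys - query) ** 2, axis=1)
    nn = np.argsort(dists)[:k]
    # Softmax over negative distances weights closer neighbors more heavily
    # (shifted by the minimum distance for numerical stability).
    d = dists[nn]
    weights = np.exp(-(d - d.min()) / temperature)
    weights /= weights.sum()
    # Each neighbor votes for the token that followed it in the training data.
    p_knn = np.zeros_like(p_lm)
    np.add.at(p_knn, values[nn], weights)
    return lambda_ * p_knn + (1.0 - lambda_) * p_lm


# Toy usage: 2-d context embeddings, a vocabulary of 5 tokens.
rng = np.random.default_rng(0)
keys, values = build_datastore(rng.normal(size=(100, 2)), rng.integers(0, 5, size=100))
p_lm = np.full(5, 0.2)                         # deliberately uninformative base LM
query = rng.normal(size=2).astype(np.float32)
print(knn_lm_prob(p_lm, query, keys, values))  # interpolated distribution, sums to 1.0
```

In this toy setup, the "domain adaptation without further training" of item 12 would amount to nothing more than passing a different keys/values pair built from the new domain's text.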
