Search results
  1. Nov 7, 2019

    Huge congratulations to on winning Best Paper! (From 2800+ submissions.) And on her delightful talk: video online. Lisa did this work with me as a junior. First time any *ACL best paper award has gone to an undergrad project??

  2. Nov 9, 2019

    Congratulations to the best paper award winners! 4/4 == Best Demo Paper Award == AllenNLP Interpret: A Framework for Explaining Predictions of NLP Models Eric Wallace, Jens Tuyls, Junlin Wang, Sanjay Subramanian, Matt Gardner, Sameer Singh

  3. Nov 8, 2019
  4. 16 minutes ago

    Our pick of the week: the paper by Almarwani et al. on "Efficient Sentence Embedding using Discrete Cosine Transform". By

  5. Nov 19, 2019

    Just uploaded the GoldEn Retriever poster we presented at (exactly) two weeks ago: Lots of interesting discussions at EMNLP, looking forward to more advances in multi-hop QA in the wild! 📜: 💾:

  6. Nov 17, 2019

    Of the many excellent papers at , this is the one I can't stop thinking about. On NMT Search Errors and Model Errors: Cat Got Your Tongue? by Felix Stahlberg & Bill Byrne Our current NMT models might be "wrong"🚫 👇thread 👇 1/7

  7. Nov 9, 2019

    Congrats to the best paper award winners! 3/4 = Best Resource Paper Award = The FLORES Evaluation Datasets for Low-Resource Machine Translation: Nepali–English and Sinhala–English Guzmán, Chen, Ott, Pino, Lample, Koehn, Chaudhary, Ranzato

  8. Nov 8, 2019

    You couldn't make it to ? Missed some sessions, or interested in what others liked? Our impressions, with , and Vassilina, are out for you to read over the weekend.

  9. Nov 8, 2019

    Congratulations to & on being the Best Paper runner-up with Designing and Interpreting Probes with Control Tasks. And hearty congratulations to the winner, , of course!

  10. Nov 7, 2019

    A nice slide deck about embedding specialization methods, covering everything from classical word2vec to the latest contextualized vector representations such as BERT. A tutorial that comprehensively covers methods for tuning distributed representations for specific purposes; the final chapter is especially recommended.

  11. Nov 7, 2019

    Now available: slides for the tutorial on Data Collection and End-to-End Learning for Conversational AI. Presenters: Paweł Budzianowski, Iñigo Casanueva, Ivan Vulić (). University of Cambridge, . URL:

  12. Nov 7, 2019

    Proud of undergrads Jack Merullo and Luke Yeh for doing a great job presenting our work on characterizing racial bias in football commentary! This was their first time attending an conference. Also FYI: Jack is currently applying to PhD programs!

  13. Nov 7, 2019

    Struggling to understand how has managed to accept a paper titled "Charge-Based Prison Term Prediction with Deep Gating Network"; this is real brain-dead stuff

  14. Nov 6, 2019

    Day 3, talk 2 in 201A–C: 's talk on "LXMERT: Learning Cross-Modality Encoder Representations from Transformers", with several new visualizations/analyses (+SotA on 3–4 vision-language tasks). PS: I am hiring *POSTDOCS* and am still around all day today and tomorrow, so please chat/share! 🙂

  15. Nov 6, 2019

    Really excited to see sharing his successes *and* failures! Also loving the shade thrown at bad reviewers who complain about not getting SOTA

  16. Nov 6, 2019

    A humble start to 's keynote at , acknowledging among others for coming up with neural nets for machine translation :)

  17. Nov 6, 2019

    Your comprehensive guide to papers that use in from 😉 Part I: Language Models, Conversational AI, KGs from text, KG embeddings. Part II is coming soon! @emnlp2019

  18. Nov 5, 2019

    Interested in knowing whether and how open-domain NLG is biased? First, you need to go beyond sentiment analysis. Come to Emily's talk at 4:00pm in session 7C, AWE 203-205, to learn more.

  19. Nov 5, 2019
  20. If you’re attending , we’re presenting a live demo today, 13:30–15:00, of VizSeq, a visual analysis tool for text generation tasks. Hope to see you there! You can also learn more about VizSeq here: Paper: Code:
