Tweets
Pinned Tweet
Happy to receive a Facebook PhD Fellowship! I feel extremely lucky to be supervised by @iatitov and @RicoSennrich, who always support me, and to be part of such great groups as @AmsterdamNLP and @EdinburghNLP. https://research.fb.com/blog/2020/01/announcing-the-recipients-of-the-2020-facebook-fellowship-awards/
-
Lena Voita Retweeted
Great talk by @lena_voita at #amld 2020, and more to read on her blog. pic.twitter.com/v2pnvk52H3
-
NLP track at Applied Machine Learning Days at EPFL is about to start! Come talk to me if you're here :) #AMLD2020 pic.twitter.com/036LrktDA5
-
I was excited about giving a talk at @naverlabseurope before, but the snow has definitely made it even more appealing :) Hope it stays till Thursday! https://twitter.com/mgalle/status/1221700252660588544
-
Today I gave a talk at Google Research Berlin about our Transformer analysis work. Met a lot of great people and feel totally happy :) Want to know the details? Here are the blog posts: The Story of Heads: https://lena-voita.github.io/posts/acl19_heads.html, Evolution of Representations: https://lena-voita.github.io/posts/emnlp19_evolution.html pic.twitter.com/x3EDsrUkjZ
-
Lena Voita Retweeted
Interesting finding from @lena_voita: masked language models (i.e. BERT) are much better at preserving token identity than either machine translation or traditional language models. (Joint work w/ @RicoSennrich & @iatitov. More info here: https://lena-voita.github.io/posts/emnlp19_evolution.html) pic.twitter.com/wdXLu4Whtp
-
Lena Voita Retweeted
#nlphighlights 98: @lena_voita tells us about the relative importance of attention heads in multi-headed attention and the evolution of token representations in Transformers. @waleed_ammar and I had fun chatting with Lena for this episode. https://soundcloud.com/nlp-highlights/98-analyzing-information-flow-in-transformers-with-elena-voita
-
Lena Voita Retweeted
You couldn't make it to #emnlp2019? You missed some sessions, or are interested in what others liked? Our impressions, with @laurent_besacie, @hadyelsahar and Vassilina, are out for you to read over the weekend. https://europe.naverlabs.com/blog/emnlp2019-what-we-saw-and-liked/
-
3rd day of @emnlp2019: @kchonyc in his great keynote talks about sequence generation with an adaptive order and mentions (among others) our @NeurIPSConf paper by my Yandex student Dima Emelianenko (w/ Pavel Serdyukov). By the way, the paper is out! https://arxiv.org/abs/1911.00176 pic.twitter.com/xg1nck3AzE
-
2nd day of @emnlp: Evolution of Representations in the Transformer! 16:30-18:00, Hall 2A, poster P43. https://www.aclweb.org/anthology/D19-1448.pdf (another paper with my research parents @iatitov and @RicoSennrich) https://twitter.com/lena_voita/status/1173517191586693120
-
Lena Voita Retweeted
Wednesday presentations at #emnlp2019 by @lena_voita and Bailin Wang (@berlin1993428). Papers: https://aclweb.org/anthology/D19-1391.pdf https://aclweb.org/anthology/D19-1448.pdf Lena's blog with lots of extra details: https://lena-voita.github.io/posts/emnlp19_evolution.html pic.twitter.com/5L4BvI8moy
-
Lena Voita Retweeted
Mentioned @lena_voita's NMT paper; 2 other #emnlp2019 papers presented on Tue: @chunchuan_lyu on multi-pass decoding for semantic role labeling (w/ Shay Cohen) https://www.aclweb.org/anthology/D19-1099.pdf, @BZhangGo on training deep and fast Transformers (w/ @RicoSennrich) https://www.aclweb.org/anthology/D19-1083.pdf pic.twitter.com/VTnBY59KJB
-
If at EMNLP, come talk to me at the poster for this paper at 16:30 on Wednesday, Hall 2A :) (paper with @RicoSennrich and @iatitov) https://twitter.com/maithra_raghu/status/1190670033132744704
-
Lena Voita Retweeted
Exciting work! Just implemented this in https://github.com/rsennrich/subword-nmt https://twitter.com/lena_voita/status/1189546512491134977
-
Lena Voita Retweeted
Tue at 13:30, Hall 2A: @lena_voita will show how a document-level MT model can be trained *without* any document-level parallel data + look into which phenomena are hard to capture with monolingual data alone (w/ @RicoSennrich) https://www.aclweb.org/anthology/D19-1081/ #emnlp2019 @EdinburghNLP pic.twitter.com/DvJDo7vZOT
-
[3/3] Takeaway message: we show how to train models with BPE and make them more robust and up to 3 BLEU better. And yes, I do want to make one of my research parents a bit happy :) @RicoSennrich
-
[2/3] Usually, models show pathological behavior of token embeddings: the vast majority of the closest neighbors of rare tokens are rare tokens. But not with BPE-dropout! The embeddings are more sensible, and the model is more robust to misspellings (despite not being exposed to any). pic.twitter.com/eQZqAzbiFI
-
[1/3] BPE-dropout: our new paper by Ivan Provilkov and Dmitrii Emelianenko! https://arxiv.org/abs/1910.13267 In training, we corrupt the segmentation procedure of BPE to produce different segmentations of the same word. At inference, we use standard BPE, and outperform both BPE and SentencePiece. pic.twitter.com/fA3pfHODzL
-
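The segmentation trick this thread describes can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the merge table below is a toy example (a real one is learned from corpus statistics), and the training-time randomization is reduced to skipping each applicable merge with probability `dropout`.

```python
import random

# Toy BPE merge table, ordered by priority. These merges are
# illustrative only -- a real table is learned from corpus statistics.
MERGES = [("l", "o"), ("lo", "w"), ("e", "r"), ("low", "er")]

def segment(word, dropout=0.0, rng=random):
    """Segment `word` with BPE. With dropout > 0 (BPE-dropout's
    training mode), each applicable merge is skipped with probability
    `dropout`, yielding different segmentations of the same word."""
    tokens = list(word)
    for left, right in MERGES:          # apply merges in priority order
        i = 0
        while i < len(tokens) - 1:
            if (tokens[i], tokens[i + 1]) == (left, right) \
                    and rng.random() >= dropout:
                tokens[i:i + 2] = [left + right]   # perform the merge
            else:
                i += 1
    return tokens
```

With `dropout=0.0` this reduces to ordinary deterministic BPE (`segment("lower")` yields `["lower"]`); with `dropout=1.0` every merge is skipped and the word stays character-level. Any intermediate value makes the same word segment differently across training epochs, which is what exposes the model to varied subword splits.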
Here is also the blog post with more details and visualizations: https://lena-voita.github.io/posts/acl19_heads.html https://twitter.com/seb_ruder/status/1182682185557561346
-
Lena Voita Retweeted
For more info and analyses, check out the excellent blog post by @lena_voita: https://lena-voita.github.io/posts/emnlp19_evolution.html Paper: https://arxiv.org/abs/1909.01380