-
New #BlackboxNLP paper analyzing attention in GPT-2: https://arxiv.org/abs/1906.04284 Some highlights: attention targets different parts of speech at different layer depths, and a simple algorithm reveals highly specialized generation patterns (e.g. acronyms from names). With @boknilev. pic.twitter.com/fynlz7BfjZ
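Not the paper's code, but a minimal sketch of the underlying analysis: pulling GPT-2's per-layer attention maps, assuming the HuggingFace transformers library; the aggregate printed here is purely illustrative.

```python
# Minimal sketch (assumption: HuggingFace transformers, not the paper's code):
# inspect GPT-2's attention weights layer by layer.
import torch
from transformers import GPT2Tokenizer, GPT2Model

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2", output_attentions=True).eval()

inputs = tokenizer("The quick brown fox jumps over the lazy dog", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# outputs.attentions: one (batch, heads, seq, seq) tensor per layer.
for layer, attn in enumerate(outputs.attentions):
    # Illustrative aggregate: how much attention, averaged over heads and
    # query positions, goes to the first token at this depth.
    share = attn[0].mean(dim=0)[:, 0].mean().item()
    print(f"layer {layer:2d}: mean attention to first token = {share:.3f}")
```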
-
You can remove most of the question's words and still get a correct answer on SQuAD. #EMNLP2018 #blackboxnlp pic.twitter.com/a169OdFa1v
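The technique behind this finding is input reduction: greedily delete question words for as long as the model's answer stays the same. A hedged sketch; `predict_answer` is a hypothetical stand-in for any SQuAD model's inference call.

```python
# Hedged sketch of input reduction; `predict_answer(question, context)` is a
# hypothetical stand-in for any SQuAD model's inference function.
def input_reduction(question_tokens, context, predict_answer):
    original = predict_answer(" ".join(question_tokens), context)
    reduced = list(question_tokens)
    changed = True
    while changed and len(reduced) > 1:
        changed = False
        for i in range(len(reduced)):
            candidate = reduced[:i] + reduced[i + 1:]
            if predict_answer(" ".join(candidate), context) == original:
                reduced = candidate  # the answer survived this deletion
                changed = True
                break
    return reduced  # often just a word or two, yet the answer is unchanged
```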
-
Check out our new #BlackboxNLP paper "What Does BERT Look At? An Analysis of BERT's Attention" with @ukhndlwl @omerlevy @chrmanning! https://arxiv.org/abs/1906.04341 Among other things, we show that BERT's attention corresponds surprisingly well to aspects of syntax and coreference. pic.twitter.com/SWh1qMIKX1
-
Hupkes et al.: when the text has disfluencies, attention does the heavy lifting and the decoder makes heavy use of it; the encoder isn't really doing anything special other than preparing the input for the other modules to process. #BlackboxNLP #emnlp2018 pic.twitter.com/bonBCnzknB
-
Can we model a black box with a black box? -- Leila Wehbe @mldcmu #BlackboxNLP #emnlp2018 pic.twitter.com/hHOra86KLp
-
I wrote "An Analysis of Source-Side Grammatical Errors for NMT", basically using an eng->deu system to translate noisy and corrected versions of sentences from GEC corpora and analyzing the outputs. Accepted at
#BlackboxNLP 2019. Any feedback welcome! http://www.cs.cmu.edu/~aanastas/research/NMT_Robustness.pdf … -
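The setup is easy to reproduce in spirit: translate a noisy sentence and its corrected counterpart, then diff the outputs. A sketch using an off-the-shelf Marian en->de model as an assumed stand-in for the paper's system; the sentence pair is my own toy example.

```python
# Sketch of the experimental setup (my stand-in, not the paper's pipeline):
# translate noisy vs. corrected English with an off-the-shelf en->de model.
from transformers import MarianMTModel, MarianTokenizer

name = "Helsinki-NLP/opus-mt-en-de"  # assumed stand-in for the eng->deu system
tok = MarianTokenizer.from_pretrained(name)
mt = MarianMTModel.from_pretrained(name)

def translate(sentence):
    batch = tok([sentence], return_tensors="pt")
    return tok.decode(mt.generate(**batch)[0], skip_special_tokens=True)

noisy = "He go to school every days ."       # GEC-style source error
corrected = "He goes to school every day ."
print(translate(noisy))
print(translate(corrected))  # diff the two outputs to see how errors propagate
```
-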
Looks like we have #BlackBoxNLP at the Chit Chat tutorial. #emnlp2018 pic.twitter.com/tA64u02xRZ
-
Not so soon, but now available at http://arxiv.org/abs/1808.10503 #BlackBoxNLP
-
RNN LMs can learn some but not all island constraints (E. Wilcox, R. Levy, T. Morita & R. Futrell). Subject islands are hard. A still-unanswered question (paraphrasing M. Baroni): does this gradience suggest islands aren't grammatical constraints, but rather processing effects? #BlackboxNLP #emnlp2018 pic.twitter.com/FGOl1Jgvzh
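The standard methodology behind claims like this is surprisal comparison on minimal pairs. A sketch using GPT-2 as a stand-in for the paper's RNN LMs; the sentence pair is illustrative, not taken from the paper, and summing whole-sentence surprisal is a simplification of the paper's region-level comparisons.

```python
# Sketch of surprisal-based evaluation (GPT-2 as a stand-in for the paper's
# RNN LMs; the minimal pair below is illustrative, not from the paper).
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
lm = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def total_surprisal(text):
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = lm(ids).logits
    # surprisal of token t = -log p(token_t | tokens_<t), summed in nats;
    # the paper compares surprisal at critical regions, a finer-grained measure
    logp = torch.log_softmax(logits[0, :-1], dim=-1)
    return -logp[torch.arange(ids.size(1) - 1), ids[0, 1:]].sum().item()

licit = "I know what the guest bought at the store."
island = "I know what the fact that the guest bought surprised everyone."
print(total_surprisal(licit), total_surprisal(island))
```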
-
Thank you! Looking forward to next year's #BlackBoxNLP!
-
Just noticed that the latest release of @PyTorch includes tools for model interpretability. Not sure yet how useful they are, but happy to see this given some attention in a mainstream framework. #BlackboxNLP
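Assuming this refers to Captum, the interpretability library released alongside PyTorch around that time, here is a minimal toy example of what those tools do:

```python
# Minimal toy sketch, assuming the tweet refers to Captum: attribute a small
# model's output to its input features with integrated gradients.
import torch
import torch.nn as nn
from captum.attr import IntegratedGradients

model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2)).eval()
x = torch.randn(1, 4, requires_grad=True)

ig = IntegratedGradients(model)
attributions = ig.attribute(x, target=0)  # per-feature contribution to class 0
print(attributions)
```
-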
It's an honor to see @yoavgo in person at #BlackBoxNLP and #emnlp2018 and to have him sign the book he authored! pic.twitter.com/dCPhTnvnK9
-
Our paper "On the Realization of Compositionality in Neural Networks" with Jana Leible, Mitja Nikolaus, David Rau, Dennis Ulmer, Tim Baumgärtner,
@_dieuwke_ and Elia Bruni accepted at#ACL2019's#BlackboxNLP workshop! Check out the preprint at https://arxiv.org/abs/1906.01634 -
#BlackboxNLP Dennis Ulmer explaining compositionality using the "falling cows sign" example. Love it!! pic.twitter.com/yBJODDTpCb
-
New paper at the #BlackboxNLP workshop #emnlp2018 with @_dieuwke_ and Sanne Bouwmeester on analyzing how seq2seq models process disfluencies, using synthetic task-oriented dialogue data: https://arxiv.org/abs/1808.09178
-
Three of my questions for the #BlackboxNLP panel: https://twitter.com/boknilev/status/1154118104273825797 pic.twitter.com/bMHQ0pCB1T
-
A blog post https://zerobatchsize.net/2018/09/11/dknn.html and preprint https://arxiv.org/abs/1809.02847 for our EMNLP #BlackboxNLP paper "Interpreting Neural Networks With Nearest Neighbors" with @ihsgnef @boydgraber. We address model overconfidence issues, leading to improvements in interpretability. (1/2) pic.twitter.com/GvFBS95T84
-
Leila Wehbe giving the third invited talk of #blackboxnlp #emnlp2018 pic.twitter.com/u0CpJJJxsy
-
2019 #BlackboxNLP paper is out! Neural LMs trained on English can suppress and recover syntactic expectations, approximating stack-like data structures; but recovery is imperfect! With @roger_p_levy @rljfutrell https://arxiv.org/pdf/1906.04068.pdf pic.twitter.com/QwrvoXzNnB
-
Looking at some of the #BlackboxNLP papers, I'm amazed but also a little concerned: imho many deserve a main conference slot. Did they get rejected from the main conferences, and if so, wtf is wrong with our field?