The Diversity-Innovation Paradox in Science: Why does greater diversity in research teams increase innovation but not directly reward minority scholars? New from @BasHofstra, @viveksck, Sebastian Munoz-Najar Galvez, Bryan He, @jurafsky, and Dan McFarland. http://arxiv.org/abs/1909.02063
-
.@stanfordnlp #ICLR2020 papers #3: How to use distributionally robust optimization to avoid poor results on “atypical” data with overparametrized neural networks that fit all the training data, by Shiori Sagawa, Pang Wei Koh et al. MNLI results. See you in Addis! https://openreview.net/forum?id=ryxGuJrFvS
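The contrast with standard training can be shown with a toy objective: empirical risk minimization (ERM) minimizes the average loss, while group DRO minimizes the worst group's average loss, so rare "atypical" groups cannot be sacrificed. A minimal numpy sketch; the function name and toy numbers are illustrative, not from the paper:

```python
import numpy as np

def worst_group_loss(per_example_loss, group_ids, n_groups):
    """Group DRO objective: the mean loss of the single
    worst-performing group, rather than the overall mean (ERM)."""
    group_means = np.array([
        per_example_loss[group_ids == g].mean() for g in range(n_groups)
    ])
    return group_means.max()

# Toy data: group 1 plays the role of rare, "atypical" examples.
losses = np.array([0.1, 0.2, 0.9, 1.1])
groups = np.array([0, 0, 1, 1])
print(losses.mean())                        # ERM objective: 0.575
print(worst_group_loss(losses, groups, 2))  # DRO objective: 1.0
```

Minimizing the second quantity instead of the first is what keeps an overparametrized model from achieving low average loss while quietly failing on the atypical group.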
-
.@stanfordnlp people’s #ICLR2020 papers #2: ELECTRA. @clark_kev and colleagues (incl. at @GoogleAI) show how to build a much more compute- and energy-efficient discriminative pre-trainer for text encoding than BERT etc. by using replaced token detection instead. https://openreview.net/forum?id=r1xMH1BtvB
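The data setup behind replaced token detection can be sketched in a few lines: corrupt a fraction of the input tokens and have the discriminator predict, at every position, whether that token was replaced. Below, random token ids stand in for samples from ELECTRA's small masked-LM generator; names and constants are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def corrupt_for_electra(token_ids, vocab_size, replace_prob=0.15):
    """Corrupt a fraction of tokens and emit the per-token 0/1
    'was this replaced?' labels the discriminator is trained on.
    Getting a learning signal at EVERY position (not just masked
    ones) is a source of the method's sample efficiency."""
    token_ids = np.asarray(token_ids)
    mask = rng.random(token_ids.shape) < replace_prob
    corrupted = token_ids.copy()
    corrupted[mask] = rng.integers(0, vocab_size, mask.sum())
    # A sampled replacement can coincide with the original token,
    # in which case the correct label is 0.
    labels = (corrupted != token_ids).astype(int)
    return corrupted, labels

tokens = rng.integers(0, 1000, 20)
corrupted, labels = corrupt_for_electra(tokens, vocab_size=1000)
```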
-
.@stanfordnlp people’s #ICLR2020 papers #1: @ukhndlwl and colleagues (incl. at @facebookai) show the power of neural nets learning a context similarity function for kNN in LM prediction. Almost a 3 PPL gain on WikiText-103; maybe most useful for domain transfer. https://openreview.net/forum?id=HklBjCEKvH
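The core interpolation in the kNN-LM is simple to sketch: retrieve the k cached contexts nearest to the current one, turn their distances into a distribution over their stored next tokens, and mix it with the base LM's distribution. A toy numpy version with brute-force nearest-neighbor search (the real system uses a large FAISS datastore; all names and numbers here are illustrative):

```python
import numpy as np

def knn_lm(lm_probs, ctx, keys, values, vocab_size, k=2, lam=0.25):
    """p(w) = lam * p_knn(w) + (1 - lam) * p_lm(w), where p_knn
    comes from the k datastore contexts nearest to ctx."""
    dists = np.linalg.norm(keys - ctx, axis=1)  # context similarity
    nn = np.argsort(dists)[:k]                  # k nearest contexts
    w = np.exp(-dists[nn])
    w /= w.sum()                                # softmax over -distance
    p_knn = np.zeros(vocab_size)
    for i, wi in zip(nn, w):
        p_knn[values[i]] += wi                  # mass on stored tokens
    return lam * p_knn + (1 - lam) * lm_probs

# Toy datastore: 3 cached context vectors and their next tokens.
keys = np.array([[0.0, 1.0], [1.0, 0.0], [0.9, 0.1]])
values = np.array([2, 0, 0])
lm_probs = np.array([0.5, 0.3, 0.2])
p = knn_lm(lm_probs, ctx=np.array([1.0, 0.0]), keys=keys,
           values=values, vocab_size=3)
```

Because the datastore can be swapped without retraining the LM, the same mechanism is what makes the approach attractive for domain transfer.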
-
Here’s the latest data on Stanford NLP course enrollment, @NathanBenaich: we now teach 10x as many students per year as during 1999–2004, and twice as many as during 2012–2014, even though some of our classic #DeepLearning pubs came out then, like RNTNs and the Stanford Sentiment Treebank. https://twitter.com/NathanBenaich/status/1213805245022781440
-
Stanford CS224N: Natural Language Processing with Deep Learning is back for 2020, starting Jan 7, with over 500 students enrolled: http://web.stanford.edu/class/cs224n/ #cs224n
-
It was great to have @yoavartzi with us today telling us about his really exciting new work on grounded semantics, Robot Control and Collaboration in Situated Instruction Following, despite the Dec 9 @aclmeeting deadline. #NLProc https://nlp.stanford.edu/seminar/details/yartzi.shtml
-
Great to see renewed discussion and new #NLProc data on the conceptual/linguistic basis of inference/entailment by Ellie Pavlick @BrownCSDept & @tmkwiat @GoogleAI: often there is no single answer, but multiple answers depending on assumed context/implicatures. https://www.mitpressjournals.org/doi/full/10.1162/tacl_a_00293
-
Natural Language Inference (NLI) over tables by @WilliamWangNLP et al. https://arxiv.org/abs/1909.02164 Tables are a ubiquitous but little-studied human information source, stuck between text and structured data. Though see semantic parsing work, e.g., https://www.aclweb.org/anthology/P15-1142 by @IcePasupat
-
Can #NLProc reduce bias in our news and politics? Automatically Neutralizing Subjective Bias in Text, by Pryzant, @jurafsky, … #AAAI2020. A parallel corpus of 180k biased and neutralized sentences, plus models for editing subjective bias out of text. #Wikipedia #npov https://arxiv.org/abs/1911.09709
-
Stanford Dependencies get around the place – @sociolinguista’s backdrop
-
Congratulations to @johnhewtt & @percyliang for being the #emnlp2019 best paper runner-up for Designing and Interpreting Probes with Control Tasks https://arxiv.org/abs/1909.03368 And hearty congratulations to the winner, @XiangLisaLi2, of course!
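The paper's control-task idea is easy to sketch: give each word *type* a fixed random label, so the task can only be solved by memorizing words, not by decoding linguistic structure from the representation. A probe's selectivity is its accuracy on the real task minus its accuracy on this control. A minimal sketch with illustrative names:

```python
import random

def control_task_labels(tokens, n_labels, seed=0):
    """Assign each word type a fixed random label (a control task
    in the paper's sense). A probe that scores well here is
    memorizing words; a low-selectivity probe may be learning the
    task itself rather than reading it off the model."""
    rng = random.Random(seed)
    mapping = {}
    return [mapping.setdefault(t, rng.randrange(n_labels)) for t in tokens]

sentence = "the cat saw the dog".split()
labels = control_task_labels(sentence, n_labels=5)
# Both occurrences of "the" necessarily receive the same label.
```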
-
Our “robust” research contributions are in the 16:30–18:00 poster session at #emnlp2019 today: Certified Robustness to Adversarial Word Substitutions by @robinomial et al., and Distributionally Robust Language Modeling by Yonatan Oren, Shiori Sagawa, Tatsunori Hashimoto, and Percy Liang.
-
At #emnlp2019 today: Answering Complex Open-domain Questions Through Iterative Query Generation by @qi2peng2 et al., poster 10:30–12:00. TalkDown: A Corpus for Condescension Detection in Context by @zijianwang30 and @ChrisGPotts, poster 15:30–16:18.
-
You had to wait, but there are @stanfordnlp papers at #emnlp2019 today! Find out if your probes are reliable: Designing and Interpreting Probes with Control Tasks by @johnhewtt & @percyliang, 13:30–13:48.
-
Fixing anti-social language use one problem at a time. TalkDown: A Corpus for Condescension Detection in Context by @zijianwang30 & @ChrisGPotts will appear at #emnlp2019. Context helps; BERT doesn’t solve the problem. https://arxiv.org/abs/1909.11272 #NLProc
-
Actually, for all four of the issues highlighted for this example, @stanfordnlp CoreNLP’s output is correct or at least basically good. Perhaps that’s part of why, for real-world usage, CoreNLP still shines far brighter than its CoNLL or OntoNotes F1 scores would suggest.
-
People just haven’t been paying attention to the relative over-representation of Ireland and Qatar at recent #NLProc conferences! Is this an issue that needs to be addressed? [from @MarekRei, The Geographic Diversity of NLP Conferences: http://www.marekrei.com/blog/geographic-diversity-of-nlp-conferences/ ]
-
Correction: The paper counts in the original graph were wrong. The corrected graph is attached, from @dcharrezt (https://medium.com/@dcharrezt/neurips-2019-stats-c91346d31c8f). All the text statements remain true, and, looking beyond the head, the graph has a very heavy tail, mostly composed of universities.
-
But there is still a long way to go and a lot of change needed for academia to solve its CS faculty staffing shortfall, as this graph shows. Source: https://www.insidehighered.com/news/2018/05/09/no-clear-solution-nationwide-shortage-computer-science-professors