Stanford NLP Group

@stanfordnlp

Computational Linguistics—Natural Language—Machine Learning—Deep Learning—Technology from Silicon Valley.

Stanford, CA, USA
Joined February 2010

Media

  1. Jan 17

    The Diversity-Innovation Paradox in Science: Why does greater diversity in research teams increase innovation but not directly reward minority scholars? New from , , Sebastian Munoz-Najar Galvez, Bryan He, , and Dan McFarland.

  2. Jan 8

    . papers #3—How to use distributionally robust optimization to avoid poor results on “atypical” data with overparametrized neural networks that fit all the training data, by Shiori Sagawa, Pang Wei Koh et al.—MNLI results—see you in Addis!
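The group DRO objective mentioned in this tweet can be sketched in a few lines: instead of minimizing the average training loss, minimize the loss of the worst-performing group, so "atypical" groups are not sacrificed. The group labels and loss values below are hypothetical illustrations, not the paper's setup.

```python
# Minimal sketch of the group DRO objective, assuming per-example losses
# and a group id for each example. Data here is purely illustrative.
import numpy as np

def group_dro_loss(losses_per_example, group_ids):
    """Return the worst-group average loss (the group DRO objective)."""
    losses = np.asarray(losses_per_example, dtype=float)
    groups = np.asarray(group_ids)
    group_means = [losses[groups == g].mean() for g in np.unique(groups)]
    return max(group_means)

# Example: group 0 is easy (low loss), group 1 is "atypical" (high loss).
losses = [0.1, 0.2, 1.5, 1.7]
groups = [0, 0, 1, 1]
avg = float(np.mean(losses))            # average loss hides group 1 (~0.875)
worst = group_dro_loss(losses, groups)  # group DRO optimizes this (~1.6)
```

The average loss looks fine even when one group does badly; optimizing the worst-group loss forces the model to fit the atypical group too.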

  3. Jan 7

    . people’s papers #2—ELECTRA: and colleagues (incl. at ) show how to build a much more compute/energy-efficient discriminative pre-trainer for text encoding than BERT etc. by using replaced token detection instead
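A toy sketch of the replaced token detection setup: a small generator proposes tokens at corrupted positions, and the discriminator is trained to classify every token as original vs. replaced. The sentences and the hand-picked "generator" replacement below are illustrative stand-ins for the real models.

```python
# Building discriminator labels for replaced token detection, assuming a
# token-aligned original and corrupted sequence. When the generator happens
# to sample the original token, the label stays "original" (0), as the
# simple o != c check below encodes.
def make_rtd_labels(original, corrupted):
    """Label 1 = replaced, 0 = original, for each token position."""
    return [int(o != c) for o, c in zip(original, corrupted)]

original  = ["the", "chef", "cooked", "the", "meal"]
# Suppose the generator sampled "ate" at the masked position "cooked".
corrupted = ["the", "chef", "ate", "the", "meal"]

labels = make_rtd_labels(original, corrupted)  # [0, 0, 1, 0, 0]
```

The efficiency win comes from the loss being defined over all positions, not just the ~15% of masked ones as in BERT-style masked language modeling.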

  4. Jan 6

    . people’s papers #1— and colleagues (incl. at ) show the power of neural nets learning a context similarity function for kNN in LM prediction—almost 3 PPL gain on WikiText-103—maybe most useful for domain transfer
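The kNN-LM idea in this tweet can be sketched as: retrieve the nearest stored contexts under the learned similarity, form a distribution over the next tokens they were followed by, and interpolate that with the base LM's distribution. The tiny datastore, vocabulary, and lambda value below are illustrative only.

```python
# Minimal numpy sketch of kNN-augmented LM prediction, assuming a datastore
# of (context vector, next token) pairs and a base LM distribution p_lm.
import numpy as np

def knn_lm_probs(query, keys, next_tokens, p_lm, vocab_size, k=2, lam=0.25):
    # Squared L2 distance between the query context and stored contexts.
    dists = np.sum((keys - query) ** 2, axis=1)
    nearest = np.argsort(dists)[:k]
    # Softmax over negative distances of the k nearest neighbors.
    weights = np.exp(-dists[nearest])
    weights /= weights.sum()
    # Distribution over the vocabulary from the neighbors' next tokens.
    p_knn = np.zeros(vocab_size)
    for w, idx in zip(weights, nearest):
        p_knn[next_tokens[idx]] += w
    # Interpolate retrieval and base LM distributions.
    return lam * p_knn + (1 - lam) * p_lm

keys = np.array([[0.0, 1.0], [0.1, 0.9], [5.0, 5.0]])  # stored context vectors
next_tokens = np.array([2, 2, 0])                       # their observed next tokens
query = np.array([0.0, 1.0])                            # current context vector
p_lm = np.array([0.5, 0.3, 0.2])                        # base LM distribution
p = knn_lm_probs(query, keys, next_tokens, p_lm, vocab_size=3)
# Retrieval boosts token 2, which both nearest neighbors predict.
```

Because the datastore can be swapped without retraining, this interpolation is a natural fit for the domain-transfer use the tweet suggests.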

  5. Jan 5

    Here’s the latest data on Stanford NLP course enrollment, —we now teach 10x as many students per year as during 1999–2004, and twice as many as in 2012–2014, even though some of our classic pubs came out then, like RNTNs and the Stanford Sentiment Treebank

  6. Jan 4

    Stanford CS224N: Natural Language Processing with Deep Learning is back for 2020, starting Jan 7, with over 500 students enrolled:

  7. Dec 5, 2019

    It was great to have with us today telling us about his really exciting new work on grounded semantics: Robot Control and Collaboration in Situated Instruction Following – even despite the Dec 9 deadline.

  8. Dec 1, 2019

    Great to see renewed discussion & new data on the conceptual/linguistic basis of inference/entailment by Ellie Pavlick & —often there is no single answer, but multiple answers depending on assumed context/implicatures

  9. Nov 30, 2019

    Natural Language Inference (NLI) over tables by et al. Tables are a ubiquitous but little-studied human information source, stuck between text and structured data—though see semantic parsing work, e.g., by

  10. Nov 25, 2019

    Can reduce bias in our news and politics? Automatically Neutralizing Subjective Bias in Text by Pryzant, , … A parallel corpus of 180k biased and neutralized sentences, plus models for editing subjective bias out of text

  11. Nov 18, 2019

    Stanford Dependencies get around the place – ’s backdrop

  12. Nov 8, 2019

    Congratulations to & for being the best-paper runner-up for Designing and Interpreting Probes with Control Tasks. And hearty congratulations to the winner, , of course!

  13. Nov 5, 2019

    Our “robust” research contributions are in the 16:30–18:00 poster session at today: Certified Robustness to Adversarial Word Substitutions by et al., and Distributionally Robust Language Modeling by Yonatan Oren, Shiori Sagawa, Tatsunori Hashimoto, and Percy Liang

  14. Nov 5, 2019

    At today: Answering Complex Open-domain Questions Through Iterative Query Generation by et al. (poster 10:30–12:00), and TalkDown: A Corpus for Condescension Detection in Context (poster 15:30–16:18)

  15. Nov 5, 2019

    You had to wait, but there are papers at today! Find out if your probes are reliable: Designing and Interpreting Probes with Control Tasks & 13:30–13:48

  16. Oct 23, 2019

    Fixing anti-social language use one problem at a time. TalkDown: A Corpus for Condescension Detection In Context by & will appear at . Context helps; BERT doesn’t solve the problem.

  17. Oct 20, 2019
    Replying to

    Actually, for all four issues highlighted for this example, CoreNLP’s output is correct or at least basically good. Perhaps that’s part of why, in real-world usage, CoreNLP still shines far brighter than its CoNLL or OntoNotes F1 scores would suggest.

  18. Oct 7, 2019

    People just haven’t been paying attention to the relative over-representation of Ireland and Qatar at recent conferences! Is this an issue that needs to be addressed? 🙃 [from The Geographic Diversity of NLP Conferences]

  19. Sep 8, 2019

    Correction: The paper counts in the original graph were wrong. Corrected graph attached, from (). All the text statements remain true, and, looking beyond the head, the graph has a very heavy tail, mostly composed of universities.

  20. Sep 8, 2019

    But there is still a long way to go and a lot of change needed for academia to solve its CS faculty staffing shortfall, as this graph shows. Source:

