Eric Wallace

@Eric_Wallace_

CS Ph.D. student at working on machine learning, NLP, deep learning, and ML security. Former , .

Joined July 2011

Tweets


  1. Pinned Tweet
    Sep 3, 2019

    Introducing Universal Adversarial Triggers: phrases that cause a specific model prediction when concatenated to *any* input. Results: GPT-2 turns racist; SQuAD models answer "to kill american people" for 72% of "why" questions; classifier accuracy drops from 90% to 1%.

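The paper finds its triggers with a gradient-guided search over tokens; the sketch below does not reproduce that method, only the evaluation idea it describes: one fixed, input-agnostic phrase is concatenated to every example, and accuracy collapses. The keyword "classifier" and the word lists are deliberately toy stand-ins.

```python
# Toy sketch (NOT the paper's method): illustrate how a single fixed trigger,
# prepended to any input, flips the predictions of a simplistic keyword
# sentiment classifier. The word lists and examples are hypothetical.

POS_WORDS = {"good", "great", "love", "excellent"}
NEG_WORDS = {"bad", "awful", "hate", "terrible"}

def classify(text):
    """Predict 'pos' or 'neg' by counting sentiment keywords."""
    tokens = text.lower().split()
    score = sum(t in POS_WORDS for t in tokens) - sum(t in NEG_WORDS for t in tokens)
    return "pos" if score > 0 else "neg"

def accuracy(examples, trigger=""):
    """Accuracy with an optional trigger phrase concatenated to every input."""
    hits = sum(classify(f"{trigger} {text}".strip()) == label
               for text, label in examples)
    return hits / len(examples)

examples = [("a good movie, I love it", "pos"),
            ("great acting, excellent plot", "pos")]

trigger = "awful awful awful"          # one phrase, concatenated to ANY input
print(accuracy(examples))              # clean accuracy: 1.0
print(accuracy(examples, trigger))     # triggered accuracy: 0.0
```

The real attack searches for the trigger tokens that maximize the target loss over a batch of inputs; here the "trigger" is chosen by hand just to show the input-agnostic failure mode.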
  2. Retweeted
    21 hours ago

    Introducing the diversity & inclusion committee for -- thank you to all of these volunteers!

  3. Jan 13

    I think both are positive changes. I'm still torn on the anonymity window, and I feel like it is not discussed enough. My colleagues send great NLP work to ICLR/ICML/NeurIPS instead of ACL/EMNLP/NAACL because of the arXiv policy.

  4. Jan 13

    EMNLP CFP sticky reviews: "[authors of] a paper that has been rejected from another venue are invited to submit alongside their paper the previous version of the paper, the reviews and an author response." And a reproducibility checklist on hyperparameters, code, etc.

  5. Retweeted
    Jan 10
  6. Retweeted
    Jan 11
  7. Jan 11
  8. Jan 11

    Despite the research interest in differentially private ML, it still seems like there are few (maybe zero?) uses I can find in practice. I’ve seen non-ML applications like Google’s RAPPOR and the US Census. Are there real-world deployments of privacy preserving ML I am missing?

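For readers unfamiliar with the primitive behind deployments like RAPPOR and the Census release, here is a minimal sketch of the Laplace mechanism, the textbook building block of differential privacy. The query ("how many users clicked?") and its numbers are hypothetical; noise with scale sensitivity/epsilon gives epsilon-DP for a query whose output changes by at most `sensitivity` when one record changes.

```python
import math
import random

def laplace_mechanism(true_value, sensitivity, epsilon, rng):
    """Return true_value plus Laplace(0, sensitivity/epsilon) noise.

    Standard inverse-CDF sampling of the Laplace distribution.
    """
    b = sensitivity / epsilon
    u = rng.random() - 0.5                        # uniform in [-0.5, 0.5)
    noise = -b * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))
    return true_value + noise

rng = random.Random(0)                            # seeded for reproducibility
true_count = 42                                   # hypothetical: "users who clicked"
answers = [laplace_mechanism(true_count, sensitivity=1, epsilon=0.5, rng=rng)
           for _ in range(10_000)]
print(sum(answers) / len(answers))                # averages near 42 (noise has mean 0)
```

The open question in the tweet is about *ML* under DP (e.g. DP-SGD), which adds this kind of calibrated noise to gradients rather than to query outputs.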
  9. Retweeted
    Jan 8

    Excited that our work on characterizing racial bias in football commentary was covered by ! Thanks to for thoroughly explaining and contextualizing our research to a non-academic audience. We're continuing with this work, so look out for more soon!

  10. Retweeted
    Jan 6

    We had the most viewed 2019 news story at Maryland (brought down their servers, even). If you haven't watched the videos, now's your chance!

  11. Retweeted

    What're your favorite pre-2012 AI/NLP papers?

  12. Retweeted
    Dec 19, 2019
    Replying to

    Also, let me clarify that if you are in production settings (not research), don’t start with BERT. Start with regular expressions, bag of words, etc., and only add complexity when you aren’t getting satisfactory performance.

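The "start simple" baselines above can be sketched in a few lines of stdlib Python. The routing task, the regex pattern, and the function names below are hypothetical, purely to show what the first two rungs of the ladder look like before any model is trained.

```python
import re
from collections import Counter

# Hypothetical task: route "refund" support emails. A regex rule is often a
# strong first baseline -- no model, no training, trivially debuggable.
REFUND_RE = re.compile(r"\b(refund|money back|chargeback)\b", re.IGNORECASE)

def is_refund_request(text):
    """Rule-based baseline: fire on refund-related keywords."""
    return bool(REFUND_RE.search(text))

def bag_of_words(text):
    """Next step up: a sparse word-count vector for any linear model."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

print(is_refund_request("I want my money back!"))   # True
print(is_refund_request("Where is my order?"))      # False
print(bag_of_words("the order never arrived, the order"))
```

Only when these stop being good enough does it make sense to pay BERT's latency and serving cost.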
  13. Dec 18, 2019

    (We had a good laugh afterwards). Crazy to see how our methods have changed in just a short time.

  14. Dec 18, 2019

    The state of NLP in 2019. I’m talking with an amazing undergrad who has already published multiple papers on BERT-type things. We are discussing deep into a new idea on pretraining. Me: What would TFIDF do here, as a simple place to start? Him: .... Me: .... Him: What’s TFIDF?

  15. Retweeted
    Dec 10, 2019

    Dear authors, thank you for your patience with the submission system during the homestretch. We're giving everyone two extra hours to submit. Softconf has made some changes in the last 10 minutes so hopefully this does the trick!

  16. Nov 21, 2019

    For those struggling to come up with a title for their ACL submission... =)

  17. Retweeted
    Nov 19, 2019

    Happy to announce that Matt () and I are working on a free, Web-based AllenNLP () Course that provides onboarding of AllenNLP and in-depth tutorials on how to use the framework and its abstractions for various NLP tasks. Stay tuned!

  18. Retweeted
    Nov 14, 2019

    Machine learning is a bit like cocaine in the 1880s: - been used in a weaker form for centuries - some surprisingly successful early applications led to it now being over-prescribed - beginning to understand that performance degrades after repeated use, negative feedback loops

  19. Retweeted
    Nov 13, 2019

    Prof. Marti Hearst is hiring an NLP / IR postdoc to work on intelligent interfaces for augmented reading of scientific papers, in collaboration with ! Apply at

  20. Nov 9, 2019

    We are really happy to win the best demo award. You can find tutorials and more information here . The tutorials likely miss some use cases; if you need help, feel free to open an issue on the AllenNLP GitHub and I will personally help you get set up!

  21. Nov 7, 2019

    With EMNLP closing, what are people’s biggest takeaways? A few for me: 1.) lots of people are thinking about dataset biases, artifacts, etc., although few solutions 2.) your baseline better be a pre-trained LM. 3.) model analysis, interpretability, probing, etc. are hot

