Leshem Choshen

@LChoshen

PhD NLP student, Hebrew University. Interested in: semantics, MT, GEC, RL, and much more. Keeping it professional: good science.

Jerusalem
Joined June 2018

Tweets


  1. Pinned Tweet
    Dec 20, 2019
  2. 8 hours ago

    Beware fellow submitters, remember to anonymize

  3. Jan 29

    Do you guys just put on everything you say? Are there any other useful hashtags to know?

  4. Jan 29

    Thought it might interest you.\ have interesting thoughts.

  5. Jan 29

    PS: it all works better on ALBERT, but their suggestion is extreme and has some implications, so it is to be expected.

  6. Jan 29

    Something is rotten in the state of probing. An unintuitive way to boost BERT and many interesting implications. All layers can reconstruct?!

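The probing result above ("all layers can reconstruct?!") can be illustrated with a minimal, self-contained sketch of what a probing experiment does: train a linear probe on frozen representations and check whether a property is linearly decodable. Everything below (the toy "hidden states", the label direction, the probe) is illustrative and not from any specific paper.

```python
# Minimal sketch of a probing experiment: fit a linear probe on frozen
# representations to test whether a property is linearly decodable.
# Toy stand-in for real hidden states: random vectors in which a binary
# label is encoded along one direction, plus noise.
import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 64

labels = rng.integers(0, 2, size=n)                # property to probe for
direction = rng.normal(size=d)                     # axis encoding the label
hidden = rng.normal(size=(n, d)) + np.outer(labels * 2 - 1, direction)

# Linear probe via least squares on {-1, +1} targets.
train, test = slice(0, 800), slice(800, n)
w, *_ = np.linalg.lstsq(hidden[train], labels[train] * 2 - 1, rcond=None)
pred = (hidden[test] @ w > 0).astype(int)
accuracy = (pred == labels[test]).mean()
print(f"probe accuracy: {accuracy:.2f}")
```

High probe accuracy only shows the property is *recoverable* from the representation, which is exactly why "all layers can reconstruct" is a surprising and double-edged finding.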
  7. Retweeted
    Jan 25

    Not just for students. I think people generally underestimate how much the system favors native English speakers. Just think about the extra time and effort it costs to write in another language, even if you feel relatively competent.

  8. Retweeted
    Jan 22

    The quiet semisupervised revolution continues

  9. Retweeted
    Jan 21

    Let me highlight this amazing work I've read recently on in NLP, in which you'll find both: - a deep discussion of what it means for a neural model to be compositional - a deep and insightful comparison of LSTM, ConvNet & Transformers! 👉

  10. Jan 19

    More layers give more expressiveness and better performance. In which simple tasks (vision included) is the addition of layers really beneficial? Negative example: ResNet shows that 100 layers instead of 50 changes scores from 80 to 80.1.

  11. Jan 18

    Good news: I got only 4 papers. Bad news: not even one seems remotely related to my work. 3 embedding papers…

  12. Jan 18

    Guys, ready, set, go. Reviews arrived. How did you fare with the new system?

  13. Retweeted
    Dec 15, 2019

    Some of the keywords in paper titles that have seen the most change from NeurIPS 2018 to 2019:
    - meta-learning, kernel methods, reinforcement learning are 🔥
    - more hardware-aware, more theory-driven
    - recurrent & convolutional get little love
    Full NeurIPS recap coming soon!

  14. Jan 15
  15. Jan 15

    Did not check it, but it's worth a shot.

  16. Jan 15

    Public speaking tip (from the 2nd Coursera course): if something doesn't work, don't tell the audience how good it was; that puts emphasis on what they are missing.

  17. Jan 14

    Any suggested reading on meta-learning for DL (preferably from an NLP perspective)? Includes: architecture learning, initialization learning, learning to learn, policy/loss learning, etc.

  18. Jan 14

    Yet another gender bias avoidance. Not! A real-world (non-CS) study of gender bias in language.

  19. Retweeted
    Jan 10

    Now that neural nets have fast implementations, a bottleneck in pipelines is tokenization: strings ➡️ model inputs. Welcome 🤗Tokenizers: ultra-fast & versatile tokenization led by :
    - encode 1GB in 20 sec
    - BPE / byte-level BPE / WordPiece / SentencePiece...
    - Python / JS / Rust...

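For readers unfamiliar with the tokenization schemes listed above, here is an illustrative pure-Python sketch of the core byte-pair encoding (BPE) merge loop on a toy corpus. This is not the 🤗 Tokenizers API, just the algorithm such libraries accelerate; the corpus and function name are made up for the example.

```python
# Illustrative BPE: repeatedly merge the most frequent adjacent symbol pair.
from collections import Counter

def bpe_merges(words, num_merges):
    """words: dict mapping word (tuple of symbols) -> frequency."""
    merges = []
    for _ in range(num_merges):
        # Count adjacent symbol pairs, weighted by word frequency.
        pairs = Counter()
        for word, freq in words.items():
            for a, b in zip(word, word[1:]):
                pairs[(a, b)] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        merges.append(best)
        # Apply the merge to every word in the vocabulary.
        merged = {}
        for word, freq in words.items():
            out, i = [], 0
            while i < len(word):
                if i + 1 < len(word) and (word[i], word[i + 1]) == best:
                    out.append(word[i] + word[i + 1])
                    i += 2
                else:
                    out.append(word[i])
                    i += 1
            merged[tuple(out)] = merged.get(tuple(out), 0) + freq
        words = merged
    return merges, words

corpus = {tuple("lower"): 5, tuple("lowest"): 2, tuple("newer"): 6}
merges, vocab_words = bpe_merges(corpus, 4)
print(merges)  # first merge is ('w', 'e'): the most frequent pair
```

Production tokenizers apply the same idea at byte level with heavy optimization, which is where the "1GB in 20 sec" figure comes from.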
  20. Jan 10

    I like this trend a lot; my only worry is that we are barking up the dataset trees (easy) and ignoring the models (hard). Well, actual general-purpose models.

  21. Retweeted
    Jan 8

    We want better reviewers? As a start, let's make this part of teaching curricula (and I don't just mean using students as secondary reviewers at conferences).

