Alon Talmor

@AlonTalmor

NLP PhD at TAU, researcher, entrepreneur, traveler.

Tel Aviv
Joined August 2009

Tweets


  1. Pinned Tweet
    Jan 1

    We present our New Year special: “oLMpics - On what Language Model pre-training captures”, exploring which symbolic reasoning skills are learned from an LM objective. We introduce 8 oLMpic games and controls for disentangling pre-training from fine-tuning.
  2. Retweeted
    Feb 3

    Check out BREAK - a new NLU benchmark for testing the ability of models to break down a question into the required steps for computing its answer. A work by Tomer Wolfson, accepted to TACL 2020.

  3. Retweeted
    Feb 3

    New TACL paper involving a lot of hard work from my Twitter-less student Tomer, along with great collaborators at AI2 and TAU. Paper/website at 1/2

  4. Retweeted
    Jan 15

    Exclusive: Apple acquires , edge AI spin-out from Paul Allen’s AI2, for a price in the $200M range

  5. Retweeted
    Jan 9

    We (Ananth, , , ) are pleased to announce the release of ORB, an Open Reading Benchmark. This is an evaluation server that tests a single model on a variety of reading comprehension datasets (SQuAD, DROP, Quoref, ...).

  6. Jan 1

    Joint work with and !

  7. Retweeted
    Nov 3, 2019

    talking about the MRQA 2019 shared task baseline model!

  8. Retweeted
    Nov 3, 2019

    kicking off MRQA 2019! Please come by room 201-BC at EMNLP for some discussion on machine reading!

  9. Retweeted
    Oct 25, 2019

    The MRQA 2019 workshop is approaching! Today we are pleased to release our findings from the shared task: Congrats to for the best system, D-Net! It achieved 72.5 F1 on the 12 held-out datasets, 10.7 points higher than our BERT-large baseline.

  10. Retweeted
    Sep 26, 2019

    It seems like you hear "not another QA dataset..." a lot these days. I wrote a short opinion piece with , , and , talking about why this isn't necessarily a problem. When should you use QA for your dataset?

  11. Retweeted
    Sep 4, 2019

    The accompanying code for our paper about annotator bias is out! Now you can generate annotator-based data splits and reproduce our experiments on any NLU dataset with annotator identifiers.

  12. Retweeted
    Aug 31, 2019

    Paper by with . We use GNNs to globally reason about the relation between words and DB constants in zero-shot semantic parsing. It's the BERT best-less model! I mean the best BERT-less model!

  13. Retweeted
    Aug 30, 2019

    Paper by . Originally titled "Building a semantic parser that works overnight", but we changed the name... Careful analysis of issues in the "overnight" data collection method leads to a new and improved procedure!

  14. Retweeted
    Aug 24, 2019

    New paper by and with about annotator bias. We check whether models capture properties of the annotators rather than the task when annotators create language utterances at scale. You'll never guess what we found out! :)

  15. Retweeted
    Aug 14, 2019

    The shared task is over now. We have received a good number of submissions and seen very impressive improvements over BERT baselines. More info to come, stay tuned! Next: our regular paper submissions are due on August 19th. Cross-submissions are allowed. Consider submitting!

  16. Retweeted
    Aug 10, 2019
    Replying to:

    We had quite a lot of submissions to CommonsenseQA () and results have improved but in pretty small increments. Would love to see how RoBERTa does on this.

  17. Retweeted
    Aug 1, 2019

    We have extended our shared task deadline by one week. It’s now due August 5th (Monday)! Our task focuses on the generalization ability of QA models. More info here:

  18. Retweeted

    There you go: Thanks for a very neat talk. Here is also the link to all presentations in case you are interested:

  19. Retweeted
    Jul 30, 2019

    Check out our two presentations on QA today! (1) 11:30 Hall 4 "Compositional Questions Do Not Necessitate Multi-hop Reasoning" (2) 4-5:40 Poster#B "Multi-hop Reading Comprehension through Question Decomposition and Rescoring" Come and say hi 😀

  20. Retweeted
    Jul 30, 2019

    Come check out our poster for "Representing Schema Structure with Graph Neural Networks for Text-to-SQL Parsing" today (Wed) at 10:30, Poster Session 6A! Joint work with and


