Tom Hosking

@tomhosking

PhD student in NLProc . Also photography and bad jokes.

Joined April 2009

Tweets


  1. Retweeted
    10 Jan

    We are still accepting applications until 31 January for the 4-year PhD program in the Centre for Doctoral Training in NLP and for the 3-year PhD program in ILCC

  2. 2 Jan

    In my defence, BERT expects a mask that is the inverse of the one built into Transformers. A hangover from the TF conversion, I guess?

  3. 2 Jan

    Pro tip: try masking the padding tokens instead of the actual tokens

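The mask mix-up in the two tweets above comes down to opposite conventions: Hugging Face's BERT takes an `attention_mask` where 1 marks real tokens, while PyTorch's built-in `nn.Transformer` takes a `src_key_padding_mask` where `True` marks padding to ignore. A minimal, dependency-free sketch of the inversion (the toy token sequence is illustrative, not from the tweets):

```python
# Opposite masking conventions (hypothetical toy sequence):
# - Hugging Face BERT `attention_mask`: 1 = attend, 0 = padding
# - PyTorch `nn.Transformer` `src_key_padding_mask`: True = ignore (padding)
tokens = ["[CLS]", "hello", "world", "[SEP]", "[PAD]", "[PAD]"]

# HF-style: mask out the *padding* tokens with 0, keep real tokens as 1
hf_attention_mask = [0 if t == "[PAD]" else 1 for t in tokens]

# PyTorch-style: True exactly where the position should be ignored
pt_key_padding_mask = [t == "[PAD]" for t in tokens]

# The two conventions are logical inverses of each other,
# hence the easy bug when porting a model between the two APIs
assert [not m for m in hf_attention_mask] == pt_key_padding_mask
```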
  4. 30 Dec 2019

    I got bored of curating Slurm jobs by hand, so I built a super lightweight monitoring app: It's pretty basic for now, but PRs very welcome! (cc )

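A lightweight monitoring app like the one described would essentially poll and parse `squeue` output. A minimal sketch using a canned sample instead of a live cluster (the column layout follows squeue's default output; the job rows are made up):

```python
# Hypothetical squeue output; on a real cluster you would capture this with
# e.g. subprocess.run(["squeue", "-u", user], capture_output=True).
sample = """JOBID PARTITION NAME USER ST TIME NODES
123 gpu train alice R 1:02 1
124 gpu eval alice PD 0:00 1"""

def parse_squeue(text):
    """Parse whitespace-separated squeue output into a list of dicts."""
    lines = text.strip().splitlines()
    header = lines[0].split()
    return [dict(zip(header, row.split())) for row in lines[1:]]

jobs = parse_squeue(sample)
# ST is Slurm's job-state column: R = running, PD = pending
running = [j["JOBID"] for j in jobs if j["ST"] == "R"]
```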
  5. 21 Dec 2019

    Live footage of me trying to integrate BERT into my model

  6. Retweeted

    6th June 2020 is party night. Bring friends, lovers & glad rags for memories money can't buy. Team GB Tokyo Olympic trials, European 10,000m Cup & more. You are the event. 📸

    , , and 7 others
  7. Retweeted

    Facebook AI is sharing MLQA, an extractive question answering (QA) evaluation benchmark aligned across Arabic, German, Hindi, Spanish, Vietnamese, and Simplified Chinese. It will help the AI community improve and extend QA in more languages.

  8. 25 Oct 2019

    I've been working with 's SQuAD for a while now, but only just noticed that this question appears 6 times in the training set: "I couldn't could up with another question. But i need to fill this space because I can't submit the hit. " Thanks, mystery turker 👍

  9. Retweeted
    17 Oct 2019

    QA Models should work in any language. So, we're releasing MLQA, a new cross-lingual QA evaluation dataset! Check out the paper and dataset: With Barlas Oguz, Ruty Rinott, , 🚀

  10. Retweeted

    Applications very welcome for the UKRI Centre for Doctoral Training in Natural Language Processing at University of Edinburgh ! Details to apply: Deadlines: ⏰- non EU/UK applications: 29.11.19 ⏰- EU/UK applications: 31.1.20

  11. Retweeted

    These are the numbers of actions that take in their categories. We're talking millions and billions. You can see why is very busy.

  12. 30 Sep 2019

    Question generation leaderboard update: UniLM from et al. makes it back onto the board after being evaluated on the standard split. Cool paper that shows the power of transfer learning!

  13. Retweeted

    Our two-step, self-supervised approach to extractive question answering (QA) first trains a model to generate questions, then uses those questions to train a standard extractive QA model.

  14. 18 Sep 2019

    ...and just found a paper that claims improvements over SotA, based on results from 2017. Recent scores are almost 50% (!!) higher. This is really problematic! Claiming SotA != achieving SotA

  15. 17 Sep 2019

    cc - I think their use of a very small learning rate during fine-tuning also helped ;)

  16. 17 Sep 2019

    This is a really nice paper, and gives an example where a really careful choice of reward when fine-tuning an NLG model *can* give better output

  17. Retweeted
    17 Sep 2019

    I want to share a thread of discussion about the QG task. I believe it's time to standardize the SQuAD QG task's dev-test setup. Otherwise, the claim of SOTA makes no sense. I suggest using the first QG paper's split from ().

  18. 17 Sep 2019

    Apologies to who has better attention to detail than I do - after updating the leaderboard to account for the many different splits in circulation (which are not comparable to each other), their paper currently sits at the top!

  19. 17 Sep 2019

    Two papers accepted to @emnlp2019 claim SotA in question generation, but do not outperform an approach from an EMNLP 2018 paper!

  20. 17 Sep 2019

    QPP from also looks strong, but they create their own test set (rather than using the standard split from - ) so the results are sadly not directly comparable


