Lianhui Qin

@Lianhuiq

NLP PhD student at

Seattle
Joined October 2018

Tweets


  1. Pinned Tweet
    Sep 12, 2019

    What if Harry Potter had been a Vampire? Our @emnlp2019 paper, “Counterfactual Story Reasoning and Generation”, presents the TimeTravel dataset that tests causal reasoning capabilities over natural language narratives. 1/2 Paper:

    Show this thread
  2. Retweeted
    Dec 25, 2019

    Video & slides for the LIRE workshop are now up. Check out the talks and panel by Jeff Bilmes, Tom Griffiths & more. Thanks to all speakers & presenters for making the workshop a success!

  3. Dec 13, 2019
  4. Retweeted
    Nov 27, 2019

    Hey, so we took a stab at building a task that looks at physical commonsense. The data was specifically made to be adversarial to BERT, but RoBERTa still leaves a lot to be desired. Author's Note(s): ...

    Show this thread
  5. Retweeted
    Oct 26, 2019

    Super excited to join from Fall 2020. I am actively looking for PhDs/Postdocs interested in (database) systems, formal methods, and applied crypto. I wrote a small essay about my research agenda here. (RTs welcome!)

    Show this thread
  6. Retweeted
    Nov 5, 2019

    Check out our talk tomorrow (Wednesday) at 14:42–15:00, AWE 201: “A Discrete Hard EM Approach for Weakly Supervised Question Answering”.

  7. Retweeted
    Nov 3, 2019

    Terrific new benchmark from the lab that does indeed appear to get at the challenge I have been raising, re: getting AI systems to understand how events unfold over time.

  8. Retweeted
    Nov 3, 2019

    (3/3) Using this Info-Bottleneck intuition, we find the summary Z of the input sentence X that predicts the next sentence Y better than X does (🅿️(Y|Z) > 🅿️(Y|X)) under a pre-trained language model 🅿️, which leads to better summaries than the reconstruction loss in auto-encoders.

    [A minimal sketch of this scoring rule appears after the timeline.]

    Show this thread
  9. Retweeted
    Nov 2, 2019

    🍾Information Bottleneck🍾 in action! (1) Specializing embeddings for parsing, by Xiang, and (2) 🍾BottleSum🍾: unsupervised & self-supervised summarization.

    Show this thread
  10. Retweeted
    Nov 3, 2019

    😍Social IQA😭, the EQ test for AI: trivial for humans, hard for neural models, the kind of benchmark we need more of. It's also a resource for transfer learning of commonsense knowledge, achieving new SOTA on related tasks (Winograd, COPA).

    Show this thread
  11. Retweeted
    Nov 2, 2019

    The ⏱TimeTravel⏱ dataset of our paper, 🎞Counterfactual Story Reasoning and Generation🎞, tests counterfactual reasoning over events that unfold over time, directly addressing the call for a challenge against current neural models....

    Show this thread
  12. Retweeted
    Nov 2, 2019

    Lianhui (Karen) will give a talk ⏱ Wed, Nov 7, 13:30–13:48, 🏢 AWE Hall 2C.

    Show this thread
  13. Retweeted
    Oct 26, 2019
    Replying to

    Gary, try typing "Gary stacks kindling and logs and drops some matches". Sorry I used deep learning... :)

  14. Retweeted
    Oct 17, 2019

    Super excited to release Texar-PyTorch v0.1: an ML library integrating the best of TensorFlow into PyTorch, replicating many useful TF modules & designs to enhance PyTorch, incl. data, model & training. See how Texar-PyTorch builds a Conditional-GPT2. 1/5

    [An illustrative conditional-generation sketch appears after the timeline.]

    Show this thread
  15. Retweeted
    Oct 7, 2019

    The video for my 2019 talk on COMET, a training framework for adapting pretrained language models for knowledge graph construction, is now available.

    [A toy sketch of COMET-style triple formatting appears after the timeline.]

    Show this thread
  16. Retweeted
    Sep 26, 2019

    Our workshop on Learning with Rich Experience (LIRE) is now accepting late-breaking submissions! Due next Monday 09/30.

  17. Retweeted
    Sep 24, 2019

    Sentence summarization with Information Bottleneck 🍾 and no supervision! 😎 Self-supervised and unsupervised approaches, 🍾BottleSum🍾, to appear at @emnlp2019.

  18. Retweeted
    Sep 18, 2019

    Not trying to double-post, but the webpage for Neural Naturalist is finally really, *really* done. If you'd like to bask in birds and a lot of JavaScript:

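The Info-Bottleneck thread retweeted in item 8 states a concrete scoring rule: prefer a summary Z of sentence X when a pretrained LM assigns the next sentence Y a higher probability given Z than given X, i.e. P(Y|Z) > P(Y|X). Below is a minimal sketch of that rule, assuming GPT-2 via the Hugging Face transformers library as the scoring model; the sentences and candidate summaries are invented, and BottleSum's actual candidate-search procedure is omitted.

```python
# Sketch: score candidate summaries Z of a sentence X by how well a
# pretrained LM predicts the next sentence Y given Z (cf. item 8's
# P(Y|Z) > P(Y|X) criterion). Illustrative only.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def log_p_next(context: str, y: str) -> float:
    """Sum of log-probabilities the LM assigns to y's tokens after context."""
    ctx_ids = tokenizer.encode(context)
    y_ids = tokenizer.encode(" " + y)
    input_ids = torch.tensor([ctx_ids + y_ids])
    with torch.no_grad():
        log_probs = torch.log_softmax(model(input_ids).logits, dim=-1)
    # the token at position j is predicted by the logits at position j - 1
    return sum(
        log_probs[0, len(ctx_ids) + i - 1, tok].item()
        for i, tok in enumerate(y_ids)
    )

x = "Gary stacks kindling and logs in the pit and drops some matches."
y = "Soon the fire is blazing."
candidates = [  # hypothetical summaries Z; BottleSum derives these by deletion
    "Gary stacks kindling and logs.",
    "Gary drops some matches.",
]

best = max(candidates, key=lambda z: log_p_next(z, y))
print(f"baseline log P(Y|X) = {log_p_next(x, y):.2f}; best Z: {best!r}")
```

Per the tweet's criterion, a candidate would only count as a good summary if its score also clears the P(Y|X) baseline computed on X itself.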
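Item 14 highlights Texar-PyTorch's Conditional-GPT2 example. As a rough illustration of the underlying idea, conditioning GPT-2's generation on a prefix, here is a sketch that uses the Hugging Face transformers API rather than Texar-PyTorch's own modules; the conditioning text is made up.

```python
# Sketch of conditional generation with GPT-2: the model continues from a
# conditioning prefix. Uses Hugging Face `transformers`, NOT Texar-PyTorch's
# actual API; see the Texar-PyTorch repo for its Conditional-GPT2 example.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

condition = "Review sentiment: positive. Review:"  # hypothetical condition
input_ids = tokenizer.encode(condition, return_tensors="pt")

output_ids = model.generate(
    input_ids,
    max_length=50,
    do_sample=True,  # sample continuations instead of greedy decoding
    top_k=40,        # restrict sampling to the 40 most likely tokens
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```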
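Item 15 describes COMET as adapting pretrained LMs for knowledge-graph construction. A toy sketch of the data side of that idea, assuming nothing beyond the tweet: render (head, relation, tail) triples as text so an LM can be fine-tuned to generate the tail given the head and relation. The triples and the [GEN] marker are illustrative, not the paper's exact format.

```python
# Toy sketch: render (head, relation, tail) triples as text for LM
# fine-tuning, so the model learns to generate the tail after "[GEN]".
# Triples and format are illustrative, not COMET's exact scheme.
from dataclasses import dataclass

@dataclass
class Triple:
    head: str
    relation: str
    tail: str

triples = [  # hypothetical ATOMIC-style triples
    Triple("PersonX stacks kindling and logs", "xEffect", "starts a fire"),
    Triple("PersonX gives PersonY a gift", "oReact", "grateful"),
]

def to_training_text(t: Triple) -> str:
    # Everything before "[GEN]" is the prompt; the tail after it is what
    # the LM is fine-tuned to produce.
    return f"{t.head} {t.relation} [GEN] {t.tail}"

for t in triples:
    print(to_training_text(t))
```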
