Tweets

  1. Retweeted
    Jan 21

    Let me highlight this amazing work in NLP I've read recently, in which you'll find both:
    - a deep discussion of what it means for a neural model to be compositional
    - a deep and insightful comparison of LSTMs, ConvNets & Transformers! 👉

  2. Retweeted
    Jan 6

    Any time you tell a project maintainer that something is "simple":
    1. You're wrong
    2. The code change may be small, but there are other process-related details like tests, documentation, etc.
    3. You just insulted the maintainer by saying you know better than them
    4. You're wrong

  3. Retweeted
    Jan 6

    How To Be Successful (At Your Career, Twitter Edition)

  4. Retweeted
    Jan 4

    I liked the LSH attention in the Reformer: sparse, efficient, simple. Dynamic sparse attention is fascinating & mostly dealt with by:
    – softmax+topK: Recurrent Independent Mechanisms (MILA), Product-Key Memory (FB)
    – 𝛂-entmax: Adaptive Sparse Transformer (DeepSPIN)
    links👇 [1/3]

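The softmax+topK route mentioned in that tweet can be sketched in a few lines. This is a hypothetical toy illustration (plain NumPy, single head, no batching), not the implementation from any of the cited papers: keep only the k largest attention scores per query, mask the rest, and renormalize.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def topk_sparse_attention(Q, K, V, k=2):
    """Dynamic sparse attention via softmax + top-k:
    all but the k largest scores per query are masked to -inf,
    so their attention weights become exactly zero."""
    scores = Q @ K.T / np.sqrt(Q.shape[-1])                  # (n_q, n_k)
    if k < scores.shape[-1]:
        drop = np.argpartition(scores, -k, axis=-1)[:, :-k]  # indices of non-top-k scores
        np.put_along_axis(scores, drop, -np.inf, axis=-1)
    weights = softmax(scores)            # rows sum to 1, with n_k - k exact zeros
    return weights @ V, weights
```

Each query then attends to exactly k keys, which is the "sparse, efficient, simple" property the tweet highlights.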
  5. Retweeted
    Dec 14, 2019

    so, here is a bunch of stuff i find interesting. no particular order. and definitely not comprehensive.
    - creative ways to apply massive LMs. Sure we can fine-tune them with extra supervision. What else can we do with them?

  6. Retweeted
    Dec 5, 2019

    New work with Kazuma Hashimoto, , , and at and ! Our trainable graph-based retriever-reader framework for open-domain QA advances state of the art on HotpotQA, SQuAD Open, Natural Questions Open. 👇1/7

  7. Retweeted
    Jan 3

    “The black box argument is bogus … brains are also black boxes, and we’ve made a lot of progress in understanding how brains work.” Top minds in machine learning predict where AI is going in 2020 via

  8. Retweeted

    At the start of 2019, I wrote down every lesson I learned throughout the year. Some are personal, others professional. Some are small, others big. But they were all learned from experience, not clipped from a book. Here are a few.

  9. Retweeted
    Jan 1

    I wouldn't recommend basing your career on currently popular trends, since these are likely to change by the time you graduate. Instead, figure out what questions/problems most fascinate you and how to make a career out of those. Define your own fields if you have to.

  10. Retweeted
    Dec 29, 2019

    “To start a PhD in ML, without insider referral, you need to do work equivalent to half of a PhD. Hence, in Apr 2019, I decided to dedicate all my time until Jan 2020 to publish in either NeurIPS or ICLR. If I fail, I would become a JavaScript programmer.” — ‼️

  11. Retweeted
    Dec 4, 2019

    So many great lessons in Richard Hamming's "You and Your Research" (1986): - "Continue to plant the little acorns from which the mighty oak trees grow." - Change a "defect to an asset". - "Just hard work is not enough—it must be applied sensibly."

  12. Retweeted
    Dec 1, 2019

    Ever wondered what is sound? How does it get stored inside the computer? I try to answer these and similar questions in this repository + make your computer sound like a violin / harpsichord, play sounds directly from NBs, and more!

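To give a flavour of the question that repository tackles (a hypothetical minimal sketch, not the repository's actual code): digitally, sound is just a sequence of amplitude samples taken at a fixed rate, so a pure tone is a sampled sine wave, typically quantized to 16-bit integers (PCM) for storage.

```python
import numpy as np

SAMPLE_RATE = 44_100  # samples per second (CD quality)

def sine_wave(freq_hz, duration_s, amplitude=0.5):
    """A pure tone: amplitude * sin(2*pi*f*t), sampled at SAMPLE_RATE."""
    t = np.arange(int(SAMPLE_RATE * duration_s)) / SAMPLE_RATE
    return amplitude * np.sin(2 * np.pi * freq_hz * t)

# One second of concert-pitch A (440 Hz), quantized to 16-bit PCM --
# the raw bytes of `pcm16` are what a WAV file's data chunk stores.
samples = sine_wave(440.0, 1.0)
pcm16 = (samples * 32767).astype(np.int16)
```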
  13. Retweeted
    Nov 17, 2019

    Excited to give a guest lecture at later today: "Machine learning interviews: Lessons from both sides". Here are the slides for those who want to follow along. Feedback & questions welcome!

  14. Retweeted
    Oct 27, 2019

    Useful Paper Writing (and reviewing) tips from .

  15. Retweeted
    Nov 2, 2019

    🍾Information Bottleneck🍾 in action at ! (1) Specializing embeddings for parsing () by Xiang & (2) 🍾BottleSum🍾 unsupervised & self-supervised summarization () with

  16. Retweeted
    Nov 1, 2019

    Next week at EMNLP, Aditya Gupta will be presenting his work on "Effective Use of Transformer Networks for Entity Tracking". This paper studies procedural text: descriptions of processes involving complex entity interactions like recipes, scientific processes, etc 1/n

  17. Retweeted
    Oct 31, 2019

    Exciting work by (+ )! Adversarial NLI, a large dataset collected via a multi-round adversarial (weakness-finding) human-&-model-in-the-loop process; allows moving/lifelong-learning target for NLU😀

  18. Retweeted
    Oct 30, 2019

    Excited to share our work on BART, a method for pre-training seq2seq models by de-noising text. BART outperforms previous work on a bunch of generation tasks (summarization/dialogue/QA), while getting similar performance to RoBERTa on SQuAD/GLUE

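The de-noising objective behind BART can be illustrated with a toy noising function (a hypothetical sketch; BART's actual scheme combines span infilling, sentence permutation, and other corruptions): corrupt the input text, then train the seq2seq model to reconstruct the original.

```python
import random

MASK = "<mask>"

def infill_noise(tokens, mask_prob=0.3, rng=None):
    """Toy span infilling: random spans of 1-3 tokens are each
    replaced by a single MASK token.  A seq2seq model trained to
    map the corrupted sequence back to the original must learn
    both to understand the context and to generate the missing text."""
    rng = rng or random.Random(0)  # fixed seed for reproducibility
    out, i = [], 0
    while i < len(tokens):
        if rng.random() < mask_prob:
            out.append(MASK)
            i += 1 + rng.randrange(3)  # skip a span of 1-3 tokens
        else:
            out.append(tokens[i])
            i += 1
    return out
```

The training pair is then `(infill_noise(tokens), tokens)`, with the reconstruction loss computed against the clean side.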
  19. Retweeted
    Oct 28, 2019

    PyTorch-Struct (v0.3 ). New features: autoregressive models / beam search, sparse-max dp, alignment/dtw, parallel semi-markov, k-max, pretty docs () Fun example: gradients of time-warping crf under different semirings.

  20. Retweeted
    Oct 25, 2019

    Thoughts after reading the T5 paper of et al. Thread. An amazing paper (requiring significant compute) that teases apart the effect of various ingredients proposed in Muppetland in the last few months (years?). Some things that stood out / were surprising:

