Lx

@LxYuan

Psalm 128:2

Malaysia
Joined March 2012

Tweets


  1. Pinned Tweet
    18 Apr 2016

    Accept responsibility for your own actions. No excuses, no regrets, no alibis; don't point the finger, don't blame anybody else. – Tony Doherty

  2. Retweeted
    31 Jan

    I chatted with about the and her advice is to see a doctor sooner rather than later. I guess it's not a bad one & hope everyone is well! On the other hand, Meena is also excited about technology, especially VR!

  3. Retweeted
    31 Jan

    That's cool. Pandas has added a `to_markdown()` method for formatting dataframes.

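The `to_markdown()` method mentioned above (added in pandas 1.0; it relies on the optional `tabulate` package) renders a DataFrame as a GitHub-flavored markdown table. As a rough, dependency-free sketch of what that formatting produces (the function below is illustrative, not pandas internals):

```python
def to_markdown(headers, rows):
    """Format rows of data as a GitHub-flavored markdown table,
    roughly what pandas.DataFrame.to_markdown() produces."""
    cells = [headers] + [[str(v) for v in r] for r in rows]
    widths = [max(len(row[i]) for row in cells) for i in range(len(headers))]

    def line(row):
        return "| " + " | ".join(c.ljust(w) for c, w in zip(row, widths)) + " |"

    sep = "|" + "|".join("-" * (w + 2) for w in widths) + "|"
    return "\n".join([line(headers), sep] + [line(r) for r in cells[1:]])

print(to_markdown(["name", "score"], [["ann", 3], ["bob", 10]]))
# | name | score |
# |------|-------|
# | ann  | 3     |
# | bob  | 10    |
```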
  4. Retweeted
    20 Aug 2019

    New blogpost! Transformers from scratch. Modern transformers are super simple, so we can explain them in a really straightforward manner. Includes pytorch code.

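The blogpost above argues that the core of a transformer is compact; the central operation, single-head scaled dot-product self-attention, can be sketched in a few lines of numpy (shapes and weight names here are illustrative, not taken from the post's pytorch code):

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention over a
    sequence X of shape (seq_len, d_model)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)           # (seq_len, seq_len) affinities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V                        # each position: weighted mix of values

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))                   # 5 tokens, d_model = 8
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)                              # (5, 8)
```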
  5. Retweeted
    15 Jan

    I've started to upload the videos for the Neural Nets for NLP class here: We'll be uploading the videos regularly throughout the rest of the semester, so please follow the playlist if you're interested.

  6. Retweeted
    14 Jan

    Happy to release NN4NLP-concepts! It's a typology of important concepts you should know to implement SOTA NLP models using neural nets. We'll reference this in CMU CS11-747 this year, trying to maximize coverage. 1/3

  7. Retweeted
    10 Jan

    Now that neural nets have fast implementations, a bottleneck in pipelines is tokenization: strings➡️model inputs. Welcome 🤗Tokenizers: ultra-fast & versatile tokenization led by : -encode 1GB in 20sec -BPE/byte-level-BPE/WordPiece/SentencePiece... -python/js/rust...

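One of the schemes the Tokenizers library implements, byte-pair encoding, can be illustrated with a toy pure-Python version (the real library is a fast Rust implementation; this sketch only shows the merge-learning idea, on a word-frequency dict):

```python
from collections import Counter

def merge_pair(syms, pair):
    """Fuse every adjacent occurrence of `pair` in a symbol tuple."""
    out, i = [], 0
    while i < len(syms):
        if i + 1 < len(syms) and (syms[i], syms[i + 1]) == pair:
            out.append(syms[i] + syms[i + 1])
            i += 2
        else:
            out.append(syms[i])
            i += 1
    return tuple(out)

def bpe_merges(word_freqs, num_merges):
    """Learn BPE merges: repeatedly fuse the most frequent adjacent pair."""
    vocab = {tuple(w): f for w, f in word_freqs.items()}
    merges = []
    for _ in range(num_merges):
        pairs = Counter()
        for syms, f in vocab.items():
            for a, b in zip(syms, syms[1:]):
                pairs[(a, b)] += f
        if not pairs:
            break
        best = max(pairs, key=pairs.get)      # most frequent adjacent pair
        merges.append(best)
        vocab = {merge_pair(syms, best): f for syms, f in vocab.items()}
    return merges

merges = bpe_merges({"low": 5, "lower": 2, "lowest": 3}, 2)
print(merges)
```

After two merges on this tiny corpus, the shared prefix "low" becomes a single token.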
  8. Retweeted
    10 Jan

    Workera recently published a report on AI career pathways. It doesn't mention hardware, and I don't see the difference between SWE-ML and ML Engineer, but it highlights some important distinctions. I also like the talk on the structure of AI teams.

  9. Retweeted
    9 Jan

    How to rescue a manufacturing industry that can't do "deep learning": 10 hard questions for industrial AI in 2019

  10. Retweeted
    25 Dec 2019

    Looking back, my last decade was like a neural network. Some parts were linear. Some were nonlinear. I never seemed to get enough data, and always got stuck in local minima. There was a lot of learning. I can't explain how any of it worked, but the results came out alright.

  11. Retweeted
    23 Dec 2019

    It's incredibly sad for me to say that my time at NVIDIA has ended. I'm grateful for the chance to work w/ so many wonderful people on challenging projects. As I'm going on a new adventure, I put down a quick note on the lessons I learned over the year.

  12. 19 Dec 2019

    Hugging Face raises $15 million to build the definitive natural language processing library – TechCrunch

  13. Retweeted
    28 Nov 2019

    New preprint: "How Can We Know What Language Models Know?" Recent work queries LMs for knowledge ("profession") with textual questions ("X's profession is Y"). We show you need the *right* questions: with BERT, just changing how you ask raises accuracy from 31% to 38%!

  14. Retweeted
    14 Nov 2019

    ACL 2019 open-source paper | Attention-based relation prediction for knowledge graphs

  15. Retweeted
    9 Nov 2019

    XLM-RoBERTa: Amazing results on XLU and GLUE benchmarks from Facebook AI: large transformer network trained on 2.5TB of text from 100 languages. - ArXiv paper:...

  16. 9 Nov 2019

    From the perspective of researchers who want to use T5, its size is a huge obstacle: The full model is more than thirty times the size of established general-purpose NLP models like BERT. Google T5 Explores the Limits of Transfer Learning by

  17. 6 Nov 2019

    “Lessons from How to Lie with Statistics” by

  18. 26 Oct 2019
  19. 28 Sep 2019

    "But adults hiding behind children to avoid the difficult conversations that must take place about how to achieve solutions is nothing other than moral cowardice..."

  20. Retweeted
    25 Sep 2019

    I watched the lectures (they are free online), loved the course, recommended it to everyone, and now I'm super excited to be part of it.

  21. Retweeted
    18 Sep 2019

    Oh hey, there's a new paper about the YAKE unsupervised keyword extraction algorithm we've been talking about during the live-coding. 📑👀 Looks like it'll be free to read until November 6:

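YAKE, mentioned in the last retweet, scores candidate keywords from several statistical term features (casing, position, frequency, dispersion) without any training data. As a loose illustration of the unsupervised idea only (this toy stand-in uses just frequency and first position, and is not YAKE's actual scoring):

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "of", "and", "to", "in", "is", "it", "for", "on"}

def keywords(text, k=3):
    """Toy unsupervised keyword extraction: prefer frequent terms that
    appear early in the text. Lower score = better candidate."""
    terms = [t for t in re.findall(r"[a-z]+", text.lower())
             if t not in STOPWORDS]
    first_pos = {}
    for i, t in enumerate(terms):
        first_pos.setdefault(t, i)            # earliest occurrence
    freq = Counter(terms)
    score = {t: (1 + first_pos[t]) / freq[t] for t in freq}
    return sorted(freq, key=lambda t: score[t])[:k]

print(keywords("Keyword extraction finds keyword candidates; "
               "extraction is unsupervised."))
# ['keyword', 'extraction', 'finds']
```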
