Piotr Czapla

@PiotrCzapla

A technology enthusiast focused on applications of deep learning. Co-Founder and CEO at

Joined: August 2009

Tweets


  1. Retweeted
    Jan 25

Fascinating. I'm really surprised Google is using depthwise convs so much in their research, when they perform so badly on their TPUs.
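For context on the tweet above: a depthwise convolution applies one filter per input channel instead of mixing all channels, which slashes the parameter and FLOP count. A minimal NumPy sketch (my illustration, not from the tweet) of what it computes, plus the parameter comparison; the TPU complaint is typically that this low FLOP count comes with low arithmetic intensity, which matrix-multiply hardware exploits poorly.

```python
import numpy as np

# Parameter counts (ignoring bias) for a k x k kernel, C_in -> C_out channels:
#   standard conv:  C_out * C_in * k * k   (every filter sees every channel)
#   depthwise conv: C_in * k * k           (one k x k filter per channel)

def standard_conv_params(c_in, c_out, k):
    return c_out * c_in * k * k

def depthwise_conv_params(c_in, k):
    return c_in * k * k

def depthwise_conv2d(x, weights):
    """Naive depthwise conv (valid padding, stride 1).

    x: (C, H, W) input, weights: (C, k, k) -- one filter per channel,
    no cross-channel mixing.
    """
    c, h, w = x.shape
    k = weights.shape[1]
    out = np.zeros((c, h - k + 1, w - k + 1))
    for ch in range(c):                      # each channel is independent
        for i in range(h - k + 1):
            for j in range(w - k + 1):
                out[ch, i, j] = np.sum(x[ch, i:i + k, j:j + k] * weights[ch])
    return out

# MobileNet-style comparison: a 3x3 conv over 256 channels.
print(standard_conv_params(256, 256, 3))   # 589824 weights
print(depthwise_conv_params(256, 3))       # 2304 weights
```

In frameworks this is the `groups == in_channels` case of a standard conv layer; the per-channel loop above is the whole trick.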

  2. Retweeted
    Jan 24

is one of the most important techniques. I don't often recommend PhD theses, but 's is exceptional. He's a brilliant writer! Check out this taxonomy / table of contents! 👇👇👇

  3. Retweeted
    Jan 7

    I completed my 1st data science project ~30 years ago. Since then I've been continuously developing a questionnaire I use for all new data projects, to ensure the right info is available from the start. I'm sharing it publicly today for the first time.

  4. Retweeted
    Jan 8

    TIL Google invented universal language model pre-training with BERT.

  5. Retweeted
    Jan 8

    If you're interested in learning about what's coming in fastai v2, you might be interested in this overview of the key pieces. Many thanks to , , , & all the Swift for TensorFlow team for providing this opportunity.

  6. Retweeted
    Jan 9
    Replying to

    ML is often better in-house. APIs have to be fully general, which is a big disadvantage. For any particular task you have lots of contextual features, and you care about some types of errors and not others. If the project has high reward, a custom solution is often worth it.

  7. Retweeted
    Jan 11

    In our forthcoming book & course, you'll learn how to build a real deep learning web app from scratch, including downloading images using Bing's API. You'll also learn what can go wrong! (h/t )

  8. Retweeted
    Jan 5

NLP Year in Review 2019: an extensive list of interesting publications, creative and societal applications, tools and datasets, articles, and resources of 2019, by .

  9. Retweeted
    Dec 18, 2019
  10. Retweeted
    Dec 4, 2019

    The GermEval 2020 shared task on "the prediction of intellectual ability and personality traits from text" was just announced on the corpora mailing list. Hopefully even the title of this has pinged your problem sensors. >>

  11. Retweeted
    Dec 13, 2019

We're going to look back over the '20s and see the work of  as the most significant contribution to AI in the decade. I can't think of anything cooler than working with them.

  12. Dec 13, 2019

One of the best pieces of news of 2019. If someone is to tackle the information overload in ML, it would be them, after joining FB AI.

  13. Retweeted
    Dec 12, 2019

Machine Learning in a company is 10% Data Science & 90% other challenges. It's VERY hard. Everything in this guide, "Best Practices of ML Engineering", is ON POINT, and it's stuff you won't learn in an ML book. This is a lifesaver project.

  14. Retweeted
    Dec 13, 2019

    I had a chance to discuss research w/ these amazing researchers. Takeaways: 1. Too many ML papers & most are bad. 2. For PhD apps, & look at blog & OSS too. 3. Avoid trendy topics. Work on what you believe in. 4. Research may not be applicable & it's ok.

  15. Retweeted
    Dec 12, 2019

Hey Twitter data science folks! Interested in fastai and want to learn fastai v2 along with some techniques not shown in the original lectures? Come learn with me! I'll be running a course and study groups next semester! More info:

  16. Retweeted
    Dec 3, 2019
    Replying to

    We recently found that a randomly initialized + fine-tuned BERT performs surprisingly well in 5/6 NLP tasks (80% acc for sentiment analysis!). I guess fine-tuning could be interpreted as tweaking the net so as to amplify the successful subnetwork? Paper:
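The "amplify a successful subnetwork" reading above echoes the classical random-features observation: a frozen, randomly initialized feature extractor plus a small trained head can already score surprisingly well. A toy NumPy sketch of that effect (my illustration only, not the paper's BERT setup; the data, sizes, and learning rate are all made up):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic binary classification data: label depends linearly on two inputs.
X = rng.normal(size=(400, 20))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# Frozen random "encoder": one untrained random projection + ReLU.
W_enc = rng.normal(size=(20, 64)) / np.sqrt(20)
H = np.maximum(X @ W_enc, 0.0)   # features from the random, never-trained encoder

# Fine-tune only a logistic-regression head on top, with plain gradient descent.
w = np.zeros(64)
b = 0.0
for _ in range(500):
    z = np.clip(H @ w + b, -30.0, 30.0)      # clip logits to avoid overflow
    p = 1.0 / (1.0 + np.exp(-z))             # sigmoid
    grad = p - y                             # d(log-loss)/d(logit)
    w -= 0.1 * H.T @ grad / len(y)
    b -= 0.1 * grad.mean()

acc = ((p > 0.5) == y).mean()
print(f"accuracy with a frozen random encoder: {acc:.2f}")
```

Here only the head is trained, yet accuracy ends up well above chance, which is one reason random-init baselines deserve a place in fine-tuning ablations.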

  17. Retweeted

Encoder 🦄🤝🦄 decoders are now part of the 🤗 transformers library! I wrote a tutorial to explain how we got there and how to use them 👉 Bonus: a sneak peek into upcoming features ✨

  18. Retweeted
    Dec 4, 2019

Interesting work! I believe that sooner rather than later someone will use neural networks to translate textbooks and papers into a theorem prover language like Lean or Coq.

  19. Retweeted
    Dec 4, 2019

Our new paper, Deep Learning for Symbolic Mathematics, is now on arXiv. We added *a lot* of new results compared to the original submission. With  (1/7)

  20. Retweeted
    Dec 3, 2019

    If you're interested in learning both classic and modern NLP techniques in a code first way, there's a study group for 's NLP course (taught by ), organized through , starting in 10 days.

