Emmanuel Ameisen

@mlpowered

ML Engineering. Writing a book for about building practical ML. Previously: ML, Head of AI at

San Francisco, CA
Joined June 2017

Tweets


  1. Pinned tweet
    25 Jul 2019

    13 months. 250 pages. I wrote an ML book! Want to learn how to ship ML in practice? Check it out! Includes tips from , , and more! It'll be out in winter & you can preorder it now. Amazon: O'Reilly:

  2. Retweeted
    1 Feb

    Hey , look what showed up! 😃

  3. Retweeted
    1 Feb

    Just got my copy of Building Machine Learning Powered Applications by Emmanuel Ameisen. Having worked with as Head of AI at , I vouch for how fantastic his guidance is on this topic. Congrats Manu!

  4. 31 Jan

    Some of the first readers have tweeted or dm’d me photos of their books when they’ve received them. It warms my heart every time, please feel free to keep doing so😍!

  5. 30 Jan

    Copies of BMLPA () are starting to arrive! Definitely the first part of this writing process that has been ahead of schedule😛

  6. 29 Jan

    It’s ready (ish), my blog is live at It will be where I post my writing going forward. For now, I’ve ported my 2019 predictions over from Medium, and they were surprisingly accurate! Check them out and subscribe for updates!

  7. 28 Jan

    I am being told the slide in question did not make the editing cut... oops. Here is the entire slide deck, all slides included.

    Show this thread
  8. 27 Jan

    Last week the team had me over to chat about some of the practical ML tips I’ve been writing about. The recording is available now. It’s a short video about why and how you should look at your data, including a slide copied from :)

    Show this thread
  9. 24 Jan

    Much of the success of ML is conditioned on how you represent data. This is just as true for deep learning, especially in domains outside of vision. If you can encode your data in a format usable by SotA models, you are in a good spot. If you can’t, modeling gets harder.
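
    A minimal sketch of the point above, assuming the Hugging Face transformers and torch packages (the model name is only illustrative): once raw inputs are mapped into the token format a pretrained encoder expects, its embeddings can feed any downstream model.

    ```python
    # Sketch: encoding raw text into fixed-size vectors with a pretrained encoder.
    # Assumes the transformers and torch packages; the model name is illustrative.
    import torch
    from transformers import AutoModel, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    encoder = AutoModel.from_pretrained("bert-base-uncased")

    texts = ["How do I ask a good question?", "What is a GAN?"]

    # Convert raw strings into the token IDs and masks the model was trained on.
    inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

    with torch.no_grad():
        outputs = encoder(**inputs)

    # Mean-pool token embeddings into one vector per example; these vectors can
    # feed any downstream model (logistic regression, nearest neighbours, ...).
    embeddings = outputs.last_hidden_state.mean(dim=1)
    print(embeddings.shape)  # (2, hidden_size)
    ```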

  10. 23 Jan

    Just gave a talk on vectorization to generate features and inspect model errors at ’s NLP breakfast. Nice and engaging group, and a relatively short talk (20-30 minutes). I’ll share the video when it’s out, but if you want the takeaways ahead of time, check out the book!
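
    A rough sketch of that workflow, assuming scikit-learn and a purely illustrative toy dataset: vectorize the text, fit a simple classifier, then surface the most confidently wrong examples for inspection.

    ```python
    # Sketch: vectorize text, fit a simple classifier, and surface the worst errors.
    # Assumes scikit-learn; the tiny dataset is purely illustrative.
    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    texts = ["great question, very clear", "unclear, please add details",
             "well explained example", "what does this even mean",
             "clear and helpful answer", "no idea what is being asked"]
    labels = np.array([1, 0, 1, 0, 1, 0])  # 1 = high quality, 0 = low quality

    vectorizer = TfidfVectorizer()
    features = vectorizer.fit_transform(texts)

    clf = LogisticRegression().fit(features, labels)
    probs = clf.predict_proba(features)[:, 1]

    # Error inspection: rank examples by how far the predicted probability is
    # from the true label and read the most confidently wrong ones first.
    gap = np.abs(probs - labels)
    for i in np.argsort(-gap)[:3]:
        print(f"label={labels[i]} p={probs[i]:.2f} text={texts[i]!r}")
    ```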

  11. 23 Jan

    At the inaugural x , and it is starting off strong! First, a talk about using transformers to predict results of chemical reactions by Karl Heyer. The model works by translating text representations of molecules, crazy! Next up, and MuseNet!

  12. 22 Jan

    Working on spinning up a blog and instantly deactivated all options to comment. I love hearing feedback or recommendations but comments feel like the worst medium for that. Even big news sites’ comment sections are a dumpster fire.

  13. 22 Jan

    BMLPA () sometimes uses arbitrary names for prod ML concepts:
    Have a model screen inputs to another model -> filtering model
    Have a model in prod but don’t use it -> shadow mode
    May not be the best names, but it beats naming each concept after a muppet.
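
    A minimal sketch of both patterns; the model and logger objects here are hypothetical stand-ins, not code from the book.

    ```python
    # Sketch of the two patterns named above; the filter_model, prod_model,
    # shadow_model, and logger objects are hypothetical stand-ins.
    def serve_prediction(features, filter_model, prod_model, shadow_model, logger):
        # Filtering model: a cheap model screens inputs before the main model
        # ever sees them (e.g. rejecting out-of-scope or malformed requests).
        if not filter_model.predict(features):
            return {"status": "rejected_by_filter"}

        # Production model: its output is what the user actually receives.
        prod_prediction = prod_model.predict(features)

        # Shadow mode: the candidate model runs on the same input, but its
        # prediction is only logged for offline comparison, never served.
        shadow_prediction = shadow_model.predict(features)
        logger.info("shadow comparison: prod=%s shadow=%s",
                    prod_prediction, shadow_prediction)

        return {"status": "ok", "prediction": prod_prediction}
    ```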

  14. Retweeted
    21 Jan

    We will have a new this Thursday in our office! An exciting discussion on how to build intuitions before building models with , whose new ML book just made it to the top of the charts! How to join & learn more:

  15. 17 Jan

    The goal of BMLPA () is to be practical, so it comes with:
    - Notebooks demonstrating methods to explore data, debug models, and more
    - A Flask app serving models built throughout the book!
    Check it out, PRs and suggestions welcome!
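
    For flavour, a bare-bones sketch of what a Flask prediction endpoint can look like; the file names and helpers below are placeholders rather than the book's actual app.

    ```python
    # Sketch: a minimal Flask endpoint serving a trained model. Assumes Flask and
    # joblib are installed; the serialized model and vectorizer files are placeholders.
    from flask import Flask, jsonify, request
    from joblib import load

    app = Flask(__name__)
    model = load("model.joblib")            # hypothetical trained classifier
    vectorizer = load("vectorizer.joblib")  # hypothetical fitted vectorizer

    @app.route("/predict", methods=["POST"])
    def predict():
        text = request.get_json().get("text", "")
        features = vectorizer.transform([text])
        score = model.predict_proba(features)[0, 1]
        return jsonify({"score": float(score)})

    if __name__ == "__main__":
        app.run(debug=True)
    ```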

  16. 17 Jan

    This morning when I tried to play a song, my Google Play Rick rolled me so I’m definitely not trusting anyone or anything anymore.

  17. 16 Jan

    Getting into spaced repetition for memory thanks to and ’s work. It feels like unsupervised vs supervised learning:
    Normal reading is unsupervised.
    Spaced repetition provides labels you get tested on at successive epochs, to minimize memory loss.

  18. 16 Jan

    Amazing articles on ML in production, and constraints of applied systems and organizations. I particularly enjoyed reading twelve truths of ML for the real world:

  19. 15 Jan

    One open problem in ML is designing interfaces between models. Having a model as part of your application is a challenge. Building a new model that depends on the previous one is even harder. Now you can’t train the first one without retraining the second...
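
    A rough sketch of the coupling, on synthetic data with scikit-learn: when a downstream model consumes an upstream model's score as a feature, retraining the upstream model silently shifts the downstream model's inputs.

    ```python
    # Sketch: a downstream model consuming an upstream model's score as a feature.
    # Retraining model_a shifts the distribution of its scores, so model_b's
    # features change and model_b has to be retrained too. Data is synthetic.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))
    y_a = (X[:, 0] > 0).astype(int)             # upstream task labels
    y_b = (X[:, 0] + X[:, 1] > 0).astype(int)   # downstream task labels

    model_a = LogisticRegression().fit(X, y_a)

    # model_b's input includes model_a's score: an implicit interface between models.
    score_a = model_a.predict_proba(X)[:, [1]]
    features_b = np.hstack([X, score_a])
    model_b = LogisticRegression().fit(features_b, y_b)

    # Any change to model_a (new data, new architecture, recalibration) changes
    # score_a, which invalidates the feature distribution model_b was trained on.
    ```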

  20. 14 Jan

    For the book I built four successive versions of an ML app: First a heuristic with a bunch of rules. Then a simple model. Then, a much more complex model. Finally, a model that simplified the previous approach. Same lifecycle I’ve seen in industry!
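
    A sketch of why a shared interface makes that progression workable, with illustrative class names rather than the book's code: every version exposes the same predict method, so each can be swapped in without touching the rest of the app.

    ```python
    # Sketch: successive versions behind one predict() interface, so the heuristic
    # baseline and trained models are interchangeable. Names are illustrative.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    class HeuristicModel:
        """Version 1: a bunch of hand-written rules, no training required."""
        def predict(self, texts):
            return [1 if ("?" in t and len(t.split()) > 5) else 0 for t in texts]

    class SimpleModel:
        """Version 2: a basic vectorizer plus a linear classifier."""
        def __init__(self):
            self.vectorizer = TfidfVectorizer()
            self.clf = LogisticRegression()

        def fit(self, texts, labels):
            self.clf.fit(self.vectorizer.fit_transform(texts), labels)
            return self

        def predict(self, texts):
            return self.clf.predict(self.vectorizer.transform(texts))

    # The surrounding app only relies on .predict, so later, more complex versions
    # can replace earlier ones without changing the rest of the pipeline.
    ```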

  21. Retweeted
    14 Jan

    I think self-supervised learning is probably one of the most interesting, feasible & useful (and maybe a bit under-appreciated) approaches in deep learning

