Olivier Grisel

@ogrisel

Engineer at , scikit-learn developer supported by . Tweets about Python and Machine Learning / Deep Learning.

Paris, France
Joined August 2008

Tweets


  1. Pinned Tweet
    Mar 27, 2017

    Slides and notebooks for the deep learning course and I taught at recsys vision nlp

  2. Retweeted
    Feb 3
  3. Retweeted
    Jan 31

    Transformers 2.4.0 is out 🤗
    - Training transformers from scratch is now supported
    - New models, including *FlauBERT*, Dutch BERT, *UmBERTo*
    - Revamped documentation
    - First multi-modal model, MMBT from , text & images
    Bye bye Python 2 🙃

  4. Retweeted
    Jan 30

    Did you know that Breiman published both the bagging and random forest papers *after* he retired?!? (This is from our forthcoming book: )

  5. Retweeted
    Jan 30

    Happy birthday !! Ten years from the first release and still young!

  6. Retweeted
    Jan 29

    Numba 0.48.0 has been released! This one comes shortly after 0.47.0 as we had many PRs that didn't make it in time. Quite a few bug fixes (including many CUDA fixes thanks to NVIDIA) and many new string methods contributed by Intel. More details:

  7. Retweeted
    Jan 30

    Pandas 1.0 is here!
    * Read the release notes:
    * Read the blogpost reflecting on what 1.0 means to our project:
    * Install with conda / PyPI:
    Thanks to our 300+ contributors to this release.

  8. Jan 30
  9. Retweeted
    Jan 29

    Amazing work. PS: we're all so screwed.

  10. Retweeted
    Jan 29

    10 PR already submitted and the first one just merged! Having fun at the Paris Sprint of the Decade!

  11. Jan 29

    In the meantime, the Quick and Simple Text Selection extension is a good language-agnostic workaround for lists, dicts, sets and strings:

  12. Jan 29

    vscode-python users: please upvote the smart select support for Python code: . It would make it possible to refactor code without having to use the mouse to select structured code sections. Here is how it looks for JavaScript code:

  13. Retweeted
    Jan 28

    If you're interested in becoming involved in my new project dabl, I tagged some easy first issues. Given that it's pretty early in the development, the barrier to entry should hopefully be much lower than in sklearn, for example:

  14. Retweeted
    Jan 15

    1/ New paper on an old topic: it turns out FGSM works as well as PGD for adversarial training!* *Just avoid catastrophic overfitting, as seen in the picture. Paper: Code: Joint work with and to be at

  15. Retweeted
    Jan 22

    The quiet semisupervised revolution continues

  16. Retweeted
    Jan 22

    Happy to share some of the work and I have been putting in to do with on 1 billion rows.

  17. Retweeted
    Jan 22

    FixMatch: focusing on simplicity for semi-supervised learning and improving state of the art (CIFAR 94.9% with 250 labels, 88.6% with 40). Collaboration with Kihyuk Sohn, Nicholas Carlini

  18. Retweeted
    Jan 21

    This is a very interesting paper. It shows that a tweaked ResNet50 is about as accurate as EfficientNet-B4 but >3x faster. The EfficientNet paper measured FLOPS, which is a theoretical performance measure, rather than time, which is what actually matters.

  19. Retweeted
    Jan 17

    Just in case anyone thinks that wind power forecasting is a solved problem: Here's the most recent UK national wind power forecast (in yellow) & actual (in red). That's a 4 GW error! More research definitely needed :) From

  20. Retweeted
    Jan 17

    Thanks to *big* team effort, we released the code and the trained models from our LibriSpeech acoustic model architecture study and SOTA results here

  21. Retweeted
    Jan 17

    Cool theory paper presenting a problem that:
    - can be efficiently learned by SGD with a DenseNet with x^2 nonlinearity,
    - cannot be efficiently learned by any kernel method, including the NTK.

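The FGSM adversarial-training retweet above (item 14) refers to the fast gradient sign method. As a minimal illustration only, and not the paper's code, here is a sketch of a single FGSM perturbation for a toy linear logistic model with labels in {-1, +1}; the function name and setup are hypothetical:

```python
import numpy as np

def fgsm_perturb(x, y, w, b, eps):
    """Return x moved one FGSM step of size eps (L-infinity ball).

    Toy linear model f(x) = w.x + b with logistic loss
    L = log(1 + exp(-y * f(x))), y in {-1, +1}.
    """
    margin = y * (x @ w + b)                  # y * f(x)
    # Gradient of the logistic loss with respect to the input x:
    grad_x = -y * w / (1.0 + np.exp(margin))
    # FGSM: step eps in the sign of the input gradient.
    return x + eps * np.sign(grad_x)
```

Adversarial training would then fit the model on `fgsm_perturb(x, ...)` instead of `x`; the tweet's point is that this cheap one-step attack can match multi-step PGD during training, provided catastrophic overfitting is avoided.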

