Sander Dieleman

@sedielem

Research Scientist at DeepMind. I tweet about deep learning (research + software), music, generative models, Kaggle, Lasagne ()

Joined December 2014

Tweets


  1. Jan 21

    Deep Learning Lecture Series at UCL, starting Feb 3rd. (Free) tickets are available now. I will be talking about convolutional neural networks on Feb 17th!

  2. Retweeted
    Jan 15

    Differentiable Digital Signal Processing (DDSP)! Fusing classic interpretable DSP with neural networks. ⌨️ Blog: 🎵 Examples: ⏯ Colab: 💻 Code: 📝 Paper: 1/

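    The appeal of DDSP is that classic synthesis components (oscillators, filters, reverb) can be written as differentiable array operations and driven by a neural network. Below is a minimal, illustrative sketch of a differentiable additive (harmonic) synthesizer of the kind such models control; the function name, shapes, and defaults are assumptions for illustration, not taken from the DDSP codebase.

    ```python
    import numpy as np

    def harmonic_synth(f0_hz, harmonic_amps, sample_rate=16000):
        """Additive synthesis: a bank of sinusoids at integer multiples of f0.

        f0_hz:         (n_samples,) fundamental frequency at every sample
        harmonic_amps: (n_samples, n_harmonics) per-harmonic amplitude envelopes
        Every step is a differentiable array op, so in a DDSP-style model the
        amplitudes (and f0) can be predicted by a neural network and trained
        end-to-end with an audio reconstruction loss.
        """
        n_harmonics = harmonic_amps.shape[1]
        harmonic_freqs = f0_hz[:, None] * np.arange(1, n_harmonics + 1)[None, :]
        # Integrate instantaneous frequency to obtain phase.
        phases = 2.0 * np.pi * np.cumsum(harmonic_freqs / sample_rate, axis=0)
        # Mute harmonics above the Nyquist frequency to avoid aliasing.
        amps = np.where(harmonic_freqs < sample_rate / 2, harmonic_amps, 0.0)
        return np.sum(amps * np.sin(phases), axis=1)

    # Example: one second of a 220 Hz tone with a 1/k harmonic roll-off.
    n = 16000
    audio = harmonic_synth(np.full(n, 220.0),
                           np.ones((n, 10)) / np.arange(1, 11)[None, :])
    ```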
  3. Retweeted
    Jan 1

    Very excited to share where we show an AI system that outperforms specialists at detecting breast cancer during screening in both the UK and US. Joint work with and published in today!

  4. Retweeted
    Dec 18, 2019

    My AI art online gallery for 2019 is finally live 🎉 🎉🎉 Check out all the art, music and design projects submitted to our Workshop😍

  5. Retweeted
    Dec 14, 2019

    Poster Session 1 happening at the Workshop until 2.30pm. Drop by for Korean abstract painting, creative GANs and generative models without data 🚀

  6. Retweeted
    Dec 13, 2019

    I'm so excited about the program we've put together for Saturday's ML for Creativity and Design Workshop 3.0. Aside from the amazing accepted talks and posters, we have a diverse set of invited speakers I want to highlight in this thread.

  7. Retweeted
    Dec 10, 2019

    Full Schedule and accepted papers for our Creativity Workshop now live on See you on Saturday 14th Dec in West 223-224 🤖🎨

  8. Retweeted
    Dec 9, 2019

    Unsupervised pre-training now outperforms supervised learning on ImageNet for any data regime (see figure) and also for transfer learning to Pascal VOC object detection

  9. Dec 8, 2019

    Heading to Vancouver for ! I'll be around all week, check out our workshop on ML for creativity and design on Saturday!

  10. Retweeted
    Dec 6, 2019

    Looking for something to read on your flight to ? Read about Normalizing Flows from our extensive review paper (also with new insights on how to think about and derive new flows) with

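    For context, the identity such a review is organised around is the standard change-of-variables formula (a textbook result, not quoted from the paper itself):

    ```latex
    % Density of x = f(z) under an invertible, differentiable map f with base density p_Z:
    \[
      p_X(x) \;=\; p_Z\!\bigl(f^{-1}(x)\bigr)\,
      \left| \det \frac{\partial f^{-1}(x)}{\partial x} \right|
    \]
    % Composing simple invertible layers f = f_K \circ \dots \circ f_1 keeps sampling and
    % log-density evaluation tractable, because the layers' log-determinants simply add.
    ```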
  11. Retweeted
    Nov 26, 2019

    End-to-end training of sparse deep neural networks with little-to-no performance loss. Check out our new paper: “Rigging the Lottery: Making All Tickets Winners” (RigL👇) ! 📃 📁 with and

    80% sparse ResNet-50
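    As I understand RigL (from the paper; the mechanism is not spelled out in the tweet), the network stays sparse throughout training: periodically, the lowest-magnitude active weights are dropped and the same number of connections are regrown where the dense gradient magnitude is largest. A minimal NumPy sketch of one such drop-and-grow step, with illustrative names and an arbitrary update fraction:

    ```python
    import numpy as np

    def rigl_drop_grow_step(weights, mask, grads, update_fraction=0.1):
        """One drop-and-grow update for a sparse weight tensor.

        weights, mask and grads share the same shape; mask holds 0s and 1s.
        Drops the smallest-magnitude active weights, then regrows the same
        number of connections where the dense gradient magnitude is largest
        among positions that were inactive before this update.
        """
        n_update = int(update_fraction * mask.sum())

        # Growth candidates: positions inactive going into the update.
        inactive_idx = np.flatnonzero(mask == 0)

        # Drop: zero out the n_update active weights with the smallest magnitude.
        active_idx = np.flatnonzero(mask)
        drop_idx = active_idx[np.argsort(np.abs(weights.flat[active_idx]))[:n_update]]
        mask.flat[drop_idx] = 0
        weights.flat[drop_idx] = 0.0

        # Grow: activate the n_update candidates with the largest gradient.
        grow_idx = inactive_idx[np.argsort(-np.abs(grads.flat[inactive_idx]))[:n_update]]
        mask.flat[grow_idx] = 1
        weights.flat[grow_idx] = 0.0  # newly grown connections start at zero

        return weights, mask

    # Example with a random 80%-sparse layer:
    rng = np.random.default_rng(0)
    w = rng.standard_normal((256, 256))
    m = (rng.random(w.shape) < 0.2).astype(np.int8)
    w *= m
    g = rng.standard_normal(w.shape)
    w, m = rigl_drop_grow_step(w, m, g)
    ```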
  12. Retweeted
    Nov 26, 2019

    “Fast Sparse ConvNets”, a collaboration w/ [], implements fast Sparse Matrix-Matrix Multiplication to replace dense 1x1 convolutions in MobileNet architectures. The sparse networks are 66% the size and 1.5-2x faster than their dense equivalents.

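    The reason sparsity pays off here is that a 1x1 convolution is just a matrix multiply applied at every spatial position, so pruning the kernel turns it into a sparse-dense matmul. A minimal NumPy/SciPy sketch of that equivalence (nothing below reflects the paper's optimized kernels):

    ```python
    import numpy as np
    from scipy import sparse

    def conv1x1_dense(x, w):
        """Dense 1x1 convolution: x has shape (H, W, C_in), w has shape (C_in, C_out)."""
        h, w_dim, c_in = x.shape
        return (x.reshape(-1, c_in) @ w).reshape(h, w_dim, -1)

    def conv1x1_sparse(x, w_csr):
        """The same 1x1 convolution with a pruned kernel stored as a SciPy CSR matrix."""
        h, w_dim, c_in = x.shape
        # Compute X @ W as (W^T @ X^T)^T so the sparse operand leads the matmul.
        out = (w_csr.T @ x.reshape(-1, c_in).T).T
        return out.reshape(h, w_dim, -1)

    # A 90%-sparse kernel produces the same output as its dense counterpart.
    rng = np.random.default_rng(0)
    x = rng.standard_normal((8, 8, 64)).astype(np.float32)
    w = rng.standard_normal((64, 128)).astype(np.float32)
    w[rng.random(w.shape) < 0.9] = 0.0  # prune 90% of the kernel
    assert np.allclose(conv1x1_dense(x, w),
                       conv1x1_sparse(x, sparse.csr_matrix(w)), atol=1e-4)
    ```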
  13. Retweeted
    Nov 5, 2019

    Here are the slides of our tutorial on "Waveform-based music processing with deep learning" organised by , Jongpil Lee and myself! - Zenodo: - Google Slides:

  14. Nov 3, 2019

    I'm at until Wednesday! My first ISMIR in 5 years :) I'll be co-presenting a tutorial about raw audio music processing and generation with and Jongpil Lee on Monday afternoon.

  15. Retweeted
    Oct 30, 2019

    My PhD thesis is now available on arXiv: Neural Density Estimation and Likelihood-free Inference There's a lot in it for those interested in probabilistic modelling with normalizing flows, and in likelihood-free inference using machine learning. (cont.)

  16. Oct 9, 2019

    Update: a new version of our paper is now on arxiv. It includes a human evaluation study to assess sample realism with ratings and pairwise comparisons, covering likelihood-based and adversarial models.

  17. Retweeted
    Sep 26, 2019

    We've developed a new model for text-to-speech using GANs (TTS-GAN), combining high quality with efficient generation. More details in the paper: , and the abstract as read by TTS-GAN:

  18. Retweeted
    Sep 19, 2019

    If you are reeling from a NeurIPS rejection or stressing about an ICLR submission, remember that some of the best papers were never published anywhere except arxiv. Thread of a few favorites (1/5):

  19. Retweeted
    Sep 6, 2019

    Thrilled to be able to share what I've been working on for the last year - solving the fundamental equations of quantum mechanics with deep learning!

  20. Retweeted
    Sep 3, 2019
