Search results
  1. 16 Jun 2019
  2. 11 Jun 2019

    Modern convnets ignore the Nyquist sampling criterion, making them unstable. Come see how simple antialiasing can make your net more stable, accurate, and robust! 3pm tomorrow (Wed) in Seaside Ballroom.
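    The antialiasing idea can be sketched in a few lines: apply a low-pass (blur) filter before subsampling, so the downsampled signal respects the Nyquist criterion instead of aliasing. A minimal 1-D NumPy sketch, where the function name and the binomial kernel are illustrative assumptions rather than the talk's exact implementation:

```python
import numpy as np

def blur_pool_1d(x, stride=2):
    """Antialiased downsampling: blur (low-pass filter) first,
    then subsample. Illustrative sketch, not the talk's code."""
    kernel = np.array([1.0, 2.0, 1.0])  # small binomial low-pass filter
    kernel /= kernel.sum()              # normalize so constants are preserved
    blurred = np.convolve(x, kernel, mode="same")
    return blurred[::stride]
```

    Compared with naive strided subsampling, the blurred version changes far less when the input is shifted by one sample, which is the stability the tweet refers to.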

  3. Best paper honourable mention for Carl Allen for their work on explaining word embeddings! Paper:

  4. 12 Jun 2019

    Attending and interested in uncertainty, calibration & out-of-distribution robustness? Check out our poster at the Uncertainty in DL workshop :) Can You Trust Your Model’s Uncertainty? Evaluating Predictive Uncertainty Under Dataset Shift. Paper:

  5. Best paper award at . Main idea: unsupervised learning of disentangled representations is fundamentally impossible without inductive biases. Verified theoretically & experimentally.

  6. Mila members are presenting twenty-two papers at this week! Here are some highlights:

  7. 15 Jun 2019

    Slides from my talk on "Recent Trends in Personalization: A Netflix Perspective" at the workshop on Adaptive and Multi-Task Learning.

  8. 12 Jun 2019

    Highlight (jetlag special): A Contrastive Divergence for Combining Variational Inference and MCMC

  9. Iranian women have a strong presence at . Stop by their poster if you are attending the conference next week.

  10. 15 Jun 2019

    So glad that we got the Best Paper Award at the AI for social good Workshop at 🥳

  11. 18 Jun 2019

    Slides and video are now available for the Theoretical Physics for Deep Learning workshop at

  12. 8 May 2019
  13. 19 Nov 2019

    For all researchers working on music auto-tagging, here is the new MTG-Jamendo dataset we created this year and presented at : . Over 55,000 full audio tracks labeled with almost 200 tags covering genres, instruments, and moods/themes.

  14. 15 Jun 2019

    Slides from my talk at the Multitask and Lifelong RL workshop at : . Thanks for a great program!

  15. 15 Jun 2019

    Come to Invertible Neural Nets and Normalizing Flows () at 14:20 where I will be giving an invited talk on Neural Ordinary Differential Equations for Continuous Normalizing Flows & FFJORD!

  16. I'm glad to share that we have received the Best Paper Award!

  17. Pre-training is a hot topic in NLP research, and models like BERT and GPT have definitely delivered exciting breakthroughs. The challenge now is upping our game on finer-grained sequence-to-sequence language generation tasks. Enter MASS:
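    The core idea behind MASS-style pre-training can be sketched simply: mask a contiguous span of the input sentence on the encoder side and train the decoder to reconstruct exactly that span, which jointly pre-trains both halves of a sequence-to-sequence model. A minimal sketch of building one such training pair; the function name, mask token, and span fraction are assumptions, not the paper's exact recipe:

```python
import random

MASK = "[MASK]"

def mass_example(tokens, span_frac=0.5, seed=0):
    """Build a MASS-style pretraining pair: the encoder sees the
    sentence with a contiguous span masked out, and the decoder's
    target is the masked span itself. Illustrative sketch only."""
    rng = random.Random(seed)
    n = len(tokens)
    k = max(1, int(n * span_frac))          # length of the masked span
    start = rng.randrange(0, n - k + 1)     # random span position
    enc_input = tokens[:start] + [MASK] * k + tokens[start + k:]
    dec_target = tokens[start:start + k]
    return enc_input, dec_target
```

    Because the decoder only ever predicts the masked span, the encoder must summarize the unmasked context, which is what makes this objective suit generation tasks better than token-level masking alone.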

  18. 24 Jun 2019

    Blog post - Five more music papers from with my incredibly shallow introductions.

  19. 14 Jun 2019

    Come and check out the "tractable probabilistic models" workshop at in room 202. My students will present 8 papers, on tractable reasoning, missing data, discriminative learning, probabilistic programming and databases, and hybrid models.

  20. Should we be trying to prove anything at all? While much ML theory seems to come via physicists, *the physicist in the Deep Phenomena audience* characterizes physics as a “purely empirical” discipline and questions whether we should try to prove anything in ML.
