Michael Figurnov

@mfigurnov

Research Scientist @ DeepMind

Joined May 2009

Tweets


  1. 12 Dec 2019

    [4/4] We hope that this class of estimators will find exciting machine learning applications! The paper is available online at

  2. 12 Dec 2019

    [3/4] It has low variance, similar to the reparameterization gradients, and works with non-differentiable functions and discrete distributions, just like REINFORCE. The downside is the higher computational complexity that grows with the number of parameters.

  3. 12 Dec 2019

    [2/4] Measure valued derivatives are a class of Monte Carlo gradient estimators that was introduced 30 years ago by Georg Pflug but is almost unknown in the machine learning community.

  4. 12 Dec 2019

    [1/4] I will be talking about Measure Valued Derivatives for Approximate Bayesian Inference, our joint work with @elaClaudia , at the Bayesian Deep Learning workshop at 16:05 tomorrow.

  5. 9 Dec 2019

    I’m at this week. Let me know if you’d like to catch up!

  6. 26 Nov 2019

    Cool paper from : REBAR-like control variates for Plackett-Luce, a distribution over permutations, with application to learning of causal graphs. Check it out!

  7. Retweeted
    22 Nov 2019

    The code reproducing the experiments in this paper is now available at:

  8. Retweeted

    : Grandmaster level as all 3 races on , w/ a pro-approved interface (camera & APM limits). 2 years ago I thought this was impossible! How? Imitation learning (Diamond) -> multiagent League (Grandmaster)

  9. Retweeted
    3 Oct 2019

    A new paper on tweaking SPIRAL (). What's new: • Spectral normalization of discriminator (Miyato, 18) ⇒ sharper images • Reward shaping by (Ng, 99) ⇒ longer episodes • In-painting instead of stacking ⇒ better reconstructions Lots of nice samples :)

  10. Retweeted
    6 Sep 2019

    Thrilled to be able to share what I've been working on for the last year - solving the fundamental equations of quantum mechanics with deep learning!

  11. Retweeted
    20 Aug 2019

    We’re excited to release episodes 1 - 4 of the ! Get the inside track on some of the big questions and challenges the field is wrestling with today. No need to be an expert - the amazing speaks to the people behind the science.

  12. Retweeted
    31 Jul 2019

    Really excited to share our latest paper in today on machine learning for health data to make early predictions of acute kidney injury. It has been an amazing journey over the last 2 years with an amazing set of people.

  13. Retweeted
    23 Jul 2019

    After a short delay, the code in a notebook to reproduce the graphs in section 3 of our paper () is online. More to come soon. See thread above👆🏾. 👩🏾‍💻

  14. Retweeted
    5 Jul 2019

    For anyone interested in constrained optimisation with DL models (e.g. as in ), we just released a few handy tools to deal with inequality constraints for Sonnet (). Thanks !

  15. Retweeted
    3 Jul 2019

    Do you ever feel like a Bayesian distribution?

  16. Retweeted
    26 Jun 2019

    Excited to share our new paper: 'Monte Carlo Gradient Estimation in Machine Learning', with @elaClaudia. It reviews all the things we know about computing gradients of probabilistic functions. 🐾Thread👇🏾

  17. Retweeted

    Right now , , and Oleg Ivanov present their work at . Catch them while you can! - The Deep Weight Prior, #48 - Variance Networks, #72 - VAE with Arbitrary Conditioning, #74

  18. Retweeted
    10 Apr 2019

    Our new blog post overviews unsupervised learning, a paradigm for creating artificial intelligence that learns about data without a particular task in mind. Read more about how we might teach computers to learn for the sake of learning:

  19. Retweeted

    Yesterday successfully defended his PhD thesis! Congratulations!

  20. Retweeted
    12 Mar 2019

    Likelihood is a great loss fn, it's all about the space you measure it in! Our latest work on hierarchical AR image models (w/ , Karen Simonyan): We generated 128x128 & 256x256 samples for all ImageNet classes: (1/2)

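The measure-valued derivative estimator described in the [1/4]–[4/4] thread above can be illustrated with a small NumPy sketch. This is not code from the paper: it uses the standard decomposition for the mean of a Gaussian, whose positive and negative parts are affine transforms of a Weibull(√2, 2) variable weighted by the constant 1/(σ√(2π)); the helper name `mvd_grad_mean` is hypothetical.

```python
import numpy as np

def mvd_grad_mean(f, mu, sigma, n_samples, rng):
    """Measure-valued derivative estimate of d/dmu E_{x~N(mu, sigma^2)}[f(x)].

    The derivative of the Gaussian measure w.r.t. its mean decomposes into
    a positive and a negative part, mu +/- sigma * W with
    W ~ Weibull(scale=sqrt(2), shape=2), weighted by 1 / (sigma * sqrt(2*pi)).
    Note that f is only evaluated, never differentiated.
    """
    c = 1.0 / (sigma * np.sqrt(2.0 * np.pi))
    w = np.sqrt(2.0) * rng.weibull(2.0, size=n_samples)  # scale sqrt(2), shape 2
    return c * np.mean(f(mu + sigma * w) - f(mu - sigma * w))

rng = np.random.default_rng(0)
# Sanity check with f(x) = x^2: E[x^2] = mu^2 + sigma^2, so the true
# gradient w.r.t. mu is 2 * mu = 3.0 here.
est = mvd_grad_mean(lambda x: x ** 2, mu=1.5, sigma=1.0, n_samples=200_000, rng=rng)
print(est)
```

Because `f` is only queried pointwise, the same estimator applies to non-differentiable or discrete targets, at the cost of extra evaluations per distribution parameter, matching the trade-off described in tweet [3/4].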
