Danilo J. Rezende

@DeepSpiker

Senior Staff Research Scientist and Team Lead @ Working on probabilistic decision making, generative models and causal inference. All opinions my own.

London, England
Joined September 2012

Tweets


  1. Jan 27

    Going to for a talk and discussion panel on ML (generative models) and physics. Always great to be back at and looking forward to all the discussions on applying ML to fundamental sciences.

  2. Retweeted
    Jan 21

    We are organizing a Workshop on Geometric and Relational Deep Learning! Registration invites will be shared soon. Interested in participating? Consider submitting an abstract or getting in touch: w/

  3. Retweeted
    Jan 15

    We have 2 papers published in today! 🎉 One describes AlphaFold, which uses deep neural networks to predict protein structures with high accuracy. AlphaFold made the most accurate predictions at the 2018 scientific community assessment CASP13. 1/4

  4. Jan 15

    I'm honored to be giving a talk about generative models tomorrow at the Simons Center for Geometry and Physics! Thanks for the invite and for the remote arrangements! Looking forward to it!

  5. Retweeted
    Jan 15

    Had a fun time preparing my talk, presenting deep learning from the perspective of a LEGO brick box with infinitely composable functional blocks 🧱 Slides are available at

  6. Jan 15

    Stay tuned for the call for submissions and updates to our workshop "Causal Learning for Decision Making" at Jointly organised by , and

  7. Retweeted
    Jan 9

    We are organizing a workshop on Causal Learning for Decision Making at along with , Jovana Mitrovic, , Stefan and . Consider submitting your work!

  8. Retweeted
    Jan 8

    Our paper "Variational Autoencoders and Nonlinear ICA: A Unifying Framework" has been accepted to AISTATS'20. With , Ricardo Pio Monti and Aapo Hyvarinen (UCL). Surprisingly strong and general identifiability results, with rigorous proofs!

  9. Retweeted

    Happy New Decade everyone! Hard to believe it is 2020 - feels like the year when the future should be invented. We started back in 2010, incredible to see how far AI has come in the last decade, but really this is just the beginning!

  10. Dec 29, 2019

    These mathematical principles are independent of how the model is instantiated (e.g. as a Lagrangian, as some non-parametric posterior or as a bunch of matrices squeezing non-linearities). *What matters is how effectively we can enforce them in the model class that we work on* 5/5

  11. Dec 29, 2019

    We can further enrich the notion by connecting to symmetry/equivariance principles: A good model is one that possesses/exposes the largest possible set of symmetries, while being compatible with all observed and experimental (interventional) data 4/5

  12. Dec 29, 2019

    We can enrich this notion of simplicity by connecting it to robustness principles: A good model is one that is simple and yet robust to a family of hypothetical perturbations established upfront (i.e. causally correct) 3/5

  13. Dec 29, 2019

    Generalisation is about the simplicity of the model class that is compatible with all observed and experimental (interventional) data. Simplicity doesn't mean a small number of parameters. It means that we need a small number of bits to describe the model 2/5

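The "bits to describe the model" idea above is the two-part code from minimum description length (MDL). A minimal sketch, with all numbers invented purely to illustrate that parameter count and description length can come apart:

```python
# Two-part MDL sketch: total description length = bits to encode the model
# + bits to encode the data given the model. All numbers below are made up
# to show that "few parameters" and "few bits" are different things.
def description_length(model_bits, data_bits_given_model):
    return model_bits + data_bits_given_model

# Model A: 2 parameters stored at full 32-bit precision, poor fit to the data.
dl_a = description_length(2 * 32, 5000.0)

# Model B: 1000 parameters, but heavily regularised so each costs ~0.1 bits
# to communicate, and the fit to the data is much better.
dl_b = description_length(1000 * 0.1, 4000.0)

print(dl_a, dl_b)  # B has the shorter description despite 500x the parameters
```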
  14. Dec 29, 2019

    Some people in ML share the illusion that models expressed symbolically will necessarily/magically generalise better compared to, for example, parametric model families fit on the same data. This belief seems to come from a naive understanding of mathematics 1/5

  15. Retweeted
    Dec 26, 2019
    Replying to

    "Deep Learning" is such a poor name, let's call it what it really is: "Differentiable Software". It's hard to remain dogmatic against the field when you realize it's just about writing programs that you can take the analytic derivative of and optimize via gradient descent.

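The "Differentiable Software" view can be made concrete in a few lines. This is a minimal forward-mode autodiff sketch using dual numbers, not any particular library's API; `Dual` and `f` are illustrative names:

```python
from dataclasses import dataclass

# Minimal forward-mode autodiff via dual numbers: each value carries its
# derivative alongside it, so running an ordinary program on a Dual input
# yields the analytic derivative of that program.
@dataclass
class Dual:
    val: float  # function value
    dot: float  # derivative w.r.t. the program input

    def __add__(self, other):
        return Dual(self.val + other.val, self.dot + other.dot)

    def __mul__(self, other):
        # product rule: (uv)' = u'v + uv'
        return Dual(self.val * other.val,
                    self.val * other.dot + self.dot * other.val)

def f(x):
    # An ordinary "program": f(x) = x*x + x, so f'(x) = 2x + 1.
    return x * x + x

out = f(Dual(3.0, 1.0))
print(out.val, out.dot)  # 12.0 7.0

# ...and "optimize via gradient descent": one step that decreases f.
x_new = 3.0 - 0.1 * out.dot
```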
  16. Dec 26, 2019

    Hey , what statistics do you suggest for assessing the quality of a proposal distribution, given a target density known up to normalization? Anything I could look at beyond ESS, MCMC acceptance rate and the Fisher score? Any paper pointers welcome! :)

  17. Dec 25, 2019

    Each of these principles is an active area of research in the DL community with growing interest. In fact, I expect enormous progress in the next few years in merging physics and DL.

  18. Dec 25, 2019

    It is also orthogonal to other principles of robustness and interpretability from statistics and physics such as compositionality, disentanglement, equivariance and gauge invariance.

  19. Dec 25, 2019

    This is orthogonal to causality: we can build SEMs from DL modules with generative components, do interventions and counterfactuals, and, last but not least, fit DL models with interventional data.

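The point about interventions can be illustrated with a toy structural equation model. The linear mechanism and numbers below are made up; any differentiable (e.g. neural) module could play the same role:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy SEM with one mechanism X -> Y, namely Y := 2*X + noise.
def sample(n, do_x=None):
    # do_x=None gives observational sampling; otherwise the intervention
    # do(X = do_x) replaces X's mechanism with a constant while leaving
    # Y's mechanism intact.
    x = rng.standard_normal(n) if do_x is None else np.full(n, do_x)
    y = 2.0 * x + 0.1 * rng.standard_normal(n)
    return x, y

_, y_obs = sample(100_000)           # observational: E[Y] ≈ 0
_, y_do = sample(100_000, do_x=1.0)  # interventional: E[Y | do(X=1)] ≈ 2
print(y_obs.mean(), y_do.mean())
```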
  20. Dec 25, 2019

    Rephrasing in my own words: DL is a collection of tools to build complex modular differentiable functions. These tools are devoid of meaning; it is pointless to discuss what DL can or cannot do. What gives meaning to it is how it is trained and how the data is fed to it

