Tweets
-
Danilo J. Rezende Retweeted
We are organizing an @ELLISforEurope Workshop on Geometric and Relational Deep Learning! Registration invites will be shared soon. Interested in participating? Consider submitting an abstract or get in touch: https://geometric-relational-dl.github.io/ w/ @erikjbekkers @wellingmax @mmbronstein pic.twitter.com/RGEHezfZIa
-
Danilo J. Rezende Retweeted
We have 2 papers published in @nature today! One describes AlphaFold, which uses deep neural networks to predict protein structures with high accuracy. AlphaFold made the most accurate predictions at the 2018 scientific community assessment CASP13. 1/4 https://deepmind.com/blog/article/AlphaFold-Using-AI-for-scientific-discovery …
-
I'm honored to be giving a talk about generative models tomorrow at the Simons Center for Geometry and Physics! Thanks @jhhalverson for the invite and for the remote arrangements! Looking forward to it!
-
Danilo J. Rezende Retweeted
Had a fun time preparing my talk for #ML4Jets presenting deep learning in the perspective of a LEGO brick box with infinitely composable functional blocks. Slides are available at https://indico.cern.ch/event/809820/contributions/3632527/attachments/1969816/3276388/glouppe2020-ml4jets.pdf … https://twitter.com/KyleCranmer/status/1217467919120109570 …
-
Stay tuned for the call for submissions and updates to our workshop "Causal Learning for Decision Making" at #ICLR2020. Jointly organised by @MILAMontreal, @DeepMindAI and @PerceivingSys. #CausalML https://twitter.com/anirudhg9119/status/1215381163541520384 …
-
Danilo J. Rezende Retweeted
We are organizing a workshop on Causal Learning for Decision Making at @iclr_conf along with @rosemary_ke @DeepSpiker @theophaneweber, Jovana Mitrovic, @janexwang, Stefan and @csilviavr. https://sites.google.com/view/causal-learning-icrl2020/home … @MILAMontreal Consider submitting your work!
-
Danilo J. Rezende Retweeted
Our paper "Variational Autoencoders and Nonlinear ICA: A Unifying Framework" has been accepted to AISTATS'20. With
@ilkhem, Ricardo Pio Monti and Aapo Hyvarinen (UCL). Surprisingly strong and general identifiability results, with rigorous proofs! https://arxiv.org/abs/1907.04809Hvala. Twitter će to iskoristiti za poboljšanje vaše vremenske crte. PoništiPoništi -
Danilo J. Rezende Retweeted
Happy New Decade everyone! Hard to believe it is 2020 - feels like the year when the future should be invented. We started @DeepMindAI back in 2010, incredible to see how far AI has come in the last decade, but really this is just the beginning!
-
These mathematical principles are independent of how the model is instantiated (e.g. as a Lagrangian, as some non-parametric posterior or as a bunch of matrices squeezing non-linearities). *What matters is how effectively we can enforce them in the model class that we work on* 5/5
-
We can further enrich the notion by connecting to symmetry/equivariance principles: A good model is one that possesses/exposes the largest possible set of symmetries, while being compatible with all observed and experimental (interventional) data 4/5
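One compact way to state the symmetry principle invoked here (a standard textbook formulation, not something spelled out in the thread) is as an equivariance constraint on the model f:

```latex
% Equivariance of a model f under a group G acting on inputs and outputs
% via representations \rho_{in} and \rho_{out}:
f(\rho_{\mathrm{in}}(g)\, x) = \rho_{\mathrm{out}}(g)\, f(x) \qquad \forall g \in G,\ \forall x.
% Invariance is the special case where \rho_{out}(g) is the identity for all g.
```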
-
We can enrich this notion of simplicity by connecting it to robustness principles: A good model is one that is simple and yet robust to a family of hypothetical perturbations established upfront (i.e. causally correct) 3/5
-
Generalisation is about the simplicity of the model class that is compatible with all observed and experimental (interventional) data. Simplicity doesn't mean a small number of parameters. It means that we need a small number of bits to describe the model 2/5
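A standard way to make "a small number of bits" precise is the two-part (MDL) description length; this is a textbook formulation rather than something the tweet states explicitly:

```latex
% Two-part code / minimum description length view of model selection:
% bits to describe the model, plus bits to describe the data given the model.
L(D) = \min_{M \in \mathcal{M}} \big[\, L(M) + L(D \mid M) \,\big]
% A model class is "simple" when L(M) stays small for models that fit all
% observational and interventional data, irrespective of raw parameter count.
```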
-
Some people in ML share the illusion that models expressed symbolically will necessarily/magically generalise better than, for example, parametric model families fit on the same data. This belief seems to come from a naive understanding of mathematics 1/5
-
Danilo J. Rezende Retweeted
"Deep Learning" is such a poor name, let's call it what it really is: "Differentiable Software". It's hard to remain dogmatic against the field when you realize it's just about writing programs that you can take the analytic derivative of and optimize via gradient descent.
-
Hey @betanalpha @JCornebise @dustinvtran, what statistics do you suggest for assessing the quality of a proposal distribution given a known target density up to normalization? Anything I could look for beyond ESS, MCMC acc. rate and Fisher score? Any paper pointers welcome! :)
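For reference, the importance-sampling effective sample size mentioned above can be computed directly from the unnormalized target and the proposal. A minimal sketch; the helper name and the example densities are illustrative choices, not from the tweet:

```python
import numpy as np

def importance_ess(log_target_unnorm, log_proposal, samples):
    """ESS of importance weights w_i proportional to p~(x_i) / q(x_i);
    invariant to the unknown normalizing constant of the target."""
    log_w = log_target_unnorm(samples) - log_proposal(samples)
    log_w -= log_w.max()                  # stabilize before exponentiating
    w = np.exp(log_w)
    return w.sum() ** 2 / (w ** 2).sum()  # (sum w)^2 / sum w^2

# Example: standard-normal target (known only up to a constant), N(0, 2) proposal.
rng = np.random.default_rng(0)
xs = rng.normal(scale=2.0, size=10_000)
ess = importance_ess(
    lambda x: -0.5 * x ** 2,                        # log p~(x), constant dropped
    lambda x: -0.5 * (x / 2.0) ** 2 - np.log(2.0),  # log q(x) up to the same constant
    xs,
)
print(ess)  # below 10_000 whenever the proposal mismatches the target
```
-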
Each of these principles is an active area of research in the DL community with growing interest. In fact I expect enormous progress in the next few years in merging physics and DL.
-
It is also orthogonal to other principles of robustness and interpretability from statistics and physics such as compositionality, disentanglement, equivariance and gauge invariance.
-
This is orthogonal to causality @yudapearl: we can build SEMs from DL modules with generative components, do interventions, counterfactuals and also, last but not least, fit DL models with interventional data.
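A bare-bones illustration of that point (a hypothetical three-variable SEM with hand-written mechanisms standing in for trained, generative DL modules; not code from the thread):

```python
import numpy as np

rng = np.random.default_rng(0)

# Structural equations for a chain X -> Y -> Z. Each mechanism here is a simple
# function, but it could just as well be a trained generative neural module.
def f_x(n): return rng.normal(size=n)
def f_y(x): return 2.0 * x + 0.1 * rng.normal(size=x.shape)
def f_z(y): return np.tanh(y) + 0.1 * rng.normal(size=y.shape)

def sample(n, do_y=None):
    """Ancestral sampling; do_y replaces Y's mechanism with a constant (a do-intervention)."""
    x = f_x(n)
    y = np.full(n, float(do_y)) if do_y is not None else f_y(x)
    z = f_z(y)
    return x, y, z

# Observational vs. interventional distributions of Z:
_, _, z_obs = sample(100_000)
_, _, z_do = sample(100_000, do_y=1.0)
print(z_obs.mean(), z_do.mean())  # do(Y=1) shifts E[Z] toward tanh(1)
```
-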
Rephrasing @ylecun in my own words: DL is a collection of tools to build complex modular differentiable functions. These tools are devoid of meaning; it is pointless to discuss what DL can or cannot do. What gives meaning to it is how it is trained and how the data is fed to it.