Tweets
- Tweets, current page.
- Tweets and replies
- Media
Pinned Tweet
My #NeurIPS2019 talk video and slides are up here starting at 43:35 https://slideslive.com/38922043/machine-learning-and-the-physical-sciences-4 … I explain our work extracting learned physical laws from graph networks in https://arxiv.org/abs/1909.05862 . I try to make it digestible for a broad audience!
This is a nice package for making pyplot animations more intuitive: https://github.com/jwkvam/celluloid/ … All you do is call "camera.snap()" every time you re-do the plot. pic.twitter.com/FYwYkU2KSB
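For context, celluloid is a thin wrapper around matplotlib's `ArtistAnimation`: `Camera.snap()` records the artists drawn since the last snap, and `camera.animate()` builds the animation from the recorded frames. A rough matplotlib-only sketch of that loop (illustrative, not celluloid's actual source):

```python
import matplotlib
matplotlib.use("Agg")          # headless backend; no display needed
import matplotlib.pyplot as plt
from matplotlib.animation import ArtistAnimation

fig, ax = plt.subplots()
frames = []                    # celluloid's Camera accumulates these on each snap()
for i in range(1, 11):
    # Re-draw the plot for this frame and keep the artists it created;
    # with celluloid, this re-draw is simply followed by camera.snap().
    artists = ax.plot([x * i for x in range(5)], color="tab:blue")
    frames.append(artists)

# celluloid's camera.animate() boils down to this call:
anim = ArtistAnimation(fig, frames, interval=100, blit=True)
```

Calling `anim.save("demo.gif", writer="pillow")` would then write the file; the appeal of celluloid is that the loop body shrinks to a plot call plus `camera.snap()`.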
Miles Cranmer Retweeted
This is the sharpest movie of the Sun ever made. Even at this fine resolution, the scale is enormous; each plasma cell here is about the size of Texas. https://www.nso.edu/telescopes/dkist/first-light-cropped-field-movie/ … via @NatSolarObs pic.twitter.com/JYSDw1Grx6
Miles Cranmer Retweeted
After a ton of work by a bunch of people, we're releasing an entirely new Neural Tangents. Paper: https://arxiv.org/abs/1912.02803 GitHub: https://github.com/google/neural-tangents … Colab Notebook: https://colab.sandbox.google.com/github/google/neural-tangents/blob/master/notebooks/neural_tangents_cookbook.ipynb …
Want to do distributed hyperopt with nothing but a shared folder between nodes/processes? Try this example I wrote: https://github.com/MilesCranmer/easy_distributed_hyperopt … Quick and dirty, but: no extra packages, no master nodes, and complete resilience to crashes and restarts. pic.twitter.com/01qUAV8hHn
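The shared-folder idea can be sketched as below. This is a minimal stand-in, not the code from the linked repo; the function names and file layout here are made up. Each worker writes one uniquely-named result file per trial, so there are no write conflicts, no coordinator, and a crashed worker costs at most its in-flight trial:

```python
import json
import os
import random
import tempfile
import uuid

def run_worker(shared_dir, n_trials, objective, sample_params):
    """One independent worker: sample params, evaluate, append a result file."""
    for _ in range(n_trials):
        params = sample_params()
        score = objective(params)
        # Unique filename per trial -> no coordination needed between workers,
        # and a crash just means one missing file, never a corrupted state.
        path = os.path.join(shared_dir, f"trial-{uuid.uuid4().hex}.json")
        with open(path, "w") as f:
            json.dump({"params": params, "score": score}, f)

def best_trial(shared_dir):
    """Scan every result file in the shared folder and return the minimum."""
    trials = []
    for name in os.listdir(shared_dir):
        with open(os.path.join(shared_dir, name)) as f:
            trials.append(json.load(f))
    return min(trials, key=lambda t: t["score"])

# Usage: two "workers" sharing one folder, minimizing (x - 3)^2.
shared = tempfile.mkdtemp()
objective = lambda p: (p["x"] - 3.0) ** 2
sampler = lambda: {"x": random.uniform(-10.0, 10.0)}
run_worker(shared, 50, objective, sampler)
run_worker(shared, 50, objective, sampler)
print(best_trial(shared))
```

In a real deployment the two `run_worker` calls would be separate processes (or nodes) pointed at the same network filesystem path; restarting any of them just adds more trial files.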
If you use Stack Overflow for coding, try my vim plugin "Googling Stack Overflow" to do queries and paste code directly in the editor: https://github.com/MilesCranmer/gso … pic.twitter.com/bLdb30Do9R
BayesNet seems like a really nice LaTeX package for drawing clean probabilistic graphical models with minimal effort. Wish I'd heard about it earlier! https://github.com/jluttine/tikz-bayesnet … pic.twitter.com/CaJ4HXWALx
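A minimal sketch of what a tikz-bayesnet model looks like (a generic latent/observed pair under a plate; this is an illustrative example, not one from the repo's docs):

```latex
\documentclass{standalone}
\usepackage{tikz}
\usetikzlibrary{positioning}
\usetikzlibrary{bayesnet} % provided by tikz-bayesnet
\begin{document}
\begin{tikzpicture}
  % One latent parameter, N observed data points
  \node[latent]              (theta) {$\theta$};
  \node[obs, below=of theta] (x)     {$x_n$};
  \edge {theta} {x};        % directed edge theta -> x
  \plate {N} {(x)} {$N$};   % plate over the observations
\end{tikzpicture}
\end{document}
```

The `latent`/`obs` node styles and the `\edge`/`\plate` macros are the whole interface, which is what makes the diagrams so quick to write.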
Awesome interactive demos of different MCMC algorithms: https://chi-feng.github.io/mcmc-demo/ pic.twitter.com/j4d5K8vCEt
Miles Cranmer Retweeted
and here's a million parameters. 10 chains/100 samples. About 30% of proposals are accepted. pic.twitter.com/Pif1BQTDrC
3/ Answering @pwnic's question about the weight distribution (+ @davidwhogg's comment on modality?), here's what the joint distribution looks like for 4 weights (16,000 samples) from the HMC. It's not converged and I doubt it's meaningful, but it's interesting. It's not uncorrelated... pic.twitter.com/ydtdyQIodO
Miles Cranmer Retweeted
Thanks @sam_power_825. Do you know of good references for statistical implications of these properties? Any thoughts on SWAG? https://arxiv.org/abs/1902.02476
2/2 Just wondering why Monte Carlo dropout/Bayes by Backprop are the most common (?) methods, despite their assumption of the weight posterior being uncorrelated. Directly MCMC'ing the weights seems like the simpler thing to do... and also seems to work well
1/2 Why isn't it more common to do explicit Hamiltonian MCMC on a Bayesian neural network's weights, with, e.g., the initial condition set to the loss minimum found via SGD? I'm playing around with one in JAX and it seems to be working reasonably even with 5 chains: https://colab.research.google.com/drive/1gMAXn123Pm58_NcRldjSuGYkbrXTUiN2 … pic.twitter.com/LbC3kPOMkY
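The recipe (HMC over the weight posterior, initialized at the loss minimum) can be sketched on a toy problem. This is plain NumPy, not the linked JAX notebook, and a Bayesian linear regression stands in for a network so the gradients are exact and short:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "network": Bayesian linear regression y = w.x + noise, sigma = 0.1.
X = rng.normal(size=(50, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=50)

def neg_log_post(w):
    # Gaussian likelihood plus a standard-normal prior on the weights.
    resid = y - X @ w
    return 0.5 * np.sum(resid**2) / 0.1**2 + 0.5 * np.sum(w**2)

def grad(w):
    return -(X.T @ (y - X @ w)) / 0.1**2 + w

def hmc_step(w, eps=1e-3, n_leap=20):
    """One HMC transition: resample momentum, leapfrog, Metropolis correct."""
    p = rng.normal(size=w.shape)
    w_new, p_new = w.copy(), p.copy()
    p_new -= 0.5 * eps * grad(w_new)          # leapfrog half-step
    for _ in range(n_leap - 1):
        w_new += eps * p_new
        p_new -= eps * grad(w_new)
    w_new += eps * p_new
    p_new -= 0.5 * eps * grad(w_new)          # closing half-step
    dH = (neg_log_post(w_new) + 0.5 * p_new @ p_new) \
       - (neg_log_post(w)     + 0.5 * p @ p)
    return w_new if np.log(rng.uniform()) < -dH else w

# Initialize at the loss minimum (here: least squares instead of SGD).
w = np.linalg.lstsq(X, y, rcond=None)[0]
samples = []
for _ in range(500):
    w = hmc_step(w)
    samples.append(w.copy())
samples = np.asarray(samples)
```

For a real network, `grad` would be the autodiff gradient of the loss plus prior (which is where JAX comes in), and one would run several such chains from different SGD minima, as in the tweet.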
Miles Cranmer Retweeted
Training Neural SDEs: We worked out how to do scalable reverse-mode autodiff for stochastic differential equations. This lets us fit SDEs defined by neural nets with black-box adaptive higher-order solvers. https://arxiv.org/pdf/2001.01328.pdf … With @lxuechen, @rtqichen and @wongtkleonard. pic.twitter.com/qlUwMxezjO
7/7 Finally, thanks @wgrathwohl, @rtqichen et al. for FFJORD; it's a really nice algorithm and code!
6/7 This approach is very applicable to iteratively learning distributions in latent spaces in the presence of noise or structured bias (dust) as we've shown in the paper.
5/7 More generally, I think normalizing flows are very underused in astro #AAS235. One often encounters highly non-Gaussian distributions in high-dimensional spaces, and GMMs/grid-based methods will hurt accuracy (and GPs are hard to scale and not as architecturally flexible).
4/7 I chose n=128 for Extreme Deconvolution based on its previous uses, e.g., arxiv:1706.05055. Note also that this example observed CMD does not have any dust or iterative estimates for simplicity. The example in our paper does, however.
3/7 To recreate this figure (you can optionally re-train the normalizing flow if you wish, or load it; it isn't actually converged yet), here's some code: https://github.com/MilesCranmer/xd_vs_flow/ … The flow is vanilla except for the treatment of noise.
2/7 The science case for work like this is building more accurate and scalable data-driven priors on photometry to improve distance estimates to stars (or other sources) in large-scale surveys. Work with @richardgalvez, Lauren Anderson, @DavidSpergel, @cosmo_shirley.
1/7 #AAS235 figure I included on my poster demonstrating normalizing flows (this one uses @DavidDuvenaud's group's FFJORD) vs extreme deconvolution-generated GMMs as a reconstruction method for noisy color-magnitude diagrams in astronomy. (Our paper here: https://arxiv.org/abs/1908.08045 ) pic.twitter.com/R9bdrcUTVD