Tweets
Pinned Tweet
Excited to announce our new paper "AugMix", which proposes a simple yet surprisingly effective method to improve robustness & uncertainty, particularly under dataset shift :) Joint work with @DanHendrycks, @TheNormanMu, @ekindogus, @barret_zoph and @jmgilmer. More details below: pic.twitter.com/hVWNvU0BfQ
Balaji Lakshminarayanan Retweeted
Check out a new study into how the uncertainty of #ML models degrades with increasing dataset shift. Do the models become increasingly uncertain, or do they become confidently incorrect? Learn all about it below! https://goo.gle/2QW6MZl
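The "confidently incorrect" failure mode above is typically quantified with a calibration metric such as expected calibration error (ECE). A minimal numpy sketch, with illustrative function names and bin count (not code from the linked study):

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Bin predictions by confidence, then compare the average confidence
    to the empirical accuracy within each bin (a standard ECE estimate)."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            gap = abs(confidences[in_bin].mean() - correct[in_bin].mean())
            ece += in_bin.mean() * gap
    return ece

# A model that is 99% confident but only 50% accurate under shift is
# "confidently incorrect": its ECE is large (~0.49 here).
overconfident = expected_calibration_error([0.99] * 100, [1, 0] * 50)
```

A well-calibrated model under shift would instead lower its confidence as its accuracy drops, keeping this gap small.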
Balaji Lakshminarayanan Retweeted
Check out a novel approach to out-of-distribution detection, applied to a new benchmark dataset of genomic sequences, that enables a #MachineLearning model to better discriminate between anomalous data and the data used in training. Learn all about it below ↓ https://goo.gle/2sCZz6O
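The linked work develops a likelihood-ratio method for genomics; for context, the simplest common OOD-detection baseline scores inputs by their maximum softmax probability. The sketch below is that baseline, not the paper's method:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def msp_score(logits):
    """Maximum softmax probability: low scores flag likely-OOD inputs."""
    return softmax(logits).max(axis=-1)

# A confident in-distribution prediction vs. near-uniform logits on
# anomalous data: the OOD input gets a clearly lower score.
in_dist = np.array([[8.0, 0.0, 0.0]])
anomalous = np.array([[0.1, 0.0, -0.1]])
```

The paper's point is that such naive scores (and raw likelihoods) can fail on genomic data, motivating the likelihood-ratio correction.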
Balaji Lakshminarayanan Retweeted
Please come and check out our poster at NeurIPS 2019, on Wed Dec 11th 10:45 AM -- 12:45 PM @ East Exhibition Hall B + C #44.
@peterjliu @latentjasper @balajiln https://twitter.com/jessierenjie/status/1138335044219547649 pic.twitter.com/qcwKObzoIJ
Balaji Lakshminarayanan Retweeted
We looked into the NeurIPS 2019 data to see if we could gain any interesting insights and inform discussion for future years. Here's our blog post on what we found out: https://medium.com/@NeurIPSConf/what-we-learned-from-neurips-2019-data-111ab996462c
If you'd like to learn more, check out our paper https://arxiv.org/abs/1912.02757 :)
@stanislavfort will also be giving a contributed talk about our work on Dec 13 (Friday), 9-9:15 AM, and presenting a poster at the Bayesian deep learning workshop (http://bayesiandeeplearning.org/) at #NeurIPS2019. pic.twitter.com/NrpnmTNlDv
5) We also validate the hypothesis by building low-loss tunnels between solutions found from different random inits. While points along the low-loss tunnel have similar accuracies, the function-space disagreement between them & the two end points shows that the modes are diverse. pic.twitter.com/JUsaysXIpA
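The simplest probe of this kind evaluates loss along the straight line between two solutions in weight space; a toy 1-D loss with two equally good minima stands in for a real network here (the paper's tunnels are nonlinear paths that keep loss low, whereas straight lines typically cross a barrier):

```python
import numpy as np

def loss(w):
    """Toy loss surface with two equally good minima at w = -1 and w = +1."""
    return (w**2 - 1.0) ** 2

# Two "solutions" reached from different random inits.
w_a, w_b = -1.0, 1.0

# Loss along the linear interpolation between them.
alphas = np.linspace(0.0, 1.0, 101)
path = (1 - alphas) * w_a + alphas * w_b

# Height of the loss barrier above the endpoints (> 0 means separate modes).
barrier = loss(path).max() - max(loss(w_a), loss(w_b))
```

Here the barrier is 1.0: the two minima are distinct modes, which is the regime where ensembling diverse solutions pays off.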
4) From a bias-variance perspective, we care about both accurate solutions (low bias) and diverse solutions (as decorrelation reduces variance). Given a reference solution, we plot diversity vs. accuracy to measure how different methods trade off diversity against accuracy. pic.twitter.com/Gkh7N48QQH
3) A t-SNE plot of predictions along training trajectories (marked by different colors) shows that random initialization leads to diverse functions. Sampling functions from a subspace corresponding to a single trajectory increases diversity, but not as much as random init does. pic.twitter.com/1hxxu12a4c
2) One hypothesis is that ensembles may land in different modes, while scalable Bayesian methods may sample from a single mode. We measure the similarity of the resulting functions (both in weight space and in function space) to test this hypothesis. pic.twitter.com/QYfqvSDmWd
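The two similarity notions in 2) can be sketched directly: cosine similarity between flattened parameter vectors (weight space) and agreement between predicted labels on held-out inputs (function space). Function names here are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def weight_cosine(w1, w2):
    """Weight-space similarity: cosine between flattened parameter vectors."""
    return w1 @ w2 / (np.linalg.norm(w1) * np.linalg.norm(w2))

def prediction_agreement(p1, p2):
    """Function-space similarity: fraction of inputs with the same argmax."""
    return np.mean(p1.argmax(axis=-1) == p2.argmax(axis=-1))

# Random stand-ins for two models' weights and per-input class probabilities.
w1, w2 = rng.normal(size=100), rng.normal(size=100)
probs = rng.dirichlet(np.ones(3), size=50)

# Two networks can be far apart in weight space (low cosine) yet compute the
# same function (agreement 1.0) -- which is why both views are measured.
```

Two models from different random inits typically score low on both measures, which is the diversity that makes their ensemble effective.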
Why do deep ensembles trained with just random initialization work surprisingly well in practice? In our recent paper https://arxiv.org/abs/1912.02757 with @stanislavfort & Huiyi Hu, we investigate this using insights from recent work on the loss landscape of neural nets. More below:
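Concretely, a deep ensemble just trains M copies of the same architecture from different random seeds and averages their predicted probabilities. A numpy stand-in, with random linear "models" replacing trained networks:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Toy stand-in for M networks trained from different random inits: each
# "model" is a random linear map from d features to k classes.
M, n, d, k = 5, 8, 10, 3
models = [rng.normal(size=(d, k)) for _ in range(M)]
x = rng.normal(size=(n, d))

# Ensemble prediction: average the members in probability space.
member_probs = np.stack([softmax(x @ W) for W in models])  # (M, n, k)
ensemble_probs = member_probs.mean(axis=0)                 # (n, k)
```

Averaging in probability space keeps the output a valid distribution per input, and decorrelated members reduce the variance of the averaged prediction.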
Balaji Lakshminarayanan Retweeted
Looking for something to read on your flight to #NeurIPS2019? Read about Normalizing Flows in our extensive review paper (also with new insights on how to think about and derive new flows) https://arxiv.org/abs/1912.02762 with @gpapamak, @eric_nalisnick, @DeepSpiker, @balajiln and @shakir_za. pic.twitter.com/EWh8Aui7n0
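The identity at the heart of normalizing flows is the change-of-variables formula: log p(y) = log p_base(x) - log|det J|, where x is the inverse image of y under the flow. A minimal one-dimensional affine flow illustrates it (an illustrative example, not code from the review):

```python
import numpy as np

def affine_flow_logpdf(y, a, b):
    """Log-density under the flow y = a*x + b with standard-normal base x:
    log p(y) = log N(x; 0, 1) - log|a|  (the Jacobian of the affine map)."""
    x = (y - b) / a                                   # inverse transform
    log_base = -0.5 * (x**2 + np.log(2 * np.pi))      # standard-normal log-pdf
    return log_base - np.log(np.abs(a))               # Jacobian correction

# With a=1, b=0 the flow is the identity, so this must equal N(0, 1).
y = np.array([0.0, 1.0, -2.0])
lp = affine_flow_logpdf(y, a=1.0, b=0.0)
```

Stacking many such invertible maps (with tractable Jacobians) is what gives flows their expressive, exactly normalized densities.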
If you'd like to learn more, check out our paper: https://arxiv.org/abs/1912.02781
AugMix also significantly improves predictive uncertainty estimation and is orthogonal to other methods for improving uncertainty. AugMix + Deep Ensembles achieves SOTA calibration on ImageNet-C under increasing dataset shift, a task shown to be challenging by Ovadia et al. (2019). pic.twitter.com/paK3OaXTMr
AugMix significantly improves robustness to unseen corruptions on the benchmark proposed by Hendrycks & Dietterich (2019). AugMix closes the gap between the previous SOTA and clean error (an estimate of the best possible performance) by more than half on CIFAR-10-C and CIFAR-100-C! pic.twitter.com/VvSaGe7dbR
As can be seen in the GIF above, AugMix generates more diverse & realistic augmentations of training data by "composing" a random set of label-preserving ops & "mixing" them. AugMix also uses a consistency loss between augmentations that encourages invariance to semantic perturbations.
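A compact sketch of the two ingredients just described: convexly mix randomly composed augmentation chains with the clean image, and penalize disagreement between the predictive distributions on the clean and augmented views with a Jensen-Shannon consistency term. The ops here are simple placeholders (real AugMix draws from rotate/shear/posterize etc.), so treat this as a structural sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder label-preserving ops on a [0, 1]-valued image.
OPS = [
    lambda im: np.roll(im, 1, axis=0),       # translate
    lambda im: np.clip(im * 1.1, 0.0, 1.0),  # brightness
    lambda im: np.clip(im + 0.05, 0.0, 1.0), # additive shift
]

def augmix(image, width=3, depth=2, alpha=1.0):
    """Mix `width` randomly composed op chains, then blend with the original."""
    w = rng.dirichlet([alpha] * width)  # convex weights over the chains
    m = rng.beta(alpha, alpha)          # blend factor with the clean image
    mix = np.zeros_like(image)
    for i in range(width):
        aug = image.copy()
        for _ in range(depth):
            aug = OPS[rng.integers(len(OPS))](aug)
        mix += w[i] * aug
    return m * image + (1 - m) * mix

def js_consistency(p_clean, p_aug1, p_aug2, eps=1e-12):
    """Jensen-Shannon divergence among the three predictive distributions."""
    m = (p_clean + p_aug1 + p_aug2) / 3.0
    kl = lambda p: np.sum(p * (np.log(p + eps) - np.log(m + eps)))
    return (kl(p_clean) + kl(p_aug1) + kl(p_aug2)) / 3.0

img = rng.uniform(size=(8, 8))
out = augmix(img)  # stays in [0, 1]: all mixing is convex
```

Because every mixing step is a convex combination of valid images, the augmented sample stays realistic, while the JS term pushes the network toward the same prediction on all views.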
Balaji Lakshminarayanan Retweeted
Great thread (you should read the whole thing!), but this final tweet is something we should all aspire to do. Thanks, @random_walker! https://twitter.com/random_walker/status/1199303767625228288
Balaji Lakshminarayanan Retweeted
The Bayesian Deep Learning workshop website has been updated with the accepted papers and schedule. #BDL2019 http://bayesiandeeplearning.org pic.twitter.com/RGdzmb5X3q
Balaji Lakshminarayanan Retweeted
NeurIPS for local communities! • Reduce the need for air travel • Grow AI expertise around the world, including in underrepresented communities • Create opportunities for researchers and practitioners who can't physically attend due to space, visa, time, or funding constraints. https://twitter.com/NeurIPSConf/status/1186357882557784070
Balaji Lakshminarayanan Retweeted
Now for something different! Deep RL + GAN training + CelebA = artificial caricature. Agents learn to draw simplified (artistic?) portraits via trial and error. At the #NeurIPS2019 creativity workshop. Animated paper: https://learning-to-paint.github.io PDF: https://arxiv.org/abs/1910.01007 Thread. pic.twitter.com/eeChwyP57f