Balaji Lakshminarayanan

@balajiln

Research Scientist at Google working on machine learning and its applications

Joined March 2009

Tweets


  1. Pinned Tweet
    Dec 5, 2019

    Excited to announce our new paper "AugMix", which proposes a simple yet surprisingly effective method to improve robustness & uncertainty, particularly under dataset shift :) Joint work with . More details below:

  2. Retweeted
    Jan 15

    Check out a new study into how the uncertainty of models degrades with increasing dataset shift. Do the models become increasingly uncertain, or do they become confidently incorrect? Learn all about it below!

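    A rough sketch of how such degradation can be quantified: track expected calibration error (ECE) as shift increases. This is illustrative only, not the paper's code; all names are placeholders.

    ```python
    import numpy as np

    def expected_calibration_error(probs, labels, n_bins=10):
        """Bin predictions by confidence; average |accuracy - confidence| per bin."""
        conf = probs.max(axis=1)
        correct = (probs.argmax(axis=1) == labels).astype(float)
        edges = np.linspace(0.0, 1.0, n_bins + 1)
        ece = 0.0
        for lo, hi in zip(edges[:-1], edges[1:]):
            mask = (conf > lo) & (conf <= hi)
            if mask.any():
                ece += mask.mean() * abs(correct[mask].mean() - conf[mask].mean())
        return ece

    # Toy usage: a model that grows confidently wrong under shift shows rising ECE.
    rng = np.random.default_rng(0)
    probs = rng.dirichlet(np.ones(10), size=1000)  # stand-in for softmax outputs
    labels = rng.integers(0, 10, size=1000)
    print(expected_calibration_error(probs, labels))
    ```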
  3. Retweeted
    Dec 18, 2019

    Check out a novel approach to out-of-distribution detection, applied to a new benchmark dataset of genomic sequences, that enables a model to better discriminate between anomalous data and the data used in training. Learn all about it below ↓

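    For context, the paper scores an input by a likelihood ratio between a full generative model and a "background" model trained on perturbed inputs. A minimal sketch of the scoring rule, with the two trained models abstracted as toy log-density functions (all names hypothetical):

    ```python
    def likelihood_ratio_score(x, log_p_full, log_p_background):
        """Higher score -> input looks in-distribution; lower -> likely OOD."""
        return log_p_full(x) - log_p_background(x)

    # Toy stand-ins: unnormalized 1-D Gaussian log-densities.
    log_p_full = lambda x: -0.5 * x ** 2                # sharp "full" model
    log_p_background = lambda x: -0.5 * (x / 3.0) ** 2  # broad "background" model

    for x in [0.0, 2.0, 6.0]:  # far-away inputs get much lower scores
        print(x, likelihood_ratio_score(x, log_p_full, log_p_background))
    ```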
  4. Retweeted
    Dec 10, 2019

    Please come and check out our poster at NeurIPS 2019, on Wed Dec 11th 10:45 AM -- 12:45 PM @ East Exhibition Hall B + C #44.

  5. Retweeted
    Dec 9, 2019

    We looked into the NeurIPS 2019 data to see if we could gain any interesting insights and inform discussion for future years. Here's our blog post on what we found:

  6. Dec 6, 2019

    If you'd like to learn more, check out our paper :) will also be giving a contributed talk about our work on Dec 13 (Friday), 9:00-9:15 AM, and presenting a poster at the Bayesian deep learning workshop () at

  7. Dec 6, 2019

    5) We also validate the hypothesis by building low-loss tunnels between solutions found from different random inits. While points along the low-loss tunnel have similar accuracies, the function-space disagreement between them & the two endpoints shows that the modes are diverse.

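    Purely to illustrate the measurement (the paper constructs genuinely low-loss connecting paths; this toy sketch just interpolates a linear classifier's weights between two hypothetical solutions):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 2))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)

    def predict(w, X):
        return (X @ w > 0).astype(int)  # toy linear classifier

    w_a = np.array([1.0, 1.1])  # "solution" from one random init (hypothetical)
    w_b = np.array([1.2, 0.9])  # "solution" from another init (hypothetical)

    for t in np.linspace(0, 1, 5):
        w = (1 - t) * w_a + t * w_b
        preds = predict(w, X)
        acc = (preds == y).mean()                # stays high along the path
        dis = (preds != predict(w_a, X)).mean()  # function-space disagreement
        print(f"t={t:.2f}  acc={acc:.3f}  disagreement_with_endpoint_a={dis:.3f}")
    ```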
  8. Dec 6, 2019

    4) From a bias-variance perspective, we care about both accurate solutions (low bias) and diverse solutions (as decorrelation reduces variance). Given a reference solution, we plot diversity vs accuracy to measure how different methods trade off the two.

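    A toy version of that diagnostic, with hypothetical metric definitions (the paper's exact normalization may differ):

    ```python
    import numpy as np

    def accuracy(preds, labels):
        return (preds == labels).mean()

    def diversity(preds, ref_preds):
        """Fraction of test points where a model disagrees with the reference."""
        return (preds != ref_preds).mean()

    rng = np.random.default_rng(1)
    n = 2000
    labels = rng.integers(0, 10, n)
    ref = np.where(rng.random(n) < 0.9, labels, rng.integers(0, 10, n))  # ~90% acc

    # Two hypothetical candidates: one far from the reference, one close to it.
    for name, flip in [("other_random_init", 0.10), ("same_mode_sample", 0.02)]:
        preds = np.where(rng.random(n) < flip, rng.integers(0, 10, n), ref)
        print(name, f"acc={accuracy(preds, labels):.3f}",
              f"diversity={diversity(preds, ref):.3f}")
    ```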
  9. Dec 6, 2019

    3) t-SNE plot of predictions along training trajectories (marked by different colors) shows that random initialization leads to diverse functions. Sampling functions from a subspace corresponding to a single trajectory increases diversity but not as much as random init.

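    Roughly, each checkpoint's concatenated predictions are treated as one high-dimensional vector and embedded in 2-D. A sketch using scikit-learn, with synthetic "trajectories" standing in for real training runs:

    ```python
    import numpy as np
    from sklearn.manifold import TSNE

    rng = np.random.default_rng(0)
    n_ckpts, n_points, n_classes = 20, 100, 10

    # Two toy trajectories from different random inits: checkpoints drift around
    # trajectory-specific anchors, so their predictions form two clusters.
    trajs = []
    for _ in range(2):
        anchor = rng.normal(size=(n_points, n_classes))
        trajs.append(anchor[None]
                     + 0.1 * rng.normal(size=(n_ckpts, n_points, n_classes)))

    preds = np.concatenate(trajs).reshape(2 * n_ckpts, -1)  # one row per checkpoint
    emb = TSNE(n_components=2, perplexity=5, random_state=0).fit_transform(preds)
    print(emb.shape)  # (40, 2): scatter-plot these, colored by trajectory
    ```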
  10. Dec 6, 2019

    2) One hypothesis is that ensembles may find different modes while scalable Bayesian methods may sample from a single mode. We measure the similarity of functions (both in weight space and function space) to test this hypothesis.

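    A minimal sketch of the two notions of similarity (toy data, not the paper's exact metrics):

    ```python
    import numpy as np

    def weight_space_similarity(w1, w2):
        """Cosine similarity between flattened parameter vectors."""
        return w1 @ w2 / (np.linalg.norm(w1) * np.linalg.norm(w2))

    def function_space_disagreement(p1, p2):
        """Fraction of inputs where the two models' predicted labels differ."""
        return (p1.argmax(1) != p2.argmax(1)).mean()

    rng = np.random.default_rng(0)
    w1, w2 = rng.normal(size=10_000), rng.normal(size=10_000)  # two inits (toy)
    p1, p2 = rng.dirichlet(np.ones(10), 500), rng.dirichlet(np.ones(10), 500)

    print("weight-space cosine:", weight_space_similarity(w1, w2))  # ~0 here
    print("function-space disagreement:", function_space_disagreement(p1, p2))
    ```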
  11. Dec 6, 2019

    Why do deep ensembles trained with just random initialization work surprisingly well in practice? In our recent paper with & Huiyi Hu, we investigate this using insights from recent work on the loss landscape of neural nets. More below:

  12. Retweeted
    Dec 6, 2019

    Looking for something to read on your flight to ? Read about Normalizing Flows in our extensive review paper (also with new insights on how to think about and derive new flows) with

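    The identity at the heart of normalizing flows is the change of variables formula, log p_x(x) = log p_z(f(x)) + log|det ∂f/∂x|. A toy scalar affine flow illustrates it (names are illustrative; the check against the closed-form Gaussian confirms the algebra):

    ```python
    import numpy as np
    from scipy.stats import norm

    def affine_flow_logprob(x, scale, shift):
        """log p_x(x) for the flow z = (x - shift) / scale with a standard
        normal base: log p_x(x) = log N(z; 0, 1) + log|dz/dx|, dz/dx = 1/scale."""
        z = (x - shift) / scale
        log_base = -0.5 * (z ** 2 + np.log(2 * np.pi))
        return log_base - np.log(abs(scale))

    # Sanity check against the closed-form N(shift, scale^2) log-density.
    print(affine_flow_logprob(1.3, scale=2.0, shift=0.5))
    print(norm.logpdf(1.3, loc=0.5, scale=2.0))  # should match
    ```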
  13. Dec 5, 2019

    If you'd like to learn more, check out our paper!

  14. Dec 5, 2019

    AugMix also significantly improves predictive uncertainty estimation and is orthogonal to other methods for improving uncertainty. AugMix + Deep Ensembles achieves SOTA calibration on ImageNet-C under increasing data shift, a challenging task as shown in Ovadia et al. (2019).

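    For reference, the deep-ensembles half of that combination simply averages member softmax outputs, which tends to improve calibration. A sketch with assumed array shapes:

    ```python
    import numpy as np

    def ensemble_predict(member_probs):
        """member_probs: (n_models, n_examples, n_classes) softmax outputs."""
        return member_probs.mean(axis=0)

    rng = np.random.default_rng(0)
    probs = rng.dirichlet(np.ones(10), size=(4, 100))  # 4 toy members, 100 examples
    avg = ensemble_predict(probs)
    print(avg.shape, avg.sum(axis=1)[:3])  # rows still sum to 1
    ```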
  15. Dec 5, 2019

    AugMix significantly improves robustness to unseen corruptions on the benchmark proposed by Hendrycks & Dietterich (2019). AugMix closes the gap between previous SOTA and clean error (an estimate of the best possible performance) by more than half on CIFAR-10-C and CIFAR-100-C!

  16. Dec 5, 2019

    As can be seen in the GIF above, AugMix generates more diverse & realistic augmentations of training data by 'composing' a random set of label-preserving ops & 'mixing' them. AugMix also uses a consistency loss between augmentations that encourages invariance to semantic perturbations.

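    Schematically, per the paper's description: sample a few augmentation chains, mix them with Dirichlet weights, blend the mixture back with the original image via a Beta weight, and penalize Jensen-Shannon divergence among the predictions on the clean and augmented views. A compact sketch with trivial placeholder ops (not the real augmentations):

    ```python
    import numpy as np

    def augmix(image, ops, k=3, depth=2, alpha=1.0, rng=np.random):
        """Mix k randomly composed augmentation chains, then blend with the original."""
        ws = rng.dirichlet([alpha] * k)  # convex weights over chains
        m = rng.beta(alpha, alpha)       # blend weight with the original image
        mix = np.zeros_like(image)
        for w in ws:
            aug = image.copy()
            for op in rng.choice(ops, size=depth):
                aug = op(aug)
            mix += w * aug
        return (1 - m) * image + m * mix

    def js_consistency(p_clean, p_aug1, p_aug2, eps=1e-12):
        """Jensen-Shannon divergence among clean and two AugMix views."""
        mean = (p_clean + p_aug1 + p_aug2) / 3.0
        kl = lambda p: np.sum(p * (np.log(p + eps) - np.log(mean + eps)))
        return (kl(p_clean) + kl(p_aug1) + kl(p_aug2)) / 3.0

    # Placeholder label-preserving ops standing in for the paper's augmentations.
    ops = [lambda x: np.roll(x, 1, axis=0), lambda x: x[::-1].copy()]
    img = np.random.default_rng(0).random((4, 4))
    print(augmix(img, ops).shape)
    ```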
  17. Retweeted
    Nov 26, 2019

    Great thread (you should read the whole thing!), but this final tweet is something we should all aspire to do. Thanks, !

  18. Retweeted
    Nov 23, 2019

    The Bayesian Deep Learning workshop website has been updated with the accepted papers and schedule.

  19. Retweeted
    Oct 21, 2019

    NeurIPS for local communities! • Reduce the need for air travel • Grow AI expertise around the world, including in underrepresented communities • Create opportunities for researchers and practitioners who can’t physically attend due to space, visa, time, funding constraints

  20. Retweeted
    Oct 3, 2019

    Now for something different! Deep RL + GAN training + CelebA = artificial caricature. Agents learn to draw simplified (artistic?) portraits via trial and error. @ creativity workshop. Animated paper: PDF: Thread.

