Ben Poole

@poolio

research scientist at google brain. phd in neural nonsense from stanford.

Stanford, CA
Joined March 2008

Tweets


  1. Pinned Tweet
    25 Jun 2019

    Want to estimate or optimize mutual information using neural networks and the latest variational bounds? Check out our Colab notebook for implementations and experiments! Colab: Paper:

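The Colab and paper links in the pinned tweet were stripped by the scrape. As a rough illustration of the kind of estimator those variational bounds provide, here is a minimal NumPy sketch of the InfoNCE lower bound, one of the bounds the tweet refers to; the function name and the toy critic-matrix interface are ours, not from the notebook:

```python
import numpy as np

def infonce_lower_bound(scores):
    """InfoNCE lower bound on mutual information from a K x K critic matrix.

    scores[i, j] is the critic value f(x_i, y_j); diagonal entries pair
    each x_i with its true y_i. The bound is
    mean_i[ f(x_i, y_i) - log sum_j exp(f(x_i, y_j)) ] + log K,
    and it saturates at log K.
    """
    k = scores.shape[0]
    # numerically stable log-sum-exp over each row
    m = scores.max(axis=1, keepdims=True)
    lse = m.squeeze(1) + np.log(np.exp(scores - m).sum(axis=1))
    return np.mean(np.diag(scores) - lse) + np.log(k)
```

With a constant critic (no information shared between the scores and the pairing) the bound is 0, and no critic can push it above log K, which is why in practice the batch size caps what this particular bound can certify.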
  2. Retweeted
    4 Feb

    A research story with a twist I looked for a neural network regularization method that limited the number of non-zero node activities (L0 group sparsity?) I couldn’t find one. 1/11

  3. Retweeted
    21 hours ago

    Our huge Patch-seq effort is out on bioRxiv: . 1320 neurons in mouse motor cortex patched, sequenced, and mapped to a scRNA-seq atlas. 642 reconstructed morphologies! Our running joke was that it was a bit like catching Pokemons. (1/n)

  4. Retweeted

    Looks like the meeting program has been posted online!

  5. Retweeted
    3 Feb

    Joint Distributions for TensorFlow Probability. (arXiv:2001.11819v1 [])

  6. Retweeted
    31 Jan

    Does your DNN have problems with common corruption robustness? You can get surprisingly far by just training on noise! In our new paper, we evaluate how simple learned i.i.d. noise can help to generalize to ImageNet-C. Blog post @

  7. 31 Jan

    For all the procrastinators with AOE deadlines:

  8. Retweeted
    28 Jan

    This is a great question that I've gotten periodically. Previously it would have taken too long to put something together, but using Neural Tangents () it's really easy and fast! Here is the reproduction in a colab:

  9. Retweeted
    22 Jan

    FixMatch: focusing on simplicity for semi-supervised learning and improving state of the art (CIFAR 94.9% with 250 labels, 88.6% with 40). Collaboration with Kihyuk Sohn, Nicholas Carlini

  10. 16 Jan

    bay area drivers' fear of water is as bad as their fear of new housing

  11. Retweeted
    15 Jan

    Differentiable Digital Signal Processing (DDSP)! Fusing classic interpretable DSP with neural networks. ⌨️ Blog: 🎵 Examples: ⏯ Colab: 💻 Code: 📝 Paper: 1/

  12. Retweeted
    14 Jan

    BigGAN samples are famously photo-realistic but limited in diversity for some classes. Slightly modifying only the class embeddings (network unchanged) can reduce the diversity gap by ~50%! Work with Long Mai and led by fantastic !! Paper & video:

  13. Retweeted
    14 Jan

    New preprint with and ! arXiv: code: (thread) 1. Many models (e.g., in psych and comp neuro) don't have a log-likelihood in closed form, but we can easily sample observations from the model.

  14. Retweeted
    10 Jan

    Videos of all of our speakers from 2019 are now up! Please share and stay tuned for 2020.

  15. Retweeted
    7 Jan

    Our paper "On the information bottleneck theory of deep learning" has been republished (with small edits) in J Stat Mech ML special issue: A wonderful collaboration with @laika117 Artemy Kolchinsky

  16. Retweeted
    6 Jan

    *REMINDER + PLS RT* Our workshop, From Neuroscience to Artificially Intelligent Systems (NAISys), has an abstract deadline of January 10. This Friday!!! But, it's only 1-page, so easy-peasy: Please send in ideas for how neuroscience can inform AI!

  17. Retweeted
    6 Jan

    We distill key components for pre-training representations at scale: BigTransfer ("BiT") achieves SOTA on many benchmarks with ResNet, e.g. 87.8% top-1 on ImageNet (86.4% with only 25 images/class) and 99.3% on CIFAR-10 (97.6% with only 10 images/class).

  18. Retweeted
    3 Jan

    A General and Adaptive Robust Loss Function They propose an analytical function that can represent a family of well known robust cost functions just with a single parameter (alpha). Alpha lets you walk through L2, huber, cauchy, tukey and more.

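The tweet above summarizes Barron's "A General and Adaptive Robust Loss Function". As a hedged illustration of how one parameter sweeps through the familiar robust losses, here is a minimal NumPy sketch of that parameterization (the function name is ours; the two singular values of alpha are handled as their analytic limits):

```python
import numpy as np

def general_robust_loss(x, alpha, c=1.0):
    """Sketch of Barron's general robust loss on residual x with scale c.

    alpha controls the shape: 2 -> L2, 1 -> Charbonnier (smooth Huber-like),
    0 -> Cauchy/Lorentzian, -2 -> Geman-McClure.
    """
    z = (x / c) ** 2
    if alpha == 2.0:          # limit case: ordinary L2
        return 0.5 * z
    if alpha == 0.0:          # limit case: Cauchy / Lorentzian
        return np.log1p(0.5 * z)
    b = abs(alpha - 2.0)
    return (b / alpha) * ((z / b + 1.0) ** (alpha / 2.0) - 1.0)
```

For example, alpha=1 reduces the last line to sqrt((x/c)^2 + 1) - 1, the Charbonnier loss, while negative alpha flattens the tails so large residuals contribute a bounded penalty.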
  19. Retweeted
    2 Jan

    HNY and 1st discovery from mouse V1 serial section EM dataset! Largest wiring diagram yet between identified cortical neurons Brain Science supported by MICrONS. 1/n

  20. Retweeted
    20 Dec 2019

    Interested in how neuroscience can inspire better AI? Come to NAISys, March 24-28 Abstracts (1 page) due Jan 10 registration {Please RETWEET ME}

  21. Retweeted
    20 Dec 2019

    A year ago in Nature Biotechnology, Becht et al. argued that UMAP preserved global structure better than t-SNE. Now and I have written a comment arguing that their results were entirely due to the different initialization choices: . Thread. (1/n)

