Nal Kalchbrenner

@NalKalchbrenner

Research Scientist in Deep Learning at Brain Amsterdam. Previously RS at , PhD in CS at Oxford, MS at UvAmsterdam and BS and BA at Stanford.

Joined August 2015

Tweets


  1. Pinned Tweet
    10 Jul 2018

    Exciting news: Brain expands to Amsterdam 🚲🚣🌳 Looking forward to working on core AI challenges with , Lasse Espeholt and researchers across Google and beyond! For RE and RS roles, reach out or apply here: Great times ahead!

  2. Retweeted
    27 Jan

    Short but sweet paper on recurrent autoencoder architectures for speech compression. We systematically explore the space of RNN-AEs and show that the best method, dubbed FRAE, outperforms classical codecs by a large margin. Check it out!

  3. Retweeted

    President Egils Levits met with Google artificial intelligence researchers in Zurich today. The discussions covered digital technologies and how to avert the risks they pose to society.

  4. Retweeted
    30 Oct 2019
  5. Retweeted

    ✅Solar-powered ✅Autonomous ✅Scalable The Interceptor removes up to 100,000 kg of plastic from rivers per day. This is how it works:

  6. 27 Oct 2019
  7. Retweeted
    23 Oct 2019

    New paper! We perform a systematic study of transfer learning for NLP using a unified text-to-text model, then push the limits to achieve SoTA on GLUE, SuperGLUE, CNN/DM, and SQuAD. Paper: Code/models/data/etc: Summary ⬇️ (1/14)

  8. 23 Oct 2019

    A truly remarkable result: just 8 years ago, while I was studying quantum computing, I never thought I would witness this so soon. Congrats to the Quantum group!

  9. Retweeted
    17 Oct 2019

    Neural architecture evolution, a new method for automatically finding optimal neural networks for video understanding, has yielded architectures that outperform existing hand-made models and show improvements to network runtime of 10-100x. Learn more ↓

  10. Retweeted
    11 Oct 2019

    New research demonstrates how a single massively multilingual translation model covering 100+ languages significantly improves performance on both low- and high-resource language translation. Read all about it at:

  11. Retweeted

    Our ocean cleanup system is now finally catching plastic, from one-ton ghost nets to tiny microplastics! Also, anyone missing a wheel?

  12. 2 Oct 2019

    It's inspiring to see achieve this impressive first milestone in collecting ocean plastic

    This Tweet is unavailable.
  13. Retweeted
    21 Aug 2019

    Today, we're launching our Waymo Open Dataset. This high resolution lidar and camera data has been collected by our self-driving cars across a diverse range of situations. We're excited to share it directly with the research community. Download now:

  14. Retweeted

    Great Pacific Garbage Patch plastic, concentrated >10,000x by the cleanup system. Also note the buildup of very small pieces on the left.

  15. Retweeted
    15 Aug 2019

    Super proud of work from my teammate at Google Brain Amsterdam: it scales up Bayesian inference with a sampler that outperforms all ImageNet models that don't use batch norm. An important added benefit is accurate uncertainty estimates, via rigorous calibration testing.

  16. Retweeted
    12 Aug 2019
  17. 15 Aug 2019

    It was so much fun to see the steady, crazy progress on this project!

  18. 15 Aug 2019

    Announcing exciting progress in Bayesian deep learning: the new ATMC sampler achieves first-of-its-kind Bayesian inference results on ImageNet. Check out the results and the paper 👇 Heek et al:

  19. Retweeted
    5 Aug 2019

    The call for papers for our Graph Representation Learning Workshop is out! Submit your papers by 9 September: w/ co-organizers , , , Stefanie Jegelka, , , ,

  20. Retweeted
    10 Jul 2019

    Recent work on "Unsupervised Data Augmentation" (UDA) reveals that better data augmentation leads to better semi-supervised learning, with state-of-the-art results on various language and vision benchmarks, using one or two orders of magnitude less data.

  21. Retweeted
    9 Jul 2019

    There is an old misconception that common sense is rooted in language. IMHO, common sense emerges from knowing how the world works. It has more to do with intuitive physics than with language. But when your world involves communicating with humans, language becomes part of it.

