Peetak Mitra

@peetak_mitra

Caffeine dependent life form. Proud cat dad. Computational physics @ Los Alamos, Co-founder @ ICEnet. 🇮🇳|🇺🇸 my tweets/RTs are my own

Los Alamos, NM
Joined November 2019

Tweets


  1. Pinned Tweet
    Jan 7

    Excited to (be invited and) serve on the program committee for the ‘Tackling Climate Change with Machine Learning’ workshop at the upcoming in as part of the initiative. Workshop webpage:

  2. Retweeted
    Jan 31
  3. Retweeted
    Jan 31

    On Tuesday, in my class, we learned that all a neural net does is stretch / contract the fabric of space. For example, this 3-layer net (1 hidden layer of 100 positive neurons) makes its 5D logits (2D projections) linearly separable by the classifier hyperplanes (lines). (See the JAX sketch after the timeline.)

  4. Retweeted
    Jan 29

    Excited to share that we're out of stealth!

  5. Retweeted
    Replying to

    The course I am teaching now at is here: I have added topics in statistics because I believe it is critical to have a strong foundation in ML

  6. Retweeted
    Jan 28

    New paper: Towards a Human-like Open-Domain Chatbot. Key takeaways: 1. "Perplexity is all a chatbot needs" ;) 2. We're getting closer to a high-quality chatbot that can chat about anything. Paper: Blog:

  7. Retweeted
    Jan 28

    Our new paper using RNNs for surrogate modeling of PDEs with multiscale behavior: we identify parallels between Mori-Zwanzig theory and the general formulation of RNNs & leverage this to close insufficiently resolved systems through time-delay embeddings. (See the time-delay embedding sketch after the timeline.)

  8. Retweeted
    Jan 28

    Join us now for the 2nd half of "AI & Climate Change" with talks on climate mitigation and how to use ML for climate action, with Kristina Orehounig, Daniel Soares &

  9. Retweeted
    Jan 27

    This is a great opportunity for ML students to apply their skills to important climate questions, and equally for climate students to learn cutting-edge ML skills. Don’t miss out!

  10. Retweeted
    Jan 24

    We're releasing mBART, a new seq2seq multilingual pretraining system for machine translation across 25 languages. It gives significant improvements for document-level translation and low-resource languages. Read our paper to learn more:

  11. Retweeted
    Jan 22

    Updated blog post with -- "Bayesian Neural Networks Need Not Concentrate": Thanks to all for the discussion & feedback. This improved version of the blog post hopefully explains the core claims more clearly, while being less polarizing.

  12. Retweeted
    Jan 24

    Check out the new release of NumPyro. This includes an implementation of 's Block Neural Autoregressive Flow in JAX and its usage in MCMC. Also, let us know what you think about the new Sample Adaptive MCMC sampler. (A minimal NumPyro MCMC sketch follows the timeline.)

  13. Retweeted
    Jan 23

    Google Dataset Search is now officially out of beta. "Dataset Search has indexed almost 25 million of these datasets, giving you a single place to search for datasets & find links to where the data is." Nice work, Natasha Noy and everyone else involved!

  14. Retweeted
    Jan 23

    Reminders of deadlines: Papers: abstracts due 30 Jan, papers due 6 Feb. Workshops: due 14 Feb. Tutorials: due 21 Feb. Details on each: Please submit (and circulate the calls!)

  15. Retweeted

    I agree. JAX is the next-generation framework for

  16. Retweeted

    Cool result by on how methods help explain good generalization in My 1-slide summary: layers in a neural network are more compressible when tensorized. This gives a tighter generalization bound that agrees with the observed test error.

  17. Retweeted
    Jan 22

    1/5 In one of my Residency projects we used CNNs to reparameterize structural optimization (w/ ). Our approach worked best on 99/116 structures. I just finished a blog post with GIFs, visualizations, and links to code + Colab.

  18. Retweeted

    We used JAX for competitive gradient descent (CGD) with Hessian-vector products. Mixed-mode differentiation in JAX makes this efficient (just twice the cost of backprop). We used CGD for training GANs and for constrained problems in RL. This library will be very useful. (See the Hessian-vector product sketch after the timeline.)

  19. Retweeted
    Jan 20

    Flax is worth taking a look at if you're interested in training neural nets in JAX. It's definitely a big step up from Stax! (A minimal Flax sketch follows the timeline.)

  20. Retweeted
    Jan 17

    This review on normalizing flows is excellent. It's full of clear writing, precise claims, and useful connections.

  21. Jan 17

    Incredible opportunity to discuss my work with & at the PIML conference organized by / ! Look hard and you’ll find me behind the podium 😂

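For the retweet in item 3: a minimal JAX sketch of a 2 → 100 → 5 network with one ReLU ("positive") hidden layer, the kind of net the tweet describes. The shapes and initialization below are illustrative assumptions, not the original author's code.

```python
import jax
import jax.numpy as jnp

def init_params(key, d_in=2, d_hidden=100, d_out=5):
    k1, k2 = jax.random.split(key)
    return {
        "W1": jax.random.normal(k1, (d_in, d_hidden)) * 0.1,
        "b1": jnp.zeros(d_hidden),
        "W2": jax.random.normal(k2, (d_hidden, d_out)) * 0.1,
        "b2": jnp.zeros(d_out),
    }

def logits(params, x):
    # The hidden layer "stretches / contracts" (and folds) the input plane;
    # the final linear map then only needs hyperplanes to separate classes.
    h = jax.nn.relu(x @ params["W1"] + params["b1"])  # 100 "positive" neurons
    return h @ params["W2"] + params["b2"]            # 5 logits

params = init_params(jax.random.PRNGKey(0))
x = jax.random.normal(jax.random.PRNGKey(1), (8, 2))  # a batch of 2D points
print(logits(params, x).shape)                        # (8, 5)
```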
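For the retweet in item 7: the paper's closure of under-resolved systems relies on time-delay embeddings. The toy function below is only an assumed illustration of what such an embedding is (a sliding window of past values attached to each sample), not the paper's implementation.

```python
import jax.numpy as jnp

def delay_embed(u, k):
    # u: trajectory of shape (T,); returns windows of shape (T - k, k + 1),
    # each row holding a sample together with its k neighbours in time.
    return jnp.stack([u[i: len(u) - k + i] for i in range(k + 1)], axis=-1)

u = jnp.sin(jnp.linspace(0.0, 10.0, 200))  # toy 1D trajectory
print(delay_embed(u, k=3).shape)           # (197, 4)
```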
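For the retweet in item 12: a hedged NumPyro sketch with a toy model, showing only the generic MCMC front end the release announcement refers to; the Block Neural Autoregressive Flow and the Sample Adaptive sampler mentioned in the tweet plug into this same interface. The model itself is an assumption for illustration.

```python
import jax.numpy as jnp
from jax import random
import numpyro
import numpyro.distributions as dist
from numpyro.infer import MCMC, NUTS

def model(y):
    # toy model: infer the mean of a handful of observations
    mu = numpyro.sample("mu", dist.Normal(0.0, 10.0))
    numpyro.sample("obs", dist.Normal(mu, 1.0), obs=y)

y = jnp.array([0.3, -0.1, 0.8, 0.5])
mcmc = MCMC(NUTS(model), num_warmup=500, num_samples=1000)
mcmc.run(random.PRNGKey(0), y)
mcmc.print_summary()
```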
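For the retweet in item 18: a minimal sketch of the mixed-mode (forward-over-reverse) Hessian-vector product the tweet describes, which costs roughly twice a backward pass. The toy function is an arbitrary stand-in; this is the primitive behind CGD updates, not the CGD library itself.

```python
import jax
import jax.numpy as jnp

def f(x):
    # arbitrary toy scalar function standing in for a player's loss
    return jnp.sum(jnp.sin(x) ** 2)

def hvp(f, x, v):
    # forward-mode JVP of the reverse-mode gradient: mixed-mode differentiation
    return jax.jvp(jax.grad(f), (x,), (v,))[1]

x = jnp.arange(3.0)
v = jnp.ones(3)
print(hvp(f, x, v))  # Hessian of f at x, applied to v
```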
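For the retweet in item 19: a hedged Flax sketch using the current flax.linen API (which postdates the Stax-era version the tweet compares against); it shows the explicit init/apply pattern Flax layers on top of plain JAX. The layer widths are illustrative assumptions.

```python
import jax
import jax.numpy as jnp
import flax.linen as nn

class MLP(nn.Module):
    features: int                      # output width

    @nn.compact
    def __call__(self, x):
        x = nn.Dense(128)(x)           # hidden layer
        x = nn.relu(x)
        return nn.Dense(self.features)(x)

model = MLP(features=10)
x = jnp.ones((4, 32))
params = model.init(jax.random.PRNGKey(0), x)  # initialize parameters
y = model.apply(params, x)                     # forward pass
print(y.shape)                                 # (4, 10)
```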
