jaydeep thik

@jaydeepthik

MSCS at The Ohio State University. ML and DL enthusiast on a mission to make machines learn, think, understand, imagine, and solve the problem of intelligence

Joined May 2014

Tweets


  1. Retweeted a Tweet · Jan 2

    Brains are amazing. Our lab demonstrates that single human layer 2/3 neurons can compute the XOR operation. Never seen before in any neuron in any other species. Out now in . Congrats Albert, Tim  & CO

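    The result is striking because XOR is the textbook function a single linear-threshold "point neuron" cannot compute. A minimal numpy sketch of that gap (illustrative only, not the paper's model; the two-stage unit is a toy stand-in for a dendritic nonlinearity):

    ```python
    import itertools
    import numpy as np

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
    y_xor = np.array([0, 1, 1, 0])

    def linear_unit(w1, w2, b):
        # One weighted sum plus a threshold: the classic point-neuron model.
        return (X @ np.array([w1, w2]) + b > 0).astype(int)

    # Brute-force a grid of weights: no single linear unit reproduces XOR.
    grid = np.linspace(-2, 2, 9)
    found = any(
        np.array_equal(linear_unit(w1, w2, b), y_xor)
        for w1, w2, b in itertools.product(grid, repeat=3)
    )
    print("single linear unit solves XOR:", found)  # False

    # Two threshold stages combined subtractively do solve it:
    h_or = (X.sum(axis=1) > 0.5).astype(int)   # fires for either input
    h_and = (X.sum(axis=1) > 1.5).astype(int)  # fires only for both inputs
    print("two-stage output:", h_or - h_and)   # [0 1 1 0] == XOR
    ```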
  2. Retweeted a Tweet · Dec 9, 2019

    At , we care about speed. 75x faster tokenizers. 🤯

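    A minimal usage sketch, assuming the elided handle is Hugging Face and the tweet announces their Rust-backed `tokenizers` library (the call shown is the later `Tokenizer.from_pretrained` convenience; the model name is just an example):

    ```python
    from tokenizers import Tokenizer

    # Pull a ready-made tokenizer definition from the Hugging Face hub.
    tok = Tokenizer.from_pretrained("bert-base-uncased")

    enc = tok.encode("Fast tokenizers make preprocessing cheap.")
    print(enc.tokens)  # WordPiece tokens
    print(enc.ids)     # vocabulary ids
    ```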
  3. Retweeted a Tweet · Dec 8, 2019
  4. Retweeted a Tweet

    Gradient descent is hugely controversial in neuroscience. See the ultimate megathread:

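    For reference, the mechanism under debate is vanilla gradient descent, shown here on a toy least-squares problem (an illustrative sketch, not tied to any model discussed in the thread):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 3))
    true_w = np.array([1.0, -2.0, 0.5])
    y = X @ true_w + 0.1 * rng.normal(size=100)

    w = np.zeros(3)
    lr = 0.1
    for _ in range(200):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad                         # step against the gradient

    print(np.round(w, 2))  # recovers roughly [ 1.  -2.   0.5]
    ```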
  5. Retweeted a Tweet

    Facebook AI is getting ready to welcome all of you at ! Swing by booth 509 to learn more about cutting-edge research we’re presenting this year. Read more:

  6. Retweeted a Tweet · Nov 4, 2019
  7. Retweeted a Tweet · Oct 30, 2019

    It's humbling and inspiring that many big discoveries seem obvious in retrospect. This suggests that there are simple undiscovered ideas, right before our eyes, waiting to be noticed by a mind that's bold enough to look for the obvious.

  8. Oct 23, 2019
  9. Retweeted a Tweet · Oct 23, 2019

    We are excited to announce the results of our quantum supremacy experiment. Using a fully programmable, 54-qubit processor, called “Sycamore”, we have performed a calculation in 200 seconds that’s infeasible on the fastest supercomputers. Learn more ↓

  10. Retweeted a Tweet · Oct 17, 2019

    Conventional wisdom: slowly decay learning rate (lr) when training deep nets. Empirically, some exotic lr schedules also work, eg cosine. New work with Zhiyuan Li: exponentially increasing lr works too! Experiments + surprising math explanation. See

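    The schedule families being contrasted, written as toy functions of the epoch t (constants here are illustrative, not taken from the paper):

    ```python
    import math

    def step_decay(t, lr0=0.1, drop=0.1, every=30):
        # Conventional wisdom: decay the rate in steps.
        return lr0 * drop ** (t // every)

    def cosine(t, lr0=0.1, T=90):
        # One of the "exotic" schedules that also works empirically.
        return 0.5 * lr0 * (1 + math.cos(math.pi * t / T))

    def exponential_increase(t, lr0=0.1, alpha=1.05):
        # The surprise: growing the rate each epoch can work too
        # (for scale-invariant nets trained with weight decay).
        return lr0 * alpha ** t

    for t in (0, 30, 60, 89):
        print(t, step_decay(t), round(cosine(t), 4),
              round(exponential_increase(t), 4))
    ```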
  11. Retweeted a Tweet · Oct 17, 2019

    What happens in the IT cortex and to the subsequent behavior of non-human primates when you silence visually driven vlPFC neurons? Can deep-nets predict this? Come check out my poster [Tuesday Oct 22ⁿᵈ from 8 am; or Saturday Oct 19ᵗʰ at 6:30 pm @ poster session]

  12. Retweeted a Tweet · Oct 14, 2019

    🚀 Python 3.8 has been released! Learn what's new and how to use it at . What's your favorite 3.8 feature?

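    A few of the release's headline features, for the poll:

    ```python
    # 1. Assignment expressions (the "walrus" operator):
    data = [1, 2, 3, 4]
    if (n := len(data)) > 3:
        print(f"list is long: {n} items")

    # 2. The f-string "=" specifier for quick debugging:
    x = 41
    print(f"{x + 1 = }")  # prints: x + 1 = 42

    # 3. Positional-only parameters (the "/" marker):
    def pow_mod(base, exp, /, mod=None):
        return pow(base, exp, mod)

    print(pow_mod(2, 10, mod=1000))  # 24
    ```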
  13. Retweeted a Tweet · Oct 9, 2019

    Seeing a lot of confusion regarding these two terms in GAN papers lately. Mode Collapse: When a large region of a model's input space maps to a small region around a single (often bad) sample. Mode Dropping: When modes in the data are not represented in the output of the model.

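    A toy numpy illustration of the distinction, using a 1-D target whose modes sit at -2, 0, and +2 (the "generators" here are hypothetical functions, not trained models):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    z = rng.uniform(-1, 1, size=10_000)  # generator input space

    # Mode collapse: a large region of input space (|z| < 0.9, ~90% of it)
    # maps to a small region around a single sample.
    collapse = np.where(np.abs(z) < 0.9, 0.0, 2.0 * np.sign(z))

    # Mode dropping: outputs look locally fine, but the data mode at -2
    # is simply never represented in the model's output.
    dropping = np.where(z < 0, 0.0, 2.0)

    for name, out in [("collapse", collapse), ("dropping", dropping)]:
        mass = {m: float(np.mean(np.isclose(out, m))) for m in (-2.0, 0.0, 2.0)}
        print(name, mass)
    ```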
  14. Retweeted a Tweet · Sep 6, 2019

    Thrilled to be able to share what I've been working on for the last year - solving the fundamental equations of quantum mechanics with deep learning!

  15. Retweeted a Tweet · Sep 6, 2019

    In our new blog post, we review how brains replay experiences to strengthen memories, and how researchers use the same principle to train better AI systems:

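    On the AI side the principle is experience replay: store transitions and learn from random re-visits rather than only from the latest experience. A minimal generic sketch (not DeepMind's implementation):

    ```python
    import random
    from collections import deque

    class ReplayBuffer:
        def __init__(self, capacity=10_000):
            self.memory = deque(maxlen=capacity)  # oldest memories fall out

        def add(self, state, action, reward, next_state):
            self.memory.append((state, action, reward, next_state))

        def sample(self, batch_size):
            # Uniform random replay breaks the temporal correlation of
            # consecutive experiences.
            return random.sample(self.memory, batch_size)

    buf = ReplayBuffer()
    for t in range(100):
        buf.add(t, t % 4, float(t % 2), t + 1)
    print(buf.sample(4))  # four decorrelated past transitions
    ```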
  16. Retweeted a Tweet · Aug 12, 2019
  17. Aug 7, 2019
  18. Retweeted a Tweet · Aug 6, 2019

    Announcing EfficientNet-EdgeTPU, a family of image classification models created using and optimized for use on , that provide real-time performance while achieving the accuracy of much larger, server-side models. Check it out below ↓

  19. Retweeted a Tweet · Jul 25, 2019

    We’re excited to announce the results of our research collaboration with , applying Population Based Training (PBT) to help make the process of training neural nets in their self-driving cars more effective and efficient.

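    PBT's core loop is simple: periodically let poor performers copy the weights and hyperparameters of strong ones, then perturb the copies. A toy sketch with a made-up objective (nothing here reflects Waymo's actual setup):

    ```python
    import copy
    import random

    def evaluate(member):
        # Stand-in objective: performance peaks at lr = 0.01.
        return -abs(member["lr"] - 0.01)

    random.seed(0)
    population = [{"lr": random.uniform(1e-4, 1.0)} for _ in range(8)]

    for generation in range(20):
        population.sort(key=evaluate)  # worst first, best last
        for i in range(2):
            # Exploit: a bottom-quartile member copies a top-quartile one...
            population[i] = copy.deepcopy(population[-1 - i])
            # ...then explores by perturbing the copied hyperparameters.
            population[i]["lr"] *= random.choice([0.8, 1.2])

    best = max(population, key=evaluate)
    print(round(best["lr"], 4))  # drifts toward 0.01
    ```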
  20. Retweeted a Tweet

    We were so excited last night to show the world what we've been doing for the last two years. If you missed it, you can catch the full recording here:

