Behnam Neyshabur

@bneyshabur

Backpacker, Senior Research Scientist at Google

Joined May 2014

Tweets


  1. 14 Dec 2019

    Guy Gur-Ari is explaining how Feynman Diagrams can be used as a tool to simplify calculations needed to understand asymptotics of neural networks.

  2. 14 Dec 2019

    Come to our poster on "Observational Overfitting in Reinforcement Learning" at "Optimization Foundations of Reinforcement Learning" and "Science Meets Engineering of Deep Learning" workshops. Joint work w Xingyou Song & Yilun Du

  3. 14 Dec 2019

    Joint work with and Samy Bengio

  4. 14 Dec 2019

    Come to our poster on "The intriguing role of module criticality in the generalization of deep networks" at workshops "ML with Guarantees" and "Science Meets Engineering of DL" with w/ Niladri Chatterji and

  5. 14 Dec 2019

    Come to our poster on "Fantastic Generalization Measures and Where To Find Them" at workshops "ML with Guarantees" and "Science Meets Engineering of DL". will also give a spotlight talk at 5:40pm in "Science Meets Engineering of DL" workshop.

  6. 8 Dec 2019

    Will be attending ! Ping me if you want to chat!

  7. 8 Dec 2019
  8. Retweeted
    6 Dec 2019

    How does transfer learning for medical imaging affect performance, representations and convergence? Check out the blogpost below and our paper for some of the surprising conclusions, new approaches and open questions!

  9. Retweeted
    5 Dec 2019

    Fantastic Generalization Measures and Where to Find Them by Yiding Jiang et al. including

  10. Retweeted
    4 Dec 2019

    Fantastic Generalization Measures and Where to Find Them “We present the first large scale study of generalization in deep networks. We train over 10,000 convolutional networks by systematically varying commonly used hyperparameters.”

    , , and 2 others
  11. Retweeted
    4 Dec 2019

    Been waiting for this paper to drop. It's here. I've got my NeurIPS flight reading sorted out. I think this is an important step towards gaining clarity on what it might mean to "explain generalization".

  12. Retweeted
    4 Dec 2019

    One of the most comprehensive studies of generalization to date; ≈40 complexity measures over ≈10K deep models. Surprising observations worthy of further investigation. Fantastic Generalization Measures: w/ S. Bengio (See the measure-vs-gap sketch after the tweet list below.)

  13. Retweeted
    3 Dec 2019

    Excited to share our latest work on generalization in DL w/ Niladri Chatterji & We study the phenomenon that some modules of DNNs are more critical than others: rewinding their values back to initialization strongly harms performance. (1/3) (See the rewinding sketch after the tweet list below.)

  14. 4 Dec 2019

    The shape of the valleys that connect the initial and final parameter values of modules (e.g., a conv module) can tell you a lot about why some architectures generalize better! See our recent work w/ Niladri Chatterji & :

  15. Retweeted
    26 Jun 2019

    1/3 If you study the dynamics of gradient descent, what properties of the trajectory would be most useful for your research? Currently DEMOGEN (a dataset of 756 trained models) has final weights, but we plan to extend it and include information on intermediate weights.

  16. Retweeted
    19 Jun 2019

    1/3 DEMOGEN is a dataset of 756 CNN/ResNet-32 models trained on CIFAR-10/100 w/ various regularization and hyperparameters, leading to a wide range of generalization behaviors. Hope the dataset can help the community w/ exploring generalization in

  17. Retweeted
    15 Jun 2019

    Nati Srebro is giving an exciting talk about what are NOT the reasons for neural networks working well, at the workshop

  18. 14 Jun 2019
  19. 14 Jun 2019

    Have you seen something interesting or curious or mysterious while training a deep neural network? Share these interesting and unusual deep learning phenomena here:

  20. 14 Jun 2019

    Come to our workshop on Identifying and Understanding Deep Learning Phenomena tomorrow! We have many super exciting talks! See our schedule here:

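The "Fantastic Generalization Measures" tweets above describe evaluating complexity measures against the generalization gap across thousands of trained networks. Below is a minimal sketch of that kind of analysis, assuming a hypothetical collection `models` of trained networks with known train/test accuracies; the Frobenius-norm measure and the use of Kendall's rank correlation are illustrative choices, not the paper's exact protocol.

```python
# Minimal sketch: correlate one simple norm-based complexity measure with the
# generalization gap across a set of trained models. `models` and its accuracy
# fields are hypothetical placeholders; the study itself evaluates ~40 measures
# over ~10K convolutional networks.
import torch
from scipy.stats import kendalltau

def frobenius_measure(net: torch.nn.Module) -> float:
    # Sum of squared Frobenius norms of all weight matrices: one simple measure.
    return sum((p ** 2).sum().item() for p in net.parameters() if p.dim() > 1)

def measure_vs_gap(models):
    # models: iterable of (net, train_acc, test_acc) triples.
    complexities = [frobenius_measure(net) for net, _, _ in models]
    gaps = [train_acc - test_acc for _, train_acc, test_acc in models]
    # Kendall's tau asks: does ranking models by the measure predict the
    # ranking of their generalization gaps?
    tau, p_value = kendalltau(complexities, gaps)
    return tau, p_value
```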
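The module-criticality tweets describe rewinding individual modules back to their initialization and measuring how much performance drops. Here is a rough sketch of that rewinding experiment; `model`, `init_state`, `test_loader`, and `evaluate` are assumed to exist, and this is an illustration of the idea rather than the authors' exact procedure.

```python
# Rough sketch: rewind one top-level module at a time to its initial weights and
# record the drop in test accuracy. `init_state` is assumed to be a snapshot of
# model.state_dict() taken at initialization, and `evaluate(model, loader)` is
# assumed to return test accuracy.
import copy

def module_criticality_scores(model, init_state, test_loader, evaluate):
    baseline = evaluate(model, test_loader)
    scores = {}
    for name, _ in model.named_children():  # e.g. each conv block
        rewound = copy.deepcopy(model)
        state = rewound.state_dict()
        # Copy the initial values of this module's parameters back in.
        for key in state:
            if key.startswith(name + "."):
                state[key] = init_state[key].clone()
        rewound.load_state_dict(state)
        # A large drop means the module is "critical": the network cannot
        # tolerate replacing its trained weights with their initial values.
        scores[name] = baseline - evaluate(rewound, test_loader)
    return scores
```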
