Tweets by @dilipkay


  1. Retweeted
    Dec 14, 2019

    Come to our poster on "Fantastic Generalization Measures and Where To Find Them" at the "ML with Guarantees" and "Science Meets Engineering of DL" workshops. Will also give a spotlight talk at 5:40pm in the "Science Meets Engineering of DL" workshop.

    Show this thread
  2. Retweeted
    Dec 4, 2019

    One of the most comprehensive studies of generalization to date: ≈40 complexity measures over ≈10K deep models. Surprising observations worthy of further investigation. Fantastic Generalization Measures: with S. Bengio

  3. Retweeted
    Nov 24, 2019

    Google AI Residency 2020 applications are open, with positions in many different locations including Europe and Africa. A fantastic program designed to jumpstart a career in Machine Learning. Apply at before Dec. 19, 2019.

  4. Retweeted

    Nice work & repo on knowledge distillation. Dark knowledge remains one of the few amusingly brain-tickling / head-scratching results in neural nets.

  5. Oct 24, 2019

    New paper with and : we apply contrastive learning to representation distillation, achieving SOTA results. Seems to be the first method to consistently outperform knowledge distillation for a range of tasks; a generic sketch of such a contrastive objective appears after this list. Details and code:

  6. Retweeted
    Oct 21, 2019

    So it seems the margin-based approach to generalization bounds can be made to work -- provided you measure margin in a way that reflects the stability of all layers jointly. In hindsight, that seems rather natural. More impressive work from with Colin Wei.

  7. Retweeted
    Oct 21, 2019

    For quite some time (NeurIPS18, ICLR19), we have empirically observed that the margin at intermediate layers carries significant information about the generalization of a deep model. Delighted to see has now proved this phenomenon and provided a cleaner definition of the all-layer margin; a sketch of the layer-wise margin appears after this list.

  8. Retweeted
    Sep 5, 2019

    Applications are now open for the 2019 Google Faculty Research Award and are due September 30 at 1:00PM PST. The award provides unrestricted gifts to support world-class technical research in Computer Science, Engineering, and related fields.

  9. Retweeted
    Aug 19, 2019

    Today (August 20, 2019), after the Lunar Orbit Insertion (LOI), is now in lunar orbit. Lander Vikram will soft-land on the Moon on September 7, 2019.

  10. Aug 18, 2019

    New paper at ICCV 2019! We tackle the problem of image *extrapolation* using generative adversarial networks. This is a much less constrained problem than image interpolation. Check out the paper, and more results here:

  11. Jul 22, 2019

    Successful launch, congratulations! 23 days before the landing sequence is initiated (aiming at the South Pole of the Moon).

  12. Jul 14, 2019

    Good luck to ISRO!! Hope the technical issues are fixed soon.

  13. Retweeted
    Jul 9, 2019

    Our work on the DEMOGEN dataset (the first dataset of trained networks with realistic sizes/architectures) and its use in studying the connection between the margin distribution and the generalization gap.

  14. Retweeted
    Jul 9, 2019
  15. Jul 9, 2019

    with and Samy Bengio

    Show this thread
  16. Jul 9, 2019

    New blog post on our new dataset of over 700 models for the study of generalization, and our results on (very accurately) predicting the generalization gap in deep networks!

    Show this thread
  17. Jul 8, 2019

    A novel regularizer for deep networks that achieves SOTA ImageNet adversarial robustness! Joint work with Chongli Qin, Pushmeet Kohli, and other awesome people at DeepMind:

  18. Retweeted
    Jun 26, 2019

    with and S. Bengio.

    Show this thread
  19. Retweeted
    Jun 26, 2019

    3/3 We thought of storing a few (e.g. 5) evenly spaced weights between the initial and final w's. I also thought of storing LPC coeffs (say order n=10) [LPC predicts the next w as a linear combination of the previous w's: (a_1, ..., a_n) = argmin sum_t (w(t) - sum_{i=1}^{n} a_i w(t-i))^2]; a code sketch of this fit appears after this list. Other thoughts?

    Show this thread
  20. Retweeted
    Jun 26, 2019

    2/3 We cannot store all intermediate weights (the final weights alone are 15.6GB in the current version of DEMOGEN). Are there some compact statistics from the trajectory that you wish you had access to besides the final weights?

    Show this thread
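
The contrastive-distillation idea in tweet 5 can be made concrete. Below is a minimal numpy sketch of an InfoNCE-style objective for representation distillation, assuming in-batch negatives, L2-normalized embeddings, and an arbitrary temperature of 0.1; it illustrates the generic technique only, not the paper's exact loss.

import numpy as np

def contrastive_distillation_loss(student, teacher, temperature=0.1):
    """student, teacher: (B, D) L2-normalized embeddings of the same batch.

    The student embedding of image i is pulled toward the teacher
    embedding of image i (positive pair) and pushed away from the
    teacher embeddings of the other images in the batch (negatives).
    """
    logits = student @ teacher.T / temperature      # (B, B) similarities
    logits -= logits.max(axis=1, keepdims=True)     # numerical stability
    log_softmax = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-log_softmax.diagonal().mean())    # cross-entropy on positive pairs

# Toy usage: random unit embeddings for a batch of 8 images (hypothetical).
rng = np.random.default_rng(0)
unit = lambda x: x / np.linalg.norm(x, axis=1, keepdims=True)
s = unit(rng.normal(size=(8, 64)))   # student features
t = unit(rng.normal(size=(8, 64)))   # teacher features
print(contrastive_distillation_loss(s, t))

The intuition for the design: unlike the pointwise KL-matching of classic knowledge distillation, a contrastive loss only asks the student to preserve which representations belong together, i.e. relational structure in the teacher's embedding space.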
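
The margin notion behind tweets 6, 7, and 13 is compact enough to state. A hedged sketch, assuming the usual first-order (linearized) distance to the decision boundary measured at a hidden layer; the exact normalization in the cited papers may differ:

$$ d^{(\ell)}(x, y) = \frac{f_y(x) - f_{j^*}(x)}{\bigl\lVert \nabla_{x^{(\ell)}} f_y(x) - \nabla_{x^{(\ell)}} f_{j^*}(x) \bigr\rVert_2}, \qquad j^* = \arg\max_{j \neq y} f_j(x), $$

where $x^{(\ell)}$ is the representation of input $x$ at layer $\ell$. The numerator alone is the familiar output margin; collecting $d^{(\ell)}$ over a dataset yields the margin distribution whose summary statistics feed the generalization-gap predictor mentioned in tweets 13 and 16.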
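
Tweet 19's bracketed formula translates directly into code. A minimal numpy sketch, assuming scalar LPC coefficients shared across all weight dimensions and an ordinary least-squares fit; the shapes and the toy trajectory are illustrative only.

import numpy as np

def fit_lpc(trajectory, order=10):
    """trajectory: (T, D) array of T flattened weight snapshots w(0..T-1).

    Returns (a_1, ..., a_n) minimizing sum_t ||w(t) - sum_i a_i w(t-i)||^2.
    """
    T = trajectory.shape[0]
    rows, targets = [], []
    for t in range(order, T):
        past = trajectory[t - order:t][::-1]   # past[i] = w(t-1-i), shape (order, D)
        rows.append(past.T)                    # one regression row per weight dimension
        targets.append(trajectory[t])
    X = np.concatenate(rows)                   # ((T-order)*D, order)
    y = np.concatenate(targets)                # ((T-order)*D,)
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

# Toy usage: a 100-snapshot trajectory of a 50-parameter model compresses to
# 10 coefficients (plus the first `order` snapshots needed to seed the recursion).
rng = np.random.default_rng(0)
traj = np.cumsum(rng.normal(size=(100, 50)), axis=0)  # random-walk stand-in for training
print(fit_lpc(traj, order=10))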
