James Lucas

@james_r_lucas

Machine Learning PhD Student, University of Toronto; Vector Institute

Joined April 2018

Tweets


  1. Retweeted
    Jan 30

    Glad I don't have to keep this a secret anymore - is in Toronto next year! Excited to welcome everyone up to Canada in January ❄️❄️❄️ 🇨🇦🇨🇦🇨🇦

  2. Retweeted
    Jan 29

How do you increase AI capacity in a workforce? This article is about some of the professional development programming we're doing at Vector Institute.

  3. Retweeted
    Jan 6

    To jump start the new year, a blog post on geometric series.

  4. Dec 20, 2019

    "vague weasel words do not a reason for rejection make" Seriously, amazing work from this meta reviewer. (Though, it is unfair that the reviewers made their job so much harder.)

  5. Dec 14, 2019

I'm at the ML With Guarantees workshop today, talking about how hard it is to generalize to data without iid assumptions. Contributed talk at 11:30am and poster all day. Come speak with me about theory in settings like few-shot learning! Work w/ Rich Zemel

  6. Retweeted
    Dec 13, 2019

    Can we learn generative language models for the joint distribution over several languages? Come find my poster for 🐸 Multilingual KERMIT 🐸 at the Perception as Generative Reasoning (PGR) Workshop at East Meeting rooms 1-3 from 2:30-3:30pm!

  7. Dec 12, 2019

Come speak with me about Lookahead now! Poster #200

  8. Retweeted
    Dec 11, 2019
  9. Dec 11, 2019

    Come speak with us about posterior collapse in VAEs! (In 30 minutes...)

  10. Dec 9, 2019

(4) Information-Theoretic Limitations on Novel Task Generalization, ML with Guarantees workshop. We measure the theoretical hardness of settings like few-shot learning. I'll be presenting this as a contributed oral at 11:30 and during the poster sessions. With Rich Zemel

  11. Dec 9, 2019

(3) Lookahead Optimizer: k steps forward, 1 step back. Thursday evening, East Hall B+C (#200) We propose a new optimization algorithm that wraps around existing optimizers, reducing variance and improving convergence. Work with Geoff Hinton and Jimmy Ba.

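The Lookahead tweet above describes the core idea: an inner optimizer takes k fast steps, then the slow weights are pulled part of the way toward the result. Here is a minimal sketch of that loop, assuming plain SGD as the inner optimizer; the function name `lookahead` and the toy quadratic objective are illustrative, not the authors' implementation.

```python
def lookahead(grad_fn, w0, inner_lr=0.1, k=5, alpha=0.5, outer_steps=100):
    """Sketch of Lookahead: k fast SGD steps, then one slow interpolation step."""
    slow = list(w0)
    for _ in range(outer_steps):
        fast = list(slow)
        for _ in range(k):  # k steps forward with the inner optimizer (SGD here)
            grads = grad_fn(fast)
            fast = [w - inner_lr * g for w, g in zip(fast, grads)]
        # 1 step back: move the slow weights a fraction alpha toward the fast weights
        slow = [s + alpha * (f - s) for s, f in zip(slow, fast)]
    return slow

# Usage: minimize f(w) = ||w||^2, whose gradient is 2w, so the optimum is at 0.
w = lookahead(lambda ws: [2.0 * wi for wi in ws], w0=[3.0, -2.0])
```

In practice the inner optimizer would be whatever you already use (SGD with momentum, Adam, etc.); Lookahead only adds the outer interpolation of the slow weights.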
  12. Dec 9, 2019

(2) Preventing Gradient Attenuation in Lipschitz Constrained Convolutional Networks. Thursday morning, East Hall B+C (#149) The secret is in finding the right way to learn orthogonal convolutions. Work with two awesome **undergrads**, Qiyang Li and Saminul Haque, and others

  13. Dec 9, 2019

    (1) See also our website with code/video/poster

  14. Dec 9, 2019

    (1) Don't Blame the Elbo! A Linear VAE Perspective on Posterior Collapse. Wednesday morning, East Hall B+C (#123) We investigate posterior collapse through theoretical analysis of linear VAEs and empirical evaluation of nonlinear VAEs.

  15. Dec 9, 2019

I'm in Vancouver. If you're here and want to chat, let me know! Also, I'm presenting some work...

  16. Dec 4, 2019

    Come see us on Thursday in Vancouver, Poster #200!

  17. Retweeted
    Nov 5, 2019

Previously we introduced fully connected architectures with tight Lipschitz bounds. Now we have extended this to conv nets. Good for provable adversarial robustness and Wasserstein distance estimation. Joint work w/ Saminul Haque et al.

  18. Aug 8, 2019

    I saw quite a bit of negativity around NeurIPS reviews this year... While I'm sure it's not universal, all of the papers I am reviewing have had a healthy amount of constructive reviewer discussion! Stay hopeful, friends.

  19. Retweeted
    Jul 9, 2019

    New paper on studying how the critical batch size changes based on properties of the optimization algorithm (including momentum and preconditioning), through two different lenses: large scale experiments, and analysis of a simple noisy quadratic model.

  20. Retweeted
    Jun 5, 2019

1/5 New work w/ Rich Zemel suggests likelihood-based conditional generative models will not solve robust classification. We show competitive models can be easily fooled, revealing fundamental issues with their learned representations and the likelihood objective.

