Michael Zhang

@michaelrzhang

PhD student in machine learning / ( '18). Always exploring.

Joined August 2017

Tweets


  1. Pinned Tweet
    Dec 4, 2019

    Excited to share the NeurIPS camera-ready version of the Lookahead Optimizer: . Lookahead wraps around and often improves the performance of other optimizers. Very grateful to work on this with James Lucas, Geoffrey Hinton, and Jimmy Ba.

  2. Jan 29

    We show our rules compare favorably to rules learned by order-invariant neural networks under different noise models. paper: code:

  3. Jan 29

    This is interesting especially when the voting rule has access to auxiliary information e.g. some proxy for voter experience. Our model is applicable for cooperative policy making and peer review--lots more exciting directions to pursue!

  4. Jan 29

    Our AAMAS 2020 paper “Objective Social Choice: Using Auxiliary Information to Improve Voting Outcomes” ( and me) is online! We analyze voting rules under a setup where voters get noisy estimates of some underlying ground truth.

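    A toy sketch of the setup this thread describes (illustrative only, not the paper's model; the voter count, noise levels, and rules below are made up). It contrasts a rule that ignores auxiliary information with one that uses a noise-level proxy:

        import numpy as np

        rng = np.random.default_rng(0)

        # Each voter sees a noisy estimate of a scalar ground truth; a voting
        # rule aggregates the estimates. Per-voter noise levels stand in for
        # the auxiliary information (e.g. a proxy for voter experience).
        truth = 1.0
        noise_sd = rng.uniform(0.2, 2.0, size=25)           # per-voter noise
        votes = truth + noise_sd * rng.standard_normal(25)  # noisy estimates

        # Rule 1: plain mean, ignoring the auxiliary information.
        mean_vote = votes.mean()

        # Rule 2: inverse-variance weighting, using noise levels as side info.
        w = 1.0 / noise_sd**2
        weighted_vote = (w * votes).sum() / w.sum()

        print(f"truth={truth:.2f} mean={mean_vote:.2f} weighted={weighted_vote:.2f}")
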
  5. Retweeted
    Jan 26

    Kobe was a legend on the court and just getting started in what would have been just as meaningful a second act. To lose Gianna is even more heartbreaking to us as parents. Michelle and I send love and prayers to Vanessa and the entire Bryant family on an unthinkable day.

  6. Jan 25

    The researcher themed trading cards are really neat too.

  7. Jan 25

Enjoyed this short piece by : via . It's hard to capture how much hockey is a part of growing up in Canada.

  8. Jan 16

    Lots of exciting research on distributional RL lately, and this work shows that dopamine in mouse brain cells is better modeled with a distribution (rather than a point estimate as in classic TD)!

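    A minimal sketch of the contrast drawn here, assuming the distributional learner uses expectile-style updates with asymmetric learning rates (the mechanism, names, and numbers below are illustrative, not the paper's):

        import numpy as np

        rng = np.random.default_rng(0)

        # Bimodal rewards: a point estimate collapses this distribution to its
        # mean, while a distributional learner keeps the shape.
        def sample_reward():
            return rng.normal(-1.0, 0.2) if rng.random() < 0.5 else rng.normal(2.0, 0.2)

        v = 0.0                            # classic TD: a single value
        taus = np.linspace(0.1, 0.9, 9)    # asymmetry levels, one per "neuron"
        z = np.zeros_like(taus)            # distributional TD: many values
        alpha = 0.02

        for _ in range(20000):
            r = sample_reward()
            v += alpha * (r - v)                      # symmetric update
            delta = r - z
            # Asymmetric update: optimistic atoms (high tau) weight positive
            # errors more, pessimistic atoms weight negative errors more.
            z += alpha * np.where(delta > 0, taus, 1 - taus) * delta

        print("point estimate:", round(v, 2))      # near the mean, about 0.5
        print("distributional:", np.round(z, 2))   # spans both reward modes
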
  9. Jan 9
  10. Dec 12, 2019

    Writing down notes, both digital and physical, between sessions has been very useful for remembering cool ideas, experiences, and people.

  11. Retweeted
    Dec 9, 2019

    (3) Lookahead Optimizer: k steps forward, 1 step back. Thursday evening, East Hall B+C (#200) We propose a new optimization algorithm that wraps around existing optimizers, reducing variance and improving convergence. Work with , Geoff Hinton, and Jimmy Ba.

  12. Dec 4, 2019

    The algorithm has minimal computational overhead and stores one additional copy of the parameters. It can be incorporated into existing pipelines with a couple of lines of code. Our implementation is available at:

  13. Dec 4, 2019

    Lookahead selects a search direction based on k steps of the inner optimizer. We demonstrate that this reduces variance, which improves convergence and makes Lookahead more robust to hyperparameter choices. This is desirable on novel datasets without well-calibrated baselines.

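    The "k steps forward, 1 step back" rule is compact enough to sketch. A minimal numpy version of the update described in this thread, with plain SGD as the inner optimizer (a sketch, not the released implementation, whose API wraps existing optimizers):

        import numpy as np

        def lookahead_sgd(grad_fn, theta, k=5, alpha=0.5, inner_lr=0.1, steps=100):
            slow = theta.copy()
            for _ in range(steps):
                fast = slow.copy()
                for _ in range(k):                # k steps of the inner optimizer
                    fast -= inner_lr * grad_fn(fast)
                slow += alpha * (fast - slow)     # one interpolation step back
            return slow

        # Toy usage: minimize f(x) = ||x||^2 / 2, whose gradient is x.
        theta = lookahead_sgd(lambda x: x, np.array([5.0, -3.0]))
        print(theta)  # close to the optimum at the origin
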
  14. Retweeted

    There’s a simple way to fight familiarity bias. When you read a good paper with an author you don’t know, especially if they’re junior, take a minute to look them up, get to know their work, cite them, and keep them in mind for events you organize. In short, remember the name!

  15. Nov 18, 2019

    Looking forward to giving a talk this week at Toronto Machine Learning Summit () about the Lookahead Optimizer! Details:

  16. Nov 8, 2019

    Inspiring talk from Colleen Lewis () about promoting department inclusivity today. Believing in students and small changes sustained over time make a huge difference. is doing very well.

  17. Oct 16, 2019

Very cool perspective: viewing an infinite-depth network as computing the equilibrium point of a single layer.

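    A naive sketch of that view: iterate one layer until it stops changing and treat the fixed point z* = f(z*, x) as the output of an "infinitely deep" network. (A real implementation would typically find the equilibrium with a root solver and differentiate through it implicitly; W is scaled here so that plain iteration contracts.)

        import numpy as np

        rng = np.random.default_rng(0)

        d = 8
        W = rng.standard_normal((d, d)) * 0.1   # small norm: contraction map
        U = rng.standard_normal((d, d))
        b = rng.standard_normal(d)

        def f(z, x):
            # One "layer"; stacking it infinitely deep is equivalent to
            # finding its fixed point.
            return np.tanh(W @ z + U @ x + b)

        x = rng.standard_normal(d)
        z = np.zeros(d)
        for _ in range(100):                    # naive fixed-point iteration
            z_next = f(z, x)
            if np.linalg.norm(z_next - z) < 1e-8:
                break
            z = z_next

        print("residual:", np.linalg.norm(f(z, x) - z))  # ~0 at equilibrium
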
  18. Sep 25, 2019

This is on the original set of MAML tasks; results would most likely differ if the tasks required more adaptation.

  19. Sep 25, 2019

    Neat paper! Shows that inner loop adaptation is not necessary at meta-test time for MAML. Removing the final layer and computing cosine similarities (similar to prototypical nets) is sufficient.

  20. Sep 14, 2019

    +1, Waves was quite an emotional journey at TIFF.

