Zeyuan Allen-Zhu

@ZeyuanAllenZhu

Researcher | PhD | working on Theory, ML, Optimization

Joined April 2010

Tweets

  1. Pinned Tweet
    Jan 15

    Is deep learning actually performing DEEP learning? We may have given the first proof that neural networks are capable of efficient hierarchical learning, while existing theory only shows that deep learning can "simulate" non-hierarchical algorithms.

  2. Jan 10

    The V100 GPU is perhaps the best for deep learning, but Azure has had no V100s available anywhere in the United States for at least a week... Thoughts? (I also spent multiple days with support investigating; so far, no useful reply.)

  3. Nov 6, 2019

    + get a FREE job title upgrade. From this year on, our intro 63-level titles become "Senior Researcher".

  4. Retweeted
    Nov 6, 2019

    ML and Optimization @ Redmond is hiring researchers and postdocs! If you want to join Ofer Dekel, Yin Tat Lee, Lin Xiao, and several brilliant engineers, apply via the links below by 12/31 (soft deadline). Plz RT! 🤩

    Show this thread
  5. Retweeted
    Jul 2, 2019

    Overparametrization can be helpful for *unsupervised* learning, in unexpected ways! (e.g. latent variable recovery). Joint w/ Rares Buhai, Yoni Halpern and .

  6. Jun 18, 2019

    Version 2 uploaded. We have now eliminated *all* kernel methods, in particular the convolutional NTK with global average pooling. 🧐

  7. Retweeted
    Jun 17, 2019

    I started a blog ()! I'll probably cross-link most posts here. Glad to finally get a chance to use the pun Ka-Math somehow. Still trying to procure a k@math.blahblah email address...

    Show this thread
  8. Retweeted
    Jun 13, 2019

    . and realized that convolving a neural network with a Gaussian makes it robust against adversarial attacks. After lots of grind by and , we substantially improved on their results using adversarial training by and friends.

    (A rough sketch of this Gaussian-smoothing idea appears after the feed, below.)

    Show this thread
  9. Retweeted
    Jun 6, 2019

    Want generative models for *certifiably* data that can learn from biased samples? Try our new approach based on the max-entropy framework (with ). Bonus: fast algorithm for max-entropy inspired by et al

  10. Jun 4, 2019

    So, what do researchers do on Twitter?

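The retweet in item 8 describes making a classifier robust by convolving it with a Gaussian. Below is a minimal, hypothetical sketch of that general idea (randomized smoothing: classify many Gaussian-perturbed copies of the input and take a majority vote), assuming a PyTorch classifier. The names base_classifier, sigma, n_samples, and num_classes are illustrative placeholders, not taken from the tweet or the underlying paper.

    # Minimal sketch of Gaussian (randomized) smoothing -- not the authors' code.
    import torch

    def smoothed_predict(base_classifier, x, sigma=0.25, n_samples=100, num_classes=10):
        """Monte Carlo estimate of argmax_c P[ base_classifier(x + N(0, sigma^2 I)) = c ]."""
        counts = torch.zeros(num_classes, dtype=torch.long)
        with torch.no_grad():
            for _ in range(n_samples):
                noisy = x + sigma * torch.randn_like(x)       # Gaussian perturbation of the input
                pred = base_classifier(noisy).argmax(dim=-1)  # base model's class on the noisy copy
                counts[pred] += 1                             # tally the vote
        return int(counts.argmax())                           # majority vote = the smoothed prediction

Certifying a robustness radius from these vote counts, and the adversarial-training improvement the tweet mentions, are beyond this sketch.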
