Tweets


  1. Retweeted
    9 Dec 2019

    "Preventing Gradient Attenuation in Lipschitz Constrained Convolutional Networks" Poster #149 (Thu 10:45am, East Exh. Hall B+C) by awesome *undergrads* and Saminul Haque w/ @CemAnil1, ,

  2. Retweeted
    9 Dec 2019

    For those who are interested in VOGN, you might also like to read my noisy natural gradient () paper, which derived the same connection between optimization and variational inference as VOGN (we also discussed K-FAC approximations beyond the diagonal one).

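    The connection referenced here can be stated via the variational objective below (a standard form; the exact update rules derived in the VOGN and noisy natural gradient papers are not reproduced here):

        \max_{\mu,\Sigma}\ \mathcal{L}(\mu,\Sigma)
          = \mathbb{E}_{w\sim\mathcal{N}(\mu,\Sigma)}\big[\log p(\mathcal{D}\mid w)\big]
          - \mathrm{KL}\big(\mathcal{N}(\mu,\Sigma)\,\|\,p(w)\big)

    Natural-gradient ascent on this objective over the Gaussian parameters yields updates resembling adaptive optimizers applied to noisy weights, which is the optimization/variational-inference connection both papers derive.
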
  3. Retweeted
    9 Dec 2019

    (1) Don't Blame the ELBO! A Linear VAE Perspective on Posterior Collapse. Wednesday morning, East Hall B+C (#123). We investigate posterior collapse through theoretical analysis of linear VAEs and empirical evaluation of nonlinear VAEs.

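    For reference, "posterior collapse" is the failure mode where the approximate posterior matches the prior, so the latents carry no information about the input; a standard statement of the phenomenon (not specific to this paper's analysis):

        q_\phi(z \mid x) \approx p(z) \quad \text{for (almost) all inputs } x
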
  4. Retweeted
    8 Dec 2019

    Will arrive in Vancouver a bit late (Monday night) due to a final exam. I will present two posters (see below) in the main conference and one poster at the SGO workshop (and will give a 30-minute contributed talk on it). Reach out if you'd like to chat.

  5. 5 Nov 2019

    This architecture is practical (if a bit slow) to train, and competitive with other deterministic provable adversarial defenses. (Though still far behind randomized smoothing.)

  6. 5 Nov 2019

    Turns out this is OK, since if you use 2N channels and project down, there's one connected component that can represent any orthogonal convolution over N channels. So you lose at most a factor of 2.

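    A minimal sketch of the idea in this tweet and the next, in the 1x1-convolution (plain matrix) case; the paper's actual construction for general convolutions is not reproduced here. O(N) has two connected components (det = +1 and det = -1), but SO(2N) is connected, and padding any Q in O(N) to a determinant-one block matrix and projecting back down recovers Q:

        import numpy as np

        N = 4
        rng = np.random.default_rng(0)

        # Random orthogonal Q; det(Q) may be +1 or -1 (the two components of O(N)).
        Q, _ = np.linalg.qr(rng.standard_normal((N, N)))

        # Pad to 2N channels: det(blockdiag(Q, D)) = det(Q) * det(D) = det(Q)^2 = 1,
        # so the padded matrix lies in SO(2N), a single connected component.
        D = np.eye(N)
        D[0, 0] = np.linalg.det(Q)
        M = np.block([[Q, np.zeros((N, N))],
                      [np.zeros((N, N)), D]])

        # Projecting down to the first N channels recovers Q exactly.
        P = np.hstack([np.eye(N), np.zeros((N, N))])
        assert np.allclose(P @ M @ P.T, Q)
        assert np.isclose(np.linalg.det(M), 1.0)
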
  7. 5 Nov 2019

    Our architecture uses orthogonal convolutions based on Lechao Xiao's initialization scheme. But when optimizing over this space, there's a surprising problem: the space of orthogonal convolutions is disconnected!

  8. 5 Nov 2019

    Previously we introduced fully connected architectures with tight Lipschitz bounds. Now we've extended this to conv nets. Good for provable adversarial robustness and Wasserstein distance estimation. Joint work w/ Saminul Haque et al.

  9. 4 Nov 2019

    This is a phenomenon we also found when evaluating GAN likelihoods. Evaluating GAN likelihoods is computationally challenging, but we can learn a lot from it!

  10. 4 Nov 2019

    From David Bau et al.: more evidence that GANs produce seemingly high-quality image samples by omitting hard-to-model objects.

  11. 3 Nov 2019

    Neat work by Haoze Wu et al. Something I hadn't appreciated until recently is that learning SAT solvers is bottlenecked by "data," i.e. interesting problem instances. They can generate good enough random instances to tune a solver:

  12. Retweeted
    28 Oct 2019

    The camera-ready version of our NQM paper () is out! We added a new section analyzing exponential moving average (EMA). EMA accelerates training a lot with little computational overhead. REALLY surprised that EMA hasn't been widely used so far!

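    A minimal sketch of parameter EMA as commonly used for this purpose (the decay value, and whether this matches the NQM paper's exact setup, are assumptions):

        import numpy as np

        def ema_update(avg_params, params, decay=0.999):
            """In-place update: avg <- decay * avg + (1 - decay) * param.

            avg_params and params are parallel lists of numpy arrays. Training
            proceeds on the raw parameters; the averaged copy is used for
            evaluation, costing one extra parameter copy and a cheap
            elementwise update per step.
            """
            for avg, p in zip(avg_params, params):
                avg *= decay
                avg += (1.0 - decay) * p

        # Usage sketch:
        params = [np.zeros(3)]
        avg = [p.copy() for p in params]
        for step in range(100):
            params[0] += 0.1            # stand-in for an optimizer step
            ema_update(avg, params)
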
  13. 23 Oct 2019

    Quantum computing researchers are debating fundamental science while AI researchers are stuck arguing if the word "solve" maybe gave some people the wrong impression.

  14. 21 Oct 2019

    In ML, it's often simultaneously the case that (1) the announcement of a research result is accurate, informative, and measured, and (2) most of the excitement is from people misinterpreting the result as more profound than it really is.

  15. 19 Oct 2019

    An engaging and accessible overview of the challenges involved in building AI systems consistent with human values, and what aspects of the problem our current algorithmic techniques can and can't address. We need more books like this!

  16. Retweeted
    17 Oct 2019

    New work on solving minimax optimization locally, with Jimmy Ba. We propose a novel algorithm that converges to, and only to, local minimax points. The main innovation is a correction term on top of gradient descent-ascent. Paper link:

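    A sketch of what "a correction term on top of gradient descent-ascent" can look like, on a toy quadratic; the specific update below (a second-order correction to the follower's step) is an illustrative assumption, not necessarily the paper's exact algorithm:

        # Toy objective f(x, y) = x*y - 0.5*y**2: the best response is
        # y*(x) = x, f(x, y*(x)) = 0.5*x**2, so (0, 0) is the local minimax.
        def grads(x, y):
            return y, x - y                  # df/dx, df/dy

        x, y = 1.0, 1.0
        eta_x, eta_y = 0.05, 0.05
        Hyy, Hyx = -1.0, 1.0                 # second derivatives of f
        for _ in range(2000):
            gx, gy = grads(x, y)
            # Plain GDA, plus a correction on the follower's step that
            # anticipates the leader's move: eta_x * Hyy^{-1} * Hyx * gx.
            x, y = x - eta_x * gx, y + eta_y * gy + eta_x * (Hyx / Hyy) * gx

        print(round(x, 4), round(y, 4))      # converges toward (0, 0)
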
  17. 15 Oct 2019

    In deep learning research, the sky turns out to be blue, but only if you measure it very carefully. Interesting meta-scientific paper on evaluating neural net optimizers, by Choi et al.

  18. Retweeted
    11 Oct 2019

    University of Toronto is hiring broadly in robotics across departments - CS, ME, ECE, and CogSci. If you are planning to be on the robotics academic market, get in touch.

  19. 8 Oct 2019

    Haha, they think they can make statistical techniques sound fancy by emphasizing that physicists used the same mathematical tools. OK now, back to estimating the partition function of this Boltzmann machine...

  20. 6 Oct 2019

    I wonder if all the AI researchers they asked to review this book were unwilling to write articles for Nature.

