Tweets


  1. Retweeted
    Jan 1

    Machine Learning Summer School 28 June - 10 July 2020 at the Max Planck Institute for Intelligent Systems, Tübingen, Germany. Application deadline: 11 Feb 2020. All welcome to apply!

  2. Dec 31, 2019

    Core ML/AI is oversaturated. If I were looking for PhD positions now, I'd look for ML-heavy positions in less populated adjacent fields, e.g. opportunities in the natural sciences. It's often a good idea to work on something not everyone is already working on. Don't be a 🐏, be unique!

  3. Dec 15, 2019

    Very cool talk by David Duvenaud on the stories around and behind Neural ODEs; I wish I had attended that workshop.

  4. Retweeted
    Dec 9, 2019

    Classifiers are secretly energy-based models! Every softmax giving p(c|x) has an unused degree of freedom, which we use to compute the input density p(x). This makes classifiers into generative models without changing the architecture.

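The retweet above compresses the paper's key idea: the softmax p(c|x) is invariant to adding a constant to all logits, and that leftover degree of freedom, the logsumexp of the logits, can be reused as an unnormalized log-density log p(x). A minimal NumPy sketch of just this reparameterization (function names are mine, not from the paper's code):

```python
import numpy as np

def logsumexp(v):
    # Numerically stable log(sum(exp(v))).
    m = v.max()
    return m + np.log(np.exp(v - m).sum())

def class_probs(logits):
    # Softmax p(c|x): unchanged if a constant is added to every logit.
    return np.exp(logits - logsumexp(logits))

def log_density(logits):
    # The "unused degree of freedom": logsumexp of the logits is read
    # as log p(x) up to an (unknown) normalizing constant log Z.
    return logsumexp(logits)
```

Shifting every logit by the same amount leaves p(c|x) untouched but moves log_density by exactly that shift, which is what lets one network act as both classifier and generative model.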
  5. Dec 9, 2019

    "On the Invertibility of Invertible Neural Networks" ML with Guarantees workshop (Saturday, West Ballroom B) with , , and

  6. Dec 9, 2019

    "Preventing Gradient Attenuation in Lipschitz Constrained Convolutional Networks" Poster #149 (Thu 10:45am, East Exh. Hall B+C) by awesome *undergrads* and Saminul Haque w/ @CemAnil1, ,

  7. Dec 9, 2019

    "Residual Flows for Invertible Generative Modeling", Spotlight (Tue 4:40, West Exh. Hall C) and Poster #85 (Tue 5:30, East exh. Hall B+C) presented by work w/ and

  8. Dec 9, 2019

    Looking forward to spending the next days . We will present recent work on Residual Flows, Lipschitz constrained convolutional networks and (non-)invertibility of invertible neural networks:

  9. Retweeted
    Dec 8, 2019

    if y'all r lookin for somethin to read while walking around Vancouver, check out my newest paper: "Your Classifier is Secretly an Energy-Based Model and You Should Treat it Like One" with

  10. Dec 5, 2019

    Fantastic summary of the state of the art and open problems in normalizing flows!

  11. Retweeted
    Nov 28, 2019

    Excited to share our work on Contrastive Learning of Structured World Models! C-SWMs learn object-factorized models & discover objects without supervision, using a simple loss inspired by work on graph embeddings Paper: Code: 1/5

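The "simple loss inspired by work on graph embeddings" in the C-SWM tweet can be sketched as a hinge-based contrastive objective on latent transitions: pull the predicted next latent toward the encoded next observation, and push a randomly sampled negative latent at least a margin away. This is my illustration under those assumptions; the names, the squared distance, and the margin value are not taken from the actual C-SWM code:

```python
import numpy as np

def sq_dist(a, b):
    # Squared Euclidean distance between two latent vectors.
    return float(((a - b) ** 2).sum())

def contrastive_transition_loss(z, z_next, z_neg, delta, margin=1.0):
    # Positive term: latent z moved by the (learned) transition delta
    # should land on the encoding of the actual next observation.
    positive = sq_dist(z + delta, z_next)
    # Negative term: a latent encoded from a random other state should
    # stay at least `margin` away from the encoded next observation.
    negative = max(0.0, margin - sq_dist(z_neg, z_next))
    return positive + negative
```

In the object-factorized setting each object slot would carry its own latent and transition function, but the per-pair loss keeps this shape.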
  12. Retweeted
    Nov 5, 2019

    Previously we introduced fully connected architectures with tight Lipschitz bounds. Now we extended this to conv nets. Good for provable adversarial robustness and Wasserstein distance estimation. Joint work w/ , Saminul Haque, et al.

  13. Retweeted
    Nov 4, 2019

    From David Bau et al.: more evidence that GANs produce seemingly high-quality image samples by omitting hard-to-model objects.

  14. Retweeted
    Oct 17, 2019

    New work on solving minimax optimization locally, with Jimmy Ba. We propose a novel algorithm that converges to, and only to, local minimax points. The main innovation is a correction term on top of gradient descent-ascent. Paper link:

  15. Retweeted
    Oct 14, 2019

    "Understanding the Limitations of Variational Mutual Information Estimators", by Jiaming Song and Stefano Ermon.

  16. Retweeted
    Oct 1, 2019

    We had a "Hamiltonian extravaganza" at . We show how to learn Hamiltonian gen models from pixels and propose a general method for combining symmetry Lie groups with ODE-Flow generative models in

  17. Retweeted
    Sep 1, 2019

    Deep Learning Theory Review: An Optimal Control and Dynamical Systems Perspective. A review paper that aims to shed light on the importance of dynamics and optimal control when developing deep learning theory.

  18. Retweeted
    Aug 26, 2019

    According to free energy theory, the brain exists to predict stimuli and thus minimize surprise. In other words, brains try to make life as boring as possible.

  19. Aug 19, 2019

    Check out the updated paper and code of Residual Flows for invertible generative modeling! Release includes SOTA-level pre-trained models for MNIST/CIFAR10/Imagenet/CelebA-HQ 🔥

  20. Retweeted
    Aug 6, 2019

    and I wrote a piece for called _Adversarial Example Researchers Need to Expand What is Meant by ‘Robustness’_. "As long as models lack robustness to distributional shift, there will always be errors to find adversarially."

