Arsenii (Senya) Ashukha

@senya_ashuha

PhD Candidate 🤓 at and AI Center with Dmitry Vetrov

Moscow, Russia
Joined January 2016

Tweets


  1. Retweeted
    Jan 31

    Very excited to share "Learning Discrete Distributions by Dequantization" () in collaboration with and , from my internship at . We explore different methods and distributions for dequantization and reach 3.06 bpd on CIFAR10.

  2. I tend to agree with , there is huge potential in Bayesian DNNs! And many people are working hard to make this potential real. You may check out The Deep Weight Prior, a generative model prior for Bayesian CNNs.

  3. Retweeted
    Oct 25, 2019

    Check out "Scale-Equivariant Steerable Networks" (). It is joint work with Michał Szmaja and Arnold Smeulders. We build scale-equivariant CNNs which do not use image rescaling and do not limit the admissible scale factors.

  4. Retweeted
    Dec 11, 2019

    Check out our poster #143 on general E(2)-Steerable CNNs tomorrow, Thu 10:45AM. Our work solves for the most general isometry-equivariant convolutional mappings and implements a wide range of related work in a unified framework. With

  5. Retweeted

    This evening and will present their work "The Implicit Metropolis-Hastings Algorithm" 05:00 -- 07:00 PM @ East Exhibition Hall B + C #183

  6. Retweeted

    3. Importance Weighted Hierarchical Variational Inference by 05:30 -- 07:30 PM @ East Exhibition Hall B + C #167

  7. Retweeted

    2. A Simple Baseline for Bayesian Uncertainty in Deep Learning by our alumni in collaboration with group 10:45 AM -- 12:45 PM @ East Exhibition Hall B + C #146

  8. Retweeted

    1. A Prior of a Googol Gaussians: a Tensor Ring Induced Prior for Generative Models by and 10:45 AM -- 12:45 PM @ East Exhibition Hall B + C #119

  9. Retweeted
    Dec 8, 2019

    Our paper w/ received an honorable mention for the outstanding new direction award 🥳! Thursday 15:50, track 2, session 6, hall A, if you'd like to learn more.

  10. Retweeted

    Check out our new paper "Low-variance Black-box Gradient Estimates for the Plackett-Luce Distribution", accepted as an oral to () on how to reduce the variance of gradients when optimizing w.r.t. a distribution over permutations. Paper:

  11. Retweeted
    Nov 12, 2019

    Stoked to share the camera-ready and code base for our NeurIPS oral, see Sindy’s tweet below! We propose a gradient-isolated self-supervised method that outperforms end-to-end (self-) supervised models on STL-10! (w/ and Peter O’Conner)

  12. Retweeted
    Nov 7, 2019

    An update: has been in contact with Canadian immigration officials. They told him that anyone who has been denied a visa to attend can request their case to be reconsidered via this form:   No guarantees, but please pass along!

  13. Retweeted
    Aug 19, 2019

    Thank you @habibian_a and for a wonderful collaboration on video compression with auto-encoders and adaptive entropy coding! More to come 🙂

  14. Retweeted
    Aug 9, 2019

    Just uploaded a paper to arXiv where we show how to make super smooth saliency visualizations on Atari. Check it out!

  15. Retweeted
    Aug 6, 2019

    Check out the call for papers for the Bayesian deep learning workshop at NeurIPS 2019:

  16. Retweeted

    [1/2] New paper! Bayesian inference in low-dimensional subspaces of the parameter space of deep neural nets by and Dmitry Vetrov in collaboration with , Wesley Maddox, and Paper:

  17. Retweeted

    Interested in sparse neural nets? Check out "Bayesian Sparsification of Gated RNNs" by Ekaterina Lobacheva, Nadezhda Chirkova, and Dmitry Vetrov at the Compact Deep Neural Network Representation with Industrial Applications workshop tomorrow (details below)

  18. Retweeted

    There is still time to apply for the postdoc position in Deep RL at (Moscow). The application deadline is June 30.

  19. Retweeted
    Jun 21, 2019

    Excited to share our latest work; it combines neural networks with stochastic processes by positing priors over the relational structure of the dataset, without relying on global latent variables!

  20. Retweeted
    Jun 19, 2019

    Our next paper formulates the Metropolis-Hastings algorithm for empirical target distributions and implicit proposals. Check it out for theoretical analysis, implicit Markov proposals, and new loss functions for DRE. With

