Tweets

You have blocked @andrew_atanov

Are you sure you want to view these Tweets? Viewing Tweets won't unblock @andrew_atanov

  1. Retweeted

    I tend to agree with , there is huge potential in Bayesian DNNs! And many people are working hard to make this potential real. You may check out The Deep Weight Prior, a generative model prior for Bayesian CNNs.

  2. Retweeted

    We've got 4 papers accepted to !

  3. Retweeted
    3 Aug 2019

    New paper out! Applying transfer learning via an implicit prior improves learning on the MR image segmentation problem. Demonstrated on a (brain tumour) dataset. by

  4. 15 Jun 2019

    Come see our work “Semi-Conditional Normalizing Flows for Semi-Supervised Learning” at the INNF workshop in room 103! Joint work with and Dmitry Vetrov

  5. Retweeted
    29 May 2019

    NeuroNuggets: Dmitry Vetrov, or in Review I: a summary of 3 of 's papers by : Variance Networks: When Expectation Does Not Meet Your Expectations; The Deep Weight Prior; Variational Autoencoder with Arbitrary Conditioning

  6. Retweeted

    Our paper "Semi-Conditional Normalizing Flows for Semi-Supervised Learning" has been accepted to the ICML Workshop ()! Co-authored with and Dmitry Vetrov. arXiv:

  7. Retweeted

    We are hiring a postdoc in deep reinforcement learning. Come work with us! Deadline: 31 June 2019. Call for applications:

  8. Retweeted

    Applications are finally open! The school will take place in Moscow, August 20–25. Applications close on April 15, 23:59 Moscow Time (UTC+3)

  9. Retweeted

    Interested in a summer school on Deep Learning and Bayesian Methods? If so, we're excited to announce the next run of our school. Moscow, late August 2019. Applications are not open yet, but leave us your email to stay up to date, and check out this year's run!

  10. Retweeted

    Andrew Atanov gave a talk on using generative models as semi-implicit priors. Paper:

  11. Retweeted

    We released a pre-print on the Deep Weight Prior, a flexible prior distribution for Bayesian CNNs built via generative modeling of convolutional kernels. Thanks to and my awesome co-authors Kirill Struminsky and Dmitry Vetrov

  12. Retweeted

    Check out our new paper "Conditional Generators of Words Definitions" () by , and Dmitry Vetrov on how to generate word definitions in the face of ambiguity by virtue of a usage example. Paper: Code:

  13. Retweeted

    Did you know stochastic neural networks with zero-mean weights and tuneable variances actually work? If that's surprising to you as well, check out this talk by and maybe take a look at the paper

  14. Retweeted

    A talk by about two of our recent papers on the loss landscapes of neural networks and how they can be used to build better ensembles

  15. Retweeted

    Another work by our group. By Kirill Neklyudov, @arsashuha, and Dmitry Vetrov

  16. Retweeted

    After a bit of a delay, the application process is finally open! Make sure to complete the application before 23:59, April 30 (Moscow time). Better start early – the form will take quite some time to fill out.

  17. Retweeted

    New paper from our group: Uncertainty Estimation via Stochastic Batch Normalization by , @arsashuha, , Kirill Neklyudov and Dmitry Vetrov

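The "zero-mean weights with tuneable variances" retweet (item 13) describes variance networks concretely enough to sketch: only the weight variances are learned, and every forward pass draws fresh zero-mean weights, so predictions come from averaging several stochastic passes. A minimal NumPy illustration, not the paper's implementation; the layer sizes and names here are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def variance_linear(x, log_sigma):
    """Linear layer with zero-mean stochastic weights.

    Weights are sampled as w = sigma * eps with eps ~ N(0, 1),
    so E[w] = 0 and only the variances sigma**2 are learnable.
    """
    sigma = np.exp(log_sigma)               # (in_dim, out_dim) std devs
    eps = rng.standard_normal(sigma.shape)  # fresh noise on every call
    w = sigma * eps                         # zero-mean weight sample
    return x @ w

x = rng.standard_normal((4, 8))    # batch of 4 inputs, dimension 8
log_sigma = np.zeros((8, 3))       # learnable log-std parameters

# Average predictions over many weight samples (an implicit ensemble);
# the per-sample spread is what carries the signal.
samples = np.stack([variance_linear(x, log_sigma) for _ in range(100)])
mean_pred = samples.mean(axis=0)   # close to zero in expectation
```

In a real training loop `log_sigma` would be optimized with a variational objective; the point of the sketch is only that a layer with no mean parameters at all still produces usable, averageable outputs.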
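The stochastic batch normalization retweet (item 17) points at a simple recipe for test-time uncertainty: instead of normalizing with fixed running statistics, sample the batch statistics, so repeated forward passes disagree and their spread can be read as an uncertainty signal. A hedged NumPy sketch, not the paper's method; modeling the sampled batch mean as Gaussian with variance `var / batch_size` is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(1)

def stochastic_batchnorm(x, mu, var, batch_size):
    """Normalize x with a *sampled* batch mean.

    Instead of the fixed running mean, draw the mean of a hypothetical
    mini-batch: mean_hat ~ N(mu, var / batch_size). Repeated calls then
    give different outputs.
    """
    mean_hat = rng.normal(mu, np.sqrt(var / batch_size))
    return (x - mean_hat) / np.sqrt(var + 1e-5)

mu, var = 0.0, 1.0
x = np.array([0.5, -1.0, 2.0])

# Run many stochastic passes; the per-element spread is the uncertainty.
outs = np.stack([stochastic_batchnorm(x, mu, var, batch_size=32)
                 for _ in range(200)])
uncertainty = outs.std(axis=0)
```

Here the spread is the same for every input element because only a scalar mean is resampled; with per-channel statistics and sampled variances the spread would become input-dependent.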

