Andrew Gordon Wilson

@andrewgwils

Machine Learning Professor

New York University
Joined September 2014

Tweets


  1. Pinned Tweet
    Oct 3, 2018

    I'm extremely excited to officially announce our new library, GPyTorch, which has just gone beta! Scalable Gaussian processes in PyTorch, with strong GPU acceleration. repo: website:

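    A minimal sketch of the standard GPyTorch exact-regression workflow (my own illustration; the data and hyperparameters are placeholders, not from the tweet):

    ```python
    import torch
    import gpytorch

    class ExactGPModel(gpytorch.models.ExactGP):
        def __init__(self, train_x, train_y, likelihood):
            super().__init__(train_x, train_y, likelihood)
            self.mean_module = gpytorch.means.ConstantMean()
            self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())

        def forward(self, x):
            # GP prior: constant mean plus RBF covariance
            return gpytorch.distributions.MultivariateNormal(
                self.mean_module(x), self.covar_module(x))

    train_x = torch.linspace(0, 1, 100)
    train_y = torch.sin(train_x * 6.28) + 0.1 * torch.randn(100)

    likelihood = gpytorch.likelihoods.GaussianLikelihood()
    model = ExactGPModel(train_x, train_y, likelihood)

    # Fit hyperparameters by maximizing the exact marginal likelihood
    model.train(); likelihood.train()
    optimizer = torch.optim.Adam(model.parameters(), lr=0.1)
    mll = gpytorch.mlls.ExactMarginalLogLikelihood(likelihood, model)
    for _ in range(50):
        optimizer.zero_grad()
        loss = -mll(model(train_x), train_y)
        loss.backward()
        optimizer.step()

    # Posterior predictions (for GPU acceleration, move model and tensors to CUDA)
    model.eval(); likelihood.eval()
    with torch.no_grad():
        preds = likelihood(model(torch.linspace(0, 1, 51)))
    ```
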
  2. Retweeted
    Jan 19

    "the prior [...] will certainly be imperfect [...]. Attempting to avoid an important part of the modelling process because one has to make assumptions, however, will often be a worse alternative than an imperfect assumption." Thank you for this great blog post!

  3. Jan 18

    I will also note that my original response was actually not intended to be specific to Carles, but rather to address general remarks and confusions I had recently seen about BDL.

  4. Jan 18

    Separately from the technical discussion, I suggest tending towards asking questions and learning, and being open minded about understanding BDL. Perhaps your prior is too strong! :-)

  5. Jan 18

    (5) A lack of desirable posterior collapse can happen when (i) the hypothesis space does not contain a good solution, or (ii) the prior is too confident about a bad solution (e.g., equal label probabilities for any input x). But NNs are expressive, and (ii) is the opposite of a vague prior on weights!

  6. Jan 18

    (4) In fact, you could easily create various "generalization agnostic" priors in function space. They would behave very differently from BNNs. They would have trivial structure and indeed would not generalize.

  7. Jan 18

    (3) The volume of good solutions exceeds the volume of bad solutions for typical problems NNs are applied to. Neural nets were constructed to have these inductive biases to help generalization. It is wild to suggest that a NN function is "generalization agnostic".

  8. Jan 18

    (2) A model being able to fit noise is unremarkable, and is different from having an inductive bias that favours noisy solutions. A standard GP-RBF prior over functions easily supports noise, but it favours more structured solutions.

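    A quick way to see this (my own illustration, not from the thread): draws from a GP prior with an RBF kernel are smooth, structured functions; noise enters only through the observation likelihood, so the prior supports noisy data without favouring noisy functions.

    ```python
    import numpy as np

    x = np.linspace(0, 1, 200)
    lengthscale = 0.2
    # RBF (squared exponential) kernel matrix
    K = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / lengthscale ** 2)

    # Five draws from the prior over functions: all smooth, none noise-like
    rng = np.random.default_rng(0)
    f = rng.multivariate_normal(np.zeros(len(x)), K + 1e-8 * np.eye(len(x)), size=5)

    # Observations add likelihood noise on top of a smooth f, e.g. y = f + 0.1 * eps
    ```
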
  9. Jan 18

    There are errors in this post. (1) The likelihood will collapse onto the "good function" as we increase the data size if the data are from the distribution we want to fit, as increasingly fewer bad functions will be consistent with our observations.

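    A toy analogue of this point (my own illustration): with a vague Beta(1, 1) prior on a coin's bias, the posterior contracts onto the data-generating parameter as the sample grows, because fewer and fewer bad hypotheses remain consistent with the observations.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    true_p = 0.7
    a, b = 1.0, 1.0  # vague Beta(1, 1) prior
    for n in [10, 100, 10_000]:
        heads = int((rng.random(n) < true_p).sum())
        post_a, post_b = a + heads, b + n - heads       # conjugate Beta posterior
        mean = post_a / (post_a + post_b)
        sd = (post_a * post_b / ((post_a + post_b) ** 2 * (post_a + post_b + 1))) ** 0.5
        print(n, round(mean, 3), round(sd, 4))          # mean -> 0.7, sd -> 0
    ```
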
  10. Retweeted
    Jan 17

    Shares in the Cornell community greatly appreciated! If the course doesn't get 3 more students enrolled, it will be canceled, and I would love to see Erik make this awesome class happen!

  11. Jan 11

    In general, I have found the short notes of Minka, MacKay, and Neal to be a great resource. Amongst Minka's notes, a personal favourite that I describe in my note is "Bayesian model averaging is not model combination":

  12. Jan 11

    I hope that this note will be helpful for understanding the benefits of modern Bayesian deep learning, and the connections between BDL and approaches like deep ensembles.

  13. Jan 11

    After posting the twitter thread, I was urged to collect and develop my remarks into a self-contained reference. A PDF version of "The Case for BDL" is also available at

  14. Jan 11
  15. Retweeted
    Jan 7

    I'm excited to share that I have joined Imperial College London as a lecturer (asst prof)! I'm convinced it will be a great environment to continue working on GPs, Bayesian Deep Learning, and model-based RL. Do get in touch if you're interested in joining to do a PhD!

  16. Jan 5

    FlowGMM has broad applicability. We consider text, tabular, and image data. FlowGMM can also discover interpretable structure, provide real-time optimization-free feature visualizations, and specify well-calibrated predictive distributions. 8/8

  17. Jan 5

    FlowGMM models the latent space as a Gaussian mixture, where each mixture component is associated with a class label. This approach specifies an exact joint likelihood over both labelled and unlabelled data for end-to-end training. 7/8

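    A sketch of the joint likelihood described here (my own illustration, not the official FlowGMM code; the toy elementwise affine flow stands in for a real invertible network):

    ```python
    import math
    import torch
    import torch.nn as nn

    class ToyAffineFlow(nn.Module):
        """Placeholder invertible map z = x * exp(s) + t with log|det J| = sum(s)."""
        def __init__(self, dim):
            super().__init__()
            self.log_scale = nn.Parameter(torch.zeros(dim))
            self.shift = nn.Parameter(torch.zeros(dim))

        def forward(self, x):
            z = x * torch.exp(self.log_scale) + self.shift
            return z, self.log_scale.sum().expand(x.shape[0])

    def flowgmm_log_likelihood(flow, means, log_sigma, x, y=None):
        """log p(x, y) for labelled data; log p(x), marginalized over y, if y is None."""
        z, log_det = flow(x)                                   # change of variables
        d = z.shape[1]
        diff = z.unsqueeze(1) - means.unsqueeze(0)             # (batch, K, d)
        # log N(z; mu_k, sigma^2 I) for each mixture component k
        log_probs = (-0.5 * (diff ** 2).sum(-1) / math.exp(2 * log_sigma)
                     - d * log_sigma - 0.5 * d * math.log(2 * math.pi))
        if y is not None:                                      # labelled: pick class y
            latent = log_probs.gather(1, y.unsqueeze(1)).squeeze(1)
        else:                                                  # unlabelled: sum over classes
            latent = torch.logsumexp(log_probs, dim=1) - math.log(means.shape[0])
        return latent + log_det

    # Training maximizes labelled log p(x, y) plus unlabelled log p(x), end to end
    flow, means, log_sigma = ToyAffineFlow(2), torch.randn(3, 2), 0.0
    x_lab, y_lab, x_unl = torch.randn(8, 2), torch.randint(0, 3, (8,)), torch.randn(16, 2)
    loss = -(flowgmm_log_likelihood(flow, means, log_sigma, x_lab, y_lab).mean()
             + flowgmm_log_likelihood(flow, means, log_sigma, x_unl).mean())
    ```
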
  18. Jan 5

    Normalizing flows provide a pleasingly simple approach to generative modelling. By transforming a latent distribution through an invertible network, we have both an exact likelihood for the data, and useful inductive biases from a convolutional neural network. 6/8

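    For reference, the exact likelihood comes from the change-of-variables formula: for an invertible network f with latent density p_z, log p(x) = log p_z(f(x)) + log |det(df(x)/dx)|, which is what both terms in the FlowGMM sketch above compute.
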
  19. Jan 5

    For example, a Gaussian mixture directly over images, while highly flexible for density estimation, would specify similarities between images as related to Euclidean distances between pixel intensities, which is a poor inductive bias for translation and other invariances. 5/8

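    A concrete version of this point (my own illustration): under pixelwise Euclidean distance, a one-pixel translation of an image can look farther away than a visibly noise-corrupted copy.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    img = rng.random((32, 32))                         # stand-in for an image
    shifted = np.roll(img, shift=1, axis=1)            # same content, moved one pixel
    noisy = img + 0.1 * rng.standard_normal((32, 32))  # same layout, corrupted

    print(np.linalg.norm(img - shifted))  # large: translation reads as "different"
    print(np.linalg.norm(img - noisy))    # small: noise reads as "similar"
    ```
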
  20. Jan 5

    Generative models are compelling because we are trying to create an object of interest. The challenge in generative modelling is that standard approaches to density estimation are poor descriptions of high-dimensional natural signals. 4/8

  21. Jan 5

    Nearly all classifiers are discriminative. Even approaches that use a generator typically involve a discriminator in the pipeline. For example, sometimes one learns a generator on unlabelled data, then recycles the representation as part of a discriminative classifier. 3/8

