Carles Gelada

@carlesgelada

Research Scientist working on RL. High school dropout, self-taught, ex Google Brain Resident.

Montreal, Canada
Joined March 2016

Tweets

  1. Pinned Tweet
    Jun 6, 2019

    I am glad to introduce the DeepMDP! in collaboration with Saurabh Kumar, , , . We did the theory on how to learn latent space models, and it works!

  2. Retweeted

    The field is already self-correcting. Good departments/labs are clearing their eyes, caring less about paper count, seeing through the noise. Don't worry so much about the ICML deadline. Slow down, relax, try to do work you're proud of, submit when it's ready.

  3. Retweeted
    Jan 22

    The whole thread on BNNs and blog post by and reminded me of the "First, you rob a bank..." characterization by Yasser Abu Mostafa. Apologies to my Bayesian friends who may find it unfair.

  4. Jan 22

    Thanks to everyone who asked questions and engaged in proper scientific discourse. It has allowed us to better understand the ideas ourselves. And thanks to those who respectfully pointed out issues with the tone of the first version. In particular and .

  5. Jan 22

    We've updated the last blog post with . The explanations should be much clearer and the language less incendiary. The main point is not to attack BNNs, but to think critically about them, especially about the role their priors play.

  6. Retweeted
    Jan 20
    Replying to

    Anyways... Neal himself has already answered the question. In the aforementioned NN FAQ s3.

  7. Retweeted
    Jan 20
    Replying to

    Sometimes it's to put down younger researchers who have the temerity to ask difficult questions.

  8. Retweeted
    Jan 20
    Replying to

    If we still know so little about neural networks that this blog post is at all relevant in 2050, we have failed as a field.

  9. Retweeted
    Jan 20

    It's frustrating when people refuse to have idea-level discussions on the grounds that "since you missed this 1 reference, you aren't worth talking to." Feels very patronizing and anti-good-discourse.

  10. Retweeted
    Jan 20

    A crude metaphor: a smartphone with a battery in it is very useful for navigation; just a battery is not. We know NN + SGD is a useful prior. But maybe the NN arch alone is just the battery. Random init on a NN is like nav by spinning the battery and following where it points.

  11. Retweeted
    Jan 20

    ran an experiment testing the hypothesis from our blog that the Gaussian priors of BNNs are generalization-agnostic. The experiment is a proxy for the real thing, but it indicates we were wrong. A small difference in logprob means a huge probability ratio between good- and bad-generalizing solutions.

  12. Retweeted
    Jan 20
    Replying to

    I'm just going to test it right now. Simple experiment: SVHN, train a model to convergence on the train set, measure the logprob of the weights under the prior. Then concatenate the test set with random labels, train again, and measure the logprob of the weights again. Hypothesis: prior logprob ~same. (A code sketch of this protocol appears after the tweet list.)

  13. Retweeted
    Jan 19
    Replying to

    Thanks ... we started off on the wrong foot but got there in the end. With thanks to and for gentle cajoling. And of course to for considered reflection.

  14. Jan 19

    Yes, it's probably the most interesting conversation that spawned out of the blog post.

  15. Retweeted
    Jan 18

    Back-to-back interesting blog posts - "...when a Bayesian tells you that BNNs provide good uncertainty estimates... We should ask, “what evidence are you providing that your priors are any good?”" New blog post by +

  16. Retweeted

    This makes a lot of sense. Bayesian methods are computationally expensive so there needs to be a clear advantage for using them. Without good priors we don't expect good generalization. Nice work!

  17. Retweeted
    Jan 18
    Replying to

    Bayesian language is so obfuscating. If I said my first "guess" doesn't matter, or that my "hypothesis" doesn't matter, it would sound absurd, but call it a "prior" and people start nodding along ...

  18. Retweeted
    Jan 18

    New blog post with -- "A Sober Look at Bayesian Neural Networks": Without a good prior, Bayesian uncertainties are meaningless. We argue that BNN priors are likely quite poor, and concretely characterize one specific failure mode.

  19. Jan 18

    We expand on the arguments I made on my original thread and respond to the recent blog by

  20. Jan 18

    Good uncertainties are profoundly connected to generalization. If the prior used in BNNs isn't, the uncertainties will be useless. and I provide a mathematical argument for that, and we even call into question whether the B in BNN is doing much.

  21. Jan 16

    Reviewer 2: The authors provided a substantive improvement on the resolution of the meme but failed to cite previous work. 3/10 Reject.

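A minimal code sketch of the prior-logprob experiment described in tweet 12 above. It assumes PyTorch, an isotropic Gaussian prior N(0, sigma^2 I), and a hypothetical train() helper that fits a model to convergence; it illustrates the protocol rather than reproducing the experimenter's actual code.

```python
# Sketch of the protocol from tweet 12 (assumptions, not the original code):
# PyTorch, an isotropic Gaussian prior N(0, sigma^2 I), and a hypothetical
# train(model, dataset) helper that fits the model to convergence.
import torch


def prior_log_prob(model, sigma=1.0):
    """Log-probability of the model's flattened weights under N(0, sigma^2 I)."""
    weights = torch.cat([p.detach().flatten() for p in model.parameters()])
    prior = torch.distributions.Normal(loc=0.0, scale=sigma)
    return prior.log_prob(weights).sum().item()


# Hypothetical usage, following the tweet's protocol:
#   model_clean = train(make_model(), svhn_train)                  # true labels only
#   model_mixed = train(make_model(), svhn_train + svhn_test_random_labels)
#   print(prior_log_prob(model_clean), prior_log_prob(model_mixed))
# If the two log-probs come out roughly equal, the prior assigns similar mass to a
# well-generalizing solution and a memorizing one, which is the hypothesis under test.
```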
