Tweets
Pinned Tweet
I am glad to introduce the DeepMDP! In collaboration with Saurabh Kumar,
@jacobmbuckman, @ofirnachum, @marcgbellemare. We did the theory on how to learn latent space models, and it works! https://arxiv.org/abs/1906.02736 pic.twitter.com/qhJH1EIKEu
Carles Gelada Retweeted
The field is already self-correcting. Good departments/labs are clearing their eyes, caring less about paper count, seeing through the noise. Don't worry so much about the ICML deadline. Slow down, relax, try to do work you're proud of, submit when it's ready.
Carles Gelada Retweeted
The whole thread on BNNs and blog post by
@jacobmbuckman and @carlesgelada reminded me of the "First, you rob a bank..." characterization by Yasser Abu Mostafa https://www.youtube.com/watch?v=ihLwJPHkMRY&t=2440 Apologies to my Bayesian friends who may find it unfair.
Thanks to everyone who asked questions and engaged in proper scientific discourse. It has allowed us to better understand the ideas ourselves. And thanks to those who respectfully pointed out issues with the tone of the first version. In particular
@tdietterich and @dustinvtran.
We've updated the last blog with
@jacobmbuckman. The explanations should be much clearer and the language less incendiary. The main point is not to attack BNNs, but to think critically about them, especially about the role their priors play. https://jacobbuckman.com/2020-01-22-bayesian-neural-networks-need-not-concentrate/
Carles Gelada Retweeted
Anyways... Neal himself has already answered the question. In the aforementioned NN FAQ s3. http://www.faqs.org/faqs/ai-faq/neural-nets/part3/section-7.html pic.twitter.com/RwXoVzZqxL
Carles Gelada Retweeted
Sometimes it's to put down younger researchers who have the temerity to ask difficult questions.
Carles Gelada Retweeted
If we still know so little about neural networks that this blog post is at all relevant in 2050, we have failed as a field.
Carles Gelada Retweeted
It's frustrating when people refuse to have idea-level discussions on the grounds that "since you missed this 1 reference, you aren't worth talking to." Feels very patronizing and anti-good-discourse. https://twitter.com/carlesgelada/status/1219386058972049415
Carles Gelada Retweeted
A crude metaphor: a smartphone with a battery in it is very useful for navigation; just a battery is not. We know NN + SGD is a useful prior. But maybe the NN arch alone is just the battery. Random init on a NN is like nav by spinning the battery and following where it points.
Carles Gelada Retweeted
@jacobmbuckman ran an experiment testing the hypothesis from our blog that the Gaussian priors of BNNs are generalization-agnostic. The experiment is a proxy for the real thing, but it indicates we were wrong: a small difference in log-prob means a huge probability ratio between good- and bad-generalizing solutions. https://twitter.com/jacobmbuckman/status/1219141178043531268
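To make the "small difference in log-prob, huge probability ratio" point concrete, here is a minimal worked example; the 10-nat gap is an assumed, illustrative number, not a figure from the experiment.

import math

# Assumed gap of 10 nats in log prior density between two weight vectors.
delta_logprob = 10.0
ratio = math.exp(delta_logprob)  # density ratio implied by that gap
print(round(ratio))  # ~22026: one solution is ~22,000x more likely under the prior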
Carles Gelada Retweeted
I'm just going to test it right now. Simple experiment: SVHN, train a model to convergence on train set, measure logprob of weights under prior. Then, concatenate the test set with random labels, train again, measure logprob of weights again. Hypothesis: prior logprob ~same.
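Below is a minimal sketch of the experiment described above, under assumptions the tweet does not state: PyTorch/torchvision for SVHN, a small MLP standing in for "a model", and an isotropic Gaussian N(0, 1) as the prior over the weights. It is an illustration of the procedure, not the authors' actual code.

import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import ConcatDataset, DataLoader
from torchvision import datasets, transforms

def make_model():
    # Small MLP on flattened 32x32x3 SVHN images (assumed stand-in for "a model").
    return nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 256), nn.ReLU(), nn.Linear(256, 10))

def prior_logprob(model, sigma=1.0):
    # Log density of the weights under an isotropic Gaussian prior, up to an additive constant.
    return sum(-0.5 * (p.detach() ** 2).sum().item() / sigma ** 2 for p in model.parameters())

def train(model, loader, epochs=5, lr=1e-3):
    # "Train to convergence" is approximated here by a fixed number of epochs.
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            F.cross_entropy(model(x), y).backward()
            opt.step()
    return model

tfm = transforms.ToTensor()
train_set = datasets.SVHN("data", split="train", download=True, transform=tfm)
test_set = datasets.SVHN("data", split="test", download=True, transform=tfm)

# 1) Train on the real training labels, record the prior log-prob of the learned weights.
lp_real = prior_logprob(train(make_model(), DataLoader(train_set, batch_size=256, shuffle=True)))

# 2) Randomize the test-set labels, concatenate with the train set, train again, measure again.
test_set.labels = torch.randint(0, 10, (len(test_set),)).numpy()
mixed_loader = DataLoader(ConcatDataset([train_set, test_set]), batch_size=256, shuffle=True)
lp_random = prior_logprob(train(make_model(), mixed_loader))

# Hypothesis in the tweet: the two prior log-probs come out roughly the same.
print(lp_real, lp_random)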
Carles Gelada Retweeted
Thanks
@sarahookr ... we started off on the wrong foot but got there in the end. With thanks to @maosbot and @dustinvtran for gentle cajoling. And of course to @carlesgelada for considered reflection.
Yes, it's probably the most interesting conversation that spawned out of the blog post. https://twitter.com/sarahookr/status/1219002633836318720
Carles Gelada Retweeted
Back-to-back interesting blog posts: "...when a Bayesian tells you that BNNs provide good uncertainty estimates... We should ask, 'what evidence are you providing that your priors are any good?'" New blog post by
@carlesgelada + @jacobmbuckman https://bit.ly/3ap71ny
Carles Gelada Retweeted
This makes a lot of sense. Bayesian methods are computationally expensive so there needs to be a clear advantage for using them. Without good priors we don't expect good generalization. Nice work! https://twitter.com/carlesgelada/status/1218395853007790080
Carles Gelada Retweeted
Bayesian language is so obfuscating. If I said my first "guess" doesn't matter, or that my "hypothesis" doesn't matter, it would sound absurd, but call it a "prior" and people start nodding along ...
Carles Gelada Retweeted
New blog post with
@carlesgelada -- "A Sober Look at Bayesian Neural Networks": https://jacobbuckman.com/2020-01-17-a-sober-look-at-bayesian-neural-networks/ Without a good prior, Bayesian uncertainties are meaningless. We argue that BNN priors are likely quite poor, and concretely characterize one specific failure mode.
We expand on the arguments I made on my original thread and respond to the recent blog by
@andrewgwils: https://twitter.com/andrewgwils/status/1216070929484341248
Good uncertainties are profoundly connected to generalization. If the prior used in a BNN isn't, its uncertainties will be useless.
@jacobmbuckman and I provide a mathematical argument for that, and we even call into question whether the B in BNN is doing much. https://jacobbuckman.com/2020-01-17-a-sober-look-at-bayesian-neural-networks/
Reviewer 2: The authors provided a substantive improvement on the resolution of the meme but failed to cite previous work. 3/10 Reject. https://twitter.com/tetraduzione/status/1217092211369660417 pic.twitter.com/jUGf1PQEJ0