Sure, many regularizations have a Bayesian interpretation… but everyone and their mama has an interpretation of what regularizations do. The question is: what did we gain from it being Bayesian? 2/4
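(For concreteness, the canonical instance of the interpretation the thread takes as given: MAP estimation under a zero-mean Gaussian prior on the weights is exactly L2 regularization / weight decay.)

```latex
% MAP under w ~ N(0, sigma^2 I) reduces to L2-regularized fitting:
\hat{w}_{\mathrm{MAP}}
  = \arg\max_w \,\bigl[\log p(\mathcal{D}\mid w) + \log p(w)\bigr]
  = \arg\min_w \,\Bigl[-\log p(\mathcal{D}\mid w) + \tfrac{1}{2\sigma^2}\,\lVert w\rVert_2^2\Bigr]
% (constant terms of the Gaussian log-density dropped);
% the usual weight-decay coefficient is lambda = 1/(2 sigma^2).
```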
You could say that BNNs allow us to empirically find regularizations which can’t be implemented as anything other than Bayesian priors. Yes, but what is the reason to believe that this space of regularizations is more interesting than any other? Others are easier to work with. 3/4
Another way BNNs could be used is as a Bayesian meta-learning framework, where we meta-learn a prior so that learning a new task’s weights is fast. Potentially useful, but again, there is no reason to believe this will be a better meta-learning framework than any other. 4/4
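(A minimal sketch of what “meta-learning a prior” could mean operationally, under the simplest possible assumption: a hierarchical Gaussian prior whose mean is meta-learned, so that adapting to a new task is MAP estimation with an L2 pull toward that mean. The function names and the crude averaging outer update are illustrative assumptions, not any specific published method.)

```python
import numpy as np

def adapt_to_task(X, y, prior_mean, lam=1.0, lr=0.1, steps=100):
    """Fit linear-regression weights for a new task, regularized toward a
    meta-learned Gaussian prior mean (MAP under w ~ N(prior_mean, I/lam))."""
    w = prior_mean.copy()
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y) + lam * (w - prior_mean)
        w -= lr * grad
    return w

def meta_learn_prior(tasks, lam=1.0, outer_steps=50, outer_lr=0.5):
    """Meta-learn the prior mean: repeatedly adapt to each task, then move
    the prior mean toward the average adapted solution (a crude
    empirical-Bayes-style outer loop). `tasks` is a list of (X, y) pairs."""
    dim = tasks[0][0].shape[1]
    mu = np.zeros(dim)
    for _ in range(outer_steps):
        adapted = [adapt_to_task(X, y, mu, lam) for X, y in tasks]
        mu += outer_lr * (np.mean(adapted, axis=0) - mu)
    return mu
```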
End of conversation

New conversation
Most Bayesian NNs are interesting only insofar as they give you a way to get calibrated probabilistic outputs. In practice, model ensembles (which, as you pointed out, can be given a Bayesian interpretation) seem to work much better.
We compared some common Bayesian methods and ensembles in our recent paper, Deep Ensembles: A Loss Landscape Perspective (https://arxiv.org/abs/1912.02757), in terms of both accuracy and calibration scores.
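(For reference, the deep-ensemble recipe being compared here is very simple; a minimal sketch, where `train_fn` and `predict_fn` are placeholders standing in for your own training and forward-pass code:)

```python
import numpy as np

def deep_ensemble_predict(train_fn, predict_fn, X_train, y_train, X_test,
                          n_members=5):
    """Deep ensemble: train n_members networks from independent random
    initializations, then average their predictive distributions.
    train_fn(X, y, seed) -> model and predict_fn(model, X) -> class
    probabilities are placeholders for your own model code."""
    probs = []
    for seed in range(n_members):
        model = train_fn(X_train, y_train, seed=seed)  # independent init per seed
        probs.append(predict_fn(model, X_test))        # shape (n_test, n_classes)
    mean_probs = np.mean(probs, axis=0)                # averaged predictive distribution
    # Predictive entropy as a simple per-example uncertainty score.
    entropy = -np.sum(mean_probs * np.log(mean_probs + 1e-12), axis=1)
    return mean_probs, entropy
```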
1 more reply
New conversation
This Tweet is unavailable.
Do Bayesian neural networks give you better control over the expected risk of your classifier? That seems to be a crucial point.
3 more replies
By being Bayesian you can perform more efficient exploration in model-free RL, improving over ensemble-based methods: https://towardsdatascience.com/successor-uncertainties-b498097827fb …
In general, exploration in RL is based on the use of confidence intervals (over the MDP parameters or over the space of value functions). You can get CIs without the need for a prior, but if you have one, Bayesian methods for exploration make sense.
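(A toy illustration of prior-based exploration, a Bernoulli bandit rather than full model-free RL, so treat it only as a sketch of the principle: Thompson sampling keeps a Beta posterior per arm and explores by acting greedily on a posterior sample.)

```python
import numpy as np

def thompson_sampling(true_means, n_rounds=1000, seed=0):
    """Bernoulli bandit with a Beta(1, 1) prior per arm: sample a mean from
    each arm's posterior, pull the argmax, update the posterior counts.
    Exploration falls out of posterior uncertainty; no explicit CI needed."""
    rng = np.random.default_rng(seed)
    k = len(true_means)
    successes, failures = np.ones(k), np.ones(k)  # Beta(1, 1) priors
    total_reward = 0
    for _ in range(n_rounds):
        theta = rng.beta(successes, failures)     # one posterior sample per arm
        arm = int(np.argmax(theta))
        reward = rng.random() < true_means[arm]
        successes[arm] += reward
        failures[arm] += 1 - reward
        total_reward += reward
    return total_reward, successes / (successes + failures)

# e.g. thompson_sampling([0.2, 0.5, 0.7])
```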
End of conversation

New conversation
How else do you assess the uncertainty in the output of a NN? By smearing the parameters. By how much? Bayes’ theorem lets you calculate a prior. There is always a prior, even if one doesn’t believe in the effectiveness of Bayes’ theorem (which is, by definition, a prior).
There are other ways to get uncertainty estimates in deep learning. Ensembles, for example, are well known to perform much better than BNNs for this purpose. Also, Bayes’ rule to calculate a prior?? Could you explain what you mean by that?
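(For what “smearing the parameters” might look like operationally, a minimal sketch assuming a diagonal Gaussian around the trained weights; `posterior_std` would have to come from some approximate-inference method, and `predict_fn` is a placeholder for the network’s forward pass.)

```python
import numpy as np

def smeared_predictions(predict_fn, w_hat, posterior_std, X,
                        n_samples=50, seed=0):
    """Monte Carlo 'smearing': perturb the trained weights w_hat with
    Gaussian noise of scale posterior_std, run the forward pass for each
    sample, and report the mean prediction and its spread."""
    rng = np.random.default_rng(seed)
    outputs = []
    for _ in range(n_samples):
        w = w_hat + posterior_std * rng.standard_normal(w_hat.shape)
        outputs.append(predict_fn(w, X))
    outputs = np.stack(outputs)                       # (n_samples, n_test, ...)
    return outputs.mean(axis=0), outputs.std(axis=0)  # predictive mean and spread
```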
5 more replies