2. Can we estimate log-likelihoods via sampling? The answer is yes, but we need to do it correctly. Inverse binomial sampling (IBS) is a technique to estimate log-likelihood via sampling in an *efficient* and *unbiased* way. Log-likelihoods for likelihood-free models!
3. How does it work? For each data point (e.g., trial), sample from the model until the simulated response matches the observation. The IBS estimator maps the # of samples to a log-likelihood estimate. This differs from taking a fixed # of samples, which we show can be a terrible idea.
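As a minimal sketch of the idea above: draw samples until the first match at draw K, then return minus the (K−1)-th harmonic number. The function and argument names here (`ibs_loglik_trial`, `simulate_once`) are illustrative, not from the paper's code.

```python
def ibs_loglik_trial(simulate_once, observed):
    """Unbiased IBS estimate of log p(observed) for a single trial.

    `simulate_once` is a hypothetical zero-argument callable that draws
    one simulated response from the model for this trial's stimulus.
    Assumes the observed response has nonzero probability under the
    model; otherwise the loop would never terminate.
    """
    k = 1                           # number of samples drawn so far
    while simulate_once() != observed:
        k += 1
    # First match on sample K => estimate is -(1 + 1/2 + ... + 1/(K-1)),
    # which is an unbiased estimate of log p.
    return -sum(1.0 / j for j in range(1, k))
```

Averaging this estimator over many independent runs recovers the true log-probability, which is the unbiasedness property the thread refers to.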
4. The IBS estimator is simple to implement and has many desirable properties: it is unbiased; it allocates samples efficiently across trials; it has low variance; and it provides calibrated estimates of that variance, so we can reach a desired level of precision.
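To illustrate the variance-calibration point, here is a hedged sketch that averages several independent IBS runs per trial and reports an estimate of the variance of that average. The per-run variance estimate used here, the squared-harmonic sum Σ 1/k², mirrors the harmonic-sum form of the estimator but is an assumption of this sketch; see the paper for the exact calibrated estimator. Names (`ibs_with_variance`, `num_reps`) are illustrative.

```python
def ibs_with_variance(simulate_once, observed, num_reps=10):
    """Average `num_reps` independent IBS runs for one trial.

    Returns (estimate, estimated variance of that estimate). Each run
    draws samples until the first match at draw K and contributes
    -(harmonic sum up to K-1) to the estimate; its variance is
    approximated here by sum_{k=1}^{K-1} 1/k^2 (an assumption of this
    sketch). Averaging R independent runs divides the variance by R.
    """
    ests, variances = [], []
    for _ in range(num_reps):
        k = 1
        while simulate_once() != observed:
            k += 1
        ests.append(-sum(1.0 / j for j in range(1, k)))
        variances.append(sum(1.0 / j ** 2 for j in range(1, k)))
    r = num_reps
    # Var(mean of R independent runs) = (sum of per-run variances) / R^2.
    return sum(ests) / r, sum(variances) / r ** 2
```

In practice one would keep adding repeats until the reported variance drops below a target precision, which is the "desired level of precision" mentioned above.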
5. We demonstrate IBS for maximum-likelihood estimation with several computational models of increasing complexity. IBS produces noisy estimates, but combines well with surrogate-based, gradient-free optimizers such as BADS (https://github.com/lacerbi/bads).
6. Limitations? Technically, IBS works for *discrete* observations (but it can be extended to the continuous case). Also, you need to be able to sample observations for a given set of stimuli ("conditional simulation"). See the paper for details!
7. We would love to hear your comments and questions about the paper and method! Preprint here: https://arxiv.org/abs/2001.03985 and code (for now only MATLAB, but we'll expand): https://github.com/lacerbi/ibs