Perhaps of interest @bradpwyble @JCSkewesDK @o_guest @zerdeve @rolandVM @RemiGau @wgervais @Alex_Danvers? Comments welcome. https://twitter.com/Psychonomic_Soc/status/1086161758509780992
-
Sigh. Yes.

-
Literally every modeller has these stories.
-
Here's one: Published paper with model + predictions. Submitted followup paper: test predictions w/ 4 EEG exps + revise model to bring it into alignment with the new data. Review: Why don't you test the new model to make sure it's correct? pic.twitter.com/Qx0jbonO1b
-
This reflects a continuing obsession with seeking “truth”. In your example we already have two rounds of learning: we learn something when the model misses the data, and we also learn from how we have to modify the model to handle those data.
-
Agreed. It’s about extracting information such that we learn more & more over time. We may aim for truth in the long run, but cannot in any way guarantee it in the short run. Moreover, enforcing ‘truth’ in the short run IMO limits us to uncovering nothing more than simple/shallow truths.
-
Yep, that's it exactly. Moreover, the idea that I could "verify" a complex model by running an experiment or two is incredible. There's no way to tick a box that says the model is right.
