If replication is not an issue, then neither is preregistration. I imagine this is the case in pure modeling studies: peer review presumably focuses on modeling constraints, validity, and parsimony rather than on the reliability of results. Open code is still important for robustness, though.
-
-
Because, when used correctly, prereg prevents, or at least reduces, analytical choices that are data-dependent. You can't just run multiple analyses until you find one "that works". I don't think prereg prevents every cause of overfitting, but it makes it a lot less likely.
-
That's not really enough for modeling, though; it's too underspecified a procedure. We need formal ways of doing it for a model, which is why we do this: https://en.m.wikipedia.org/wiki/Training,_validation,_and_test_sets
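A minimal sketch of that three-way split, assuming scikit-learn's `train_test_split` (the linked article describes the protocol in general terms; the library choice, placeholder data, and proportions here are illustrative):

```python
import numpy as np
from sklearn.model_selection import train_test_split

X, y = np.random.rand(100, 5), np.random.rand(100)  # placeholder data

# Hold out 20% as the final test set, untouched until the very end.
X_tmp, X_test, y_tmp, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# Split the remainder into training (75%) and validation (25%) sets.
X_train, X_val, y_train, y_val = train_test_split(
    X_tmp, y_tmp, test_size=0.25, random_state=0)

# Fit on X_train, tune model choices on X_val, report once on X_test.
```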
-
Formal systems need formal evaluations.
-
To be clear: every data point gets a chance to appear in each of the three sets.
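A minimal sketch of what that rotation could look like, assuming three folds that trade roles across rotations; this is an illustration, not necessarily the poster's exact procedure:

```python
import numpy as np

data = np.arange(12)  # placeholder observations
# Shuffle once, then split into three disjoint folds.
folds = np.array_split(np.random.permutation(data), 3)

# Over three rotations, each fold serves once as training,
# once as validation, and once as test data.
for r in range(3):
    train = folds[r]
    val = folds[(r + 1) % 3]
    test = folds[(r + 2) % 3]
    # fit on `train`, tune on `val`, evaluate on `test`
    print(f"rotation {r}: train={train}, val={val}, test={test}")
```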
-
Agreed! I don't want to argue that prereg can replace cross-validation, is better than it, or does exactly the same thing. There are some similarities in their utility, though. Both are great tools in the right circumstances and should be used more often, when appropriate.
-
Modelling, though, is not the same as analysing data in depth, so in the cases I can think of, prereg can't help with avoiding overfitting.
-
Agreed.
-
I'm glad to see quite a few people like you who realise we're not trying to stop you from overhauling the methods in your (sub*)fields. We're all for open science. We just want to do it right in our (sub*)fields too.
New conversation -
TFW somebody really gets how you do science (AKA is also a modeller).
"We believe that a hypothesis-centric approach is too impoverished to provide the necessary resources for a formal theory of such open practices."