If replication is not an issue, then neither is preregistration. I imagine that's the case in pure modeling studies: peer review presumably focuses on modeling constraints, validity, and parsimony rather than on the reliability of results. Open code is still important for robustness, though.
-
-
Generally speaking, isn't cross-validation to modeling what pre-registration can be for empirical work?
-
I think I know what you're getting at, so sort of: yes! Crossval evaluates cog/ML models; if you're interested, there's lots to read online. Crossval of course doesn't really evaluate things at the theory level of analysis.
-
Right. I see the similarity in that both are pretty good tools to lower/prevent overfitting. Would that be fair to say? I'm definitely interested in reading more; crossval is heavily under-taught and underutilized in my fields.
-
How does prereg prevent overfitting in a concrete way? Crossval does it in a very objective way, if used correctly.
-
Because, when used correctly, prereg prevents or at least reduces analytical choices that are data-dependent. You can't just run multiple analyses until you find one "that works". I don't think prereg prevents all causes of overfitting but it makes it a lot less likely.
-
For modeling, though, that's not really enough (the procedure is too underspecified). We need formal ways of doing it for a model, which is why we do this: https://en.m.wikipedia.org/wiki/Training,_validation,_and_test_sets
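A minimal sketch of the held-out evaluation the linked article describes, assuming scikit-learn; the synthetic dataset, ridge model, and split ratios are illustrative placeholders, not anything from the thread:

```python
# Sketch: train/validation/test split (dataset, model, and ratios are assumptions).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))
y = X @ rng.normal(size=5) + rng.normal(scale=0.1, size=300)

# Carve off a test set that is touched exactly once, at the very end.
X_rest, X_test, y_rest, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
# Split the remainder into training and validation data.
X_train, X_val, y_train, y_val = train_test_split(X_rest, y_rest, test_size=0.25, random_state=0)

# Tune on the validation set: all analytical choices happen here, never on test data.
best_alpha, best_mse = None, np.inf
for alpha in [0.01, 0.1, 1.0, 10.0]:
    model = Ridge(alpha=alpha).fit(X_train, y_train)
    mse = mean_squared_error(y_val, model.predict(X_val))
    if mse < best_mse:
        best_alpha, best_mse = alpha, mse

# The test set answers one question, once: does the chosen model generalize?
final = Ridge(alpha=best_alpha).fit(X_rest, y_rest)
print("test MSE:", mean_squared_error(y_test, final.predict(X_test)))
```

The point of the split mirrors the prereg argument above: choices made while looking at the validation data cannot contaminate the final test-set estimate.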
-
Formal systems need formal evaluations.
-
To be clear: all of the data gets a chance to appear in each of the three sets.
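One way to read that claim, sketched under assumptions (a simple k-fold rotation; the fold count and role assignment are illustrative, not the tweet's exact procedure):

```python
# Sketch: rotate fold roles so every sample serves as train, validation,
# and test data across rounds (fold count and indexing are assumptions).
import numpy as np

def rotating_splits(n_samples, n_folds=5, seed=0):
    """Yield (train_idx, val_idx, test_idx) with fold roles rotating each round."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(n_samples), n_folds)
    for i in range(n_folds):
        test_idx = folds[i]                      # this fold tests...
        val_idx = folds[(i + 1) % n_folds]       # ...the next one validates...
        train_idx = np.concatenate(              # ...and the rest train.
            [folds[j] for j in range(n_folds) if j not in (i, (i + 1) % n_folds)]
        )
        yield train_idx, val_idx, test_idx

# Over the full rotation, each sample is tested once, validates once,
# and trains in the remaining rounds.
for train_idx, val_idx, test_idx in rotating_splits(20, n_folds=5):
    print(len(train_idx), len(val_idx), len(test_idx))
```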
- 6 more replies
New conversation -
TFW somebody really gets how you do science (AKA is also a modeller).
"We believe that a hypothesis-centric approach is too impoverished to provide the necessary resources for a formal theory of such open practices."