Also check out @ReScienceEds, a journal dedicated to publishing replications in modelling.
Replying to @o_guest @the_Sage_BB and
To be clear, my high-level point is that just because replication in modelling is important doesn't mean prereg can solve it by definition. Literally, the way we do science is different. See
@zerdeve's work, relevant quotes here: https://twitter.com/o_guest/status/1062771691552673793
Olivia Guest | Ολίβια Γκεστ @o_guest
TFW somebody really gets how you do science (AKA is also a modeller).
"We believe that a hypothesis-centric approach is too impoverished to provide the necessary resources for a formal theory of such open practices." @zerdeve https://arxiv.org/abs/1811.04525Show this thread1 reply 1 retweet 3 likes -
Replying to @o_guest @the_Sage_BB and
Generally speaking, isn't cross-validation to modeling what pre-registration can be for empirical work?
Replying to @Research_Tim @the_Sage_BB and
I think I know what you are trying to get at, so sort of: yes! Crossval evaluates cog/ML models. If you are interested, there is lots to read online. Crossval ofc doesn't really evaluate things at the theory level of analysis.
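To make the comparison concrete, here is a minimal sketch of k-fold cross-validation using only the standard library. The toy "model" (the training-set mean) and the mean-squared-error score are hypothetical stand-ins, not anything discussed in the thread; real cog/ML models would slot into the `fit` and `score` roles.

```python
import random

def k_fold_cv(data, k, fit, score):
    """Evaluate a model with k-fold cross-validation.

    Each fold is held out once as a test set while the model is
    fit on the remaining folds; the k held-out scores are averaged.
    """
    data = list(data)
    random.Random(0).shuffle(data)          # fixed seed so the split is reproducible
    folds = [data[i::k] for i in range(k)]  # k roughly equal folds
    scores = []
    for i in range(k):
        test = folds[i]
        train = [x for j, fold in enumerate(folds) if j != i for x in fold]
        model = fit(train)
        scores.append(score(model, test))
    return sum(scores) / k

# Toy example: the "model" is just the training mean; the score is MSE.
fit = lambda train: sum(train) / len(train)
score = lambda m, test: sum((x - m) ** 2 for x in test) / len(test)
mse = k_fold_cv(range(20), k=5, fit=fit, score=score)
```

Because every score is computed on data the model never saw during fitting, the averaged score penalizes overfitting directly, which is the objectivity being pointed to below.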
Replying to @o_guest @the_Sage_BB and
Right. I see the similarity in that both are pretty good tools to lower/prevent overfitting. Would that be fair to say? I'm definitely interested in reading more; crossval is heavily under-taught and underutilized in my fields.
Replying to @Research_Tim @the_Sage_BB and
How does prereg prevent overfitting in a concrete way? Crossval does it in a very objective way, if used correctly.
Replying to @o_guest @the_Sage_BB and
Because, when used correctly, prereg prevents or at least reduces analytical choices that are data-dependent. You can't just run multiple analyses until you find one "that works". I don't think prereg prevents all causes of overfitting but it makes it a lot less likely.
Replying to @Research_Tim @the_Sage_BB and
That's not really enough for modelling, though — it's too underspecified a procedure. We need formal ways of doing it for a model. Which is why we do this: https://en.m.wikipedia.org/wiki/Training,_validation,_and_test_sets
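The three-way split the linked article describes can be sketched as follows; the 60/20/20 proportions and the seed are illustrative choices, not prescribed by the thread.

```python
import random

def train_val_test_split(data, val_frac=0.2, test_frac=0.2, seed=0):
    """Split data into disjoint training, validation, and test sets.

    Training set:   fit model parameters.
    Validation set: tune hyperparameters / select among candidate models.
    Test set:       touched once, for the final unbiased performance estimate.
    """
    data = list(data)
    random.Random(seed).shuffle(data)
    n = len(data)
    n_test = int(n * test_frac)
    n_val = int(n * val_frac)
    test = data[:n_test]
    val = data[n_test:n_test + n_val]
    train = data[n_test + n_val:]
    return train, val, test

train, val, test = train_val_test_split(range(100))
```

The key discipline is that the test set plays no role in any fitting or selection decision — that separation is what makes the final estimate formal rather than data-dependent.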
Replying to @o_guest @Research_Tim and
Formal systems need formal evaluations.
Replying to @o_guest @Research_Tim and
To be clear: All data gets a chance to be in all of the three sets.
That's if you want to be exhaustive and it's practical to do so. Another thing to bear in mind is that not all models are trained, of course. In those cases, other formal model selection techniques can be used.
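One way to give all data a turn in each of the three sets, as suggested above, is to rotate folds through the three roles. This is a sketch of one such rotation scheme, not the only (or a canonical) one:

```python
def rotate_roles(data, k):
    """Yield (train, val, test) splits in which every fold — and hence
    every data point — serves once as the test set, once as the
    validation set, and otherwise as training data."""
    folds = [list(data[i::k]) for i in range(k)]
    for i in range(k):
        test = folds[i]
        val = folds[(i + 1) % k]                 # the "next" fold validates
        train = [x for j, fold in enumerate(folds)
                 if j not in (i, (i + 1) % k) for x in fold]
        yield train, val, test

splits = list(rotate_roles(list(range(12)), k=4))
```

Across the k rotations, each point appears exactly once in the test position and once in the validation position, so no single arbitrary split determines the evaluation.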