But publishing is essentially a form of pre-registration. Once published, it's a permanent record of the model's particular instantiation. No HARKing possible.
Replying to @bradpwyble @IrisVanRooij and others
HARKing occurs during model building. E.g. people change their hypotheses about model mechanisms and make incremental adjustments until they're happy with the model's fit to the data.
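To make the concern concrete, here is a minimal sketch, in Python, of the adjust-until-it-fits loop being described; the toy data, the exponent that gets tweaked, and the "good enough" threshold are all hypothetical assumptions for illustration, not anything from the thread.

    # Hypothetical illustration only: the toy data, the tweaked exponent, and
    # the "good enough" threshold are assumptions made up for this sketch.
    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(0, 1, 50)
    data = 2.0 * x + rng.normal(0, 0.05, size=x.shape)  # stand-in "observed" data

    def model(x, exponent):
        """Toy model whose assumed mechanism (the exponent) keeps being adjusted."""
        return 2.0 * x ** exponent

    # The undocumented loop: change the assumed mechanism until the fit looks
    # good, then report only the final exponent. The discarded variants are
    # exactly what a pre-reg, or a full report of iterations, would expose.
    tried = []
    for exponent in [3.0, 2.0, 1.5, 1.2, 1.0]:
        sse = np.sum((data - model(x, exponent)) ** 2)
        tried.append((exponent, sse))
        if sse < 0.3:  # "happy with the fit" threshold, chosen post hoc
            break

    print("all iterations tried:", tried)   # reporting this keeps the process visible
    print("final model only:", tried[-1])   # reporting only this hides it (HARKing risk)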
Replying to @GunnarBlohm @IrisVanRooij and others
That's just model building. All models are built based on data. How else would you do it?
Replying to @bradpwyble @GunnarBlohm and others
I think this is an interesting question. I agree iterating on models is part of the process, but valuable info is lost if those iterations aren't reported in the paper and only the final product is.
Replying to @neurograce @bradpwyble and others
Yes, so a final report that includes information about failed iterations would achieve the same purpose as a pre-reg in this case, if @GunnarBlohm's goal is to prevent HARKing?
Replying to @venpopov @neurograce and others
But you don't need to test the predictions in the paper that published the model initially. Save that for later.
Replying to @bradpwyble @venpopov and others
I agree with this position, but generally I have found it hard to get models published without empirical tests. It seems that reviewers tend to underestimate the scientific contribution of coming up with a model that generates/explains key phenomena in the first place.
Replying to @IrisVanRooij @bradpwyble and others
Couldn't agree more. I have never been able to publish a model without accompanying new empirical data and it is a major barrier to theoretical work.
Replying to @tom_hartley @IrisVanRooij and others
I've published models without new data. But they were well interfaced with existing data!
Replying to @GunnarBlohm @tom_hartley and others
In this paper (https://www.ncbi.nlm.nih.gov/m/pubmed/28986463/), e.g., we tried 2 different learning options. When one fit the data better than the other, I initially thought to report only the one. But including both helped flesh out the story (& it didn't even need new data!)
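A minimal sketch of the practice described in this tweet: fit two candidate learning rules to the same existing data and report both fits rather than only the winner. The stand-in data, the power-law and exponential rules, and the AIC comparison are assumptions for illustration, not the analysis from the linked paper.

    # Hypothetical sketch, not the analysis from the linked paper: fit two
    # candidate learning rules to existing data and report both, rather than
    # only the better-fitting one.
    import numpy as np
    from scipy.optimize import curve_fit

    rng = np.random.default_rng(1)
    trials = np.arange(1, 41, dtype=float)
    rt = 800 * trials ** -0.3 + rng.normal(0, 20, size=trials.shape)  # stand-in data

    def power_law(t, a, b):      # learning option 1
        return a * t ** -b

    def exponential(t, a, b):    # learning option 2
        return a * np.exp(-b * t)

    def aic(y, yhat, k):
        """Gaussian AIC from the residual sum of squares; k = number of parameters."""
        n = len(y)
        rss = np.sum((y - yhat) ** 2)
        return n * np.log(rss / n) + 2 * k

    for name, f in [("power law", power_law), ("exponential", exponential)]:
        popt, _ = curve_fit(f, trials, rt, p0=(800.0, 0.1))
        print(name, "params:", np.round(popt, 3),
              "AIC:", round(aic(rt, f(trials, *popt), 2), 1))
    # Reporting both fits keeps the model-selection step visible in the paper,
    # instead of quietly dropping the option that fit worse.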
I published a paper without data and with a model. Very lucky to have had the chance, of course. http://dx.doi.org/10.7554/eLife.21397
Replying to @o_guest @neurograce and others
We now have 2 published papers of this form and the reviewers were surprisingly accepting, even though the journals were empirical. Many other examples exist from back in the day, e.g. http://www.jneurosci.org/content/13/11/4700
Replying to @bradpwyble @o_guest and others
Hopefully publishing models without new data becomes widely acceptable in psych. Einstein didn't test relativity; other people ran those experiments later *because* relativity was already published.
End of conversation