Also: even if the Stage 2 review process didn't help enforce rigour in terms of robustness (which, imo, it does), there is little incentive anyway for authors to underreport vital exploratory analyses, because the outcomes of any analysis do not influence the editorial decision.
Replying to @chrisdc77 @richarddmorey and
So for me, I would need to see some concrete evidence that the concerns you outline actually manifest in RRs before I see this as something to worry about. With vanilla prereg, I think your case may be stronger, though again, the review process should help.
Replying to @chrisdc77 @richarddmorey and
Daniël Lakens Retweeted Daniël Lakens
I preregistered this would be the response. https://twitter.com/lakens/status/1083423995834363906?s=19
Daniël Lakens added,
Daniël Lakens (@lakens): Any improvement in science that becomes popular will lead to a published article that suggests X will not solve all our problems. Unless people provide concrete real-life examples and a cost-benefit analysis of the problems, articles criticizing X probably mean X is good.
Replying to @lakens @chrisdc77 and
I did not claim that anyone claimed that pre-reg "would not solve all our problems". The requirement of a cost-benefit analysis is ridiculous when my goal is not to KEEP people from pre-reg, but rather to articulate and help avoid a particular issue.
Replying to @richarddmorey @lakens and
Given how people tend to use p values rigidly, you should recognise the need for reminding people to have some nuance when using new tools. If one warns against naïve use of hard decision rules, that doesn't mean one is arguing against significance testing.
Replying to @richarddmorey @lakens and
Given the history of statistical practice, I think there's plenty of reason to worry that people will be rigid. Let's help avoid that future by first considering it a possibility.
Replying to @richarddmorey @lakens and
A similar point was made by @TrishaVZ at Psychonomics. I don't think we have to worry one iota about people *not* exploring their data. I'm confident that everyone I know does it. Do you know of anyone who doesn't?
Replying to @GordPennycook @lakens and
I have been sent data by people who did not explore their data in the right way to find critical issues with their analysis, yes. Their analysis was dependent on not having seen something important that invalidated their results.
Replying to @richarddmorey @GordPennycook and
Based on my experience being sent data sets, I believe a lack of robust checking to be the default (indeed, most people's tools, e.g., SPSS, don't really allow very much of it, or make it difficult).
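To make concrete the kind of "robust checking" being discussed, here is a minimal sketch in Python rather than SPSS; the data are simulated, the variable names are made up, and the Cook's-distance cutoff is just a common rule of thumb, so none of this reflects any analysis from the thread. The idea is simply to fit a model, flag high-influence observations, and refit without them to see whether the conclusion survives.

# Illustrative robustness check (hypothetical, simulated data; not from the thread).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
x = rng.normal(size=100)
y = 0.3 * x + rng.normal(size=100)
y[:3] += 5.0  # a few contaminated observations, as might lurk unnoticed in a real data set

X = sm.add_constant(x)
fit_all = sm.OLS(y, X).fit()

# Cook's distance flags observations with outsized influence on the fit
cooks_d, _ = fit_all.get_influence().cooks_distance
keep = cooks_d < 4 / len(y)  # common rule-of-thumb cutoff, purely illustrative

fit_trimmed = sm.OLS(y[keep], X[keep]).fit()

print("slope, all data: %.3f (p = %.3f)" % (fit_all.params[1], fit_all.pvalues[1]))
print("slope, trimmed:  %.3f (p = %.3f)" % (fit_trimmed.params[1], fit_trimmed.pvalues[1]))

If the trimmed and untrimmed slopes tell different stories, the original conclusion depended on a handful of points the authors may never have looked at, which is exactly the failure mode described above.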
Replying to @richarddmorey @GordPennycook and
Same here. Modelling people's data, I realise my computational models are showing things about the data that mean the experiment doesn't show what the authors hoped/think. (Manuscripts with good examples of this are still in pre-preprint form, but obviously I will share when ready.)
Not that I'm the only person who's done this, of course. Pretty standard stuff throughout comp modelling, and one of the reasons why it's such a powerful way to do science.