When preparing the data, I scrutinized the hell out of the files. I wanted to make sure the variable names made sense to anyone who stumbled across the data, and I quadruple-checked all of the analyses. I posted the R script. Surely, there was nothing that I missed, right? 2/n
Turns out, NO. I get the unexpected editor's decision that my paper was flat-out rejected. I was quite confused, because this paper is actually pretty important, and I thought for sure this journal would find it interesting. I was upset, so I put it aside for a day. 3/n
I finally get to the review, and the only pertinent comment was that the reviewer re-ran the analyses and even did a factor analysis on the Big 5 scale (conscientiousness was a primary IV). The paper was rejected because this reviewer had a “hunch” that the scale was incorrectly coded. 4/n
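For anyone curious what that kind of check looks like, here is a minimal sketch in R. The data frame bfi_data and its column names are hypothetical, not from the actual paper.

library(psych)

# Hypothetical data frame `bfi_data`, with columns C1..C9 holding what are
# *supposed* to be the conscientiousness items.
consc <- bfi_data[, paste0("C", 1:9)]

# Item-total statistics: an item that really belongs to another trait tends
# to show a near-zero or negative correlation with the rest of its subscale.
alpha(consc, check.keys = TRUE)

# An exploratory factor analysis across all items: stray loadings flag
# items sitting on the wrong factor.
fa(bfi_data, nfactors = 5, rotate = "oblimin")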
I was furious. How could the scale be wrongly coded? I checked everything numerous times! There was no chance in hell. So once again, I set it aside to work on other projects for a week while I cooled off. 5/n
On some downtime here in Croatia, I figured I would start dabbling in these analyses again. I checked the code, and everything was correct! I started drafting my response to the editor protesting the rejection. How could my paper be rejected on a “hunch” that was wrong? 6/n
I'm swearing up and down that I will never share my data again, because how could something like this happen? But before I send my draft to my supervisor, I had an inkling to look at the Qualtrics survey, just to make sure I was 100% right. Turns out, I was not. 7/n
Things looked fine initially, but apparently, the RA who put the survey together entered 10 of the 44 BFI items out of order. And the reviewer's “hunch” was indeed correct... the items were coded correctly; they just weren't entered correctly. 8/n
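A check that would have caught this is simple enough to sketch: compare the item text exported from the survey against the canonical bank, position by position. In R, assuming two hypothetical character vectors bank_items and survey_items:

# `bank_items` holds the 44 BFI item texts in canonical order;
# `survey_items` holds the item texts as actually entered into the survey.
stopifnot(length(bank_items) == length(survey_items))

mismatch <- which(bank_items != survey_items)
data.frame(position = mismatch,
           expected = bank_items[mismatch],
           entered  = survey_items[mismatch])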
I checked (almost) everything and was so sure that I was right. I was also about 5 seconds away from changing my position on open science. But I am SO glad that I took the time to step away from the project to look at things more clearly. 9/n
Most importantly, I am SO glad that I shared my data, because this mistake NEVER would have been caught otherwise. I'm quite “hover-ish” when it comes to my RAs, but never would I have thought to double-check something as simple as correctly entering a survey. 10/n
Especially because we have a bank of questionnaires that are simply copied and pasted into surveys (this RA didn’t do that for some reason). 11/n
Fortunately, I think this error will actually make my paper a lot stronger. And as upset as I am about the 3 months of review that are now lost, I am happy to know that I didn't publish a misleading paper. And from now on, I will always share my data. /end.
Bonus advice to anyone who may fall into the trap I did: the time I spent preparing my data to share made me overconfident, more so than usual, I'd say. Do yourself a favour and always keep an open mind that you may be wrong. It will save a lot of headache in the long run.
End of conversation
New conversation
The support on this thread is amazing & very encouraging. I would like to challenge anyone who has liked, re-tweeted, and/or commented on this thread to give open science a shot, whether it's trying a pre-reg or sharing materials or data. Let's work together to make science better.
Here are some resources for anyone interested:
Orgs: @OSFramework @improvingpsych @PsiChiHonor @PsySciAcc @open_con @UCBITSS
Intro paper: https://twitter.com/PsyArXivBot/status/1019821446531600384
@lakens Coursera course: http://www.coursera.org/learn/statistical-inferences
End of conversation
New conversation
Nah. The hero is the reviewer who actually took the time to check the data!
I should also say, the reviewer was actually quite encouraging and empathetic given the situation. They could have easily been a jerk or said nothing, but they really went above and beyond!
New conversation
This is great! As someone who worked for decades putting info into databases & later trying to retrieve it & taught others how to do it, I've learned there are ALWAYS errors in data entry. To avoid problems with the final result, assume errors & build checks into the process & workflow.
Data entry errors are the easiest to make, the hardest to find, and the most exhausting to prove absent.
The only way I know to guard against errors (assuming access to the people entering the data, or to the data itself immediately after entry) is constant spot checks. Obviously not great for huge amounts of data & metadata. For large amounts, you should check for patterns & unexplained outliers.
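For a concrete flavour of building such checks into the workflow, here is a minimal sketch in R; the data frame entries and its columns are hypothetical.

# Hypothetical data frame `entries` with a 1-5 Likert column `item`
# and a demographic column `age`.
bad_range <- entries$item < 1 | entries$item > 5   # response outside the scale
bad_age   <- entries$age < 18 | entries$age > 99   # implausible value

z <- (entries$age - mean(entries$age, na.rm = TRUE)) /
  sd(entries$age, na.rm = TRUE)
is_outlier <- abs(z) > 3                           # crude z-score outlier flag

# Rows flagged for a manual spot check:
entries[which(bad_range | bad_age | is_outlier), ]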
End of conversation