I'm confused here, because clearly the researchers knew it was an issue; otherwise, why investigate?
Replying to @GidMK @JacobGudiol
Let me put it this way - this was clearly something the authors knew about, investigated, and considered an issue that potentially derailed the narrative. And yet it was not addressed at all in the paper, except lumped in with other numbers. That is a bad look
Replying to @GidMK @JacobGudiol
It is potentially entirely a fluke. It is also possible that excess mortality in this age group was influenced by limited hospital capacity, or that under-reporting of COVID in children was a problem. There is some uncertainty there
Replying to @GidMK @JacobGudiol
It's also worrying that what the email itself describes is still basically p-hacking - only reporting the analyses that agree with your argument is literally the definition of p-hacking
Replying to @GidMK
Both analyses agree with the argument. We know that now. There is no real excess death among school kids in Sweden. It is just that the other analysis required a lot more explaining in order for it not to be used in a misleading way, as it is now by Science magazine etc
Replying to @JacobGudiol @GidMK
Say you did some sort of subanalysis of your IFR data and first found something that looked weird, not fitting with your general conclusion.
Replying to @JacobGudiol @GidMK
Would you not investigate that further before publishing it? And if you found out that it was just randomness, not even some systematic bias in some of the data or similar, would you still report it? Just to tell people "I also found this but it was nothing"?
Replying to @JacobGudiol @GidMK
To me, p-hacking is excluding data to get to the conclusion you want, or choosing one analysis that gives you a significant p-value. You are in some way changing the real conclusion. It is not that you choose not to present every single data point and analysis that there is
Replying to @JacobGudiol
It's not about excluding data; it's about stopping your analysis when it matches your conclusions. That can mean doing more analyses until you find one you like and only reporting that one instead of all of them
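A minimal simulation sketch of the mechanism described in that tweet (illustrative only; the trial count, subgroup count, and sample sizes are assumptions, not anything from the thread): on pure-noise data, running many subgroup analyses and reporting only the smallest p-value pushes the false-positive rate far above the nominal 5%.

    import math
    import random
    import statistics

    def two_sample_p(a, b):
        """Two-sided p-value from a normal-approximation z-test on the difference in means."""
        se = math.sqrt(statistics.variance(a) / len(a) + statistics.variance(b) / len(b))
        z = abs(statistics.mean(a) - statistics.mean(b)) / se
        return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(z / math.sqrt(2.0))))

    random.seed(0)
    TRIALS, SUBGROUPS, N = 2000, 10, 50

    honest = cherry_picked = 0
    for _ in range(TRIALS):
        # Null world: no real group difference in any subgroup.
        ps = [
            two_sample_p(
                [random.gauss(0, 1) for _ in range(N)],
                [random.gauss(0, 1) for _ in range(N)],
            )
            for _ in range(SUBGROUPS)
        ]
        honest += ps[0] < 0.05           # one pre-specified analysis, reported no matter what
        cherry_picked += min(ps) < 0.05  # run them all, report only the best-looking one

    print(f"false positives, pre-specified analysis: {honest / TRIALS:.1%}")
    print(f"false positives, best of {SUBGROUPS} analyses: {cherry_picked / TRIALS:.1%}")

With ten independent looks at null data, the chance that at least one p-value dips below 0.05 is roughly 1 - 0.95^10, about 40%, which is what the simulation recovers; a single pre-specified analysis stays near 5%.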
Replying to @GidMK
He didn't stop, though. That is why the email exists. He found some data that stood out and decided to see if he could find more detailed information
Replying to @JacobGudiol
Right, but he didn't report the analysis in the paper until after an FOI revealed that he'd done it, no? And instead reported an analysis that backed his argument?
Replying to @GidMK @JacobGudiol
If he had the data to put the issue to rest, it should have been mentioned in the background material or analysed in a research report that could be referenced in the letter. The intention of looking into the problem does not make burying it any better.