Are you not afraid that revealing this Achilles heel of yours could be exploited by disinformers, aiming either merely to win your approval by faking transparency, or even to discredit you for approving a straw-man(!) study that they subsequently butcher completely?
-
Replying to @GidMK
You basically announce that you not only endorse openness, but are biased in favor of studies that appear to be open. Thus, faking openness, e.g. by being selectively open about some mistakes, will be a natural (if cynical) approach to getting you and your followers on board. 1/
-
Replying to @EliasHasle @GidMK
Elias Hasle Retweeted Covid19Crusher
An ivermectin believer pointed me to a critical review https://twitter.com/Covid19Crusher/status/1367854831076061185?s=20 (of which I am very skeptical), where the study is attacked from all angles (in effect attacking its openness too). "Hostile study design", "underpowered", "treated the control group", etc. 2/
Elias Hasle added,
Covid19Crusher @Covid19Crusher
Low usefulness underpowered ivermectin RCT in young patients published by JAMA. IVM arm shows: • lower mortality • lower disease progression • lower hospitalization or ICU or O2 need • faster symptom resolution but no stat significance due to design. https://jamanetwork.com/journals/jama/fullarticle/2777389
-
Replying to @EliasHasle
2 tweets into that review and much of it is already nonsensical
-
Replying to @GidMK @EliasHasle
Ok, that thread is amazingly bizarre. Completely ignores confidence intervals or any statistical tests because of some vague accusations of bias on the part of the authors, then just cherry picks the few results that were (non-significantly) better in the ivm arm
-
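(A minimal sketch of the confidence-interval point above, with invented counts that are not from the paper or the thread: when an outcome occurs only a handful of times per arm, the interval around a relative risk is so wide that "lower mortality, not statistically significant" carries almost no information.)

```python
# Toy illustration: a 95% CI for a relative risk with tiny event counts is extremely wide.
# The counts are invented for illustration and are NOT from the JAMA study.
import math

def relative_risk_ci(events_t, n_t, events_c, n_c, z=1.96):
    """Relative risk with a standard log-scale (Katz) 95% confidence interval."""
    rr = (events_t / n_t) / (events_c / n_c)
    se_log_rr = math.sqrt(1 / events_t - 1 / n_t + 1 / events_c - 1 / n_c)
    lo = math.exp(math.log(rr) - z * se_log_rr)
    hi = math.exp(math.log(rr) + z * se_log_rr)
    return rr, lo, hi

# e.g. 1 death out of 200 in the treatment arm vs 2 out of 200 controls:
print(relative_risk_ci(1, 200, 2, 200))  # RR 0.5, 95% CI roughly (0.05, 5.5) -- spans 1 by a mile
```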
Replying to @GidMK
For all of us who must admit not to have read the paper, could you confirm or reject some of the many claims about error sources that (supposedly) favor negative/neutral/non-significant results?
-
Replying to @EliasHasle
The argument about "6.1x" is obvious nonsense - you can't compare people in a study (who are carefully tested and screened) to the general population like that, it's just poor understanding of how RCTs work
-
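(A toy illustration of one of the "many reasons" the comparison breaks down, with invented numbers rather than real data: a population case-fatality rate divides deaths by detected cases, whereas a trial that tests and follows every participant effectively counts all infections in its denominator, so the two rates differ even when the underlying risk per infection is identical.)

```python
# Toy example of one ascertainment effect, with invented numbers (not real data):
# a population CFR uses *detected* cases as the denominator, while a trial that
# tests everyone counts all infections, so its observed rate looks much lower
# even if the underlying risk per infection is exactly the same.
deaths = 20
true_infections = 10_000
detected_cases = 2_000            # suppose only 1 in 5 community infections is detected

population_cfr = deaths / detected_cases       # 0.010 -> "1% mortality"
per_infection_rate = deaths / true_infections  # 0.002 -> "0.2%", i.e. 5x lower
print(population_cfr, per_infection_rate)
```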
Replying to @GidMK
Thanks for responding (again)!
I am sorry, but I don't see any argument about "6.1x" in that thread.
-
Replying to @EliasHasle @GidMK
Elias Hasle Retweeted Covid19Crusher
If you mean this, https://twitter.com/Covid19Crusher/status/1367854834536357889?s=20, it reads to me as an argument about low statistical power (due to the low event frequency in both monitored groups), for which it should be relevant.
Elias Hasle added,
Covid19Crusher @Covid19Crusher
Let's start with a clinical trial golden rule: the population sample must reflect the population to treat in the real world. An absolute must. And an absolute design fail right from the start: this study has cherry-picked low-risk patients 6x less likely to die. pic.twitter.com/wYWcIhHdon
Nah it makes no sense whatsoever. Cases/deaths in a trial that tests every person will by definition be different to the average across the population for so many reasons, it's not a reasonable criticism of the study at all
-
Replying to @GidMK
I think labeling it as "an absolute design fail" is an absolute review fail.
Reducing statistical power is of course only a problem if it becomes unlikely to detect a valuable effect size. Unlike you, the Covid19Crusher "review" omits any actual treatment of statistical power.
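(A minimal power sketch for the point above, using purely illustrative event rates and arm sizes that are assumptions, not figures from the study: with roughly 200 patients per arm and mortality around a couple of percent, even a large relative reduction would rarely reach p < 0.05, which is why a trial powered for symptom resolution says little about mortality in either direction.)

```python
# Monte Carlo power check for a rare binary endpoint, using Fisher's exact test.
# All numbers (event rates, arm size) are illustrative assumptions, NOT from the JAMA paper.
import numpy as np
from scipy.stats import fisher_exact

rng = np.random.default_rng(0)

def simulated_power(p_control, p_treated, n_per_arm, n_sims=2000, alpha=0.05):
    """Fraction of simulated trials where a two-sided Fisher exact test reaches p < alpha."""
    hits = 0
    for _ in range(n_sims):
        c = rng.binomial(n_per_arm, p_control)   # control-arm events
        t = rng.binomial(n_per_arm, p_treated)   # treatment-arm events
        _, p = fisher_exact([[t, n_per_arm - t],
                             [c, n_per_arm - c]])
        if p < alpha:
            hits += 1
    return hits / n_sims

# 2% control mortality, a large 50% relative reduction, 200 patients per arm:
print(simulated_power(0.02, 0.01, 200))  # typically well under 0.2, i.e. severely underpowered
```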
-