Hi, Health Nerd. I'm a statistician and textbook author. I computed the p-value and found the exact answer here. There is nothing in the statement that I can see is incorrect. A p-value is the result of a computation testing whether a result could have occurred by random chance.
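A minimal sketch of the kind of "exact" computation being referred to, assuming it is a binomial sign test (the probability that a hypothetical ineffective treatment would produce at least this many "positive" studies by chance); the counts are placeholders, not figures from the actual analysis:

    # Exact sign-test style p-value: P(at least k_positive of n_studies studies
    # come out "positive") when an ineffective treatment makes either direction
    # equally likely. The counts below are placeholders for illustration only.
    from math import comb

    def exact_sign_test_p(k_positive, n_studies, p_null=0.5):
        """P(X >= k_positive) for X ~ Binomial(n_studies, p_null)."""
        return sum(
            comb(n_studies, k) * p_null**k * (1 - p_null)**(n_studies - k)
            for k in range(k_positive, n_studies + 1)
        )

    print(exact_sign_test_p(k_positive=20, n_studies=26))  # ~0.0047 with these placeholder counts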
Replying to @EduEngineer @GidMK
Perhaps you're reading in a meaning the author meant to express but didn't. For example, perhaps they meant to say "a hypothetical ineffective treatment would generate" rather than "an ineffective treatment generated".
2 replies 0 retweets 4 likes
Replying to @K_Sheldrick @EduEngineer
It's also worth noting that in the context of this meta-analytic model the p-value is entirely the result of the cherry-picking of "positive" values, so the chance of having a low p-value is 100% regardless of whether ivm works or not
2 replies 0 retweets 7 likes
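A toy simulation of the claim above, under the assumption (illustrative only, not the website's actual extraction rule) that a study gets coded "positive" whenever any one of several reported endpoints happens to favour the treatment; even with a true effect of exactly zero, the sign test then comes out looking significant:

    # Toy simulation: under a null effect, each endpoint favours the treatment with
    # probability 1/2. If a study is coded "positive" whenever at least one of its
    # endpoints leans that way, far more than half the studies end up "positive",
    # and the sign-test p-value is small regardless of whether the drug works.
    # Study and endpoint counts are invented for illustration.
    import random
    from math import comb

    def sign_test_p(k, n, p=0.5):
        return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

    random.seed(0)
    n_studies, endpoints_per_study = 26, 3
    positives = sum(
        any(random.random() < 0.5 for _ in range(endpoints_per_study))
        for _ in range(n_studies)
    )
    print(positives, "of", n_studies, "coded positive; p =", sign_test_p(positives, n_studies))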
Replying to @GidMK @K_Sheldrick
No, there was no cherry-picking. There was an extremely forgiving set of inclusion-exclusion criteria that let in some positive and negative results, but left out almost nothing.
2 replies 1 retweet 39 likes
Replying to @EduEngineer @K_Sheldrick
Of course there is cherry-picking throughout; it is rather boringly obvious. The anonymous authors of the website simply pick the most convenient values for their analysis so that they can have a better-looking model, regardless of severity etc.
1 reply 0 retweets 7 likes
I identified one example of this in the thread, but it's pretty much ubiquitous throughout the analysis
2 replies 0 retweets 4 likes
Replying to @GidMK @K_Sheldrick
Can you name the author in your example of cherry-picking? I don't see it. Which study was misplaced by the inclusion-exclusion criteria?
1 reply 0 retweets 33 likes
Replying to @EduEngineer @K_Sheldrick
It's not the inclusion criteria, which are basically "chuck all the awful studies into one website". It's just that the authors extract only "positive" results regardless of whether studies actually showed a benefit
2 replies 0 retweets 3 likes
Replying to @GidMK @K_Sheldrick
You keep conflating "didn't show a benefit" with "wasn't statistically significant", but the latter doesn't make a difference in a binary p-value computation.
2 replies 1 retweet 34 likes
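An illustration of the binary coding being described, with invented risk ratios: each study contributes only the direction of its point estimate, so a favourable but non-significant result counts as "positive" just like a significant one:

    # Direction-of-effect (binary) coding: only the sign of each point estimate
    # matters; whether a study's own confidence interval crosses 1.0 is ignored.
    # The risk ratios and intervals below are invented for illustration.
    studies = [
        {"name": "A", "rr": 0.62, "ci": (0.40, 0.96)},  # favourable and statistically significant
        {"name": "B", "rr": 0.81, "ci": (0.55, 1.19)},  # favourable but not significant
        {"name": "C", "rr": 1.10, "ci": (0.70, 1.73)},  # unfavourable
    ]
    positives = sum(1 for s in studies if s["rr"] < 1.0)  # A and B both count as "positive"
    print(positives, "of", len(studies), "studies coded positive")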
And your idea that "a number wasn't corrected properly" means "awful study" probably correlates only weakly with reality. I've read a 5-digit number of studies, and I find such mistakes in the majority of them, at every level of clinical quality.
1 reply 0 retweets 32 likes
Yes, as I said, there are a lot of awful studies. This one is particularly worthless though, for a number of reasons not limited to the one that you want to ignore for some reason.