Their risk-of-bias rating was basically "does this study agree with me?" They transformed numbers oddly to produce odds ratios, which they then inappropriately plugged into a single statistical model
There was no meaningful distinction between "Big Data" and "real world" data, and the disconnect made no sense. The papers they liked appear multiple times in the analysis, while the ones they disliked get only a single line (for no stated reason)
The search strategy, inclusion/exclusion criteria, and pretty much the whole review methodology were opaque or non-existent. Major studies were completely overlooked in favor of the authors' own work
Also, it's a fairly major publication for the journal, and there are clearly many language issues/mistranslations. Not a great look
End of conversation
New conversation
The point (for me) is that science does not stop at publication. Scientists can still discuss published science. And people who care about science need to be critical of flawed research, no matter who published it. I care about that. But to each their own of course.
Oh certainly, and I don't disagree. I'm just frustrated at a system that lets this occur - this isn't so much a paper as a lengthy opinion piece dressed up with fancy words and statistical jargon
New conversation
I'm trained to be open-minded to all possible outcomes... but... well... errm... BOLLOCKS!!
He did it again!
@medmastery FYI
France, Raoult, and the scientific method... in an embarrassing position!