This is almost certainly wrong, but it illustrates some of the real complexities involved with journalists interpreting scientific research 1/ pic.twitter.com/dqJJqVdsC9
(Worth noting that there was no suggestion anywhere that this was a causal relationship until the press release; any speculation about the causes is totally hypothetical at this point)
What this tenuous relationship means, in practice, is that pet owners could have anywhere between a 1.8% and a 57% increase in their odds of IBS, which is a very wide range whose lower end verges on a 0% increase
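To see where those percentages come from, here's a quick sketch (assuming the reported odds ratio confidence interval runs from roughly 1.018 to 1.57, which is my reading of the paper, not a figure quoted in it):
* converting odds ratio CI bounds into "% increased odds" (bounds assumed, see above)
display (1.018 - 1) * 100    // lower bound: ~1.8% increased odds
display (1.57 - 1) * 100     // upper bound: ~57% increased odds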
The authors helpfully include a forest plot for their study at the bottom. Take a look and see what you think pic.twitter.com/7yY0GyUlWE
Having seen this plot, are you more or less confident in the statement that IBS is associated with pet ownership?
See, the thing is, that top study appears to be contributing the ENTIRE association. Every other study found no association at all, but this one single study has pushed the pooled relationship into statistical significance pic.twitter.com/pcW30Bzkhm
Being the nerd I am, I decided to rerun the meta-analysis on their sample of studies using the metan command in Stata. This is a bit quick and dirty, but using a random-effects model with inverse-variance weighting, I get these results pic.twitter.com/s95D5VYPQ7
For the epi nerds: when I run it with a fixed-effects model my results are the same as those reported in the paper, but my random-effects model CI crosses the null (on the log-odds scale it includes 0, i.e. an odds ratio of 1)
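If you want to try this at home, the whole thing is only a couple of lines of Stata (a rough sketch; the variable names are mine, and you'd first need to extract each study's log odds ratio and standard error from the paper):
* generic inverse-variance meta-analysis; logor/selogor are my own variable names
ssc install metan
metan logor selogor, eform            // fixed-effect inverse variance is the default for this input
metan logor selogor, random eform     // DerSimonian-Laird random-effects model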
But now comes the interesting part - what happens if I take out that single paper that appears to be driving the result? What do you reckon?
Here's the result. The association disappears completely. It looks like one study is driving all of these results pic.twitter.com/hcmXJePEol
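(The leave-one-out check is just an if qualifier on the same command - here assuming I've numbered the studies myself and the influential survey is studyid 1 in my dataset:)
* same random-effects model, minus the single influential study
metan logor selogor if studyid != 1, random eform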
So what is this study? Essentially, a simple observational survey of people in Singapore pic.twitter.com/H2qxbLvGJD
Now, I'm not going to critique this piece of research in depth, but I think it's worth noting that it only surveyed 300 people, of whom 80 had IBS. The other studies looked at a total of ~2,500 people
So what we're seeing in the meta-analysis is basically a series of negative results being totally outweighed by a single positive result. That is not great scientifically! pic.twitter.com/xv5a3lDScx
It's a bit like tossing a coin 5 times, getting 4 tails and 1 heads, and concluding that heads is the right answer
This is especially true when you consider that the p-value in my random-effects model is 0.064, which means these results aren't even ~technically~ statistically significant!
But bringing this back to #scicomm - how is a journalist meant to know this? It's complex stuff. Most scientists I know aren't comfortable re-running a meta-analysis to see what happens when you exclude studies
And the press release, let's remember, is astonishingly positive. No mention of the MASSIVE question mark remaining after this research, just "pet owners more likely to have IBS"
The real finding from this analysis is that there may be a very modest increase in risk of IBS from owning a pet, but this seems unlikely at present based on the totality of the evidence
Who do we blame for the misreporting? I'll leave that to you. There are many steps along the way that could've corrected this, but none were taken
SMALL CORRECTION: The forest plot I included earlier for the random-effects analysis was on the log-transformed scale (oops). Here's the plot once exponentiated: pic.twitter.com/Me4zJfoI6w
Also, the p-value is 0.064 for this model, which is technically not significant. The effect size is also different from that reported in the abstract; however, if I run a fixed-effects model everything matches exactly, so I suspect that's what was actually done here