Fixed link to PLOS study: http://journals.plos.org/plosone/article/metrics?id=10.1371/journal.pone.0168217
-
Subject of major concern. I spoke at @HumBehEvoSoc on how researchers can avoid distortion of their work in the media, & it starts with DEMANDING to see/correct/have the final say on univ press releases about their work. In short: Be an asshole for science. Proudly.
-
Very true, but I would add one caveat: we found in our 2014 study that scientists themselves admit to allowing exaggeration in their own PRs. See panel B below: 30% admitted their most recent PR was exaggerated even when they wrote it themselves, yet all blamed journalists! pic.twitter.com/EqZsC9tunr
-
Yes, but I think it is also a difficult game to play. If you do not simplify your PR at least a bit, then journals will never run it directly but will write their own, which is often so wrong that you can hardly recognize your own study in it.
-
Interestingly, we found that when journals do make up their own PR (usually, as you say, w/o consulting the authors) the content is substantially LESS exaggerated than in university PRs. See http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0168217 pic.twitter.com/tXKV2GWTsy
-
That's not to say, of course, that more involvement of the authors in PR production is a bad thing - there are plenty of other differences b/w the way journal and university press offices work that could explain this difference.
-
But pursuing this line of research has led me to challenge a lot of my starting assumptions, e.g. that journalists are the main source of hype, that hyped PRs get more news coverage, or that greater involvement of scientists necessarily reduces hype. None are supported by evidence.
-
Part of the problem I see -- just observing how research I know well is characterized -- is that papers move somebody who was writing, say, celeb news over to science. They have no idea of the body of work in a field and are clueless about what, say, a cohort study is, etc.
-
Indeed. One of the things we've been working on with @SMC_London is a potential coding system for PRs that can help convey basic study design info more clearly to reporters. There's a shortage of specialist sci journalists in the UK but a rising volume of sci being reported.
- 2 more replies
New conversation
-
And since press releases are, AFAIK, approved by the authors, the ultimate blame for hype rests with scientists themselves. 11-year-old example here: http://www.dcscience.net/2007/12/05/why-honey-isnt-a-wonder-cure/
-
Yes, where this is true I fully agree -- e.g. for most of the UK universities we studied this is the case. Not so much the case for journal press releases, which are often prepared without any consultation with the authors (and, interestingly, tend to be less exaggerated as well!)
-
Are abstracts exaggerated? I've certainly seen a few in econometrics journals where that was the case. It's also related to multiple comparisons: you *will* find something, and the abstract reports that.
-
Good Q. We did code the abstract & main text of articles separately but didn't analyse them as part of the main study (all data can be downloaded here https://figshare.com/articles/InSciOut/903704 in case anyone ever wants to look). The reason we didn't examine this is that /1
-
Our team lacked the specialist expertise to determine whether differences in the strength of causal statements between abstract and main text were due to exaggeration in one or understatement in the other. Our impression is that abstracts probably do exaggerate. /2
-
And not only in terms of causal statements but quite likely also in the other areas we looked at (advice & sample generalisation). But it would require a team of field specialists to assess this properly. Our analyses focused instead on changes b/w journal article, PR & news /3
-
On the assumption that the journal article represents the baseline (a conservative estimate of exaggeration). We discuss this in the 2014 BMJ paper. pic.twitter.com/DYhH3akum9
-
Ah clear. Tx for thread. V interesting. When in the first tweet you say 'coded abstract and main text separately', was there no possibility of an inter-rater agreement metric? Or do I misunderstand your 'coding process'?
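[Illustrative aside, not from the thread: an "inter-rater agreement metric" here usually means something like Cohen's kappa computed on two coders' independent categorical codes for the same statements. A minimal sketch with invented labels and a hand-rolled kappa function, not the InSciOut team's actual coding scheme or tooling:]

```python
# Illustrative only: Cohen's kappa for two hypothetical coders rating the
# same set of statements; labels and data are invented for the example.
from collections import Counter

def cohen_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters on categorical codes."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum((counts_a[c] / n) * (counts_b[c] / n)
                   for c in set(counts_a) | set(counts_b))
    return (observed - expected) / (1 - expected)

# e.g. codes for the strength of a causal claim in each abstract
coder_1 = ["causes", "can cause", "correlational", "causes", "correlational"]
coder_2 = ["causes", "correlational", "correlational", "causes", "correlational"]
print(round(cohen_kappa(coder_1, coder_2), 2))  # 0.67 for these made-up codes
```

[Kappa near 1 indicates agreement well beyond chance; near 0, agreement no better than chance.]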
- 5 more replies
New conversation
-
This has been one of my pet peeves for YEARS, and I think it’s getting worse, not better, because everyone is giving scientists training on writing better “stories” but not the ethics that goes with that. Also, Post Truth by Evan Davis should be required reading for everyone.
-
I also wonder if it's getting worse. We've got an analysis at the moment (sort of) looking at this, comparing results before vs after publication of our 1st paper on it in Dec 2014, on the basis that the paper ended up being a kind of naturalistic intervention on UK uni press offices.
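[Illustrative aside, not from the thread: at its simplest, the before/after framing amounts to comparing two exaggeration proportions. A minimal sketch with entirely made-up counts, assuming a basic two-proportion z-test rather than the team's actual analysis plan:]

```python
# Illustrative only: two-proportion z-test on exaggeration rates before vs
# after Dec 2014; all counts are invented, not the InSciOut data.
from math import sqrt
from statistics import NormalDist

def two_proportion_ztest(x1, n1, x2, n2):
    """z statistic and two-sided p-value for H0: the two proportions are equal."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# e.g. 120 of 300 PRs coded as exaggerated before vs 90 of 300 after (made up)
z, p = two_proportion_ztest(120, 300, 90, 300)
print(f"z = {z:.2f}, p = {p:.3f}")  # roughly z = 2.57, p = 0.010
```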
-
And once we have enough data across the increasing number of groups that are looking at this, we can do proper longitudinal analysis and maybe even some forward projections.
End of conversation