The badness is usually a matter of what the questions mean to the audience, how unlike things are grouped, and what claims the data are used to prove. It’s rarely on the surface.
Often the badness is introduced (or at least compounded) after the study is complete, in its presentation to the media. The survey didn’t group unlike things, but the lead investigator did in talking to reporters.
There’s a mix of intention and accident: better social science magic tricks get popular, and people innovate better magic tricks.
For instance: it’s hard to see what’s wrong with this... https://www.cdc.gov/violenceprevention/pdf/2015data-brief508.pdf / https://www.cdc.gov/violenceprevention/pdf/nisvs-statereportbook.pdf
...unless you have this context: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.845.1250&rep=rep1&type=pdf / https://psycnet.apa.org/record/2014-34003-001
Maybe the real answer is that there’s no such thing as a particularly good survey instrument.
Also, the reason this is true is the same reason it’s hard to tell whether a survey is bad just by looking at it.