"Surveys of hundreds of [academic] articles have found that statistically non-significant results are [falsely] interpreted as indicating ‘no difference’ or ‘no effect’ in around half" https://www.nature.com/articles/d41586-019-00857-9 …
Seems this can reasonably be interpreted as shorthand for "no statistically significant difference or effect".
"bucketing results into ‘statistically significant’ & ‘… non-significant’ makes people think that the items assigned in that way are categorically different. The same problems are likely to arise under any proposed statistical alternative that involves dichotomization"
This seems to be a generic argument against ever using dichotomous language to describe a continuous world. Do we really want to ban such language from all academic publications?
Replying to @robinhanson
If you want a one-number rubric for "do I care?" it should be Cohen's d, not p-value.
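As a sketch of what that one-number rubric looks like: Cohen's d is the difference in group means divided by a pooled standard deviation, so it measures the *size* of an effect rather than how surprising it is under a null. A minimal stdlib-only version, using the common pooled-SD convention (the sample values below are made up for illustration):

```python
import statistics

def cohens_d(a, b):
    """Standardized mean difference: (mean_a - mean_b) / pooled SD."""
    na, nb = len(a), len(b)
    va, vb = statistics.variance(a), statistics.variance(b)  # sample variances
    pooled_sd = (((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)) ** 0.5
    return (statistics.mean(a) - statistics.mean(b)) / pooled_sd

control   = [4.8, 5.1, 5.0, 4.9, 5.2, 5.0]
treatment = [5.6, 5.9, 5.8, 5.7, 6.0, 5.8]
print(round(cohens_d(treatment, control), 2))
```

Unlike a p-value, d does not shrink toward "significance" just because the sample grows: it stays anchored to the magnitude of the difference in SD units.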
Replying to @s_r_constantin
Once you grant that dichotomies typically fail to fully describe continuous worlds, you'll also have to grant that single numbers similarly fail to describe multidimensional worlds.
Sure. But "p-values make bad go/no-go decisions" is actually true, for reasons independent from the "never categorize anything" insanity. Small-but-statistically-significant effects should *also* mostly not drive action.