Not rounding would make it even more interesting.
-
The p-value for this distribution of p-values having occurred by chance is zero, I'd bet.
-
But isn’t the problem with these recurring plots that text scrapers record p<0.05 as p=0.05?
-
@lakens has written about this so can say more (I might be wrong) -
This plot is completely uninformative. So sad to see it being retweeted like this. It means scientists are clueless about the real problem. This is indeed p<.05. See Lakens, 2014, 2015 for why it is difficult to learn anything from these analyses.
-
Elaborate more pls. This plot clearly signals fraud imho
- 3 more replies
New conversation -
What's provocative about this slide?
-
p-values should be uniformly distributed under the null (a flat distribution). If there is a real effect, p-values should cluster closer to 0. That there is a magnificent spike at .05 suggests that there is some serious p-hacking and selective reporting/publishing going on...
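That claim is easy to check with a quick simulation. Below is a minimal sketch (Python with numpy/scipy; the two-sample t-test, n=30 per group, and the 0.5 effect size are illustrative choices of mine, not from the thread) showing roughly uniform p-values when the null is true and a pile-up near 0 when there is a real effect:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sims, n = 10_000, 30

def simulate_pvalues(effect):
    """Two-sample t-test p-values for a given true mean difference."""
    pvals = np.empty(n_sims)
    for i in range(n_sims):
        a = rng.normal(0.0, 1.0, n)
        b = rng.normal(effect, 1.0, n)
        pvals[i] = stats.ttest_ind(a, b).pvalue
    return pvals

null_p = simulate_pvalues(effect=0.0)  # H0 true: roughly flat on [0, 1]
alt_p = simulate_pvalues(effect=0.5)   # real effect: piles up near 0

# Under the null each decile holds ~10% of the p-values; under the
# alternative the lowest bins dominate. A spike *at* .05 matches neither.
print(np.histogram(null_p, bins=10, range=(0, 1))[0])
print(np.histogram(alt_p, bins=10, range=(0, 1))[0])
```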
End of conversation
New conversation -
I asked my stats teachers why 5% was the number for "significance." A one-in-twenty chance that you're wrong seems high for so many uses. From junior high through college, all said "that's just the number." Textbooks skimped on this too, leaving it context-free. Balderdash.
-
Nitpick: it's not a 1-in-20 chance you're wrong, it's a 1-in-20 chance of getting a result as strong as yours under the null hypothesis. That distinction is part of the reason I think Bayesian approaches make more sense in many applications -- they better match intuition
-
As strong, OR STRONGER
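To make that definition concrete, the p-value is just a tail probability under the null. A tiny sketch (Python with scipy; the z-statistic of 1.96 is a made-up example, and a one-sided z-test is assumed):

```python
from scipy import stats

z_obs = 1.96  # hypothetical observed z-statistic; under H0, Z ~ N(0, 1)

# The p-value is the probability, under the null, of a statistic as
# strong OR STRONGER than the one observed (one-sided here). It is a
# tail probability, not the probability that the null is true.
p = stats.norm.sf(z_obs)  # survival function: P(Z >= z_obs)
print(round(p, 4))        # ~0.025
```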
End of conversation
New conversation -
Very disturbing. Figures from this study, I believe: https://www.ncbi.nlm.nih.gov/pubmed/26978209
-
I don't know that there's anything sinister here. If you have a real effect, you can keep expending resources to increase N until you can show it.
-
If you keep on gathering data until your p<0.05, then your p-value is not a good estimate of p due to multiple testing.
-
Not really versed in frequentist thinking, but my understanding is not that p is a real thing you're estimating, but a property of your data & null hypotheses. Hence, more independent data can reduce p to significance. Am I missing something?
-
More data is good. But if you test multiple times when you already have enough data, each test gives you another chance of a result that is significant purely by chance, while the p-value is supposed to protect you from exactly that. That is the multiple testing problem. 1/2
-
To take an example from climate: if you have a reason to test the trend for one particular period, the errors of your regression parameters will be right. But if you test all possible start years, you increase your chance of finding a "hiatus", and then the p-value is no longer right. 2/2
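Both points (stopping as soon as p dips below .05, and scanning all start years) are the same inflation mechanism, and it is easy to demonstrate. A minimal sketch (Python with numpy/scipy; the one-sample t-test, n_start=10, n_max=100, and 2,000 simulated experiments are arbitrary illustrative choices):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def optional_stopping(n_start=10, n_max=100, alpha=0.05):
    """One experiment under H0: keep adding observations and re-testing
    (one-sample t-test against 0) until p < alpha or the budget runs out."""
    x = list(rng.normal(0.0, 1.0, n_start))
    while len(x) < n_max:
        if stats.ttest_1samp(x, 0.0).pvalue < alpha:
            return True  # declared "significant" even though H0 is true
        x.append(rng.normal(0.0, 1.0))
    return stats.ttest_1samp(x, 0.0).pvalue < alpha

# A single fixed-n test would be wrong ~5% of the time; peeking after
# every new observation pushes the false positive rate far higher.
# Scanning all possible start years for a "hiatus" inflates it the same way.
print(np.mean([optional_stopping() for _ in range(2000)]))
```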
-
Surely the same problem with multiple comparisons (https://en.wikipedia.org/wiki/Multiple_comparisons_problem) also exists in Bayesian statistics.
End of conversation