The dice in this game had three evenly distributed colours: blue, yellow, and red. People had to report the colour they got on their first dice roll. If it was blue, they got no money; if yellow, 3 Euros; and if red, 5 Euros.
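For context on the paradigm: with three equally likely colours, fully honest reporting gives a uniform split across blue, yellow, and red and an expected payout of (0 + 3 + 5) / 3 ≈ 2.67 Euros per person. Here is a minimal sketch of that honest-reporting benchmark; the sample size and seed are my own assumptions, not values from the paper.

```python
import random

# Payouts per reported colour, as described in the thread (Euros).
PAYOUTS = {"blue": 0, "yellow": 3, "red": 5}

def honest_reports(n_participants: int, seed: int = 0) -> list:
    """Simulate participants who report exactly the colour they rolled."""
    rng = random.Random(seed)
    colours = list(PAYOUTS)
    return [rng.choice(colours) for _ in range(n_participants)]

reports = honest_reports(n_participants=300)  # hypothetical sample size
mean_payout = sum(PAYOUTS[c] for c in reports) / len(reports)

# Honest benchmark: each colour with probability 1/3, so the expected payout
# is (0 + 3 + 5) / 3 ~ 2.67 Euros. A mean well above this in real data is the
# usual signal that people over-report the lucrative colours.
print(f"simulated mean payout: {mean_payout:.2f} EUR (honest benchmark ~2.67 EUR)")
```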
This starts to get a bit murky, because I cannot find a pre-registration for the study. That's worrying, because changing your hypothesis after running a study is a classic sign of p-hacking.
Another sign is the statistical analysis. I count upwards of 100 comparisons (Fisher's exact test, chi-squared, etc.) with no correction for multiple comparisons. That's... worrying.
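To see why that is worrying: even if every single null hypothesis were true, roughly 100 tests at alpha = 0.05 will almost certainly produce several "significant" results by chance alone. A rough simulation, using Fisher's exact test on two groups drawn from the same distribution; the test count and group sizes are illustrative assumptions, not the paper's.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
N_TESTS, N_PER_GROUP, ALPHA = 100, 50, 0.05

# Both groups come from the SAME distribution, so every null hypothesis is
# true and any "significant" result below is a false positive.
false_positives = 0
for _ in range(N_TESTS):
    a = rng.binomial(N_PER_GROUP, 0.5)  # successes in group A
    b = rng.binomial(N_PER_GROUP, 0.5)  # successes in group B
    table = [[a, N_PER_GROUP - a], [b, N_PER_GROUP - b]]
    _, p = stats.fisher_exact(table)
    false_positives += p < ALPHA

# At a nominal alpha of 0.05, 100 independent tests give a family-wise error
# rate of up to 1 - 0.95**100 ~ 0.994 (Fisher's exact test is somewhat
# conservative, but the point stands).
print(f"false positives out of {N_TESTS} tests: {false_positives}")
print(f"nominal family-wise error rate: {1 - (1 - ALPHA) ** N_TESTS:.3f}")
```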
If you apply a Bonferroni correction to the results, pretty much every statistically significant finding completely disappears, which is not surprising given that they ran SO MANY tests.
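For anyone who hasn't met it, the Bonferroni correction just divides the significance threshold by the number of tests (equivalently, multiplies each p-value by it), so with about 100 comparisons the bar moves from 0.05 to 0.0005. A small sketch of that adjustment; the p-values below are invented for illustration, not taken from the paper.

```python
# Illustrative p-values only; NOT the values reported in the paper.
p_values = [0.003, 0.011, 0.021, 0.040, 0.049, 0.320, 0.770]

n_tests = 100                       # roughly the number of comparisons counted above
alpha = 0.05
bonferroni_alpha = alpha / n_tests  # 0.0005

for p in p_values:
    verdict = "significant" if p < bonferroni_alpha else "not significant"
    print(f"p = {p:.3f} -> {verdict} at the corrected threshold of {bonferroni_alpha}")
```

Every one of those nominally significant p-values fails the corrected threshold, which is the pattern being described here.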
Bringing it back home, we have this sentence in the discussion. According to supplementary table 4, this simply isn't true! pic.twitter.com/zNm5GfENI8
Obese people had differences in behaviour, but the statistical comparisons DIDN'T SHOW A SIGNIFICANT DIFFERENCE. Pretty major issue, that.
Anyway, the paper is abhorrent regardless, but I think it also shows some worrying signs of being constructed after the fact from the dataset of a trial with different aims.
Oh, another issue: the paper makes an inherently misleading claim about causality. The primary findings came from a subgroup analysis of non-randomized groups (lean vs. obese), so it's not clear whether the effect was causal at all.
Because the randomization was simply fasted vs. breakfast, the causal attribution for this study should come from comparing those two groups, not the subgroups of obese vs. lean.
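A toy simulation makes the distinction concrete: if only the breakfast/fasting assignment is randomized, a lean vs. obese contrast inherits whatever else travels with body weight, and that alone can manufacture a "difference" with no causal role for the meal. Everything below (effect sizes, the confounder, the sample size) is invented purely to illustrate the point.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000

# Randomized factor: breakfast (1) vs. fasted (0), assigned by coin flip.
breakfast = rng.integers(0, 2, size=n)

# Non-randomized factor: obesity, which here travels with an unmeasured trait
# (the confounder) that also shifts the behavioural outcome.
confounder = rng.normal(size=n)
obese = (confounder + rng.normal(size=n)) > 0

# The outcome depends only on the confounder; breakfast has NO causal effect.
outcome = 0.8 * confounder + rng.normal(size=n)

def mean_diff(mask_a, mask_b):
    return outcome[mask_a].mean() - outcome[mask_b].mean()

# The randomized contrast is close to zero, as it should be...
print("breakfast vs fasted:", round(mean_diff(breakfast == 1, breakfast == 0), 3))
# ...while the non-randomized subgroup contrast shows a sizeable gap that is
# pure confounding, not an effect of the intervention.
print("obese vs lean:      ", round(mean_diff(obese, ~obese), 3))
```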
What do we see?