This guy uses Fisher randomization from the 1930s to call a bunch of econ papers into question: http://personal.lse.ac.uk/YoungA/ChannellingFisher.pdf
I would like to know the computational complexity of this compared to ordinary statistical methods; probably trivial, but I lack the spells
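A minimal sketch of the core idea, assuming a toy two-group experiment and a difference-in-means statistic (not necessarily the paper's exact procedure, which works with published regressions): you re-draw the random assignment many times and see how often the re-randomized estimate beats the observed one, so the cost is roughly the number of re-randomizations times the cost of a single estimate, versus one pass for a conventional t-test.

```python
import numpy as np

def randomization_test(y, treat, n_perm=10_000, seed=0):
    """Fisher-style randomization test for a difference in means.

    Re-randomizes the treatment labels n_perm times and counts how often
    the re-randomized difference in means is at least as large (in absolute
    value) as the observed one. Cost is roughly n_perm times the cost of
    the single estimate a conventional test needs.
    """
    rng = np.random.default_rng(seed)
    observed = y[treat == 1].mean() - y[treat == 0].mean()
    count = 0
    for _ in range(n_perm):
        shuffled = rng.permutation(treat)          # one fake random assignment
        diff = y[shuffled == 1].mean() - y[shuffled == 0].mean()
        if abs(diff) >= abs(observed):
            count += 1
    return (count + 1) / (n_perm + 1)              # randomization p-value

# toy data: 50 treated, 50 control, small true effect
rng = np.random.default_rng(1)
treat = np.repeat([1, 0], 50)
y = rng.normal(0.3 * treat, 1.0)
print(randomization_test(y, treat))
```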
-
-
The other one is the test for excess significance, first proposed here http://datacolada.org/wp-content/uploads/2014/06/3472-Ioannidis-Trikalinos-an-exploratory-test-for-an-excess-of-significance.pdf
-
Essentially it identifies researchers who get too lucky with underpowered studies - lots of criticism, and I don't know how to evaluate it
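A rough sketch of the idea, assuming per-study power estimates are already in hand (the Ioannidis-Trikalinos test estimates power from the meta-analytic effect and uses a chi-square or exact test; the function name and numbers here are illustrative): the expected number of significant results is the sum of the study powers, and you ask whether the observed count is implausibly larger.

```python
import numpy as np
from scipy import stats

def excess_significance_test(power_per_study, n_significant):
    """Simplified excess-significance check: given each study's estimated
    power to detect the assumed true effect, the expected number of
    significant results is the sum of the powers. Compare the observed
    count against that with a binomial test (using mean power as the
    per-study success rate)."""
    powers = np.asarray(power_per_study)
    expected = powers.sum()
    n = len(powers)
    p = stats.binomtest(n_significant, n, powers.mean(),
                        alternative="greater").pvalue
    return expected, p

# toy example: ten studies, each with ~30% power, yet nine "significant"
expected, p = excess_significance_test([0.3] * 10, 9)
print(expected, p)   # expected ~3 significant results, p is tiny -> too many positives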
-
One of the criticisms is that it's so weak it only identifies the really egregious frauds, but apparently there are a lot of those
-
But looking at this https://twitter.com/sarahdoingthing/status/674962320548741120 - how can you have 80+% positive results if power is supposed to be 80%, without fraud?
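Back-of-the-envelope version of that question (80% power and 5% alpha are the usual textbook values; the prior fractions of true hypotheses are hypothetical): the share of significant results tops out at the power even if every tested hypothesis were true, so 80+% positives need higher-than-claimed power, selective reporting, or worse.

```python
def expected_positive_share(prior_true, power=0.80, alpha=0.05):
    """Share of tests expected to come out significant, assuming every true
    effect is detected with probability `power` and every null effect with
    probability `alpha` (the false-positive rate)."""
    return prior_true * power + (1 - prior_true) * alpha

for prior in (0.1, 0.5, 1.0):
    print(prior, expected_positive_share(prior))
# even at prior = 1.0 the positive share is only 0.80
```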
-
If the best big RCTs get 8% positive results, scientists are obviously not perfect predictors: https://twitter.com/sarahdoingthing/status/674961763318693888
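Inverting the same formula with the 8% figure from that tweet (again assuming 80% power and alpha = 0.05, which are just illustrative values) gives the fraction of tested hypotheses that would have to be true:

```python
def implied_prior(positive_share, power=0.80, alpha=0.05):
    """Invert positive_share = prior*power + (1-prior)*alpha to recover the
    fraction of tested hypotheses that would have to be true."""
    return (positive_share - alpha) / (power - alpha)

print(implied_prior(0.08))   # ~0.04: only about 4% of hypotheses pan out
```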
-
I'm reading the old Fisher books and as much other stuff as I can but advice appreciated