How could any amount of data contain no evidence at all? The statistician must be wrong.
-
-
-
Noise. Randomness.
-
As a Bayesian, the only way I see that the experiment - even a noisy one - can provide exactly zero information about the hypothesis is if the experiment is completely unrelated to the hypothesis or otherwise fatally flawed.
-
The SME is probably better than the statistician at detecting these types of flaws.
-
Two people argue about which pet is more common: cats or dogs. They decide to go to a vet and see what animals come in. First, a ferret. Then an emu, and so on. By the end of the day, no cats or dogs. No information about the hypotheses. A reasonable experiment, but uncooperative data.
-
Fair enough, but I would call that fatally flawed and I imagine the SME would be in a better position to know that ferrets and emus are unrelated and that this sampling frame selects on sickness. I could be wrong, of course. :)
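The zero-information case discussed above can be made concrete with Bayes' rule: if the observed data are equally likely under the hypothesis and its negation, the likelihood ratio is 1 and the posterior equals the prior. A minimal sketch, with illustrative numbers (the 0.7 prior and 0.05 likelihoods are assumptions, not from the thread):

```python
def posterior(prior: float, lik_h: float, lik_not_h: float) -> float:
    """Bayes' rule for a binary hypothesis H vs. not-H."""
    return (lik_h * prior) / (lik_h * prior + lik_not_h * (1 - prior))

# Prior belief that H ("cats are more common than dogs") is true.
prior = 0.7

# Seeing a ferret at the vet is equally likely whether cats or dogs are
# more common, so the likelihoods under H and not-H are identical and the
# update does nothing: the experiment carried zero information about H.
post = posterior(prior, lik_h=0.05, lik_not_h=0.05)
print(post)
```

Running this prints the prior back unchanged, which is exactly the "reasonable experiment, uncooperative data" situation: the update is valid, it just moves nothing.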
End of conversation
New conversation -
-
-
Both can be right, as "being more confident about the hypothesis" is definitely not the same as "statistically showing a hypothesis to be true".
-
What do you think is the difference?
-
Wearing my oncologist hat, I will treat a patient if I see a clinically relevant trend in outcomes even if my pharmacoepi hat tells me the null hypothesis is statistically not ruled out. Decision making under epistemological uncertainty is different from statistical reasoning.
-
Yet the data supporting either decision making process are the same.
End of conversation
New conversation -
-
-
What does "...finds no evidence that the hypothesis is true..." mean?
-
Can you elaborate? Maybe I'm missing something and you can help me come up with a better version of the question.
-
I suppose it boils down to what you mean by "best techniques". How did the statistician decide there was no evidence? I'm presuming they didn't just decide based on statistical significance.
End of conversation
New conversation -
-
-
I put 'Yes' but I want to be on the record as saying I am deeply skeptical of expert opinion that isn't validated by statistical methods. The 'Yes' is because it's pretty feasible that the expert's priors are right, but a more conservative prior finds insufficient support.
-
-
-
The subject matter expert is a statistician, innit?
End of conversation
New conversation -
-
-
A domain expert will be much more aware of other corroborating evidence and of what we should expect from the experiment (whether the controls behave as expected, for example). We should avoid p-hacking and so on, but we mustn't dismiss the expertise of those who know the system better.
-
-
-
Amateur here, but doesn't your statement still allow for the presence of evidence against competing hypotheses (which the statistician may simply be unaware of)? I guess that's not what you mean to ask with the hypothetical, though.
-
-
-
Sounds like you are just describing Bayesian and frequentist analyses. Both are right in their own framework: the Bayesian because the expert believes it, the frequentist because there is no evidence. To find work for the statistician, you can downdate to find the prior.
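The "same data, two verdicts" point above can be sketched numerically. Assume (these numbers are illustrative, not from the thread) 7 successes in 10 trials and an expert whose prior on the success probability is Beta(8, 2): an exact one-sided binomial test fails to reject H0: p = 0.5, while the expert's posterior probability that p > 0.5 is high.

```python
import math
import random

k, n = 7, 10  # observed successes out of n trials (assumed numbers)

# Frequentist: exact one-sided binomial test of H0: p = 0.5.
p_value = sum(math.comb(n, i) for i in range(k, n + 1)) / 2**n
print(f"p-value = {p_value:.3f}")  # not significant at alpha = 0.05: "no evidence"

# Bayesian: the expert's informative Beta(8, 2) prior on p, updated on the
# same data, gives a Beta(8 + k, 2 + n - k) posterior. Estimate
# P(p > 0.5 | data) by Monte Carlo sampling from that posterior.
random.seed(0)
draws = [random.betavariate(8 + k, 2 + n - k) for _ in range(100_000)]
post_prob = sum(d > 0.5 for d in draws) / len(draws)
print(f"P(p > 0.5 | data) approx {post_prob:.2f}")  # well above 0.5
```

Same data, both calculations correct in their own framework: the frequentist reports no evidence against the null, the Bayesian expert ends up more confident than before.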
-
-
-
What's the sample size? Power? Alpha? The prior? Cost of making a mistake?
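Why these questions matter can be sketched with an exact power calculation for a one-sided binomial test (all numbers are illustrative assumptions, not from the thread): when a real but modest effect meets a small sample, "finds no evidence" is the expected outcome.

```python
import math

def critical_value(n: int, alpha: float = 0.05) -> int:
    """Smallest k such that P(X >= k | p = 0.5) < alpha (one-sided binomial)."""
    tail = 0.0
    for k in range(n, -1, -1):
        tail += math.comb(n, k) / 2**n
        if tail >= alpha:
            return k + 1
    return 0

def power(n: int, true_p: float, alpha: float = 0.05) -> float:
    """Exact probability of rejecting H0: p = 0.5 when the truth is true_p."""
    kc = critical_value(n, alpha)
    return sum(math.comb(n, k) * true_p**k * (1 - true_p)**(n - k)
               for k in range(kc, n + 1))

# A real effect (true p = 0.6) with a small vs. a large sample:
print(f"{power(20, 0.6):.2f}")   # underpowered: rejection is unlikely
print(f"{power(300, 0.6):.2f}")  # well-powered: the same effect is usually found
```

So "no evidence" from an underpowered study says more about the design than about the hypothesis, which is why sample size, power, alpha, the prior, and the cost of error all belong in the question.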
-