@St_Rev Relevant Shalizi result: Bayesian learner, starting with exactly correct prior, can diverge arbitrarily badly http://vserver1.cscs.lsa.umich.edu/~crshalizi/weblog/606.html
@St_Rev Subtitled “Often Wrong, Never In Doubt,” which sums up the problem with #Bayesianism… Also has a good joke about crunchy integration.
@St_Rev I haven’t taken the time to think through the exact implications of the construction—how pathological is this?—though.
@St_Rev “This is very simple. If the set of considered models does not contain the true model then Bayesian updating can go very wrong.
@St_Rev > “But how does a Bayesian know that her process includes the true model without leaving the reference frame of her church?”
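The failure mode quoted above can be shown with a toy misspecified model class. This is an illustrative sketch of my own, not Shalizi's construction: the true coin bias (0.6) is simply excluded from the two-model hypothesis set, yet the posterior still concentrates with near-total confidence on one of the wrong models.

```python
import math

# Hypothetical setup (not Shalizi's example): the true coin has
# p = 0.6, but the model class only contains p = 0.2 and p = 0.8,
# so the true model is excluded by construction.
models = [0.2, 0.8]
log_post = {p: math.log(0.5) for p in models}  # uniform prior

# Deterministic data matching the true rate exactly:
# 60 heads, 40 tails out of 100 flips.
data = [1] * 60 + [0] * 40

# Standard Bayesian updating: accumulate log-likelihoods.
for x in data:
    for p in models:
        log_post[p] += math.log(p if x == 1 else 1 - p)

# Normalize to a proper posterior.
z = math.log(sum(math.exp(v) for v in log_post.values()))
post = {p: math.exp(v - z) for p, v in log_post.items()}

# The posterior puts essentially all mass on p = 0.8 — supremely
# confident, and wrong: "Often Wrong, Never In Doubt."
print(post)
```

Nothing here is pathological in the data; the certainty is manufactured entirely by the prior's restricted support, which is the point of the quoted objection.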
End of conversation
New conversation
@St_Rev Aumann's agreement theorem