Guys, what's that damn theorem about rational Bayesian agents w/access to same data agreeing? I am bad at names.
Replying to @St_Rev
@St_Rev Relevant Shalizi result: a Bayesian learner, starting with an exactly correct prior, can diverge arbitrarily badly http://vserver1.cscs.lsa.umich.edu/~crshalizi/weblog/606.html
Replying to @Meaningness
@St_Rev Subtitled “Often Wrong, Never In Doubt,” which sums up the problem with #Bayesianism… Also has a good joke about crunchy integration.
Replying to @Meaningness
@St_Rev I haven’t taken the time to think through the exact implications of the construction (how pathological is this?), though.
Replying to @Meaningness
@St_Rev Discussion: http://delong.typepad.com/sdj/2009/03/cosma-shalizi-takes-me-to-probability-school-or-is-it-philosophy-school.html and http://tsm2.blogspot.com/2009/03/theology-and-probability.html
Replying to @Meaningness
@St_Rev “This is very simple. If the set of considered models does not contain the true model then Bayesian updating can go very wrong.” >
@St_Rev > “But how does a Bayesian know that her process includes the true model without leaving the reference frame of her church?”
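A minimal sketch of the quoted point (my illustration, not Shalizi’s construction and not taken from the linked posts): the true process is a fair coin, but the learner’s hypothesis set contains only two biased coins, so the posterior piles onto the least-wrong model and becomes arbitrarily confident in it anyway.

```python
# Sketch of misspecified Bayesian updating ("often wrong, never in doubt").
# Assumption for illustration: the true coin (P(heads) = 0.5) is NOT in the model set.
import math
import random

random.seed(0)

TRUE_P = 0.5  # true data-generating process: a fair coin
models = {"p=0.6": 0.6, "p=0.9": 0.9}  # both candidate models are wrong
log_post = {name: math.log(1.0 / len(models)) for name in models}  # uniform prior

for n in range(1, 1001):
    heads = random.random() < TRUE_P
    for name, p in models.items():
        log_post[name] += math.log(p if heads else 1.0 - p)
    if n in (10, 100, 1000):
        # Normalize in log space to avoid underflow, then report the posterior.
        m = max(log_post.values())
        z = sum(math.exp(v - m) for v in log_post.values())
        posterior = {k: round(math.exp(v - m) / z, 4) for k, v in log_post.items()}
        print(f"after {n} flips: {posterior}")
```

The posterior converges to near-certainty on the model closest in KL divergence to the truth (here p=0.6), even though the data come from a fair coin; updating alone never signals that the whole model set is wrong.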