3/ Rationalists’ expected failure modes: parameter uncertainty, incomplete information of known types, insufficient computation power, etc.
Replying to @Meaningness
4/ Rationality actually works through intelligent interpretation of inherently ambiguous rules in concrete but ambiguous situations.
5/ Some typical rational failure modes: model vocabulary fails to make relevant distinctions; sensible rule misinterpreted in specific case;
6/ major aspects of circumstances were unexpectedly not accounted for by the model *at all*—it’s not even wrong, it’s entirely inapplicable;
7/ rationally recommended course of action is infeasible, ignored, or obstructed, and next-best option is not part of the story;
8/ relevant common-sense observations can’t be fit into the model because its vocabulary doesn’t cover them; etc. (Maybe this needs a post!)
Replying to @Meaningness
I have been musing on whether a Pure Reason Machine is possible … and if so, useful …
Replying to @miniver @Meaningness
… so that a scientist or public policy wonk or biz analyst could check that their arguments at least hold water
Replying to @miniver
This has been an attractive idea for decades, and I do sort of feel it should be possible, but no one has made it work.
Replying to @Meaningness
I gather that one can do it in limited domains. But it runs counter to current machine learning fashions.
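The "limited domains" point can be made concrete: for plain propositional logic, checking whether an argument "holds water" is entirely mechanical. A minimal sketch of such a checker (all names and the example arguments here are illustrative, not taken from the thread):

```python
from itertools import product

def entails(premises, conclusion, variables):
    """Brute-force validity check: the argument is valid iff the
    conclusion holds in every truth assignment satisfying all premises."""
    for values in product([True, False], repeat=len(variables)):
        env = dict(zip(variables, values))
        if all(p(env) for p in premises) and not conclusion(env):
            return False  # found a countermodel: premises true, conclusion false
    return True

# Valid: modus ponens (P, P -> Q, therefore Q)
mp_premises = [lambda e: e["P"], lambda e: (not e["P"]) or e["Q"]]
print(entails(mp_premises, lambda e: e["Q"], ["P", "Q"]))  # True

# Invalid: affirming the consequent (P -> Q, Q, therefore P)
ac_premises = [lambda e: (not e["P"]) or e["Q"], lambda e: e["Q"]]
print(entails(ac_premises, lambda e: e["P"], ["P", "Q"]))  # False
```

This is exactly the kind of thing that works in a narrow formal domain and fails to scale to the ambiguous, open-ended situations the earlier tweets describe: the checker is only as good as the formalization it is handed.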
Well, current machine learning stuff is impressive in a narrow class of applications, but certainly not at reasoning sorts of things.