6/ major aspects of circumstances were unexpectedly not accounted for by the model *at all*—it’s not even wrong, it’s entirely inapplicable;
Replying to @Meaningness
7/ rationally recommended course of action is infeasible, ignored, or obstructed, and next-best option is not part of the story;
Replying to @Meaningness
8/ relevant common-sense observations can’t be fit into the model because its vocabulary doesn’t cover them; etc. (Maybe this needs a post!)
Replying to @Meaningness
I have been musing on whether a Pure Reason Machine is possible … and if so, useful …
Replying to @miniver @Meaningness
… so that a scientist or public policy wonk or biz analyst could check that their arguments at least hold water
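The "check that their arguments at least hold water" idea can be made concrete in a tiny way. A minimal sketch, not anything the thread proposes: a brute-force truth-table checker that asks whether a propositional argument is valid, i.e. whether the conclusion holds in every assignment where all the premises hold. The function name `holds_water` and the example arguments are invented for illustration.

```python
from itertools import product

def holds_water(premises, conclusion, variables):
    """Truth-table check of a propositional argument.

    premises, conclusion: functions from an assignment dict to bool.
    Returns (True, None) if valid, else (False, counterexample)."""
    for values in product([False, True], repeat=len(variables)):
        env = dict(zip(variables, values))
        # A counterexample is an assignment where all premises hold
        # but the conclusion fails.
        if all(p(env) for p in premises) and not conclusion(env):
            return False, env
    return True, None

# Modus ponens: P, P -> Q, therefore Q -- valid.
ok, _ = holds_water(
    premises=[lambda e: e["P"], lambda e: (not e["P"]) or e["Q"]],
    conclusion=lambda e: e["Q"],
    variables=["P", "Q"],
)
print(ok)  # True

# Affirming the consequent: P -> Q, Q, therefore P -- invalid.
ok, counterexample = holds_water(
    premises=[lambda e: (not e["P"]) or e["Q"], lambda e: e["Q"]],
    conclusion=lambda e: e["P"],
    variables=["P", "Q"],
)
print(ok, counterexample)  # False {'P': False, 'Q': True}
```

Of course, this only works once the argument has already been formalized; the thread's whole point (tweets 6/–8/) is that getting from a messy real situation into such a formalism is where the trouble lives.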
Replying to @miniver
This has been an attractive idea for decades, and I do sort of feel it should be possible, but no one has made it work.
Replying to @Meaningness
I gather that one can do it in limited domains. But it runs counter to current machine learning fashions.
Replying to @miniver @Meaningness
I am tempted to imagine a philosopher sitting down to clean up the error messages generated by the Reason Machine …
Replying to @miniver
https://en.wikipedia.org/wiki/Decision_analysis … is one of several attempts along this general line
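For readers unfamiliar with the linked field: the core move in textbook decision analysis is to list options, attach probabilities and utilities to their outcomes, and pick the option with the highest expected utility. A toy sketch with entirely hypothetical numbers (the option names and values are invented, not from the thread):

```python
def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one option."""
    return sum(p * u for p, u in outcomes)

# Hypothetical launch-now vs. wait decision, illustrative numbers only.
options = {
    "launch_now": [(0.6, 100), (0.4, -50)],  # EU = 0.6*100 - 0.4*50 = 40
    "wait":       [(0.9, 60),  (0.1, 0)],    # EU = 0.9*60          = 54
}

best = max(options, key=lambda name: expected_utility(options[name]))
print(best)  # wait
```

Real decision analysis wraps this arithmetic in elicitation, sensitivity analysis, and value-of-information steps; the much-harder-than-it-looks part, as the thread notes, is whether the model's vocabulary covers the situation at all.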
Replying to @Meaningness
Yeah. Part of what is interesting is that this is one of those much-harder-than-it-looks problems.
Yes; and exactly for the “rationality doesn’t work the way rationalists think” reasons!