Learning on a *closed* set is a trivial extension of the optimization mental model. Learning in an open domain breaks it
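(A minimal sketch of the contrast, toy code of my own and not from the thread: over a closed hypothesis set, "learning" collapses into a one-shot argmax; in an open domain the candidates keep arriving, so there is no fixed space to optimize over and every answer stays provisional.)

# Hypothetical illustration: closed-set "learning" is just optimization;
# open-domain learning never terminates in a final answer.

def closed_set_learn(hypotheses, score):
    # Fixed, enumerable search space: pick the best and you're done.
    return max(hypotheses, key=score)

def open_domain_learn(hypothesis_stream, score):
    # The space is revealed over time; every "best" is provisional,
    # because the next arrival can overturn it (or the scoring frame itself).
    best = None
    for h in hypothesis_stream:
        if best is None or score(h) > score(best):
            best = h
        yield best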
-
-
-
Ditto for compute costs. For problems in P, you can throw compute cost into the utility fn. For NP-hard problems, you're screwed.
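(A hedged toy illustration of that move, numbers and names mine: fold compute cost into the objective as value minus cost-per-step times steps. For a polynomial-time solver the correction stays negligible; for exponential brute force it swamps the value term.)

# Toy numbers, purely illustrative.
C_PER_STEP = 1e-9   # assumed cost per unit of compute
VALUE = 100.0       # assumed value of a solved instance

def net_utility(steps):
    return VALUE - C_PER_STEP * steps

for n in (20, 40, 60):
    poly_steps = n ** 3   # stand-in for a problem in P
    expo_steps = 2 ** n   # stand-in for brute-forcing an NP-hard instance
    print(n, round(net_utility(poly_steps), 4), net_utility(expo_steps))

# By n = 60 the exponential term drives "utility" to roughly -1.15e9,
# while the polynomial correction never moves the needle.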
-
-
-
A deep fallacy in casual use of the 'optimization' metaphor: the assumption that adding info to optimization frames is trivial
-
-
-
common assumption: "learning" is just adding info-costs, bounded rationality is just compute-cost. No.
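(For contrast, a sketch of exactly the assumption being rejected; the function names are mine: learning and bounded rationality as two extra penalty terms bolted onto a fixed objective.)

def naive_total_objective(action, base_utility, info_cost, compute_cost):
    # The "just add cost terms" picture: one fixed frame, three scalars.
    return base_utility(action) - info_cost(action) - compute_cost(action)

# The objection in these tweets: real learning changes the frame itself
# (which actions exist, what the objective even is); no cost term added
# inside a fixed frame captures that.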
-
-
-
My general reaction to "efficiency" is "eww." I have a visceral aversion to anybody who thinks optimization-first.
-
Sometimes the task is such that you have no option but to start with every optimization in mind.
-
-
-
"When two people meet, they often ask each other What do you do?" spooky b/c
@Vaguery http://vaguery.com/words/do-it-wrong-together …Thanks. Twitter will use this to make your timeline better. UndoUndo
-
-
-
never do a raccoon a favor
@sarahdoingthing @vgr @sjmoody pic.twitter.com/P8L0UmUV8u
-