Perhaps we need a hybrid solution: general, fast, stupid learning, combined with expensive local problem solving at exactly the points where that learning does not converge.
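A minimal sketch of that division of labor, under my own assumptions (all function names here are illustrative, not anyone's actual method): a cheap gradient step runs everywhere, and an expensive local search is triggered only when the cheap step stops making progress.

```python
import numpy as np

def loss(w, x, y):
    """Squared error of a linear predictor; stands in for any loss."""
    return float(np.mean((x @ w - y) ** 2))

def fast_step(w, x, y, lr=0.01):
    """General, fast, 'stupid' learning: a single gradient step."""
    grad = 2 * x.T @ (x @ w - y) / len(y)
    return w - lr * grad

def local_search(w, x, y, trials=200, scale=0.1, rng=None):
    """Expensive local problem solving: random proposals around w,
    invoked only where gradient learning has stopped converging."""
    rng = rng or np.random.default_rng(0)
    best_w, best_l = w, loss(w, x, y)
    for _ in range(trials):
        cand = w + scale * rng.standard_normal(w.shape)
        l = loss(cand, x, y)
        if l < best_l:
            best_w, best_l = cand, l
    return best_w

def hybrid_fit(x, y, steps=1000, tol=1e-6):
    w = np.zeros(x.shape[1])
    prev = loss(w, x, y)
    for _ in range(steps):
        w = fast_step(w, x, y)
        cur = loss(w, x, y)
        if prev - cur < tol:           # cheap learning no longer converges
            w = local_search(w, x, y)  # spend effort locally instead
            cur = loss(w, x, y)
        prev = cur
    return w
```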
-
I think that at the elementary level of inference, namely cortical belief propagation, there is a prior to expect change, which allows the system to switch between and refine hypotheses (this also resolves the "Dark-Room Problem"). This prior is also somewhat modulated by how well the inference agrees with the incoming evidence.
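One hypothetical way to read this, sketched under my own assumptions (a discrete hypothesis space and a hazard-rate formulation of the change prior; the agreement modulation is my reading of the last sentence): at every step some probability mass leaks back toward the uniform prior, so no hypothesis can lock in permanently, and the leak shrinks when the evidence agrees with current beliefs.

```python
import numpy as np

def update(posterior, likelihoods, base_hazard=0.1):
    """One belief-update step over discrete hypotheses with a change prior."""
    posterior = posterior * likelihoods
    posterior /= posterior.sum()
    # Agreement in [0, 1]: how well the data matched current beliefs.
    agreement = float(np.dot(posterior, likelihoods)) / likelihoods.max()
    # Expect less change when the inference agrees with the evidence.
    hazard = base_hazard * (1.0 - agreement)
    uniform = np.full_like(posterior, 1.0 / len(posterior))
    return (1.0 - hazard) * posterior + hazard * uniform

p = np.full(3, 1 / 3)
for _ in range(50):
    p = update(p, np.array([0.9, 0.05, 0.05]))
# p never collapses fully onto hypothesis 0; the change prior keeps
# some mass on the alternatives, so a later switch remains possible.
```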
-
I think backtracking is instead a cognitive strategy that is acquired through trial and error or through culture.