Very interesting. Hypothesis: this works for modelling relatively "low complexity" data. Recurrent loops of fitting to the outputs of previous iterations strip away the "high complexity" noise and leave only the underlying low-complexity phenomenon. Wondering how to formalize and test this.
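One way to poke at this: a minimal numpy sketch, assuming the learner is a simple Gaussian kernel smoother and the "low complexity" phenomenon is a sine wave buried in noise. Each loop refits the smoother to the previous loop's outputs and tracks the gap to the clean signal. The setup (smoother choice, bandwidth, noise level) is made up for illustration, not taken from the thread.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 2 * np.pi, 200)
signal = np.sin(x)                       # underlying low-complexity phenomenon
y = signal + rng.normal(0, 0.3, x.size)  # observations with high-complexity noise

# Gaussian kernel smoother: "fitting" is just multiplying the targets by S.
bandwidth = 0.25
S = np.exp(-0.5 * ((x[:, None] - x[None, :]) / bandwidth) ** 2)
S /= S.sum(axis=1, keepdims=True)        # rows sum to 1 -> local weighted average

targets = y
for it in range(1, 11):
    preds = S @ targets                          # fit to the current targets
    gap_noisy = np.abs(preds - y).mean()         # distance to the noisy data
    gap_clean = np.abs(preds - signal).mean()    # distance to the clean signal
    print(f"iter {it:2d}: |pred - noisy data| = {gap_noisy:.3f}, "
          f"|pred - clean signal| = {gap_clean:.3f}")
    targets = preds                              # next loop fits to these outputs
```

Because the smoother is linear, loop k just applies S k times, so the high-frequency (noise) components shrink fastest; too many loops eventually oversmooth the signal itself, which would be one concrete way to test where the hypothesis breaks.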
-
Wilder speculation: is this somewhat similar to confirmation bias? In each loop the learner becomes more "biased" towards the phenomenon being modelled.