As far as current machine learning is concerned, generalization originates from the ability to learn the latent manifold on which the training data lies, i.e. the ability to interpolate between training samples (which is local generalization, by definition).
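A minimal sketch of this picture (my illustration, not from the thread; the names embed and dist_to_manifold are made up for it): training samples sit on a 1-D manifold, here the unit circle, embedded in 2-D. Interpolating in the latent coordinate stays on the manifold, while linearly interpolating distant samples in ambient space does not.

```python
# Toy illustration (assumed setup): the "data manifold" is the unit
# circle, parameterised by a latent coordinate t.
import numpy as np

def embed(t):
    # hypothetical latent-to-ambient map: latent t -> point on the circle
    return np.stack([np.cos(t), np.sin(t)], axis=-1)

t_a, t_b = 0.0, 2.0                          # latent coords of two samples
mid_latent  = embed((t_a + t_b) / 2)         # interpolate on the manifold
mid_ambient = (embed(t_a) + embed(t_b)) / 2  # interpolate in ambient space

def dist_to_manifold(p):
    grid = np.linspace(0.0, 2.0 * np.pi, 10_000)
    return float(np.min(np.linalg.norm(embed(grid) - p, axis=1)))

print(dist_to_manifold(mid_latent))   # ~0.0:  still on the manifold
print(dist_to_manifold(mid_ambient))  # ~0.46: the linear mix left it
```

For nearby samples the two midpoints nearly coincide, which is why manifold interpolation behaves like local generalization.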
-
-
But (by the same no-free-lunch theorems you invoke) there's NO difference between interpolation and extrapolation. To say "interpolation is a more certain generalisation than extrapolation" is to say "the data is generated by a function linear in THESE particular features".
-
Note: obviously you know far more about learning theory than I do, so I'm just asking what I'm missing: under reparameterisations of the feature and target spaces, we can switch between interpolation and extrapolation.
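A minimal sketch of that last point (my construction, not the poster's): a smooth, invertible reparameterisation phi of a 2-D feature space moves a query from inside the convex hull of the training inputs (interpolation) to outside it (extrapolation).

```python
# Assumed toy setup: phi is a hypothetical bijective "shear" of feature
# space, (x, y) -> (x, y - x^2), with inverse (u, v) -> (u, v + u^2).
import numpy as np
from scipy.spatial import Delaunay

def in_hull(q, pts):
    # q lies in the convex hull of pts iff it falls in some simplex
    # of a Delaunay triangulation of pts
    return bool(Delaunay(pts).find_simplex(q) >= 0)

X = np.array([[-1.0, 0.0], [1.0, 0.0], [-1.0, 1.0], [1.0, 1.0]])  # training inputs
q = np.array([0.0, 0.5])                                          # query point

def phi(p):
    # smooth, invertible reparameterisation of feature space
    return np.stack([p[..., 0], p[..., 1] - p[..., 0] ** 2], axis=-1)

print(in_hull(q, X))            # True:  interpolation in the original features
print(in_hull(phi(q), phi(X)))  # False: extrapolation after reparameterisation
```

At least two feature dimensions are needed here: a monotone bijection of a single feature preserves betweenness, so a 1-D example cannot flip the classification.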
End of conversation
New conversation -
-
-
H. Poincaré (1908): the most interesting facts are those that can serve several times; they are those that have a chance of recurring... So which facts have a chance of recurring? They are, first of all, the simple facts.
-
and, in the same vein: Einstein: An explanation of data should be made as simple as possible, but no simpler.
End of conversation
New conversation -