A model compresses a state space by capturing a set of invariances that predict the variance in the states. Its free parameters define the model's latent space and should ideally correspond exactly to the variability: the non-invariant (= unexplained) remainder of the state.
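As a toy illustration of this idea (my sketch, not anything from the thread itself): PCA compresses a state space by finding an invariance shared by all states, and each state's coordinate along the leading component is the single free parameter that carries the remaining variability.

```python
import numpy as np

# Toy data: 2-D states that actually vary along a single direction.
rng = np.random.default_rng(0)
t = rng.normal(size=(200, 1))            # the true 1-D variability
direction = np.array([[3.0, 1.0]])       # invariant structure shared by all states
states = t @ direction + 0.01 * rng.normal(size=(200, 2))

# PCA compresses the 2-D state space: the leading component captures
# the invariance (all states lie near one line), and each state's
# coordinate along that line is the latent free parameter carrying
# the non-invariant remainder.
centered = states - states.mean(axis=0)
_, s, vt = np.linalg.svd(centered, full_matrices=False)
latent = centered @ vt[0]                # one free parameter per state

explained = s[0] ** 2 / (s ** 2).sum()
print(f"variance explained by one latent dimension: {explained:.3f}")
```

Here the latent space "fully corresponds to the variability" in the sense that one coordinate per state accounts for nearly all of the variance.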
The proper representation of invariances (which includes the allowed set of values of the free parameters) is conditional on the reason for an invariance. The relationship to this reason is itself an invariance that needs to be made conditional on the foundations of its semantics.
Replying to @Plinz
2) Here's a probably-wrong paraphrase of the above: Models are proper (truer?) when they reason about the invariances they use. This reasoning is itself a model, and so it also depends on 'reasons', which are "the foundations of its semantics". Um, but what are those?


Replying to @mattgiammarino
It seems that universal computation and Bayes are at the root of modeling and learning.
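A minimal sketch of what "Bayes at the root of learning" looks like mechanically (my illustration, not a claim about the thread's specific architecture): a discrete prior over hypotheses is updated by evidence, and the posterior concentrates on hypotheses that predict the observations.

```python
import numpy as np

# Hypothesis space: possible biases of a coin, with a uniform prior.
biases = np.linspace(0.01, 0.99, 99)
posterior = np.ones_like(biases) / len(biases)

data = [1, 1, 0, 1, 1, 1, 0, 1]          # observed flips (1 = heads)
for flip in data:
    # Bayes' rule: posterior ∝ prior × likelihood of the observation.
    likelihood = biases if flip else (1 - biases)
    posterior *= likelihood
    posterior /= posterior.sum()          # renormalize

print(f"posterior mean bias: {(biases * posterior).sum():.3f}")
```

With 6 heads out of 8 flips and a uniform prior, the posterior mean lands near the Beta(7, 3) mean of 0.7.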
Replying to @Plinz
Ok, so the layers in learning/modeling are: computation + Bayes <--> languages that can be used to reason <--> reasons <--> perceptual models. Coherence among these means you have a proper model?
Understanding means that we think we found a mapping to a thing we think we can already compute. (We rarely prove any of that to ourselves.)