Yes, or rather the general version; I think it is much more generalizable. I would like to hear audio, speech, and music made with it, look at NPC behavior in games, and perhaps see computational models of non-semantic perceptual imagination.
-
-
I think this is only tractable because it models probabilities with simple correlations over a small number of states. Backtracking & modeling would be intractable with learnt, high-dimensional states (GANs and Glow are SOTA, and neither can backtrack).
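A minimal sketch of the setup this reply describes, not code from the thread: probabilities are nothing more than pairwise correlations read off a sample, the state set is small and explicit, and backtracking over it is cheap. The 1D toy setting, the function names, and the sample string are my own illustrative assumptions.

```python
# Toy illustration: "probabilities" as simple pairwise correlations over a small,
# explicit state set, where depth-first backtracking remains tractable.
from typing import Dict, List, Set

def learn_adjacency(sample: str) -> Dict[str, Set[str]]:
    """Record which symbol may follow which -- the 'simple correlations'."""
    allowed: Dict[str, Set[str]] = {}
    for a, b in zip(sample, sample[1:]):
        allowed.setdefault(a, set()).add(b)
    return allowed

def generate(length: int, states: List[str], allowed: Dict[str, Set[str]]) -> List[str]:
    """Depth-first generation with backtracking over the small state space."""
    out: List[str] = []

    def extend(i: int) -> bool:
        if i == length:
            return True
        candidates = states if i == 0 else sorted(allowed.get(out[-1], set()))
        for s in candidates:
            out.append(s)
            if extend(i + 1):   # recurse; on failure, undo and try the next state
                return True
            out.pop()           # backtracking is cheap: states are few and explicit
        return False

    return out if extend(0) else []

if __name__ == "__main__":
    allowed = learn_adjacency("abacabadabacaba")
    print("".join(generate(12, ["a", "b", "c", "d"], allowed)))
```

With learnt, high-dimensional latent states there is no such explicit candidate list to enumerate and undo, which is the tractability point being made.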
-
Perhaps we need a hybrid solution: general, fast and stupid learning, and expensive local problem solving at exactly the points where it does not converge.
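A toy stand-in for that hybrid idea, again my own construction rather than anything from the thread: a fast, greedy ("stupid") sampler does most of the work over the same kind of small adjacency table, and a more expensive local step, here crudely reduced to erasing and re-solving a short window, is invoked only at the points where the greedy pass fails to converge. The window size, repair budget, and hand-written table are illustrative assumptions.

```python
# Hybrid sketch: cheap greedy sampling everywhere, local repair only at dead ends.
import random
from typing import Dict, List, Set

def hybrid_generate(length: int, states: List[str],
                    allowed: Dict[str, Set[str]],
                    window: int = 3, max_repairs: int = 1000) -> List[str]:
    out: List[str] = []
    repairs = 0
    while len(out) < length:
        prev = out[-1] if out else None
        options = list(states) if prev is None else list(allowed.get(prev, set()))
        if options:
            out.append(random.choice(options))  # fast, local step with no lookahead
        else:
            # The cheap pass did not converge here: spend effort locally by undoing
            # the last `window` choices so a different path can be explored.
            repairs += 1
            if repairs > max_repairs:
                raise RuntimeError("local repair budget exhausted")
            del out[max(0, len(out) - window):]
    return out

if __name__ == "__main__":
    # 'c' has no continuation, so the greedy sampler must occasionally repair around it.
    allowed = {"a": {"a", "b"}, "b": {"a", "c"}}
    print("".join(hybrid_generate(16, ["a", "b", "c"], allowed)))
```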