Conversation

Are there well known non-frequency-domain compression techniques (lossy or lossless) that exploit information sparsity? Like the first image could be segmented as in the second image, and only the 4 non-empty rectangles stored. That sort of approach.
[Image 1: a sparse image]
[Image 2: the same image segmented, with the 4 non-empty rectangles marked]
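A minimal sketch of that rectangle idea, assuming the image is a 2D numpy array and using scipy's connected-component labelling (the `compress`/`decompress` names are just illustrative, not from the thread):

```python
import numpy as np
from scipy import ndimage

def compress(img, background=0):
    """Store only the bounding boxes of non-empty regions plus their pixels."""
    labels, n = ndimage.label(img != background)   # connected non-empty blobs
    boxes = ndimage.find_objects(labels)           # one (row_slice, col_slice) per blob
    # Each entry: (top, left, patch), enough to paste the region back later.
    return img.shape, [(sl[0].start, sl[1].start, img[sl].copy()) for sl in boxes]

def decompress(shape, patches, background=0):
    out = np.full(shape, background, dtype=patches[0][2].dtype)
    for top, left, patch in patches:
        out[top:top + patch.shape[0], left:left + patch.shape[1]] = patch
    return out

# Toy example: a mostly-empty 100x100 image with two small marks.
img = np.zeros((100, 100), dtype=np.uint8)
img[10:14, 20:25] = 7
img[60:70, 60:62] = 3
shape, patches = compress(img)
assert np.array_equal(decompress(shape, patches), img)
print(len(patches), "rectangles stored instead of", img.size, "pixels")
```

This stays lossless as long as the bounding boxes of separate blobs don't overlap; overlapping boxes would need an extra mask or merge step.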
Non-frequency domain because it seems to me that preserving feature legibility is a useful property. I can say "rectangle id 14 contains a smiley."
Example of why it’s useful: you can navigate (inefficiently) directly on a compressed map if it’s not in some other conformal space. Like knowing only the highway topology. A point-to-point shortest path in the highway graph won’t be the real shortest path, but it will avoid obstacles.
Quote tweet (replying to @vgr and @meditationstuff):
(the robots move on a graph connecting the centroids of the open rectangles so you get weird artifacts)
[Image: robots routed along the graph of rectangle centroids]
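A rough sketch of that centroid-graph navigation, under the assumption that the compressed map is just a list of open rectangles; the rectangles, adjacency rule, and function names below are made up for illustration:

```python
import heapq, math

# Hypothetical open rectangles of a floor plan: (xmin, ymin, xmax, ymax).
rects = [(0, 0, 4, 10), (4, 8, 12, 10), (12, 0, 16, 10), (4, 0, 12, 2)]
centroids = [((x0 + x1) / 2, (y0 + y1) / 2) for x0, y0, x1, y1 in rects]

def touching(a, b):
    """Two open rectangles are neighbors if their boxes overlap or share an edge."""
    ax0, ay0, ax1, ay1 = a
    bx0, by0, bx1, by1 = b
    return ax0 <= bx1 and bx0 <= ax1 and ay0 <= by1 and by0 <= ay1

graph = {i: [j for j in range(len(rects)) if j != i and touching(rects[i], rects[j])]
         for i in range(len(rects))}

def dist(i, j):
    (x1, y1), (x2, y2) = centroids[i], centroids[j]
    return math.hypot(x2 - x1, y2 - y1)

def shortest_centroid_path(start, goal):
    """Dijkstra over rectangle centroids: avoids obstacles, but the route
    zig-zags through centroids instead of taking the true geometric shortcut."""
    frontier = [(0.0, start, [start])]
    seen = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt in graph[node]:
            if nxt not in seen:
                heapq.heappush(frontier, (cost + dist(node, nxt), nxt, path + [nxt]))
    return math.inf, []

print(shortest_centroid_path(0, 2))  # e.g. route 0 -> 1 -> 2 or 0 -> 3 -> 2
```

The path hops from centroid to centroid, which is presumably the source of the quoted "weird artifacts": it respects the walls implied by the rectangles but is longer than the true shortest path.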
I’m trying to think this through for time perception in narrative memory, not actually implement it. For example, our brains seem to store emotionally charged events in great detail (so recalled time perception slows, per David Eagleman’s experiments) but yada-yada over the white space.
For time, slow/fast time perception is still ontologically *time* perception. We don’t suddenly switch to frequency-based perception when thinking about, e.g., 10,000 years of yada-yada history. Cyclic theories of history seem clearly like a different compression ontology.
Not quite the same thing, but deep learning autoencoders represent high-dimensional inputs in sometimes very low-dimensional spaces. Below is a common example of MNIST images (28x28) encoded into two dimensions. The relevant part is that the latent space is semantically meaningful.
[Image: MNIST digits encoded into a 2-D latent space]
i.e., images of the digit two all get mapped to the same region of latent space. More advanced methods like VQ-VAE have latent variables that encode position, size, color, etc., so you can both interpret a latent representation and modify it in meaningful ways.
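For concreteness, a minimal autoencoder sketch with a 2-D bottleneck, assuming PyTorch (the thread doesn't name a framework, and the layer sizes here are arbitrary); a real run would train on MNIST rather than the random stand-in batch used below:

```python
import torch
import torch.nn as nn

# Tiny autoencoder squeezing 28x28 images into a 2-D latent space.
class AutoEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Flatten(),
            nn.Linear(28 * 28, 128), nn.ReLU(),
            nn.Linear(128, 2),                 # 2-D bottleneck: the "map" you can plot
        )
        self.decoder = nn.Sequential(
            nn.Linear(2, 128), nn.ReLU(),
            nn.Linear(128, 28 * 28), nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)                    # latent code: nearby points, similar digits
        return self.decoder(z).view(-1, 1, 28, 28), z

model = AutoEncoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(64, 1, 28, 28)                  # stand-in batch; use MNIST in practice
recon, z = model(x)
loss = nn.functional.mse_loss(recon, x)        # reconstruction loss drives the compression
loss.backward()
opt.step()
print(z.shape)                                  # torch.Size([64, 2])
```

Scatter-plotting the 2-D codes `z` for a labeled dataset is what produces the kind of latent-space map shown in the image above.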