Are there well-known non-frequency-domain compression techniques (lossy or lossless) that exploit information sparsity? Like the first image could be segmented as in the second image, and only the 4 non-empty rectangles stored. That sort of approach.
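A minimal sketch of that rectangle idea, assuming the image is a 2-D numpy array and using scipy.ndimage to find the non-empty regions; the function names are illustrative, not a standard algorithm:

```python
# Sketch: store only the bounding boxes of non-empty regions of a sparse image.
import numpy as np
from scipy import ndimage

def compress_sparse(image):
    """Return (shape, [(row_slice, col_slice, patch), ...]) for non-empty regions."""
    labels, n = ndimage.label(image != 0)     # connected non-empty components
    boxes = ndimage.find_objects(labels)      # one bounding-box slice pair per component
    patches = [(sl[0], sl[1], image[sl].copy()) for sl in boxes]
    return image.shape, patches

def decompress_sparse(shape, patches):
    out = np.zeros(shape, dtype=patches[0][2].dtype if patches else np.uint8)
    for rs, cs, patch in patches:
        out[rs, cs] = patch
    return out

# Toy usage: a mostly-empty image with a couple of filled rectangles.
img = np.zeros((100, 100), dtype=np.uint8)
img[10:20, 10:30] = 1
img[60:80, 70:90] = 1
shape, patches = compress_sparse(img)
assert np.array_equal(decompress_sparse(shape, patches), img)
print(f"{len(patches)} non-empty rectangles stored instead of {img.size} pixels")
```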
Non-frequency domain because it seems to me preserving feature legibility is a useful property. I can say “rectangle id 14 contains a smiley”
Example of why it’s useful: you can navigate (inefficiently) directly on a compressed map if it’s not in some other conformal space. Like knowing only the highway topology. A point-to-point shortest path in the highway graph won’t be the real shortest path, but it will avoid obstacles.
Quoted tweet (replying to @vgr and @meditationstuff): (the robots move on a graph connecting the centroids of the open rectangles so you get weird artifacts)
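A sketch of that navigation idea, assuming the open rectangles come from a decomposition like the one above and using networkx for the graph (an assumed dependency): connect touching rectangles through their centroids and run shortest path on that graph. The route avoids obstacles but hugs centroids, hence the weird artifacts.

```python
# Sketch: route on the compressed map by connecting centroids of open rectangles.
# Rectangles are (row_slice, col_slice) pairs; adjacency = overlapping/touching boxes.
import math
import networkx as nx

def centroid(box):
    rs, cs = box
    return ((rs.start + rs.stop) / 2, (cs.start + cs.stop) / 2)

def touches(a, b):
    (ra, ca), (rb, cb) = a, b
    return (ra.start <= rb.stop and rb.start <= ra.stop and
            ca.start <= cb.stop and cb.start <= ca.stop)

def build_graph(boxes):
    G = nx.Graph()
    for i, box in enumerate(boxes):
        G.add_node(i, pos=centroid(box))
    for i in range(len(boxes)):
        for j in range(i + 1, len(boxes)):
            if touches(boxes[i], boxes[j]):
                G.add_edge(i, j, weight=math.dist(centroid(boxes[i]), centroid(boxes[j])))
    return G

# Toy usage: three open rectangles forming an L-shaped corridor.
boxes = [(slice(0, 10), slice(0, 10)),
         (slice(0, 10), slice(10, 40)),
         (slice(10, 50), slice(30, 40))]
G = build_graph(boxes)
route = nx.shortest_path(G, source=0, target=2, weight="weight")
print("centroid route:", [G.nodes[i]["pos"] for i in route])
# Not the true geometric shortest path, but it stays inside open space.
```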
I’m trying to think this through for time perception in narrative memory, not actually implement it. For example, our brains seem to store emotionally charged events in great detail (so recalled time perception slows, per David Eagleman’s experiments) but yada-yada over the white space.
Ah! Run-length encoding seems closest to what I’m trying to model. Thanks ... it’s the yada-yada montage algorithm.
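A toy sketch of run-length encoding in that yada-yada spirit: long uniform runs collapse to (value, count) pairs, while the dense stretch keeps its detail. The "narrative" data here is purely illustrative.

```python
# Sketch: run-length encoding as a "yada-yada" compressor --
# long uniform stretches collapse to (value, run_length) pairs,
# dense/varied stretches stay roughly as long as the original.
from itertools import groupby

def rle_encode(seq):
    return [(value, sum(1 for _ in run)) for value, run in groupby(seq)]

def rle_decode(pairs):
    return [value for value, count in pairs for _ in range(count)]

# Toy "narrative": a long boring stretch, a burst of detail, more boredom.
day = ["routine"] * 200 + ["crash", "airbag", "sirens"] + ["routine"] * 100
encoded = rle_encode(day)
assert rle_decode(encoded) == day
print(encoded)  # [('routine', 200), ('crash', 1), ('airbag', 1), ('sirens', 1), ('routine', 100)]
```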
Ah, compressive sensing seems to be in the same spirit.
Quoted tweet (replying to @vgr): Are you familiar with Compressive Sensing? en.m.wikipedia.org/wiki/Compresse
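A toy compressive-sensing recovery, assuming scikit-learn's Lasso as a stand-in for the usual L1 / basis-pursuit solver: a signal that is sparse in its native basis is recovered from far fewer random linear measurements than samples. The sizes and penalty are illustrative.

```python
# Sketch: compressive sensing on a toy signal that is sparse in its native basis.
# Recover n=200 samples from m=60 random linear measurements via an
# L1-penalized least-squares fit (scikit-learn's Lasso).
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, m, k = 200, 60, 5                      # signal length, measurements, nonzeros

x = np.zeros(n)                           # k-sparse ground-truth signal
x[rng.choice(n, size=k, replace=False)] = rng.normal(size=k)

A = rng.normal(size=(m, n)) / np.sqrt(m)  # random measurement matrix
y = A @ x                                 # m << n compressed measurements

solver = Lasso(alpha=0.01, max_iter=50000)
solver.fit(A, y)                          # sparsity-promoting reconstruction
x_hat = solver.coef_

print("relative recovery error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```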
Good statement of the problem, which I’d compress further to “explainable compressions.” Specifically ones that in the limit recover the raw ontology of explanation. The way a caricature admits a “facial” explanation of a face similar to a photograph. twitter.com/himbodhisattva
Not quite the same thing, but deep learning autoencoders represent high-dimensional inputs in sometimes very low-dimensional spaces. Below is a common example of MNIST images (28x28) encoded into two dimensions. The relevant part is that the latent space is semantically meaningful.
i.e., all images of the digit two get mapped to the same region of the latent space. More advanced methods like VQ-VAE have latent variables that encode position, size, color, etc. So you can both interpret a latent representation and modify it in meaningful ways.
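A minimal autoencoder sketch with a 2-D bottleneck in the spirit of that MNIST example, assuming PyTorch; the architecture and hyperparameters are illustrative, and real MNIST loading is omitted.

```python
# Sketch: a tiny autoencoder that squeezes 28x28 images through a 2-D latent code.
import torch
import torch.nn as nn

class TinyAutoencoder(nn.Module):
    def __init__(self, input_dim=28 * 28, latent_dim=2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 128), nn.ReLU(),
            nn.Linear(128, latent_dim),          # 2-D "semantic" latent code
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(),
            nn.Linear(128, input_dim), nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

# Toy training step on random stand-in data (swap in real MNIST batches to use it).
model = TinyAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
batch = torch.rand(64, 28 * 28)                  # stand-in for flattened MNIST images
recon, z = model(batch)
loss = nn.functional.mse_loss(recon, batch)      # reconstruction objective
opt.zero_grad(); loss.backward(); opt.step()
print("latent codes shape:", z.shape)            # (64, 2): each image -> a 2-D point
```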


