I had this great idea for a compression algorithm: instead of storing all the data for an image, you store just enough binary information to uniquely identify that image among all the images stored by, say, Google
-
If you want to improve your compression algorithm further, you can also invent the URL shortener.
-
Kolmogorov complexity somewhere in the neighborhood of "64-bit GUID + ALL OF GOOGLE"
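The scheme being joked about can be sketched in a few lines: if sender and receiver share the same corpus, the "compressed" image is just its index in that corpus. The corpus contents here are made up for illustration; this only "works" for images already in the shared list.

```python
# Joke "compression": both sides must share an identical corpus.
# The compressed form of an image is merely its index in that corpus.
CORPUS = [b"cat.png bytes", b"dog.png bytes", b"sunset.png bytes"]  # hypothetical shared corpus

def compress(image: bytes) -> int:
    # Only succeeds if the image is already in the shared corpus.
    return CORPUS.index(image)

def decompress(code: int) -> bytes:
    return CORPUS[code]

print(compress(b"dog.png bytes"))   # a tiny integer stands in for the whole image
```

The punchline is exactly the Kolmogorov-complexity quip above: the description is short only because "ALL OF GOOGLE" is smuggled in as shared context.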
End of conversation
New conversation -
Hmm, you could take all images indexed by Google and ram them through some kind of fancy neural network to reduce their dimensionality and then use that lower-dimensional space as the compressed image?
-
Then you can do lossy encoding by sorting the output vector by importance and ignoring the minor parts, which means that the first bit of the image format will be "is it a cat picture?"
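That lossy step can be sketched without any neural network: rank the components of an embedding vector by magnitude and zero out the minor ones, a stand-in for truncating a learned latent code. The example vector is invented; index 0 plays the role of the "is it a cat picture?" bit.

```python
# Lossy encoding sketch: keep only the k most "important" components
# of a latent vector, zeroing the rest (a toy stand-in for truncating
# the output of a dimensionality-reducing network).
def lossy_encode(vec, k):
    # Rank component indices by absolute magnitude, keep the top k.
    keep = set(sorted(range(len(vec)), key=lambda i: abs(vec[i]), reverse=True)[:k])
    return [v if i in keep else 0.0 for i, v in enumerate(vec)]

embedding = [0.9, -0.05, 0.4, 0.01]  # hypothetical latent code; index 0 ~ "is it a cat?"
print(lossy_encode(embedding, 2))    # → [0.9, 0.0, 0.4, 0.0]
```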
End of conversation
New conversation -
yeah, i was about to say "oh, you're inventing a hash table."
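The "you're inventing a hash table" reading can be sketched as content-addressed storage: store each image once in a big shared table and "compress" it to a short content hash. Truncating to 16 hex characters below is a hypothetical choice echoing the "64-bit GUID" joke.

```python
import hashlib

# Content-addressed "compression": the shared table plays the role of
# Google's image index; the short key plays the role of the compressed file.
store = {}

def put(image: bytes) -> str:
    key = hashlib.sha256(image).hexdigest()[:16]  # 16 hex chars = 64 bits, per the joke
    store[key] = image
    return key

def get(key: str) -> bytes:
    return store[key]

key = put(b"cat picture bytes")
print(key, get(key) == b"cat picture bytes")
```

As with the corpus-index version, none of the image's information is actually gone; it just lives in the table everyone is assumed to share.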
-
Also, for a smaller set of images, the emoji.