it depends on the encoding
One assumes it would have been best to take the x86/SSE instruction set and "rebalance the Huffman tree" so that the size of each instruction's encoding corresponded to its frequency. But I assume they didn't do this because they still wanted to run existing x86 code at speed...
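The "rebalance the Huffman tree" idea above can be sketched concretely: build a Huffman code over opcode frequencies so common instructions get short encodings and rare ones get long encodings. This is a minimal illustrative sketch, not how any real ISA is encoded; the mnemonic frequencies below are made up for the example.

```python
import heapq

def huffman_code(freqs):
    """Build a Huffman code from {symbol: frequency}; return {symbol: bitstring}."""
    # Heap entries: (frequency, tiebreaker, partial codebook for that subtree).
    heap = [(f, i, {sym: ""}) for i, (sym, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)  # two least-frequent subtrees
        f2, _, c2 = heapq.heappop(heap)
        # Merge: prefix one subtree's codes with 0, the other's with 1.
        merged = {s: "0" + b for s, b in c1.items()}
        merged.update({s: "1" + b for s, b in c2.items()})
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return heap[0][2]

# Hypothetical dynamic-frequency counts for a few x86 mnemonics (illustrative only).
freqs = {"mov": 35, "add": 15, "cmp": 12, "jmp": 10, "call": 8, "vpshufb": 1}
codes = huffman_code(freqs)
# Frequent ops get short codewords; rare ops get long ones.
assert len(codes["mov"]) <= len(codes["vpshufb"])
```

The resulting code is prefix-free, which is exactly the property that lets a decoder consume a variable-length instruction stream without explicit length fields; the thread's point is that actual x86 encoding lengths were fixed by history rather than chosen this way.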
-
-
They wanted to avoid a separate decoder, basically, and x86_64 is definitely close enough to decode 32-bit and 64-bit code with the same decoder block (plus many internal flag bits).
-
Yes. Which is a shame, because it means we're paying for it for the rest of time, basically :/