
Zlib in serious danger of becoming obsolete - pettou
http://richg42.blogspot.com/2016/01/zlib-in-serious-danger-of-becoming.html
======
goldenkey
All a compressor does is choose which inputs map to something smaller than the
input and which map to something the same size or larger. Pigeonhole
principle. Whatever gains these other compressors show over zlib come from
assigning short codes to different kinds of input -- inputs that may not
deserve to be squeezed into smaller sizes.
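
To make the counting concrete, here's a minimal Python sketch (pure
arithmetic, no compressor involved): for any length n there are 2^n possible
inputs but only 2^n - 1 strictly shorter bitstrings, so no lossless scheme
can shrink every input.

    # Pigeonhole count: there are 2**n inputs of n bits, but only
    # 2**n - 1 bitstrings shorter than n bits (sum of 2**k for k < n),
    # so any lossless scheme must leave some input un-shrunk.
    for n in range(1, 9):
        inputs = 2 ** n
        shorter = 2 ** n - 1
        print(f"n={n}: {inputs} inputs, {shorter} shorter outputs")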

There's no magic in compression: if I make unintelligible garbage artifacts
compress, I give up that coding space for compressing intelligible repeats and
other reasonable patterns.
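
You can watch zlib itself make that trade with Python's built-in zlib module
(the sample blobs below are made up for illustration): random bytes come out
slightly larger than the input, while repetitive input shrinks enormously.

    import os
    import zlib

    random_blob = os.urandom(65536)             # incompressible noise
    text_blob = b"the quick brown fox " * 3277  # repetitive, ~64 KiB

    for name, blob in (("random", random_blob), ("repetitive", text_blob)):
        out = zlib.compress(blob, 9)
        print(f"{name}: {len(blob)} -> {len(out)} bytes")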

A good compressor always favors patterns OF intelligible, deliberate meaning
-- there's no holy grail. Zlib is very competent at this.

~~~
Jasper_
Modern compression has an entropy coding step where each symbol is coded
according to its probability. DEFLATE, the format zlib implements, uses
Huffman coding, which can only spend a whole number of bits per symbol --
effectively rounding every probability to a power of 1/2. Arithmetic coders
(like LZMA's range coder) and ANS-based coders (like BitKnit uses) support
fractional probabilities, so the improved entropy coding step alone buys you
gains. None of them is perfect, but they're a clear improvement on most data.
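
To put a number on the Huffman rounding cost, here's a small Python sketch
for a skewed two-symbol source (the 95/5 split is just an assumed example):
a prefix code can't spend less than one bit per symbol, while the Shannon
entropy, which arithmetic and ANS coders can approach, is far lower.

    import math

    p = 0.95  # probability of the common symbol (illustrative choice)
    entropy = -(p * math.log2(p) + (1 - p) * math.log2(1 - p))
    huffman = 1.0  # a binary prefix code can't spend less than 1 bit/symbol
    print(f"Shannon entropy: {entropy:.3f} bits/symbol")
    print(f"Huffman cost:    {huffman:.3f} bits/symbol")
    print(f"overhead:        {huffman / entropy:.1f}x")

DEFLATE's larger literal/length alphabet softens the rounding loss in
practice, but it never disappears; fractional coders claw back the last few
percent.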

zlib, the library, isn't even an optimal DEFLATE encoder. You can easily get
more out of the format by simply swapping in a smarter compressor, e.g.
Zopfli.
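
That works because DEFLATE is a fixed format with room for encoder
cleverness: any conforming stream decompresses with plain zlib. A small
Python sketch of the same idea using zlib's own effort levels (the sample
data is made up for illustration):

    import zlib

    data = b"hacker news comment threads compress well " * 2000

    # Different effort, same format: every output below is a valid zlib
    # stream and round-trips through the very same zlib.decompress().
    for level in (1, 6, 9):
        out = zlib.compress(data, level)
        assert zlib.decompress(out) == data
        print(f"level {level}: {len(out)} bytes")

Zopfli pushes the same search much further, at far higher compression-time
cost, and still emits a standard stream -- typically a few percent smaller
than level 9.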

