
Zlib-ng: a performance-oriented fork of zlib - profquail
https://github.com/Dead2/zlib-ng
======
r1ch
One thing to be aware of: these replacement zlib libraries are not always
"drop-in" safe, as some software such as nginx pre-allocates the zlib state
buffers by hard-coding the original zlib structure sizes (the original zlib
makes guarantees about memory usage that technically make this behavior
somewhat OK). Depending on how well the software is designed, the forked zlib
versions may cause buffer overflows or other crashes. (nginx is fine - it
spams your log file with errors and re-allocates.)

------
bodyfour
I'll be watching this with some interest. A couple years ago I tried using
some of the x86_64 assembly versions of inflate_fast() and longest_match() but
found them to be subtly buggy -- they'd both sometimes do reads outside of the
allocated memory and could crash if that happened to land at the end of a
mmap'ed segment. I emailed all of the relevant authors, but nobody seemed to
be maintaining that stuff any more. Honestly, just having a zlib with a _bug
tracker_ is a huge improvement.

I am sad to see that a lot of the zlib-ng changes seem to be code-style
updates, which will make it hard for fixes to travel back and forth between
the codebases. I guess one branch or another will eventually have to "win",
which is sad.

Also, is zlib-ng going to commit to keeping the same zlib license? Some of the
zlib improvements floating around on the net have snuck in more restrictive
licensing. It would be fantastic if zlib-ng could hold the line on license
creep.

~~~
dalke
According to the linked page, "Just remember that any code you submit must be
your own and it must be zlib licensed."

------
danieldk
_The zlib code has to make numerous workarounds for old compilers that do not
understand ANSI-C or to accommodate systems with limitations such as operating
in a 16-bit environment._

This is true for many older open source libraries. I wonder:

- Are old platforms still tested as time passes? Is it really worth
maintaining extra cruft for old systems even if it is unknown whether it still
works? Are there projects running CI for OS/2 Warp, MS-DOS, or A/UX?

- How many users of old or obscure platforms are still around? Are they
even updating such libraries?

- What platforms do we need to get rid of to get rid of
autoconf/automake/configure and use e.g. CMake instead?

~~~
stefantalpalaru
> What platforms do we need to get rid of to get rid of
> autoconf/automake/configure and use e.g. CMake instead?

Why would you want that? Autotools is the devil we know. CMake has its own
problems and from a statistical sample of one project implementing both, I
came to the conclusion that CMake is more frustrating than Autotools when you
need to do something unusual.

~~~
tanderson92
Exactly. autotools is exceptionally well designed for dealing with more exotic
use cases like cross compiling. When designing a cross-compiling Linux
distribution ([http://exherbo.org/docs/multiarch.txt](http://exherbo.org/docs/multiarch.txt))
we discovered that autotools is by far the easiest to deal with when it comes
to arbitrary host & build targets, compilers, etc. I would be even bolder and
say that:

Those who do not understand autotools are doomed to reinvent it, poorly.

~~~
nitrogen
I use CMake to cross-compile firmware for my home automation hardware. It
doesn't seem too bad, just a single file shared across all projects that
defines the compiler binaries and settings. I like it much better than
autotools.

~~~
detaro
Do you have an example online somewhere? This might be useful to have as a
reference in the future.

~~~
bodyfour
[http://www.vtk.org/Wiki/CMake_Cross_Compiling](http://www.vtk.org/Wiki/CMake_Cross_Compiling)

------
gjm11
It seems like this would be more compelling if accompanied by some benchmarks
showing how much better the performance is than the original zlib's.

~~~
jsnell
Edit:

Results with zlib-ng and a newer version of the Cloudflare changes:

[https://www.snellman.net/blog/archive/2015-06-05-updated-zlib-benchmarks/](https://www.snellman.net/blog/archive/2015-06-05-updated-zlib-benchmarks/)

Original:

Here are some benchmarks of the Intel and Cloudflare forks that this project
is based on:

[https://www.snellman.net/blog/archive/2014-08-04-comparison-of-intel-and-cloudflare-zlib-patches.html](https://www.snellman.net/blog/archive/2014-08-04-comparison-of-intel-and-cloudflare-zlib-patches.html)

The speedups are not insignificant. (I'll see about updating the post to
include results for zlib-ng).

~~~
Dead2
As I mentioned in another comment, the minigzip used is very suboptimal due to
reading and decompressing only 1 byte per inflate() call. Please retest with
--zlib-compat or the fix that was merged today.

~~~
jsnell
Sure thing, updated.

I'm not in the habit of including compression level 6 in the results, since
the application I care about defaults to 5 :) And I wanted a good spread of
different compression levels in the results.

------
craftkiller
For decompression miniz is already faster than zlib and with a more permissive
license [https://code.google.com/p/miniz/](https://code.google.com/p/miniz/)

And my post where I put times and code: [http://fizz.buzz/posts/anything-you-zcat-i-zcat-faster-under-certain-conditions.html](http://fizz.buzz/posts/anything-you-zcat-i-zcat-faster-under-certain-conditions.html)

~~~
ChickeNES
Though it looks like there have been no updates since late 2013. I wonder if
it makes sense for someone to step in and import it to GitHub.

~~~
bcseeati
miniz is already on GitHub: [http://richg42.blogspot.com/2015/05/the-great-github-migration.html](http://richg42.blogspot.com/2015/05/the-great-github-migration.html)

------
sliverstorm
If you want more performance out of zlib, step one is to recompile.

I've seen flabbergasting speedups from basic CPU-bound utilities simply by
recompiling the source code of a fifteen-year-old binary with a modern
compiler.

Which also brings up the point that if you want to benchmark zlib-ng, compile
zlib with the same compiler.

------
tkhsu
Intel optimized zlib: [http://www.intel.com/content/www/us/en/intelligent-systems/intel-technology/zlib-compression-whitepaper.html](http://www.intel.com/content/www/us/en/intelligent-systems/intel-technology/zlib-compression-whitepaper.html)

GitHub: [https://github.com/jtkukunas/zlib](https://github.com/jtkukunas/zlib)

------
legulere
This makes me wonder what a good strategy would be for deprecating old
architectures. Remove support for them in new versions and backport bugfixes
to the last version supporting the old architectures indefinitely?

~~~
TheCondor
It's a more difficult issue with something like OpenSSL, where new features
are still being added. Zlib is done, though; there is zero interest in adding
deflate64 or a different algorithm. They could just put a version on ice, keep
it around for distribution, and make a cleaner 2.0.

------
erikb
How do you pronounce the "-ng"? "zetlibng", with "ng" like "bang" without the
"a", is kind of hard for my mouth to produce.

~~~
TimWolla
You spell it out. It's an acronym for “next generation”.

~~~
erikb
Ah that's what it stands for. Alright.

