Pngquant – Lossy PNG compressor (pngquant.org)
130 points by pstadler on Jan 13, 2014 | 55 comments

You should showcase at least one sample of a compressed vs. uncompressed image right there on the front page, before the Features section.

WOW!!! I just applied this to the (big) PNG sprite files used in a project I am working on now, and the gains in file size were amazing! And with hardly any difference in quality!

Who ever said that reading Hacker News was a productivity drain?

Some alternatives while you're thinking about the subject: png-nq, pngcrush, optipng. I'm sure there are others. If I want an image as small as possible, I try all of them and pick the best.

I don't know png-nq¹, but in their default use pngcrush and optipng aren't [direct] alternatives to pngquant.

Optipng and pngcrush do lossless recompression, while the program in the OP, AFAICT, does lossy compression, encoding the image into PNG without an intermediate representation.

As I understand it, pngcrush will reduce the bit depth or apply a new palette if that reduces the file size without loss of pixel data. Optipng can optionally add lossy compression too.

¹ - https://github.com/stuart/pngnq looks like it was inspired by pngquant, from that page:

>Pngnq exists because I needed a lot of png images in RGBA format to be quantized. After some searching, the only tool I could find that worked was pngquant. I tried pngquant but found that the median cut algorithm that it uses, with or without dithering, gave inferior looking results to the neuquant algorithm. You can see the difference demonstrated on the neuquant web page: http://members.ozemail.com.au/~dekker/NEUQUANT.HTML . //

in my experiments pngnq worked better than pngquant, but pngquant2 works better than pngnq.

shameless self-promotion:

demo: http://o-0.me/RgbQuant/

repo: https://github.com/leeoniya/RgbQuant.js

* doesn't support alpha channel or dithering (yet)

Very good, but looking at the examples, the smooth gradients look awful.

They would benefit from some noise to hide the transitions, but then the compression would suffer.

Maybe some more palette entries for large smooth areas.

PNG is practically made for gradients. I can't find the file, but there's an image of every single color that PNG can support in the full opacity range, and the whole file was like 32k after pngcrush.

If I understand correctly, this tool is basically intelligently selecting a palette (which was a historically supported feature of the PNG format) and then using that on the image. Has anyone tested to see if pngcrush can shrink the filesize even further?

To the creator: have you considered supporting more than an 8-bit palette? Like, a 2-bit palette or other that happens to be better for this particular image?

> To the creator: have you considered supporting more than an 8-bit palette? Like, a 2-bit palette or other that happens to be better for this particular image?

the plan was to stick with a native, supported and easy to understand format. it was never about absolute filesize. filesize crunching should be handled afterwards via RLE, deflate or other output-format specific encoding trickery, etc.

if you jack the minHueCols param up to 4096 or 8192, it does much better in many gradient cases but is also much slower. it's certainly not a completely hands-off quantizer. the best results i've found for 0-config quantization is Wu's Color Quantizer v2 [1] implementation [2]

[1] http://www.ece.mcmaster.ca/~xwu/cq.c

[2] http://nquant.codeplex.com/

There is a version of pngquant that uses Wu's algorithm:


However, Wu's algorithm requires preprocessing into an R × G × B × A int array, which is 16GB of RAM if done at full RGBA quality, so in practice all implementations have to drop alpha and/or heavily posterize colors.
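To sanity-check that figure: a full-resolution RGBA histogram needs one counter per possible color, and at 4 bytes per counter that works out to exactly 16 GiB (a quick back-of-the-envelope sketch, not anything from the pngquant source):

```python
# One histogram bin per possible 8-bit-per-channel RGBA color.
bins = 256 ** 4                 # 2^32 possible colors
bytes_needed = bins * 4         # assuming 4-byte int counters
print(bytes_needed // 2 ** 30)  # 16 (GiB)
```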

pngquant has the same goal — it subdivides the RGBA (hyper)cube to minimize variance in each section — but does it with much less memory and can do it at full quality.

Posterization of input is my pet peeve, as it gives images the slightly banded and grainy look that we associate with "256-color" images (since VGA only ever supported 6 bits per gun), and makes us presume 256 colors are never enough for a photorealistic look — but they often are, and people who use e.g. the TinyPNG service think it's magic.
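For a feel of where that banding comes from, posterizing input just throws away the low bits of each channel, so nearby shades collapse into the same value. A hypothetical RGB555-style sketch, not how any particular tool does it:

```python
def posterize5(r, g, b):
    # Keep only the top 5 bits of each 8-bit channel (RGB555-style).
    # Nearby shades collapse to the same value, producing visible bands
    # in what should be a smooth gradient.
    keep = lambda v: v & 0b11111000
    return keep(r), keep(g), keep(b)

print(posterize5(130, 131, 135))  # (128, 128, 128): three distinct shades merge
```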

yeah, i discovered how much space it needed when porting it via emscripten to asm.js

It'd be awesome to port pngquant to JS, especially now that JS is getting parallelization and SIMD.

Unfortunately I never got emscripten to work on my machine. Could you give it a try and compile http://pngquant.org/lib?

sure, i can give it a go. though i'm currently stuck in an airport going on 14h. i'll ping you when i get around to it, but it could be a few days

IMO the best quantisation tools are bright183 (proprietary, by Epic for Unreal's shared palettes) and neuquant. Wu unfortunately suffers very badly from banding artifacts at low colour counts with dithering off; the banding after applying bright looks much more acceptable.

More code to research: http://web.archive.org/web/20041010190217/http://www.geociti...

neuquant is quite good for photos but does strange stuff with graphics and low color counts. in fact, on the pool table example that's used to show how well it performs, RgbQuant does better on the details and with fewer artifacts, at the expense of background smoothness/banding. compare for yourself http://members.ozemail.com.au/~dekker/NEUQUANT.HTML

i'll check out bright183

pngquant is great. I make an iPhone app that displays transit maps. Transit maps are ideal candidates for quantization (limited color range, large blocks of color that are perfect for RLE) and I saved a lot of file size by quantizing the map tiles. Thanks pngquant!

Sounds like vector images could be used effectively there.

I've never quite understood how it's possible to create better encoders without needing a new decoder on the other end. Can anyone explain how this works? When you're writing a png decoder, why wouldn't there just be one optimal png encoder to go along with it?

Broadly, because encoding and decoding are not the same thing. Encoding requires making choices about what to throw away, but decoding is just putting the pieces back together. Mathematically you can think about encoding as converting to some matrix representation that has a lot of zeros — the process to take that matrix and reconstruct the image doesn't depend on how many zeros there are, but the efficiency of the encoder does.

Perhaps an easier way to think about it from a programmer's perspective: a decoder is a programming language interpreter that takes a program and generates the data. The encoder is something that generates a program in that language. So you can see that the decoder is quite straightforward, while the encoder has a lot of choices to make about how best to represent the data as a program in the decoder's language.
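A toy run-length-encoding example (hypothetical, just to make that concrete): the decoder is fixed and trivial, yet several different "programs" decode to the same data, so a smarter encoder can pick the shortest one without the decoder ever changing.

```python
def rle_decode(runs):
    # The "interpreter": dead simple, and identical for every encoder.
    return "".join(ch * n for n, ch in runs)

# Two valid encodings of the same string; a naive encoder might emit the
# longer one, a better encoder merges adjacent runs.
naive  = [(1, "A"), (2, "A"), (1, "B")]
merged = [(3, "A"), (1, "B")]
print(rle_decode(naive) == rle_decode(merged))  # True: same output, different sizes
```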

At least 2 people replied to you referencing the lossiness of the encoder, but that is actually completely irrelevant and unnecessary to answer your question, as it applies perfectly well to lossless compressors too.

See for example Zopfli and ZIP, which are obviously not lossy.

The situation exists because there are many alternative ways to represent a given, fixed output file, so that the job of the encoder is to seek through this search space and try to find a size-optimal solution, or a good approximation thereof.
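You can see this with Python's stdlib zlib: different effort levels produce different (all valid) compressed streams, and one fixed decoder recovers the identical bytes from each.

```python
import zlib

data = b"the quick brown fox " * 500
fast = zlib.compress(data, 1)   # quick search of the encoding space
best = zlib.compress(data, 9)   # much more thorough search

# Different encoded streams, one decoder, identical result.
assert zlib.decompress(fast) == zlib.decompress(best) == data
print(len(fast), len(best))     # both far smaller than len(data)
```

Zopfli is exactly this idea taken further: it spends far more time searching for a deflate stream that any existing decoder can still read.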

It's best to think of many codecs ([en]CODers/DECoders) as sets of tools. The encoder "tools" can be used in many different ways (with different parameters and combinations) to produce valid encoded versions of input data. The decoder then uses various methods to take the tools and data used in the encoded version, and interpolates the encoded data into something resembling the original. Sometimes that interpolation is "lossy" (detail is lost, but may be acceptably close to the original), while other times it is "lossless" (decoded bit-for-bit to be the same as the original).

Just like there are many ways to encode data with a given codec/format, there are often many different ways (or tools used) to decode, some of which - as in the case of lossy codecs - may produce more "natural" or pleasing results than others. For example, some MPEG decoders will analyze motion vectors and interpolate extra spatial resolution that was not present in the original, thereby "upscaling" (e.g., from standard resolution to HD) the output. In this case, it's important to remember that data/detail is never created from nothing, but is instead "guessed", so it may produce nice-looking results, but it's also likely to introduce more artifacts (erroneous details that were actually not present in the original).

In lossy encoding you throw away data to get a smaller file, and the art is in deciding which data you throw away.

In this case you need to do vector quantization — choose the 256 colors (the decoder's hard limit) that best represent thousands of different colors in the source image. E.g. you have to decide whether to use a palette entry for just a single red pixel, or sacrifice that pixel and use the entry to improve the smoothness of a more visible blue gradient.
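pngquant's actual algorithm is considerably more sophisticated, but the classic median-cut idea it grew out of fits in a few lines. A rough sketch, assuming a power-of-two palette size:

```python
def median_cut(colors, n_colors):
    """Pick n_colors representative colors (n_colors a power of two here)."""
    boxes = [colors]
    while len(boxes) < n_colors:
        splittable = [b for b in boxes if len(b) > 1]
        if not splittable:
            break
        # Take the box with the widest channel range...
        def spread(b):
            return max(max(c[i] for c in b) - min(c[i] for c in b) for i in range(3))
        box = max(splittable, key=spread)
        boxes.remove(box)
        # ...and split it at the median of its widest channel.
        ch = max(range(3), key=lambda i: max(c[i] for c in box) - min(c[i] for c in box))
        box = sorted(box, key=lambda c: c[ch])
        mid = len(box) // 2
        boxes += [box[:mid], box[mid:]]
    # Each palette entry is the average color of its box.
    return [tuple(sum(c[i] for c in b) // len(b) for i in range(3)) for b in boxes]

# Four pure-color clusters come back unchanged as a 4-entry palette.
colors = ([(0, 0, 0)] * 10 + [(255, 0, 0)] * 10 +
          [(0, 255, 0)] * 10 + [(0, 0, 255)] * 10)
print(sorted(median_cut(colors, 4)))
```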

This is a lossy encoder, which means that the image that comes out the other end doesn't contain exactly the same information as the image that goes in.

You can use all kinds of cleverness that is mostly unrelated to the actual image format itself to shrink the size of an image.

Here's one example: if an image uses millions of colours, you could drop it down to just a few hundred colours (picking the nearest colour from your palette for each of the pixels) and hence shrink the filesize a whole bunch. There are lots of different approaches to picking that initial palette though - you might do it based on averaging out the pixel colours, or maybe you know something about human visual colour perception and can hence behave differently for colours that you don't think people will notice as easily.
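The "nearest colour" step itself is just a distance search. Squared Euclidean distance in RGB is the naive version; as noted above, perceptually aware quantizers weight the channels differently:

```python
def nearest(color, palette):
    # Naive nearest-neighbor match by squared RGB distance; real quantizers
    # often use perceptually weighted or gamma-corrected distances instead.
    return min(palette, key=lambda p: sum((a - b) ** 2 for a, b in zip(color, p)))

palette = [(255, 0, 0), (0, 255, 0), (0, 0, 255)]
print(nearest((250, 10, 10), palette))  # (255, 0, 0)
```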

For pngs (and lossless encoding) the thing is that the last step of encoding is a run of the deflate algorithm. It's impossible to say exactly what a stream will deflate down to without actually running your implementation of deflate. The number of possible encodings of an image before deflation is staggering: 5^height just for the filter step, across all color types. I've been writing a brute-force compressor (everything but the deflate step, which depends on the implementation you choose) as a joke, and it takes about an hour on images 10px high. For that reason encoders use heuristics to make best guesses whenever there's a choice to be made.
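The standard heuristic for that filter choice is "minimum sum of absolute differences": compute all five PNG filters for a row and keep the one whose output looks most compressible, instead of searching all 5^height combinations. A sketch for 1-byte-per-pixel grayscale rows (real encoders handle arbitrary bytes-per-pixel):

```python
def paeth(a, b, c):
    # PNG's Paeth predictor: pick the neighbor closest to a + b - c.
    p = a + b - c
    pa, pb, pc = abs(p - a), abs(p - b), abs(p - c)
    if pa <= pb and pa <= pc:
        return a
    return b if pb <= pc else c

def pick_filter(row, prev):
    """Return (filter_id, filtered_bytes) for one grayscale row (1 byte/pixel)."""
    n = len(row)
    left = lambda i: row[i - 1] if i else 0
    upleft = lambda i: prev[i - 1] if i else 0
    filters = [
        row[:],                                                          # 0 None
        [(row[i] - left(i)) & 0xFF for i in range(n)],                   # 1 Sub
        [(row[i] - prev[i]) & 0xFF for i in range(n)],                   # 2 Up
        [(row[i] - (left(i) + prev[i]) // 2) & 0xFF for i in range(n)],  # 3 Average
        [(row[i] - paeth(left(i), prev[i], upleft(i))) & 0xFF            # 4 Paeth
         for i in range(n)],
    ]
    # Heuristic cost: sum of absolute values, treating bytes as signed.
    cost = lambda bs: sum(b if b < 128 else 256 - b for b in bs)
    return min(enumerate(filters), key=lambda t: cost(t[1]))

row, prev = list(range(10, 110, 10)), [0] * 10
print(pick_filter(row, prev)[0])  # a smooth ramp favors Sub (filter 1)
```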

Oh and here's[1] a post I made about it. I decided to keep the project to myself as I'm making it into a more, uh, realistic png compressor, but that should give you an idea of all the possible decisions that need to be made while encoding.

[1] http://heyimalex.com/journal/pngmassacre/

As kraken.io, we can safely say that this is indeed an excellent program, and it makes up part of our lossy PNG optimization stack.

How is this different from `pngnq` (that's been around for ages)?

If you use Google to ask this question, the first hit is actually the pngnq website, which explicitly states that pngnq is a modification of pngquant, and supposedly an improvement.


> Pngnq is an adaptation by Stuart Coyle of Greg Roelf's pngquant using Anthony Dekker's neuquant algorithm.

> Why another quantizer?

> Pngnq exists because I needed a lot (several thousand) of png images in RGBA format to be quantized. After some searching, the only tool I could find that worked for RGBA was pngquant. I tried pngquant but found that the median cut algorithm that it uses, with or without dithering, gave inferior looking results to the neuquant algorithm. You can see the difference demonstrated on the neuquant page.

I compared them recently for a project where I needed to make some PNGs smaller, and settled on pngnq, which, in my informal testing, seemed to produce better results.

Make sure you haven't tested the 13-year-old pngquant 1.0 that ships with Debian stable and some other distros.

pngquant2 should clearly beat pngnq.

I tested v. 1.0-4.1 that comes with Ubuntu 12.04.1 (I think). Am I missing out?

Yes. 1.0 was written in the MS-DOS era, truncates quality to 15 bits, has buggy alpha and uses the basic Median Cut algorithm from the early 80s.

2.0 is by now a totally different program, with expanded and improved algorithms (with bits of machine learning) and 128 bits per pixel of precision (SIMD and multicore optimized).

It produces the same type of file (PNG8), but:

* pngquant has adaptive dithering, which compresses better and produces less noisy images

* pngquant has a more predictable algorithm and usually gives better quality (pngquant aims for minimum square error and guarantees a local optimum, while pngnq uses a neural net algorithm that sometimes works well, and sometimes misses details or generates useless palette entries)

* pngquant can automatically choose the number of colors required for the desired quality, and leave a 24-bit image if it would require too many colors.

* pngquant's algorithm is available as embeddable libimagequant
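For a feel of what dithering buys you, here's a minimal Floyd–Steinberg error-diffusion sketch on a grayscale image. To be clear, pngquant's adaptive dithering is considerably smarter than this classic textbook version:

```python
def floyd_steinberg(img):
    # img: rows of grayscale floats 0..255; quantized in place to {0, 255}.
    # Each pixel's rounding error is pushed onto unprocessed neighbors, so
    # mid-gray areas come out as a mix of black and white dots instead of
    # snapping entirely to one level (banding).
    h, w = len(img), len(img[0])
    for y in range(h):
        for x in range(w):
            old = img[y][x]
            new = 0.0 if old < 128 else 255.0
            img[y][x] = new
            err = old - new
            if x + 1 < w:
                img[y][x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1][x - 1] += err * 3 / 16
                img[y + 1][x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1][x + 1] += err * 1 / 16
    return img

# A flat mid-gray patch: every output pixel becomes pure black or white,
# but the average brightness stays close to the original 128.
out = floyd_steinberg([[128.0] * 8 for _ in range(8)])
```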

Some examples would be nice.

This is how I use pngquant in my thumbnailing workflow: https://gist.github.com/gigablah/3110899

At 100x100 pixels, I find that this produces small (~8kb) PNG thumbnails which are still sharp with no noticeable color loss.

(Disclaimer: I'm a beginner at nodejs)

Available in Homebrew:

    brew install pngquant

Read more about the details of implementing something like this in C#:


The ImageOptim/ImageAlpha OS X apps are fantastic for compressing all kinds of images for the web.

http://imageoptim.com/ http://pngmini.com/

It should be noted that ImageAlpha, the app at the pngmini.com URL, provides a GUI to pngquant and pngnq.

While a command line tool is useful by itself, lossy compression is best done in a GUI, where you can see how much compression strikes the right balance between file size and quality loss. ImageAlpha is amazing for this.

Very timely: I was just working on a page that had to display complex graphs, and the regular PNGs were huge. I set the quality to 50% and it reduced the size to about 30% of the original, and it still looks good. Thank you!

Use ImageAlpha as a GUI on OS X. Very nice results!


Lossy PNG compression is like trying to use a Ferrari to pull a 35-tonne trailer.

PNG is for graphics; JPEG is for photos.

I like this small program, great for compressing PNG files. Congrats, dev.

Anyone know of a similar tool for jpeg images?

Jpegs are already lossy, but if you want to increase the lossiness (and decrease the filesize) via the command line, you can do that with imagemagick or graphicsmagick with the -quality flag on a convert.

I use jpegtran for lossless. I've had good luck with http://jpeg-optimizer.com/ for lossy.

yeah, cwebp works effectively on jpegs, gifs and pngs (pngs with ~30% lossless savings, much more if lossy), and does some tricks pngs don't..

Any link to the 2.0 32bit deb?

Debian (unstable) has an up-to-date package now: http://packages.debian.org/sid/pngquant


The algorithms link is broken.
