Who ever said that reading Hacker News was a productivity drain?
OptiPNG and pngcrush do lossless recompression, while the program in the OP, AFAICT, does lossy compression to create an image encoded into PNG without an intermediate representation.
As I understand it, pngcrush will reduce the bit depth or apply a new palette if that reduces the file size without loss of pixel data. OptiPNG can optionally add lossy compression too.
¹ - https://github.com/stuart/pngnq looks like it was inspired by pngquant, from that page:
>Pngnq exists because I needed a lot of png images in RGBA format to be quantized. After some searching, the only tool I could find that worked was pngquant. I tried pngquant but found that the median cut algorithm that it uses, with or without dithering, gave inferior looking results to the neuquant algorithm. You can see the difference demonstrated on the neuquant web page: http://members.ozemail.com.au/~dekker/NEUQUANT.HTML .
* doesn't support alpha channel or dithering (yet)
They would benefit from some noise to hide the transitions, but then the compression would suffer.
Maybe some more palette entries for large smooth areas.
If I understand correctly, this tool is basically selecting a palette intelligently (palettes have long been a supported feature of the PNG format) and then applying it to the image. Has anyone tested whether pngcrush can shrink the filesize even further?
To the creator: have you considered supporting palette sizes other than 8-bit? Like a 2-bit palette, or whatever happens to be better for this particular image?
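For illustration, here's roughly that idea in a few lines of Python with Pillow (filenames are placeholders, and Pillow's built-in median-cut quantizer stands in for whatever the tool actually does); the indexed result could then still be run through pngcrush for lossless gains:

    from PIL import Image

    # Reduce the image to a 256-entry palette and save it as an indexed PNG.
    # Converting to RGB first because Pillow's default median-cut quantizer
    # doesn't accept RGBA input; a smarter tool picks the palette more carefully.
    img = Image.open("input.png").convert("RGB")
    paletted = img.quantize(colors=256)      # build a palette, remap every pixel to it
    paletted.save("output.png", optimize=True)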
The plan was to stick with a native, supported, and easy-to-understand format; it was never about absolute filesize. Filesize crunching should be handled afterwards via RLE, deflate, or other output-format-specific encoding trickery, etc.
However, Wu's algorithm requires preprocessing the image into an R × G × B × A int array, which is 16GB of RAM if done at full RGBA quality, so in practice all implementations have to drop alpha and/or heavily posterize colors.
pngquant has the same goal — it subdivides the RGBA (hyper)cube to minimize variance in each section — but does it with much less memory and can do it at full quality.
Posterization of the input is my pet peeve, as it gives images the slightly banded and grainy look that we associate with "256-color" images (since VGA only ever supported 6 bits per gun), and makes us presume 256 colors are never enough for a photorealistic look — but they often are, and people who use e.g. the TinyPNG service think it's magic.
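For a back-of-the-envelope check on those numbers (my own arithmetic, not from any docs):

    # One 4-byte counter per possible color:
    full_rgba = 256 ** 4 * 4                 # every 8-bit RGBA combination
    print(full_rgba / 2 ** 30, "GiB")        # 16.0 GiB

    # The usual workaround: drop alpha and posterize to 5 bits per channel.
    posterized_rgb = 32 ** 3 * 4
    print(posterized_rgb / 2 ** 10, "KiB")   # 128.0 KiB -- hence the banding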
Unfortunately I never got emscripten to work on my machine. Could you give it a try and compile http://pngquant.org/lib?
More code to research: http://web.archive.org/web/20041010190217/http://www.geociti...
I'll check out bright183
Perhaps an easier way to think about it from a programmer's perspective: a decoder is a programming-language interpreter that takes a program and generates the data, and the encoder is something that generates a program in that language. So you can see that the decoder is quite straightforward, while the encoder has a lot of choices to make about how best to represent the data as a program in the decoder's language.
See, for example, Zopfli and ZIP, which are obviously not lossy.
The situation exists because there are many alternative ways to represent a given, fixed output file, so that the job of the encoder is to seek through this search space and try to find a size-optimal solution, or a good approximation thereof.
Just like there are many ways to encode data with a given codec/format, there are often many different ways (or tools used) to decode, some of which - as in the case of lossy codecs - may produce more "natural" or pleasing results than others. For example, some MPEG decoders will analyze motion vectors and interpolate extra spatial resolution that was not present in the original, thereby "upscaling" (e.g., from standard resolution to HD) the output. In this case, it's important to remember that data/detail is never created from nothing, but is instead "guessed", so it may produce nice-looking results, but it's also likely to introduce more artifacts (erroneous details that were actually not present in the original).
In this case you need to do vector quantization — choose 256 colors (decoder's hard limit) that best represent thousands of different colors in the source image, e.g. you have to decide whether to use a palette entry for just a single red pixel, or maybe sacrifice that pixel and use the palette entry to improve smoothness of a more visible blue gradient.
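To make that concrete, here's a rough Python sketch of that kind of box-splitting (a simplified median-cut-style heuristic, not pngquant's actual algorithm): keep splitting the group of colors whose pixels vary the most, so palette entries go where they reduce error the most.

    import numpy as np

    def pick_palette(pixels, n_colors=256):
        # pixels: (N, 4) float array of RGBA values from the source image.
        boxes = [pixels]
        while len(boxes) < n_colors:
            # Greedily split the box whose pixels vary the most (variance * count),
            # i.e. spend palette entries where they reduce error the most.
            i = max(range(len(boxes)),
                    key=lambda k: boxes[k].var(axis=0).sum() * len(boxes[k]))
            box = boxes.pop(i)
            if len(box) < 2:                  # nothing left worth splitting
                boxes.append(box)
                break
            ch = box.var(axis=0).argmax()     # channel with the widest spread
            box = box[box[:, ch].argsort()]   # sort along that channel
            mid = len(box) // 2
            boxes += [box[:mid], box[mid:]]   # median cut
        return np.array([b.mean(axis=0) for b in boxes])   # one color per box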
You can use all kinds of cleverness that is mostly unrelated to the actual image format itself to shrink the size of an image.
Here's one example: if an image uses millions of colours, you could drop it down to just using a few hundred colours (picking the nearest colour from your palette for each of the pixels) and hence shrink the filesize a whole bunch. There are lots of different approaches to picking that initial palette though - you might do it based on averaging out the pixel colours, or maybe you know something about human visual colour perception and can hence behave differently for colours that you don't think people will notice as easily.
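The "pick the nearest colour from your palette" step is then just a nearest-neighbour search per pixel; a deliberately naive NumPy sketch (hypothetical helper, and real tools are much smarter about search and perceptual weighting) might look like:

    import numpy as np

    def remap(image, palette):
        # image: (H, W, C) uint8 array; palette: (K, C) array of chosen colors.
        flat = image.reshape(-1, image.shape[-1]).astype(float)
        # Naive O(N*K) squared-distance search; perceptual weighting would
        # scale the channels differently before computing distances.
        dists = ((flat[:, None, :] - palette[None, :, :]) ** 2).sum(axis=2)
        idx = dists.argmin(axis=1)
        out = palette[idx].reshape(image.shape).astype(np.uint8)
        return idx.reshape(image.shape[:2]), out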
> Pngnq is an adaptation by Stuart Coyle of Greg Roelofs' pngquant, using Anthony Dekker's neuquant algorithm.
> Why another quantizer?
> Pngnq exists because I needed a lot (several thousand) of png images in RGBA format to be quantized. After some searching, the only tool I could find that worked for RGBA was pngquant. I tried pngquant but found that the median cut algorithm that it uses, with or without dithering, gave inferior looking results to the neuquant algorithm. You can see the difference demonstrated on the neuquant page.
pngquant2 should clearly beat pngnq.
2.0 is by now a totally different program, with expanded and improved algorithms (including bits of machine learning), and it uses 128 bits per pixel of precision (SIMD- and multicore-optimized).
* pngquant has adaptive dithering, which compresses better and produces less noisy images
* pngquant has a more predictable algorithm and usually gives better quality (pngquant aims for minimum squared error and guarantees a local optimum, while pngnq uses a neural-net algorithm that sometimes works well, but sometimes misses details or generates useless palette entries); see the MSE snippet after this list if you want to compare outputs yourself
* pngquant can automatically choose the number of colors required for the desired quality, and leave the image 24-bit if it would require too many colors.
* pngquant's algorithm is available as embeddable libimagequant
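If you want to compare the two tools' output on your own images, a tiny helper like this (my own sketch, not part of either tool) gives a rough numeric check matching the "minimum square error" criterion above; it says nothing about perceptual quality:

    import numpy as np
    from PIL import Image

    def mse(original_path, quantized_path):
        # Mean squared error per channel value, assuming both images
        # have the same dimensions; lower means numerically closer.
        a = np.asarray(Image.open(original_path).convert("RGB"), dtype=float)
        b = np.asarray(Image.open(quantized_path).convert("RGB"), dtype=float)
        return ((a - b) ** 2).mean()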
Official site: http://x128.ho.ua/color-quantizer.html
You can try yourself at http://tinypng.org
At 100x100 pixels, I find that this produces small (~8kb) PNG thumbnails which are still sharp with no noticeable color loss.
(Disclaimer: I'm a beginner at nodejs)
brew install pngquant
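FWIW, whether you drive it from node or anything else, shelling out to the binary is enough; here's a quick Python sketch, assuming pngquant is on your PATH and that your version supports the --quality and --output options (paths are placeholders):

    import subprocess

    # Run pngquant on an already-resized thumbnail.
    # --quality min-max asks for as few colors as will keep quality in that range;
    # --force/--output just control where the result is written.
    subprocess.run(
        ["pngquant", "--quality=65-80", "--force",
         "--output", "thumb.quant.png", "thumb.png"],
        check=True,
    )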
While a command-line tool is useful by itself, lossy compression is best tuned in a GUI, where you can see how much compression strikes the right balance between file size and loss of quality. ImageAlpha is amazing for this.
PNG is for graphics; JPEG is for photos.