

Lossy PNG - pornel
http://pngmini.com/lossypng.html

======
oofabz
Hey pornel, I'm glad you found my lossypng code useful. It's great to see the
algorithm made available to a wider audience.

~~~
pornel
Yes, thanks! It's awesome.

------
GravityWell
Not sure what the objective is, but using the "before" example 475kb lena pic,
a subsequent jpg at 61k looks very close to the original, and much better than
the 61k blurred png.

There is a reason why JPG has survived the test of time: it delivers a good
balance of quality and performance, and is well supported. Challengers like
JPEG 2000 have not gained much traction because JPG gets the job done.
[http://en.wikipedia.org/wiki/JPEG_2000](http://en.wikipedia.org/wiki/JPEG_2000)

~~~
pornel
JPEG is very good indeed, but it can't compete when transparency is needed.

I've chosen Lenna as an example image because it's a classic, rather than as
an example where lossy PNG can't be beaten.

But take any transparent image from Apple.com, and you can halve its size:
[http://imgur.com/a/VLlqG](http://imgur.com/a/VLlqG)

(and images from pngquant2 are even smaller, but a few of them become too
lossy).

~~~
graue
This should be in your article! :) It motivates the technique much better.

------
emptybits
The author's tool is excellent and achieves impressive ratios.

But once you open the "I can accept information loss" door, it might be
worthwhile to experiment with other image manipulations also. For example,
consider dropping color depth. Some images survive that process well.

Here's a 1-minute experiment: take the lenna.png image from the article, open
it in GIMP, posterize to 27 levels (or whatever you think is acceptable), and
export back to PNG... 43% savings.

~~~
pornel
That's the second option in the article (pngquant2).

Last time I checked GIMP's palette generation wasn't very good - it supported
only binary transparency and truncated bits unnecessarily.

I've wanted to replace GIMP's old algorithm, but haven't even managed to
compile all the prerequisites for the monstrous codebase :(

One more thing I do in pngquant2, which I haven't seen done anywhere else, is
dithering only the areas that need it, rather than the entire image. This
minimizes the noise added and makes files look and compress better.
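A toy 1-D sketch of that idea (illustrative only; this is not pngquant2's actual heuristic, and `threshold` here is an invented knob):

```python
def quantize(v, step=64):
    """Snap a value to a coarse palette grid."""
    return min(255, round(v / step) * step)

def selective_dither(row, step=64, threshold=10):
    """Diffuse quantization error into the next pixel only where the error
    is large; areas that already quantize cleanly get no added noise."""
    out = []
    err = 0.0
    for v in row:
        q = quantize(v + err, step)
        e = (v + err) - q
        err = e if abs(e) > threshold else 0.0
        out.append(q)
    return out

flat = selective_dither([128, 128, 128, 128])  # on-palette: left untouched
gray = selective_dither([100, 100, 100, 100])  # off-palette: gets dithered
```

The flat row compresses exactly as before (no injected noise), while only the off-palette row pays the dithering cost.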

------
SolarUpNote
This is a godsend. Thank you to the developer, I'm already using it. It's cut
transparent png-24s by 60% with no perceivable difference. Graphic designers
rejoice!

------
gweinberg
I've never understood how a format can be "lossy" or "lossless" in the first
place. It seems to me that the format specifies how the image data should be
rendered on a display device, and any "loss" that occurs is always a result of
the encoding process. For that matter, the concept of loss only makes sense if
you start with a pixmap, and that is not always the case.

~~~
cdumler
By definition, a CODEC starts with some series of bits (source) and encodes
the information into a new series of bits (encoding). A CODEC is said to be
"lossy" when the result of decoding the encoding does not match the source,
i.e., information encoded in the source stream gets dropped.

Various techniques are used to accomplish this. One end of the scale is to
simply degrade the quality of the original. The other is to exploit
visual/acoustic changes humans have difficulty recognizing. For instance, the
human ear has a minimum perceptible distance in frequency and loudness between
two tones. MP3 drops the softer tone depending on the "compression" level;
thus, from a source-fidelity standpoint the encoding is lossy, since the
encoded form no longer has the original's content. From a human-hearing
standpoint, the loss is barely noticeable to most people.
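To make "lossy by definition" concrete, here's a toy codec that simply quantizes 8-bit samples (nothing like a real MP3 encoder, just the definition in code):

```python
def encode(samples, step=16):
    """Crude lossy encode: keep only the high bits of each sample."""
    return [s // step for s in samples]

def decode(codes, step=16):
    """Decode to the middle of each quantization bucket."""
    return [c * step + step // 2 for c in codes]

source = [3, 18, 200, 255]
restored = decode(encode(source))
# restored differs from source: the low bits were dropped at encode time
# and no decoder can recover them. That mismatch is what "lossy" means.
```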

~~~
iso-8859-1
Here's the issue I think gweinberg has: Imagine you encode an image to JPEG,
with the "quality slider" cranked all the way to 11 (or whatever the maximum
is). Now, the DCT blocks are tiny and the compression is super inefficient.
But it exactly reproduces the input image, and you may say it's lossless. The
same image encoded with PNG (deflate) might be way smaller, but that still
does not change the fact that JPEG losslessly encodes this image.

------
mmastrac
I haven't used pngquant in years (the last time was for supporting PNG
transparency in IE6), but I happened to use the excellent linked tool
ImageAlpha on an HTML5/Flash game we are in the middle of shipping, and
managed to cut 1.2MB out of 7+MB of assets. That's pretty impressive.

The interesting thing is that the blur filter gave zero benefit, but using
pngquant's "median cut" to palettize the images gave some pretty impressive
gains.

------
homosaur
This is cool code but can someone explain why we'd need this? Do we not have a
common lossy format that supports transparency? My primary concern is that
people have an expectation of what PNG means and this completely subverts
that.

~~~
metafunctor
That's exactly it: we do not have a commonly implemented lossy format that
supports transparency.

~~~
svantana
Aren't you forgetting GIF? Not a great format, admittedly, but in Photoshop
there's even a "lossy" slider.

~~~
xsmasher
The GIF format is not lossy. There is no "quality" option in the format
itself; just a color depth option, same as PNG.

Photoshop uses the same tricks for "lossy" gif that this article uses for PNG.

~~~
pornel
I've researched lossy GIF too:
[http://pornel.net/lossygif](http://pornel.net/lossygif) but even the lossy
version wasn't better than PNG.

LZW is just very poor compression. Even the best case takes a ridiculous
number of bits (you can only extend a previously used pattern by one byte, so
it's a sequence of symbols covering 1+2+3+4+5+etc. pixels, resetting every 4K
iterations), so there isn't even room for improvement.

So, GIF is awful. Should be forgotten.

~~~
SeppoErviala
The only reason GIF is still alive is its animation support. APNG and
animated WebP are not there yet.

8-bit palette and crappy compression do not matter when you offer exclusive
features.

------
medell
My first thought: "This would be great in ImageOptim". Then I noticed you are
also the author of both awesomes.

~~~
pornel
Yes, I'd like to add it to ImageOptim. However, I don't want anybody to
accidentally ruin their source images, so I'm looking for an elegant UI for
lossy optimizations. Feedback welcome:

[https://github.com/pornel/ImageOptim/issues/17](https://github.com/pornel/ImageOptim/issues/17)

------
tomerv
On my browser (the default browser on a Galaxy Nexus) the palette example and
posterization example (the tree and mask pictures) have weird scan lines. I
guess palette support is not perfect in every browser.

~~~
qoiweu
The demo images have a scrolling striped background added with
background-attachment: fixed, to make the alpha channel more visible.

background-attachment: fixed is a bit buggy in Android Firefox, but the
images themselves are fine.

~~~
tomerv
Okay, I understand now. Thanks for the explanation!

The page should explain this, since it looks a little weird.

------
StefanKarpinski
I can't seem to find a license file in the GitHub repo. Since no permissions
are granted by default, this is effectively not open source. I've opened a
GitHub issue requesting the addition of a LICENSE file:
[https://github.com/pornel/ImageAlpha/issues/9](https://github.com/pornel/ImageAlpha/issues/9).

------
ronjouch
I love PNGQuant. Linux users with GNOME and Nautilus may enjoy this small
Nautilus script that will let you call it from a right click in Nautilus:

[https://gist.github.com/ronjouch/6258621](https://gist.github.com/ronjouch/6258621)

EDIT: screenshot of what it looks like:
[https://dl.dropboxusercontent.com/u/368761/bugreport/pngquan...](https://dl.dropboxusercontent.com/u/368761/bugreport/pngquanter.png)

------
mistercow
This is very cool. I've wondered for a long time if sophisticated, predictor-
aware lossy compression could be done with PNG. I'd love to read a paper on
how this works.

~~~
kevingadd
I'm not sure how far you'd get; since PNG's predictor selection is, IIRC,
only done on a scanline basis, you don't have as many opportunities to really
leverage prediction. Lossy image compression typically operates using a
combination of planar modifications (like subsampling) and block-level
modifications.

On the flip side, part of why PNG does so well on lossless compression is the
scanline-oriented predictor selection - it allows PNG to beat regular GZIP by
identifying ways to losslessly compress 2D image patterns that aren't
necessarily obvious in the 1D stream of bytes being sent to the compressor.
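A simplified model of that per-scanline selection (the standard PNG filter heuristic in spirit, not libpng's code):

```python
def paeth(a, b, c):
    """PNG's Paeth predictor: the neighbor (left, above, upper-left)
    closest to a + b - c."""
    p = a + b - c
    pa, pb, pc = abs(p - a), abs(p - b), abs(p - c)
    if pa <= pb and pa <= pc:
        return a
    if pb <= pc:
        return b
    return c

def filter_scanline(prev, cur):
    """Try the four predictive PNG filters on one scanline and keep the
    one with the smallest residual sum (minimum sum of absolute
    differences, the usual heuristic)."""
    left = lambda i: cur[i - 1] if i else 0
    upleft = lambda i: prev[i - 1] if i else 0
    n = len(cur)
    candidates = {
        "Sub": [(cur[i] - left(i)) & 0xFF for i in range(n)],
        "Up": [(cur[i] - prev[i]) & 0xFF for i in range(n)],
        "Average": [(cur[i] - (left(i) + prev[i]) // 2) & 0xFF
                    for i in range(n)],
        "Paeth": [(cur[i] - paeth(left(i), prev[i], upleft(i))) & 0xFF
                  for i in range(n)],
    }
    def cost(res):  # treat residual bytes as signed
        return sum(r if r < 128 else 256 - r for r in res)
    name = min(candidates, key=lambda k: cost(candidates[k]))
    return name, candidates[name]

# A smooth gradient scanline is predicted almost perfectly:
name, residuals = filter_scanline([10] * 8, [10, 11, 12, 13, 14, 15, 16, 17])
```

Because the choice is made one whole scanline at a time, a lossy encoder can't tune individual pixels against a known block predictor the way DCT-based codecs can, which is the point above.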

------
braxton
I think the goal should be lossless formats for audio, video, and images, so
that the loss that keeps accumulating as these objects are recompressed with
lossy formats doesn't occur. PNG was that for images, but now someone is
suggesting a lossy variant. Of course it looks fine the first time, but when
the next compression format comes out and people move to it, you incur
another generation of loss.

------
emptybits
Interesting but misleading. The author applied a blur filter which allowed the
non-lossy PNG format to compress better.

That's not lossy PNG. The information is lost in the blur (or other pre-
process) before PNG gets ahold of it.

~~~
Terretta
_Interesting but misleading. The author applied a blur filter which allowed
the non-lossy PNG format to compress better. That's not lossy PNG._

_The information is lost in the blur (or other pre-process) before PNG gets
ahold of it._

As I understood it, that's the opposite of what he's saying. He's saying the
blur helps restore info.

From the description, I understood that the information is lost by
deliberately omitting certain pixels that PNG rendering will try to restore
from adjacent pixel data, and that he did a diagonal blur to influence the
adjacent pixels to contribute better data to the missing ones. Blurring
spreads info into diagonally adjacent pixels, improving the results of using
them to reconstruct the missing ones.

// The source is invoking three png utils, so I didn't dig into what it's
really doing; I'm just saying I think the explanation is that _reconstructing
pixels saves space_, and reconstruction is improved by letting diagonal
neighbors carry more info about the reconstructed pixel through blur.

~~~
maaku
Incorrect. PNG does not do reconstruction, just delta compression. By applying
a diagonal blur, he is removing entropy in a way that the lossless compressor
is likely to take advantage of.

~~~
Terretta
I didn't say PNG did anything, so I'm neither correct nor incorrect about
what PNG does (though on second read, "compressing" is a better word than
"omitting"; either way, the "guessed" pixel is where the information goes
missing). I'm disputing the parent's claim that the author said the savings
came from blur. He didn't. He said:

> _PNG has an ability to “guess” pixels based on their top and left neighbors
> and successful guesses compress to almost nothing. Usually only few pixels
> match a guess, but the latest ImageAlpha's “Blurizer” option manipulates
> image data to match the guesses, making compression much much more
> effective._
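That mechanism is easy to see with zlib directly: residuals where every guess matched (all zeros) cost almost nothing, while unpredictable residuals barely shrink. (A toy illustration; not ImageAlpha's code.)

```python
import hashlib
import zlib

# 1000 bytes of "every pixel matched the guess" residuals, vs. roughly the
# same amount of unpredictable residuals (deterministic pseudo-random bytes).
matched = bytes(1000)
unmatched = b"".join(hashlib.sha256(bytes([i])).digest() for i in range(32))

small = len(zlib.compress(matched))    # a handful of bytes
large = len(zlib.compress(unmatched))  # barely smaller than the input
```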

~~~
maaku
What he said is correct.

------
DarkStar851
The only time I use PNG is when I'm aiming for pixel-crisp; anything else is
JPEG where it fits. I LIKE my PNGs pixel-crisp, and the only substitute I
could think of is putting PNGs into /lossless and /lossy folders
individually.

Maybe a quick JS speed test, and if possible switch all your lossy to
lossless? It would look weird on load without a splash screen, though.

~~~
pornel
Lossy PNG tends to preserve crisp edges.

------
fphilipe
I was able to shrink an iOS app from 9MB to 4.5MB by using ImageAlpha +
ImageOptim. There's a case study about how TweetBot did the same
([http://imageoptim.com/tweetbot.html](http://imageoptim.com/tweetbot.html)).

------
rocky1138
What's the status on browser adoption of APNG? Last time I tried it (a couple
years ago) it required a Chrome extension.

For reference: [http://www.reddit.com/r/apng](http://www.reddit.com/r/apng)

~~~
sltkr
The situation has probably worsened. Opera lost out-of-the-box APNG support
with the switch to WebKit. Firefox has lost market share to Chrome on the
desktop, and was never big in the (still growing) mobile market.

There is a simple Chrome extension that adds APNG support, but anything that
is not installed by default is unlikely to be used by most users. 90% of
Chrome users don't even install an ad blocker!

~~~
kibibu
> 90% of Chrome users don't even install an ad blocker

You say that like it's crazy.

I don't install an ad blocker because I don't mind sites earning money for
their content.

------
soheil
Would love to see this added to the PageSpeed Apache module.

------
whiddershins
I've used this a bunch for iOS development. I keep meaning to ask pornel: we
need batch commands! :-)

~~~
nacs
The linked page provides this:

"Batch processing Available from command line. ImageAlpha is based on
pngquant. You'll find compiled pngquant executable in
ImageAlpha.app/Contents/Resources directory."

So you should be able to script that.

Also for people on non-Mac systems, it appears this is available for Linux /
Windows as well:
[https://github.com/pornel/pngquant](https://github.com/pornel/pngquant)

Edit: Better link to pngquant: [http://pngquant.org/](http://pngquant.org/)

