
Using ImageMagick to make sharp web-sized photographs - n_e
http://even.li/imagemagick-sharp-web-sized-photographs/
======
ivank
Part of the reason rescaled images look dim and blurry is because rescaling
software usually assumes gamma 1.0 instead of 2.2:
[http://www.4p8.com/eric.brasseur/gamma.html](http://www.4p8.com/eric.brasseur/gamma.html)

There are many more good essays on image rescaling here:
[http://entropymine.com/imageworsener/](http://entropymine.com/imageworsener/)
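The effect is easy to verify without any imaging tools. Below is a minimal Python sketch (mine, not from either link) of why averaging gamma-encoded sRGB values darkens an image, using the classic black-and-white example from Brasseur's essay:

```python
# Illustrative sketch of the gamma error described above (not ImageMagick's
# actual code): averaging two sRGB pixel values directly vs. averaging in
# linear light and converting back.

GAMMA = 2.2

def naive_average(a, b):
    # Wrong: treats gamma-encoded sRGB values as if they were linear intensities.
    return (a + b) / 2

def linear_average(a, b):
    # Right: decode to linear light, average, then re-encode.
    lin = ((a / 255) ** GAMMA + (b / 255) ** GAMMA) / 2
    return 255 * lin ** (1 / GAMMA)

# Averaging pure black and pure white:
print(round(naive_average(0, 255)))   # 128 -- too dark
print(round(linear_average(0, 255)))  # 186 -- perceptually correct
```

In ImageMagick the usual workaround is to resize in linear light, e.g. `convert in.jpg -colorspace RGB -resize 900x -colorspace sRGB out.jpg` on IM 6 -- but check the docs for your version, since the colorspace names changed between releases.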

~~~
Camillo
Preview.app in OS X scales the picture correctly. I think any program that
uses Cocoa/CoreImage/CoreGraphics etc. to scale images on OS X should get the
right result.

However, Safari ignores gamma and produces a flat gray rectangle when scaling
down the Dalai Lama image, like Firefox does. I seem to remember that Safari
used to do proper gamma correction in earlier versions, but they were
pressured to abandon it because it was more important to give the result
expected by websites.

------
LukeShu
Dissecting a bad oneliner:

    
    
        ls *.jpg|while read i;do gm convert $i -resize "900x" -unsharp 2x0.5+0.5+0 -quality 98 `basename $i .jpg`_s.jpg;done
    

The part I take issue with is the unnecessary invocation of `ls`. The shell
does the glob expansion. A `for` loop is better suited for this. Also, if
using bash,* we can get rid of basename.

    
    
        for file in *.jpg; do gm convert "$file" -resize "900x" -unsharp 2x0.5+0.5+0 -quality 98 "${file%.jpg}_s.jpg"; done
    

* zsh fans, I'm sure it works there too.

~~~
notaddicted
As long as we're _bashing_, it could easily be written looplessly, like so:

    
    
        basename -s .jpg -a *.jpg | 
          xargs -I{} convert "{}.jpg" \
           -resize "900x" -unsharp 2x0.5+0.5+0 -quality 98 "{}_s.jpg"
    

EDIT: weird line breaking is to prevent sidescroll box

~~~
rcfox
You could also easily parallelize this with xargs' -P option.

------
paulirish
mod_pagespeed (and ngx_pagespeed) can automate this for all your images. If
you have inline width/height or inline style dimensions, it will do the
resampling on your behalf:
[https://developers.google.com/speed/pagespeed/module/filter-image-optimize](https://developers.google.com/speed/pagespeed/module/filter-image-optimize)

The advantages of having correctly sized imagery are immense, but in short:
something like 70% smaller total byte payload for mobile, huge reduction in
image decode & resize cost, better scroll performance, and faster relayouts
when orientation changes or browser window changes.

(And I do think browsers themselves should resample scaled images like this to
achieve equal quality, but your users will benefit way more if assets are
delivered well to begin with.)

~~~
smcnally
Where are the resizes stored when this directive is used?

    
    
       pagespeed EnableFilters resize_images;

~~~
dangrossman
Probably /var/mod_pagespeed/cache, though I don't have it installed right now
to check.

------
vladstudio
My solution: [https://github.com/vladstudio/Vladstudio-smart-resize-Bash-script](https://github.com/vladstudio/Vladstudio-smart-resize-Bash-script)

The biggest problem when scaling down an image is finding the right settings
for resampling and sharpening. After many experiments, I found it impossible
to achieve good results by simply running convert with a single line of
arguments. So I came up with this script, which basically does the following:

* configures -interpolate bicubic -filter Lagrange;

* resizes the source image to 80%;

* applies -unsharp 0.44x0.44+0.44+0.008;

* repeats steps 2 & 3 until the target size is reached.
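A hypothetical Python sketch of that stepwise schedule (sizes only; the real script invokes `convert` with the filter and unsharp settings above at each step):

```python
# Sketch of the stepwise approach: repeatedly shrink by 80% until the target
# width is reached, which gives the sharpening pass several chances to work
# at intermediate scales. This only computes the size sequence.

def resize_steps(width, target):
    """Return the sequence of intermediate widths: 80% steps, then the target."""
    steps = []
    while width * 0.8 > target:
        width = round(width * 0.8)
        steps.append(width)
    steps.append(target)  # last step goes straight to the target size
    return steps

print(resize_steps(3000, 900))  # [2400, 1920, 1536, 1229, 983, 900]
```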

~~~
free
This is fine if you have a limited set of images that you handpick and
optimize.

Do you have any suggestions when there are thousands of images and all this
needs to be automated?

~~~
natch
He has an example at his github link where he uses a simple bash for loop to
do many images at once.

~~~
medde
but it does seem inefficient if you often have lots of images to process

~~~
natch
how so?

------
dietrichepp
Just for the record, the "unsharp mask" filter doesn't really sharpen; it
increases edge contrast, which makes the image _seem_ sharper. It _is_
something you should only apply to the final version of an image, after all
resizing has been done, however.
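For the curious, here is a rough 1-D Python sketch (my own illustration, not ImageMagick's implementation) of that edge-contrast effect: subtract a blurred copy from the original and add the difference back, which produces over/undershoot at edges:

```python
# A minimal 1-D sketch of what "unsharp mask" does: amplify the difference
# between the original and a blurred copy. This boosts local contrast at
# edges without adding any real detail.

def box_blur(signal, radius=1):
    """Simple box blur; windows are clamped at the edges."""
    n = len(signal)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def unsharp_mask(signal, amount=1.0, radius=1):
    blurred = box_blur(signal, radius)
    return [s + amount * (s - b) for s, b in zip(signal, blurred)]

# A soft step edge: values overshoot on both sides after "sharpening",
# which is exactly the increased-edge-contrast effect.
edge = [0, 0, 0, 50, 100, 100, 100]
print([round(v) for v in unsharp_mask(edge)])  # [0, 0, -17, 50, 117, 100, 100]
```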

~~~
neilk
Sharpening improves "acutance"; your perception of boundaries.
[https://en.wikipedia.org/wiki/Acutance](https://en.wikipedia.org/wiki/Acutance)

Unsharp masking is mostly aesthetic. For an image like the example, a
mountainside, it gives you a _feeling_ of crisp details, but there isn't any
more data there. It looks great on a landscape, but it can be disastrous on a
portrait of a person (pores and stubble will be highlighted, usually in an
unpleasant way).

(EDIT: incorrect assertion about unsharp masking before or after scaling
removed)

Sometimes it's better to unsharp-mask it to a degree that looks slightly
oversharpened at a large size, but looks great when reduced - especially for
very small images, like avatars or other icons.

~~~
dietrichepp
> I have found that unsharp mask is best applied before you scale the image.
> Shrinking the image first will eliminate some of the details that you wanted
> to enhance in the first place.

Unsharp mask doesn't work that way: you get the same results applying it
before or after scaling, as long as you adjust the radius accordingly.

The reason I suggested applying it after scaling is because aesthetically
pleasing settings don't change much as the image size changes.

~~~
neilk
I tested this with a couple of images and you are right. Except for a few
errant pixels here and there the images are precisely identical. Sorry for the
misinformation.

------
free
Tangential to the point being made, 98 seems way too high a quality setting
for an image shown on the web; 85 seems a decent tradeoff. Saving at 98 might
actually increase the file size.

~~~
Steuard
I agree: that jumped out at me enough that I came here to make the same
comment. I can hardly think of a case where I'd want to use a 98 quality
setting on a JPEG. Maybe it would be a good choice if I wanted to preserve an
almost-pristine copy of an original, but I'd compare the resulting file size
to a lossless PNG first. Every quality step from 100 down to 95 gives a huge
benefit in file size, and going from 95 to 90 almost always seems like a hefty
savings for imperceptible differences, too. I usually save web images at
quality settings between 70 and 90, and I've never felt like I'm losing by it.

~~~
crucio
This also jumped out at me. My company
([http://www.firebox.com](http://www.firebox.com)) is built on having amazing
looking images for products. No one could see any difference between the
quality of images between 87 and 95, but it saved us roughly 50% in file size.
Also, for what it's worth, we spent a lot of time A/B testing 87 vs 95 with
our users, but there was no conclusive difference.

~~~
cheald
We use GraphicsMagick at 92, as dipping into the 80s has an annoying tendency
to introduce visible noise on roughly 1 in 50 images. It's very annoying
to ship that extra bandwidth.

Interestingly, we've done WebP support, and while the files are smaller across
the board, visual quality deteriorates _really_ quickly once you start
dropping that quality value, even in small increments.

------
sengstrom
The sharpened version looks over-sharp to me. You may find that a level or
curves adjustment is more what this image needs to pop a little.

~~~
n_e
Yup, I over-sharpened it a little so the effect is more obvious.

Though the amount of sharpening can depend on the context: photographers tend
to be annoyed by over-sharp images, while on a marketing website it might be a
good idea to make the pictures pop.

------
eCa
Trivia: The example image is taken at the Col de l'Iseran which, at 2770
metres, is the highest tarmacked road pass in Europe.

------
ppradhan
ImageOptim has been my best buddy; I've been getting very agreeable results
with it. It's just for compression, but the end results don't seem to need
sharpening. Using ImageMagick this way might be best for special cases...
[http://imageoptim.com/](http://imageoptim.com/)

~~~
kawsper
ImageOptim is not just for compression; it also strips useless metadata and
color palettes, among other tricks.

For optimal images, you could do something like this:

1) Resize the image (and apply the trick from this article) using a proper
quality setting (anywhere between 85 and 95)

2) Run it through Imageoptim

If you want a commandline version of ImageOptim, I have had good results with
[https://github.com/toy/image_optim](https://github.com/toy/image_optim)

~~~
ppradhan
ah.. right on. cheers!

------
a_c_m
Has anyone played with / tested this with regard to the responsive image
hack/trick by the Filament Group:
[http://filamentgroup.com/lab/rwd_img_compression/](http://filamentgroup.com/lab/rwd_img_compression/)

~~~
andrewmunsell
I've actually started using this method, and it seems to work pretty well. It
reduces file sizes significantly and yet the quality is fairly good. If you
pay attention, you can see some quality loss/artifacts, though it's not bad.

An example:

[http://www.andrewmunsell.com/blog/](http://www.andrewmunsell.com/blog/)

Scroll down to the "Now is the Future" blog post. The cover image of the
Seattle skyline is 1700x666 and ~65kb (though, it's been recompressed into
WebP by mod_pagespeed if your browser supports it). The JPG (before WebP
conversion) is ~88kb.

To see it without the recompression and turn mod_pagespeed off, you can look
at the image on the page:

[http://www.andrewmunsell.com/blog/?ModPagespeed=off](http://www.andrewmunsell.com/blog/?ModPagespeed=off)

------
speeder
For my games, I found that the best thing when scaling is to use IM's Lanczos
filter.

Granted, my games use high-res hand-drawn vector-ish art (not pixel art, nor
photo- or paint-style art), so I dunno if this is applicable to photos.

~~~
anonova
ImageMagick already uses a Lanczos resampling filter when downsampling.
Lanczos inherently preserves sharp transient data, e.g., sharp edges in
images, but it looks like the author wanted something more.

------
cientifico
I think the script to apply it to a folder could be clearer with:

    
    
        for image in *.jpg ; do gm convert $image -resize "900x" -unsharp 2x0.5+0.5+0 -quality 98 `basename $image .jpg`_s.jpg ; done

~~~
fragmede
The script on the linked page deals with filenames with spaces while that for
loop does not.

Neither of them deal with filenames with \n in them, eg 'pretty

sunset.jpg'

For that, you'll want to run find:

    
    
        find . -name '*.jpg' -exec sh -c 'convert "$1" -resize "900x" -unsharp 2x0.5+0.5+0 -quality 98 "${1%.jpg}_s.jpg"' sh {} \;

------
Camillo
In my experience, scaling down an image makes it look _sharper_. A blurry
image will often look perfectly fine when sufficiently scaled down; which is
not surprising, because a blur with a five-pixel radius at camera size can
easily become a fraction of a pixel at web size. Similarly, scaling down
counteracts camera noise, because each output pixel averages out the noise
between several different sensor pixels.

Of course, you can always make them even sharper with filters if you want.
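The noise-averaging claim is easy to check numerically; a small Python sketch (my own, with made-up numbers):

```python
# A quick sketch of the claim above: each downscaled pixel is the mean of
# several sensor pixels, so independent noise shrinks by roughly sqrt(N)
# for an N-pixel average.

import random
import statistics

random.seed(42)  # fixed seed so the demo is repeatable

# 10,000 "sensor pixels": flat gray (value 128) plus gaussian noise.
noisy = [128 + random.gauss(0, 10) for _ in range(10_000)]

# Downscale 4:1 by averaging blocks of 4 pixels.
downscaled = [sum(noisy[i:i + 4]) / 4 for i in range(0, len(noisy), 4)]

before = statistics.stdev(noisy)
after = statistics.stdev(downscaled)
print(round(before, 1), round(after, 1))  # noise roughly halves (sqrt(4) = 2)
```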

------
3amOpsGuy
Reduce the quality quotient (severely, like .98 -> .3) and don't cut the
resolution as far, e.g. keep 2x the pixels.

Much better appearance for the same file size.

------
ansgri
Could also be useful to downscale images using the 'high contrast downscale'
filter (as we call it in our lab; maybe this isn't the common name). For each
set of pixels that becomes one, you compute the min, max and mean, and select
whichever of {min, max} is closer to the mean.

Unfortunately, I don't have any examples handy.
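In lieu of image examples, here is a rough 1-D Python sketch of the rule as described (the function name is mine; this isn't a standard filter API):

```python
# Sketch of the "high contrast downscale" idea: for each block of pixels
# collapsed into one, keep whichever of min/max is closer to the block's
# mean, preserving hard edges instead of smearing them like averaging would.

def high_contrast_downscale(pixels, factor):
    """Downscale a 1-D list of pixel values by an integer factor."""
    out = []
    for i in range(0, len(pixels) - factor + 1, factor):
        block = pixels[i:i + factor]
        lo, hi = min(block), max(block)
        mean = sum(block) / len(block)
        # Pick the extreme closer to the mean (ties go to the minimum).
        out.append(lo if mean - lo <= hi - mean else hi)
    return out

# A near-black-to-white edge stays a hard edge after 4x downscaling,
# where plain averaging would give the muddier [7.5, 246.25]:
print(high_contrast_downscale([0, 0, 0, 30, 220, 255, 255, 255], 4))  # [0, 255]
```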

------
free
One problem I recently encountered is that some images refused to load in IE8.
Turns out that IE does not support CMYK color space images, and the image
appeared significantly different in Chrome and Firefox. It was quite
surprising, since I had assumed that JPEG was a standard format and would be
supported everywhere.

~~~
erikig
I run into this same issue when creating previews from print comps. You should
be able to specify a colorspace to resolve that:
[http://blog.rodneyrehm.de/archives/4-CMYK-Images-And-Browsers-And-ImageMagick.html](http://blog.rodneyrehm.de/archives/4-CMYK-Images-And-Browsers-And-ImageMagick.html)

~~~
free
Yeah, but changing the color space to RGB also significantly alters the way it
appears. Then I have users who complain that this is not what they uploaded,
and it's a genuine complaint.

I was going to suggest that the real problem is people using IE8, but that
wouldn't go over too well, I guess.

------
dbbolton
I think before _and_ after shots would have been helpful.

~~~
tobyjsullivan
Although technically there are, the after shot is not available for those of
us on mobile.

"If you hover the mouse on the image, you'll see the sharpened version."

------
geon
I have noticed that sharpening an image makes it look _worse_ on
high-resolution displays like the iPhone 4+ or MacBook Retina.

------
kmfrk
Is this to be regarded as image quality preservation or a photo manipulation
that increases sharpness "artificially"?

~~~
ansgri
The latter.

------
coherentpony
It'd be great if you put the two versions in this post so we can see the
difference without having to run all of the commands first.

~~~
dageshi
Apparently if you mouse over the image, the sharpened image overlays the
original. (No, I didn't pick that up either; it was mentioned in another
comment :)

~~~
gambogi
But it totally breaks when you can't hover over the image... like on mobile

------
Void_
Flickr does this, don't they? I always wondered why images on Flickr look
sharper.

------
wfunction
Is this tool doing anything other than a deconvolution?

~~~
susi22
I would guess it doesn't even do deconvolution, but only adds a simple
high-pass filter. That would explain the parameter choices (sigma is the std.
dev.).

------
dbcooper
madVR uses the ImageMagick jinc with anti-ringing filter to upscale video.
Looks pretty good. :)

------
praveenhm
Really, who has time to do all this, image by image? I use Aperture, which is OK.

~~~
gus_massa
Read the bottom of the article:

 _> How do I resize a whole folder of images? [...]_

