
Essential Image Optimization - hunvreus
https://images.guide/
======
iamphilrae
Even lower-hanging fruit than optimising compression is choosing the right
image format in the first place. Put simply (very simply, I might add): if
it’s an illustration, use PNG; if it’s a photo, use JPEG. Once we’re over that
hurdle, we can go on to managing compression techniques and attempting advanced
formats such as SVG and WEBP.

All too often I’ve seen people encoding 8MP photos as PNGs and wondering why
they’re all 50MB in size.

Next on the hit list is outputting images at the right size. If the image is
never going to be shown at more than 500px wide, save a copy that’s 500px
wide. WordPress has this feature built in – learn to use it if that’s your
flavour of CMS. Your bounce rate will go down, I guarantee it.
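
For anyone not on WordPress, a minimal sketch of the same "save a copy at the display size" idea with Pillow (the file names, the 500px target and the quality value are just illustrative placeholders):

```python
from PIL import Image

# Open the original photo and produce a copy no wider than 500px.
# thumbnail() keeps the aspect ratio and never upscales.
img = Image.open("hero-original.jpg")   # hypothetical source file
img.thumbnail((500, 500))
img.save("hero-500.jpg", quality=82, optimize=True)
```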

~~~
bscphil
This is not as surprising as it sounds to a web developer. For high-quality
photography purposes JPEG is bordering on useless, even more so if one isn't
using MozJPEG or Guetzli to get images with reduced artifacts. Even for the
most basic uses, like sharing a photo with friends and family, this _can_ be a
problem. For example, I sent out an album of travel photos and a family member
bought me a poster-size print of one of them for Christmas ... using the
3200x2400 JPEG I had posted to Google Photos. So while JPEG makes sense if
you're a web designer using things like big hero or background images, there
are a lot of ways in which it simply doesn't cut it for most of us.

Fortunately, and _I can't emphasize this enough_, we're now in a situation
where the major browsers support WebP or can have fast decoding support easily
added. WebP is based on the I-frame encoder of VP8, and for lossy encoding
represents a several-generations improvement over JPEG. And _it's here
today_. Sure, we'll all be using AVIF in a HEIF container one of these days,
but that isn't relevant to people building systems today.

Use WebP! For everything! Check out some comparisons between WebP and MozJPEG here: [https://xooyoozoo.github.io/yolo-octo-bugfixes/#buenos-aires...](https://xooyoozoo.github.io/yolo-octo-bugfixes/#buenos-aires&jpg=s&webp=s)

I now upload WebP and HEIC images to Google Photos, where they are both
supported.
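
If you want to try the WebP route in your own pipeline, a minimal sketch with Pillow (it assumes a Pillow build with libwebp; the file name and quality value are placeholders):

```python
from PIL import Image

# Re-encode an existing JPEG as lossy WebP.
img = Image.open("travel-photo.jpg")  # hypothetical source file
img.save("travel-photo.webp", "WEBP", quality=80, method=6)  # method=6: slowest, best compression
```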

~~~
vladdanilov
> For high quality photography purposes jpeg is bordering on useless

> Use WebP! For everything!

Jyrki Alakuijala, one of the creators of WebP, on WebP vs JPEG [1]:

>> For high quality photography, I (and butteraugli) believe that JPEG is
actually better than WebP.

>> Below JPEG quality 77 WebP lossy wins, above JPEG quality 77 JPEG wins (for
photography).

>> This was based on the maximum compression artefact in an image -- averaging
from 1000 images.

Better meaning here [2]:

>> Faster decode (up to around 6x faster) and less bytes needed at high
quality (in comparison to butteraugli scores).

[1] [https://encode.ru/threads/2905-Diverse-third-party-ecosystem...](https://encode.ru/threads/2905-Diverse-third-party-ecosystem-for-optimization-of-webp-and-JPEG-XR-images?p=55785&viewfull=1#post55785)

[2] [https://encode.ru/threads/2905-Diverse-third-party-ecosystem...](https://encode.ru/threads/2905-Diverse-third-party-ecosystem-for-optimization-of-webp-and-JPEG-XR-images?p=55757&viewfull=1#post55757)

~~~
bscphil
"For high quality photography, I (and butteraugli) believe that JPEG is
actually better than WebP."

That's interesting. Of course, the subjective part of that is one person's
take, and the "objective" part of it is pointless because the whole point of
Guetzli (the JPEG encoder) is to optimize for the Butteraugli metric, so
saying that WebP gets a worse score is not significant.

Personally, WebP looks a lot better to me in the direct tests of equal file
size that I've seen. It even looks better than Pik, which is Google's
experimental successor to Jpeg that also uses Butteraugli.

And it would be odd, to say the least, if a codec from the early nineties
could beat a modern one on Intra-frame coding, which has been a subject of
immense research over the years.

Take a look at some of these for yourself. [https://wyohknott.github.io/image-formats-comparison/](https://wyohknott.github.io/image-formats-comparison/)

~~~
lstamour
Like MP3 encoders, JPEG encoders have only gotten better over the years; perhaps they have fewer bugs or make fewer compromises for compression? Also, there are newer standards from the JPEG committee, including JPEG 2000 and JPEG XR, plus many other alternatives: [https://developers.google.com/web/fundamentals/performance/o...](https://developers.google.com/web/fundamentals/performance/optimizing-content-efficiency/automating-image-optimization/#how-far-have-we-come-from-the-jpeg)

So my advice is to encode in multiple formats to achieve the broadest browser
support and the best image quality/size trade-off that you’re willing to
allow. That said... it does sort of bug me that every couple of years we have to
revisit which codecs we’re using because the implementations keep marching
on...
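
A rough sketch of the "encode in multiple formats" approach with Pillow; the format list, quality settings and file names here are assumptions, not recommendations:

```python
from PIL import Image

# Emit the same source image in several formats so the front end
# (or server-side content negotiation) can serve the best-supported one.
src = Image.open("photo-original.png")  # hypothetical source file
for fmt, ext, opts in [
    ("JPEG", "jpg", {"quality": 85, "optimize": True, "progressive": True}),
    ("WEBP", "webp", {"quality": 80}),
    ("PNG", "png", {"optimize": True}),
]:
    out = src.convert("RGB") if fmt == "JPEG" else src  # JPEG has no alpha channel
    out.save(f"photo.{ext}", fmt, **opts)
```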

------
toastal
One thing people don't think about with some of these optimizers is that they
strip the color profile and all metadata. The caveat here is that you can
lose color accuracy/range with the former, and vital licensing information with
the latter.
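
If you re-encode images yourself, a hedged Pillow sketch of carrying the ICC profile and EXIF block over to the output (file names and quality are placeholders):

```python
from PIL import Image

img = Image.open("product-shot.jpg")  # hypothetical source file

# Copy the ICC profile and EXIF bytes across only if the source has them.
params = {"quality": 85, "optimize": True}
if "icc_profile" in img.info:
    params["icc_profile"] = img.info["icc_profile"]
if "exif" in img.info:
    params["exif"] = img.info["exif"]

img.save("product-shot-optimized.jpg", **params)
```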

------
brianzelip
Off topic, but what a great site and source repo. Some highlights for me:

\- use of `<details>` and `<summary>` elements for native web dropdown (see
Table of Contents)

\- works/looks just as great with js disabled

\- package.json: {..., "main": "index.md", ...}

\- very informative README, including notes on how he builds the site, [https://github.com/GoogleChrome/essential-image-optimization...](https://github.com/GoogleChrome/essential-image-optimization#additional-repo-details)

~~~
spiralx
All of Addy Osmani's stuff is well-coded and/or well-written; I've been
reading his front-end material for almost a decade now:

[https://addyosmani.com/blog/large-scale-jquery/](https://addyosmani.com/blog/large-scale-jquery/)

------
quickthrower2
Nice article. I am a manual image optimiser, but this made me see if Hugo has
something built in. And it does - [https://gohugo.io/content-management/image-processing/](https://gohugo.io/content-management/image-processing/). Awesome!

------
Klathmon
A little bit of self-promotion:

I wrote and maintain imagemin-webpack-plugin for optimizing images during the
build process for webpack-based javascript projects. It works using imagemin
and the various plugins for it, which themselves are just small wrappers
around most image optimizers.

[https://github.com/Klathmon/imagemin-webpack-plugin](https://github.com/Klathmon/imagemin-webpack-plugin)

------
HHalvi
I had a landing page with 37 high-quality images spread across 5 pages. I used
Squash at first to bring down the aggregate size of the images from ~550MB to
~320MB, and voila, my bounce rates started going down. A few months later I
tried out Cloudinary and the conversion rates improved, since the biggest
bottleneck of the landing page was the images and they were now loading smoothly.
IMHO this is one of the low-hanging fruits that is worth the effort.

~~~
kunguru
320MB is still high going by third world internet standards

~~~
bscphil
It's high by any standards. A 60 MB webpage would take 5 seconds to load on a
100 Mbps home broadband connection, which is well above the median in the
United States. On most mobile connections a 60 MB webpage is going to be
borderline unusable.
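
The back-of-the-envelope arithmetic, for anyone who wants to plug in their own numbers (the 10 Mbps mobile figure below is just an illustrative assumption):

```python
def transfer_seconds(size_mb: float, link_mbps: float) -> float:
    """Rough transfer time: megabytes * 8 bits per byte / megabits per second."""
    return size_mb * 8 / link_mbps

print(transfer_seconds(60, 100))  # ~4.8 s on 100 Mbps broadband
print(transfer_seconds(60, 10))   # ~48 s on an assumed 10 Mbps mobile link
```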

------
ChrisGranger
Firefox 65 added native WebP support.

~~~
clouddrover
And you can use a WebAssembly build of libwebp to support WebP in Safari:

[https://webmproject.github.io/libwebp-demo/webp_wasm/index.h...](https://webmproject.github.io/libwebp-demo/webp_wasm/index.html)

WebAssembly is a nice way to add support for image containers and formats
which don't have browser support yet.

~~~
skunkworker
This is really impressive actually. The thought didn't cross my mind to just
use webassembly for webp decoding.

~~~
pbhjpbhj
It's maybe not a "good" thought, but it brings to mind the idea of using this
technique for a proprietary format (webp with a header change, say) to make
images only work on your own website.

Typical users wouldn't be able to view downloaded images; they could only see
them through your site.

I'm not sure of the details of WebAssembly, but if you can obfuscate the
workings then this technique becomes stronger. Can WebAssembly be delivered
already compiled?

------
sfusato
This is a must-read for any web developer. Image optimization on the web is
the low-hanging fruit when it comes to reducing load times.

------
Veedrac
> The ability for PJPEGs to offer low-resolution ‘previews’ of an image as it
> loads improves perceived performance – users can feel like the image is
> loading faster compared to adaptive images.

My understanding was that progressive JPEGs actually feel slower, since users
are less sure when the image has finished loading, so progressive JPEGs are
best avoided in most cases.
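
For what it's worth, the encoder-side difference is a single flag, so it is easy to generate both variants and test perception yourself; a minimal Pillow sketch (file names and quality are placeholders):

```python
from PIL import Image

img = Image.open("hero.jpg")  # hypothetical source file
img.save("hero-progressive.jpg", quality=85, optimize=True, progressive=True)
img.save("hero-baseline.jpg", quality=85, optimize=True)
```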

------
photonios
This is an excellent guide! A must read for web developers.

I am really glad we invested in automated image optimization where I work. We
run a couple of large real-estate websites and we store tens of millions of
images and process thousands a day. Optimizing all of them from the start was
one of the best things we did.

When an image gets uploaded, we re-encode it with MozJPEG and WebP and then
create thumbnails in five different sizes and upload them all to S3. They get
served through a CDN. We initially did the re-encoding and scaling on the fly
and cached the results forever, but MozJPEG is really slow, so we changed the
system to pre-process everything.
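
A rough sketch of that pre-processing step; the widths, file names and quality values below are made up, and the real pipeline links Pillow-SIMD against MozJPEG/libwebp and then uploads the results to S3:

```python
import os
from PIL import Image

WIDTHS = [160, 320, 640, 1280, 2560]  # illustrative thumbnail widths

def make_variants(path):
    """Create JPEG and WebP copies of `path` at several widths."""
    base, _ = os.path.splitext(path)
    src = Image.open(path).convert("RGB")
    w, h = src.size
    outputs = []
    for width in WIDTHS:
        im = src.resize((width, max(1, round(h * width / w))))
        jpg, webp = f"{base}-{width}.jpg", f"{base}-{width}.webp"
        im.save(jpg, quality=80, optimize=True)
        im.save(webp, "WEBP", quality=78)
        outputs += [jpg, webp]
    return outputs  # these would then be uploaded to S3
```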

When we first implemented this two years ago, Chrome was the only browser that
supported WebP. This investment really paid off as all major browsers now
support WebP. The website loads really quickly and images rarely give us
problems.

~~~
kunguru
does this process all happen on aws?

~~~
photonios
Sort of. We run on Heroku, so we indirectly run on AWS.

We're heavy users of Python and we're using the excellent Pillow-SIMD [1]
library to do most of the heavy lifting. We made our own builds to link it to
MozJPEG instead of libjpeg and include libwebp.

[1] [https://github.com/uploadcare/pillow-simd](https://github.com/uploadcare/pillow-simd)

~~~
dmitrymukhin
btw Uploadcare is also doing image processing on the fly for you :)

------
vladdanilov
I have been making Optimage, which is currently the only tool that can
automatically optimize images without ruining visual quality [1]. It is also
the new state of the art in lossless PNG compression.

I have raised a number of issues [2] with this guide. It’s been over a year
and they still have not been addressed [3].

[1] [https://getoptimage.com/benchmark](https://getoptimage.com/benchmark)

[2] [https://github.com/GoogleChrome/essential-image-optimization...](https://github.com/GoogleChrome/essential-image-optimization/issues/55)

[3]
[https://twitter.com/addyosmani/status/914207017589288960](https://twitter.com/addyosmani/status/914207017589288960)

~~~
bscphil
Your benchmark is not a legitimate comparison because it does not compare
files of equal size. See
[https://kornel.ski/en/faircomparison](https://kornel.ski/en/faircomparison)
for an explanation of the problems with your methodology. The files your
closed source tool generates in this benchmark are 56% larger than those
created by ImageOptim! Have your program generate smaller files and then post
the results.

~~~
vladdanilov
> 56% larger than those created by ImageOptim!

And noticeably degraded if you compare those images with the originals.
Optimage does apply chroma subsampling, the major source of the savings here,
but only when it makes sense.

My goal is automatic image optimization with predictable visual quality, i.e.
images have to remain authentic to originals.

> your closed source tool

FYI, the ImageOptim API is closed source and way more expensive, if that was
your point.

> Your benchmark is not a legitimate comparison

If you have a better one,

> then post the results

I did post mine. Why are you taking them out of context and ignoring the
others, e.g. the lossless compression results?

------
tambourine_man
ImageOptim is great. The amount of bytes it has saved me over the years is
staggering.

------
dmit
Please note that this guide mostly deals with optimizing for network transfer
size. As is almost always the case with optimization, there are multiple axes
you should be aware of. In this case, a lot of the techniques presented
negatively impact CPU usage and, by extension, battery life on mobile devices.

See
[https://twitter.com/nothings/status/1102726407744978944](https://twitter.com/nothings/status/1102726407744978944)
for some concrete examples.

~~~
fyfy18
The article mentions that progressive jpegs are slower to decode.

------
mnbvkhgvmj
This looks like a dupe of [https://developers.google.com/web/fundamentals/performance/o...](https://developers.google.com/web/fundamentals/performance/optimizing-content-efficiency/automating-image-optimization/)

EDIT: I am not an expert on licensing but it does look like everything is in
order from a licensing perspective (the original content is CCA3 and to me it
looks like things are credited properly).

~~~
hunvreus
Indeed. I'm the one who posted it and I had 0 idea.

Should I reach out to an admin to get this removed?

~~~
bscphil
I don't think it's an issue actually - the author is the same, Addy Osmani.
See the contributors on the Github page linked by images.guide.

------
muratgozel
Would like to add a practice that may be helpful for someone: I store an
uploaded image as an array of objects in the DB. Each object contains format
(jpeg, webp, etc.), name, size, hash and other pieces of information about the
transformed image. The frontend app chooses which one to render according to
the browser (e.g. WebP in Chrome) and the UI element.
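
Roughly, the stored record and the selection logic might look like the sketch below (field names, values and the WebP check are illustrative assumptions):

```python
# One uploaded image, stored as an array of per-format variant objects.
variants = [
    {"format": "webp", "name": "a1b2c3-640.webp", "width": 640, "size": 48120, "hash": "a1b2c3"},
    {"format": "jpeg", "name": "a1b2c3-640.jpg",  "width": 640, "size": 70544, "hash": "a1b2c3"},
]

def pick_variant(variants, accepts_webp):
    """Return the WebP variant for browsers that accept it, else the JPEG."""
    preferred = "webp" if accepts_webp else "jpeg"
    return next(v for v in variants if v["format"] == preferred)
```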

------
devwastaken
Recently I've tested compressing images as a single frame of an AV1 video with
ffmpeg, and it seems to work surprisingly well. Browsers seem to automatically
display the first frame of the video without needing to play it.
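
A hedged sketch of that experiment, driving ffmpeg from Python (it assumes an ffmpeg build with libaom-av1; the CRF value and file names are placeholders):

```python
import subprocess

# Encode a single AV1 frame from a still image.
subprocess.run([
    "ffmpeg", "-y", "-i", "photo.png",
    "-frames:v", "1",                     # keep only one frame
    "-c:v", "libaom-av1", "-crf", "32", "-b:v", "0",
    "photo.av1.mp4",
], check=True)
```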

------
adityapatadia
[https://www.gumlet.com](https://www.gumlet.com) is one of the services that
takes all of this into account and makes all the essential image optimisations
available via a nice and tidy API.

~~~
photonios
This looks nice! Judging by your comment history, you are the founder of
Gumlet, or an employee.

Why should I pick Gumlet over the alternatives? There are a lot of competitors,
such as kraken.io.

~~~
adityapatadia
I am indeed the founder of Gumlet. Here are a few reasons which make it superior:

\- cheaper pricing and no minimum monthly payments

\- global image processing locations - ensures lowest latency regardless of
location of your users

\- no lock-in and wide variety of cloud storage support including DigitalOcean
Spaces

\- prompt support - less than 8 hour response time for all users

\- GIF support

\- especially compared to Kraken, we charge only $0.10 per GB versus their
lowest pricing of $1 per GB

\- strong enterprise focus - lot of features planned for enterprise support.

~~~
photonios
Thanks for the straightforward reply!

Are there plans for a self-hosted option? Some businesses might be scared of
being so dependent on Gumlet's uptime for their mission-critical image
processing.

Gumlet seems to cater towards processing images on the fly, which is what a
lot of your competitors also do. It might be interesting to also cater towards
using Gumlet as a service to pre-process images as part of a pipeline.

I am just thinking aloud, I have no real basis for these ideas at the moment.
Just my two cents.

I wish you the best of luck and I'll keep Gumlet in mind, the homepage looks
very sleek.

~~~
adityapatadia
That's a good idea. We will consider it if we find appropriate customers for
it. Meanwhile, on uptime: we provide a 99.9% uptime SLA, which we have been
able to maintain since our launch.

------
technotarek
I'm curious, how would the HN community tackle optimizing (and/or re-optimizing) ~200k images sitting in an S3 bucket?

~~~
craz8
I would think about using Cloudinary to process these on demand, as needed, if
your usage can support it.

Cloudinary Fetch pulls images from your S3 bucket, caches them, and serves the
size and format you request, all without touching the original files.
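
The fetch URL is just a transformation string in the path in front of the original URL; a hedged sketch (the cloud name, width and S3 URL below are placeholders):

```python
# Build a Cloudinary fetch URL that resizes to 500px and lets Cloudinary
# pick the format/quality (f_auto, q_auto) without touching the original.
CLOUD_NAME = "my-cloud"
original = "https://my-bucket.s3.amazonaws.com/photos/1234.jpg"
url = (
    f"https://res.cloudinary.com/{CLOUD_NAME}/image/fetch/"
    f"w_500,f_auto,q_auto/{original}"
)
print(url)
```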

~~~
technotarek
Cloudinary is nice, but it gets expensive really fast on a high traffic site
that's image heavy and bandwidth optimized. For example, a single image might
have 5-6 versions on top of the original -- low quality thumbnail, high
quality thumbnail, default, default 2x retina, default 3x retina. Then
activate their webp service and perhaps offer another image aspect ratio. Each
is a conversion that counts against your quota. Now you've got 15+
(con)versions and you're shelling out hundreds of dollars a month.

~~~
dmitrymukhin
Cloudinary, Imgix, Uploadcare (which I' working for) and others save you money
because you don't have to develop and maintain these moving parts. You have to
constantly check what's happening with browsers, what formats/encoders are
available and which are the best etc.

It's the classic build vs. buy dilemma. In the majority of cases it's much more
cost-effective to buy.

BTW, Uploadcare doesn't charge for file processing at all, only for CDN
traffic. So you can create as many image variants as you need.

