But one thing I'd caution is that WebP is not a panacea for image optimization. It's only supported in Chrome. If you want to fully leverage next-gen image formats cross-browser, you'll also need JPEG 2000 and JPEG XR... and even if you do all of that, you still won't get support for Firefox.
srcset and lossy compression are also viable options: https://userinterfacing.com/the-fastest-way-to-increase-your...
I used to use the rule of thumb that JPEG was for photos, and PNG was for charts and things with text.
But these days I go through each image I get from the art department and optimize it myself in Photoshop, cycling through a series of presets.
To my surprise, depending on the image, sometimes an 8-bit PNG will end up smaller than a JPEG, and provide better visual quality.
Naturally, your mileage will vary; at participating locations; not valid in Alaska, Hawaii, or Puerto Rico; no cash value; batteries not included; do not taunt Happy Fun Ball.
There are some commercial services in this space, as well as other similar open source services.
If you're looking for a quick way to get thumbor up and running with docker, I'd plug https://github.com/minimalcompact/thumbor
AWS even created a CloudFormation template to deploy it with a few clicks.
In any case, my main point was that there are plenty of great options for resizing/manipulating images on-the-fly rather than pre-processing :) how you choose to deploy it is secondary.
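For reference, the rough shape of spinning up that Docker image locally (the container port and example URL here are guesses to check against the repo's README):

# exposed container port is a guess -- check the repo's README
docker run --rm -p 8888:80 minimalcompact/thumbor
# then request an on-the-fly 300x200 resize via thumbor's standard "unsafe" URL form:
# http://localhost:8888/unsafe/300x200/https://example.com/photo.jpg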
It's providing thumbor 6.4.2, and the latest is 6.5.1 (I'm pretty sure it's possible to upgrade though).
I'd still recommend it if you've an old-fashioned webserver (like idk, random wordpress installation) and don't want to pay a third party. I don't know if it's still being maintained or updated though, I haven't heard much of it since its release.
You acknowledge some people want to "maintain their own server", but it's worth pointing out that for high-traffic orgs there are some good reasons for doing so. Maybe you want to be an organization that does this at the CDN origin rather than at the CDN, so as to be CDN-agnostic / have a multi-CDN strategy. This doesn't mean you're living in the past. It means you want CDNs to compete for your traffic and not have you locked in, with crazy price leverage over you.
you mean like when privacy was still a thing?
Bandwidth is cheap - except on cloud services!
So, users' privacy is now officially a thing of the past?
From my perspective, it seems that everything has gotten way more bloated and there's an assumption that everyone has unlimited data and bandwidth. I used to have a 1GB data cap on my phone that I would blow through in a couple of weeks just from reading news websites. For example, bloomberg.com shouldn't need to make nearly 300 requests and download 18 MB of data just to load the front page.
Boy was I wrong!!!
- 500/565 requests
- 7.0 MB transferred
- 9.86 seconds to load
- 400 requests (adds about 1 per second after that)
- 6.8 MB transferred
- 6.1 seconds
- 307 requests
- 4.6 MB
- 9.03 seconds
That's on mobile; last I checked it was around 700 HTTP requests on desktop without an adblocker.
I always find it difficult to imagine what kind of process leads to that result. The developers can't be happy about it, surely. It has to be a case where management and everyone else throws things they want into the project, a big consensus party, and you end up with a bloated monster at the end.
As helpful as decreasing image size was, switching to HTTP/2 was even better. Highly recommend giving HTTP/2 a try if you're dealing with a lot of file requests & latency. In our tests, a small set of 150 images went from 6 seconds down to 1 second.
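If you want a quick sanity check that HTTP/2 is actually being negotiated, something like this works (example.com is a placeholder):

curl -sI --http2 https://example.com | head -n 1   # prints "HTTP/2 200" if the server speaks h2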
Turns out the intranet team had decided to add a panel for Yammer updates. It doesn't help that the Yammer implementation has errors. For example, the X-XSS-Protection header returns 1; mode=block twice in the same header.
If there is anyone here from the Yammer team, let me know and I'll show you.
I wrote about it here: https://ma.ttias.be/optimize-size-png-images-automatically-w...
- Don't have to think about it
- Optipng is really good at reducing PNGs to their bare minimum
- Doesn't resize images (if a 1024x768 is displayed as a 10x8, it'll still download the 1024x768)
- Only does PNG
- If your images are stored in git (and you didn't pre-optimize before committing/deploy), you can get merge conflicts
Still, better than nothing.
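For the curious, the rough shape of the setup is a cron entry like this (the path and schedule are made up; adjust to your web root):

# weekly: losslessly re-optimize every PNG under the web root
0 3 * * 0  find /var/www/html -name '*.png' -exec optipng -o7 -quiet {} \;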
It helped increase Google's PageSpeed Insights rating from 56/100 to 99/100.
The automatic cropping methods are pretty cool and work really well, too.
For SVGs, svgo (brew install svgo) usually produces the best results for me.
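Typical invocation, for anyone who hasn't used it (file names are placeholders):

svgo --multipass logo.svg -o logo.min.svg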
On Mac, you're definitely right that ImageOptim is the best.
Because it adheres to the company's IT policies?
Because they like to review results instead of relying on some black box automation?
Because they like to work in-house and not on someone else's cloud?
Just off the top of my head. I'm sure there are a thousand other reasons.
It also has a GUI.
Being focused on theoretically lossless compression techniques, svgcleaner's GUI doesn't appear to include rendering of the SVG, and focuses on batch operation. For svgo there is Jake Archibald's SVGOMG, which renders the SVG and so lets you see the impact of the lossy compression employed.
The WebP and .jpg files had similar dimensions, picture detail/complexity, and DPI. The WebP came out around 50% smaller in file size.
I didn't have enough of a sample and/or tests though.
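If you want to run a similar comparison yourself, a minimal sketch with cwebp (quality and file names are placeholders):

cwebp -q 80 photo.jpg -o photo.webp
ls -l photo.jpg photo.webp   # compare the two file sizes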
I personally don't think image optimization with WebP should be a thing though. The lack of universal browser support is one issue; the lack of native support on Windows is another.
Two things IMO are most important about image optimization for image-heavy sites. One is lazy loading via lazyloading.js (a frontend library), specifying a class on the backend for images past a threshold browser height. This is most notably used on many ecommerce sites, though on analysis Amazon doesn't seem to be using it.
Next would be sprite compression of common social links. A great example of this is Amazon actually; check out this image I extracted from their webpage.
I wouldn't mind a solution on the server-side though, where in your html you just put an img tag and the server determines what the best format is based on browser support. Of course, that would mean there'd be a header (or some smart user agent analysis) for every image request, and you'd like to keep that overhead to a minimum.
smartresize() {   # $1 = input image, $2 = target width (px), $3 = output directory
    mogrify -path "$3" -filter Triangle -define filter:support=2 -thumbnail "$2" -unsharp 0.25x0.08+8.3+0.045 -dither None -posterize 136 -quality 82 -define jpeg:fancy-upsampling=off -define png:compression-filter=5 -define png:compression-level=9 -define png:compression-strategy=1 -define png:exclude-chunk=all -interlace none -colorspace sRGB "$1"
}
smartresize image.png 300 outdir/
It will run a slew of image optimizers by default using imagemin, and has support for a wide range of others.
It also supports caching and optimization of images that aren't being directly imported through webpack (thanks to some awesome contributors), so it's a great way to set it and forget it and never have to worry about sending 3 MB images to your users by accident.
I used to give him a few dollars per week when Gratipay was still up and running, sadly I don't know of an alternative now.
 - https://liberapay.com
*I have no affiliation with Liberapay, just did some curious research.
(No connection to me - just think it's a great idea, and totally free.)
We had a site outsourced (because I found Shopify to be ..'tricky'.. to meaningfully work with on a PC) and the developer was darkening 'hero' images on the homepage shout in CSS, rather than specifying that we simply pre-process them in something like Photoshop before uploading.
When I realised (I wasn't particularly hands-on by this point) I was livid, so I changed the code and we knocked a few hundred KB off the front page. Our site is necessarily image heavy, so any gains anywhere are useful.
When I first started I had a ceiling limit of about 120KB for the entire page - images and all - so today's internet is a weird and foreign land to me.
Now, I was quite brutal in the compression and probably should have backed off a bit to avoid artifacts.
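If anyone wants to do that pre-processing without Photoshop, the darkening can also be scripted, e.g. with ImageMagick (the amount and file names here are just illustrative):

convert hero.jpg -brightness-contrast -25x0 -quality 70 hero-dark.jpg   # darken once at upload time instead of in CSS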
It might be nice to have an automated test where the test harness limits the bandwidth and tries to load your pages, with some target load times you'd like to hit.
Reducing the size of your images is just the first step; there are many things you need to consider in order to make your website faster:
- Format: deliver the images in the right format for each browser (e.g. using WebP for Chrome)
- Size: What size for each image? What happens on mobile, tablet, desktop and the different screen sizes and pixel ratios (it's not only retina or not)
- Quality: Is your image being resized by the browser? Are you using raw files to generate the optimized images?
- Thumbnails: Are you also generating thumbnails for listings or smaller versions of your images? How are you going to map those thumbs to your original images? Do you need to use a database?
- Storage: Where are you going to store those images?
- Headers: Caching static assets is key for returning users. Are you using Apache or Nginx? Is your setup working well?
- CDN: are you using a CDN to deliver those assets? CloudFlare is great but it's not the fastest way to deliver images. What about setting the right configuration for that CDN? How much are you going to spend?
So what's next? Going to one of the API services to optimize images and reading their 500 pages of docs just to resize and crop an image? Adding complex plugins to your backend and ending up with a high-dependency integration?
I mean, if you like adding more dependencies to your project, maintaining more code, and spending hours rebuilding scripts and running cron jobs to update your images, go for it.
That's why about 9 months ago we started working on a new concept, solving all these problems with a service that integrates as easily as a lazy-loading plugin and solves EVERYTHING about image optimization (and yes, everything that you're talking about in every comment on this HN post).
Don't get me wrong, we have a lot to improve and there are many details of our product that we need to polish, but we believe we've built a solution that handles all the most important parts of image optimization and delivery well. It's not about reducing the image size by 1KB more, it's about everything else and understanding the big picture.
We love feedback and our backlog is prioritized based on our customer needs, let us know what you think.
Here's the link to our startup website: https://piio.co
Moreover, this solution is good enough for people with small blogs just like mine. Anyone who needs something more involved can use your service or other alternatives.
As for responsive images, I implemented that on my site too although I didn't mention it in the article. I plan to write a follow up article on that topic soon.
On a side note, you might want to bump up the font size of the navigation links on your site. They're too faint.
Would love to connect to chat about this and for sure I'll read the follow-up article.
Thanks for the feedback too!
There's also a difference between performance, speed and actual perception of speed, but that is very hard to measure and maybe it's an issue for another topic.
I used Fontello to strip out unnecessary FontAwesome icons and uncss to remove unused Bootstrap styles, replaced some Bootstrap JS with vanilla JS and made use of SVGs (optimised with SVGOMG) for backgrounds and the logo.
The resulting site is a total of 178Kb when viewed in Chrome (down from over 1MB), including bootstrap, analytics, some screenshots, a custom font and animated logo. There's plenty more I could do to trim off size, but I had more important things to do.
There are so many ways to make webpages smaller and more efficient, and it can be a really fun learning experience.
But they could have just done the same thing in Photoshop, Preview, MS Office image tool, etc. JPEG is a standard; file size does not depend on what tool you use to create it. It's strictly dependent on the image itself, and the render settings you choose. Same with PNG.
In fact, you'll get better quality for the file size if you go directly from the original image straight to your final resolution in one step. Rendering to high-quality JPEG, then re-rendering on the server to shrink the file size, will give you worse image quality than just going straight from the original file to the final in one render.
WebP looks promising but is not yet well-supported. Most sites can go a long way just by caring about, testing for, and adjusting image rendering defaults to optimize for file size.
EDIT to add a bit more:
If you are optimizing images as part of a pre-deploy build process, you can use whatever library you want. The only thing that really matters is your choice of format (JPG or PNG), and the render settings. Or, you can hand-optimize the images and drop them into your repo to deploy as-is.
If you're running a CMS where non-developers are going to be uploading images through an admin UI (like Wordpress), your CMS should be using a server-side library to render optimized versions of the images that get uploaded, then serving the optimized versions. You can adjust the settings of the server's image library, although that might require a plugin or module, or custom code, depending on the CMS.
Missing this is a common killer mistake in page load times. I visited a site the other day that served a 16 MB JPEG file for the "hero" image on the homepage. My guess is that it was the JPEG straight out of a high-resolution camera.
This is also good for user privacy, as the server-side rendering should remove IPTC and EXIF data that would get served with the original image.
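To make that concrete, the one-step render described above could look something like this with ImageMagick (file names, size and quality are placeholders, not a prescription):

convert original.tif -resize 1200x -strip -quality 80 hero.jpg   # single render from the original; -strip drops IPTC/EXIF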
That's simply not true. The specific choice of cosines can make a huge difference in compression while having nearly zero perceived visual difference. Most encoders, however, take a naive approach, whereas something like Guetzli does an amazing job of compressing JPEGs way better than Photoshop ever could.
I guess I should specify that I'm trying to give practical advice for people who think the linked blog post is instructive. For the vast majority of people, the simple act of thinking about, selecting, and testing the available settings in popular image optimization tools is going to have a far greater effect than the small optimizations (and sometimes big tradeoffs) that might come from cutting-edge stuff like Guetzli.
The reward per effort of going from "not optimizing my images" to "purposefully optimizing my images using common tools" is typically much bigger than the step from the latter to "using the absolute best possible tool for each image."
What is it designed for? I downloaded and compiled it, and it seems to work quite well for the photographs on my website. The README says:
> Guetzli is a JPEG encoder that aims for excellent compression density at high visual quality
It only goes down to quality 84 and it takes a looonngg time to optimize. As a point of comparison, the author of the linked blog post dropped his JPEG quality to 70 and was happy with it. A JPEG at 70 (or lower), if you're happy with the look, has a good shot to be even smaller than the smallest Guetzli output.
Generally speaking, the easiest gains in JPEG file size will probably come from just dropping the quality down and down in tests, and deciding what you can live with. But if you have to have the best quality, and have plenty of resources/time for encoding, then maybe Guetzli will be a good fit.
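For reference, a typical Guetzli invocation looks like this (file names are placeholders; 84 is the lowest quality it will accept by default):

guetzli --quality 84 photo.jpg photo-guetzli.jpg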
This is not correct. For both JPEG and PNG the compression ratio can depend on the tool.
For JPEG the reason is that the quantization tables are not fixed. The quantization tables dictate how information is thrown away, and as such are responsible for the lossy part of JPEG.
The JPEG standard merely contains recommended quantization tables; however, a lot of research has gone into deriving better quantization tables, especially image-dependent (tailored) ones that can provide better compression for the same image quality, or better image quality for the same compressed size.
For PNG the standard defines five pixel filters used to transform pixel values into something more compressible. However, it leaves the encoder free to decide which filter to use and when. Thus a simple encoder is free to use the "None" filter for everything, i.e. not do any filtering.
In addition, the PNG standard allows for additional pixel filters to be registered as extensions. Thus encoders with more advanced pixel filters could potentially compress better than an encoder supporting only the standard filters.
Here's why: PNG has a pre-processing step to turn pixel data into bytes, then a general purpose compressor is used on the bytes. The algorithm used is picked per row, and if you pick the right one the compressor will have a better shot at small output, but which algorithm is best for each row of your image?
The popular libpng reference implementation contains a weak heuristic to pick algorithms that might do well, but a tool can do much better... Or it can do much worse. Early PNG support in Adobe Photoshop just picked "do nothing" for every row, resulting in huge PNG files.
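An easy way to see this tool-dependence for yourself is to re-encode the same PNG with a stronger encoder and compare sizes; zopflipng is one such tool (file names are placeholders):

zopflipng -m chart.png chart-zopfli.png
ls -l chart.png chart-zopfli.png   # same pixels, different file sizes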
I agree with you that for JPEG there is no silver bullet, but like that other comment here suggests, you could perfectly well just run optipng on max settings in a cronjob and call it a day, because you know there will be absolutely no quality loss if you don't explicitly request it.
You can compress (or rather, optimize) an existing JPEG if the Huffman tables were suboptimal (which they often are). It usually gives a few percentage points of decreased file size.
You can also drop metadata (e.g. EXIF) in many circumstances, which saves a bit more.
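jpegtran is one tool that does both losslessly; roughly (file names are placeholders):

jpegtran -copy none -optimize photo.jpg > photo-opt.jpg   # rebuild the Huffman tables and drop metadata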
Image formats are not used wisely: some images are PNG when they should be JPEG, and others are JPEG when they should be PNG.
> I found that setting quality (mozjpeg) to 70 produces good enough images for the most part, but your mileage may vary.
You can get away with this setting for hidpi sizes, but 1x will look horrible. If you care about quality, the mileage is actually 75-95.
> (Pngquant) quality level of 65-80 to provide a good compromise between file size and image quality
Again, it may only be applied to hidpi sizes, and it will easily ruin any gradients or previously quantized images.
Pngquant is a great color quantization tool but it does not actually perform any lossless PNG optimizations, which can save you at least 5% more, and up to 90% in some cases.
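A sketch of combining the two steps, assuming pngquant followed by a separate lossless optimizer such as optipng (file names and ranges are placeholders):

pngquant --quality=65-80 --output icon-q.png icon.png   # lossy color quantization
optipng -o2 icon-q.png                                  # lossless pass on the result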
All of these tools will also blindly strip metadata (though it's not guaranteed!) along with color profiles and Exif orientation, resulting in color shifts and image transformations respectively.
Most importantly, none of them are good enough for automatic lossy compression. Guetzli is the closest but it still has some severe issues. I'm also trying to build a real thing, and it is hard.
> there’s value in using WebP formats where possible
WebP lossless and WebP lossy are quite different formats. WebP lossy, being always 4:2:0, is not a good replacement for JPEG, especially at higher quality. On the contrary, WebP lossless has evolved into a decent alternative for PNG, including lossy use.
Proper responsive images would give you considerably smaller page weight and improve performance on mobile devices. BTW, Google treats oversized images as unoptimized.
I use mod_pagespeed - there are versions for nginx and Apache that do all of the heavy lifting.
With mod_pagespeed you can get all of the srcset images at sensible compression levels. All you need is to mark up your code with width= and height= values for each img.
With this in place the client can upload multi-megabyte images from their camera without having to fiddle in Photoshop etc. It just works and the hard part is abstracted out to mod_pagespeed.
By taking this approach there is no need to use fancy build tools. However, a background script to 'mogrify' your source images is a nice complement to mod_pagespeed; if you want your images to be in Google Image Search, then 1920x1080 is what you need.
The really good thing about taking the mod_pagespeed route is that you do get 'infinite zoom' on mobile, e.g. pinch and zoom and it fills in the next srcset size. Keep going and you eventually get the original, which you have background-converted to 1920x1080.
There is also the option to optimise images perceptually, so you are not just mashing everything down to 70% (or 84%).
On your local development box you can run without mod_pagespeed and just have the full resolution images.
Or you can experiment with more advanced features such as lazy_loading - this also comes for free with mod_pagespeed.
If you want your images to line up in nice squares then you might add whitespace to the images, maybe taking time in Photoshop to do this. However, it is easier to just 'identify' the image heights/widths and set something sensible for them, keeping the aspect ratio correct. Then you can use modern CSS to align them in figure elements and let mod_pagespeed fill out the srcsets.
Icons and other images that are needed are best manually tweaked into cut down SVG files and then put into CSS as data URLs, thereby reducing a whole load of extra requests (even if it is just one for a fiddly 'sprite sheet').
Oh, a final tweak: if you are running a script to optimise uploaded images and to restrict max size, then you can also use 4:2:0 colour subsampling. This is where the image still has all the dots but the colours are 'halved in resolution'. This is not noticeable in a lot of use cases, and particularly good if you are using PNGs to get that transparency.
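For the JPEG case, the subsampling itself is a one-liner with ImageMagick (file names and quality here are illustrative):

convert photo.jpg -sampling-factor 4:2:0 -quality 82 photo-420.jpg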
As mentioned, mod_pagespeed reduced project complexity by offloading the hard work to the server, keeping cruft out of the project and keeping the build tools out of the way. It can also be configured to inline some images and plenty else to get really good performance.
Mileage may vary if the decision has been made to use a CDN where such functionality is not possible. However, if serving a local market then a faux CDN is pretty good, i.e. a static domain on HTTP2 where the cache is set properly and no cookies are sent up/down the wire to get every image.
Title needs spellcheck.
$ echo "How Image Optimization decreased my website's page weight by 62%" | wc -m
$ echo "scale=3; (67-65)/67" | bc
Math checks out, sir.
$ echo "" | wc -m
Made deep learning improve thumbnail representation.
Facebook (or Google) showed up, gave him 6 to 8 zeros to the left of the decimal, preceded at the far left by a $ and O(1).