Currently, I'm converting all the JPEGs to WebP with the compression quality set to 55 at import time, and I love both the quality and the file sizes; so far I couldn't be happier with the results. I managed to shave some 20GB off the disk usage as well.
The 800 lb gorilla, however, is Safari. We all know that Safari does not currently support WebP and, due to internal politics, may never do so.
So the first "solution" is to keep copies of both formats and use the <picture> tag. That just won't do: I have zero interest in doubling my already prodigal disk usage.
There are some rather complicated open-source libraries that will take a JPEG and convert it on the fly to WebP... I want to do it the other way around.
One thing I hate is "kitchen sink" libraries that do SO much that you're often left confused about how to do the one specific thing you want, so I'm in the process of rolling my own solution: using GD to intelligently convert images for the 11% of all requests that come from Safari, and using nginx caching to keep those converted images ready to go.
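For the conversion step itself, here's a rough, untested sketch of what I have in mind, assuming a GD build with WebP support (the paths and the quality value are just placeholders):

    <?php
    // Convert a stored WebP to a JPEG on demand, for browsers that can't decode WebP.
    function webp_to_jpeg(string $src, string $dst, int $quality = 85): bool
    {
        $img = imagecreatefromwebp($src);      // requires GD compiled with WebP support
        if ($img === false) {
            return false;
        }
        $ok = imagejpeg($img, $dst, $quality); // write out the JPEG copy
        imagedestroy($img);
        return $ok;
    }

    // Run once per image; nginx caching then serves the JPEG copy on later requests.
    webp_to_jpeg('/var/www/images/photo.webp', '/var/cache/jpeg/photo.jpg');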
I would like to hear from anyone with better solutions, if possible, since, as you can see, I haven't yet implemented my ideas.
It would be really great if there were an on-the-fly conversion library -- basically, reference the image once and let the runtime deliver the right image type. What I don't like about that is that I generally shy away from doing any sort of "if/then" based on browsers' user-agent string. Edit/Update: I don't like that because it got us all into a lot of trouble during the Browser Wars and is now generally considered bad practice.
Can't you just base it on the "Accept" header? That's the whole reason it exists, after all.
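Something like this rough sketch, for instance (untested; the paths are placeholders, and the substring check on the Accept header is the simplest possible detection):

    <?php
    // Pick the image variant based on the Accept request header, and declare
    // Vary: Accept so caches keep the WebP and JPEG responses separate.
    $accept    = $_SERVER['HTTP_ACCEPT'] ?? '';
    $wantsWebp = strpos($accept, 'image/webp') !== false;

    header('Vary: Accept');
    if ($wantsWebp) {
        header('Content-Type: image/webp');
        readfile('/var/www/images/photo.webp');
    } else {
        header('Content-Type: image/jpeg');
        readfile('/var/cache/jpeg/photo.jpg');
    }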
If you go that route, be aware that HTTP caching mechanics can get tricky, and some CDN service providers (deliberately and self-interestedly) violate the RFCs (e.g. refuse to cache anything with a "Vary" response header at all -- looking at you, Akamai)...
In retrospect, it was great fun to effectively piece together a web performance optimization service layer to support and accelerate the heck out of a traditional 3-tier web application... but doing it all by hand, these days, might not have great ROI.
Other things, like making thumbnails of all the images, can also help reduce bandwidth, but that depends on how the site is structured.
That is terribly wrong. Actually, it's just the opposite: the further you move your eyes from the screen, the smaller that "CSS pixel" gets.
Yet the whole idea of that "CSS pixel" is terribly flawed:
> The trickiest one is a so-called CSS pixel, which is described as a unit of length, which roughly corresponds to the width or height of a single dot that can be comfortably seen by the human eye without strain.
"Comfortably seen" is like the "average temperature of patients in a hospital" - it has no meaningful value for me in particular.
A CSS pixel is exactly this: a 1/96 inch square measured on the surface of the screen.
You are misreading it. The pixel gets physically larger so that you will see it at the same size.
A CSS pixel is a 1/96 inch square on the surface of a typical desktop screen. (It wasn't adjusted to the recent trend of screen growth, so that number is getting more wrong with time.) Different media are expected to change that size so that your page renders the same way.
Anyways, this is a good thing. It shouldn't really adjust with screen growth, but it's important to note that it's an abstract dimension (so thinking in terms of PPI/DPI with CSS pixels is unwise). It allows you to be agnostic to devices, and respect the user's preference (if possible) for how close to the physical display they are.
386 ppi for a Full HD 5.7" screen (i.e. my phone).
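For reference, that figure is just the diagonal pixel count divided by the diagonal size in inches:

    \mathrm{ppi} = \sqrt{1920^2 + 1080^2} \,/\, 5.7 \approx 2203 / 5.7 \approx 386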
Hum... Yes. That's the entire point of my comment.
One is 90cm away, the other is 150cm away. An arcsecond of viewing angle on one covers almost half the linear size that the same angle covers on the other.
My monitor is closer to that usually. And that's the point.
It makes absolutely no sense to measure viewing angles.
The only thing we can agree on is that the area of a finger/screen touch, measured on the screen surface, is roughly the same for all humans - 1cm x 1cm or something like that.
So the size of your buttons should be measured in units of screen surface, not in some mysterious unit that depends on the distance orthogonal to the screen surface.
1. CSS pixels must be treated as a logical length unit equal to 1/96 inch, and
2. Browsers must ensure that 1in or 10cm are exactly that when measured with a physical ruler on the screen surface.
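For concreteness, rule 1 works out to

    1\,\mathrm{cm} = \frac{96}{2.54}\,\mathrm{px} \approx 37.8\,\mathrm{px}

so a 1cm button is roughly a 38px button.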
I, as a developer, must know that my 1cm button is exactly that on screen, and therefore clickable by a human.
But CSS sizes are defined by viewing angle, not by linear size. There was plenty of debate at the time it was standardized, and really, none of the options was good, so they went with one of the bad ones.
Who would have thought that we would start clicking <button>s with, literally, our fingers?
But how much dev time has already been wasted discovering that `px` stands for points in macOS terms (1/72 of an inch originally, and now 1/96 of an inch) and is nowhere near real pixels...
Glitch art wasn't inspired by lossy JPEG compression, it was inspired by corrupted digital feeds and missing image data: https://en.wikipedia.org/wiki/Glitch_art
(I'll refrain from linking the site to avoid the appearance of naked self-promotion, but my username is a dead giveaway if you care to see it in action).
* when you can
1. The PNGs tended to be MUCH smaller than the SVGs, and I think you'll find this is likely whenever you have even a slightly complicated graphic (the article points this out).
2. There was an issue on Android devices where scrolling became absolutely awful when there were lots of large SVGs on the page. This completely went away when we switched to PNGs.
I think the best recommendation is to store the initial image either as an SVG or very high resolution raster image, but then use an on-the-fly transforming CDN so you can experiment and easily get the right image (size and format) where necessary.
Yes, Flash had other problems. No, that doesn't make their point false.
I found the article very nicely written and do not think there was filler in there.
It's not so clever given the changing implementations of video prefetch logic, and the differing codec behaviour of one-frame videos on hardware decoders.
I know of a few Android devices that claim to be able to do 4K VP9 in hardware, but plainly hang when playing VP9 from YouTube.
Does Android Chrome do JPEG decoding with hardware these days?
Also, is HEIF royalty-free? It looks like AVIF is using it as a container, which is surprising.
HEIF has problems because of its HEVC encoder, which was replaced in AVIF.
The most interesting bit of it is that it contains a JPEG1 recompressor that saves about 20% space but allows exact reconstruction of the original file. It uses more modern entropy coding and goes to more effort to predict coefficients than JPEG1. It has almost exactly the same gains as, and sounds a lot like, Dropbox's Lepton, described here: https://blogs.dropbox.com/tech/2016/07/lepton-image-compress... .
Seems like a big deal to plug seamlessly into all the JPEG-producing stuff that exists, without either doing a second lossy step (ick) or forking into two versions when you first compress the high-quality original. 20% off JPEG sizes is also a bigger deal than it may sound like; totally new codecs with slower encodes and a bucket of new tools only hit like ~50% of JPEG sizes. As Daala researcher Tim Terriberry once said, "JPEG is alien technology from the future." :)
For JPEG XL's native lossy compression, it has some tools reminiscent of AV1 and other recent codecs, e.g. variable-sized DCTs, an identity transform for fundamentally DCT-unfriendly content, a somewhat mysterious 4x4 "AFV" transform that's supposed to help encode diagonal lines (huh!), a post-filter to reduce ringing (that cleverly uses the quantization ranges as a constraint, like the Knusperli deblocker: https://github.com/google/knusperli ).
Interestingly it does not use spatial prediction in the style of the video codecs. A developer in a conference presentation mentioned that it's targeting relatively high qualities equivalent to ~2bpp JPEGs -- maybe spatial prediction just doesn't help as much at that level?
Don't know if AV1-based compression or JPEG XL will get wide adoption first, but either way we should actually have some substantial wins coming.
Have you considered HTTP/2, which supports multiplexing at the protocol level?