You know, I don't really personally care if the successor to JPEG is AVIF (based on AV1), HEIF (based on h.265, pushed by Apple), or WebP (based on VP8).
I just wish we could standardize on one. Trying to figure out which browsers support which, which way the winds are blowing, etc. keeps me sticking to JPEG.
It seems like h.265 "won" as the most widely supported new video codec with hardware acceleration. Yet confusingly, HEIF has not ridden on its coattails.
Where is this all going? And will we even have settled on a single successor before next-gen codecs like h.266 disrupt it all again?
>You know, I don't really personally care if the successor to JPEG is AVIF (based on AV1), HEIF (based on h.265, pushed by Apple), or WebP (based on VP8).
You should care, because those formats aren't equivalent. In many important cases, WebP performs worse than good old JPEG[1]. It's popular mainly because of Google's brand and cargo culting. HEIF is doomed because it isn't royalty-free.
In my opinion, JPEG XL is the most interesting new format. Cloudinary published many blogposts comparing JPEG XL to the other formats, for example: [2][3]. I really recommend reading them.
BTW, Chromium has recently added support for JPEG XL decoding[4]. It's currently hidden behind a feature flag.
[1] - In medium to high BPP photographic images (which, I believe, is the most common use of lossy image compression) and in generation loss tests.
> In medium to high BPP photographic images (which, I believe, is the most common use of lossy image compression)
I don't think this is true. Most web images shouldn't be "near lossless"; they should be "not ugly", with the exception being images that are core content (think Flickr or Unsplash).
If a tech product is illustrated with a child laughing at some salad, or whatever, it doesn't matter if some detail is lost, as long as it doesn't look ugly-compressed. Users will be scrolling past it, not zooming in and comparing it to the original.
> I don't think this is true. Most web images shouldn't be "near lossless"; they should be "not ugly", with the exception being images that are core content (think Flickr or Unsplash).
Hard disagree here. We're talking image format, not image renderers. Whatever ends up being standardized on will eventually become the format "core content" is being stored in and transferred in. It will be the format in which my phone will save photos of my daughter, the format that her grandparents will receive those photos in (possibly twice transcoded by whatever IM they're using these days), so believe me, "not ugly" isn't gonna cut it here.
The photographers, the stock photo sites, the marketers - they'll all know how to keep images at high quality, and they have their reasons for doing so. Regular people will typically just use the defaults, so it's important to optimize the defaults for them, instead of marketing spam. If marketing spam is indeed the majority of data traffic on-line, there are other avenues to address it.
> I did a performance analysis of a set of sites recently, and many have images that are 10x bigger than necessary
Yeah, they also don't care. Why would they? Bandwidth is usually too cheap to meter for most commercial sites like these. If we try to fix their image sizes top-down by more aggressive quality reduction, some marketer will eventually notice the images "look bad", and fix it. Meanwhile, regular people will get the short end of the stick.
Phones are already using different formats for image storage. Apps are already sending those at different qualities and formats when you share them.
You shouldn't just take an image from your phone as-is, stick it on a website, and display it at 400x400 or whatever. Firstly, that won't work if you took the image with an iPhone, since it uses HEIF, which isn't supported by any browser. Secondly, it'll be orders of magnitude too big.
Sites will care about this stuff when it impacts their metrics like first-contentful-paint, which will soon begin impacting Google search ranking.
I understand the theory, but reality seems to disagree. Companies rarely, if ever, care about page load speeds on slow connections. Website bloat in general is a widespread problem.
Sure, there are tons of websites that don't optimize for page load speeds.
There are also tons that do -- including some of the largest, most popular websites on the planet, that perform very fine-tuned image compression for this purpose. Managers pore over analytics reporting 95th pctl client-side loading time, etc. So "rarely if ever" is just utterly false.
I mostly agree, with the caveat that a great deal of web traffic is now image sharing social sites and so any reduction in traffic size is still a good thing.
I also want to point out that people can tell the difference between when something should be good quality and when it doesn't matter. Like even on Instagram, people will forgive a pixelated meme but still be disappointed by a portrait with compression artifacts.
In my experience, folks are referring to blocky JPEG compression in those cases. Or, for digital art, specifically chroma subsampling.
I can't decide if Instagram falls under the like-Flickr use-case or not. I think on mobile, AVIF-style compression would still be best, but it could be different on desktop.
> with the exception being images that are core content
This is not the exception. This is the common case.
Instagram/Facebook posts, Tweets (a very large % have images now), TikTok/Youtube/Twitch/Netflix thumbnails, people sending photos of their family/dog/cat/houseplant to a WhatsApp/Telegram group: all of these are transferred in compressed formats.
That's even before you get into the debate of whether photojournalists' images on news articles (FAR more common than product pics on tech sites) fall under your definition of "images users scroll past", or "core content".
It seems pretty bizarre to think 10 F1 websites are representative of the common cases.
There was a paper with stats from Google Chrome, with over a billion images sampled, saying 90%+ of images on the web have a BPP over 1.0, and that the median image BPP is between 2.0 and 3.0 depending on image dimensions.
I keep swinging from one side to the other: on one hand I want image sizes to be much smaller (HEIF/AVIF, or better yet something based on VVC) at BPP 0.1+; on the other hand, a 500x500 image is only about 31KB at BPP 1.0 with JPEG XL.
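For reference, the arithmetic behind that last figure, as a quick sketch:

    // 500x500 pixels at 1.0 bit per pixel (BPP)
    const bits  = 500 * 500 * 1.0;  // 250,000 bits
    const bytes = bits / 8;         // 31,250 bytes, i.e. roughly 31 KB
    console.log((bytes / 1000).toFixed(1) + " KB");  // "31.3 KB"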
Right now I am mostly leaning toward JPEG XL, because I think AVIF and the other ultra-low-bitrate formats are sort of over-optimising, not to mention JPEG XL decodes faster and uses fewer resources. So it seems to be the most practical one.
> If a tech product is illustrated with a child laughing at some salad, or whatever, it doesn't matter if some detail is lost, as long as it doesn't look ugly-compressed. Users will be scrolling past it, not zooming in and comparing it to the original.
In that case the best compression is to omit the image.
We added a preview of the JPEG-XL encoder to https://squoosh.app (and WebPv2) so you can try it with your own images.
In my experience, AVIF tends to outperform JXL in most web use-cases. However, for cases where the image should be really high quality (where the image is core content, like Flickr or Unsplash), the winner is less clear. Also, at those kinds of sizes, JPEG-XL's progressive rendering becomes an important factor.
But seriously, there's so much hype around these image formats, the best thing to do is to test it with your own images and your own eyes.
> AVIF tends to outperform JXL in most web use-cases
Matches my experience pretty well. AVIF can look quite good (for web purposes) at very low bitrates, while JXL falls apart. (JXL actually has two different modes of operation - modular mode is better at lower bitrates, but it's not considered to be JXL's main mode of operation.)
JPEG-XL seems to be designed to replace JPEG for high quality photographic images, and in my experience it's fantastic at this. I see much better results at higher bitrates (compared to AVIF), and JXL can also losslessly compress JPEGs to a smaller size.
There's a comparison between AVIF and JPEG-XL here https://afontenot.github.io/image-formats-comparison although JPEG-XL is a rather fast-moving target at the moment and the comparison hasn't been updated in quite a few months.
As a photographer I just want a format that I can send to clients (and for my own sake) that’s better than 8-bit JPEGs. Bonus points for lossless editing and re-saving. It’s ridiculous how we’ve been stuck on such an out-of-date format for so long when everything else in photography has gotten so advanced.
Not just their brand and cult. It's because they control search, they count page speed as a ranking factor, and their benchmarks tell you to prefer WebP for better performance.
All OLED iPhones, iPad Pros, newer (2018+) Macs, and just about any "flagship" Android phone have HDR capable displays. There's tens if not hundreds of millions of devices that support HDR in the wild. Whether or not it's appreciated there's at least a huge population that can see HDR images correctly.
HEIF has the problem of being patent-encumbered. Cisco has graciously given out decoding licenses, but encoding is murkier. Luckily, hardware acceleration usually isn't required for still images, so you can encode to whatever, as browsers can decode them all just fine.
AOMedia has a "royalty-free patent licensing commitment from all AOMedia members", but that doesn't mean that AV1 users won't have to pay to license patents from non-members.
Royalty-free blanket patent licensing is compatible with Free Software and should be considered the same as being unpatented. Even if it's conditioned on a grant of reciprocality. It's only when patent holders start demanding money (or worse, withholding licenses altogether) that it becomes a problem for us.
> Royalty-free blanket patent licensing is compatible with Free Software and should be considered the same as being unpatented.
I agree 100%, and am not the person arguing that patent-encumbered technology is incompatible with free software or bad in any way.
The point of my unpopular post is simply that making a decision based on whether a technology is patent-encumbered makes zero sense in this case since all of them are.
>Royalty-free blanket patent licensing is compatible with Free Software and should be considered the same as being unpatented.
If I remember correctly, Firefox at one point refused to include an h.264 decoder, even when granted royalty-free usage, because the codec is considered patent-encumbered.
That's why the word "blanket" is included there - if you just give Firefox and Firefox alone a royalty-free H.264 license, that's not compatible with Free Software; you can't legally fork it without getting another patent license that will almost certainly cost money.
AFAIK the thing Firefox refused to accept wound up happening anyway. Cisco released a nominally BSD-licensed H.264 decoder and agreed to pay the maximum royalty possible to MPEG-LA. If you use their specific codec binaries, then you're covered under the patent license. So Mozilla wound up writing a new plugin framework specifically to host OpenH264 so that WebRTC could have a standard codec.
Ah, the good ole' React license trick. Where is the mob outrage? Or are web developers so fickle to not care outside of JavaScript frameworks? Video encoding standards are much lower on the stack and the potential for damage is much greater compared to a website.
The difference between the AOM license and React license is that the React license terminates if you sue Facebook for any patent infringement, whereas the AOM license only terminates if you sue an AOM user for an AV1-related patent.
Reciprocal patent grants are not an uncommon or bad licensing practice. The only problem was that Facebook was doing it, which raises suspicion purely because Facebook is Facebook and Facebook is bad.
Are you suggesting that the members (including many tech giants) are overlooking some patents held by others which will then be used to extract a price, ruining the format they have all been pushing? This seems somewhat unlikely, right?
> Are you suggesting that the members (including many tech giants) are overlooking some patents held by others which will then be used to extract a price ruining the format they have all been pushing?
They're definitely not overlooking it — the AOMedia statement I linked to above was a direct response to the collected patent claims by non-members, which number over 1,000 so far.
I really hope that the single successor will be JPEG-XL. It's a much better format for still pictures (faster encoding and decoding, higher quality, almost unlimited picture size, layers, color channels, etc.), and the ability to losslessly recompress existing JPEGs is huge.
> And will we even have settled on a single successor before next-gen codecs like h.266 disrupt it all again?
I don't think there is great risk of getting disrupted by next-gen codecs because the gen-to-gen improvements for stills will be so marginal, especially compared to the improvement from jpeg to avif/heif. So there is little motivation to iterate there, especially considering the computational costs.
I expect whatever we settle on will probably last the next 20 years, so taking a few years to settle feels fine in my books.
I think that by the time we decide on the next image compression format, ML super-resolution will have become so popular and easy to use that browsers and other devices will have an "upscaler" implemented in them.
Video hardware acceleration tends to be optimised for one stream, one frame at a time. It doesn't often translate well to loading multiple images of different dimensions & settings in parallel.
In my case, because I have full color management enabled, the AVIF image in the article displays incorrectly (too much saturation on my wide-gamut screen) while the JPG and WebP display correctly.
And this is the exact reason that it hasn’t been stabilised yet. It was going to be released in 86 (February), and was even enabled in the 86 beta, but people pointed out that this should be considered a blocker as it would scupper the format for years by undermining all confidence in it, so it was disabled again.
Thanks for pointing that out. I posted my top level comment in order to alert anyone using AVIF in Firefox about why the images are likely to look wrong, as well as warn off anyone thinking of including AVIF in an HTML5 <picture> element until the situation has improved.
It's worth pointing out that (as far as I know) the fact that canvas elements and videos aren't color managed in Firefox hasn't been considered a blocker for shipping either of them. There hasn't been any movement on the bug I linked in many months (other than users confusing the issue in comments), and so I hope the problem isn't eventually swept under the rug.
It gets worse: apparently the code just assumes that the data is encoded as limited-range YUV, so it does the expansion to full range even on files that are already full range (the default output for some encoders). https://bugzilla.mozilla.org/show_bug.cgi?id=1654461
I believe this may affect the images in the article, actually. It looks to me like the dark areas of the animal's hair are crushed pretty badly - not just the saturation issue I'm talking about.
I recently moved an Eleventy static site to start generating AVIF images. The savings are absolutely incredible: I have images which went from 260kB to 16kB!
I used an npm module called @11ty/eleventy-img which renders the image variants at build time and generates the appropriate HTML for the browser to select the correct variant.
All I have to do is this, and I get correctly sized variants in JPG, WebP, and AVIF:
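Roughly like this, in .eleventy.js - a simplified sketch; the widths and output path are just placeholders:

    // .eleventy.js -- sketch of an eleventy-img shortcode
    const Image = require("@11ty/eleventy-img");

    async function imageShortcode(src, alt, sizes = "100vw") {
      const metadata = await Image(src, {
        widths: [400, 800, 1600],           // resized variants (placeholders)
        formats: ["avif", "webp", "jpeg"],  // one set of variants per format
        outputDir: "./_site/img/",
      });
      // Emits a <picture> with AVIF/WebP <source> elements and a JPEG <img> fallback
      return Image.generateHTML(metadata, { alt, sizes, loading: "lazy", decoding: "async" });
    }

    module.exports = function (eleventyConfig) {
      eleventyConfig.addNunjucksAsyncShortcode("image", imageShortcode);
    };

Then in a Nunjucks template it's just {% image "./src/photo.jpg", "some alt text" %}.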
I need AVIF converters to get faster before I start using it. I added it to a project recently built on an SSG and it added about 1-5s per image to build time - which adds up on 200 images.
Yes, AVIF encoding is quite slow. Are you caching/storing the images? Converting 200 images will take a few minutes, but it should be a one-time process, from which other people will benefit.
Have you tried libheif? According to https://github.com/joedrago/avif/issues/11, it's a bit faster. Probably will need CPU/GPU chips that can accelerate the codecs more efficiently before the encode times really start coming down.
libavif (and the libraries that back it) have a speed parameter to control the CPU/quality tradeoff. Have you tried tweaking that and seeing if you get acceptable speed?
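For example, with the avifenc CLI that ships with libavif - higher speed values encode faster at some cost in compression:

    # -s/--speed goes from 0 (slowest, best compression) to 10 (fastest)
    avifenc --speed 8 input.png output.avif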
I've had the same experience: AVIF images sometimes look strange in Firefox (AVIF support is behind a feature flag), but they look good in Chrome. I think Firefox might use the OS AVIF decoder, because the images also look strange when I open them directly on my Windows machine.
I can see the differences in Chrome too though? I came to the comments to see if anyone else could tell that the AVIF example is inferior to alternatives.
For those who use Jekyll as a static site generator, I highly recommend the plugin "jekyll_picture_tag" [1]. It takes care of generating multiple images in different formats and sizes, including AVIF.
You can also specify a list of image formats in decreasing order of preference, like:
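Something like this in _data/picture.yml, if I remember the preset layout correctly - the preset name and widths are just examples:

    # _data/picture.yml
    presets:
      default:
        formats: [avif, webp, original]  # decreasing order of preference; 'original' keeps the source format
        widths: [400, 800, 1600]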
The main reason to use new formats online is to speed up page loading. Sites that do this usually have WebP (and now AVIF) and a safe format (JPG, PNG) as fallback.
Often the size reduction is worth the extra disk space. Many sites also use CDNs that convert images on the fly, so they just need one file/URL and then the CDN sends WebP to browsers that support it and JPG to browsers that don't.
The extra markup is usually worth it. AVIF support is already at 63%, with Firefox coming soon. So ~40% of people will get a few more HTML bytes, but ~60% of people might benefit from much bigger savings on images.
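The fallback markup is just a <picture> element listing sources in order of preference - the filenames here are placeholders:

    <picture>
      <!-- the browser uses the first source type it supports, top to bottom -->
      <source type="image/avif" srcset="photo.avif">
      <source type="image/webp" srcset="photo.webp">
      <img src="photo.jpg" alt="..." loading="lazy">
    </picture>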