AVIF works extremely well at compressing images down to very small sizes with minimal loss in quality, but it loses out to JPEG XL when it comes to compression at higher quality. Also, I believe AVIF has an upper limit on canvas size (2^16 by 2^16 pixels, I think), whereas JPEG XL doesn't have that limitation.
Also, existing JPEGs can be losslessly migrated to JPEG XL, which is preferable to a lossy conversion to AVIF.
So it's preferable to have JPEG XL, WebP, and AVIF:
- WebP fills the PNG role while providing better lossless compression.
- AVIF fills the JPEG role for most of your standard web content.
- JPEG XL lets you migrate old JPEG content and get most of the benefits of JPEG XL or AVIF without a lossy conversion.
- JPEG XL fills your very-high-fidelity image role (currently filled by very large JPEGs or uncompressed TIFFs) while providing very good lossless and lossy compression options.
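For what it's worth, serving all three usually boils down to content negotiation on the Accept header (or a `<picture>` element with several sources). Here's a minimal sketch of the server-side variant, assuming you already have the same image pre-encoded in each format; the file names and helper function are made up for illustration:

```python
# Minimal sketch: pick a pre-encoded variant based on the browser's Accept
# header. PRECODED and pick_variant are illustrative, not from any framework.
PRECODED = {
    "image/jxl":  "cat.jxl",
    "image/avif": "cat.avif",
    "image/webp": "cat.webp",
    "image/jpeg": "cat.jpg",   # universal fallback
}

def pick_variant(accept_header: str) -> str:
    accepted = {part.split(";")[0].strip() for part in accept_header.split(",")}
    for mime in ("image/jxl", "image/avif", "image/webp"):
        if mime in accepted:
            return PRECODED[mime]
    return PRECODED["image/jpeg"]

# Browsers advertise support roughly like this:
print(pick_variant("image/avif,image/webp,image/*,*/*;q=0.8"))  # -> cat.avif
```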
A possibly underrated but potentially very useful feature unique to JXL is that it completely eliminates the need for a third-party thumbnail/image-scaling rendering service or workflow. If you need a full-size JXL image rendered down to 25% of its pixel count for one of your web views, you literally just truncate the bitstream at 1/4 of the total size (or whatever fraction of the full-size image's pixel count you need; that's a trivial calculation) and send just that.
That's tremendously simpler, from both an architectural and a maintenance standpoint (for any site that deals with images), than what you would usually have to do: rely on a third-party host (with added cost, latency (without caching), and potential downtime/outages), push everything through the (very terrible and memory/CPU-wasteful codebase at this point) ImageMagick/GraphicsMagick library (and potentially manage that conversion as a background job, which incurs additional maintenance overhead), or get VIPS to actually build successfully in your CI/CD workflow (an issue I struggled with in the past while trying to get away from "ImageTragick").
You get to chuck ALL of that and simply hold onto the originals in your stateful store of choice (S3, DB, etc.), possibly caching them locally on the webserver, and just... compute the fraction of the file you need for the requested dimensions (which is basically just ((requested x) * (requested y)) / ((full-size x) * (full-size y)) of the total binary size, capped at 100%), and bam, truncate.
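To make the arithmetic concrete, here's a minimal sketch of that calculation (plain Python, not tied to any JXL library; the file name is just an example, and in practice you'd want to cut at a sensible point in the progressive stream rather than at an arbitrary byte, but the back-of-the-envelope version is exactly this):

```python
# Back-of-the-envelope byte budget for a progressive image, as described above:
# fraction of pixels requested ~= fraction of the bitstream to send.
def truncated_byte_count(full_w, full_h, req_w, req_h, total_bytes):
    fraction = min((req_w * req_h) / (full_w * full_h), 1.0)  # cap at 100%
    return int(total_bytes * fraction)

with open("original.jxl", "rb") as f:      # example file name
    data = f.read()

# 4000x3000 original, requested at 1000x750 -> 1/16 of the pixels,
# so send roughly the first 1/16 of the bytes.
n = truncated_byte_count(4000, 3000, 1000, 750, len(data))
preview_bytes = data[:n]                   # bam, truncate
```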
Having built out multiple image-scaling (and caching, and sometimes third-party-hosted) workflows at this point, I find this a very attractive feature, speaking as a developer.
That's just progressive decoding, though, and it's only possible if you encoded the image appropriately (which is optional). You can do similar things with progressive JPEG, PNG, and WebP, with JPEG being the most flexible.
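You can see the same trick with plain progressive JPEG and Pillow, for example (file names are just placeholders, and how good the partial render looks depends on where the cut lands relative to scan boundaries):

```python
# Re-encode a JPEG as progressive, then decode only the first 25% of its bytes.
# Pillow pads whatever is missing when LOAD_TRUNCATED_IMAGES is set.
import io
from PIL import Image, ImageFile

Image.open("photo.jpg").save("progressive.jpg", format="JPEG",
                             quality=85, progressive=True)

data = open("progressive.jpg", "rb").read()
ImageFile.LOAD_TRUNCATED_IMAGES = True
preview = Image.open(io.BytesIO(data[: len(data) // 4]))
preview.load()                              # decodes the early, blurrier scans
preview.save("preview.jpg")
```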
The thing with JPEG XL, though, is that its design is inherently progressive. Even when there is no reordering, you get an 8x downsampled image before everything else (and the format itself exploits the heck out of this fact for better compression).
Apart from the limited resolution, probably the biggest problem with AVIF is that it doesn't support progressive decoding, which could effectively cancel out its smaller file size for many web applications. An AVIF only shows once it has loaded 100%. See
This comparison video is admittedly a little unfair, though, because AVIF would easily have a 30% lower file size than JPEG XL on ordinary images at medium quality.
Hehe, I see we have been down the same route. Sad to say, but ImageMagick is awful at resource usage. VIPS can do 100x better in many specific cases, but it is a little brittle. I did not find it that incredibly difficult to build, though.
- JPEG XL can do lossless compression better than PNG if I’m right.
- At low bit rates, JPEG XL isn’t that far from AVIF quality. You will only use those for less important stuff like “decorations” and previews anyway, so you can be less picky about the quality.
- For the main content, you will want high bit rates which is where JPEG XL excels.
- Legacy JPEG can be converted to JPEG XL for space savings at no quality loss.
The use cases for WebP are limited, its actual advantage over a decent JPEG isn't that big, and unless you use a lot of lossless PNGs I would argue it should never have been pushed as the replacement for JPEG. To this day I still don't know why people are happy about WebP.
According to Google Chrome, 80% of images transferred have a BPP of 1.0 or above. The so-called "low bit rate" regime is below BPP 0.5. The current JPEG XL encoder is still not optimised for low bit rates, and judging from the author's tweet I don't think they intend to work on that any time soon. And I can understand why.
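For a sense of scale, BPP is just file size relative to pixel count, so those thresholds translate into file sizes like this (purely illustrative numbers):

```python
# Bits per pixel = bytes * 8 / (width * height).
def bpp(file_size_bytes: int, width: int, height: int) -> float:
    return file_size_bytes * 8 / (width * height)

# A 1920x1080 image at ~259 KB sits right at 1.0 BPP ...
print(bpp(259_200, 1920, 1080))   # -> 1.0
# ... while the "low bit rate" regime (below 0.5 BPP) means under ~130 KB here.
print(bpp(129_600, 1920, 1080))   # -> 0.5
```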
AVIF is even more limited in resolution than that: just 8.9 megapixels in the baseline profile, or 35 megapixels in the advanced profile.
If you have image-heavy workflows and care about storage and/or bandwidth, then JPEG-XL pairs well with AVIF: JPEG-XL is great for originals and detail views thanks to its performance at high quality settings and its high resolution support, while AVIF excels at thumbnails, where resolution doesn't matter and you need good performance at low quality settings.
JPEG XL lossless: about 35% smaller than PNG (50% smaller for HDR). Source: https://jpegxl.info/. So with JPEG XL, WebP may not serve any real purpose anymore.
You can scroll down (on mobile) to see an overview image comparing technical features on https://jpegxl.info/. It doesn't mention color profiles (although I presume that just means they're all equal there), but JXL does support a higher maximum bit depth per channel (32 vs 10 for AVIF) and more channels (4099 vs 10). So for raw sensor data, and for intermediate formats in image processing, where information loss should be avoided, it should be a lot better.
I'm hoping it gets adopted as a better underlying technology for various RAW formats, and hopefully as a better successor to the DNG format while we're at it (currently these are TIFF-based). I'm not even a professional photographer, and my hard drive is still mostly occupied by RAW files.
Yeah, the points you mention are what I remember photographers really digging about JXL. Also, higher bit depth is a big deal for some pro photographers.
I actually studied photography (technically contemporary art, but photography was my main medium) but chose not to pursue a career in it. You are correct, bit depth matters. It is unlikely 32 bits will ever be needed for RAW files, though.
Specifically, it matters for source files and intermediate files.
With RAW files from the camera, the higher the bit depth of the analog-to-digital conversion (ADC) step, the less posterization this introduces on the signal. Theoretically at least, you're still limited by the sensor's dynamic range, and there are other subtleties involved, like light perception being logarithmic instead of linear, but RAW encodings being linear[0][1]. But in simple terms: paired with a sensor with high dynamic range and good ADC, a higher bit depth results in less noise and higher dynamic range. Which allows one to recover more fine detail from shadows and highlights. Which makes the camera more forgiving in normally difficult lighting scenes (low light and/or high contrast). So a higher bit depth can aid in giving photographers creative freedom when shooting, and more flexibility in editing their photos without loss of fidelity.
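To put a number on the linear-encoding point: each stop down from clipping halves the available code values, so extra ADC bits almost entirely go toward usable shadow detail. A quick illustration (just arithmetic, no camera specifics):

```python
# Code values available in each successive stop below clipping, for an n-bit
# linear encoding: the brightest stop gets half of all values, the next stop
# half of the remainder, and so on.
def levels_per_stop(bits: int):
    return [2 ** (bits - s - 1) for s in range(bits)]

for bits in (12, 14, 16):
    lv = levels_per_stop(bits)
    print(f"{bits}-bit linear: brightest stop {lv[0]} values, "
          f"10 stops down {lv[10]}, deepest stop {lv[-1]}")
```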
So yes, it is an important cog in the machine that is the whole processing pipeline.
Having said that, as I mentioned, our eyes perceive light logarithmically. The dynamic range of the human eye is... complicated to determine, because it adjusts so quickly. At night it may go up to 20 stops; during the day, 14 stops is likely to be the typical range[2]. So it's probably not a coincidence that digital cameras have typically "stalled" at 14 bits for their RAW files: the photographer likely wouldn't be able to see more contrast in the lights and shadows before taking a photo anyway!
According to the article, WebP requires more CPU to decode. JPEG XL also supports lossless transcoding from JPEG, so it could be used for old image sets with no loss in image fidelity.
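If you want to try the lossless transcoding, the reference libjxl tools do it out of the box; here's a rough sketch driving them from Python (assumes cjxl/djxl are installed and on PATH, and that the current defaults keep the JPEG reconstruction data):

```python
# Transcode an existing JPEG to JPEG XL losslessly, then reconstruct the
# original JPEG from the .jxl to check that nothing was lost.
import subprocess

subprocess.run(["cjxl", "photo.jpg", "photo.jxl"], check=True)
subprocess.run(["djxl", "photo.jxl", "roundtrip.jpg"], check=True)

same = open("photo.jpg", "rb").read() == open("roundtrip.jpg", "rb").read()
print("bit-identical round trip:", same)  # expected True with reconstruction data
```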
There are arguments for the new format, but the Chrome people seemed unwilling to maintain support for it when pick-up was non-existent. (Firefox could have moved it out of their purgatory. Safari could have implemented it earlier. Edge could have enabled it by default. Sites could have used polyfills to demonstrate that they want the desirable properties. And so on.)
To me, the situation was one of "If Chrome enables it, people will whine how Chrome forces file formats onto everybody, making the web platform harder to reimplement, a clear signal of domination. If they don't enable it, people will whine how Chrome doesn't push the format, a clear signal of domination", and they chose to use the variant of the lose-lose scenario that means less work down the road.
> There are arguments for the new format, but the Chrome people seemed unwilling to maintain support for it when pick-up was non-existent
Of course there is no pick-up when Chrome, with its massive market share, doesn't support it. Demanding pick-up before support makes no sense for an entity with such a dominant position.
- Microsoft enabling the flag in Edge by default, telling people that websites can be 30% smaller/faster in Edge, and automatically adding JXL conversion to their web frameworks
- Apple doing the same with Safari (what they're _now_ doing)
- Mozilla doing the same with Firefox (instead of hiding that feature in a developer-only build behind a flag)
None of that happened so far, only the mixed signal of "lead and we'll follow" and "you are too powerful, stop dominating us" in some issue tracker _after_ the code had been removed.
Why are you talking about Microsoft, Apple, and Mozilla, when Chrome has a larger market share than all of them?
> "you are too powerful, stop dominating us."
That's twisting things. The problem was that the argument of the Chrome team against JPEG XL was self-refuting. They were themselves the main cause of what they complained about.
Because Microsoft, Apple and Mozilla can still exert pressure: "Support this feature we enabled and benefit from 20% less traffic with users of our browsers" and "Use Edge/Safari/Firefox to browse the web faster (and, on metered connections, cheaper)" still have an effect on Chrome's decision making.
Chrome had that code, hidden behind a flag. There wasn't any kind of activity. No questions of "when will you enable it by default in Chrome?". No other Blink-based browser (Edge, Brave, Vivaldi, Opera) that could easily have picked up the support by enabling that damn flag by default did so. Firefox hid JXL support even better than Chrome did. No image-sharing site did the math of "200KB for a polyfill saves us and our users megabytes in traffic on each visit" and acted on it.
That doesn't look like anybody is interested in JXL support.
I'm bringing this up again and again because I dislike that notion of "Chrome is the market leader and we're powerless to do anything about it. Bad Google." It encourages neither the Chrome folks to do better nor anybody else to pick up the slack. It's 100% complaint, no matter what Chrome does.