Request: Re-open JPEG XL issue (chromium.org)
302 points by blurred on Aug 7, 2023 | 160 comments



As a photographer I am looking forward to it.

I process photos in ProPhoto RGB, and I'm switching my workflow to always publish images to the web as Display P3, which works just fine in JPEG and WebP by attaching a color profile.

Display P3 is moderately larger than the old standard sRGB; you are trading some color resolution in the “mainstream” area for more saturated greens and reds.

4K TVs use Rec 2020, which has a huge color gamut. Because it covers a bigger space, 8-bit color is not enough; you need to go to 10-bit, 12-bit or more (I process in 16 bits), and neither JPEG nor WebP can handle that. AVIF can, but so can JPEG XL.

I know people doing synthetic tests (instead of looking at the image they run a program that estimates how bad compression artifacts are) are impressed with AVIF but I’ve done some shootouts with JPEG/WEBP/AVIF/JPEG XL where I look at images with my own eyes.

For pictures that are moderate-low quality (say images for a blog) I think AVIF does very well, but I want to publish pictures I took with my mirrorless where I work really hard to get them “tack sharp” (e.g. sometimes a 4000x6000 image w/ my Sony looks almost like pixel art when you blow it up) and I want people to see something consistent with that on the web. And my experience is that AVIF falls down at that, it does not really save bits compared to JPEG and WEBP at high quality. JPEG XL gives superior compression at high quality and it supports high color depths and it’s an option I’d really like to have.


> And my experience is that AVIF falls down at that, it does not really save bits compared to JPEG and WEBP at high quality.

In all the comparisons I've seen, it's not even a contest.

"I picked this image because it's a photo with a mixture of low frequency detail (the road) and high frequency detail (parts of the car livery). Also, there are some pretty sharp changes of colour between the red and blue. And I like F1.

Roughly speaking, at an acceptable quality, the WebP is almost half the size of JPEG, and AVIF is under half the size of WebP. I find it incredible that AVIF can do a good job of the image in just 18 kB."

https://jakearchibald.com/2020/avif-has-landed/

It'd be interesting to see file size comparisons of AVIF lossless images vs. JPEG's "almost lossless" 100% compression, but I haven't run across any yet.


That seems like a bad image for the stuff the OP is talking about - large print high detail stuff - since it's so small, but the AVIF "acceptable" still, even at low res, seems to be clearly throwing away detail compared to the JPEG "acceptable". Look at how the JPEG preserves some of the body seam in the second "l" in RedBull, for instance. So ok, it's a quarter the size, but they aren't showing me how big an AVIF with the same detail as the JPEG would be.

They're just showing that it can do a less-offensive job of erasing detail smoothly to take filesizes down to tiny levels than WebP or JPEG? But "tiniest size with least offense" is VERY different than "best size with greatest detail."


Exactly, that's what I'm talking about.

For some applications people would be really happy with that F1 car image and I would even be happy with it for some applications (e.g. some random image to give spice to a blog) but for my photograph I'd say that's pretty lossy.


It's a Jedi mind trick.

The compression artifacts in the self-shadows of the red bit of the car to the left of the driver's head look awful to me. It's true that the compression artifact blends in pretty well and you might think the car really looks like that but personally I can't unsee things like that once I look at them in comparison.

The thing is that it is that play of reflections and shadows that makes an expensive sports car look so sexy.


> It's a Jedi mind trick.

All compression is. :^) The additional twist is that our eyes/brains do a great job at glossing over compression artifacts that we're used to.

When you expand the image, you can change the formats on both sides for A/B testing. Comparing "JPEG - 20.7 kB" to "AVIF - 18.2 kB" is an enlightening like-vs-like size comparison.

I'd be happy to do an AVIF encode of a large uncompressed/losslessly compressed image that meets your "near visually lossless" bar. I'm assuming that JPEG must do better in comparison to AVIF at large file sizes, but I can't find good examples of this.


> Comparing "JPEG - 20.7 kB" to "AVIF - 18.2 kB" is an enlightening like-vs-like size comparison.

You can't extrapolate that comparison to higher bitrates though - so unless your use case doesn't require preserving image detail (i.e. the images might as well not be there), both formats are inadequate at that size.


This "acceptable quality" photo at the top in AVIF made me initially think someone took a blurry photo. I only understood what happened after switching to original and JPEG, which look much better. So this is an apples-and-oranges comparison IMO. I'd never use that AVIF version on my photography website.


I don't enjoy seeing compression artefacts or blur or other loss of detail.

I don't want the internet to look like that F1 car to save a few milliseconds or $0.0000001 for the company sending the pic to me.

I certainly don't want my family photos to look like that.

I don't want the photographs on the internet to become much faster, much cheaper and much worse. I'd like them to become a bit faster, a bit cheaper, but also more crisp, vivid, emotionally engaging, and realistic.


Yes, the unfortunate thing is that Google is not interested in a higher quality Web so much as they are in a Web that is cheaper to index and serve.

So it's unsurprising that they have pushed the format optimized for "as few bits as we can get away with before things look too terrible" rather than actually improving quality and extending capabilities.


Here are visual comparisons between AVIF and JPEG XL on some test images with various bitrates / quality levels:

https://afontenot.github.io/image-formats-comparison/#end-of...

It seems AVIF has better compression at lower bit rates. At high bit rates they seem similar. AVIF especially shines for pictures with large homogeneous surfaces like the sky.

However, AVIF is missing some important features, such as progressive image loading. The maximum resolution is apparently also quite limited.


A recent comment (https://news.ycombinator.com/item?id=36286548) made me notice chroma contamination in low-BPP JPEG XL in this comparison. Since then, I have been leaning towards adopting AVIF for medium-low-quality pictures, especially of people. (People will be unhappy if their teeth are yellowed in post-production.) I am not so sure about AVIF and the sky. A reply to the comment I have linked shows an example where AVIF smooths out a complex sky texture at every tested quality setting, but JPEG XL does not: https://afontenot.github.io/image-formats-comparison/#reykja....

AVIF is missing JPEG XL's ability to re-encode JPEGs losslessly and reversibly with a reduction in file size. It may prove a serious advantage for JPEG XL. AVIF also lacks anything like https://jpegxl.info/art/. :-)
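For anyone wanting to try the lossless JPEG recompression: libjxl's cjxl tool does it by default. A minimal sketch of driving it from Python, assuming cjxl is installed and on PATH (the helper names here are my own):

```python
import shutil
import subprocess

def jxl_transcode_cmd(src_jpeg, dst_jxl):
    # --lossless_jpeg=1 is the default: cjxl stores the original JPEG's
    # DCT coefficients, so the exact original file can be reconstructed.
    return ["cjxl", "--lossless_jpeg=1", src_jpeg, dst_jxl]

def transcode(src_jpeg, dst_jxl):
    """Run the transcode if cjxl is available; True on success."""
    if shutil.which("cjxl") is None:
        return False  # cjxl not installed on this system
    result = subprocess.run(jxl_transcode_cmd(src_jpeg, dst_jxl))
    return result.returncode == 0
```

Decoding with djxl back to a .jpg filename reconstructs the original JPEG file bit for bit.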


I also noticed more chroma contamination in JPEG XL, and overall more "ringing" artifacts.

Yeah, AVIF seems to remove noise fairly aggressively, or what it assumes to be noise. I think it looks pretty good in this sky example, although not overly faithful. It's less good when the removed "noise" is actual high frequency detail, e.g. on the fur of animals.

But what I meant by AVIF being good at homogeneous surfaces is that it apparently uses them to "save" bit rate and spend it instead on other portions of the picture. Images with large homogeneous portions tend to look significantly better in the rest of the image, e.g.

https://afontenot.github.io/image-formats-comparison/#clovis...

https://afontenot.github.io/image-formats-comparison/#us-ope...

I think here AVIF "small" looks overall better than JPEG XL "medium", e.g. in the details of the middle balloon basket, or the face of the tennis player.

I still think the lack of any progressive image loading makes AVIF completely unsuited for the web. The picture will only show once it has been downloaded completely. That's a big step back from JPEG, and even more so from JPEG XL.


The 'Large' is still relatively low quality -- you can observe this by 3x zooming and comparing to original.

You can easily see that AVIF blurs (beautifies faces) and removes properties of the red cloth in the 'end-of-show' image.

Internet average image quality is higher than the 'Large' setting, so those names are not representative of actual internet use. Camera and image-processing use is of much higher quality still.


No, I think Internet average quality is a lot lower than AVIF or JPEG XL at "large". Instead of using such a high bitrate to save a small amount of quality, it makes much more sense to use a lower bit rate with a significantly higher resolution and end up with the same file size.

Even cameras often have relatively little detail per pixel, since they do interpolate a lot of information due to the usage of Bayer filters (which record only one primary color per pixel instead of full RGB values), and because anything with less than perfect lighting will be somewhat noisy and blurry anyway.
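To make the Bayer point concrete: under an RGGB mosaic each sensor pixel records only one primary, and the other two are interpolated from neighbours. A toy pure-Python sketch (function names are illustrative):

```python
# Toy RGGB Bayer mosaic: each sensor pixel records one channel only.
# Pattern (row-major):  R G
#                       G B
def bayer_channel(x, y):
    """Which primary a sensor pixel at (x, y) records under an RGGB mosaic."""
    if y % 2 == 0:
        return "R" if x % 2 == 0 else "G"
    return "G" if x % 2 == 0 else "B"

def interp_green_at_red(mosaic, x, y):
    """Estimate green at a red site by averaging the 4 green neighbours."""
    neighbours = [(x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)]
    vals = [mosaic[(nx, ny)] for nx, ny in neighbours if (nx, ny) in mosaic]
    return sum(vals) / len(vals)
```

Real demosaicing algorithms are far smarter (edge-directed, etc.), but that averaging step is why per-pixel detail is lower than the raw pixel count suggests.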


I made quite a bit of unpublished effort to understand the average quality. The Web Almanac media chapter shows the average JPEG BPP density. In my collection, the median quality as reported by ImageMagick is 85, at roughly 2–2.5 BPP.
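BPP here is just compressed size over pixel count. A quick sketch of computing it for a collection (the file sizes and dimensions below are made-up examples):

```python
from statistics import median

def bits_per_pixel(file_size_bytes, width, height):
    """Compressed bits spent per pixel of the decoded image."""
    return file_size_bytes * 8 / (width * height)

# hypothetical collection: (file size in bytes, width, height)
photos = [(6_000_000, 4000, 6000), (900_000, 1600, 1200), (2_400_000, 3000, 2000)]
bpps = [bits_per_pixel(s, w, h) for s, w, h in photos]
print(median(bpps))  # → 3.2
```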


Compare the AVIF with the original and you see how bad it is. You need to make the comparison at a bitrate where at least one of the formats gives acceptable results.


> 4K TV’s use Rec 2020

Is that defined by some standard as a requirement for being declared "4K", or is it just what seems to be happening because all/most of the panel makers just threw it in?


The video is supposed to be encoded in Rec 2020. The panel is what it is. TV manufacturers negotiated to get a green primary close to what they could manufacture but really the panel is likely to be a little different and be color managed.

My main monitor is a Dell that is very close to Adobe RGB, which is great for print work because it covers the CMYK gamut well.

I am interested in getting something better but it is not so clear to me that you can really get a Rec 2020 computer monitor other than a crazy expensive monitor from Dolby. Maybe I gotta download a bunch of monitor profiles so I can know what various monitors really support as I’ve already developed a system for simulating how channel separation works for red-cyan stereograms even on monitors I don’t have.

A better TV has been on the agenda too, except somehow people keep giving me free TVs on the trailing edge, such as a Walmart TV which had great sound (better than many sound bars) but whose backlight burned out; then I got gifted a Samsung which sucks but is working fine in my TV nook downstairs. My main AV room doesn't have room for anything bigger than what I've got unless I move everything, which I don't have a good plan for…


> The video is supposed to be encoded in Rec 2020. The panel is what it is.

Worth noting that it is physically impossible for current flat panel displays to have full coverage of rec.2020, because the spectral width of each primary is too wide with current flat display tech (LED, LCD, etc.)

Full rec.2020 coverage requires the use of lasers, so you can get a ~spectrally pure primary.

My prediction is that the next big move in display tech after 8K will be a transition from LCD/LED to VCSELs or some other teeny laser pixel, so they can advertise full rec.2020 coverage.

After that, maybe tunable quantum dot lasers so they can get full CIE 1931 coverage, but that's probably at least 15-20 years away.


The panels with quantum dots can theoretically reach about 97% coverage of the Rec. 2020 color space, which should be enough for most purposes.

There are many commercial models which exceed 90% coverage of the Rec. 2020 color space. However they are expensive, so they are used mostly in high-end TV sets and only seldom in expensive computer monitors.


The only difference between Adobe RGB and the PAL/SECAM color TV sets from 1970 is the primary green color, which is much more pure in Adobe RGB; that is indeed what makes Adobe RGB great for print work.

On the other hand, for viewing images on monitors, Adobe RGB is not a desirable color space. The worst primary color of PAL/SECAM and of the closely related SMPTE C and Rec. 709 color spaces is the primary red color.

While in the blue and green regions the colors not representable in PAL/SECAM/SMPTE C/Rec. 709 can at least be approximated by reducing their saturation, in the red corner, from purple to yellow, there are also colors that can be approximated only by reducing both their saturation and their brightness.

Moreover, in the red corner, from yellow to purple, there are many frequently encountered objects with such saturated and bright colors, e.g. flowers, fruits and clothes.

So the really noticeable improvement in color reproduction on monitors is when passing from PAL/SECAM/SMPTE C/Rec. 709/Adobe RGB to monitors with DCI-P3 primary colors.

DCI-P3 keeps the primary blue of PAL/SECAM, but it has much better primary red and primary green colors. The green is not as good as that of Adobe RGB, but the better red provides a much more visible improvement.

Many relatively cheap monitors, e.g. all my Dell monitors, have a menu option to replace the default sRGB color space with the "DCI-P3" color space (and they can display almost 100% of the DCI-P3 color gamut). On any such monitors, by "DCI-P3" is meant the Apple Display P3 color space, i.e. DCI-P3 primary colors combined with the PAL/SECAM D65 white and with the sRGB non-linear transfer function.

In cheap monitors, the DCI-P3 color gamut with 10 bit per color component is the best that can be found. The monitors whose color gamut is a larger fraction of the Rec. 2020 color space are expensive, because they normally must use quantum dots or OLED or both.

Nevertheless, as the output color space for any high-quality photograph, the Rec. 2020 color space is preferable, even if most people who would look at the photograph now would clip its color gamut to the sRGB or DCI-P3 of their monitors. However those who have better monitors, whose numbers will be increasing in the future, will be able to see any color that can be displayed by their monitors, without having the quality of the photograph already degraded by whoever has processed it.
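The gamut clipping mentioned above can be seen numerically: converting a saturated Rec. 2020 color to linear sRGB through CIE XYZ, using the commonly published D65 matrices (rounded to four decimals here, so results are approximate), produces components outside [0, 1]:

```python
# Rec. 2020 -> sRGB via CIE XYZ, standard D65 matrices (approximate).
REC2020_TO_XYZ = [
    [0.6370, 0.1446, 0.1689],
    [0.2627, 0.6780, 0.0593],
    [0.0000, 0.0281, 1.0610],
]
XYZ_TO_SRGB = [
    [ 3.2406, -1.5372, -0.4986],
    [-0.9689,  1.8758,  0.0415],
    [ 0.0557, -0.2040,  1.0570],
]

def mat_vec(m, v):
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

def rec2020_to_srgb_linear(rgb):
    """Linear-light Rec. 2020 -> linear sRGB. Out-of-gamut colors land
    outside [0, 1]; a naive pipeline simply clips them."""
    return mat_vec(XYZ_TO_SRGB, mat_vec(REC2020_TO_XYZ, rgb))

def clip(rgb):
    return [min(1.0, max(0.0, c)) for c in rgb]

srgb = rec2020_to_srgb_linear([0.0, 1.0, 0.0])  # pure Rec. 2020 green
```

Pure Rec. 2020 green maps to negative red and blue in sRGB, i.e. it is simply not representable there; clipping collapses it onto the sRGB gamut boundary.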


The ITU-R UHDTV standard, the mainstream 4K TV standard, uses Rec 2020.

In theory you can use any color primaries with any video resolution on a computer (NOT on a TV, as those normally only support the mainstream standards) as long as the color space metadata is properly set. But in practice, some software ignores the metadata, or the metadata gets lost in the video processing chain. So in general 4K video uses Rec 2020, HD/FHD uses Rec 709, and SD uses Rec 601, for maximum compatibility.


> I want people to see something consistent with that on the web.

Don't get too hung up on picking a file format then. All sorts of middleboxes, CDNs, and edge network acceleration systems can potentially "right-size" your image for what the requesting device can handle optimally.
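One common mechanism those middleboxes use is HTTP content negotiation on the Accept header. A server-side sketch; the preference order is my assumption, and real CDNs layer User-Agent sniffing and device hints on top:

```python
def pick_format(accept_header):
    """Pick the best image MIME type the client says it accepts.

    The preference order is an illustrative assumption, not any
    particular CDN's policy; q-values are ignored for simplicity.
    """
    preference = ["image/jxl", "image/avif", "image/webp", "image/jpeg"]
    accepted = {part.split(";")[0].strip() for part in accept_header.split(",")}
    for fmt in preference:
        if fmt in accepted:
            return fmt
    return "image/jpeg"  # universal fallback
```

A Chrome-style header like `image/avif,image/webp,image/apng,image/*,*/*;q=0.8` would get AVIF here; a client advertising nothing recognizable falls back to JPEG.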


>a 4000x6000 image w/ my Sony looks almost like pixel art

Can you share some examples of such images/fragments?


JPEG has 12-bit support somewhere in the standard; I'm not sure where it's implemented.


I'm not aware of any photo editing software that can export 12-bit JPEG (although to be fair I don't know many). I am aware, though, that JPEG 2000 supports 12-bit color and that JPEG XT supports up to 16-bit color; perhaps that's what's causing the confusion?


The archetypal libjpeg has supported 12-bit for a long time, but it's been a compile-time option, so an application (or distro package) had to choose which bit depth to support.


Darktable supports it under JPEG2000: https://docs.darktable.org/usermanual/4.0/en/overview/suppor... (Darktable also supports JPEG XL)


JPEG2000 is a different format, just as JPEG XL isn't the same as original JPEG.


12-bit JPEG is a different format in practice. Better to move forward than to add support for a "new" ancient format.

Also, jpegli can support 10+ bits within the old compatible "8 bit" formalism.


jpegli can encode and decode 10–12 bits of effective precision within the old 8-bit JPEG format: more when the image is smoother, less when it's noisier.


Is 8bit enough to cover Display P3 without banding artifacts?


no. 8 bits isn't enough to cover sRGB without banding artifacts (without dithering).


Wait, what? Even with gamma encoding?


yeah (especially in dark areas). On http://www.lagom.nl/lcd-test/gradient.php I can definitely see some in the lower quarter of brightness. It's not glaringly obvious, but it's definitely there.


8 bits is not quite enough, not even for the usual sRGB, let alone HDR or wide gamut. This is why jpegli is such a promising new option.


With proper dithering, I would guess even rec. 2020 would be doable.
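What dithering buys you is easy to show numerically: on a flat 30% grey patch quantized to a handful of levels, plain rounding lands every pixel on the same wrong code (a visible band), while error diffusion alternates codes so the local average stays correct. A pure-Python sketch (a 1-D toy, not a real 2-D Floyd–Steinberg kernel):

```python
from statistics import mean

def quantize(v, levels):
    """Round v in [0, 1] to the nearest of `levels` equally spaced codes."""
    step = levels - 1
    return round(v * step) / step

def error_diffuse(values, levels):
    """1-D error diffusion: carry each pixel's rounding error to the next."""
    out, err = [], 0.0
    for v in values:
        q = quantize(min(1.0, max(0.0, v + err)), levels)
        err = (v + err) - q
        out.append(q)
    return out

flat = [0.3] * 1000                       # a flat region at 30% grey
plain = [quantize(v, 5) for v in flat]    # plain rounding: heavy banding
dithered = error_diffuse(flat, 5)
# plain rounding snaps every pixel to 0.25, so the patch is visibly wrong;
# the dithered patch mixes 0.25 and 0.5 so its average stays near 0.3
```

The trade is spatial noise for banding: the extra precision is spread across neighbouring pixels.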


But that's cheating. You're effectively using N times 8 bits, by using N 8-bit pixels to encode the intermediate values.


If we consider dithering cheating, then 10 bit without dither is also not enough for banding-free 4K, no matter the color space.


I think so.


So, all it takes to consider a small community requested change in Chromium is a massive protest from thousands of users, small businesses, and Fortune 500 companies for almost a year...

Or maybe they are just trying to keep feature parity with Safari.


Of course Apple adding support in Safari is far more important than Internet outrage!

At this point adding new {image, audio, video, compression} codecs to browsers is probably a net negative, unless there's a good chance they get deployed across the entire browser ecosystem. Safari is generally the browser that's most conservative about implementing anything new, so their support makes a huge difference in the viability of getting the format universally supported.


None of this stopped Google's push for WebP or AVIF.


Right, WebP was almost certainly a mistake in hindsight. Whatever advantages it had weren't worth a decade of ecosystem fragmentation. Isn't learning from mistakes and not repeating them a good thing?

I don't think AVIF is a good example of this though. All major browser makers were members of the consortium that created AVIF.


On the contrary, the webp push didn't really hurt anything... there were (IIRC) no major exploits related to it, and it sees some use. I wouldn't say it was a mistake.

But it was a tremendous amount of effort and promotion on Google's end for a relatively narrow format.


I would seriously disagree. Plenty of CMSes switched all their photos to WebP, resulting in a huge loss in quality.


Google has enough moat via the Chrome, Chrome-derivative browser and Youtube client marketshare to push through a new format virtually everywhere outside the Apple ecosystem.


Yet WebP support is still pretty dreadful.


Dreadful where? In this day and age, WebP is supported by every browser, I can browse them comfortably in my file manager and basically every image-related program can open and edit them (I'm running GNOME on Arch). Where is it lacking?

I'm not a fan of WebP, mind, but the idea that support for it is dreadful is strange to me.


I think it was less than a year ago that OBS got support for WebP. Just about the same for Photoshop. Plenty of software still doesn't support it, or doesn't support it well. Glad your experience has been good.

Personally, the fact that it was Google's idea means I'm going to hold it at arm's length, if not avoid it entirely. The web should be built on open standards.


I was using WebP-lossless for quite a few years, for the subset of images I had that fit within WebP's limitations (being the 16383×16383 dimensions and 32 bpp color depth). I've recently converted everything to JPEG XL. Old JPEGs got losslessly transformed, and both PNGs and WebP-lossless into lossless JPEG XL.

Seems to be a similar level of "works everywhere" for me, with the exception of web browsers this time.


Actually, given that Safari now supports JPEG XL and AVIF, Edge is the most conservative. They sure did take their sweet time with WebP, though.


I would guess that Apple make business decisions based on metrics that others do not have access to, including those unknown to Google and average internet users.


[flagged]


It's not just gamergate style rage though. Large and small companies have made statements in support of JPEG XL standardization.


> It's not just gamergate style rage though. Large and small companies have made statements in support of JPEG XL standardization.

These facts support my point: the issue filed as linked above correctly cites one of those statements of support, without resorting to outrage, and has been successful at getting Chromium team's attention to the matter.

https://bugs.chromium.org/p/chromium/issues/detail?id=145180...

(Note that I wasn't able to identify any viewpoints or opinions in your reply, other than the facts discussed above. If that's an oversight, I'm happy to revisit.)


What’s outrageous is Google’s BS reasoning for dropping the format.

They claim lack of interest when they (and Firefox) have been hiding JPEG XL support behind flags (and nightlies, in Firefox's case), yet WebP and AVIF are fully supported despite their similarly niche status.

Chrome's support for JPEG XL was also already more or less complete when they decided to drop it.

Ironically, though, it has brought a lot of publicity to JPEG XL as "the format Google is trying to kill", in a Streisand effect sort of way. But with Chrome's dominance, that wasn't enough to give it a fighting chance until Apple quietly announced support for it.


I agree. Google’s reasoning cannot be trusted. The decision they made was self-serving and their expressed reasoning was highly misleading.


Yeah, Google's top three revenue sources are ads, ads, and ads. (Respectively search, network, and YouTube.) Their customers are advertisers. Chrome's job (and Android's) is to make sure they retain control of sufficient surface area to place ads. Chrome user opinion to them is important to their business in about the same way meatpackers care about what cattle think of the design of the feeding stations. As long as they keep coming to eat, it's just mooing.


> As long as they keep coming to eat, it's just mooing.

I love this phrase thank you.


I don't think anything has changed. JPEG XL being supported by Apple would only cover about 20% of users worldwide, assuming every one of them uses it. According to the initial Google thread, that is likely not considered high enough interest.

Per Google's study [1], by a Google engineer, JPEG XL is nowhere near good enough compared to AVIF.

None of the above facts have changed since Google Chrome's decision on JPEG XL.

/S

[1] https://storage.googleapis.com/avif-comparison/index.html


That study is far from being conclusive. Please read also

https://cloudinary.com/blog/the-case-for-jpeg-xl


>That study is far from being conclusive. Please read also

Yes that is why I had "/S" at the end.


Uh? This is just a random request on a bug tracker that someone is triaging the same way they triage all other things?


Yeah, it's barely a response at all.

But it's still more of a response than Google has given in ages.


Can't lose the edge to Tim Apple...


>[E]dge

There's a browser joke here somewhere, I'm sure.


Adding new image format support to a web browser is not a small change.


It's not a 5 minute job, sure, but compared to something like WebGPU it's tiny.


It was already implemented, and gated behind a flag.


And it surely doesn't come with any security baggage.


It adds 185 kB of compressed binary size. But by compressing the graphics that ship with Chromium, it would probably counter the added weight tenfold and actually save size overall.


Ignore the codec information (fascinating though that branch of comp.sci. is), what's interesting here is exactly how much Google is in control of Chromium, and by extension the web.

The fact that we have to get on our knees and plead for their consideration versus just fork and ship should make you ill. No compression without representation or some such.


It seems even the triage is done by bots, or they just don't read the issue, or it's just a tactic to prevent anything from being done, but seriously:

> @Reporter Could you please confirm the OS details.


Yeah, same thought. This actually does not feel like a bot (see random punctuation), just a typical bad 3rd-level support employee.


My favorite bit:

> As the issue seems similar to crbug.com/1178058 adding firsching to cc list for more inputs.


JPEG XL looks like a great format, I hope it takes over.

I get that most bandwidth goes to video, but it would still be nice to have a great modern standard for images.


I think Google's answer to that is WebP.


If anything, it'd be AVIF.

WebP is obsolete. It's still based on the VP8 codec, which in video was replaced by VP9 a long time ago. AVIF is based on AV1, the successor in the VP9/VP10 line. So WebP is a few generations behind in the VPx lineage, and is no match for modern codecs.


AVIF was originally a quick hack by Netflix: placing an AV1 frame into a HEIF container. I believe it was done in a few weeks of work.

AV1 was largely based on VP9/VP10 and was developed by a team working in Chrome organization.

JPEG XL main mode (VarDCT) and the JPEG recompression is largely developed by Google Research.

WebP as a format was based on VP8, a video codec built by On2 Technologies. On2 was bought by Google in 2010, a year before Google published WebP. The transparency and lossless encoding, as well as the non-video keyframe-by-keyframe animation, were designed at Google. The On2 VP8 codec initially used for lossy WebP was not that suitable for photography transmission (too many artifacts); Jeff Muizelaar wrote a great blog post about this. The WebP codec was redesigned (without format changes) at Google, and kept improving significantly until around 2015, when it reached pretty good maturity.

(Personally, I don't like what it does to highly saturated dark colors, such as dark forests or dark red textures, but it is much much better than it was.)


We haven't even switched to WebP yet, and it's already obsolete? What hope is there for anything?


WebP was the classic Google "ship the prototype" move: they were hoping Chrome could muscle it through, but it delivered only modest compression improvements (10–15% real world; the marketing promised 30%, based on comparisons to unoptimized JPEGs), was missing features, and had very primitive software support, making it harder to produce, deliver, or share (when Facebook switched, a common complaint was someone downloading a picture and finding it didn't work in another app or when they sent it to a friend).

Very few sites pay for so much outgoing image bandwidth that a 15% savings outweighs that compatibility cost.


It's still a version of JPEG that supports transparency, and it's actually well supported now (down to iOS 14), so you can use it without also having to deliver a fallback format. It's not as good as its successors, but if you are choosing one format and care about image size, it's the best choice.


I'm not saying it was terrible, but that it took a long time to be worth the trouble unless you really needed transparency. It's only been the last year or so that you could expect to use it for anything non-trivial without spending time dealing with tools which didn't support it.


It did, but mostly because Safari dragged its feet for years. Thank god they didn't take this long for AVIF (though I would have loved it if they had shipped AVIF in iOS 15, since a bunch of devices won't get iOS 16).


Safari was the least of it. Most image processing tools didn't support it (e.g. Photoshop got support last year, Microsoft Paint the year before), or you had to do things like recompile common tools to add support (again, better now, but it takes a while for support to spread through Linux distribution releases), and now you have more security exposure. That was a lot of hassle for very modest compression gains.

AVIF has gone better because it wasn't based on a video codec that was never really competitive, it was developed collaboratively, and it didn't have feature regressions from JPEG. As with tool support, that last point matters a lot at many organizations because the edge experience tends to decide things: even if 95% of your usage is boring 8-bit 4:2:0, the institutional memory tends to be shaped by the times you hit something which can't be used without more work. If it compressed as well as AVIF, more people might have decided WebP was worth it, but since it only marginally outperformed JPEG the case was never that strong.

Part of what I meant by “shipping the prototype” was this kind of stuff: someone at Google wanted to find another use for the On2 IP they’d purchased so they tossed it into a 20 year old container format and shipped it. As with WebM, the benchmarks were fast and loose which meant that anyone who replicated them saw substantially lower performance, which is another great way not to build confidence in your format.


AVIF works on safari on iOS 15. Open the browser inspector on a blogpost on xeiaso.net and you'll see that it's selecting an AVIF image.


I switched to WebP about a year and a half ago. I'd been watching it for a long time, and it had finally reached the point where support was universal enough that I could skip publishing a JPEG.

WebP has the big advantage that the quality setting is meaningful: you can set it at a certain level, then encode thousands of images and know the quality is about the same. This is by no means true of JPEG; if you are trying to balance quality and size, you find you have to manually set the compression level on each image. Years back I was concerned about the size of a large JPEG collection and recompressed them, which was a big mistake because many of the images were compressed too hard.

In 2023 I think you can just use WebP and it will work well, my experience looking at images is that AVIF does better for moderate to low quality images but for high quality images it doesn’t really beat WebP.


> Years back I was concerned about the size of a large JPEG collection and recompressed them which was a big mistake because many of the images were compressed too hard.

Distortion metrics [0] such as MS-SSIM, MS-SSIM*, SSIM, MSE, and PSNR can be used to define a cut-off or threshold for deciding the point at which an image is "compressed enough": use one or more of those algorithms and predefine the amount of acceptable/tolerable distortion or quality loss. Each of those algorithms has trade-offs in terms of accuracy and processing time, but this can definitely work for a large set if you find the right settings for your use case. It is certainly more productive than manually setting the Q-level per image.

Some SaaS such as https://kraken.io do this on JPG images.

[0] https://sourceforge.net/projects/iqa/
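A minimal pure-Python sketch of that thresholding idea, using PSNR and a crude quantizer as a stand-in for a real JPEG encoder (the 40 dB cut-off and all names are illustrative):

```python
import math

def psnr(orig, recon, peak=255.0):
    """Peak signal-to-noise ratio in dB between two pixel sequences."""
    mse = sum((a - b) ** 2 for a, b in zip(orig, recon)) / len(orig)
    return float("inf") if mse == 0 else 10 * math.log10(peak ** 2 / mse)

def lossy(pixels, step):
    """Stand-in for a lossy encoder: a coarser step means a smaller file."""
    return [round(p / step) * step for p in pixels]

def coarsest_step(pixels, min_psnr=40.0):
    """Largest quantization step whose reconstruction still meets min_psnr."""
    best = 1
    for step in range(1, 65):
        if psnr(pixels, lossy(pixels, step)) >= min_psnr:
            best = step
    return best
```

The same loop works with any metric: swap psnr for SSIM or Butteraugli and keep the predefined tolerance.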


Actually this isn't accurate: Kraken.io doesn't use any SSIM-related algorithms, it just blindly applies some standard compression regardless of the image's content.

If you're looking for a tool that really smartly optimizes the images (by using SSIM) that is https://shortpixel.com/online-image-compression


You can use jpegli for better JPEG compression heuristics. It uses custom heuristics that were originally developed for JPEG XL, then copied over and further optimized using Nelder-Mead to minimize distortion metrics (Butteraugli and SSIMULACRA 2).


This image is helpful to see how they all kinda stack up, feature wise.

https://archive.smashing.media/assets/344dbf88-fdf9-42bb-adb...


There is plenty of wrong or misleading information here:

- What is the cross supposed to mean for PNG compression of photographic images? PNG can compress photographic images just fine and for some applications (where you want lossless) it used to be a good choice.

- PNG has animation support (APNG), even if every browser except Firefox tried to ignore it for years.

- While WEBP and AVIF support lossless compression and animation, those features are not available in all browser versions that support static lossy webp/avif images.


a) This is a property of the encoder, not of the image format.

and

b) Like any other video-codec-based format, WebP overcompresses dark areas in images, so no, you can't rely on consistent quality across collections of arbitrary images.


"switched to"? There's no full switch to anything.

WebP has been around for over a decade. AVIF will probably get adoption at a faster rate, in my estimation.


avif still has patents I think?


AV1 is licensed royalty-free with an alliance of all big tech behind it. AVIF has been shipped by Apple and Google.

There's always going to be FUD around software patents, because the system is broken, but AVIF is as good as possible in the pathological system.


Good joke.


Scanning the comments here and I don't see anyone addressing the elephant in the room: PATENTS.

After a bit of searching, it's unclear what degree of "patent risk" comes with JPEG XL. JPEG historically was subject to patent troll lawsuits until the patent expired in 2006.

Please note that it's not enough for there to be a "royalty-free reference implementation" of JPEG XL, even if it's licensed with Apache 2.0, because you can't be sure from a glance that the Apache license patent grant includes all relevant patents. If you care about open source and free formats, you should look for two things: a comprehensive patent pool transferred to the standards body AND a royalty-free patent license to anyone with no strings attached.

The game here is that companies with potential claims over some techniques used within codecs have an incentive to withhold their patents from the official pool until years after adoption. Then they sue the biggest users of the codec (like Google) for obscene sums of money. That's why ALL of the patents used in a codec must be assigned to the standards body for open licensing, and you have to be SURE that none are withheld. This is difficult.

AVIF (and its standards body, the AOM) was created in part (I believe) to solve this very problem. All the major tech companies are members and they've effectively agreed to a patent truce with regard to codecs.

This is arguably the most important commercial concern in distributing a browser for free that includes codecs. If you ship unlicensed codecs, some random company can crawl out of the woodwork 5 years later and sue you for a billion dollars.

In my view, AVIF only needs to be competitive on compression and quality. Its patent risk is so low that it is the obvious choice. AVIF is truly open, there are multiple implementations, and its reason for existing is to solve the codec patent problem.

Source: I was near the activities within Netflix that helped found the AOM.

Disclaimer: I'm not a lawyer and this isn't legal advice; also I'm several years out of date w.r.t. JPEG XL specifically, so I'd be happy to be corrected about the relevant patent risk. Maybe someone has better info?


Did Google cite patents as one of the reasons they removed support initially? I thought it was all about lack of benefits and difficulty of maintenance.


Apparently Microsoft was granted this patent on rANS in early 2022 (https://patents.google.com/patent/US11234023B2/en) and Google deprecated JPEG XL support in late 2022. JPEG XL uses rANS, so I think there's some likelihood that this motivated Google to change their focus. Google didn't mention anything about this in their reasoning, but would they have mentioned patent issues publicly if that were the real reason? Google isn't obligated to tell us everything, and the reasons they gave always felt weak and weirdly dismissive.


JPEG XL doesn't use the kind of rANS that Microsoft has patented.

JPEG XL decides the codes at encoding time and does context modeling the same way as WebP lossless and Brotli, by deciding which entropy codes to use explicitly.

Microsoft's rANS patent is supposedly centered on updating rANS codes at decoding time (based on past symbols). This is slightly more efficient for density, but much slower, and may negate the speed benefits that rANS brings. For practical implementations, the JPEG XL/Brotli way is quite a bit better.
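For the curious, the core of static (non-adaptive) rANS is tiny. A toy Python version with frequency tables decided up front and signalled explicitly (the JPEG XL/Brotli style, as opposed to the adaptive kind described above), using Python big integers instead of renormalization to keep it short:

```python
def rans_encode(symbols, freq, cum, M):
    """Static rANS: freq[s] sums to M; cum[s] is the running prefix sum."""
    x = 1
    for s in reversed(symbols):                 # encode backwards so decode pops forwards
        x = (x // freq[s]) * M + (x % freq[s]) + cum[s]
    return x

def rans_decode(x, n, freq, cum, M):
    out = []
    for _ in range(n):
        slot = x % M                            # which frequency slot the state lands in
        s = next(t for t in freq if cum[t] <= slot < cum[t] + freq[t])
        out.append(s)
        x = freq[s] * (x // M) + slot - cum[s]  # exactly undoes the encode step
    return out
```

Note that `freq`/`cum` are fixed for the whole stream; an adaptive decoder would instead update them per symbol from already-decoded history, which is the part the patent discussion centers on.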


This isn't the issue regarding JPEG XL support. That issue is still closed as "won't fix". This is a new issue asking for that issue to be re-opened.


Although I can sympathize, I don't really understand the point of opening a new issue when all the same information has already been left in comments on the old closed issue.

If the new issue gets closed, then it just reaffirms that the Chromium team doesn't care about this feature request. If the new issue somehow convinces the team to do something about it, then it shows that the team is utterly dysfunctional because their decision-making is more influenced by whether you say "pretty pretty please" in the right way than by the content of the discussion.


Further discussion on that closed issue is no longer possible right? So any new information that might cause a re-evaluation needs to be presented in a new issue.

The new information here seems to be 'most people thought the previous decision was bad', rather than 'please I really want this'. Changing an old decision because most people think it was bad is not a sign of utter dysfunction.


Further discussion is possible there. It just does not get triaged like a new bug.


"most people" of which group thought that?


I still need to sit down and convert my personal Linux computer over to using JPEG XL for picture archival and figure out what tools need to change or be updated.

Using it on the web is one thing, but getting better compression for my family photos would also probably be a win, and I suspect it would be possible to build a pipeline for viewing/editing that would be fairly transparent.


I've been pretty happy with it, myself. All the base OS libraries support JPEG XL so it's nearly pain-free, excepting lack of web browser support.

Combined with GNU parallel, I did this:

    find -type f -iname \*.jpg -print0 | parallel -0 cjxl --lossless_jpeg=1 {} {.}.jxl 
    find -type f -iname \*.png -print0 | parallel -0 cjxl -d 0 {} {.}.jxl
    find -type f -iname \*.webp -print0 | parallel -0 dwebp -o {.}.png {} \&\& cjxl -d 0 {.}.png {.}.jxl
JPEGs get losslessly recompressed with JPEG XL. PNGs and (lossless) WebPs get converted to lossless JPEG XL.


Nice, I should give this a try and see what it does to my photo library size.

There are a few holdout programs that I think are missing support (Blender3D springs to mind), but I also think in the rare instances where that's a problem I could probably set up a quick shortcut or some hooks to on-the-fly run cjxl and convert back to jpeg/png temporarily for whatever operation I need to do.


For those who are unaware, this looks like a good article for the backstory.

https://www.techspot.com/news/98355-google-deprecating-jpeg-...


This isn't a good article because of how biased it is against Google. It ignores that there is an added cost to Google and their partners in supporting it, and ignores the recommendation to use a WASM decoder.


Oh please, Google’s parent company is worth 1.64 trillion USD. They can afford a few programmers to maintain a codec.

Lack of popularity hasn't stopped them from supporting WebP and AVIF.


>They can afford a few programmers to maintain a codec.

This is a bad argument, because with that money they can afford to do almost anything. They could also add any random file format to the browser, but that increases the cost of supporting the web for more than just Google. Meanwhile, adding a polyfill to support the format is performant without adding complexity to the web.


It’s also a bad argument because it confuses market cap with “money available to invest in developing products” (not that Google isn’t swimming in dumb money for other reasons).


I'm just reading through the wikipedia page on this for the first time.

Does JPEG XL allow encoders to switch between the DCT and modular modes on a per-macroblock basis, or is it just on a per-channel basis?

If it's the former then I can see this offering a lot of utility over other image formats because you'd be able to disable the DCT on high-contrast macroblocks and finally be done with all those god-awful "checkerboard" artifacts around the edges of objects.

But if it's merely on a per-channel basis then I'm not sure I see what the point of this is since I can already use a different format when I need lossless encoding; If anything JXL would become an annoyance because I can't tell if a JXL image is lossless or not based on the file's extension.
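As a 1-D illustration of those checkerboard/ringing artifacts (toy code, not JXL's actual transforms): take an orthonormal DCT-II of a hard step edge and throw away the high-frequency coefficients, and the reconstruction overshoots past the original 0-255 range on both sides of the edge. The 2-D analogue is the halo/checkerboard around sharp objects.

```python
import math

def dct(x):
    """Orthonormal 1-D DCT-II."""
    N = len(x)
    def scale(k): return math.sqrt((1 if k == 0 else 2) / N)
    return [scale(k) * sum(x[n] * math.cos(math.pi * (n + 0.5) * k / N)
                           for n in range(N)) for k in range(N)]

def idct(X):
    """Inverse of the orthonormal DCT-II above."""
    N = len(X)
    def scale(k): return math.sqrt((1 if k == 0 else 2) / N)
    return [sum(scale(k) * X[k] * math.cos(math.pi * (n + 0.5) * k / N)
                for k in range(N)) for n in range(N)]

edge = [0] * 4 + [255] * 4      # a sharp black-to-white step
coeffs = dct(edge)
coeffs[2:] = [0] * 6            # crude "quantization": drop the high frequencies
ringy = idct(coeffs)            # now undershoots below 0 and overshoots above 255
```

A purely local transform (2x2 blocks, or the identity) can't smear energy across an edge like this, which is the appeal of JXL's very local transform options.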


This is a good look at the benefits of JPEG XL: https://cloudinary.com/blog/jpeg-xl-how-it-started-how-its-g...

It was discussed here a few weeks ago: https://news.ycombinator.com/item?id=36801448


> Does JPEG XL allow encoders to switch between the DCT and modular modes on a per-macroblock basis, or is it just on a per-channel basis?

Tricky, but it can indeed be done on a per-macroblock basis. The encoding itself is fixed per frame, but JPEG XL mandates zero-duration frames to be merged with the prior frame, so multiple frames with different encodings can be used for that. In fact I believe patches already work like this.


JPEG XL has 10 8x8 transforms and 9 larger transforms (IIRC).

Two of the 8x8 transforms are extremely local. One is called IDENTITY and the other DCT2x2. It is very difficult to produce ringing artefacts when using these transforms.

When going to higher quality settings in libjxl, it tends to favor the DCT2x2 quite a bit.

This is in VarDCT -- not modular coding.


Can someone summarize the issue with JPEG XL? Is this something that really matters? I've seen this mentioned a couple of times in the last few days but I don't see what the big deal is, is it really that necessary?


JPEG is 30 years old, so we need something more modern (better compression, fewer visual artifacts, web-optimized, etc.). There was already a plan to replace it with JPEG 2000, but it failed, obviously, as we still use JPEGs.

Now several formats are competing, most notably AVIF (which is basically just a single compressed AV1 video frame) and JPEG XL. JPEG XL might be slightly better in some cases (as AVIF is based on a video codec) and, most importantly, it's backwards compatible with JPEG. This means we can re-encode 30 years of JPEGs to JPEG XL without image degradation. Wide support would help immensely to make the format standard, as otherwise everybody will just continue to use JPEGs. Google is somewhat against this, as they already support AV1 and thus don't want to maintain a separate codec for JPEG XL.


Google Research is developing and maintaining JPEG XL, including a Chromium patch, without having expressed any worries about future maintenance cost.


Thank you! A lot of the context was lost for me!


If Google supported JPEG XL in Google Photos, they wouldn't be able to sell as many storage upgrades.


Also exciting that AVIF is finally coming to Edge, the last holdout.


this is a rando spamming the chromium issue tracker. what is newsworthy about this?


Because lots of people are hoping Google will change its mind, and this is a small concrete step towards that.


Is there a good writeup about why people want this over other, existing formats?


> Is there a good writeup about why people want this over other, existing formats?

All the existing JPEG files can be converted to JPEG XL, saving about 20% in size while keeping exactly the same data. There are, what, tens or hundreds of billions of JPEG files out there for which the "original" (RAW or anything) is long gone (or never existed), and people don't want a "photocopy of a photocopy".

You can even decompress the JPEG XL back to the original JPEG file, bit for bit.

For that alone there shouldn't be any question: it's a wonderful feature.

In addition to that, Apple/Safari are going to support JPEG XL, and there is a huge number of applications supporting the format.

People don't want this "over" existing formats. They want this in addition to other formats.

And I think they'll get it.


I'm not aware of one, but the gist is that it's in some circumstances better than AVIF in size and/or quality. Both of the new formats are way better than good old JPEG and PNG, but AVIF is the one Google is pushing.


> but AVIF is the one Google ...

but AVIF is the one Google, Apple, Edge, Firefox are pushing...

Let's not be misleading here :)


Sure, everyone else is pushing AVIF too. And I don't mind, it's a great format.


The fact that JXL is a new, convenient crusade for people who hate on Chrome. You'll notice that no one is screaming at Mozilla for the same choice.


I am. Frankly, it's worse than Chrome in a way, in that JPEG XL is only supported in the Nightlies behind a flag.

However Mozilla has not completely dropped the feature like Google.


Mozilla's JPEG XL feature request is closed in a way that makes it impossible to add comments or express interest.

Chromium's JPEG XL feature request is marked wontfix, but it is still possible to comment or express interest (star the bug).


mozilla didn't remove their jxl support? It's behind a feature flag, like it always has been, but that doesn't mean they will get rid of it or never unflag it.


Exactly, it's not enabled either and there are no plans to enable it. Same goes for Edge.


it is implemented, it's just not released. if there are no plans, why did they add the feature flag to begin with?


Same reason Chrome did.

The Chrome team themselves quoted the lack of interest from those browsers as the reason for removing the flagged code. Seems like an obvious way to pressure Google into supporting JXL would be to get Mozilla to launch their support, together with Edge and Safari?


but you do understand the difference between getting mad about a hypothetical situation vs something that actually happened, right?


As a loyal firefox user since the 0.5 release which took 30s to start up, Mozilla is a non-entity these days. It exists solely so that Google can avoid legal disputes. In the mean time, a bunch of grifters have taken over Mozilla for their own financial purposes and push some activist causes rather than a solid browser.


this has nothing to do with what i am talking about. the issue is legitimate, but there's no news or interesting insight here.


Please can we have browsers not advertise support via Accept headers or <picture> tag support this time until they actually support all the features, so those mechanisms don't become useless for progressive enhancement of anything that isn't a static lossy image.


How much work is actually involved in adding support for this format? Like is it just plugging an existing implementation into the abstractions they already have for other image formats? or is there more to it?


Integration of a new decoder is not all that complicated code-wise. What is complicated is the effect of the change and the ongoing support.

1. Binary size cost: in my experience working on Firefox, this is in the 100s-of-KiB range when adding a new decoder.

2. Ongoing costs: increased compile times, new integration tests, functional tests, and so forth. Keeping those tests passing and non-flaky.

3. Once something is accepted into the web ecosystem, the intention is to support it for tens of years, if not forever. Web feature deprecation is quite slow (e.g. <keygen> and <blink>). The web has never deprecated a primary image format.

4. Security: a 'new' binary format is a place for security vulnerabilities, crashes, and hangs. The web is an actively hostile place for web browsers.


For a browser, it means permanent, forever support for the format, and continued maintenance and security patching for the library. Any CVE, any issue that might make the browser insecure, will be blamed on the browser, and the developers will have to make sure any codec they use is safe forever.

That's the cost for the maintainers. Codecs are historically one of the most problematic sources of security issues (they're complex code that handles malicious downloaded files) and supporting a new one is a rather big maintenance burden for everyone involved.

And if Chrome gets backdoored by a JXL library security hole, everyone will blame Google for it.

If, by any chance, supporting JXL becomes too much of a burden, everyone will again blame Google for being evil if they ever remove it from Chrome.


These sound like good reasons to quickly disable AVIF and move fully to JPEG XL. AVIF is about 3-5x more code than JPEG XL.


At this point this just seems like one of those internet religious wars instead of anything actually technically usable.

I bet after (re)introduction, most of the people yelling for it won't actually convert their JPEGs to XL. Just like almost no one whining about Reader actually uses or pays for any of the alternatives.


> I bet after (re)introduction, most of people yelling for it won't actually convert their JPEGs to XL.

The idea is converting workflows to JPEG XL (and particularly to enable uses for which JPEG isn’t suitable and even AVIF is supposedly less optimal), not converting existing JPEGs, mainly.


JXL to de moon!


Personally, I've got no great love for these new image formats.

It's always a pain in the ass when you discover your phone has actually been saving your photos as heic or webp or avif or whatever and hardly anything will open them.

I could understand wanting to improve JPEG in the age of dial-up and 1.44MB floppy disks - 60% smaller images could have been a great benefit in those days. But today, even if I'm taking 30 photos every day at 4k resolution, it'd take 20 years to fill up a $50 1TB disk.

The other benefits of the format might be great for some specialist applications, but options like billion-pixel-wide images, 32 bits per channel and 4099 channels ready for medical imaging only get a shrug from me. I doubt my browser is going to start displaying 4099 channel images.

I just wish we could get rid of heic, webp and avif at the same time.


> I could understand wanting to improve JPEG in the age of dial-up and 1.44MB floppy disks - 60% smaller images could have been a great benefit in those days. But today, even if I'm taking 30 photos every day at 4k resolution, it'd take 20 years to fill up a $50 1TB disk.

60% smaller images are great for hosting providers. We have ample storage and bandwidth compared to the 90s, but it still ain't that cheap.


Lossless? Large sizes? Multi-band (> 3) data? Transparency? Animation? I work in software support of scientific imaging, and JPEG XL looks to be the only format to date that supports all those features in addition to excellent compression and royalty-free licensing. (We currently use JPEG 2000, which has no good/fast open-source implementation; we rely on an expensive proprietary license with lots of restrictions on redistribution. In fact, our industry is largely moving back to TIFF now, given the storage factors you mentioned, but using 10x the disk space is non-trivial.)


You very clearly don't care about the 90% of the rest of the world that doesn't have fast internet.

You also very clearly don't care about the entire internet experience, at all, whatsoever.

Edit: 60% space savings only available in the age of the floppy.... what? 60% cost savings when serving multiple terabytes of image data is useless?

You seem to view everything through the extremely tiny lens of a photographer or something... pun intended


2g mobile still running in my country


This is extremely aggressive and personal, I suggest editing out the edit.


WebP was, I believe, the first image format that supported lossy compression with transparency. You could argue that you can just use PNG if you want transparency, but allowing lossy compression when you need alpha is more like 10x smaller, not a mere 60%. Also, it came out back in the era when 4MB for a web page was a lot.

AVIF was the first format accepted by the web that supports HDR (not already tone-mapped HDR, true HDR.) Which maybe you don't personally care about, but is something that fundamentally cannot be done with existing JPEG and PNG implementations.

AVIF might not have happened, and the above paragraph might have read "HEIC", if HEVC had had similar licensing terms as H.264. But there's no predicting that stuff before it happens.


The thing is, you may not care, but web hosts definitely do, especially because cloud providers get you in the door with cheap storage/compute and then gouge you for outgoing bandwidth. Cutting image sizes by 60% when you're serving millions of images adds up to huge $$$ savings.


Even if you don't care at all about file sizes (which is definitely A Take), there is the whole other side of improving image quality.

Everything about computer imagery is pretty sadly limited when compared to the capabilities of human eyes and brains. And for quite some time now the ends of the pipeline (camera sensors and computer displays) have been improving, but are bottlenecked by the middle of the pipeline (image formats).


Call me back when 1TB smartphones are $150. Until then, these formats are a life saver. Maybe instead of dragging everyone down you could champion for programs to better their support?

