Yes, it losslessly recompresses existing JPEGs into the JPEG XL format while also making the files ~20% smaller, the key point being lossless. That makes it the 'perfect' upgrade path from JPEG, the current lossy standard: you get better compression and no loss in quality when shifting to JPEG XL.
This being a 'killer feature' of course relies on JPEG XL being very competitive with AVIF in terms of lossy/lossless compression overall.
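For the curious, this is exposed directly in the reference tooling; a minimal sketch, assuming the libjxl cjxl binary is on PATH and that it defaults to lossless recompression for JPEG input (file names are illustrative):

    # Hedged sketch: losslessly transcode an existing JPEG to JPEG XL and
    # report the size saving. Assumes the libjxl `cjxl` tool is installed
    # and defaults to lossless JPEG recompression for JPEG input.
    import os
    import subprocess

    subprocess.run(["cjxl", "photo.jpg", "photo.jxl"], check=True)

    before = os.path.getsize("photo.jpg")
    after = os.path.getsize("photo.jxl")
    print(f"{before} -> {after} bytes ({100 * (1 - after / before):.1f}% smaller)")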
I'm assuming this is bidirectional? You can go back from JPEG XL to JPEG losslessly as well? If that's the case, I'm having trouble imagining a scenario where you're not correct; it'd be an utterly painless upgrade path.
>A lightweight lossless conversion process back to JPEG ensures compatibility with existing JPEG-only clients such as older generation phones and browsers. Thus it is easy to migrate to JPEG XL, because servers can store a single JPEG XL file to serve both JPEG and JPEG XL clients.
>You can go back from JPEG XL to JPEG losslessly as well
I don't think so, but I don't quite see the point unless you're thinking of using it as a way to archive JPEGs; in that case there are programs built specifically for that, like PackJPG, Lepton, etc.
Decompressing and recompressing a zip gives a lossless copy of the actual data, but there's no way to reconstruct the same exact zip you started with.
The same thing can be done with image data. For something like JPEG you can keep the DCT coefficients but store them in a more compact form.
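The zip point is easy to demonstrate with Python's standard library; a minimal sketch with a made-up payload:

    # Round-trip a zip: the payload survives losslessly, but re-zipping with
    # different encoder settings produces a different archive bitstream.
    import io
    import zipfile

    payload = b"the quick brown fox jumps over the lazy dog " * 1000

    def make_zip(data, level):
        buf = io.BytesIO()
        with zipfile.ZipFile(buf, "w") as z:
            # Fixed ZipInfo timestamp keeps the comparison deterministic.
            info = zipfile.ZipInfo("data.bin")
            z.writestr(info, data, compress_type=zipfile.ZIP_DEFLATED,
                       compresslevel=level)
        return buf.getvalue()

    original = make_zip(payload, 9)

    # Extract, then re-zip at a different compression level.
    with zipfile.ZipFile(io.BytesIO(original)) as z:
        extracted = z.read("data.bin")
    rezipped = make_zip(extracted, 1)

    assert extracted == payload     # the data survived losslessly
    assert rezipped != original     # the archive bytes did not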
For what it's worth, JPEG XL claims that it's 'reversible', but I'm not sure if that means you get your original JPEG back, byte for byte, or an equivalent JPEG back.
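If the libjxl command-line tools are installed, the byte-for-byte question can be checked directly; a hedged sketch, assuming cjxl/djxl are on PATH and that djxl reconstructs the JPEG bitstream when given a .jpg output name:

    # Hedged sketch: round-trip a JPEG through JPEG XL and compare hashes.
    # Matching hashes mean byte-for-byte reconstruction, not just same pixels.
    import hashlib
    import subprocess

    def sha256(path):
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    subprocess.run(["cjxl", "original.jpg", "roundtrip.jxl"], check=True)
    subprocess.run(["djxl", "roundtrip.jxl", "restored.jpg"], check=True)

    print(sha256("original.jpg") == sha256("restored.jpg"))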
> Decompressing and recompressing a zip gives a lossless copy of the actual data, but there's no way to reconstruct the same exact zip you started with.
I don't think getting the same JPEG back is the goal here, but rather getting a JPEG that decodes to the same image data.
Btw, if memory serves right, Google Photos deliberately gives you back the same pixels but not the same JPEG codestream bits under some circumstances. That's to make it harder to exploit vulnerabilities with carefully crafted files.
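That distinction ('same bits' vs 'same pixels') is concrete; a minimal sketch of the two equality checks, assuming the Pillow and NumPy packages, with illustrative file names:

    # Two different notions of "the same JPEG".
    import hashlib
    import numpy as np
    from PIL import Image

    def same_bitstream(path_a, path_b):
        # Byte-for-byte identical files (same codestream bits).
        with open(path_a, "rb") as a, open(path_b, "rb") as b:
            return hashlib.sha256(a.read()).digest() == \
                   hashlib.sha256(b.read()).digest()

    def same_pixels(path_a, path_b):
        # Identical decoded image data, even if the files differ.
        return np.array_equal(np.asarray(Image.open(path_a)),
                              np.asarray(Image.open(path_b)))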
Is it possible to get a JPEG back without any loss when recompressing a JPEG to another JPEG (after discarding the original, i.e. JPEG -> bitmap -> JPEG)?
If you can go both directions, you can store the more efficient JPEG XL format and still have perfectly transparent support for clients that don't support JPEG XL.
If you can't reproduce the exact original JPEG, you can still hit some issues during the global upgrade process -- e.g. your webserver's database of image hashes for deduplication has to be rebuilt.
A relatively minor problem, to be sure, but AFAICT if JPEG XL does support this (which apparently it does), the upgrade process is as pain-free as I could imagine. I can't really think of anything more you could ask for from a new format: better compression plus backwards and forwards compatibility.
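The transparent-support part could be plain HTTP content negotiation; a minimal sketch, assuming one stored .jxl per image and the libjxl djxl tool for legacy clients (handler and file names are illustrative):

    # Serve one stored JPEG XL file to everyone, reconstructing the original
    # JPEG on the fly for clients that don't advertise image/jxl support.
    # Assumes the libjxl `djxl` tool is on PATH.
    import subprocess
    from http.server import BaseHTTPRequestHandler, HTTPServer

    def read_bytes(path):
        with open(path, "rb") as f:
            return f.read()

    class ImageHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            if "image/jxl" in self.headers.get("Accept", ""):
                body, ctype = read_bytes("photo.jxl"), "image/jxl"
            else:
                # Lossless reconstruction of the original JPEG bitstream.
                subprocess.run(["djxl", "photo.jxl", "photo.jpg"], check=True)
                body, ctype = read_bytes("photo.jpg"), "image/jpeg"
            self.send_response(200)
            self.send_header("Content-Type", ctype)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    HTTPServer(("", 8000), ImageHandler).serve_forever()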
Sounds like it will just take an existing JPEG and reduce its size without re-encoding the pixels - so even though the original JPEG is lossy, no additional loss is introduced, whereas moving to a format not based on JPEG would require a re-encode pass that loses additional information.