Pages automatically default to WebP, so when you send the link through a messenger you might notice that it doesn't work for everyone: either iOS users can't see it or the messenger interprets it strangely. So you try to save the image and send that instead, but you hit the same problem, because what you saved is the WebP image…
WebP in particular, as the name suggests, was conceived as an "image format for the web", and while I think it's good to have the web in mind when designing an image codec, I think it's a bad idea to limit the scope like that. Design decisions like limiting the maximum dimensions and bit depth at the codec level "because that's all the web needs", plus limited attention to adoption outside browsers, lead to the phenomenon you describe where "it doesn't work for everyone", causing the small gain of improved compression to be dwarfed by the huge inconvenience of broken workflows.
Any new codec of course has this problem, even if it does target wide adoption (like JPEG XL): adoption is never instantaneous. The release cycle of browsers is more suitable for innovation than that of most other software, so it does make sense to start there, even though it will still cause things to break in software that doesn't support the codec yet.
To mitigate that, I think it would help a lot if browsers had a "Save As..." dialog box on images that gives users the choice to save the actual image in whatever format it is in, or to convert it to PNG or JPEG.
Also there is a huge financial incentive for large web hosting companies (e.g. Google, Facebook) to adopt compression formats that save bandwidth, using automated tools to apply those wherever possible.
People making e.g. image archives or building image capture hardware are going to be slower. They don’t want to use new untested technologies that might not succeed, and they don’t want to switch more frequently than necessary. When they do transition it will be by applying new technologies to new images but not immediately transforming older images to use the new technology.
Applying lossy recompression to already-lossy-compressed images is something you should avoid at all costs, since it inevitably causes generation loss (and it is also likely to not be very effective, since you're basically spending a lot of bits in the new codec on replicating the compression artifacts of the old codec).
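As a rough illustration (a toy sketch using Python's Pillow; the file names are made up), just re-encoding the same photo through a lossy codec a few times already shows the effect:

    from PIL import Image
    import io

    # Start from an already lossy JPEG (hypothetical file name).
    img = Image.open("photo.jpg").convert("RGB")

    # Re-encode through the lossy codec several times.
    for generation in range(5):
        buf = io.BytesIO()
        img.save(buf, format="JPEG", quality=75)   # lossy step
        buf.seek(0)
        img = Image.open(buf).convert("RGB")       # decode, artifacts and all

    img.save("generation_5.jpg", quality=75)
    # Each pass spends bits re-approximating the previous pass's artifacts,
    # so quality keeps dropping while the file size barely changes.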
When image quality and observed latency are at a level where users don't care even subconsciously (say, images look like camera originals and load in 100 ms), then bandwidth cost optimization may become a good 2nd objective.
Browsers could allow a selection of all sources provided by the picture element (this would also include differently scaled variants based on media queries) in a download dialog.
JPEG XL's on-the-fly conversion to JPEG offers a nice solution for this use case.
The images are converted and stored when they are uploaded or changed. I can see this being an issue at that kind of volume, though. But it's a performance gain for the clients, while the content side doesn't need to worry about it.
Dumping these raw pixels into a trivial intermediate format, like uncompressed PNG or BMP, should not increase the attack surface in any noticeable way.
In a way, they already do for images: right-click and "Copy". In some desktop environments, you can even paste that directly into a folder to save it as a file. Dolphin allows choosing which format (of those that the browser can convert to) to save it in.
Presumably the HTTP request that gets sent when you save an image sends different Accept headers.
Doesn't seem to work all the time, but presumably that's down to a combination of different Cloudflare settings, origin server configuration, and what format the original image was in.
Well, screenshot is a thing for images. Video is another story.
And as long as websites tend to modify, delete, move, or otherwise play games with urls and content, I will see value in saving a permanent copy. That I should be able to do that is frankly how the internet was intended to function; if that's not desirable for the content, then perhaps it should not be published on the internet at all.
Except an artist can deliberately decide to only make an image publicly available for a limited time, and therefore take the image down from the website. Just like art moves from museum to museum, an artist can allow an image to be used within a pre-defined window. Just because you have the technical know-how to extract an image that is not readily downloadable via the UI does not mean you should.
Maybe one of the features of JXL would be a timebomb type of setting where after a certain date the data is no longer useable.
I sympathize with both sides of this argument. I get that info wants to be free, blah blah, but I also understand that artists are in a difficult situation with the internet. I mean, an artist's work posted on the internet is not the cure for cancer, or basic information on algebra where the info should be evergreen. The groupthink is more "I want what I want" than consideration for what the artist's intentions are. If you enjoy an image so much that you're willing to go to the effort to get the image, why not acquire the image through a legitimate method?
> If you enjoy an image so much that you're willing to go to the effort to get the image, why not acquire the image through a legitimate method?
Do you make the same argument when people use a VHS? If you're willing to go through the effort to press the record button, you should go buy a copy for $20?
The image file you downloaded from someone's website without their permission is miles better in quality than the stupid VHS. It's more like the DVD/Blu-ray you ripped from your buddy who actually paid for it. Just because you can doesn't mean you should.
DVR sounds like a very good analogy to me. The website is showing you something, and you make a personal capture that you can replay at any time. It was distributed to you specifically, and you're time-shifting it. You're not taking a personal copy held by one person and making it two personal copies held by two people, which is what happens when you rip someone else's DVD. And the same way, you shouldn't take that image you saved and start distributing it around.
A better analogy would be: if you make a painting of a painting without paying the original artist, is it stealing?
It's not: it can be construed as counterfeiting, though, and it might cause the artist to stop painting because he does not make enough money, but calling it stealing is simply wrong.
The word "steal" is chosen because it carries a strong negative connotation. It is an example of loaded language (https://en.wikipedia.org/wiki/Loaded_language).
Regardless of whether it's ethical to copy whatever files in whatever situation being discussed, the term "steal" is intended to sidestep that question and make it feel unethical by drawing an equivalence between it and something most people agree is unethical.
It's a slightly underhanded rhetorical technique, so it's reasonable to put the word "steal" in quotation marks to call attention to it.
At least I cut the image so it's not obvious
Could WASM in an extension be good enough for some use cases like this?
Right click — "Copy image" — paste into any program that supports PNG.
On Android though this is not the case.
ImageMagick is just a few keystrokes away, and you can convert to almost any format you like.
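For example, converting a saved WebP to PNG (a sketch; assumes ImageMagick 7's magick binary is on PATH, older installs call it convert):

    import subprocess

    # Convert the WebP the browser gave you into a PNG that everything can open.
    subprocess.run(["magick", "image.webp", "image.png"], check=True)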
Very confused as to why this is done - anyone?
Most comparison links posted here are to older (almost a year old) versions that don't reflect the current state of encoding. Both JPEG XL and AVIF have improved tremendously.
1) Progressive decoding. Like the original .jpg, .jxl can give you a low-quality image when a fraction of the file is loaded, then a decent-quality image, then the final image. This can give JPEG XL the edge in perceived load speed even when the full .avif is smaller than the full .jxl. (Old demo from a JXL contributor at https://www.youtube.com/watch?v=UphN1_7nP8U )
2) Fast conversion: JPEG XL encoding/decoding is fast without dedicated hardware. Facebook found encode/decode speed and progressive decoding to be points in favor of JPEG XL for their use: https://bugzilla.mozilla.org/show_bug.cgi?id=1539075#c18
3) .jpg repacking: JPEG XL can pack a JPEG1 about 20% smaller without any additional loss; the original .jpg file can be recovered bit-for-bit.
4) Lossless mode. JXL's lossless mode is the successor to FLIF/FUIF, is really good, and also has progressive decoding. AVIF has a lossless mode too, but JPEG XL seems ahead here.
(I know the parent comment is from a JXL contributor, I'm saying this for other folks.)
I think those will give JPEG XL a niche on the Web. Meanwhile I suspect e.g. Android phone cameras will save .avif someday, like iPhones save .heic now. Phones want the encode hardware anyway for video, and you can crunch a zillion megapixels down to a smaller file with AVIF before attention-grabbing artifacts crop up--at low bitrates AVIF seems good at preserving sharp lines and mostly blurring low-contrast details (compare Tiny images).
Finally, worth noting the codecs are different due to a bunch of rational choices by their devs. AVIF is the format for AV1 video keyframes. Progressive decoding doesn't help there, and doesn't jibe well with spatial prediction, which helps AV1 and other video codecs preserve sharp lines. And video codecs need hardware support to thrive anyway, so optimizing for fast software encoding probably wasn't an early priority. Otherwise the new formats have a lot of overlap in fundamentals--variable size and shape DCTs, better entropy encoding, chroma-from-luma, anti-ringing postfilters, etc.
Glad to see support for both getting more widespread.
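Regarding point 3 above, the lossless .jpg repacking is exposed directly in the reference tools, roughly like this (a sketch, assuming the cjxl/djxl command-line tools from libjxl, whose default behaviour for JPEG input is the lossless transcode):

    import subprocess

    # Repack an existing JPEG into JPEG XL without any additional loss.
    subprocess.run(["cjxl", "photo.jpg", "photo.jxl"], check=True)

    # Later, reconstruct the original .jpg bit-for-bit from the .jxl file.
    subprocess.run(["djxl", "photo.jxl", "photo_restored.jpg"], check=True)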
Note this means that animated images on the web (like GIF) are significantly smaller with AVIF than JPEG-XL which has no inter prediction.
JPEG XL does have some weak forms of inter prediction though (but they were designed mostly for still image purposes). One of them is patches: you can take any rectangle from a previously 'saved' frame (there are four 'slots' for that) and blit it at some arbitrary position in the current frame, using some blend mode of choice (just replace, add, alpha blend over, multiply, alpha blend under, etc). This is obviously not as powerful as full motion vectors etc, but it does bring some possibilities for something like a simple moving sprite animation. This coding tool is currently only used in the encoder for still images, namely to extract repeating text-like elements in an image (individual letters, icons etc) and store them in a separate invisible frame, encoded with non-DCT methods (which are more effective for that kind of thing) and then patch-add them to the VarDCT image. The current jxl encoder is not even trying to be good at animation because this is not quite its purpose (it can do it, but 'reluctantly').
Anyway, I think that animation is in any case best done with video codecs (this is what video codecs are made for), and I wish browsers would just start accepting in an <img> tag all the video codecs they accept in a <video> tag (just played looping, muted, autoplay), so we can once and for all get rid of GIF.
Any format that doesn't have this is doomed to fail as a GIF replacement.
Also a plus for saving phone snaps, since the camera often saves a short video these days anyway.
At the 'Large' and 'Big' settings of this image -- which are still at much less than 1 bpp, i.e., below typical internet image quality -- you can still observe significant differences in the clouds, even if the balloons are relatively well rendered.
JPEG XL is the first codec to have a practical encoder that can be configured by saying "I want the worst visual difference to be X units of just-noticeable-difference". All other encoders are basically configured by saying "I want to use this scaling factor for the quantization tables, and let's hope that the result will look OK".
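Concretely, the reference encoder exposes this as a perceptual distance rather than a quality scale, something like this (a sketch, assuming the cjxl tool, where a distance of 1.0 is roughly "visually lossless"):

    import subprocess

    # Encode targeting a maximum perceptual difference of about 1 JND
    # (cjxl's --distance / -d option; lower means higher fidelity).
    subprocess.run(["cjxl", "-d", "1.0", "input.png", "output.jxl"], check=True)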
crf in x264/x265 is smarter than that, but it's still a closed-form solution. That's probably easier to work with than optimizing for constant SSIM or whatever, it always takes one pass and those objective metrics are not actually very good.
It is a bit like judging video quality by bitrate without looking at the video resolution.
Part 3 will describe conformance testing (how to verify that an alternative decoder implementation is in fact correct), and part 4 will just be a snapshot of the reference software that gets archived by ISO; for all practical purposes you should just get the most recent git version. Parts 3 and 4 are not at all needed to start using JPEG XL.
Bitrates vary from 0.26 bpp (Nestor/AVIF) to 4+ bpp (205/AVIF) at the finest setting. Nestor at the lowest setting is just 0.05 bpp, somewhat unusual for an internet image. A full HD image at 0.05 bpp transfers over average mobile speed in 5 ms and is 12 kB in size. I'd rather wait a full 100 ms and get a proper 1 bpp image.
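For reference, the arithmetic behind those numbers, assuming a 1920x1080 image and roughly 20 Mbit/s of mobile throughput (my assumption for "average mobile speed"):

    pixels = 1920 * 1080                     # full HD, ~2.07 megapixels

    for bpp in (0.05, 1.0):
        bits = pixels * bpp
        size_kb = bits / 8 / 1000            # file size in kB
        time_ms = bits / 20e6 * 1000         # transfer time at 20 Mbit/s
        print(f"{bpp} bpp -> {size_kb:.0f} kB, {time_ms:.0f} ms")

    # 0.05 bpp -> ~13 kB in ~5 ms; 1.0 bpp -> ~259 kB in ~104 ms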
I just compared the original tiger image to the lossless version for JPEG XL, but there are some small changes that it makes.
JPEG XL further includes features such as:
- alpha channels
- lossless and progressive coding
I wonder if progressive loading can halt loading (network I/O) at certain resolutions. This would remove the need for srcset-style image sets.
Interesting talk at https://www.youtube.com/watch?v=t63DBrQCUWc and https://www.youtube.com/watch?v=RYJf7kelYQQ
Esp. the "visual target" instead of "technical target" when deciding the encoding quality.
Also, the lossless and reversible transcoding from JPEG, GIF and PNG.
Yes, not resolution but predetermined quality levels.
Here is a demo, which uses the different resolutions to create a pseudo-animation:
It would be theoretically possible to write a server with a "give me the next quality level now" API endpoint, to enable the client to signal that it's ready for the next resolution.
This is far too janky to be used in production, but at least it's fun.
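You can already approximate it on the client side with plain HTTP range requests, since the progressive passes are just prefixes of the file. A rough sketch (the URL, the byte offsets and the render_preview callback are all hypothetical; real offsets would have to come from the server or a manifest):

    import urllib.request

    URL = "https://example.com/photo.jxl"
    PASS_ENDS = [8_000, 40_000, 200_000]   # hypothetical end offset of each pass

    data = b""
    for end in PASS_ENDS:
        req = urllib.request.Request(URL, headers={"Range": f"bytes={len(data)}-{end - 1}"})
        with urllib.request.urlopen(req) as resp:
            data += resp.read()
        render_preview(data)   # hypothetical callback: hand the prefix to a progressive decoder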
Otoh the low-res thumbnail might be just enough to show as a (big) placeholder to bridge the (short) loading time until the image reaches a resolution where the user won't notice a difference.
(brain off ... can't write coherently ...)
One of the surprises of progressive loading, and why it's so good that JPEGXL has it, is how quickly you get to "good enough" and showing that before you get all the way to perfect.
Firefox status: only in Nightly and behind a flag
But regardless. All main browsers but Safari will get it. The jury’s still out on Safari/WebKit.
To summarize, JPEG XL has the potential to use a single file to deliver many formats. This would deprecate the current practice of web developers storing many versions of the same image to optimize for different device types/sizes.
The advantages are shockingly large:
- It simply is much easier (time saved)
- Huge storage/cost savings for media-heavy sites/apps
- Significant worldwide environmental benefit
- Less need for huge platforms (Facebook, Twitter) to aggressively compress photos
- No need to constantly add new sizes, future-proof
A dream feature, if you ask me. Do note that this feature requires both browser and web server support, so don't hold your breath. But one can dream.
As has been pointed out in the comments, it needs to be adopted beyond the browser. And easily.
Anyone hear of FlashPix?
It was a staged-resolution format that was introduced by a consortium in the 1990s.
The biggest problem (of many) was that, in order to read or write the format, you needed to use the Microsoft Structured Storage library, which was a huge pig (at that time. I assume it's better, these days).
The format was basically strangled in the crib. It was actually a fairly interesting idea, at the time, but files could be huge.
HEIC and AVIF are based on the QuickTime file format, which doesn't seem very svelte either. I can't find any reference on the JPEG XL container format, so it's probably its own thing.
The JPEG XL container (which is optional and only needed if you want to attach metadata to an image) is also based (more directly and with less header overhead) on ISOBMFF.
This can be a problem if you're the kind of completionist who needs to implement everything they see and make one C++ class per QuickTime atom - a problem I saw with a lot of mp4 codebases.
But there's no need to do this because almost all the things in the spec don't matter. Just don't read any of them and handle the rest procedurally and it'll be fine. It looks like JPEG XL also has too many features (like this animation and patching thing) so maybe just ignore that too.
The main issue with implementing all of MPEG-4 is the spec is overdetermined (the same fields exist at different layers and can disagree), but also it's full of nonsense nobody cares about, like the alternate codec for animating faces only.
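To be fair, just walking the top-level boxes of an ISOBMFF-style container is only a handful of lines; it's the semantics inside them that balloon. A minimal sketch (skips every payload, handles the 64-bit and to-end-of-file size variants):

    import struct

    def walk_boxes(path):
        # Each box starts with a 32-bit big-endian size and a 4-byte type.
        with open(path, "rb") as f:
            while True:
                header = f.read(8)
                if len(header) < 8:
                    break
                size, box_type = struct.unpack(">I4s", header)
                if size == 1:                      # 64-bit "largesize" follows
                    size = struct.unpack(">Q", f.read(8))[0]
                    payload = size - 16
                elif size == 0:                    # box extends to end of file
                    print(box_type.decode("ascii", "replace"), "(to EOF)")
                    break
                else:
                    payload = size - 8
                print(box_type.decode("ascii", "replace"), size)
                f.seek(payload, 1)                 # skip the payload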
I remember one of my first big OO projects was writing a TIFF reader.
What a nightmare.
The counterpoint to that is that some things are fast in hardware even if slow in software. H.264 is like this due to some design mistakes but JPEG2K could be the other way round.
They could have gone with JPEG XXL or JPEG XXX but the former is a bit too fanciful and the latter might cause some adoption problems outside a certain niche industry.
I prefer calling it JXL (or jxl) which is visually sufficiently unique to avoid confusion.
> It can do pretty fast encode/decode (for mezzanine use cases)
I'm not sure what this means, but zipping individual lines takes care of the interactivity problem of working with compressed intermediary frames.
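If I read that right, the idea is roughly this (a toy sketch with zlib; a real intermediate format would also store an index of per-line offsets):

    import zlib

    def compress_frame(scanlines):
        # Compress each scanline independently, so any single line can be
        # read back later without decompressing the whole frame.
        return [zlib.compress(line) for line in scanlines]

    def read_line(compressed_frame, y):
        return zlib.decompress(compressed_frame[y])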
Unfortunately, over the last several years, that's gone out the window.
Mojave can't read some Big Sur volumes, even though both are using "Apple File System".
HEIC image formats coming out of iPhones have changed /several/ times in the past couple years: witness the scrambling by the libheif project as they find out they can't read images from the latest iDevice.
Apple's in-device conversion to JPEG is lossy, but I don't expect you will notice. Most of the metadata is retained (at least with the current iOS), and I couldn't see egregious artifacting.
I'd personally keep the original and hope tooling keeps up with Apple shenanigans, but disk is cheap, and if you're worried at all, use `sips` (if you're on a Mac), the built-in "compatibility mode" conversion, or `heif-convert` to transcode to JPEG.
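For example, something along these lines (the exact flags are from memory, so treat them as an assumption and double-check):

    import subprocess

    # macOS built-in converter:
    subprocess.run(
        ["sips", "-s", "format", "jpeg", "IMG_0001.heic", "--out", "IMG_0001.jpg"],
        check=True,
    )

    # Or, with libheif's tools installed:
    subprocess.run(["heif-convert", "IMG_0001.heic", "IMG_0001.jpg"], check=True)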
(Source: building support for HEIC in PhotoStructure)
The reason is that they do fast hardware HEIC encoding at relatively aggressive quality settings in order to get the file size savings they want to boast about. They claim the quality to be the same, but that's not actually true. It is a lower quality.
One advantage of HEIC though is that it can also contain the depth map (in 'Portrait mode'), which I assume gets lost when you use JPEG. This is the information that gives a rough separation of foreground and background, so you can later do effects like applying bokeh to the background only.
If you're looking into archival, you'll probably want the ProRAW then; it sounds like that is the lossless format.
Converting from JPEG/HEIC to lossless is a bit of a pointless thing to do; you have already incurred generation loss at that point.
So in terms of long-term ability to read... RAW wins; the various versions aren't as old, but JPEG has its fair few of extensions too.
For long-term readability, I don't think JPEG would win from another standpoint: bitrot. It'll happen eventually; even if you use ZFS, you will eventually lose some bits, maybe a sector of data. JPEG doesn't like losing parts of the file.
On the other hand, a TIFF file can be recovered from bitrot, if you don't mind losing a part of the image. Because there is no compression, losing a sector of data amounts to losing the bits on that sector. The only sensitive part of the file would be the header, which isn't terribly complex and could be typed up in Notepad if need be.
Every vendor and application has weird extensions or behaviors with the format, which means that only custom-built support for the specific software that produced the file actually works well.
Every piece of software on the planet seems to support jpeg out of the box.
Or even store multiple copies, it's still smaller!
(And btw, parity doesn't protect you from bitrot forever, only for like a decade or two)
> (And btw, parity doesn't protect you from bitrot forever, only for like a decade or two)
Based on what settings and what environment?
By the time you're losing a large percentage of your sectors, you're probably losing everything regardless of format. You don't use file formats to protect from entire disks or tapes failing.
Also, if you set up paranoid levels of parity you can recover a perfect image even when a RAW file would be covered in gaps and noise, while still being a lot smaller.
I can either store one copy of a RAW, or I can store an unholy ball of parity that's exactly the same size.
The unholy ball of parity can lose up to 90% of the data and still be completely recovered, giving you a very high quality image, but if you lose more than 90% you get nothing.
A RAW image degrades more and more as you lose data, and if you lose 90% it's going to be useless anyway.
I'll definitely pick the compression+parity option.
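To make that trade-off concrete (numbers are made up; this assumes an MDS-style erasure code where any k of the n stored blocks are enough to reconstruct the file):

    raw_size_mb = 25.0      # hypothetical RAW file
    jxl_size_mb = 2.5       # hypothetical high-quality JXL of the same shot

    k, n = 10, 100          # need any 10 of 100 blocks to reconstruct
    stored_mb = jxl_size_mb * n / k     # 25 MB: same footprint as the RAW
    loss_tolerated = 1 - k / n          # 0.9: up to 90% of blocks can vanish

    print(stored_mb, loss_tolerated)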
Still, the JPEG recompression feature might be what helps adoption.
The player is missing the corner of her mouth in the JXL version.
JXL Medium (32KB) is about the same quality as WebP Small (19KB)
Note also that JXL is still being worked on. We also have no information about which encoder was used (I am assuming the reference encoder, since it is the only one that I know of right now) or which version.
Edit: the Citroën demo is also a clear win for JXL.
The singer’s left hand has wrinkles in the original image that disappear in WebP2.
Overall, WebP2 and especially AVIF are really good at very low bitrates (<1 bit per pixel), but unlike video, images on the Web will always be shown at the smallest bitrate necessary to be indistinguishable from the original; there, JXL tends to show all the details at a lower bitrate.
But it's also slightly worse than WebP Medium, e.g. with the corner of her mouth.
Most people do not want to have visible compression artefacts on the images they put on their web pages.
JXL starts from this premise and tries to answer the question: "How small can we then make the image?"
Care must be taken when trying to compare the performance of image codecs by increasing compression density until there are very visible compression artefacts and then evaluating whether A's or B's artefacts look worse: If both A's and B's artefacts are so bad that one would not want to put such an image on one's website, such an experiment gives no insight on what one would pick for images that one would actually put on one's website.
Figuratively speaking, if I buy a shirt, my main criterion is that it looks good in good condition, and not that it still looks good if I put a coffee stain on it.
So, before comparing codec quality at compression levels where artefacts show, always ask yourself: At that level of visual quality, would I actually want to put either of the two options on my website? Now, it is of course tempting to compare "away from the actual operating point", because it is just so much easier to do comparisons if there are very visible differences. Comparing near-identical images for quality is hard. Doing this over and over again in a human rater experiment is exhausting. But that is what actually answers the performance questions that need to be answered.
Comparing artifacts at 0.2 bpp is tempting because the artifacts are big there. But it's like buying a car based on how it performs when you are using only the first gear.
The internet uses JPEG qualities that average around 2-3 bpp today, and an improvement in compression density through a new format that compresses 50% better would push that down to 1-1.5 bpp. The comparison tool displays the bitrates when you hover over the images.
For all these examples, I was comparing at Small (as with Tiny they were both bad enough, but in different ways, that I often couldn't decide which was least bad). For the Abandoned Factory and Panthera Tigres, I think the extra detail of JXL looks better than the blurring of WEBP. On the other hand, I think WEBP looks cleaner than JXL for Buenos Aires, Reykjavik, and B-24 Bombers without losing significant detail. And Avenches is a mix, as JXL looks much better than WEBP for the trees and tile roof, but has worse chroma artifacts near the edges of the hat and clothing.
But that isn't the whole story, as for some of the images WEBP seems to both preserve more detail and have fewer artifacts, such as Air Force Academy Chapel, Pont de Quebec, Steinway and White Dunes. What all these cases seem to have in common is a smooth gradient adjacent to sharp detail. WEBP seems to do a much better job of dealing with that boundary by blurring the smooth part but preserving the sharp lines.
And if you bump up to WEBP2, the number of cases where it both preserves more detail and has fewer artifacts than JXL increases significantly.
In VP9/WebP this wasn't a "choice" so much as they were optimizing for good looking marketing graphs instead of pictures. You get blurry images if you target a metric like PSNR instead of actually looking at your output. x264 does have a few different tunings, the film one will try to turn detail to noise and the animation one won't.
Last week I submitted the 'ac strategy decision tree traversal' change to libjxl, which reduces ringing artefacts substantially, often by 60-70%. It does a diligent job of choosing the best combination of small integral transforms (there are 10 different 8x8 transforms) instead of just settling for a large transform.
This change also improves image quality, sharpness and truthfulness; it doesn't just blur the artefacts away.
This JPEG XL quality improvement has not yet landed in online comparison tools.
JPEG XL would be Turing-complete without the 1024×1024 pixel limitation - https://news.ycombinator.com/item?id=27559748 - June 2021 (34 comments)
Not sure if this is a good or bad thing.
EDIT: Looking at https://eclipseo.github.io/image-comparison-web/#japan-expo&... some details about the metallic structure in the background are noticeably worse (less defined) in the jpeg-xl version.
What can happen though is that the "Original PNG" doesn't have an ICC profile and gets treated by your browser in a different way than the PNG produced by the jxl decoder. This is a problem of your browser though, not of JPEG XL, and likely the image you see for "lossless JPEG XL" is the correct one.
For me it looks different in Firefox, the same in Brave (the uncompressed one in Firefox is the odd one out)
>Erik Andre from the Images Infra team at Facebook here. I'd like to share a bit of our view on JPEG XL in the context of new image formats (e.g AVIF, JPEG XL, WEBP2, ...) and how browser adoption will let us move forward with our plans to test and hopefully roll out JPEG XL.
>After spending the last 5 months investigating and evaluating JPEG XL from both a performance and quality point of view, it's our opinion that JPEG XL has the most potential of the new generation of image formats that are trying to succeed JPEG.
>This opinion is based on the following findings:
>JPEG XL encoding at speed/effort 6 is as fast as JPEG encoding (using MozJpeg with Trellis encoding enabled). This means that it's practical to encode JPEG XL images on the fly and serve to client. This can be compared with the encoding speed of AVIF which necessitates offline encoding which offers much less flexibility when it comes to delivering dynamically sized and compressed content.
>Depending on the settings used, JPEG XL can also be very fast to decode. Our mobile benchmarks show that we can reach parity with JPEG when using multiple threads to decode. This matches and in many cases surpasses the decoding performance of other new image formats.
>The JPEG XL image format supports progressive decoding, offering gains in perceived image load performance similar to those we are already benefitting from with JPEG. This is a feature lacking in the other new image formats, which are all derived from video codecs where such features make little sense to support.
>Having browser support from all the major browsers is going to make our lives a lot easier in upcoming WWW experiments and ensure that we can deliver a consistent experience across platforms and browsers.
Blink tracking bug: currently behind a flag. Firefox: currently in about:preferences#experimental on Firefox Nightly. If I remember correctly it is supported in Edge behind a parameter as well. I thought it all went very quiet after the standard was published; it turns out both Chrome and Firefox intend to support it.
AFAIK, neither WebKit nor Safari has any plan or intention to support JPEG XL. I think (someone correct me if I am wrong) Safari uses the macOS image decoding library, so supporting JPEG XL may have to come from an OS update rather than the browser?
Finally: an open standard, royalty-free, with an open-source reference implementation, and it is better than nearly every alternative. As an image format for the web it is quite possibly close to perfect. It is exciting, and I hope JPEG XL will succeed.
I remember from a conversation a little more than 6 months ago that the current encoder is not optimised for image quality below 1.0 bpp; those bitrates are going to be the focus once all the initial reference encoder and decoder standards and issues are done. So in case anyone is wondering why it doesn't look as good as other competitors (but still a lot better than JPEG): those improvements are coming later.
The only power Safari devs basically have is to decide which OS-supported formats to enable/allow and which ones to disable (e.g. JPEG 2000 is enabled but HEIC isn't).
So getting JPEG XL supported in Safari will most likely first require it to be supported in MacOS and iOS. If you have an Apple device and would like it to get JPEG XL support, then feel free to open a Feedback Assistant ticket (there's an OS-level application to do that) to make a feature request. (I did that 5 weeks ago but haven't heard back yet)
Progressive animation is a neat feature of FLIF but it is totally impractical in terms of decode speed and memory consumption. In general I think animation is better handled with video codecs (with multiple resolutions/bitrates to cover various devices/network conditions).
But again, the main bottleneck with image compression is embedded software, like smartphones, cameras, etc. There is a compromise and a cost/benefit trade-off between file size, transistor requirements, CPU cycles, power required, etc.
For example, I would be interested to see the size of the binary code required to compress JPEG, JPEG XL, BPG, WebP, etc.
The parallel speedup for lossless mode depends on the speed setting.
I am very happy the result of combining PIK and FUIF was not called PIKFUIF (or FUIFPIK). Some say JPEG XL is a bad name, but it could have been so much worse...
edit: This is also the case in other images with faces (and especially the eyes).
EDIT: granted, computers being stuck on old versions of Internet Explorer back in the day and therefore holding back (for example) PNG adoption is a very different situation than that of the modern web, which makes it a bit of an unfair comparison.
I hope that Safari does not take as long to implement it as with WEBP...
what is new here?
Lots of technical, practical and political reasons why this could quickly replace a whole bunch of legacy formats in various roles that have been hanging around for a while for one obscure reason or another.
A faster, better-looking, and simpler web, yes please! Please support this so that it actually happens this time.
Even if JPEG XL does not replace all other image formats, note that as the computing industry grows, so does the number of use cases for image formats. There is no law that there has to be one format to rule them all, however inconvenient that may be.
E.g. some users might want to edit and encode extremely high resolution images on their embedded smartphones with weak processing power. They might not care if their images are 5% larger; they don't want to make their friends wait for the encode to finish before they can take another picture.
Other users might only encode an image once and then distribute it to millions of users. A 5% improvement in bandwidth cost might be quite significant here. On the other hand, they have lots of computational resources to throw at making the optimal image.
Can both use cases be served by the same format? Maybe. Maybe JPEG XL is that format. But these use cases came up as computers became embedded so you could carry them around, and as websites sprung up with billions of visitors. This is a development of the last 20 years.
Often the response to such developments is an increase in complexity: more formats, more tools, etc.
JPEG XL will be successful for this reason alone.
It does seem likely that a future edition of PDF will include JPEG XL.
It doesn't need to be useful everywhere to be useful.
It seems image formats have ossified. Nobody cares if the images are 50% smaller because storage and network are cheap enough not to deal with the hassle of using a non-standard image format.
Do you really think people are working on improved image compression just for fun, and not because "somebody actually cares"?
Also, it is not just about compression, it is also about functionality. HDR displays are a thing, images with alpha transparency are also a thing. There just is no way to properly do HDR and alpha without "dealing with the hassle of using a non-standard image format", unless you think 16-bit PNGs are a good idea for the web.
It's then (usually, but it is optional) encrypted to make sure that it's hard to copy. The keys are sent directly to each projector to allow time-limited and/or otherwise limited use.
The reason why it's used is that it's optimized for quality rather than size. It also allows custom colour spaces and other tweaks to maintain/enforce colour correctness/image quality.
Innovation ? Will it make life easier for people ? No. You still need to update. Will the page load faster ? No, the page still has 30 MB at 50 kbps. Lossless ? Same as PNG. Lossy - same as JPG. So what is the innovation ? That it is very fast with 20 threads on an Epyc ? How about my 1 GHz phone with 4 cores ?
> Lossless ? Same as PNG. Lossy - same as JPG.
This is incorrect, JPEG XL is smaller.
Also why do you put spaces before your question marks? It makes your post look rushed.