On that demo page if I switch between the original, jpeg, webp, and avif tabs, the colors shift visibly. I have an AdobeRGB wide-color-gamut monitor configured in 10-bit mode, use Firefox with color-management enabled, and I have the AdobeRGB profile correctly set in Windows.
It is incredible to me that in 2021 the web makes it impossible to get even vaguely correct colours onto a screen.
I don't know who's to blame here: Web standards bodies, Mozilla, or the specific AVIF decoder.
But I do know that color-blind people are creating the next generation of imaging standards.
Given the original (lossless AVIF) and AVIF tabs match up and the others differ, I wonder if things went wrong when converting to the other formats? I don't have the time to look into the details right now, but it could simply be a case of the conversion between formats losing the colour space data from the image?
The one group highly unlikely to blame here is web standards bodies, who don't have anything to do with how image formats are rendered (it gets as far as defining the default colour space when none is provided, but that's it). But I'd guess it's likely an image file problem rather than a display problem.
> The one group highly unlikely to blame here is web standards bodies
I'm pretty sure I can blame a web standards body for not standardising colour management for the web!
The colour space of embedded images is no longer the only concern, and is not separate from the colours as defined in CSS, SVG, and Canvas to name a few.
Not to mention that even greyscale images need colour management because Macs, PCs, and Televisions all use different gamma curves.
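For concreteness, here is the standard sRGB transfer function pair (constants from IEC 61966-2-1; a sketch, not any browser's actual code path). Other targets, such as the Mac's historical gamma 1.8 or television's BT.709 curve, use different constants, which is exactly why even greyscale needs management:

```python
def srgb_encode(linear):
    """Linear light -> sRGB-encoded value, both in [0, 1]."""
    if linear <= 0.0031308:
        return 12.92 * linear          # linear toe near black
    return 1.055 * linear ** (1 / 2.4) - 0.055

def srgb_decode(encoded):
    """sRGB-encoded value -> linear light, inverting the curve above."""
    if encoded <= 0.04045:
        return encoded / 12.92
    return ((encoded + 0.055) / 1.055) ** 2.4
```

Display the same encoded greyscale value through two different decoding curves and you get two different brightnesses, even with no colour involved.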
Of course, their extension supports exactly two colour spaces: sRGB and Display P3, because fuck everyone else who isn't using an iDevice or a Mac, am I right?
If I sound salty, it's because I'm a photographer, and as such it grinds my gears that it is literally impossible to use wide-gamut images on the web for any purpose without degrading quality for a substantial fraction of the viewers. I fully expect this to be resolved satisfactorily some time in the 2030s, perhaps the 2040s. Any decade now...
Might be worth filing a bug with Mozilla. I'm pretty sure the files I used don't carry colour profile info that would cause this difference, but I've been wrong before.
The other image formats are colour-managed, so they look relatively desaturated on my WCG monitor. The AVIF images look "stretched" to the full gamut, making them garishly oversaturated.
Test in other browsers; I know of at least two bugs related to color accuracy in Firefox. They get brought up all the time in support forums like r/Firefox.
Whatever happened to jpeg2k? I was making PDFs of comics and discovered that PDF supports jpeg2k, and the subjective quality was so much better than JPEG, to the point of text being legible at about 5x smaller size. In addition, the artifacts it introduces are far less unpleasant: JPEG introduces massive amounts of ringing in high-contrast areas, while jp2 seems to just end up with lower contrast.
JPEG2000 is used in some industries, but was too patent-encumbered to succeed in general. AVIF and WebP should be better for most things, and AVIF is freely licensed.
Internet Explorer didn't support it, so it was dead from the start. By the time other browsers gained significant market share, it was already buried six feet deep so nobody even bothered.
That paragraph is poorly written and doesn't warrant a standalone heading at all. JPEG2000 is mainly a lossy format, comparing its rarely used lossless mode to other lossless formats is pointless.
While JPEG2000 didn't catch on, it's still used moderately in some fields, particularly medical imaging and some other scientific imaging. And wavelet-transform-based compression algorithms have been adopted in lots of other formats too.
I recently learned that movies get delivered these days in a format called Digital Cinema Packages[1] - which is mostly JPEG 2000 images for each frame and PCM audio.
Indeed. And studios are rapidly adopting lossless JPEG2000 as a replacement for their mezzanine masters via a similar standard called IMF (Interoperable Master Format). So yes, Hollywood is very much committed to j2k and has been for over a decade.
Generally speaking, wavelet-based image/video compression is comparatively expensive computationally. But it fits certain types of lossless / extremely-high-quality usage.
This part was a little hidden and hopefully will get better, but it's pretty rough...
> At an 'effort' of 2, it takes a good few seconds to encode. 'Effort' 3 is significantly better, but that can take a couple of minutes. 'Effort' 10 (which I used for images in this article) can take over 10 minutes to encode a single image.
> The command line tools are orders of magnitude faster
and from the previous sentence:
> Encoding AVIF takes a long time in general, but it's especially bad in Squoosh because we're using WebAssembly, which doesn't let us use SIMD or multiple threads
Also, since I wrote that article, we shipped wasm to Squoosh that makes use of threads. We aren't there with SIMD yet (for AVIF, we have it for some other codecs), but tooling is getting better.
That's awesome. Do you have a rough sense of what the best-case effort=10 encoding time is right now (in the CLI, on a decent computer), compared to the 10-minute number given above? Will it be 10x faster? 100x faster?
Looks surprisingly good for vector images, but I'm not sure if the size comparison is fair: you'd gzip this content and it will probably work well with text/SVG, and probably won't work at all with AVIF. Not to mention in the first case it's truly lossless, and in the second case it's... well, not bad.
For the F1 image, if you compare the 20KB JPEG vs the 18KB AVIF, there's no doubt AVIF is better. But if this picture is the content of the page, not some useless decoration, I'd rather be served the 70KB JPEG in this example; it isn't really a close competition. So I wonder if a 40-50-60 KB AVIF would look as good or better than the 70 KB JPEG. Then it would be impressive. Otherwise, it isn't really a drop-in replacement and I wonder what the correct use-case strategy should be.
Jake’s point was to compare the minimum acceptable qualities, showing that when you take things to extremes, AVIF is way better than the alternatives in surprisingly many fields. But the trouble is that JPEG starts destroying parts of the F1 image a good deal earlier, and so you have to have parts of the image at a much higher quality than would seem necessary, lest other parts of the image be ruined. Similar happens with MP3 compression artefacts and Opus/MP3 comparisons.
It would have been good if the article had also included an AVIF image increased in quality to match the JPEG's DSSIM score, or at least indicated what that quality level was. I'd expect that 30KB would be sufficient, quite possibly even 25KB.
The JPEG is markedly higher quality in many details, as Jake discusses:
> In fact, when I showed this article to Kornel Lesiński (who actually knows what he's talking about when it comes to image compression), he was unhappy with my F1 comparison above, because the DSSIM score of the JPEG is much lower than the others, meaning it's closer in quality to the original image, and… he's right.
> I struggled to compress the F1 image as JPEG. If I went any lower than 74 kB, the banding on the road became really obvious to me, and some of the grey parts of the road appeared slightly purple in a noticeable way, […]
> The fine detail of the road is lost in all of the compressed versions, which I'm ok with. However, you can see the difference in detail Kornel was talking about. Look at the red bodywork in the original, there are three distinct parts – the mirror, the wing connecting the bargeboard, and the top of the sidepod. In the AVIF, the smoothing removes the division between these parts, but they're still mostly there in the JPEG, especially the 74 kB version.
Although he does seem to be comparing images at incredibly high compression rates, to the point that the full sized versions don't look anywhere near acceptable (which is subjective I suppose).
Yep, and "quality" isn't always the only control you have; e.g. chroma subsampling is an option for JPEG (and maybe others?) as well, which sometimes has to be played around with in my experience if you really care about high-fidelity JPEGs (i.e. full-screen photos).
I really wish something existed (or was even built-in to image format encoders in some way, although I recognise that would arguably be bloat) to "bisect" what the optimum settings are per format without degrading quality based off perceptual pixel differences...
I've written a basic tool myself that I use to work out the chroma subsampling (whether 4:4:4 is needed, or if 4:2:2 can be used) for JPEG-encoding photos, and it can do a fair level of "quality" bisection as well. It's far from perfect, but still useful.
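A minimal sketch of that kind of quality bisection, using Pillow (this is an illustration, not the tool described above; the error budget and the assumption that pixel error falls monotonically with quality are both simplifications):

```python
from io import BytesIO
from PIL import Image, ImageChops

def encode(img, quality, subsampling):
    """JPEG-encode to bytes. subsampling: 0 = 4:4:4, 1 = 4:2:2, 2 = 4:2:0."""
    buf = BytesIO()
    img.save(buf, format="JPEG", quality=quality, subsampling=subsampling)
    return buf.getvalue()

def max_pixel_error(img, data):
    """Largest per-channel difference between the original and the decode."""
    decoded = Image.open(BytesIO(data)).convert("RGB")
    diff = ImageChops.difference(img, decoded)
    return max(hi for _, hi in diff.getextrema())

def lowest_acceptable_quality(img, budget=12, subsampling=0):
    """Bisect for the lowest JPEG quality whose worst pixel error
    stays within `budget` (assumes error shrinks as quality rises)."""
    lo, hi = 1, 95
    while lo < hi:
        mid = (lo + hi) // 2
        if max_pixel_error(img, encode(img, mid, subsampling)) <= budget:
            hi = mid
        else:
            lo = mid + 1
    return lo

# tiny synthetic gradient standing in for a photo
img = Image.new("RGB", (64, 64))
for y in range(64):
    for x in range(64):
        img.putpixel((x, y), (4 * x, 4 * y, 128))

q = lowest_acceptable_quality(img, budget=20)
```

The same predicate could compare 4:4:4 against 4:2:2 output to decide whether subsampling is visually safe for a given image.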
Some of the examples use subsampling, some don't, depending on if I felt it needed it visually. From memory, I think I disabled subsampling for all the illustration images. Except in WebP where you can't disable it. For WebP I used its "sharp YUV" mode instead.
It would be actually pretty neat if HN was automatically linking to any relevant discussion for any link that was posted by a member here. Something as simple as superscripts would do.
That "Flat illustration" lossy AVIF case seems to have a subtle but noticeable colour shift on large pure-colour areas that WebP doesn't have (check the background of the second row, third column figure). What causes this?
Really? I don't see any difference, except I do see a pixel shift if I switch between uncompressed/AVIF.
This could happen due to DC shifts (one pixel gets quantized, the rest are coded as a difference from it), or a mistake in the post-decoding pipeline (gamma/colorspace correction).
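As a back-of-the-envelope illustration of the DC-shift effect (pure arithmetic, no real codec involved): for a flat 8x8 block, the orthonormal DCT concentrates everything into the single DC coefficient, so quantising that one value shifts the entire region uniformly, which is exactly what a colour shift on a large flat background looks like.

```python
def dc_roundtrip(value, qstep):
    """Flat 8x8 block of `value`: its only nonzero (orthonormal) DCT
    coefficient is DC = 8 * value. Quantise with step `qstep`,
    dequantise, and invert to get the reconstructed flat value."""
    dc = 8.0 * value
    dc_q = round(dc / qstep) * qstep   # quantise / dequantise
    return dc_q / 8.0

shifted = dc_roundtrip(99, 16)   # a flat level of 99 comes back as 100.0
exact = dc_roundtrip(100, 16)    # 100 happens to survive the round trip
```

With a quantiser step of 16 on the DC coefficient, every flat value in the block snaps to the nearest representable level, so whole regions move together rather than dithering pixel by pixel.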
I just modified my comment to make it more clear what I meant.
But you're right, probably just error due to compression/quantization etc. in DC shift or in colorspace conversion. It seems to happen in low quality JPEG too after I did some quick tests myself.
When I found this out about jpeg, I was quite amazed. Things could look okay pretty much right away, then get sharper if you give it a second. Shorter loading times and a reasonable image pretty much no matter the size.
In practice, I never really used it. I toyed with it once or twice, and I turn it on every time because with jpg it often yields marginally lower file sizes anyway (in gimp you can preview file size and appearance), but I never heard anyone say "what was that magic", and whenever I see it on websites I think to myself "yeah okay, that 20x20 pixel image, it could have saved a render cycle, this is useless and looks terrible" before it downloads to 100% before the next render cycle.
So I'm not sure it will really be missed, even if you saddened my inner nerd. I really do like the feature, it's really neat to just load as much as you wish and get however much 'quality' fits within that data size.
It was really nice in the nineties when we had dial-up and large jpeg images (>100 Kbyte) were loading a bit slower so you didn't have to wait before getting an impression.
Now it's important in mobile networks, but you can also opt to turn off images. Who needs images anyway while reading, toddlers?
Progressive use is up since mozjpeg and some other tools (CDNs perhaps?) started defaulting to it. My guesswork is that during the last eight years of format development the biggest winner is progressive JPEG (from 8 % to 25 % of JPEG use). The 'green men' problem and interpolation of progressive JPEG have been fixed recently.
It seems likely that that blog post is wrong in that detail, given that pretty much everyone else says that making JPGs progressive reduces the file size.
All I know (if I remember correctly, I'm fairly certain but won't rule out that I'm mixing things up) is that GIMP has a toggle for it and when it's enabled, it sometimes saves a tiny amount but never seems to get larger. Perhaps the toggle doesn't do what it says on the tin?
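For what it's worth, the toggle is easy to check directly with Pillow (a sketch on a synthetic image; whether progressive actually comes out smaller depends on the image content, as the comments above note):

```python
from io import BytesIO
from PIL import Image

def jpeg_size(img, progressive):
    """Byte size of img saved as baseline vs progressive JPEG."""
    buf = BytesIO()
    img.save(buf, format="JPEG", quality=85, progressive=progressive)
    return len(buf.getvalue())

# synthetic image with some texture so the comparison isn't degenerate
img = Image.new("RGB", (64, 64))
for y in range(64):
    for x in range(64):
        img.putpixel((x, y), (4 * x, (x * y) % 256, 4 * y))

baseline = jpeg_size(img, False)
progressive = jpeg_size(img, True)
```

Running this over a folder of real photos would settle the "sometimes saves a tiny amount" question for your own material.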
The video itself is funny, because it looks like JPG is the first to finish by far, but in reality it is still getting more detail; for most people that detail is not visible, and by the time they look more closely, it may have already finished loading.
> for most people that detail is not visible, and by the time they look more closely, it may have already finished loading
I guess it depends at what point it does the first render. I rarely see images load progressively in the first place, but when I do it's usually with an initial render that looks like 1967 needs their image back. My guess is that it's busy downloading the many/largeish js/css files that prevent it from getting very far on images before the DOM is ready for initial render.
Or maybe, you only notice that one because it's obvious, but most of the time when you see a JPG which you think has finished loading, it's actually still not done but it looks like it's done, as in the example above.
I honestly could not tell it was still going after the 2nd progressive layer.
I kind of dislike progressive loading; certainly the type that's used as an (hypothetical) example in the article. The squashed blurry bad-quality image isn't all that useful, and you're never actually sure if it is or isn't loading a better version in the background. I'm not so sure if it's really better UX, because either 1) you care about the image quality in your use case and a blurry version just won't do, or 2) the image quality isn't all that important and you should be serving smallish images anyway (like the Guardian), possibly clickable to load a high-res version for those who desire it.
This is not true. Firefox readily supports blocking all video autoplay (Permissions → Autoplay has three choices: Allow Audio and Video; Block Audio; Block Audio and Video), which I gladly use. And I really wish animated images were treated the same, because it’s a bug that they’re not, but a well-entrenched one that’s nigh-impossible to fix.
That's what I said. I was correcting the parent comment that said that you couldn't block autoplay of muted videos. I block all video autoplay, and wish I could block animated image autoplay also.
This is the default, but some media sites have made autoplay videos _the_ next-gen whole page popups, so I gladly switched to block all.
It also saves a lot of bandwidth.
Maybe this is a stupid request, but I'd like <img> (or maybe the newer <picture> tag) to take a video but with autoplay=true, loop=true, muted=true and controls=false when given a video source
Safari already supports playing MP4 in <img> tags [1]. I think that's how "animated GIF" type of animations should be handled in the Web.
It's silly that there are video formats with excellent compression but image formats want to reinvent it (such as animated PNG, animated WebP, animated AVIF...). It just adds extra complexity to image formats.
Ah, that seems exactly like what I was asking for; unfortunately, in the 3 years since Safari added it, I can't find much pickup on it. I found this on Chromium but with not much progress. It seems their code path doesn't quite let <img> decode a video. I assumed it would be as easy as creating a fake <video muted loop autoplay>, but it's probably much messier.
Nor in documents: many PDF readers (including browsers?) don't seem to be able to read MP4 in PDFs, EPUB doesn't support MP4, and MHTML has been discontinued for some weird reason (at least LibreOffice displays MP4 in ODT just fine, but it has other issues). Maybe this is due to the patents in MP4? And GIF is supported but is just too big.
They should not have released it without a much longer open comment period, if they expected anyone else to implement it. VP8 had barely just come out of hiding from a private company and hadn't been reviewed.
I have known the author of WebP (he previously worked on XviD) and I'm pretty sure it was his pet project.
The Windows "Photos" app doesn't support webp, at least not without an extension, so it doesn't associate the file extension with the app. Chrome can display these photos, so it registers the webp file extension.
I'm not sure if that page is up to date. Firefox for Android and Chrome for Android both seem to support AVIF according to my tests. I'm not sure what "Android browser" means here, but it's been a while since I've seen the built-in browser engines not running on some recent version of Chromium as a base.
I don't expect Apple to ever implement AVIF. They have bought into HEIF as the new version of JPEG through licenses and hardware decoding, and they're not exactly known for implementing open standards next to their own.
Encoding WEBP and AVIF and using the <picture> element on web pages should solve the bandwidth problem without depending on outdated browsers or lacking browser manufacturers. AVIF is better than WEBP in many cases, but WEBP is also better than JPEG or PNG in many places the older formats are still used.
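The fallback pattern in question is short enough to show in full (file names are illustrative); the browser uses the first `<source>` whose `type` it supports and falls back to the plain `<img>` otherwise, so outdated browsers simply get the JPEG:

```html
<picture>
  <source srcset="photo.avif" type="image/avif">
  <source srcset="photo.webp" type="image/webp">
  <img src="photo.jpg" alt="photo" width="800" height="600">
</picture>
```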
I can't get AVIF images to open in the latest Chrome on Android, pages with fallbacks use the fallback as well. My guess is it'll be enabled when AV1 is (which should be soonish now that devices with hardware AV1 decode are starting to be shipped in volume).
Apple added AV1 hardware decode support to the A14 so I wouldn't rule it out in the future.
> I'm not sure what "Android browser" means here, but it's been a while since I've seen the built-in browser engines not running on some recent version of Chromium as a base.
Looks like it's the pre-Android 5, non-Chromium AOSP browser?
Great to hear. I hope that this means we'll get some hardware vendors roadmapping encoding support. AV1 is great and will likely replace things eventually, but holy hell encoding is slow right now. At the moment it only seems worth it if you have limited content but immense traffic.
There are things I'm really excited about with the format though. The forced 10bit support is a big one. A unified format is also nice (i.e. not having to worry about png for screenshots / jpg for photos). Obviously file size improvements are great too. Overall though the format has a few years to go before it's production worthy except in niche circumstances.
I personally use the cbrunsli/dbrunsli cmdline programs to archive old high-resolution JPEG photos that I've taken over the years. Having a gander at one subdirectory with 94 photos @ 354 MB in size, running cbrunsli on them brings the size down to 282 MB, which brings in savings of about 20%. And if I ever wanted to convert them back to JPEG, each file would be bit-identical to the originals.
Perhaps it's a little early to trust my data to JPEG XL/Brunsli, but I've run tests comparing hundreds of MD5 checksums of JPEG files losslessly recreated by Brunsli, and have not yet run into a single mismatch.
I can only say that I am very excited for the day that JPEG XL truly hits primetime.
Brunsli works very well, but is not compatible with the final JPEG XL format. To reduce the binary size of libjxl, we decided to use a more holistic approach where the VarDCT machinery is used to encode JPEGs losslessly. This saved about 20 % of the binary size and reduced attack surface. Now the decoder binary size is about 300 kB on Arm.
I'm really rooting for JPEG XL. AVIF is good, but it's clearly a video codec adapted to still images, whereas JPEG XL covers a lot more use cases that matter when dealing with images (i.e. palette images, progressive decoding, more bit-depth options, JPEG lossless recompression, etc). On top of that, producing large AVIF images takes an ungodly amount of memory and time right now.
"AVIF is currently better at low image quality. JPEG XL is currently better at medium and high image qualities, including the range of image quality used in the internet and the cameras today." https://encode.su/threads/3397-JPEG-XL-vs-AVIF#:~:text=AVIF%....
I just wanted to mention that XL is currently not optimised for low bitrates, so there is lots of potential to improve.
And I do think taking the median image size and bitrate on the Internet was the wrong point of optimisation. Lots of images on the internet just aren't properly optimised.
I think, and I may be wrong, that the format is now finalised. (The original schedule was last July... so it was delayed a bit.) People who are interested could start testing it.
JPEG XL's great properties extend to about 3-5x lower bitrates than what are used in the Internet and to about 10x lower bitrates than what are used in cameras. Also some improvement is happening in that area.
The JPEG XL reference implementation is under the Apache 2 license, which means that it comes with a royalty-free patent grant. So pretty much the same as AVIF (BSD 2 clauses + royalty-free patent grant).
Reference implementation is irrelevant to the patents which cover the use of underlying codec. The codec might get AOM protection, but then there's always some companies with patents outside of AOM.
H.265 debacle with 2 patent holder groups is a good example of that.
Those outside the AOM will face the combined power of AOM if they attack. In other cases it depends on who defends the codec.
H.265 is an anti-example, because there patents were pooled for offensive purposes to begin with, while AOM pooled resources for defense. I.e. something like MPEG-LA exists to extort money, not to defend anyone. That's also why you got two separate pools there. Patent aggressors don't want to share, each of them wants to get all the loot for themselves.
That sounds good, but AOM also provides a rather heavy weight protection against patent trolling and similar threats which is a benefit. It's similar to OIN in that sense.
AVIF is a format that puts an AV1 bitstream into a HEIF container. You're thinking about what AOM said about AV1 [6], not AVIF; they should not (and in fact do not) make that claim for AVIF -- which builds on HEIF that isn't their work -- though you ought to judge the situation for yourself [7].
I am not a lawyer, and I realize there's a fine line between genuine concern and spreading FUD, but in the spring of 2019 I looked into the patent situation around the HEIF container itself [1] -- the container upon which AVIF builds -- and skimmed through the 5 US patents I found, which cover some techniques that can be used in the format.
Most of them can probably be avoided for the purposes of an AVIF file, but patent US20160232939A1 [2] in my reading seems to be about in-container signalling to express relationships between a "static media item" and "one or more entities" that together "form a group", and "indicating, in the file, a grouping type for the group". The patent appears to be written in a way to allow this definition to encompass, say, a thumbnail and a bunch of frames thereafter, or, say a master image and a set of pictures derived from it, or alternate camera angles of the same thing. Some of these techniques sound like stuff we've seen before, but as is common in patents, the precise wording of claims is often key, and this is where patent lawyers come in.
A thorough look at the AVIF specification [3] and the patents registered with the MPEG LA for this format [1] is likely wise before any widespread deployment that makes use of advanced features of the HEIF container; using it to hold exactly 1 'one-layer' still image is probably fine.
Additionally, in my reading [5], the HEIF reference software released by Nokia [4] includes a patent grant for non-commercial evaluation, testing and academic research only.
I had that question before too, still not sure why they picked HEIF for AVIF, instead of using some more obviously free container. They could easily avoid all the above.
We went through implementing AVIF support in our image service, only to find out that it generated larger images than WebP anyway. Not sure if that’s typical though.
Yes, we also tried converting images to .avif with the NPM module Sharp version 0.27.0, and we got images as large as .webp using the default speed of 5/8; it takes 10 seconds to encode on the same AWS Lambda with 512 MB.
I read that increasing the speed to 8/8 will lead to smaller files, but the encoding will take 3 minutes.
Do you have an example? The F1 image from the post above looks pretty convincing, if you have different data then I'd be curious to see that. For example you say larger images, but what about quality? Was that not better for the larger file size? Part of this is subjective as well (I really mind the non-smooth areas and text artifacts in jpeg, other people might mind other things).
It's actually pretty easy to fix those artifacts in a JPEG decoder at the cost of making it a little too smooth. The text artifacts are only partially a compression artifact and partially from a poor quality YUV>RGB conversion, so you can do a better one.
Blocking artifacts can be reduced with a deblocking filter; JPEG/MPEG1/MPEG2 are so simple it doesn't take much work to guess what was supposed to be there.
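A toy illustration of the idea (a hypothetical one-dimensional deblocker, not any real codec's filter): since JPEG codes each 8-pixel block independently, flat neighbouring blocks can land on slightly different levels, and softening the step at each boundary guesses at what was supposed to be there.

```python
def deblock_row(row, block=8, strength=0.5):
    """Soften the step across each `block`-pixel boundary in one scanline.
    `strength` in [0, 1] controls how much of the step is smoothed away."""
    out = list(row)
    for b in range(block, len(out), block):
        step = out[b] - out[b - 1]
        out[b - 1] += strength * step / 2   # pull the last pixel of a block up
        out[b] -= strength * step / 2       # and the first of the next down
    return out

smoothed = deblock_row([10.0] * 8 + [20.0] * 8)
```

A real deblocking filter also checks local activity so it doesn't blur genuine edges, but the principle is this simple.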
We were adding support for .avif images in our backend this week.
It seems .avif encoding is very slow right now.
It takes 3 minutes to encode an image with the most compressed setting (speed 8/8).
Using the default speed 5/8 takes 10 seconds but gives a size practically as big as webp (which takes less time to encode).
I think/hope this will likely improve in the upcoming months.
Just like any other format, it's pretty much dead in the water outside of desktop until it gets implemented in hardware (hopefully, without the encumbering patents, ASAP!)
The ImageMagick that's packaged in Debian doesn't have AVIF support, unfortunately. They somehow split between version 6 and version 7, which is confusing, because both are progressing in parallel but don't have feature parity.
Are you talking about "stable" repos? Do you know anyone who sticks to those, for personal use?
(A lot of people use "testing" for reasons I have not been able to comprehend. The main difference from "unstable" is it takes a minimum of 2 weeks for fixes to show up in "testing". Or did, last I checked.)
Ah, I didn't realize that! Looking at this [0] issue thread, it seems like the minimum version with AVIF support is v7.0.25. The suggestion to use Docker in that thread might be useful for you as well, if you are interested in a couple quick tests and are already set up for containers.
Can you put an AVIF animation in an img tag? That would be a pretty killer feature that would give you pretty instant adoption. There are still loads of places that force you to use GIF.
Yes, you can put an .avif animation in an <img> tag.
https://www.lambdatest.com/blog/avif-image-format/
You can also use the <picture> tag to specify multiple images of different types for newer and older browsers.
Just wondering, is there a "lossless" conversion from PNG/JPEG/etc to AVIF? Kinda like Lepton[1] but just re-compressing without further loss of details.
AVIF does not support recompression of JPEG files like Lepton, Brunsli, PackJPG, 7zip media layer (using Brunsli), and JPEG XL. AVIF works very well in the high density low-bit-rate compression. The flip side there is loss of detail.
https://jakearchibald.com/2020/avif-has-landed/
This is incredible.