Likely the format with the best chance of overthrowing the jpg/gif/png incumbents is AVIF. Since it's based on AV1, you'd get hardware acceleration for decoding/encoding once it starts becoming a standard, and browser support will be trivial to add once AV1 has wide support.
Compression-wise, AVIF performs at about the same level as FLIF (15-25% better than WebP, depending on the image), and is also royalty-free. The leg up it has on FLIF is that the Alliance for Open Media is behind it, a consortium of companies including: "Amazon, Apple, ARM, Cisco, Facebook, Google, IBM, Intel Corporation, Microsoft, Mozilla, Netflix, Nvidia, Samsung Electronics and Tencent."
I'm really excited for it and I hope it actually gets traction. It'd be lovely to have photos / screenshots / gifs all able to share a common format.
I used to think the same, but I now think JPEG XL is poised to be the 'winner' among next-gen image codecs. It's royalty-free, has great lossy and lossless compression that is said to beat the competition, and provides a perfect upgrade path for existing JPEGs, since it can losslessly recompress them into the JPEG XL format with a ~20% size decrease (courtesy of the PIK project).
It's slated for standardisation within a couple of weeks; it will be very interesting to see large-scale comparisons of this codec against the likes of AVIF and HEIF.
The etymology of the name "XL" is as follows: JPEG has called all its new standards since j2k something that starts with an X: XR, XT, XS (S for speed, since it is very fast and ultra-low-latency), and now XL. The L is supposed to mean Long term, since the goal is to make something that can replace the legacy JPEG and last as long as it did.
Could have been e.g. JPEG PF or JPEG LTS.
AVIF (and its image sequences) seems to be fairly complicated. Here are a few comments about it:
"Given all that I'm also beginning to understand why some folks want something simpler like webp2 :)"
"Despite authoring libavif (library, not the standard) and being a big fan of the AV1 codec, I do occasionally find HEIF/AVIF as a format to be a bit "it can do anything!", which is likely to lead to huge or fragmented/incomplete implementations."
Anyway, instead of adding FLIF, AVIF, BPG, and lots of similar image formats to web browsers, I think only one good format is enough and JPEG XR might be it. After something has been added to web browsers, it can't be removed.
Safari hasn't added support for WebP (which is good, there's no need for WebP after AVIF/JPEG XR is out) and it hasn't added support for HEIF (which is weird, considering Apple is using it on iOS), but maybe they know that there's no need to rush.
It is complementary to AVIF (which targets photo capture more than delivery over the web).
WebP2 is like WebP: keep it simple, with only useful and efficient features, no fancy stuff. And aim at very low bitrate, it's the current trend, favoring image size over pristine quality.
Progressive rendering uses ~3x more resources on the client side. So, instead, better to have efficient 300-byte thumbnails in the header.
I think you meant JPEG XL here? JPEG XR is a completely different thing.
(Speaking as someone who's seen several open, licensing-unencumbered image/video/audio formats fail to get traction with a majority of browsers).
Plus I'd love the ability to say "I'm on a low bandwidth, $$$ per megabyte network, stop loading any image after the first iteration until I indicate I want to see the full one" because you almost never need full-progression-loaded images. They just make things pretty. Having rudimentary images is often good enough to get the experience of the full article.
(whether that's news, or a tutorial, or even an educational resource. Load the text and images, and as someone who understands the text I'm reading, I can decide whether or not I need the rest of that image data after everything's already typeset and presentable, instead of having a DOM constantly reflow because it's loading in more and more images)
I know it's a small thing and doesn't really matter, but I don't like progressive photos.
Edit: This is just one context. There are plenty of other contexts where progressive is very useful.
When it's the background for some webpage, or some other photo I'm not focused on, the progressive quality is probably a good thing.
Huh? It's obviously easier to ignore the spinner than to ignore a low-res image. You've seen the spinner before.
See, this is the issue. Progressive images aren't "mostly usable content"; they're vague, ugly blobs.
And to enable this, the site only needed to create one high-resolution image.
Seems like a victory for loading speeds, for low-bandwidth, and for content creation.
I think FLIF looks incredible.
Well, it's not finalized as of yet (though it is imminent), so rate of adoption is just pure guesswork at this stage. However, things I deem necessary for a new image codec to become the next 'de facto' standard are:
- major improvements over the current de facto standards
Both AVIF and JPEG XL tick these boxes; however, JPEG XL has another strong feature: it offers a lossless upgrade path for existing JPEGs, with significantly improved compression as a bonus.
This being a 'killer feature' of course relies on JPEG XL being very competitive with AVIF in terms of lossy/lossless compression overall.
>A lightweight lossless conversion process back to JPEG ensures compatibility with existing JPEG-only clients such as older generation phones and browsers. Thus it is easy to migrate to JPEG XL, because servers can store a single JPEG XL file to serve both JPEG and JPEG XL clients.
I don't think so, but I don't quite see the point unless you are thinking of using it as a way to archive JPEGs, and in that case there are programs specifically for that, like PackJPG, Lepton, etc.
There is also a mode that only preserves the JPEG image data.
The same thing can be done with image data. For something like jpeg you can keep the coefficients but store them in a more compact form.
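Roughly the idea, as a toy sketch (this only illustrates the principle of keeping the same symbols and re-packing them, with stdlib coders standing in; it is not JPEG XL's actual pipeline):

```python
# Toy illustration: "lossless recompression" keeps the exact same symbols
# (here, fake quantized coefficients) and only swaps in a different entropy
# coder, so the round trip stays bit-exact while the container may shrink.
import lzma
import random
import zlib

random.seed(0)
# Fake "quantized coefficients": mostly zeros with a few small values,
# loosely resembling post-quantization JPEG data.
coeffs = bytes(random.choice([0] * 20 + [1, 2, 3, 255]) for _ in range(64 * 1024))

baseline = zlib.compress(coeffs, 9)        # stand-in for the original coder
better = lzma.compress(coeffs, preset=9)   # stand-in for a stronger one

assert lzma.decompress(better) == coeffs   # bit-exact reconstruction
print(len(baseline), len(better))          # compare the two sizes
```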
For what it's worth, JPEG XL claims that it's 'reversible', but I'm not sure if that means you get your original JPEG back, byte for byte, or you get an equivalent JPEG back.
I don't think getting the same JPEG is the goal here but getting a JPEG that decodes to the same image data.
But it might be more hassle than it's worth it to create the code and algorithms to do so.
Btw, if memory serves right, Google Photos deliberately gives you back the same pixels but not the same JPEG codestream bits under some circumstances. That's to make it harder to exploit vulnerabilities with carefully crafted files.
If you can't produce the exact same original JPEG, then you can still have some issues during the global upgrade process -- e.g. your webserver's database of image hashes for deduplication has to be reconstructed.
A relatively minor problem to be sure, but afaict if JPEG XL does support this (which apparently it does), the upgrade process is really as pain-free as I could imagine. I can't really think of anything more you could ask for out of a new format: better compression plus backwards and forwards compatibility.
It's not quite the same thing, but a very highly compressed high-res picture would probably look okay in thumbnail format.
Encoding throughput is configurable from ~1 MP/s to about 50 MP/s per core. That's somewhat slower than JPEG, but it can be faster with multiple cores.
Given the way GIF is used nowadays, the only reason it exists is that it lets you put a video in most of the places where you are only allowed to put pictures. For some weird reason many such websites still won't let you upload an MP4 file, despite the fact that they auto-convert the GIFs you upload to MP4/AVC. From a technical point of view there is hardly a reason for GIF to be used at all.
(ThinkPad T400s are really bad at thermal management.)
I also believe this was due to some license choices in the past.
But in this time of responsive websites it is a great format!
Just download the amount of pixels you need to form a sharp image and then skip loading the rest.
I really hope this becomes mainstream one day.
That said, lossless codecs have it easier because you can freely transcode between them as the mood takes you, with no generation loss, so there's less lock-in. For example, FLAC is dominant in audio, but there are a variety of other smaller lossless formats which still see use. Nobody much minds.
The hardest part is figuring out which patents they infringe, and how to circumvent them.
The UX concern you're describing doesn't necessarily have anything to do with the implementation details themselves, as demonstrated by sites like imgur rebranding silent mp4/webms as "gifv". As long as the new format makes it reasonably easy to access data about its content (e.g., a header field indicating animation/movement), there shouldn't be any issue with collapsing the use cases into a single implementation while simultaneously addressing the UX concern you mention.
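For AVIF specifically, a minimal sketch of what such a check could look like (simplified, not a full ISOBMFF parser; AVIF image sequences advertise the 'avis' brand):

```python
# Peek at the leading 'ftyp' box of an AVIF/HEIF file and report whether it
# advertises the image-sequence brand ('avis'), i.e. the "this one animates"
# signal discussed above. Simplified: assumes a normal-sized ftyp box first.
import struct

def is_animated_avif(path):
    with open(path, "rb") as f:
        size, box = struct.unpack(">I4s", f.read(8))
        if box != b"ftyp":
            return False
        payload = f.read(size - 8)   # major brand, version, compatible brands
    brands = [payload[i:i + 4] for i in range(0, len(payload), 4)]
    return b"avis" in brands

# is_animated_avif("example.avif")  # hypothetical file path
```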
Here are some supporting implementations:
If you want to check how it looks on a supported browser:
I was also digging around and, as OP mentioned, found it an interesting topic.
Modular mode in JPEG XL also has MA trees, but the bitstream is organized in a way that allows parallel decode (as well as efficient cropped decode).
In contrast, with JPEG XL, I can simply use the high quality setting once per image and be done with it, trusting that I will get a faithful image at a reasonable size.
> Keep in mind that the author of FLIF works on the new FUIF (https://github.com/cloudinary/fuif), which will be part of JPEG XL. So FLIF will probably be deprecated soon. And since JPEG XL is also based on Google PIK, there is a high probability that Google will support this new format in their Blink engine.
(Compiled to WebAssembly, not by me)
I'm very excited about JPEG XL, it's a great codec that has all the technical ingredients to replace JPEG, PNG and GIF. We are close to finalizing the bitstream and standardizing it (ISO/IEC 18181). Now let's hope it will get adoption!
I'm not sure if putting it all under one standard will make it hard to adopt, and/or if AVIF (based on AV1) will just become the new open advanced image format first. Interesting to see in any case.
Brief presentation: https://www.itu.int/en/ITU-T/Workshops-and-Seminars/20191008...
Longer presentation with more of the rationales for choices: https://www.spiedigitallibrary.org/conference-proceedings-of...
Draft standard: https://www.researchgate.net/publication/335134903_Committee...
Play in your browser: http://libwebpjs.appspot.com/jpegxl/
> Input: 61710Bytes => JPEGXL: 35845Bytes 0.58% size of input
I think someone forgot to multiply by 100. (too bad, I mean 172x compression would be even more amazing!)
The reason we have PNG and JPEG is that they are, all in all, more than good enough. Yes, the dreaded "good enough" argument surfaces again stronger than ever. They are also easy to understand, i.e. use JPEG for lossy photos and PNG for pixel-exact graphics. But most importantly they both compress substantially in comparison to uncompressed images (like TIFF) and both have long ago reached the level of compression where improving compression is mostly about diminishing gains.
As there's less and less data left to compress further the compression ratio would need to go higher and higher for the new algorithm to make even a dent in JPEG or PNG in any practical sense.
Also, image compression algorithms try to solve a problem that has been gradually made less and less important each year with faster network connections. Improvements in image compression efficiency are way outrun by improvements in the network bandwidth in the last 20 years. The available memory and disk space have grown enormously as well.
For example, it's not so much of a problem if a background image of a website compresses down to 500 KB rather than 400 KB because the web page itself is 10 MB and always takes 10 seconds to load regardless of which decade it is. If you could squeeze half a megabyte off the website's image data, the site wouldn't effectively be any faster because of it (though maybe marginally so, allowing the publisher to add another half a megabyte of ads or other useless crap instead).
Compression is still majorly important on mobile. Mobile coverage is mostly not great except maybe in bigger cities where you get to share the coverage with millions of others. Also mobile providers still throttle connections, bill per GB, etc. So, it matters. E.g. Instagram adopting this could be a big deal. All the major companies are looking to cut bandwidth cost. That's also what's driving progress for video codecs. With 4K and 8K screens becoming more common, jpeg is maybe not good enough anymore.
That’s not really the case for JPEG XL, where the main parameter of the reference encoder is in fact a target quality. There is a setting to target a certain file size, but it just runs a search on the quality setting to use.
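As a sketch of that kind of search (my own toy version, not the reference encoder's logic; `encode` is a stand-in for whatever encoder wrapper you supply):

```python
# Binary-search the quality knob until the encoded size lands under a target.
# `encode(image, quality=...)` is a hypothetical callable and is assumed to
# be roughly monotone in quality.
def encode_to_size(encode, image, target_bytes, lo=0.0, hi=100.0, iters=12):
    best = encode(image, quality=lo)        # fallback: lowest-quality attempt
    for _ in range(iters):
        q = (lo + hi) / 2
        data = encode(image, quality=q)
        if len(data) > target_bytes:
            hi = q                          # too big: search lower qualities
        else:
            lo, best = q, data              # fits: keep it, try higher quality
    return best
```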
It really needs a giant caveat saying "lossless". I mean, that's still great and impressive, but it clearly doesn't erase the need for a user to switch formats as a lossless format is still not suitable for end users a lot of the time.
(It does have a lossy mode, detailed on another page, but they clearly show it doesn't have the same advantage over other formats there.)
Seems to me like they're doing a pretty decent job of communicating that it's lossless.
It's actually provably impossible, using a simple counting argument. A lossy algorithm can conflate two non-identical images and encode them the same way, while a lossless algorithm can't, so on average the output of a lossless algorithm is necessarily larger than that of a lossy one, because it has to encode more possible outputs.
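The pigeonhole step of that argument in miniature (a toy check, averaging uniformly over all n-bit inputs):

```python
# There are 2**n possible n-bit inputs, but strictly fewer bit strings shorter
# than n bits, so no injective (lossless) code can shrink every input; a lossy
# code escapes this only by mapping many inputs to the same codeword.
n = 24
inputs = 2 ** n
shorter_codewords = sum(2 ** k for k in range(n))   # lengths 0 .. n-1
assert shorter_codewords == 2 ** n - 1 < inputs
print("at least", inputs - shorter_codewords, "input(s) cannot be shortened")
```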
Well, yeah, obviously if you want to shoot yourself in the foot that is always possible. But for any lossless algorithm it is trivial to derive a corresponding lossy algorithm that will compress better than the lossless one.
Which is a completely different problem from taking a fixed lossy algorithm and trying to beat it.
I think you mean "unless the lossy algorithm has a perverse encoding."
But your argument proves too much. It's true that no lossless algorithm can beat a lossy algorithm if averaged across all possible images, for precisely the reason you say. But for the same reason, no lossless algorithm can beat the "compression algorithm" of no compression at all, like BMP but without redundancy in the header, averaged across all possible images.
The unknown question — which I think we don't even have a good estimate of — is how much incompressible information is in the images we're interested in compressing. Some of that information is random noise, which lossy algorithms like JPEG can win by discarding. (Better algorithms than JPEG can even improve image quality that way.) But other parts of that information are the actual signal we want to preserve; as demonstrated by Deep Dream, JPEG doesn't have a particularly good representation of that information, but we have reason to believe that on average it is a very small amount of information, much smaller than JPEG files.
If it turns out that the actually random noise is smaller than the delta between the information-theoretic Kolmogorov complexity of the signal in the images we're interested in compressing and the size of typical JPEG files for them, then a lossless algorithm could beat JPEG, as typically applied, on size. Of course, a new lossy algorithm that uses the same model for the probability distribution of images, and simply discards the random noise rather than storing it as a lossless algorithm must do, would do better still.
We are seeing this in practice for audio with LPCNet, where lossy compression is able to produce speech that is more easily comprehensible than the original recording: https://people.xiph.org/~jm/demo/lpcnet_codec/
It seems unlikely to me that the ideal predictor residual, or random noise, in an average photo (among the photos we're interested in, not all possible Library-of-Borges photos) is less than the size of a JPEG of that photo. But we don't have a good estimate of the Kolmogorov complexity of typical photos, so I'm not comfortable making a definite statement that this is impossible.
For example, one unknown component is how random camera sensor noise is. Some of it is pure quantum randomness, but other parts are a question of different gains and offsets (quantum efficiencies × active sizes and dark currents) in different pixels. It might turn out that those gains and offsets are somewhat compressible because semiconductor fabrication errors are somewhat systematic. And if you have many images from the same camera, you have the same gains and offsets in all of the images. If your camera is a pushbroom-sensor type, it has the same gains and offsets in corresponding pixels of each row, and whether it does or not, there's probably a common gain factor per pixel line, related to using the same ADCs or being digitized at the same time. So if your lossless algorithm can model this "random noise" it may be able to cut it in half or more.
> It's true that no lossless algorithm can beat a lossy algorithm if averaged across all possible images
I concede the point. I thought that this argument applied to the average of any input set, not just the set of all possible images, but then I realized that it's actually pretty easy to come up with counterexamples, e.g. N input images that can be losslessly compressed to log2(N) bits, which will easily outperform JPEG for small N. That's not a very realistic situation, but it does torpedo my "proof".
So you were right, and I was wrong.
If I understand your example correctly, it only has the lossless algorithm beating JPEG by cheating: the lossless algorithm contains a dictionary of the possible highly compressible images in it, so the algorithm plus the compressed images still weighs more than libjpeg plus the JPEGs. But there are other cases where the lossless algorithm doesn't have to cheat in this way; for example, if you have a bunch of images generated from 5×5-pixel blocks of solid colors whose colors are generated by, say, a known 68-bit LFSR, you can code one of those images losslessly in about 69 bits, but JPEG as typically used will probably not compress them by an atypical amount. Similarly for images generated from the history of an arbitrary 1-D CA, or—to use a somewhat more realistic example—bilevel images of pixel-aligned monospaced text in a largish font neither of whose dimensions is a multiple of 8, say 11×17. All of these represent images with structure that can be straightforwardly extracted to find a lossless representation that is more than an order of magnitude smaller than the uncompressed representation, but where JPEG is not good at exploiting that structure.
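A toy version of that LFSR case (taps and sizes are made up for illustration; the point is only that the whole image reconstructs bit-exactly from a ~68-bit description):

```python
# Images built from 5x5 solid-colour blocks driven by a known LFSR: the seed
# alone is a complete lossless description, while a generic codec only sees
# "lots of coloured squares".
def lfsr_stream(seed, nbits=68, taps=(68, 66, 61, 13)):
    state = seed
    while True:
        bit = 0
        for t in taps:
            bit ^= (state >> (t - 1)) & 1
        state = ((state << 1) | bit) & ((1 << nbits) - 1)
        yield state

def make_block_colours(seed, blocks_w=64, blocks_h=48):
    """Return rows of 24-bit block colours derived from the LFSR."""
    stream = lfsr_stream(seed)
    return [[next(stream) & 0xFFFFFF for _ in range(blocks_w)]
            for _ in range(blocks_h)]

seed = (1 << 67) | 0xCAFE1234                # the entire "compressed file"
# Decoding the seed twice yields the identical image: exact reconstruction.
assert make_block_colours(seed) == make_block_colours(seed)
```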
What the first graph shows me is that this format is slightly better than WebP in one aspect, ignoring half of its other features. Considering WebP is already far ahead in terms of adoption, I'm happy sticking with that instead. It's natively supported in most browsers and many graphics programs, which is the real uphill battle for a new file format.
FLIF relies on progressive loading instead of lossy compression, and still beats BPG and WebP except on the very low end.
And importantly, that 32-bit checksum is a pretty good checksum; a truncated XXH64(): https://tools.ietf.org/html/rfc8478#section-3 That's about as high-quality as you can expect from 32 bits. It's not, say, Fletcher-32.
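For reference, the whole scheme is small enough to sketch (this uses the third-party `xxhash` Python package as a stand-in, not zstd's own implementation):

```python
# zstd-style content checksum per RFC 8478: XXH64 over the original payload,
# truncated to its low 32 bits.
import xxhash

def content_checksum(data: bytes) -> int:
    return xxhash.xxh64(data).intdigest() & 0xFFFFFFFF

print(hex(content_checksum(b"hello world")))
```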
It weighs 77 kB gzipped, which is a no-no in my book. Jesus, my current Mithril SPA weighs 37 kB gzipped. Not the JS bundle, the complete application.
77 kB gzipped is massive considering it’s only doing one thing. If I want my website to load in ~100 milliseconds or less (or even 1 second or less!), I absolutely do need to pay attention to all the libraries I add.
I can and do criticize software developers for making hideously bloated websites because they don’t pay attention to what they add. Not only are a lot of modern websites wasteful, they’re painful or outright useless on slow mobile connections—not a problem for software developers on fibre networks, beefy dev machines, etc.
In other news, I also chose to forgo adding a 15 lb. fuel pump to my go-kart, despite the fact that every NASCAR team in the world uses one, and my go-kart drives just fine. I should go tell the NASCAR teams that fuel pumps are a terrible idea. I clearly know something they don't.
I sometimes render 50 MB PNGs. The execution of the decoder is insignificant compared to everything else.
My go-kart also gets more miles to the gallon than NASCAR team cars.
Fair point but you have to admit that is a very niche use case.
Have you compared the execution to native performance? Even decoding a 50MB JPEG in Chrome with turbojpeg is going to be a hard pill to swallow.
I haven't benchmarked this specific thing, because I've never used it. But I might, because it sounds like a fun thing to do.
I wonder if tANS (tabled asymmetric numeral systems) could be used instead of arithmetic coding for (presumably) higher performance. Although I guess the authors must be aware of ANS. Would be interesting to hear why it wasn't used instead.
Basically, if you want to be able to easily view the image on a mobile device or on the web, the format needs their blessing.
Also, a GPL-licensed encoder will in no way stop incompatible extensions.
Having it as a dynamic library isn't that problematic a hoop, is it?
> That is, if your lawyers will even let you try.
This may be a genuine issue. Many have a strict "nothing related to GPL" policy driven at least partly by misunderstanding or paranoia.
But any changes to the encoder itself remain under LGPL and get published. This is the point.
My intuition was informed by choosing FLAC for my music collection ~15 years ago, and that working out fantastically. If a better format does come along, or if I change my mind, I can always transcode.
Also, why not PNG?
I don't think PNG provided meaningful compression, due to the greyscale. If FLIF didn't exist, I certainly could have used PNG, for being nicer than PGM. But using FLIF seemed like a small price to pay for going lossless.
JPEG would have sufficed, but JPEG artifacts have always bugged me. I also considered JPEG2000 for a bit, which left me with a concern about how stable/future-proof the actual implementations are. Lossless is bit-perfect, so that concern is alleviated.
The article claims 43% improvement over typical PNGs, if you have a lot of images that's pretty significant.
EDIT: Their reference JS polyfill implementation is LGPLv3 as well, which may further harm adoption.
I was unable to find an APL2 JS polyfill decoder. Does one exist?
If I study the source and then make a single change to my own algorithm to incorporate the secret sauce in order to test its efficacy using my existing test suites, have I infected my own codebase with LGPLv3?
How can I test this code in the real world with real users if I’m not allowed to redistribute it? Would I be required to pay users as contractors to neutralize the redistribution objection?
EDIT: Neutralizing LGPLv3 would be necessary to combine this code with GPLv2 code and many other OSI-approved open source licenses, which is why that particular line of reasoning is interesting to me.
If you’re not distributing the result to other people, literally nothing you described matters at all.
As for integrating ideas, as long as you don’t copy actual lines of code, simply learning ideas and techniques from any OSS doesn’t cause license infection.
That’s not evaluating a codec though, is it? You’ve gone far beyond the scope of this thread.
I'm gonna make enemies....
That's FUD and BSD bullshit.
LGPL is not GPL. You can link whatever the fuck you want to an LGPL library, including your proprietary code, as long as you link dynamically.
Gtk+ uses LGPL, GLib uses LGPL, GNOME uses LGPL, Qt uses LGPL, Cairo uses LGPL, and millions of applications link against these without problems.
I'm not aware of any case in history where a company got attacked, or was frightened away from using an LGPL library in their codebase.
It would be good to stop the FUD, except if you need to feed your lawyer.
Do you have a source for that?
Could you please provide supporting evidence for your view that this is a consensus? Where is the documentation explaining your “static link” proposal and why it’s been endorsed by lawyers as a permitted approach? What was the response from the GPL team when they were asked to comment on this circumvention of their license? Will they be patching GPLv3 to correct it?
Consensus is what is accepted by most actors, especially including the main interested actors: the FSF and the GNU project. And I do not remember reading any communication from the FSF or the GNU project saying it is forbidden.
Currently the license text is very explicit about what you can do and not do if you link statically.
> 0) Convey the Minimal Corresponding Source under the terms of this License, and the Corresponding Application Code in a form suitable for, and under terms that permit, the user to recombine or relink the Application with a modified version of the Linked Version to produce a modified Combined Work, in the manner specified by section 6 of the GNU GPL for conveying Corresponding Source.
Which means: you have to provide a way for your users to statically recompile their application with a new version of the library. In other words, you have to provide a tarball with your object (.o) files if someone asks for it.
It is even documented explicitly by the FSF:
You could ship those elements in a signed DMG, but it would be up to the user to assemble them, and so the user would not be able to codesign them under your developer ID.
Since they would not be able to compile and run the app under your developer ID (it would have to be theirs, since they're the ones linking it), the app won't have access to any ID-linked features such as iCloud syncing, Apple push notification services, and so forth.
So that would imply that it's outright impossible to use this "static linking" bypass on MacOS when code signing is enabled, since you can't falsify the original signature's private key in order to sign the executable and gain access to signature-enforced platform capabilities.
I guess it would probably work fine for Linux folks, and you could always sign it with your own ID, but this certainly is not "compatible with signed executables" — to answer the original question asked upthread.
The LGPL is hardly "restrictive". If you aren't modifying the code, and for a codec if you are I have questions, then you've nothing to worry about.
Regarding license compatibility, which actually existing browser are you worried about the licensing terms of?
Firefox is by Mozilla, and the browser team ships code under MPL2 (afaik) which permits dual-licensing with GPL2 - but not, as far as I can work out personally, either variant of GPLv3; is MPL2 permissible to dual-license with LGPLv3, or would they be required not to implement an encoder due to incompatibility? (Mozilla seems invested in the AV1 codec, so it’s safe to assume they would be interested in a lossless frame encoder with higher efficiency than lossy options.)
Chromium seems to accept any mix of BSD, (L)GPLv2, and (L)GPLv3 at a brief glance at their codebase, which is quite surprising. (I wonder if shipping Chrome Browser knowingly includes libxz under the GPLv3 terms. If so, that ought to have certain useful outcomes for forcing their hand on Widevine.)
None of these questions would be of any relevance with a less restrictive license, whether BSD or CC-BY or even GPLv2.
I could totally see a universe where something like FLIF becomes a de-facto standard in image archiving, particularly for extremely large images. 33% smaller than PNG is a measurable savings in image archiving.
I forgot to mention, there are plenty of non-archival formats that get traction even with incomplete browser support. The MKV container format is only partially supported in browsers (with WebM), but it's still a popular format for any kind of home video.
Along the same lines, I suspect BMP+LZMA would likely be beaten by BMP+PPM or BMP+PAQ, the current extreme in general-purpose compression.
Not very surprising imho. PNG was never a strong compression format to begin with.
Secondly, this is the Lesser GPL, which means that only modifications to the FLIF implementation itself have to be free. It can still be linked in proprietary programs as long as they don't make any modifications.
Ah, thanks for the correction.
I went through a legal headache years ago releasing some software, and our lawyers strongly urged against *GPL and pushed us to Apache because we would have severely limited our opportunity with Fortune 500 companies. Apparently many prohibit anything with the letters GPL in the license, despite the fact that the software was free and we were charging for services. It was a long, head-spinning debate, and due to mounting legal fees we went with Apache.
Any changes you make to the encoder library itself would be LGPL, but if all you are doing is calling the library from other code then that is not an issue.
If you make a change to the library, even as part of a larger project, nothing but that change is forced to be LGPL-licensed. If your update is a bit of code you use elsewhere, then as long as you own that code it does not force that other code to be LGPL: while you are forced to LGPL the change to the library, there is no stipulation that you can't dual-license and use the same code under completely different terms outside the library.
As an unencumbered open source technology, it should breeze through legal's OK pretty quickly, and getting it integrated could certainly take a bit of time, but should just be part of the FLIF roadmap itself, if the idea is to actually get this adopted.
You don't set out to come up with "one format to rule them all" and then not also go "by implementing libflif and sending patches upstream to all the open source browsers that we want to see it used in" =)
- multi-layer (overlays), with named layers
- high bit depth
- metadata (the JXL file format will support not just Exif/XMP but also JUMBF, which gives you all kinds of already-standardized metadata, e.g. for 360)
- arbitrary extra channels
All that with state-of-the-art compression, both lossy and lossless.
TIFF acknowledges EXIF and IPTC as first class data. PNG added EXIF data as an official extension a couple of years ago and I know ExifTool does support it but I'd want to check all applications in a workflow for import/edit/export support before trusting it.
TIFF supports multiple pages (images) per file and also multilayer images (ala photo-editing).
TIFF supports various colour spaces like CMYK and LAB. AFAIK, PNG only supports RGB/RGBA so for image or print professionals, that could be a non-starter.
So I get why PNG can't warm photographers' hearts yet. Witness still the most common workflows: RAW->TIFF & RAW->JPEG.
If you compare with "same size JPG" and set the truncation to 80%, JPG appears to win out in terms of clarity of the image.
See the responsive part in https://flif.info/example.html
We can imagine decoding taking into account battery save mode, bandwidth save mode...
> A FLIF image can be loaded in different ‘variations’ from the same source file, by loading the file only partially. This makes it a very appropriate file format for responsive web design. Since there is only one file, the browser can start downloading the beginning of that file immediately, even before it knows exactly how much detail will be needed. The download or file read operations can be stopped as soon as sufficient detail is available, and if needed, it can be resumed when for whatever reason more detail is needed — e.g. the user zooms in or decides to print the page.
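On the delivery side, the "stop early, resume later" part is just HTTP range requests; a rough sketch (placeholder URL, nothing FLIF-specific):

```python
# Fetch only the first `nbytes` of an image, and come back for more later if
# the user zooms or prints. Works with any server that honours Range headers.
import urllib.request

def fetch_prefix(url, nbytes):
    req = urllib.request.Request(url, headers={"Range": f"bytes=0-{nbytes - 1}"})
    with urllib.request.urlopen(req) as resp:
        return resp.read()

# first_pass = fetch_prefix("https://example.com/photo.flif", 16 * 1024)
```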
Simply search "webp" in the codebase, and you can add another format like that.
I wonder if the Chromium developers would accept a patch for it?
I would appreciate a reference implementation in Rust. Or, if not intended for immediate linking, in something like OCaml or ATS. Clarity and correctness are important in a reference implementation, and they are harder to achieve using C.
Yeah "daily", C++ standard evolves every 3 years minimum and most API are still C++11 meaning 9 years old. "Daily" right ?
This is FUD. Without even mentioning that any C++ lib can expose a C API.