The FLIF decoder adds interpolation to make incomplete scans look nicer than in PNG, but that's a feature of the decoder, not the file format, so there's nothing stopping existing PNG decoders from copying that feature.
Note that it's generally not desirable to have FLIF used on the Web. A decent-quality JPEG will load at full resolution quicker than it takes FLIF to show a half-resolution preview.
FLIF is a lossless format, and lossless is a very hard and costly constraint. Images that aren't technically lossless, but look lossless to the naked eye can be half the size.
e.g. Monkey image from https://uprootlabs.github.io/poly-flif/ is 700KB in FLIF, but 300KB in high-quality JPEG at q=90 without chroma subsampling (i.e. settings good even for text/line-art), and this photo looks fine even at 140KB JPEG (80% smaller than FLIF).
So you want FLIF for archival, editing and interchange of image originals, but even the best lossless format is a waste of bytes when used for distribution to end users.
- interpolation is not the only difference between PNG and FLIF in terms of progressive decoding. Another difference is that instead of doing the interlacing on RGB pixels, it does it on YCoCg pixels with priority given to Y, so intermediate steps are effectively chroma subsampled (or in other words you get luma faster at higher resolutions); see the color-transform sketch after this list.
- lossless is indeed (currently?) too costly for photographic material on the web, because we can indeed afford some loss and still look good. However, you can still have a lossy encoder that uses a lossless format as a target: e.g. instead of encoding the deltas at full precision, you could throw away some least significant mantissa bits (those behave the most like incompressible noise) and still get visually good results.
- In future work: Progressive JPEG with the Huffman encoding replaced by FLIF's MANIAC entropy coding should be an interesting direction...
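To make the first bullet concrete, here is the standard reversible YCoCg-R lifting transform in Python. FLIF's actual color transform may differ in its details, so treat this as a sketch of the general idea (luma/chroma separation with exact integer reversibility), not the format's definition:

    def rgb_to_ycocg(r, g, b):
        # YCoCg-R lifting transform: integer-only and exactly reversible
        co = r - b
        tmp = b + (co >> 1)
        cg = g - tmp
        y = tmp + (cg >> 1)
        return y, co, cg

    def ycocg_to_rgb(y, co, cg):
        tmp = y - (cg >> 1)
        g = cg + tmp
        b = tmp - (co >> 1)
        r = b + co
        return r, g, b

    # round-trip check on a sample pixel
    assert ycocg_to_rgb(*rgb_to_ycocg(200, 30, 90)) == (200, 30, 90)

Because the transform is lossless, an encoder can send all the Y planes first and still reconstruct exact RGB later, which is what makes the "luma first" progressive ordering possible.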
I wish "lossy-ness" was always an encoder parameter and never an intrinsic property of the container. The world would be a better place if we had made this distinction early on. Instead, every time a JPEG is edited and resaved the world gets a little more potatoey.
However, as silly as it might seem, I find having a clear distinction between media formats and a unique file extension for each a huge usability improvement. A must, basically. I absolutely hate the current convention of distributing animated pictures as a webm or mp4 video container without an audio track. It might seem insignificant: after all, that's what they are — audio-less video files. But because of that I can no longer mv *.gif pictures/animated/. What's even worse, I cannot make my file manager open these "moving pictures" with some other program, because there's no way it could know that it's just a looped animation, and not a "real movie".
For the same reason I dislike (even though not that much) formats that can be (and are widely used as) both lossy and lossless while having the same container and file extension.
> In future work: Progressive JPEG with the Huffman encoding replaced by FLIF's MANIAC entropy coding should be an interesting direction...
That would be super cool.
On the other hand, this allows browsers on metered-bandwidth connections to control bandwidth more effectively. Rather than disabling images entirely, this would allow loading a low-resolution version and stopping, and letting the user control whether to load the rest of the image.
Look at https://uprootlabs.github.io/poly-flif/ - set truncation to 80-90% and compare to Same Size JPEG.
Truncated FLIF looks like a pixelated mess, whereas JPEG at the same byte size is almost like the original (note that the site has encoded the JPEG with few progressive scans, so sometimes you get half the image perfect and half blocky. This is configurable in JPEG and could be equalized so the entire image is ok-ish).
Unfortunately browsers don't do anything smart in terms of requesting progressive images, as there isn't an easy way to figure out how much of an image should be downloaded. So even on a 320x400px low-end mobile device, you'll still end up downloading the whole 2000x2000 header image - the only difference is it'll appear quicker as it is progressively rendered.
Also true for static GIFs, but presumably anyone still using them is not bothered about converting images to better formats.
I agree that video formats are more suitable for many animations. Unless you have an animation where lossy is not desirable (e.g. cartoons or technical animations), and you only have pixels, no vectors. In that case FLIF (or APNG/MNG for that matter) can be a good choice.
> Everyone loves the "responsive loading" feature, but that's not even the novel thing about the format (JPEG 2000 did it even better — 16 years ago)!
JPEG 2000's approach provides successively finer detail so the entire image can be rendered after only a small percentage of the total data has been received and then it just becomes sharper as data continues to stream in. If the rollout hadn't been so unsuccessful, this would have been a great answer for responsive images since you could have a single image served (and cached on your CDN) and clients could make ranged requests for the first n bytes based on the displayed size.
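As a rough sketch of what that client side could look like (assuming a hypothetical image URL and a server that honors Range requests, which most static file servers do):

    import urllib.request

    # Hypothetical progressive image; fetch only the first 32 KB of the
    # bitstream, enough for a low-resolution render if the format is
    # truly progressive.
    req = urllib.request.Request(
        "https://example.com/photo.jp2",
        headers={"Range": "bytes=0-32767"},
    )
    with urllib.request.urlopen(req) as resp:
        partial = resp.read()
        # expect "206 Partial Content" and 32768 bytes if Range is honored;
        # a server that ignores Range would return 200 with the full body
        print(resp.status, len(partial))

The nice part is that the CDN only ever has to cache one file; the client decides how many bytes are worth fetching for the size it will actually display.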
If you'd ever watched an interlaced GIF arrive over a slow connection, that's exactly how it looks.
Here's what interlacing looks like, with whole areas being empty until the data is received:
With something like JPEG 2000, you have a completely different experience because the image renders immediately with less detail and becomes sharper. Using this example from Wikipedia, it'd basically start with the 1:100 frame on the bottom and rerender as data streams in until hitting the final 1:1 quality at the top:
I guess what I'm asking is, if I hit a web page with 20 images @ 100k per image, is it going to nail one or more cores at 100% and drain the battery on my portable device? Fantastic compression is great, but what are the trade-offs?
This is most noticeable in video formats; older devices only have MPEG-1/MPEG-2/MJPEG encoders/decoders (imagine a $20 DVD player or an old digital camera), whereas newer devices can do H.264 and/or VP9 encoding/decoding (new iPhone, new Smart HDTV).
> Encoding and decoding speeds are acceptable, but should be improved
though the above is quite old.
That was the point of my post.
- I have a palette of bytes; this causes colors to be stored as an 8-bit integer instead of a 32-bit one (8 bits each for r, g, b, a). Every color now adds a lookup to that memory address.
- I turn every color sequence I can into a numerator + denominator pair < 256, plus a length and offset that define how to reconstruct it. When you reach such a sequence, you must calculate the number: find the offset (up to 256) and, until the length (ideally more than 4 bytes) is reached, get the value of each digit.
These kinds of calculations seem small, and more often than not they are. But add enough of them up and all of a sudden the CPU must hit 100% for the duration of decoding 30+ images.
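To make the kind of per-pixel indirection described above concrete, here is a toy sketch (hypothetical, not any real format) of decoding palette-indexed pixels back to RGBA; every pixel costs an extra memory lookup compared to storing raw color values:

    # Hypothetical palette of up to 256 RGBA colors, so each pixel is
    # stored as a single byte index instead of 4 bytes of color.
    palette = [(255, 0, 0, 255), (0, 0, 255, 255), (0, 255, 0, 255)]
    indices = bytes([0, 1, 1, 2, 0, 1])  # the encoded pixel stream

    # Decoding: one extra lookup per pixel.
    pixels = [palette[i] for i in indices]
    print(pixels[:3])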
I did read the article twice and thoroughly; all I saw mentioned was "Encoding and decoding speeds are acceptable, but should be improved". That doesn't address the points I raised in my original post if you were to read it properly.
"Encoding and decoding speeds are acceptable, but should be improved"
But that doesn't address resource usage such as CPU, battery.
It could be that these extensions make effective use of CPU-specific instructions, or that the underlying hardware contains special silicon to decode these images, similar to how there are H.264 decoder chips.
It's therefore possible to do the power consumption analysis, but the results would not really indicate something fundamental about the algorithm but only its current implementation. People are willing to work on improved implementations if there are other factors, like smaller size, which suggest it's worthwhile to investigate.
If PNG had a lossy mode that was even slightly better than JPEG (or exactly as good but with full alpha channel support) it would have eventually supplanted JPEG just as it has now supplanted GIF.
"[...] any prefix (e.g. partial download) of a compressed file can be used as a reasonable lossy encoding of the entire image."
though they also have it listed explicitly in the TODO section:
"- Lossy compression"
However, renaming the files from flif to flyf or vice versa would have no effect; they would still be opened and decoded the same. It's merely meant to convey to humans the intention of the person who created the image.
From my understanding, not much focus has been given to lossy encoding as of yet.
Because it answers most of the common questions about a resubmission: Did it get a lot of exposure? Was it a long time ago? Did it have an interesting discussion that I should read?
The number of comments doesn't actually show whether the discussion was interesting or a flamewar. So I sometimes cherry-pick some of the most interesting comments and repost a snippet.
(In this case, I think the most important comments are:
- The relation between progressive decoding and an extension for lossy mode
- The advantages and problems of the (L)GPLv3 licence
But each of them deserves its own thread here.)
Focusing solely on compression stats can be misleading; it's a balancing act. For example, I'm not sure being 0.7x the size of PNG is much of a win if FLIF ends up being an order of magnitude slower. Perhaps it is still a win, but it needs a bit more nuance in the analysis to reach that conclusion.
Also, it wouldn't really be a fair comparison to show this slowly maturing format's decoding time versus that of formats that have had decades of research into the best ways of decoding (even hardware-tailored). You're comparing the heavyweight champ against the up-and-comer with a lot of promise. FLIF would get pretty beat up in that comparison, but perhaps only because it isn't even finished yet, and thus hasn't had the time to reach that level of efficiency. From having used it and played around with it, it seems to be pretty fast. Fast enough that people wouldn't notice a perceptible difference between it and other common formats. I think the current decode speed is about 20 megabytes per second, which is faster than most people's internet connections, and considerably larger than most people's images (on the web).
Once its bitstream has been finalized and it starts to get some use, we'll see decoders improve their performance. The initial plan was to lock the bitstream for a 2016 release, and then in a year or two release a new bitstream with further improvements, similar to GIF87a (the 1987 version) and GIF89a (the 1989 version with improvements and new features). However this may have changed, as the FLIF 2016 release was set for late December, then February, and has been pushed back again.
In due time we'll see more accurately how the format compares to the existing champs in terms of performance.
I wonder how different the web would be if all data was sent in binary instead of some (text, HTML, JS) being sent in text format. Or do some servers and clients gzip the text automatically before sending it? I've heard of stuff like that (and seen HTTP headers related to it), but haven't looked deeply into it.
I mean in terms of speed. But there could be tradeoffs, because, though sending (smaller) binary data would be faster, the parts of it that represented text would have to be converted to/from text at either end.
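From what I understand, servers and browsers do commonly negotiate this via the Accept-Encoding / Content-Encoding headers, and text compresses very well. A quick illustration in Python with made-up repetitive markup standing in for typical HTML/JS:

    import gzip

    # made-up sample markup, standing in for a typical text response
    html = ("<div class='item'><span>hello world</span></div>\n" * 200).encode()
    compressed = gzip.compress(html)
    print(len(html), "bytes raw ->", len(compressed), "bytes gzipped")

So the wire format is often already binary-ish even when the payload is nominally text; the cost is the compress/decompress step at either end, much like the text/binary conversion mentioned above.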
I assume that is how https://uprootlabs.github.io/poly-flif/ is adding support. I might have to have a browse around that project's code when I have some free time.
I doubt it would be practical for production use, but you never know...
I've seen proposals to add hashes to links. This way, a browser might see a link to some JS on a new URL, but with the hash, it might find it already has that JS file in its cache from when it downloaded it at a different URL.
What the parent spoke about is adding some attribute to an anchor tag which specifies the hash of the resource, so the browser can do safe cross-domain caching without needing to do any request whatsoever (a rough sketch of the idea follows below).
And secondly, am I getting this right that you are in favor of dumbing everything down while also sacrificing performance on the way because someone could break something?
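To illustrate the first point, here's a minimal sketch of a cache keyed by content hash rather than by URL. This is purely hypothetical (not an actual browser API); the names and the download callback are made up:

    import hashlib

    # hypothetical cache: content hash -> resource bytes, shared across origins
    cache = {}

    def fetch_with_hash(url, expected_sha256, download):
        """Return cached bytes if the declared hash is already known,
        otherwise download, verify, and cache."""
        if expected_sha256 in cache:
            return cache[expected_sha256]   # no network request at all
        body = download(url)
        if hashlib.sha256(body).hexdigest() != expected_sha256:
            raise ValueError("hash mismatch, refusing to use resource")
        cache[expected_sha256] = body
        return body

Keying on the hash is what makes it safe to reuse a file first seen on a completely different domain: either the bytes match the hash the page declared, or they're rejected.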
Anecdotally, I used to ship jQuery from a CDN for one project. I had to stop for that project, I forget why... But I do remember that, to my surprise, it turned out that when I shipped my own version of jQuery from my own domain, I got 10% fewer client-side errors reported back to me in Sentry. The world is full of ways to shoot off your own foot, some of them even start out with someone showing you how they don't shoot their foot off.
(+) Does this share etymology with Polyfilla?
To be fair, given the speed at which storage space has grown over the years, it's not really something to worry about in the context of archiving material for future generations (which is very different from being able to quickly download something on the internet now, for example).
Note: I work at a company dedicated to archiving for libraries, museums and archives...
But I guess you handle it similar to how Archive.org does it, with a large TIFF as a poorly compressed but lossless "base case" and other compression formats for the web?
One critical side note: it seems FLIF is still not as good as JPEG when used as lossy compression (this is something the benchmarks do not show well).
For example, go to http://uprootlabs.github.io/poly-flif/, choose the monkey image, choose 'comparing with same size JPG', and set truncation to 60% or more.
Also, I'm not sure how efficient encoding and decoding are for FLIF.
If the image quality of the 60% truncated JPEG is acceptable, then you can get the same quality but half the size using FLIF at 80%.
Any news on what the processing overhead is like for viewing rather than creating the files? Is it less than PNG?
It's a completely different algorithm, surprisingly, based around "predictors" from nearby previously-decoded pixels. Quite like Floyd-Steinberg dithering.
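Roughly, a predictor-based scheme guesses each pixel from already-decoded neighbors and only stores the (usually small) error. A toy sketch with a simple average-of-left-and-top predictor follows; FLIF's real predictors and its MANIAC context model are more sophisticated than this:

    def encode_residuals(rows):
        """rows: 2D list of grayscale values. Returns prediction errors."""
        residuals = []
        for y, row in enumerate(rows):
            for x, v in enumerate(row):
                left = row[x - 1] if x > 0 else 0
                top = rows[y - 1][x] if y > 0 else 0
                pred = (left + top) // 2
                residuals.append(v - pred)  # small values compress well
        return residuals

    print(encode_residuals([[10, 12, 13], [11, 12, 14]]))

The entropy coder then spends very few bits on those near-zero residuals, which is where the compression comes from.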
Likewise you can turn any lossless compressor into a lossy one by modifying the pixels that are the hardest to compress. E.g. if there is a random red pixel in a group of blue pixels, you can make it blue and save up to 3 bytes. Or you can discard color information that humans aren't very sensitive to anyway, like JPEG does. All lossless means is that the loss isn't required by the format itself.
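As a sketch of that trick: nudge isolated outlier pixels toward their neighbors before handing the image to any lossless compressor. This is a pre-filter, not part of any format, and the threshold here is an arbitrary made-up value:

    def smooth_outliers(row, threshold=32):
        """Replace pixels that differ sharply from both neighbors with the
        neighbors' average, so the lossless stage has less noise to code."""
        out = list(row)
        for i in range(1, len(row) - 1):
            left, cur, right = row[i - 1], row[i], row[i + 1]
            if abs(cur - left) > threshold and abs(cur - right) > threshold:
                out[i] = (left + right) // 2
        return out

    print(smooth_outliers([40, 41, 200, 42, 43]))  # -> [40, 41, 41, 42, 43]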
It is smaller than a straight-to-PNG file but nowhere near the size of the original JPG.
The resulting PNGs are much smaller. Though not necessarily as small as JPEG, it's in the same ballpark.
However, I am afraid that without support from the biggest companies the format will never gain popularity. Just think how long it took to make PNG a web standard. And animated PNGs? Died unnoticed. To make things worse, GIF, a stinking leftover from the '90s, is still in use (even on HN!).
FLIF does not have a problem preventing it from pointing to a common Rust image library (which is how I originally read it).
From what I gather it is patent free and the implementation is GPLv3?
Does this mean someone else could make a compatible encoder/decoder with a less restrictive license?
However (link got changed?) I now see that the decoder is also available under the Apache 2.0 license, so that is useful.
If you modify it, you must provide source code for the version that was distributed. Not the source code for anything else.
... unless I totally misunderstand LGPL
The driving force behind the GPL series of licenses is to maintain the GNU/Stallman freedoms, including the "freedom to run the program as you wish, for any purpose" (0) and "to study how the program works, and change it so it does your computing as you wish" (1).
It is widely believed that any software implementing DRM on its runtime code is incompatible with the (L)GPLv3, in particular signed firmware distributions or software distribution systems such as the Mac/iOS App Stores or Steam. The (L)GPLv3 was actually written with this in mind, with some of the authors calling it Tivoization in reference to Tivo's locked-down firmware.
The relevant legal jargon is in section 6 of the GPL, which the LGPL is built on top of; it states:
“Installation Information” for a User Product means any methods, procedures, authorization keys, or other information required to install and execute modified versions of a covered work in that User Product from a modified version of its Corresponding Source.
Again I am not a lawyer, but in other words, you must cough up your signing keys (which for Mac/iOS devs is incidentally a breach of your Apple Developer contract) in order to legally distribute signed software that uses LGPL libraries (-).
[-] It seems it should be OK to distribute non-DRM-protected software, but for general consumer software this kind of distribution seems to be on its way out in a hurry.
LGPL v3 does, so it is likely not safe or legal to use that software in iOS or other drm scenarios.
You can find a lot of debate about this, I think mostly because people assume the two versions to be the same and do not specify which they are wanting to use.
Again I have no legal background and I recommend getting real legal advice as the issue is quite complex.
As you can see we went for Apache2 (decoder) and LGPLv3 (encoder) shortly after FOSDEM.
Why not TIFF? 30 years old, already built into nearly every graphics application, supports everything this proposes and more. Plus it is already supported in Safari.
FLIF has very good compression; it will only download the minimum amount of data required to display the image at its current resolution. It supports 32-bit images/animations/transparency. It does animation playback in realtime while it's downloading the file, so as it plays the resolution just increases: you aren't waiting for the next frame to load, only to improve. FLIF can also handle the incredibly high resolution images that TIFF is usually used for, and can even do tiled rendering, where it only loads the chunks of the image you are zoomed in at into memory at that resolution. Meaning it takes less time to render those chunks on gigantic images and it uses less memory to do so.
FLIF is an incredibly powerful format that offers a lot. It has archival, scientific, and web purposes. It likely won't be useful to those in 3D or gaming, as formats like TGA are better suited for faster reading/loading whereas FLIF is a little slower to decompress than PNG. But then again, game devs have been optimizing resource usage for a long time, giving models that are further away lower poly counts and lower-quality bitmaps/textures. So having one FLIF image that works at any distance may be of use to them, where they truncate the file at different lengths depending on distance from the camera.
There is a lot to be explored with this format, and it isn't even finalized yet.
I see this commonly in different business and medical applications where information may need to be accessible for years or decades after creation. There is a pretty even mix of TIFF and PDF/A in these environments and because of the legal liabilities I don't see them being displaced.
This format isn't complete yet, if layering is important to you, you should join the gitter.im/FLIF-hub/FLIF chat room.
Why do I care about having a "dot flif" file when the new codec is the important improvement?
Is '*.flif' actually a good thing?
Personally I don't think so, mainly because without a mechanism to drive adoption in browsers, the format offers very little value to anyone over existing formats. TIFF would be a decent solution to that problem. ... And the fact that Firefox has marked TIFF support as WONTFIX is actually pretty indicative of how unlikely they are to support a new format such as FLIF. Sad but true. From what I already see of the Firefox support thread for FLIF, it's not going to happen any time soon.
No hope for WebM in Safari. WebM for Edge is in development.
Test drive site here: https://dev.windows.com/en-us/microsoft-edge/testdrive/demos...
They have it marked as low priority, even though it is a standard present in all other browsers.
I think that roadmap is a solid indication that Edge is going to be our next lowest common denominator for web design/development for the next few years... I really don't care about its native ES6 support.
Maybe SVG 2.0...
This is the link of doom for me: https://wpdev.uservoice.com/forums/257854-microsoft-edge-dev...
The Y logo in the top left of this page is a GIF.
All the logos at http://www.ycombinator.com/ plus the six images below More Quotes are PNGs.
And if FLIF achieves such great lossless compression, imagine what amazing compression you can achieve with lossy compression.