FLIF – Free Lossless Image Format (flif.info)
562 points by mattiemass on Mar 7, 2016 | 148 comments

Everyone loves the "responsive loading" feature, but that's not even the novel thing about the format (JPEG 2000 did it even better — 16 years ago)! The novel feature of this format is better entropy coding.

The FLIF decoder adds interpolation to make incomplete scans look nicer than in PNG, but that's a feature of the decoder, not the file format, so there's nothing stopping existing PNG decoders from copying that feature.

Note that it's generally not desirable to have FLIF used on the Web. A decent-quality JPEG will load at full resolution quicker than it takes FLIF to show a half-resolution preview.

FLIF is a lossless format, and lossless is a very hard and costly constraint. Images that aren't technically lossless, but look lossless to the naked eye can be half the size.

e.g. Monkey image from https://uprootlabs.github.io/poly-flif/ is 700KB in FLIF, but 300KB in high-quality JPEG at q=90 without chroma subsampling (i.e. settings good even for text/line-art), and this photo looks fine even at 140KB JPEG (80% smaller than FLIF).

So you want FLIF for archival, editing and interchange of image originals, but even the best lossless format is a waste of bytes when used for distribution to end users.

I agree with all of your comments. Some remarks though: (Disclaimer: I'm the author of FLIF)

- interpolation is not the only difference between PNG and FLIF in terms of progressive decoding. Another difference is that instead of doing the interlacing on RGB pixels, it does it on YCoCg pixels with priority given to Y, so intermediate steps are effectively chroma subsampled (or in other words you get luma faster at higher resolutions).
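For reference, a reversible integer RGB/YCoCg transform (the YCoCg-R lifting variant; a sketch, not necessarily FLIF's exact transform) can be written as:

```python
def rgb_to_ycocg(r, g, b):
    # Reversible integer lifting steps (YCoCg-R style): luma Y carries
    # most of the perceptual information, so it can be transmitted first.
    co = r - b
    tmp = b + (co >> 1)
    cg = g - tmp
    y = tmp + (cg >> 1)
    return y, co, cg

def ycocg_to_rgb(y, co, cg):
    # Exact inverse of the lifting steps above, so the round trip is lossless.
    tmp = y - (cg >> 1)
    g = cg + tmp
    b = tmp - (co >> 1)
    r = b + co
    return r, g, b
```

Because each inverse step undoes the corresponding lifting step exactly, the transform is lossless despite the integer shifts.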

- lossless is indeed (currently?) too costly for photographic material on the web, because we can indeed afford some loss and still look good. However, you can still have a lossy encoder that uses a lossless format as a target: e.g. instead of encoding the deltas at full precision, you could throw away some least significant mantissa bits (those behave the most like incompressible noise) and still get visually good results.
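A toy sketch of that near-lossless idea (hypothetical helper names, not FLIF's actual encoder): quantize a residual by dropping its low bits, then reconstruct at the bucket midpoint so the error stays bounded.

```python
def quantize_residual(delta, drop_bits):
    # Discard the least significant bits of a prediction residual;
    # those bits behave most like incompressible noise.
    return delta >> drop_bits  # floor shift, works for negative residuals too

def dequantize_residual(q, drop_bits):
    # Reconstruct at the middle of the quantization bucket, bounding the
    # absolute error by 2**(drop_bits - 1).
    if drop_bits == 0:
        return q
    return (q << drop_bits) + (1 << (drop_bits - 1))
```

The coarser residuals contain longer runs of identical small values, which any entropy coder handles far better than raw noise.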

- In future work: Progressive JPEG with the Huffman encoding replaced by FLIF's MANIAC entropy coding should be an interesting direction...

> However, you can still have a lossy encoder that uses a lossless format as a target

I wish "lossy-ness" was always an encoder parameter and never an intrinsic property of the container. The world would be a better place if we made this distinction early on. Instead every time a jpeg is edited and resaved the world gets a little more potatoey.

Maybe I am mistaken, but there's no real solution here: either you open a lossy image, edit it, and save it as lossless (which you can easily do), gaining in size but keeping the quality as-is at that point, or (if you want a smaller image even after editing a lossy image) you must re-encode it as lossy and make the image a little more potatoey.

However, as silly as it might seem, I find having a clear distinction between media formats and unique file extension for each a huge usability improvement. A must, basically. I absolutely hate currently existing convention of distributing animated pictures as a webm or mp4 video container w/o audio track. It might seem insignificant: after all, that's what they are — audio-less video files. But because of that I can no longer mv *.gif pictures/animated/. What's even worse, I cannot make my file manager open these "moving pictures" with some other program, because there's no way it could know that it's just looped animation, and not a "real movie".

For the same reason I dislike (even though not as much) formats that can be (and are widely used as) both lossy and lossless while having the same container and file extension.

That's one of the reasons why I like the fact that some websites treat them as "gifv" files, even though they are just mp4.

No, when the format is implicitly lossless then editing and saving is lossless. Discarding more information to make the file compress better is done separately, with a different tool even.

Thanks! Sorry to rain on your parade :)

> In future work: Progressive JPEG with the Huffman encoding replaced by FLIF's MANIAC entropy coding should be an interesting direction...

That would be super cool.

> Note that it's generally not desirable to have FLIF used on the Web. A decent-quality JPEG will load at full resolution quicker than it takes FLIF to show a half-resolution preview.

On the other hand, this allows browsers on metered-bandwidth connections to control bandwidth more effectively. Rather than disabling images entirely, this would allow loading a low-resolution version and stopping, and letting the user control whether to load the rest of the image.

Again, this is not new in FLIF, and it isn't a strength of the format. You're describing exactly what progressive JPEG can already do, and do better.

Look at https://uprootlabs.github.io/poly-flif/ - set truncation to 80-90% and compare to Same Size JPEG.

Truncated FLIF looks like a pixelated mess, whereas JPEG at the same byte size is almost like the original (note that the site has encoded the JPEG to have few progressive scans, so sometimes you get half the image perfect and half blocky. This is configurable in JPEG and could be equalized so the entire image is ok-ish).

I wrote a Chrome plugin that does exactly this. I was living in an apartment with a heavily metered connection, and wanted a way to block large header images and 40mb GIFs. The extension lets you set a maximum image size, and uses the HTTP Range header to request only that much of the image - it's pretty naive but it works. Here is an example from Facebook, with the limit set to 10kB:


Unfortunately browsers don't do anything smart in terms of requesting progressive images, as there isn't an easy way to figure out how much of an image should be downloaded. So even on a 320x400px low-end mobile device, you'll still end up downloading the whole 2000x2000 header image - the only difference is it'll appear quicker as it is progressively rendered.
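The HTTP Range mechanism mentioned above is standard (RFC 7233); a minimal sketch of fetching only the first part of an image (hypothetical function names) might look like:

```python
import urllib.request

def range_header(max_bytes):
    # "bytes=0-N" asks for the first N+1 bytes of the resource.
    return {"Range": "bytes=0-%d" % (max_bytes - 1)}

def fetch_prefix(url, max_bytes):
    # Status 206 (Partial Content) means the server honored the range;
    # a plain 200 means it ignored the header and sent the whole file.
    req = urllib.request.Request(url, headers=range_header(max_bytes))
    with urllib.request.urlopen(req) as resp:
        return resp.status, resp.read()
```

Whether the truncated bytes decode to a useful preview then depends entirely on the image format being progressive.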


I would love to see this. Instead of network operators throttling bandwidth and/or re-compressing images, set a sane default and let the user have ultimate control.

It shouldn't be used to replace JPEGs, but PNG files are widely used and would be good candidates for FLIF files instead.

Also true for static GIFs, but presumably anyone still using them is not bothered about converting images to better formats.

FLIF also supports animation, so it could be used to replace animated GIF too.

Presumably if you're using animated GIF you care about compatibility with exotic/older browsers, so at best you'd consider H.264/MP4 or SVG or Flash animations.

Yes, being old/compatible is about the only advantage GIF has ;)

I agree that video formats are more suitable for many animations. Unless you have an animation where lossy is not desirable (e.g. cartoons or technical animations), and you only have pixels, no vectors. In that case FLIF (or APNG/MNG for that matter) can be a good choice.

> Everyone loves the "responsive loading" feature, but that's not even the novel thing about the format (JPEG 2000 did it even better — 16 years ago)!
Interlaced GIFs did it ~30 years ago.

That's not quite the same: an interlaced GIF loads with gaps and still requires a fair amount of data to be loaded before it can start displaying.

JPEG 2000's approach provides successively finer detail so the entire image can be rendered after only a small percentage of the total data has been received and then it just becomes sharper as data continues to stream in. If the rollout hadn't been so unsuccessful, this would have been a great answer for responsive images since you could have a single image served (and cached on your CDN) and clients could make ranged requests for the first n bytes based on the displayed size.

> JPEG 2000's approach provides successively finer detail so the entire image can be rendered after only a small percentage of the total data has been received and then it just becomes sharper as data continues to stream in.

If you'd ever watched an interlaced GIF arrive over a slow connection, that's exactly how it looks.

I've watched plenty of GIFs trickle in – over a 1200 bps modem back in the day – and it's not the same. An interlaced GIF displays every other line first, leaving a visible comb pattern between the lines until the rest of the data loads, which is visually quite distracting unless the background is transparent or a color compatible with the image content.

Here's what interlacing looks like, with whole areas being empty until the data is received:


With something like JPEG 2000, you have a completely different experience because the image renders immediately with less detail and becomes sharper. Using this example from Wikipedia, it'd basically start with the 1:100 frame on the bottom and rerender as data streams in until hitting the final 1:1 quality at the top:


A lossless format would be great on cameras as the current RAW format has image data which is uncompressed and thus space consuming. Having a really good lossless compression there can save on space and writes.

Most camera raw image files (not RAW, since it's not an extension or a standard) are compressed, and have been for many years now. Some of them (notably the recent Sony standard ARW files, though it's a selectable option in some other brands) even use lossy compression for parts of the data. Yes, the files can take up a lot of space, but not nearly as much as a compressed 16-bit-per-channel RGB TIFF.

Thanks for the note about chroma subsampling. I had sometimes wondered why some line-art JPEGs had artifacts and others (almost) none. Is there an easy command-line argument to, say, cjpeg to do this? (The manual page doesn't seem to be clear on this.)

cjpeg -sample 1x1 does the trick. Also, I recommend using mozjpeg if you aren't already.

Do any of these newer/experimental schemes, such as this one, take into account other factors such as CPU load before declaring themselves "better"? For example, this project seems pretty cool, but there's no data on how CPU-bound, memory-bound, or I/O-bound its decompression algorithm is.

I guess what I'm asking is: if I hit a web page with 20 images @ 100k per image, is it going to nail one or more cores at 100% and drain the battery on my portable device? Fantastic compression is great, but what are the trade-offs?

New codecs almost always use more CPU at first because they have to do pure-software decoding. However, if the bandwidth savings are good enough, and usage is ubiquitous enough, eventually the new format will be implemented in hardware decoding chips, which will bring power usage back down.

This is most noticeable in video formats; older devices only have MPEG-1/MPEG-2/MJPEG encoders/decoders (imagine a $20 DVD player or a old digital camera), whereas newer devices can do H.264 and/or VP9 encoding/decoding (new iPhone, new Smart HDTV).

From the page,

> Encoding and decoding speeds are acceptable, but should be improved

From elsewhere:


though the above is quite old.

Not trying to be obtuse, but that's just a subjective measure of how quick or not their algo is. It doesn't address (nor does the linked HN discussion) how efficient it is in terms of burning up CPU and battery.

The link has more concrete (non-subjective) timings.

If it takes a long time, it probably means that there is a lot to calculate. If there's a lot to calculate, it probably means the CPU is running at full speed to get through it.

Oh sure, I understand that, but many compression articles focus just on compression ratios (as this one seems to do), with no mention of the tradeoffs made to obtain those results, or comparison against existing and well-established algorithms.

That was the point of my post.

Ah, my intention radar was off. I assumed you were making an observation but didn't jump to the probable conclusion.

It says very clearly a number of times that it's better in terms of compression ratio.

The reason you're downvoted is that compression can often add to computation. An example would be:

- I have a palette of bytes: this causes colors to be stored in an 8-bit integer instead of a 32-bit one (8 bits each for r, g, b, a) - every color now adds a lookup to that memory address

- I turn every possible color sequence into a numerator + denominator pair < 256 when possible, adding a length and offset to define how to compute it - when you reach a sequence, you must calculate the number: find the offset (up to 256) and, until the length (ideally > 4 bytes) is reached, get the value of each digit.

These types of calculations seem small, and likely are more often than not. But add enough of them up and all of a sudden the CPU must hit 100% for the duration of 30+ images.

I don't care about my score. I could have guessed why, though: people not reading things properly. I was taking issue with the observation that the site said it was better. I pointed out that it didn't say that, just that the compression ratio was better.

> People not reading things properly

I did read the article twice and thoroughly, and all I saw mentioned was "Encoding and decoding speeds are acceptable, but should be improved". That doesn't address the points I raised in my original post, if you were to read it properly.

It says:

"Encoding and decoding speeds are acceptable, but should be improved"

But that doesn't address resource usage such as CPU, battery.

It's hard for a single author/developer to fully explore solution space.

For example, it could be that Javascript acquires special primitives to decode these images, or that they be a common Javascript engine extension, with a polyfill fallback.

It could be that these extensions make effective use of CPU-specific instructions, or that the underlying hardware contains special silicon to decode these images, similar to how there are H.264 decoder chips.

It's therefore possible to do the power consumption analysis, but the results would not really indicate something fundamental about the algorithm but only its current implementation. People are willing to work on improved implementations if there are other factors, like smaller size, which suggest it's worthwhile to investigate.

Appreciate the time spent answering this.

Incredible work. My only comment is that the progressive loading example reveals that their algorithm seems to have desirable properties for lossy compression as well. Why not make FLIF support lossy and lossless? It's hard enough to get a new image format standardized as it is; offering a lossy mode would effectively give us a two-for-one deal.

If PNG had a lossy mode that was even slightly better than JPEG (or exactly as good but with full alpha channel support) it would have eventually supplanted JPEG just as it has now supplanted GIF.

Quoting a fragment from the final section:

"[...] any prefix (e.g. partial download) of a compressed file can be used as a reasonable lossy encoding of the entire image."

though they also have it listed explicitly in the TODO section:

"- Lossy compression"

Current plan is to keep lossless and lossy versions of the format in the same bitstream for encoders/decoders and to differentiate the output files encoded to be lossy as ".flyf" (Free LossY image Format), and those that are encoded as lossless as ".flif" (Free Lossless Image Format).

However renaming the files from flif to flyf or vice versa would have no effect, they would still be opened and decoded the same. It's merely meant to convey to humans the intention of the person who created the image.

From my understanding, not much focus has been given to lossy encoding as of yet.

There was a previous discussion on HN about this: https://news.ycombinator.com/item?id=10317790

5 months ago. This is my first time seeing this item.

My point is not that it shouldn't be there again; it's that some potentially interesting stuff has already been discussed, so the previous thread is probably worth a look too.

You should put that into your first post :) Most of the time when I see a post like this, it means it was a double post.

My preferred format is: https://news.ycombinator.com/item?id=10317790 (1254 points, 157 days ago, 366 comments)

Because it answers most of the common questions about a resubmission: Did it get a lot of exposure? Was it a long time ago? Did it have an interesting discussion that I should read?

The number of comments doesn't actually show whether the discussion was interesting or a flamewar. So I sometimes cherry-pick some of the most interesting comments and repost a snippet.

(In this case, I think the most important comments are:

- The relation between the progressive and an extension for lossy mode

- The advantages and problems of the (L)GPLv3 licence

But each of them deserves its own thread here.)

This looks promising! They ought to include time-to-decode in the performance numbers, though: a smaller compressed size doesn't matter if the process of loading and displaying the image takes more time overall. A graph like the ones on this page would be awesome: http://cbloomrants.blogspot.com/2015/03/03-02-15-oodle-lz-pa...

This! Especially with the web being increasingly consumed on low-powered mobile devices.

It's also not just about being faster than the network because the browser is probably simultaneously doing page layout, javascript parsing, image decompression and a ton of other things. Also more time spent decoding images is a drain on the battery.

Focusing solely on compression stats can be misleading; it's a balancing act. For example, I'm not sure being 0.7× the size of PNG is much of a win if FLIF ends up being an order of magnitude slower. Perhaps it is still a win, but it needs a bit more nuance in the analysis to reach that conclusion.

The FLIF format isn't finished yet. There are optimizations still being made both in compression ratio and decoding performance. Encoding performance is a lower priority.

Also it wouldn't really be a fair comparison to show this slowly maturing format's decoding time versus that of formats that have had decades for people to research and find the best ways of decoding (even hardware-tailored). You're comparing the heavyweight champ against the up-and-comer with a lot of promise. FLIF would get pretty beat up in that comparison, but perhaps only because it isn't even finished yet, and thus hasn't had the time to reach that level of efficiency. From having used it and played around with it, it seems to be pretty fast. Fast enough that people wouldn't notice a perceptible difference between it and other common formats. I think the current decode speed is about 20 megabytes per second, which is faster than most people's internet, and considerably larger than most people's images (on the web).

Once its bitstream has been finalized and it starts to get some use, we'll see decoders improve their performance. The initial plan was to lock the bitstream for a 2016 release, and then in a year or two release a new bitstream with further improvements, similar to GIF87a (the 1987 version) and GIF89a (the 1989 version with improvements and new features). However this may have changed, as the FLIF 2016 release was set for late December, then February, and has been pushed back again.

In due time we'll see more accurately how the format compares to the existing champs in terms of performance.

If browsers supported APIs to allow "native" image/video/audio codecs to be written in JS, we could support new formats like this without needing any co-operation from the (very conservative) browser vendors. I wrote a proposal for this here: https://discourse.wicg.io/t/custom-image-audio-video-codec-a...

Please, no! The web already makes my phone heat up enough as it is.

The proposal I linked to suggests writing decoders using WebAssembly, so they should heat up your phone no more than the native image decoders in the browser itself.

Just had a wild thought:

I wonder how different the web would be [1] if all data was sent in binary instead of some (text, HTML, JS) being sent in text format. Or do some servers and clients gzip the text automatically before sending it? I've heard of stuff like that (and seen HTTP headers related to it), but haven't looked deeply into it.

[1] I mean in terms of speed. But there could be tradeoffs, because, though sending (smaller) binary data would be faster, the parts of it that represented text would have to be converted to/from text at either end.

almost all servers support compression nowadays, and most clients as well. The HTTP headers and such won't get compressed, but there's a very good chance that every page and text asset you get has been compressed in transit. Some servers even have the option of precompressing assets and delivering them without having to stream through a compression algorithm, but that only works with static assets.

Interesting, that second point. Wonder how it is done: via some background process that keeps monitoring asset dirs for new files, or does the user have to run a tool whenever they add new assets?

However you choose to do it; usually via your deployment tool or something similar. One reference point: http://nginx.org/en/docs/http/ngx_http_gzip_static_module.ht...

Thanks. Will check it.

Seems like a good way to get a bazillion new formats that we won't be able to read anymore in 5 years.

But that's another advantage. Each image would have to be distributed alongside its JavaScript decoder. Even if the format is abandoned, the decoder will still be there, and it will still work on that webpage.

Which means you need JavaScript just to look at the file. Does web archive store js along with the content?

Yes – modern web archiving tools capture JavaScript, CSS, etc. The basic support where they just download URLs found in HTML has been around for years but the current best-practice is to use a real browser engine (e.g. PhantomJS, Firefox) which has either been integrated to store directly or simply is using a proxy like https://github.com/internetarchive/warcprox which adds every requested resource to a WARC file (https://en.wikipedia.org/wiki/Web_ARChive).

If they don't, they should.

As a hacky workaround you can have the JS decompressor replace img tags with a canvas and draw on that, or create a data or blob URI and apply that to the img tag (I'm not sure which would be least efficient).

I assume that is how https://uprootlabs.github.io/poly-flif/ is adding support. I might have to have a browse around that project's code when I have some free time.

I doubt it would be practical for production use, but you never know...

That's how the BPG demo worked.

So you would need to load up the JS decompressor every single time you load a webpage? Or is there another way to do it efficiently ?

The browser would cache a copy.

I've seen proposals to add hashes to links. This way, a browser might see a link to some JS on a new URL, but with the hash, it might find it already has that JS file in its cache from when it downloaded it at a different URL.

That's what Etag is for.

No it's not. Etag is completely different. It's a tag that the webserver sends together with the response and which the browser sends to the server which can then return a 304 status code to indicate that the resource has not changed and the browser can use the cached copy.

What the parent spoke about is adding some attribute to an anchor tag which specifies the hash of the resource so the browser can do safe cross domain caching without needing to do any request whatsoever.
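A toy sketch of that hash-keyed caching idea (the hash-in-link scheme is only a proposal; the class and method names here are made up):

```python
import hashlib

class ContentCache:
    # Resources are keyed by the hash of their bytes: the same decoder
    # script fetched from two different URLs is stored only once, and a
    # link carrying the hash can be satisfied with no request at all.
    def __init__(self):
        self._store = {}

    def put(self, body):
        key = hashlib.sha256(body).hexdigest()
        self._store[key] = body
        return key

    def get(self, key):
        # Returns None on a cache miss, mirroring an uncached link.
        return self._store.get(key)
```

By contrast, an ETag always costs one revalidation round trip per resource, even when the bytes haven't changed.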

Just what we need... More ways to shoot our own feet off while trying to improve performance.

First I don't immediately see how to shoot your feet off with this but maybe you can elaborate a bit on why you think so.

And secondly, am I getting this right that you are in favor of dumbing everything down while also sacrificing performance on the way because someone could break something?

I'm hardly in favour of dumbing things down, or sacrificing performance. I'm just frequently exposed to issues that are 'magically fixed' by purging the entire browser cache. And I'm just unnerved by the idea of this hash link scheme, we have enough bugs and gaps in the existing system without additional complexity, and that system is pretty damn simple conceptually, yet the bugs remain.

Anecdotally, I used to ship jQuery from a CDN for one project; I had to stop for that project, I forget why... But I do remember that, to my surprise, when I shipped my own version of jQuery from my own domain, I got 10% fewer client-side errors reported back to me in Sentry. The world is full of ways to shoot off your own foot; some of them even start out with someone showing you how they don't shoot their foot off.

That's the downside - but if a JS decompressor library is served from a CDN, it only has to load it on the first site that uses it, like using jQuery from a CDN. Service Workers can also serve content like this from an offline cache, so it doesn't have to hit the server again on later visits.

May be adding support using an extension/addon would work.

Probably the best way to get this implemented is to encourage people to do the "polyfill"(+) solution of rendering to canvases. Browser implementers tend to follow actual usage, so the more widespread it is the more you can argue fait accompli.

(+) Does this share etymology with Polyfilla?

re: Polyfilla - Yes, I believe that's the idea.

16 bit per channel and future support for CMYK. Looks like an interesting alternative to TIFF for digital preservation. Sadly, the currently recommended format is TIFF (so, a waste of storage space) -> http://www.loc.gov/preservation/resources/rfs/stillimg.html

> (so, waste of storage space)

To be fair, given the speed at which storage space has grown over the years, it's not really something to worry about in the context of archiving material for future generations (which is very different than being able to quickly download something on the internet now, for example)

Well, when you serve archived material on the web, the storage cost isn't irrelevant. Also, big files mean more network traffic between our webservers and the storage servers. Plus, this format has other interesting features, like tiled rendering or progressive download. Sadly, we must handle huge TIFFs and generate JPEG miniatures and tilesets from them; otherwise serving them online would be very painful.

Note: I work at a company dedicated to archiving for libraries, museums and archives...

Ah ok, cool! But then you're talking about trying to meet both goals I mentioned at the same time ;)

But I guess you handle it similar to how Archive.org does it, with a large TIFF as a poorly compressed but lossless "base case" and other compression formats for the web?

Pretty amazing! Particularly nice is that an alpha channel and animation are also possible.

One critical side note: it seems FLIF is still not as good as JPEG when used as lossy compression (this is something the benchmarks do not show well).

For example, go to http://uprootlabs.github.io/poly-flif/, choose the monkey image, choose 'comparing with same size JPG', and set truncation to 60% or more.

Also, I'm not sure how efficient en- and decoding is for FLIF.

In the example you suggest, the hairs on the left side of the monkey's face have some significant artifacts. The quality of the JPEG is about the same as the FLIF file with a truncation of 80%, at which point the file size is less than half the size of the JPEG at 60%.

If the image quality of the 60% truncated jpeg is acceptable then you can get the same quality but half the size using FLIF at 80%.

I don't understand your argument. The images are the same size. A 235KB jpeg of the monkey is almost indistinguishable from lossless, while in a 470KB flif, there are already unacceptable artifacts.

Not sure what you're trying to say exactly, but at the same file size ('Compare with same size JPEG' + 60% truncation), FLIF is obviously far worse than JPEG.

FLIF really is awesome :) Here's an analysis that compares FLIF to other common lossless image formats such as: PNG, WebP and BPG. http://cloudinary.com/blog/flif_the_new_lossless_image_forma...

Hmm, this has real potential as an archiving format for video, too.

Any news on what the processing overhead is like for viewing rather than creating the files? Is it less than PNG?

Does anybody understand how lossless JPEG works? To my mind, the whole point of JPEG is to get rid of high-frequency components.


It's a completely different algorithm, surprisingly, based around "predictors" from nearby previously-decoded pixels. Quite like Floyd-Steinberg dithering.

This isn't how it actually works, but one way to turn a lossy codec into a lossless one is to send a diff of the actual pixels vs the encoded ones. Since the lossy compression will be close to correct, the diff will mostly be small values or 0's, which is much easier to compress.

Likewise you can turn any lossless compressor into a lossy one, by modifying the pixels that are the hardest to compress. E.g. if there is a random red pixel in a group of blue pixels, you can make it blue, and save up to 3 bytes. Or you can discard color information that humans aren't very sensitive to anyway, like JPEG does. All lossless means is that the compression isn't required by the format itself.
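A minimal sketch of the predictor idea (left-neighbour prediction only; real lossless JPEG selects among several predictors such as left, above, and their average):

```python
def to_residuals(row):
    # Predict each pixel from its left neighbour and keep only the
    # difference; smooth image rows turn into runs of small numbers,
    # which the entropy coder then compresses well.
    prev = 0
    out = []
    for p in row:
        out.append(p - prev)
        prev = p
    return out

def from_residuals(res):
    # Undo the prediction exactly, so the round trip is lossless.
    prev = 0
    out = []
    for d in res:
        prev += d
        out.append(prev)
    return out
```

No frequency transform is involved, which is why a "lossless JPEG" can exist despite the lossy DCT pipeline of ordinary JPEG.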

Kind of. Have you tried running a PNG compressor on a JPEG file?

It is smaller than a straight-to-PNG file, but nowhere near the size of the original JPG.

Well of course, JPEG artifacts aren't necessarily going to be easier for png to compress. You need to make modifications designed for png's algorithm. There are some tools that do this:




The resulting PNGs are much smaller. Though not necessarily as small as JPEG, they're in the same ballpark.

pngquant is pretty awesome, especially for screenshots. For example, on a screenshot of my terminal running dd, it reduces the size from 88 KB to 17 KB.

I'd like to see some comparisons with palettized PNGs. All the demo images for poly-flif use more than 256 colors, but diagrams and line art sometimes use 256 colors or less, which means they can be stored in an 8-bit palettized PNG losslessly. People often forget about this when optimizing PNG sizes, and most graphics software saves as RGB by default even when the image will fit in 8-bit palettized.
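Checking whether an image qualifies for an 8-bit palette is cheap; a sketch (hypothetical helper, with pixels as RGB tuples):

```python
def palettize(pixels):
    # Returns (palette, indices) if the image fits an 8-bit palette,
    # i.e. has at most 256 distinct colors; otherwise None. Each pixel
    # then costs one index byte instead of three or four channel bytes.
    palette = sorted(set(pixels))
    if len(palette) > 256:
        return None
    index = {color: i for i, color in enumerate(palette)}
    return palette, [index[c] for c in pixels]
```

Since every pixel maps to an exact palette entry, the round trip is lossless, unlike quantizers such as pngquant that first reduce the color count.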

FLIF can also do palettes, without arbitrary limits on the palette size. It will automatically use palettes if the image has sparse colors (and "color buckets" if it has too many colors for a palette but still relatively few). It tends to be better than PNG in terms of compression, also for palette images. I haven't looked at optimizing palette ordering yet though, so there is probably some more margin for improvement here.

Apparently it relies on a novel new "middle out" compression algorithm.

Here is the paper[1] regarding the "middle out" or "tip to tip efficiency" algorithm.


"Middle out" is not "tip-to-tip efficiency." Though they were indeed invented (discovered?) by the same team at approximately the same time, and may also be linked (depending on one's scientific, mathematical and/or philosophical definition of 'linked') they are independent algorithms solving radically different problems.

Some people have WAY too much time on their hands...

Seems cool. Slightly off topic but I hate when someone names a file format with "format" or "file" in the name. Isn't it a bit redundant to include format in the format? Something that has always bothered me about PDF.

Thanks Jon for your work on this!

When is this going to be available for DICOM medical images? :)


Looks amazing! Really impressive results. Very cool that progressive loading is built into the format.

However, I am afraid that without support from the biggest companies the format will never gain popularity. Just think how long it took to make PNG a web standard. And animated PNGs? Died unnoticed. To make things worse, GIF, a stinking leftover from the '90s, is still in use (even on HN!).

Looks neat, but recently I discovered farbfeld[1] and I think I'll be sticking with that for the time being. I'm starting to believe data-specific compression algorithms are the wrong way to go.


For browser support (Servo), FLIF has an issue pointing to Rust's common image library: https://github.com/FLIF-hub/FLIF/issues/142

For clarification: on the FLIF GitHub issues page, there is a ticket indicating an intention to build FLIF into Firefox's rendering engine Gecko, and also into Servo (which will eventually replace Gecko). This ticket is a placeholder declaring Servo's interest in FLIF once it is finalized.

FLIF does not have a problem preventing it from pointing to a common Rust image library (which is how I originally read it).

This is probably the 3rd "replacement" for JPEG I have seen on HN in the last few years. None of these formats have been supported by common browsers. When will this stuff start making its way to the desktop?

It's good that browsers aren't too quick to add support for new image formats, otherwise we'd have a lot of bloatware for non-optimal image formats that few people use.

I'm curious about patent/ licensing restrictions.

From what I gather it is patent free and the implementation is GPLv3?

Does this mean someone else could make a compatible encoder/decoder with a less restrictive license?

It's LGPLv3, which is less restrictive than GPLv3.

... but still too restrictive for many (most?) commercial applications.

However (link got changed?) I now see that the decoder is also available under the Apache 2.0 license, so that is useful.

If you're just using it, it isn't restrictive at all is it? You just have to say that you used it.

If you modify it, you must provide source code for the version that was distributed. Not the source code for anything else.

... unless I totally misunderstand LGPL

I am not a lawyer, but yes, the LGPLv3 in particular is in fact more restrictive than commonly believed (LGPLv2 much less so).

The driving force behind the GPL series of licenses is to maintain the GNU/Stallman freedoms [1], including the "freedom to run the program as you wish, for any purpose" (0) and "to study how the program works, and change it so it does your computing as you wish" (1).

It is widely believed that any software implementing DRM on its runtime code is incompatible with the (L)GPLv3, in particular signed firmware distributions or software distribution systems such as the Mac/iOS App Stores or Steam. The (L)GPLv3 was actually written with this in mind, with some of its authors calling the practice Tivoization [2], in reference to TiVo's locked-down firmware.

The relevant legal jargon is in section 6 of the GPL [3], which the LGPL is built on top of, and states:

“Installation Information” for a User Product means any methods, procedures, authorization keys, or other information required to install and execute modified versions of a covered work in that User Product from a modified version of its Corresponding Source.

Again I am not a lawyer, but in other words, you must cough up your signing keys (which for Mac/iOS devs is incidentally a breach of your Apple Developer contract) in order to legally distribute signed software that uses LGPL libraries (-).

[-] It seems it should be OK to distribute non-DRM-protected software, but for general consumer software this kind of distribution seems to be on its way out in a hurry.

[1] http://www.gnu.org/philosophy/free-sw.html

[2] https://en.wikipedia.org/wiki/Tivoization#GPLv3

[3] http://www.gnu.org/licenses/gpl-3.0.en.html

I had no idea. So iOS devs cannot use LGPL software at all because of this?

LGPL v2.0/v2.1 do not have the "tivoization" clause, so they should be safe to use on iOS if you are following the rest of the license provisions.

LGPL v3 does, so it is likely not safe or legal to use that software in iOS or other DRM scenarios.

You can find a lot of debate about this, I think mostly because people assume the two versions to be the same and do not specify which they are wanting to use.

Again I have no legal background and I recommend getting real legal advice as the issue is quite complex.

Kudos on this @jonsneyers! I've been looking at it ever since we talked at FOSDEM. Glad to see you getting some press on the work and good luck with Uproot Labs!

Hi, redbeard is it? I think you misremember something, I didn't start working at Uproot Labs but at Cloudinary.

As you can see we went for Apache2 (decoder) and LGPLv3 (encoder) shortly after FOSDEM.

Impressive benchmarks, but how would this compare to lossless VP9, VP10, or H.264/H.265 image compression?

Honest question. Seriously not trying to dismiss the work.

Why not TIFF? 30 years old, already built into nearly every graphics application, supports everything this proposes and more. Plus it is already supported in Safari.


Well, TIFF has very poor compression, leading to large file sizes compared even to PNG. And thus no one uses it on the web.

FLIF has very good compression; it downloads only the minimum amount of data required to display the image at its current resolution. It supports 32-bit images, animations, and transparency. It can play back an animation in real time while the file is still downloading: as it plays, the resolution simply increases, so you aren't waiting for the next frame to load, only to improve. FLIF can also handle the incredibly high-resolution images TIFF is usually used for, and can even do tiled rendering, loading into memory only the chunks of the image you are zoomed in on, at that resolution. That means less time to render those chunks of gigantic images, and less memory to do it.

FLIF is an incredibly powerful format that offers a lot. It has archival, scientific, and web uses. It likely won't be useful to those in 3D or gaming, since formats like TGA are better suited for fast reading/loading, whereas FLIF is a little slower to decompress than PNG. But then again, game devs have been optimizing resource usage for a long time, giving models that are further away lower poly counts and lower-quality bitmaps/textures. So having one FLIF image that works at any distance may be of use to them, truncating the file at different lengths depending on distance from the camera.

There is a lot to be explored with this format, and it isn't even finalized yet.

TIFF can do tiled images too and there is nothing stopping someone from adding a TIFF extension for better compression or anything else imaginable.

TIFF compression is rubbish. If you just want a lossless image, PNG is everywhere now. The main (possibly only?) reason to use FLIF over PNG is the better compression.

Because it isn't open source, and is a proprietary format from Adobe?

While not open source, TIFF has age on its side: close to anything related to it is unencumbered by patents, and any patents that do remain will soon expire. Also, TIFF is an open container like MKV and can be used for far more than just simple pictures.

>Also TIFF is an open container like MKV and can be used for far more than just simple pictures.

I see this commonly in different business and medical applications where information may need to be accessible for years or decades after creation. There is a pretty even mix of TIFF and PDF/A in these environments and because of the legal liabilities I don't see them being displaced.

Not to mention TIFF supports layers. From a skim of the homepage, this doesn't.

FLIF allows for animation, and it is quite possible to interpret each frame of the animation as its own layer. This would yield the same lossless quality as a TIFF at a much smaller file size. FLIF isn't intended for this purpose, but neither was PNG, and that didn't stop Fireworks from abusing that format by adding layering. The same could be done by someone who wants to add a little metadata to the file to let layer-aware FLIF applications know that the frames are to be interpreted in that manner.

This format isn't complete yet, if layering is important to you, you should join the gitter.im/FLIF-hub/FLIF chat room.

A bigger question here is why not build FLIF as a document format using an established container? Why not use something established, mainstream, and non-controversial like TIFF as the container, with FLIF as the codec? I'm kind of sick of this "new file format and file extension for no good reason" crap. MKV as a container is great: I don't care about the codec, because FFmpeg/VLC/etc. support all the codecs; it hides all the complexity and enables codec experimentation without hassle. FLIF is an image encoding/decoding scheme that would be completely compatible inside a TIFF image if implemented correctly.

Why do I care about having a "dot flif" file when the new codec is the important improvement?

Is '*.flif' actually a good thing?

Personally I don't think so, mainly because without a mechanism to drive adoption in browsers, the format offers very little value over existing formats. TIFF would be a decent solution to that problem. ... And the fact that Firefox has marked TIFF support as WONTFIX is actually pretty indicative of how unlikely they are to support a new format such as FLIF. Sad but true. From what I can already see of the Firefox support thread for FLIF, it's not going to happen any time soon.

Awesome! So when can I have it in browsers?

Encoding and decoding speeds are acceptable, but should be improved

You must be from the Ministry of Encoding.

hello pied piper

But what's the Weissman score?

I wonder if IE will adopt it? Firefox and Chrome are very responsive, Microsoft not so much.

Firefox is very conservative when it comes to image formats, more so than any of the other browsers. See WebP [1], JPEG 2000 [2] and JPEG XR [3]. I'd be very surprised if they supported this.

[1] https://bugzilla.mozilla.org/show_bug.cgi?id=856375

[2] https://bugzilla.mozilla.org/show_bug.cgi?id=36351

[3] https://bugzilla.mozilla.org/show_bug.cgi?id=500500

That's a fairly convincing argument, I definitely need to concede the point here. Appreciate your insight and helpful links!

Safari is a much bigger problem than Edge/IE.

No hope for WebM in Safari. WebM for Edge is in development.

This is about an image format not a video format, so you probably mean WebP. WebP isn't supported in Firefox, Safari or IE, so it's not like Safari is the lone browser blocking new image formats.

It really depends on what it is: Edge is missing some rather fundamental modern CSS features (filters immediately spring to mind).

I apologise, I meant blend modes:


They have it marked as low priority, even though it is a standard present in all other browsers.

I think that roadmap makes it pretty clear that Edge is going to be our next lowest common denominator for web design/development for the next few years... I really don't care about its native ES6 support.

Maybe SVG 2.0...

This is the link of doom for me: https://wpdev.uservoice.com/forums/257854-microsoft-edge-dev...

How often do you need lossless images on the web? Almost never.

All those animated GIFs around the web, plus the images where JPEG artefacts would be noticed. Examples:

The Y logo in the top left of this page is a GIF.

All the logos at http://www.ycombinator.com/ plus the six images below More Quotes are PNGs.

Forgot PNG is lossless :P

If the compression is good enough, always. Lossy compression was born out of necessity, after all.

It was not born out of necessity. It was born because of the way your brain perceives images. There is no need to add detail if your brain can live happily without it.

And if FLIF achieves such great lossless compression, imagine what amazing compression you can achieve with lossy compression.

Let's take a step back. Why did developers start to think about how the brain perceives details and how to trick it in the first place? Out of a necessity to save bandwidth and/or storage space.

You have a point. But I also like fast, snappy websites :P

relevant xkcd: https://xkcd.com/927/
