
FLIF – Free Lossless Image Format - mattiemass
http://flif.info/index.html
======
pornel
Everyone loves the "responsive loading" feature, but that's not even the novel
thing about the format (JPEG 2000 did it even better — 16 years ago)! The
novel feature of this format is better entropy coding.

FLIF decoder adds interpolation to make incomplete scans look nicer than in
PNG, but that's a feature of the decoder, not the file format, so there's
nothing stopping existing PNG decoders from copying that feature.
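
For illustration, here is a minimal sketch of that decoder-side trick using Pillow (the file name and the 8x scale factor are placeholders): the same partially-decoded low-resolution pass can be shown blocky or interpolated, purely as a presentation choice.

```python
# Sketch only: upscaling a partially decoded low-resolution pass.
# "partial_pass.png" and the 8x factor are placeholder assumptions.
from PIL import Image

low_res = Image.open("partial_pass.png")                  # e.g. a 1/8-scale interlacing pass
w, h = low_res.size
blocky = low_res.resize((w * 8, h * 8), Image.NEAREST)    # classic blocky PNG-style preview
smooth = low_res.resize((w * 8, h * 8), Image.BILINEAR)   # interpolated preview, in the spirit of FLIF's decoder
```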

Note that it's generally not desirable to have FLIF used on the Web. A decent-
quality JPEG will load at full resolution quicker than it takes FLIF to show a
half-resolution preview.

FLIF is a lossless format, and lossless is a very hard and _costly_
constraint. Images that aren't technically lossless, but _look lossless_ to
the naked eye can be half the size.

e.g. Monkey image from [https://uprootlabs.github.io/poly-
flif/](https://uprootlabs.github.io/poly-flif/) is 700KB in FLIF, but 300KB in
high-quality JPEG at q=90 without chroma subsampling (i.e. settings good even
for text/line-art), and this photo looks fine even at 140KB JPEG (80% smaller
than FLIF).

So you want FLIF for archival, editing and interchange of image originals, but
even the best lossless format is a waste of bytes when used for distribution
to end users.

~~~
JoshTriplett
> Note that it's generally not desirable to have FLIF used on the Web. A
> decent-quality JPEG will load at full resolution quicker than it takes FLIF
> to show a half-resolution preview.

On the other hand, this allows browsers on metered-bandwidth connections to
control bandwidth more effectively. Rather than disabling images entirely,
this would allow loading a low-resolution version and stopping, and letting
the user control whether to load the rest of the image.
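
As a rough sketch of that kind of control (the URL and byte count are placeholders, and nothing here is FLIF-specific), a client on a metered connection could fetch only a prefix with an HTTP Range request and decide later whether to fetch the rest:

```python
# Sketch: download only the first 64 KiB of a progressively coded image.
# The URL is a placeholder; the server must support Range requests.
import requests

resp = requests.get("https://example.com/photo.flif",
                    headers={"Range": "bytes=0-65535"})
preview_bytes = resp.content   # enough for a low-resolution preview
# The rest of the file is requested only if the user opts in to full quality.
```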

~~~
pornel
Again, this is not new in FLIF, and it isn't a strength of the format. You're
describing exactly what can already be done with progressive JPEG _better_.

Look at [https://uprootlabs.github.io/poly-flif/](https://uprootlabs.github.io/poly-flif/) \- set truncation to 80-90% and compare to the Same Size JPEG option.

Truncated FLIF looks like a pixelated mess, whereas JPEG at the same byte size looks almost like the original. (Note that the site encodes the JPEG with only a few progressive scans, so you sometimes get half the image perfect and half blocky; this is configurable in JPEG and could be balanced so that the entire image looks ok-ish.)

------
teh_klev
Do any of these newer/experimental schemes, such as this one, take into account other factors such as CPU load before declaring themselves "better"? For example, this project seems pretty cool, but there's no data on how CPU-bound, memory-bound, or I/O-bound its decompression algorithm is.

I guess what I'm asking is: if I hit a web page with 20 images @ 100k per image, is it going to nail one or more cores at 100% and drain the battery on my portable device? Fantastic compression is great, but what are the trade-offs?

~~~
dave2000
It says very clearly a number of times that it's better in terms of
compression ratio.

~~~
formula1
The reason you're downvoted is that compression can often add to computation. An example would be:

\- I use a palette of bytes, so colors are stored as an 8-bit index instead of a 32-bit value (8 bits each for r, g, b, a) - every color now adds a lookup to that memory address (sketched below).

\- I turn every color sequence possible into a numerator + denominator pair < 256 when possible, and add a length and offset that define how to compute it - when you reach such a sequence, you must calculate the number: find the offset (up to 256) and, until the length (ideally > 4 bytes) is reached, get the value of each digit.

These types of calculations seem small, and more often than not they likely are. But add enough of them up and all of a sudden the CPU has to sit at 100% for the duration of decoding 30+ images.
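
A toy sketch of the first point (the palette, indices and NumPy usage here are purely illustrative, not how FLIF is implemented): pixels shrink from 4 bytes to 1, but decoding gains an extra memory lookup per pixel.

```python
import numpy as np

# Up to 256 RGBA palette entries (placeholder colors).
palette = np.array([[255, 0, 0, 255],
                    [0, 0, 255, 255]], dtype=np.uint8)
# Stored image: one byte per pixel instead of four.
indices = np.array([0, 1, 1, 0], dtype=np.uint8)

# Decoding: an extra indirection per pixel to recover the full 32-bit colors.
pixels = palette[indices]      # shape (4, 4): RGBA for each pixel
```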

~~~
dave2000
I don't care about my score. I could have guessed why, though: people not reading things properly. I was taking issue with the observation that the site said it was better; I pointed out that it didn't say that, just that the compression ratio was better.

~~~
teh_klev
> People not reading things properly

I did read the article, twice and thoroughly; all I saw mentioned was _"Encoding and decoding speeds are acceptable, but should be improved"_. That doesn't address the points I raised in my original post, if you were to read it properly.

------
sjwright
Incredible work. My only comment is that the progressive loading example
reveals that their algorithm seems to have desirable properties for lossy
compression as well. Why not make FLIF support lossy and lossless? It's hard
enough to get a new image format standardized as it is; offering a lossy mode
would effectively give us a two-for-one deal.

If PNG had a lossy mode that was even slightly better than JPEG (or exactly as
good but with full alpha channel support) it would have eventually supplanted
JPEG just as it has now supplanted GIF.

~~~
akavel
Quoting a fragment from the final section:

 _" [...] any prefix (e.g. partial download) of a compressed file can be used
as a reasonable lossy encoding of the entire image."_

though they also have it listed explicitly in the TODO section:

 _" \- Lossy compression"_

~~~
jaredcheeda
The current plan is to keep the lossless and lossy versions of the format in the same bitstream for encoders/decoders, and to differentiate files encoded lossily as ".flyf" (Free LossY image Format) from those encoded losslessly as ".flif" (Free Lossless Image Format).

However, renaming a file from .flif to .flyf or vice versa would have no effect; it would still be opened and decoded the same. The extension is merely meant to convey to humans the intention of the person who created the image.

From my understanding, not much focus has been given to lossy encoding as of yet.

------
p4bl0
There was a previous discussion on HN about this:
[https://news.ycombinator.com/item?id=10317790](https://news.ycombinator.com/item?id=10317790)

~~~
baldfat
5 months ago. This is my first time seeing this item.

~~~
p4bl0
My point is not that it shouldn't be there again, it's that some potentially interesting stuff has already been discussed, so the previous thread is probably worth a look too.

~~~
baldfat
You should put that into your first post :) Most of the time I see posts like this, it means it was a double post.

~~~
gus_massa
My preferred format is:
[https://news.ycombinator.com/item?id=10317790](https://news.ycombinator.com/item?id=10317790)
(1254 points, 157 days ago, 366 comments)

Because it answers most of the common questions about a resubmission: Did it get a lot of exposure? Was it a long time ago? Did it have an interesting discussion that I should read?

The number of comments alone doesn't actually show whether the discussion was interesting or a flamewar. So I sometimes cherry-pick some of the most interesting comments and repost a snippet.

(In this case, I think the most important comments are:

\- The relation between the progressive and an extension for lossy mode

\- The advantages and problems of the (L)GPLv3 licence

But each of them deserves its own thread here.)

------
panic
This looks promising! They ought to include time-to-decode in the performance
numbers, though: a smaller compressed size doesn't matter if the process of
loading and displaying the image takes more time overall. A graph like the
ones on this page would be awesome:
[http://cbloomrants.blogspot.com/2015/03/03-02-15-oodle-lz-
pa...](http://cbloomrants.blogspot.com/2015/03/03-02-15-oodle-lz-pareto-
frontier.html)
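
For example, a small sketch of how such a comparison could be computed (the numbers below are placeholders, not real benchmark results): a codec belongs on the Pareto frontier if no other codec is at least as good on both size and decode time and strictly better on one.

```python
# Placeholder (relative size, relative decode time) points; not measured data.
codecs = {
    "PNG":  (1.00, 1.0),
    "FLIF": (0.70, 4.0),
    "WebP": (0.80, 1.5),
}

def pareto_frontier(points):
    frontier = []
    for name, (size, time) in points.items():
        dominated = any(s <= size and t <= time and (s < size or t < time)
                        for other, (s, t) in points.items() if other != name)
        if not dominated:
            frontier.append(name)
    return frontier

print(pareto_frontier(codecs))   # the codecs worth considering at some operating point
```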

~~~
davej
This! Especially with the web being increasingly consumed on low-powered
mobile devices.

It's also not just about being faster than the network, because the browser is probably simultaneously doing page layout, JavaScript parsing, image decompression and a ton of other things. Also, more time spent decoding images is a drain on the battery.

Focusing solely on compression stats can be misleading; it's a balancing act. For example, I'm not sure being 0.7x the size of PNG is much of a win if FLIF ends up being an order of magnitude slower to decode. Perhaps it is still a win, but it needs a bit more nuance in the analysis to reach that conclusion.

------
AshleysBrain
If browsers supported APIs to allow "native" image/video/audio codecs to be
written in JS, we could support new formats like this without needing any co-
operation from the (very conservative) browser vendors. I wrote a proposal for
this here: [https://discourse.wicg.io/t/custom-image-audio-video-
codec-a...](https://discourse.wicg.io/t/custom-image-audio-video-codec-
apis/1270)

~~~
ekianjo
So you would need to load up the JS decompressor every single time you load a webpage? Or is there another way to do it efficiently?

~~~
billpg
The browser would cache a copy.

I've seen proposals to add hashes to links. This way, a browser might see a
link to some JS on a new URL, but with the hash, it might find it already has
that JS file in its cache from when it downloaded it at a different URL.

~~~
imaginenore
That's what Etag is for.

~~~
eis
No it's not. Etag is something completely different: it's a tag that the web server sends along with the response, and which the browser sends back to the server; the server can then return a 304 status code to indicate that the resource hasn't changed and the browser can use its cached copy.

What the parent spoke about is adding some attribute to an anchor tag which
specifies the hash of the resource so the browser can do safe cross domain
caching without needing to do any request whatsoever.
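
A toy sketch of the idea (purely illustrative; this is not an existing browser API): the cache is keyed by the resource's hash, so identical bytes served from two different URLs are downloaded only once, and the hash doubles as an integrity check.

```python
import hashlib

cache = {}  # sha256 hex digest -> resource bytes

def fetch(url, expected_sha256, download):
    """download(url) -> bytes; the expected hash comes from the link attribute."""
    if expected_sha256 in cache:
        return cache[expected_sha256]              # cache hit: no request at all
    data = download(url)
    if hashlib.sha256(data).hexdigest() != expected_sha256:
        raise ValueError("integrity check failed for " + url)
    cache[expected_sha256] = data
    return data
```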

~~~
techdragon
Just what we need... More ways to shoot our own feet off while trying to
improve performance.

~~~
eis
First, I don't immediately see how you'd shoot your feet off with this, but maybe you can elaborate a bit on why you think so.

And secondly, am I getting this right that you are in favor of dumbing
everything down while also sacrificing performance on the way because someone
could break something?

~~~
techdragon
I'm hardly in favour of dumbing things down, or of sacrificing performance. I'm just frequently exposed to issues that are 'magically fixed' by purging the entire browser cache, and I'm unnerved by the idea of this hash-link scheme: we have enough bugs and gaps in the existing system without adding more complexity, and that system is pretty damn simple conceptually, yet the bugs remain.

Anecdotally, I used to ship jQuery from a CDN for one project. I had to stop for that project, I forget why... But I do remember that, to my surprise, when I shipped my own copy of jQuery from my own domain, I got 10% fewer client-side errors reported back to me in Sentry. The world is full of ways to shoot off your own foot, and some of them even start out with someone showing you how they don't shoot their foot off.

------
Zardoz84
16 bits per channel, and future support for CMYK. Looks like an interesting alternative to TIFF for digital preservation. Sadly, the currently recommended format is TIFF (so, a waste of storage space) ->
[http://www.loc.gov/preservation/resources/rfs/stillimg.html](http://www.loc.gov/preservation/resources/rfs/stillimg.html)

~~~
vanderZwan
> (so, waste of storage space)

To be fair, given the speed at which storage space has grown over the years,
it's not really something to worry about in the context of archiving material
for future generations (which is very different than being able to quickly
download something on the internet _now_ , for example)

~~~
Zardoz84
Well, when you serve archived material on the web, the storage cost isn't irrelevant. Also, big files mean more network traffic between our webservers and the storage servers. Plus, this format has other interesting features, like tiled rendering and progressive download. Sadly, we must handle huge TIFFs and generate JPEG miniatures and tilesets from them; otherwise serving them online would be very painful.

Note: I work at a company dedicated to archiving for libraries, museums and archives...
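
As a rough sketch of that workflow (file names, tile size and JPEG quality are placeholders), cutting a large TIFF into JPEG tiles for web serving could look like:

```python
import os
from PIL import Image

Image.MAX_IMAGE_PIXELS = None            # allow very large scans
src = Image.open("scan.tiff").convert("RGB")

TILE = 256
os.makedirs("tiles", exist_ok=True)
for y in range(0, src.height, TILE):
    for x in range(0, src.width, TILE):
        box = (x, y, min(x + TILE, src.width), min(y + TILE, src.height))
        src.crop(box).save(f"tiles/{x}_{y}.jpg", quality=85)
```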

~~~
vanderZwan
Ah ok, cool! But then you're talking about trying to meet both goals I
mentioned at the same time ;)

But I guess you handle it similarly to how Archive.org does, with a large TIFF as a poorly compressed but lossless "base case" and other compression formats for the web?

------
kutkloon7
Pretty amazing! Particularly nice is that an alpha channel and animation are
also possible.

One critical sidenote: it seems FLIF is still not as good as JPEG when used for lossy compression (this is something the benchmarks do not show well).

For example, go to [http://uprootlabs.github.io/poly-
flif/](http://uprootlabs.github.io/poly-flif/), choose the monkey image,
choose 'comparing with same size JPG', and set truncation to 60% or more.

Also, I'm not sure how efficient en- and decoding is for FLIF.

~~~
onion2k
In the example you suggest using, the hairs on the left side of the monkey's face have some significant artifacts. The quality of the JPEG is about the same as the FLIF file at a truncation of 80%, at which point the file size is less than half that of the JPEG at 60%.

If the image quality of the 60%-truncation JPEG is acceptable, then you can get the same quality at half the size using FLIF at 80%.

~~~
kutkloon7
I don't understand your argument. The images are the same size. A 235KB jpeg
of the monkey is almost indistinguishable from lossless, while in a 470KB
flif, there are already unacceptable artifacts.

------
orlyb
FLIF really is awesome :) Here's an analysis that compares FLIF to other common lossless image formats such as PNG, WebP and BPG.
[http://cloudinary.com/blog/flif_the_new_lossless_image_forma...](http://cloudinary.com/blog/flif_the_new_lossless_image_format_that_outperforms_png_webp_and_bpg)

------
thenomad
Hmm, this has real potential as an archiving format for video, too.

Any news on what the processing overhead is like for viewing rather than
creating the files? Is it less than PNG?

------
matzipan
Does anybody understand how lossless JPEG works? To my mind, the whole point
of JPEG is to get rid of high-frequency components.

~~~
Houshalter
This isn't how it actually works, but one way to turn a lossy codec into a lossless one is to send a diff of the actual pixels vs the encoded ones. Since the lossy compression will be close to correct, the diff will mostly be small values or 0's, which is much easier to compress.

Likewise, you can turn any lossless compressor into a lossy one by modifying the pixels that are the hardest to compress. E.g. if there is a random red pixel in a group of blue pixels, you can make it blue and save up to 3 bytes. Or you can discard color information that humans aren't very sensitive to anyway, like JPEG does. All "lossless" really means is that the loss isn't required by the format itself.
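
To make the first idea concrete, here is a hedged sketch (using JPEG as the lossy base purely for illustration; this is not how any specific lossless JPEG mode is defined, and the file name is a placeholder): the decoder reconstructs exact pixels from the lossy base plus the losslessly compressed residual.

```python
import io
import numpy as np
from PIL import Image

original = np.asarray(Image.open("photo.png").convert("RGB"), dtype=np.int16)

# 1. Lossy base layer (JPEG here, just as an example codec).
buf = io.BytesIO()
Image.fromarray(original.astype(np.uint8)).save(buf, format="JPEG", quality=90)
lossy = np.asarray(Image.open(io.BytesIO(buf.getvalue())).convert("RGB"), dtype=np.int16)

# 2. The residual is mostly zeros and small values, so it compresses well losslessly.
residual = original - lossy              # values in [-255, 255]

# 3. Decoder: lossy base + residual gives back the exact original pixels.
assert np.array_equal(lossy + residual, original)
```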

~~~
tambourine_man
Kind of. Have you tried running a PNG compressor on a JPEG file?

It is smaller than a straight-to-PNG file, but nowhere near the size of the original JPEG.

~~~
Houshalter
Well of course, JPEG artifacts aren't necessarily going to be easier for PNG to compress. You need to make modifications designed for PNG's algorithm. There are some tools that do this:

[https://pngmini.com/lossypng.html](https://pngmini.com/lossypng.html)

[https://pngquant.org/](https://pngquant.org/)

[https://tinypng.com/](https://tinypng.com/)

The resulting PNGs are much smaller. Though not necessarily as small as JPEG, they're in the same ballpark.

~~~
bartvk
pngquant is pretty awesome, especially for screenshots. For example, for a screenshot of my terminal running dd, it reduces the size from 88K to 17 Kbytes.

------
mrob
I'd like to see some comparisons with palettized PNGs. All the demo images for
poly-flif use more than 256 colors, but diagrams and line art sometimes use
256 colors or less, which means they can be stored in an 8-bit palettized PNG
losslessly. People often forget about this when optimizing PNG sizes, and most
graphics software saves as RGB by default even when the image will fit in
8-bit palettized.
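
As a quick illustration (file names are placeholders), checking whether an image fits in a 256-color palette and saving it palettized with Pillow might look like this:

```python
from PIL import Image

img = Image.open("diagram.png").convert("RGB")
colors = img.getcolors(maxcolors=256)        # None if the image has more than 256 colors
if colors is not None:
    pal = img.convert("P", palette=Image.ADAPTIVE, colors=len(colors))
    pal.save("diagram_pal.png", optimize=True)   # 8-bit palettized; with <=256 distinct colors this should stay lossless
```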

~~~
jonsneyers
FLIF can also do palettes, without arbitrary limits on the palette size. It
will automatically use palettes if the image has sparse colors (and "color
buckets" if it has too many colors for a palette but still relatively few). It
tends to be better than PNG in terms of compression, also for palette images.
I haven't looked at optimizing palette ordering yet though, so there is
probably some more margin for improvement here.

------
tomtheguvnor
Apparently it relies on a novel new "middle out" compression algorithm.

~~~
monkmartinez
Here is the paper[1] regarding the "middle out" or "tip to tip efficiency"
algorithm.

[1][http://www.scribd.com/doc/228831637/Optimal-Tip-to-Tip-
Effic...](http://www.scribd.com/doc/228831637/Optimal-Tip-to-Tip-Efficiency)

~~~
delinka
"Middle out" is not "tip-to-tip efficiency." Though they were indeed invented
(discovered?) by the same team at approximately the same time, and may also be
linked (depending on one's scientific, mathematical and/or philosophical
definition of 'linked') they are independent algorithms solving radically
different problems.

------
r0m4n0
Seems cool. Slightly off topic but I hate when someone names a file format
with "format" or "file" in the name. Isn't it a bit redundant to include
format in the format? Something that has always bothered me about PDF.

------
ekianjo
Thanks Jon for your work on this!

------
gsmethells
When is this going to be available for DICOM medical images? :)

[http://dicom.nema.org](http://dicom.nema.org)

------
wmu
Looks amazing! Really impressive results. Very cool that progressive loading is built into the format.

However, I am afraid that without support from the biggest companies the format will never gain popularity. Just think how long it took to make PNG a web standard. And animated PNGs? Died unnoticed. To make things worse, GIF, a stinking leftover from the '90s, is still in use (even on HN!).

------
cyborgx7
Looks neat, but recently I discovered farbfeld[1] and I think I'll be sticking
with that for the time being. I'm starting to believe data-specific
compression algorithms are the wrong way to go.

[1][http://tools.suckless.org/farbfeld/](http://tools.suckless.org/farbfeld/)

------
mitchtbaum
For browser support (Servo), FLIF has an issue pointing to Rust's common image
library: [https://github.com/FLIF-
hub/FLIF/issues/142](https://github.com/FLIF-hub/FLIF/issues/142)

~~~
jaredcheeda
For clarification: on the FLIF GitHub "Issues" page, there is a ticket indicating an intention to build FLIF into Firefox's rendering engine, Gecko, and also into Servo (which will eventually replace Gecko). This ticket is a placeholder declaring Servo's interest in FLIF once it is finalized.

FLIF does not have a problem that prevents it from pointing to a common Rust image library (which is how I originally read the parent comment).

------
ipunchghosts
This is probably the 3rd "replacement" for JPEG I have seen on HN in the last
few years. None of these formats have been supported by common browsers. When
will this stuff start making its way to the desktop?

~~~
ktRolster
It's good that browsers aren't too quick to add support for new image formats,
otherwise we'd have a lot of bloatware for non-optimal image formats that few
people use.

------
matheweis
I'm curious about patent/ licensing restrictions.

From what I gather it is patent free and the implementation is GPLv3?

Does this mean someone else could make a compatible encoder/decoder with a
less restrictive license?

~~~
mistercow
It's LGPLv3, which is less restrictive than GPLv3.

~~~
matheweis
... but still too restrictive for many (most?) commercial applications.

However (link got changed?) I now see that the decoder is also available under
the Apache 2.0 license, so that is useful.

~~~
ericfrederich
If you're just using it, it isn't restrictive at all is it? You just have to
say that you used it.

If you modify it, you must provide source code for the version that was
distributed. Not the source code for anything else.

... unless I totally misunderstand LGPL

~~~
matheweis
I am not a lawyer, but, yes, the LGPLv3 in particular is in fact more restrictive than is commonly believed (LGPLv2 much less so).

The driving force behind the GPL-series of licenses is to maintain the
GNU/Stallman freedoms [1], including "freedom to run the program as you wish,
for any purpose" (0) and "to study how the program works, and change it so it
does your computing as you wish" (1)

 _It is widely believed that any software implementing DRM on its runtime code is incompatible with the (L)GPLv3_, in particular signed firmware distributions or software distribution systems such as the Mac/iOS App Stores or Steam. The (L)GPLv3 was actually written with this in mind, with some of the authors calling it Tivoization [2] in reference to TiVo's locked-down firmware.

The relevant legal jargon is in section 6 of the GPL [3], which the LGPL is
built on top of, and states:

“Installation Information” for a User Product means any methods, procedures,
_authorization keys_ , or other information required to install and execute
modified versions of a covered work in that User Product from a modified
version of its Corresponding Source.

Again I am not a lawyer, but in other words, you must cough up your signing
keys (which for Mac/iOS devs is incidentally a breach of your Apple Developer
contract) in order to legally distribute signed software that uses LGPL
libraries (-).

[-] It seems it should be OK to distribute non-DRM-protected software, but for general consumer software this kind of distribution seems to be on its way out in a hurry.

[1] [http://www.gnu.org/philosophy/free-
sw.html](http://www.gnu.org/philosophy/free-sw.html)

[2]
[https://en.wikipedia.org/wiki/Tivoization#GPLv3](https://en.wikipedia.org/wiki/Tivoization#GPLv3)

[3]
[http://www.gnu.org/licenses/gpl-3.0.en.html](http://www.gnu.org/licenses/gpl-3.0.en.html)

~~~
ericfrederich
I had no idea. So iOS devs cannot use LGPL software at all because of this?

~~~
matheweis
LGPL v2.0/v2.1 do not have the "tivoization" clause, so it should be safe to
use in iOS if you are following the rest of the license provisions.

LGPL v3 does, so it is likely not safe or legal to use that software in iOS or other DRM scenarios.

You can find a lot of debate about this, I think mostly because people assume the two versions are the same and don't specify which one they mean.

Again I have no legal background and I recommend getting real legal advice as
the issue is quite complex.

------
thebeardisred
Kudos on this @jonsneyers! I've been looking at it ever since we talked at
FOSDEM. Glad to see you getting some press on the work and good luck with
Uproot Labs!

~~~
jonsneyers
Hi, redbeard is it? I think you misremember something: I didn't start working at Uproot Labs, but at Cloudinary.

As you can see we went for Apache2 (decoder) and LGPLv3 (encoder) shortly
after FOSDEM.

------
fsiefken
Impressive benchmarks, but how would this compare to lossless VP9, VP10 or
h264, h265 image compression?

------
eddieh
Honest question. Seriously not trying to dismiss the work.

Why not TIFF? 30 years old, already built into nearly every graphics
application, supports everything this proposes and more. Plus it is already
supported in Safari.

[https://en.wikipedia.org/wiki/Tagged_Image_File_Format](https://en.wikipedia.org/wiki/Tagged_Image_File_Format)

~~~
jaredcheeda
Well, TIFF has very poor compression, leading to large file sizes compared even to PNG, and thus no one uses it on the web.

FLIF has very good compression, and it only downloads the minimum amount of data required to display the image at its current resolution. It supports 32-bit images/animations/transparency. It does animation playback in realtime while it's downloading the file, so as it plays the resolution just increases; you aren't waiting for the next frame to load, only to improve. FLIF can also handle the incredibly high-resolution images that TIFF is usually used for, and can even do tiled rendering, where it only loads into memory the chunks of the image you are zoomed in on, at that resolution. That means it takes less time to render those chunks of gigantic images, and it uses less memory to do so.

FLIF is an incredibly powerful format that offers a lot. It has archival, scientific, and web purposes. It likely won't be useful to those in 3D or gaming, as formats like TGA are better suited to fast reading/loading, whereas FLIF is a little slower to decompress than PNG. But then again, game devs have been optimizing resource usage for a long time, giving models that are further away lower poly counts and lower-quality bitmaps/textures. So having one FLIF image that works at any distance may be of use to them, where they truncate the file at different lengths depending on distance from the camera (a rough sketch of that idea follows below).

There is a lot to be explored with this format, and it isn't even finalized yet.
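
A rough sketch of that truncation idea (file names, fractions, and the partial-decoding step are assumptions, not a documented FLIF interface): keep only a prefix of the file for lower levels of detail.

```python
def truncate_flif(path, fraction, out_path):
    """Keep only the first `fraction` of the file's bytes (hypothetical LOD trick)."""
    data = open(path, "rb").read()
    with open(out_path, "wb") as f:
        f.write(data[: int(len(data) * fraction)])

# Hypothetical usage: smaller prefixes for models further from the camera.
truncate_flif("texture.flif", 0.25, "texture_lod2.flif")
truncate_flif("texture.flif", 0.50, "texture_lod1.flif")
# A decoder that supports partial files could then render each prefix at reduced detail.
```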

~~~
eddieh
TIFF can do tiled images too and there is nothing stopping someone from adding
a TIFF extension for better compression or anything else imaginable.

------
IgorPartola
Awesome! So when can I have it in browsers?

------
thealistra
Encoding and decoding speeds are acceptable, but should be improved

~~~
have_faith
You must be from the Ministry of Encoding.

------
aheeki
hello pied piper

------
mattybrennan
But what's the Weissman score?

------
chris_wot
I wonder if IE will adopt it? Firefox and Chrome are very responsive,
Microsoft not so much.

~~~
sccxy
Safari is much bigger problem than Edge/IE.

No hope for WebM in Safari. WebM for Edge is in development.

~~~
Loque
It really depends on what it is - Edge is missing some rather fundamental modern CSS features (filters immediately spring to mind).

~~~
timsneath
Actually, Edge has CSS filters as of last November:
[https://wpdev.uservoice.com/forums/257854-microsoft-edge-
dev...](https://wpdev.uservoice.com/forums/257854-microsoft-edge-
developer/suggestions/6261306-filters?ref=userfacing)

Test drive site here: [https://dev.windows.com/en-us/microsoft-
edge/testdrive/demos...](https://dev.windows.com/en-us/microsoft-
edge/testdrive/demos/css3filters/)

~~~
Loque
I apologise, I meant blend modes:

[https://dev.windows.com/en-us/microsoft-
edge/platform/status...](https://dev.windows.com/en-us/microsoft-
edge/platform/status/backgroundblendmode)

They have it marked as low priority, even though it is a standard present in all other browsers.

I think that roadmap is a solid indication that Edge is going to be our next lowest common denominator for web design/development for the next few years... I really don't care about its native ES6 support.

Maybe SVG 2.0...

This is the link of doom for me:
[https://wpdev.uservoice.com/forums/257854-microsoft-edge-
dev...](https://wpdev.uservoice.com/forums/257854-microsoft-edge-
developer/filters/top)

------
gr3yh47
relevant xkcd: [https://xkcd.com/927/](https://xkcd.com/927/)

------
praeivis
[https://xkcd.com/927/](https://xkcd.com/927/)

