
FLIF – Free Lossless Image Format - scotteh
https://flif.info/
======
jjcm
Always cool to see new visual compression libraries hit the scene. That said I
think the hardest part isn't the math of it, but the adoption of it.

Likely the format with the best chance of overthrowing the jpg/gif/png
incumbents is AVIF. Since it's based on AV1, you'd get hardware acceleration
for decoding/encoding once it starts becoming a standard, and browser support
will be trivial to add once AV1 has wide support.

Compression-wise, AVIF is performing at about the same level as FLIF (15-25%
better than WebP, depending on the image), and is also royalty free. The leg up
it has on FLIF is that the Alliance for Open Media[1] is behind it, which is a
consortium of companies including: "Amazon, Apple, ARM, Cisco, Facebook,
Google, IBM, Intel Corporation, Microsoft, Mozilla, Netflix, Nvidia, Samsung
Electronics and Tencent."

I'm really excited for it and I hope it actually gets traction. It'd be lovely
to have photos / screenshots / gifs all able to share a common format.

[1]
[https://en.wikipedia.org/wiki/Alliance_for_Open_Media](https://en.wikipedia.org/wiki/Alliance_for_Open_Media)

~~~
BurningCycles
>Likely the format with the best chance of overthrowing the jpg/gif/png
incumbents is AVIF

I used to think the same as well; however, I now think JPEG XL is poised to be
the 'winner' among next-gen image codecs. It's royalty free, has great lossy
and lossless compression that is said to beat the competition, and provides a
perfect upgrade path for existing JPEGs, since it can losslessly recompress
them into the JPEG XL format with a ~20% size decrease (courtesy of the PIK
project).

It's slated for standardisation within a couple of weeks, it will be very
interesting to see large-scale comparisons of this codec against the likes of
AVIF and HEIF.

~~~
gardaani
I hope that JPEG XL will be simpler than the competitors. If its compression
ratio is similar to AVIF's and it can do HDR, then I'm all for it!

AVIF (and its image sequences) seems to be fairly complicated. Here are a few
comments [1] about it:

 _" Given all that I'm also beginning to understand why some folks want
something simpler like webp2 :)"_

 _" Despite authoring libavif (library, not the standard) and being a big fan
of the AV1 codec, I do occasionally find HEIF/AVIF as a format to be a bit "it
can do anything!", which is likely to lead to huge or fragmented/incomplete
implementations."_

Anyway, instead of adding FLIF, AVIF, BPG, and lots of similar image formats
to web browsers, I think one good format is enough, and JPEG XL might be it.
Once something has been added to web browsers, it can't be removed.

Safari hasn't added support for WebP (which is good; there's no need for WebP
after AVIF/JPEG XL is out) and it hasn't added support for HEIF (which is
weird, considering Apple is using it on iOS), but maybe they know that there's
no need to rush.

[1]
[https://bugs.chromium.org/p/chromium/issues/detail?id=960620...](https://bugs.chromium.org/p/chromium/issues/detail?id=960620#c33)

~~~
jhabdas
Browsers don't need the codecs anyway. They just need Wasm and a decoder.
Clients can do the work[1] until the hardware supports the goods.

[1] [https://git.habd.as/comfusion/fractal-forest/src/branch/master/Dockerfile](https://git.habd.as/comfusion/fractal-forest/src/branch/master/Dockerfile)

~~~
bscphil
The example you linked to is pretty telling: not only do the BPG images decode
more slowly than natively supported images, but the JavaScript decoding
approach apparently breaks the browser's (Firefox's) color management. I think
native support is needed for newer codecs to be viable for more than simple
demos.

~~~
spider-mario
Chrome appears to interpret the canvas as sRGB and to convert from that, but
that means that images decoded that way are effectively limited to sRGB until
the canvas API allows specifying other colorspaces.

~~~
bscphil
In Firefox, these canvas images appear stretched into the full gamut of the
monitor (oversaturated), even though I have a color profile and have full
color management enabled in about:config.

------
sand500
In terms of getting it into chrome:
[https://bugs.chromium.org/p/chromium/issues/detail?id=539120](https://bugs.chromium.org/p/chromium/issues/detail?id=539120)

> Keep in mind that the author of FLIF works on the new FUIF
> ([https://github.com/cloudinary/fuif](https://github.com/cloudinary/fuif)),
> which will be part of JPEG XL. So, probably FLIF will be deprecated
> soon. And as JPEG XL is also based on Google's PIK, there is a high
> probability that Google will support this new format in their Blink engine.

[https://jpeg.org/jpegxl/index.html](https://jpeg.org/jpegxl/index.html)

~~~
jiofih
Is JPEG XL also suited to replace PNG like FLIF?

~~~
andrius4669
yeah. it's partially based on
[https://github.com/cloudinary/fuif](https://github.com/cloudinary/fuif) so if
you use that part of JPEG XL you'll get something similar out of it.

~~~
janwas
Yes indeed, for the lossless mode (based on tech by the same author) we're
seeing about 45-82% of GIF size, and 60-80% of (A)PNG depending on content.

~~~
JyrkiAlakuijala
Other famous compression gurus, including Alex Rhatushnyak and Lode
Vandevenne, also contributed to the lossless side of JPEG XL.

------
jonsneyers
FLIF author here. I have been working on FUIF and JPEG XL the past two years.
FUIF is based on FLIF but is better at lossy. JPEG XL is a combination of
Google's PIK (~VarDCT mode) and FUIF (~Modular mode). You'll be able to mix
both codecs for a single image, e.g. VarDCT for the photo parts, Modular for
the non-photo parts and to encode the DC (1:8) in a super-progressive way.

I'm very excited about JPEG XL, it's a great codec that has all the technical
ingredients to replace JPEG, PNG and GIF. We are close to finalizing the
bitstream and standardizing it (ISO/IEC 18181). Now let's hope it will get
adoption!

------
tangm
It looks like the creator (Jon Sneyers) has since (2019) made another image
format more focused on lossy compression, FUIF[0], which has itself been
subsumed by the JPEG XL format[1]. I hope the "JPEG" branding doesn't make
folks think that JPEG XL isn't also a lossless format!

[0] [https://github.com/cloudinary/fuif](https://github.com/cloudinary/fuif)
[1] [https://jpeg.org/jpegxl/index.html](https://jpeg.org/jpegxl/index.html)

~~~
pnako
JPEG XL is either lossy compression for large pictures, or something named by
people who suck at branding.

~~~
andrewzah
Made by the same folks who brought you JPEG 2000. /s

~~~
pmarreck
I was a fan/proponent of that! Whatever happened to wavelet compression?

~~~
loeg
Patents.

~~~
ksec
Most of them have expired. Wavelets are something that looks good in theory,
but in practice they couldn't beat the millions of man-hours of work put into
standard DCT.

------
yason
Yet, I think it's really hard to change what has stuck. The gains have to be
enormous to warrant the hassle of trying to keep publishing in and supporting
a new image format until it just works for everyone.

The reason we have PNG and JPEG is that they are, all in all, more than good
enough. Yes, the dreaded "good enough" argument surfaces again stronger than
ever. They are also easy to understand, i.e. use JPEG for lossy photos and PNG
for pixel-exact graphics. But most importantly they both compress
substantially in comparison to uncompressed images (like TIFF) and both have
long ago reached the level of compression where improving compression is
mostly about diminishing gains.

As there's less and less data left to compress further the compression ratio
would need to go higher and higher for the new algorithm to make even a dent
in JPEG or PNG in any practical sense.

Also, image compression algorithms try to solve a problem that has been
gradually made less and less important each year with faster network
connections. Improvements in image compression efficiency are way outrun by
improvements in the network bandwidth in the last 20 years. The available
memory and disk space have grown enormously as well.

For example, it's not much of a problem if a website's background image
compresses down to 500 KB rather than 400 KB, because the web page itself is
10 MB and always takes 10 seconds to load regardless of which decade it is. If
you could squeeze half a megabyte off the website's image data, the site
wouldn't effectively be any faster because of it (but maybe marginally so,
allowing the publisher to add another half-megabyte of ads or other useless
crap instead).

~~~
jillesvangurp
The reason we have JPEG is that PNG is not good enough for photos, and people
prefer the lossy compression of JPEG over using PNG. The reason other lossy
formats are struggling is that they are still lossy. This promises to
basically be good enough for just about anything. That sounds like a big
promise, but if it's true, there's very little stopping the major browsers
from implementing support for this. I'd say progressive decompression sounds
like a nice feature to have for photo websites.

Compression is still majorly important on mobile. Mobile coverage is mostly
not great except maybe in bigger cities where you get to share the coverage
with millions of others. Also mobile providers still throttle connections,
bill per GB, etc. So, it matters. E.g. Instagram adopting this could be a big
deal. All the major companies are looking to cut bandwidth cost. That's also
what's driving progress for video codecs. With 4K and 8K screens becoming more
common, jpeg is maybe not good enough anymore.

~~~
GoblinSlayer
File size matters for networks, not compression. Compressors have an interface
where you specify desired file size and the program tries to produce a file of
that size. With better compression algorithm the image will be just of a
better quality, time to download and cost per GB will be the same.

~~~
spider-mario
> Compressors have an interface where you specify desired file size and the
> program tries to produce a file of that size.

That’s not really the case for JPEG XL, where the main parameter of the
reference encoder is in fact a target quality. There is a setting to target a
certain file size, but it just runs a search on the quality setting to use.
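
To illustrate what that kind of search looks like, here's a rough Python sketch; `encode_at_quality` is a stand-in for whatever encoder you happen to be driving, not an actual JPEG XL API:

```python
def encode_to_target_size(image, target_bytes, encode_at_quality,
                          lo=0.0, hi=100.0, rounds=8):
    """Bisect on the quality setting until the output is close to target_bytes."""
    best = encode_at_quality(image, lo)      # lowest quality = smallest output
    for _ in range(rounds):
        mid = (lo + hi) / 2
        data = encode_at_quality(image, mid)
        if len(data) <= target_bytes:
            best, lo = data, mid             # fits: try a higher quality next
        else:
            hi = mid                         # too big: lower the quality
    return best
```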

------
Latty
The section "Works on any kind of image" is really misleading, as it mentions
JPEG as a lossy format (alongside JPEG 2000) then says "FLIF beats anything
else in all categories."

It really needs a giant caveat saying "lossless". I mean, that's still great
and impressive, but it clearly doesn't erase the need for a user to switch
formats as a lossless format is still not suitable for end users a lot of the
time.

(It does have a lossy mode, detailed on another page, but they clearly show it
doesn't have the same advantage over other formats there.)

~~~
azinman2
It literally stands for "FLIF - Free Lossless Image Format", and the first
sentence is "FLIF is a novel lossless image format which outperforms PNG,
lossless WebP, lossless BPG, lossless JPEG2000, and lossless JPEG XR in terms
of compression ratio."

Seems like they're doing a pretty decent job of communicating that it's
lossless, to me.

~~~
kragen
It would be reasonable to interpret the shorter boast as making the very
surprising claim that it beats anything else, including lossy JPEG, in all
categories of performance, including compression ratio. As it turns out, they
don't intend to claim that, because it's not true. It's _probably_ impossible
for a lossless file format to do that, even for commonly occurring images.
(It's certainly possible for a lossless image format to do that for images
that have a lot of redundancy in a form that JPEG isn't sophisticated enough
to recognize.)

~~~
lisper
> It's probably impossible for a lossless file format to do that, even for
> commonly occurring images.

It's actually _provably_ impossible using a simple counting argument. A lossy
algorithm can conflate two non-identical images and encode them the same way
while a lossless algorithm can't, so on average the output of a lossless
algorithm is necessarily larger than a lossy one because it has to encode more
possible outputs.
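
For anyone who wants the counting argument spelled out, here's a short sketch (this is the "can't shrink every input" version; the average-case phrasing above needs an extra step about reusing codewords):

```latex
% Standard pigeonhole argument: no lossless code can shorten every input.
\[
  \#\{\text{bit strings shorter than } n\}
    = \sum_{k=0}^{n-1} 2^k = 2^n - 1
    < 2^n
    = \#\{\text{bit strings of length } n\},
\]
so an injective (lossless) encoder cannot map every length-$n$ input to a
strictly shorter output. A lossy encoder may map many distinct inputs to the
same short output, which is exactly the freedom a lossless encoder gives up.
```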

~~~
kragen
Yes, of course it's impossible to losslessly compress all possible images by
even a single bit. But how predictable is the content of _typical_ images? How
much structure do they have? They certainly have a lot more structure than PNG
or even JPEG exploits. Some of the “loss” of lossy JPEG is just random sensor
noise, which places a lower bound on the size of a losslessly compressed
image, but we have no idea how much.

~~~
lisper
Doesn't matter. Whatever you do to compress losslessly you can always do
better if you're allowed to discard information. And the structure is part of
the reason. For lossless compression you have to keep all the noise, all the
invisible details. With lossy compression you're allowed to discard all that.

~~~
kragen
Yes, that's what I said.

~~~
lisper
Just for the record:
[https://news.ycombinator.com/item?id=22272837](https://news.ycombinator.com/item?id=22272837)

------
bawolff
The very obvious thing missing from the site is decode and encode benchmarks.
It's very context dependent, but if it had a long decode time, that could
outweigh the bandwidth savings.

~~~
kccqzy
That's exactly what I thought. About ten years ago everyone was rushing to
distribute large downloads in xz format. These days some have started to move
away from it just because of how slow it is to compress and decompress.

~~~
yjftsjthsd-h
That's part of the benefit of zstd, as I understand it; near-xz compression
ratios but _much_ faster.

~~~
loeg
Compression is only mildly faster or on par with xz, but decompression is (at
similar ratios) vastly faster. Which really helps consumers of compressed
blobs.

~~~
yjftsjthsd-h
Yeah, that's a useful distinction. So it's excellent for e.g. packages (hence
[https://www.archlinux.org/news/now-using-zstandard-instead-of-xz-for-package-compression/](https://www.archlinux.org/news/now-using-zstandard-instead-of-xz-for-package-compression/)),
but iffy for write-once-read-hopefully-never backups. (Although I've heard it
suggested that this might flip again the moment you start test-restoring
backups regularly.)

~~~
loeg
Agreed. I think I'd take zstd over xz for WORN backups anyway, just because
it's pretty reliable at detecting stream corruption. (Then again, I suggest
generating par2 or other FEC against your compressed backups so that's not a
problem.)

~~~
terrelln
Zstd has a 32-bit checksum over the uncompressed data, which is enabled by
default on the CLI.

~~~
loeg
(I know you know this, as one of the principals; this is for other readers.)

And importantly, that 32-bit checksum is a pretty _good_ checksum; a truncated
XXH64():
[https://tools.ietf.org/html/rfc8478#section-3](https://tools.ietf.org/html/rfc8478#section-3)
That's about as high-quality as you can expect from 32 bits. It's not, say,
Fletcher-32.
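
For the curious, that checksum is straightforward to reproduce; a small Python sketch assuming the third-party `xxhash` package (per RFC 8478: XXH64 with seed 0, low 4 bytes kept):

```python
import xxhash

payload = open("some_file", "rb").read()   # the uncompressed data

# zstd's content checksum: XXH64 over the content, seed 0, truncated to 32 bits.
checksum32 = xxhash.xxh64(payload, seed=0).intdigest() & 0xFFFFFFFF
print(f"content checksum: 0x{checksum32:08x}")
```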

------
pier25
They have a polyfill for browsers:

[https://github.com/UprootLabs/poly-flif](https://github.com/UprootLabs/poly-flif)

It weighs 77 kB gzipped, which is a no-no in my book. Jesus, my current Mithril
SPA weighs 37 kB gzipped. Not the JS bundle, the complete application.

~~~
hug
While weight is a factor in professional car racing, and while I've built a
go-kart that is significantly lighter in weight than a professional racing
car, it is not a particularly impressive achievement that I have managed to do
so, and I wouldn't think to criticize a race team for the weight of their
vehicle.

~~~
andrewzah
Good thing we’re discussing JavaScript library sizes and not cars in random
contexts, then.

77kb gzipped is massive considering it’s only doing one thing. If I want my
website to load in ~100 milliseconds or less (or even 1 second or less!), I
absolutely do need to pay attention to all the libraries I add.

I can and do criticize software developers for making hideously bloated
websites because they don’t pay attention to what they add. Not only are a lot
of modern websites wasteful, they’re painful or outright useless on slow
mobile connections—not a problem for software developers on fibre networks,
beefy dev machines, etc.

~~~
hug
So it turns out that the use case for a polyfill of a 77 kB image decoder isn't
particularly suited to a site you want to load in sub-100ms. Oddly, though,
that's not the only use case in the world, and there are circumstances where
saving ~30% on every image load turns out to be significantly more efficient
than not loading a 77 kB JS library.

In other news, I also chose to forgo adding a 15 lb. fuel pump to my go-kart,
despite the fact that every NASCAR team in the world uses one, and my go-kart
drives just fine. I should go tell the NASCAR teams that fuel pumps are a
terrible idea. I clearly know something they don't.

~~~
pier25
You are missing the point. It's not only about the transferred bytes. You have
to actually execute 77kB to display an image. Every image.

~~~
hug
I am most assuredly not missing the point. Your use-case is not my use-case.

I sometimes render 50 MB PNGs. The execution of the decoder is insignificant
compared to everything else.

My go-kart also gets more miles to the gallon than NASCAR team cars.

~~~
pier25
> I sometimes render 50mb PNGs. The execution of decoder is insignificant
> compared to everything else.

Fair point but you have to admit that is a very niche use case.

Have you compared the execution to native performance? Even decoding a 50MB
JPEG in Chrome with turbojpeg is going to be a hard pill to swallow.

~~~
hug
I do know it's a niche use case. And I for sure wouldn't recommend that every
man and his dog use the polyfill. I'm not even sure it's right for the
scenario I'm thinking of. But I can definitely contrive a case where it is.

I haven't benchmarked this specific thing, because I've never used it. But I
might, because it sounds like a fun thing to do.

------
ChrisLomont
Jon Sneyers, the creator of FLIF speaking about Jpeg XL

[https://www.youtube.com/watch?v=lqi5U6dxeZU](https://www.youtube.com/watch?v=lqi5U6dxeZU)

------
vardump
> FLIF is based on MANIAC compression. MANIAC (Meta-Adaptive Near-zero Integer
> Arithmetic Coding) is an algorithm for entropy coding developed by Jon
> Sneyers and Pieter Wuille. It is a variant of CABAC (context-adaptive binary
> arithmetic coding), where instead of using a multi-dimensional array of
> quantized local image information, the contexts are nodes of decision trees
> which are dynamically learned at encode time.

I wonder if tANS [0] could be used instead of arithmetic coding for
(presumably) higher performance. Although I guess the authors must be aware of
ANS. Would be interesting to hear why it wasn't used instead.

[0]: Tabled asymmetric numeral systems
[https://en.wikipedia.org/wiki/Asymmetric_numeral_systems#tAN...](https://en.wikipedia.org/wiki/Asymmetric_numeral_systems#tANS)
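
Not FLIF's actual code, but for readers who haven't seen the idea, here's a toy sketch of "contexts are nodes of a learned decision tree over local properties", with the entropy coder itself (arithmetic or ANS) left out:

```python
class ContextTree:
    """Toy MANIAC-flavoured context model (illustrative only, not FLIF's code).

    Inner nodes test one local property (e.g. left neighbour, top neighbour,
    gradient) against a threshold; each leaf keeps adaptive bit statistics
    that would be fed to an arithmetic or ANS coder.
    """

    def __init__(self, prop=None, threshold=0, low=None, high=None):
        self.prop, self.threshold = prop, threshold
        self.low, self.high = low, high          # children; both None => leaf
        self.counts = [1, 1]                     # Laplace-smoothed bit counts

    def leaf_for(self, props):
        node = self
        while node.low is not None:
            node = node.low if props[node.prop] <= node.threshold else node.high
        return node

    def p_one(self):
        return self.counts[1] / (self.counts[0] + self.counts[1])

    def update(self, bit):
        self.counts[bit] += 1                    # adapt after (de)coding the bit


# Usage sketch: pick a context from local pixel properties, read off the
# probability the entropy coder would use, then update that context.
tree = ContextTree(prop=0, threshold=128, low=ContextTree(), high=ContextTree())
ctx = tree.leaf_for([200, 190])   # e.g. [left neighbour, top neighbour]
p = ctx.p_one()
ctx.update(1)
```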

~~~
eln1
Jon Sneyers is currently working on JPEG XL, which uses ANS:
[https://www.spiedigitallibrary.org/conference-proceedings-of-spie/11137/2529237/JPEG-XL-next-generation-image-compression-architecture-and-coding-tools/10.1117/12.2529237.full?SSO=1](https://www.spiedigitallibrary.org/conference-proceedings-of-spie/11137/2529237/JPEG-XL-next-generation-image-compression-architecture-and-coding-tools/10.1117/12.2529237.full?SSO=1)

------
RcouF1uZ4gsC
Seems really cool. It seems that the biggest gatekeepers for image formats are
Apple and Google because of iPhone/Safari and Android/Chrome.

Basically, if you want to be able to easily view the image on a mobile device
or on the web, the format needs their blessing.

~~~
gtirloni
Well, Chromium is open source so it's open to anyone to implement it.

~~~
Polylactic_acid
It's about as useful as sending a politician an email. Technically the option
exists, but it's not of much use.

------
parvenu74
I want to root for it, but I think the LGPL license ruins it as long as there
are BSD- or MIT-licensed alternatives that are good enough. Firefox might
implement it, but I think there's zero chance that Chromium or Safari add
support.

~~~
nine_k
I think that LGPL for _the encoder_ is exactly the right choice. A format's
strength is in uniform support; taking an MIT-licensed encoder and making an
improved but incompatible format wouldn't be great for end users.

~~~
DagAgren
Is it better that no tools will actually be able to export the format?

Also, a GPL-licensed encoder will in no way stop incompatible extensions.

~~~
theon144
Uh, LGPL explicitly doesn't prevent even proprietary software from using the
encoder?

~~~
DagAgren
Not explicitly, but in practice, it does. It is often not viable to jump
through the hoops required to do it. That is, if your lawyers will even let
you try.

~~~
dspillett
_> It is often not viable to jump through the hoops required to do it._

Having it as a dynamic library isn't _that_ problematic a hoop is it?

> _That is, if your lawyers will even let you try._

This may be a genuine issue. Many have a strict "nothing related to GPL"
policy driven at least partly by misunderstanding or paranoia.

~~~
DagAgren
Having it as a dynamic library is not enough. It has to be a dynamic library
_that you can replace_. This doesn't work when, for instance, distributing
through many app stores.

------
mindslight
I use this to store archives of scanned documents. The last thing I want is to
scan something only to later find some subtle image artifact corruption
(remember that case of copy machines modifying numbers by swapping glyphs?). I
store checksums and a static flif binary along with the archive. It's
definitely overkill, but a huge win compared to stacks of paper sitting
around.

My intuition was informed by choosing FLAC for my music collection ~15 years
ago, and that working out fantastically. If a better format does come along,
or if I change my mind, I can always transcode.

~~~
colejohnson66
The issue with copy machines modifying glyphs isn't a problem with all
algorithms; really, only that one. Instead of just discarding data like a
lossy algorithm, it would notice similar sections of the image and make them
the same.

Also, why not PNG?

~~~
mindslight
Yeah, I'll admit that specific example wasn't the most relevant. Really I just
want to be able to scan papers and then be confident enough to destroy them
without having to scrutinize the output. Rather than committing to specific
post-processing I settled on just keeping full masters of the 300dpi
greyscale. Even at 5M/page, that's just 100GB for 20k pages.

I don't think PNG provided meaningful compression, due to the greyscale. If
FLIF didn't exist, I certainly could have used PNG, for being nicer than PGM.
But using FLIF seemed like a small compromise to pay for going lossless.

JPEG would have sufficed, but JPEG artifacts have always bugged me. I also
considered JPEG 2000 for a bit, but that left me with a concern about how
stable/future-proof the actual implementations are. Lossless is bit-perfect,
so that concern is alleviated.

------
ronyfadel
Before Apple/Google/Mozilla adopt in their browsers, I doubt this will get any
traction.

~~~
floatingatoll
With the reference encoder licensed under LGPLv3, I doubt any browser team
will be able to incorporate this work into their product. They would need to
do a full clean room reimplementation simply to study it (since GPLv3 seems
unacceptable to them, and LGPLv3 can’t coexist with GPLv2, and so forth and so
on). It’s really unfortunate that the FLIF team chose such a restrictive
license :(

EDIT: Their reference JS polyfill implementation is LGPLv3 as well, which may
further harm adoption.

~~~
yellowapple
The decoder-only version of the reference implementation (libflif_dec) uses
the Apache 2.0 license (specifically for this reason, I'd assume). Browsers
shouldn't need to encode FLIF images very often, so decoder-only would be fine
for that use case.

~~~
floatingatoll
Evaluating the efficacy of the new image codec is not possible using only a
decoder.

I was unable to find an APL2 JS polyfill decoder. Does one exist?

~~~
sjwright
How would the _evaluation_ of any codec be hampered by any open source
license?

~~~
Polylactic_acid
Not even the most restrictive copyleft license hampers evaluation. You are
literally free to do whatever you want with the program on your own hardware.
Its only when you start redistributing that the license kicks in.

~~~
floatingatoll
What degree of rewriting would be necessary to neutralize the LGPLv3 license
restrictions? Would I be sued if I used the same logic flow but handcrafted
every statement from a flow diagram generated from the source code?

If I study the source and then make a single change to my own algorithm to
incorporate the secret sauce in order to test its efficacy using my existing
test suites, have I infected my own codebase with LGPLv3?

How can I test this code in the real world with real users if I’m not allowed
to redistribute it? Would I be required to pay users as contractors to
neutralize the redistribution objection?

Etc, etc.

EDIT: Neutralizing LGPLv3 would be necessary to combine this code with GPLv2
code and many other OSF-approved open source licenses, which is why that
particular line of reasoning is interesting to me.

~~~
sjwright
Your question makes no sense. If a license is so restrictive to you that you
can’t deploy it to testers, why would you be wanting to evaluate it in the
first place?

If you’re not distributing the result to other people, literally nothing you
described matters at all.

As for integrating ideas, as long as you don’t copy actual lines of code,
simply learning ideas and techniques from any OSS doesn’t cause license
infection.

That’s not evaluating a codec though, is it? You’ve gone far beyond the scope
of this thread.

------
dang
Related from 2016:
[https://news.ycombinator.com/item?id=12626451](https://news.ycombinator.com/item?id=12626451)

[https://news.ycombinator.com/item?id=11238190](https://news.ycombinator.com/item?id=11238190)

2015:
[https://news.ycombinator.com/item?id=10317790](https://news.ycombinator.com/item?id=10317790)

------
userbinator
I wonder how it compares with simply LZMA'ing (i.e. 7zip) a BMP. In my
experience that has always been significantly smaller than PNG (which is
itself a low bar --- deflate/zlib is a simple LZ+Huffman variant which is
nowhere near the top of general-purpose lossless compression algorithms.)

Along the same lines, I suspect BMP+LZMA would likely be beaten by BMP+PPM or
BMP+PAQ, the current extreme in general-purpose compression.
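
This is easy to try on any image you have lying around; a rough Python sketch with Pillow ("photo.png" is a placeholder, and how big the gap is depends heavily on the content):

```python
import io
import lzma
from PIL import Image

img = Image.open("photo.png").convert("RGB")     # any test image
raw = img.tobytes()                              # uncompressed pixels, BMP-style

png_buf = io.BytesIO()
img.save(png_buf, format="PNG", optimize=True)

print("raw :", len(raw))
print("PNG :", len(png_buf.getvalue()))
print("LZMA:", len(lzma.compress(raw, preset=9)))
```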

~~~
PetahNZ
Do you have any examples of zipped BMPs compared to PNG? It seems strange if
that was indeed better.

~~~
Dylan16807
It shouldn't be that surprising. PNG has a maximum window size of 32KB. That
means you could use a small set of identical tiles to make an image, and PNG
would have to store a new copy every row, because the previous row is out of
range.
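
A quick way to see that effect (exact numbers depend on encoder settings, but the gap tends to be dramatic): build an image from one random tile repeated in a grid, so the redundancy sits megabytes apart, far outside deflate's window.

```python
import io
import lzma
import os
from PIL import Image

# One random 256x256 RGB tile, repeated in an 8x8 grid => a 2048x2048 image.
tile = Image.frombytes("RGB", (256, 256), os.urandom(256 * 256 * 3))
img = Image.new("RGB", (2048, 2048))
for y in range(0, 2048, 256):
    for x in range(0, 2048, 256):
        img.paste(tile, (x, y))

png_buf = io.BytesIO()
img.save(png_buf, format="PNG", optimize=True)

# PNG/deflate can't reach the repeats that lie megabytes apart;
# LZMA's much larger dictionary can.
print("PNG :", len(png_buf.getvalue()))
print("LZMA:", len(lzma.compress(img.tobytes())))
```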

------
canistel
Sorry to ask a very amateurish question, but how is lossless compression of an
image different from regular, run-of-the-mill compression (zip, 7z)? Is there
any sort of underlying pattern or feature unique to image data that is
leveraged/exploited for lossless image compression?

~~~
artificialidiot
Yes. If you examine PNG format, it actually uses pixels around a pixel to
predict its value and compress the difference, which is much closer to 0
values. It actually uses zlib to compress just like gzip.
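
For the curious, here's the gist of one of those predictors, the Paeth filter from the PNG spec, as a small Python sketch; the encoder stores `value - prediction` per byte and hands the mostly-near-zero residuals to zlib:

```python
def paeth_predictor(left, up, up_left):
    """PNG's Paeth predictor: pick the neighbour closest to left + up - up_left."""
    p = left + up - up_left
    pa, pb, pc = abs(p - left), abs(p - up), abs(p - up_left)
    if pa <= pb and pa <= pc:
        return left
    if pb <= pc:
        return up
    return up_left


def paeth_filter_row(row, prev_row):
    """Residuals for one greyscale scanline (what actually gets handed to zlib)."""
    out = []
    for x, value in enumerate(row):
        left = row[x - 1] if x > 0 else 0
        up = prev_row[x] if prev_row is not None else 0
        up_left = prev_row[x - 1] if (prev_row is not None and x > 0) else 0
        out.append((value - paeth_predictor(left, up, up_left)) % 256)
    return out


# Example: a smooth gradient row filters down to mostly tiny residuals.
print(paeth_filter_row([10, 12, 14, 16, 18], prev_row=[9, 11, 13, 15, 17]))
```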

------
Camillo
Looks great. What about decoding and encoding speed?

~~~
MonadIsPronad
The TODO list mentions optimization, so probably not on par with the status
quo yet, I'd guess.

------
TheRealPomax
Pretty cool, but where are the links to the issues on mozilla, chromium,
webkit, and edge's bug trackers to add native support for it?

As an unencumbered open source technology, it should breeze through legal's OK
pretty quickly, and getting it integrated could certainly take a bit of time,
but should just be part of the FLIF roadmap itself, if the idea is to actually
get this adopted.

You don't set out to come up with "one format to rule them all" and then not
follow it up by implementing libflif and sending patches upstream to all the
open-source browsers you want to see it used in =)

------
octorian
I'd be interested in a format that can replace TIFF, without the files being
quite so enormous. However, it seems like all the FLIF tools are assuming
you're coming from PNG-land.

~~~
frandroid
In what way has PNG not already replaced TIFF?

~~~
emptybits
BTW, I know this because I recently embarked on some photo
import/edit/management scripts and wondered the same thing you did: "Why isn't
PNG a thing yet??" There are reasons. A few, from minor to major IMO:

TIFF acknowledges EXIF and IPTC as first class data. PNG added EXIF data as an
official extension a couple of years ago and I know ExifTool does support it
but I'd want to check all applications in a workflow for import/edit/export
support before trusting it.

TIFF supports multiple pages (images) per file and also multilayer images (ala
photo-editing).

TIFF supports various colour spaces like CMYK and LAB. AFAIK, PNG only
supports RGB/RGBA, so for image or print professionals, that could be a
non-starter.

So I get why PNG can't warm photographers' hearts yet. Witness still the most
common workflows: RAW->TIFF & RAW->JPEG.

------
SethTro
The progressive loading video is great!

~~~
Polylactic_acid
This is something I wish was used on the web. Imagine instead of creating a
high compression image for use on a web page and then having a link for the
full res one, you could just say "Load this image at 60%" and if users right
click and save it would download the entire image.
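
The client side of that would be simple enough; a hypothetical sketch using an HTTP Range request (assuming the server honours ranges and you already know, or HEAD-request, the full size):

```python
import urllib.request

url = "https://example.com/photo.flif"        # placeholder
full_size = 1_000_000                          # e.g. from a prior HEAD request
fraction = 0.6                                 # "load this image at 60%"

req = urllib.request.Request(
    url, headers={"Range": f"bytes=0-{int(full_size * fraction) - 1}"}
)
with urllib.request.urlopen(req) as resp:
    partial = resp.read()

with open("photo_preview.flif", "wb") as f:
    f.write(partial)
```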

------
martin-adams
It sure looks impressive. I think it's important to remember that comparing it
to a lossy format will show its disadvantages. For example, on this demo:

[https://uprootlabs.github.io/poly-flif/](https://uprootlabs.github.io/poly-flif/)

If you compare with "same size JPG" and set the truncation to 80%, JPG appears
to win out in terms of clarity of the image.

------
perenzo
Introducing better compression of images and animations would be another small
step fighting climate change. Less data, less transfer, less energy
consumption!

[https://www.dw.com/en/is-netflix-bad-for-the-environment-how-streaming-video-contributes-to-climate-change/a-49556716](https://www.dw.com/en/is-netflix-bad-for-the-environment-how-streaming-video-contributes-to-climate-change/a-49556716)

~~~
flir
But more energy required to decompress, surely? (Not counting the compression
step because I assume that's marginal at scale).

~~~
maximegarcia
Not necessarily; the decoding can be stopped once you have enough usable
information for how the image will be used (display size, ...), all from the
same source image. That's neat!

See the responsive part in
[https://flif.info/example.html](https://flif.info/example.html)

We can imagine decoding taking into account battery save mode, bandwidth save
mode...

------
aidenn0
Could you end up with a lossy format just by truncating a FLIF? The Adam7
comparison shows it as looking reasonable at about 5% transfer.

~~~
chime
I think yes. From their site:
[https://flif.info/responsive.html](https://flif.info/responsive.html)

> A FLIF image can be loaded in different ‘variations’ from the same source
> file, by loading the file only partially. This makes it a very appropriate
> file format for responsive web design. Since there is only one file, the
> browser can start downloading the beginning of that file immediately, even
> before it knows exactly how much detail will be needed. The download or file
> read operations can be stopped as soon as sufficient detail is available,
> and if needed, it can be resumed when for whatever reason more detail is
> needed — e.g. the user zooms in or decides to print the page.
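
As a rough sketch of how one might exploit that today (assuming the reference `flif` command-line tool is installed; its exact flags and tolerance for truncated input may differ): keep the first few percent of the file and decode whatever is there.

```python
import subprocess

data = open("image.flif", "rb").read()
keep = max(1, len(data) // 20)                 # roughly the first 5%

with open("preview.flif", "wb") as f:
    f.write(data[:keep])

# Decode the partial file to PNG with the reference CLI (hypothetical invocation).
subprocess.run(["flif", "-d", "preview.flif", "preview.png"], check=True)
```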

------
sovok_x
Checklist for how far it has to go to become an accepted format, applicable to
JPEG XL too:

- SIMD-optimised encoder/decoder for major programming languages;

- stable Photoshop and GIMP import/export plugins, supporting layers,
animation and transparency features;

- metadata reading/writing library for major programming languages;

- Firefox and Chrome integration, at least.

------
pulse7
It would be great to use this format for offline Wikipedia in Kiwix. This
would substantially reduce its size...

------
londons_explore
Chromium has a pluggable interface for image formats. Add it to the Chromium
source tree, and you get support in Opera, Chrome, and Edge.

Simply search "webp" in the codebase, and you can add another format like
that.

I wonder if the Chromium developers would accept a patch for it?

------
Camas
[https://web.archive.org/web/20200207000006/https://flif.info...](https://web.archive.org/web/20200207000006/https://flif.info/)

------
nemo136
You could have gone with "Free Lossless Image Compression" to get FLIC (à la FLAC).

------
qwerty456127
I have already converted all my JPEGs to WebP and configured my camera app to
save directly to WebP. I hope one day I will be able to switch to FLIF the
same way.

------
jodrellblank
If only it could focus the progressive download on areas of the picture, so it
details faces in the early part of the file, and background in the later part.

~~~
jonsneyers
JPEG XL will support exactly that! We call it saliency-based progressiveness.

------
zelienople
The macOS viewer does not work at all on any of the test images. It gives the
error "The document “5_webp_ll.flif” could not be opened."

------
The_rationalist
The upcoming, ubiquitous next-gen codec JPEG XR is heavily influenced by FLIF
and is co-created by its founder.

~~~
re
You mean JPEG XL. JPEG XR is an older codec based on Microsoft's HD Photo /
Windows Media Photo.
[https://en.wikipedia.org/wiki/JPEG_XR](https://en.wikipedia.org/wiki/JPEG_XR)

~~~
The_rationalist
Correct.

------
MonadIsPronad
Very cool, hoping to see more of this in the future. Good job team

------
dusted
We need this! :)

------
0xff00ffee
Isn't making it LGPL a problem? That means anything that touches it becomes
LGPL, right?

~~~
pgcj_poster
Firstly, it's not a "problem" when code is released under the GPL. In many
cases this is the best way to protect user freedom.

Secondly, this is the _Lesser_ GPL, which means that only modifications to the
FLIF implementation itself have to be free. It can still be linked in
proprietary programs as long as they don't make any modifications.

~~~
mkl
It can be linked in proprietary programs even if they do make modifications,
they just need to release the modified source code (of the LGPL library). The
trickier obstacle is that it is required to be able to replace the LGPL
library with another version in the proprietary program. I.e. the LGPL library
must be dynamically linked, or the linkable compiled object code for the rest
of the proprietary program must be provided so the program can be relinked
statically.

------
sd314
Not a nice idea to do the reference implementation in C++ instead of C!

~~~
nine_k
One can use C++ in radically different ways.

I would appreciate a reference implementation in Rust. Or, if not intended for
immediate linking, in something like OCaml or ATS. Clarity and correctness are
important in a _reference_ implementation, and they are harder to achieve
using C.

~~~
sd314
Best practices and rules in C++ are changing on a daily basis as the language
is still evolving. On the other hand, C is much more readable for many
programmers and researchers even with a little programming experience.
Moreover, C is more portable and helps the reference implementation be quickly
adapted for production or being used by the other compatible languages.

~~~
adev_
> Best practices and rules in C++ are changing on a daily basis as the
> language is still evolving.

Yeah "daily", C++ standard evolves every 3 years minimum and most API are
still C++11 meaning 9 years old. "Daily" right ?

This is FUD. Without even mentioning that any C++lib can expose a C API.

------
foolrush
Little mention of alpha encoding, something PNG absolutely botched.
Unsurprising.

~~~
mark-r
I've never noticed a problem with alpha in PNG; how did they botch it?

