It's Zöpfli. Gopferteckel.
(The second word is a somewhat mild cuss word - but don't try to use it as a non-native.)
Also, you can't "zopfli" something - it's a noun! You "zöpf" - or, since we're in the Alemannic German space, "zöpfle".
I understand why the umlaut gets dropped, but it super bugs me too - especially since it's as easy as adding an 'e' after the letter if you don't have an easy way to type an umlaut.
Keeping things slightly off-topic for a moment - I love Swiss German. I lived in Bavaria and speak reasonably fluent German, so I can understand bits of it, but it's mostly incomprehensible to me - in a delightful way. Its 'sing-song' nature and fun pronunciation are the best.
I also found in Switzerland people were far more willing to tolerate and respond to my German than in Germany (where people tend to switch to English), which I very much appreciated when I was still learning the language.
The problem here is that the naming isn't consistent. Sometimes it's Love and sometimes Löve. I never know what to call it, but I think the whole idea of naming a software package with diacritics is just asking for trouble. If they had transcribed it as Loeve, it would at least be obvious what's going on.
This is perhaps exacerbated by my native tongue treating ö not as an o with umlaut, but as a completely different character distinct from o and at another place in the alphabet. To me it's like naming a package Blam but half the time referring to it as Blym instead.
(Other languages such as Dutch or French use diacritics to mark a vowel as not diphthongised, but that doesn't apply here either.)
I'll try not to take that as an insult to speakers of Swedish, Finnish, Icelandic, Estonian, Hungarian, Turkish, and any of the other dozen languages which make use of the character.
Also, regarding our US-centric naming conventions: wouldn't it be very unconventional to include a non-ASCII character in the name of an imported library or called function?
They should have googled it.
Why not a <div> with a border-radius & background color? Seems you could achieve the same thing without another HTTP request (1 for each unique avatar), no need to zopfli 45,000 unique files.
Can someone elaborate on this? Why do smaller files decompress faster?
> However, remember that decompression is still the same speed, and totally safe
Wait, what? Didn't we just establish that it's faster to decompress?
At a roughly constant decompression rate, a smaller load finishes sooner: the rate stays the same, but there's simply less data to push through it.
Googling finds a benchmark which shows Zopfli's decompression rate as intermediate between deflate 1 and deflate 9. On two out of three tests, Zopfli produces smaller files but is slower to decompress than deflate 9. So "the smaller the file, the faster the decompression" isn't correct.
 PDF: https://cran.r-project.org/web/packages/brotli/vignettes/bro...
A smaller compressed file is usually going to contain fewer instructions for the decompressor to execute, so there's less work to do to reconstruct the original file.
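If you want to sanity-check that on your own data, a rough comparison with the gzip and zopfli command-line tools looks something like this ("corpus" is just a placeholder for any reasonably large file):

$ gzip -1 -c corpus > corpus.1.gz   # fastest, weakest deflate
$ gzip -9 -c corpus > corpus.9.gz   # strongest regular deflate
$ zopfli corpus                     # should write corpus.gz alongside the input
$ ls -l corpus*.gz                  # compare the compressed sizes
$ for f in corpus.1.gz corpus.9.gz corpus.gz; do time gzip -dc "$f" > /dev/null; done

All three outputs are ordinary gzip streams, so the stock decompressor reads them all; whether the Zopfli one actually decompresses faster depends on the data, as the benchmark above shows.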
The same reason why "no code" is faster than optimized code. ;)
Not bad, but if you use newer image formats, you can do better.
Lossless WebP brings it down to 429,696 bytes (using -lossless -m 6 -q 100)
FLIF with default options (which means interlaced for progressive decoding) takes it down to 322,858 bytes.
Non-interlaced FLIF reduces it further to 302,551 bytes.
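For anyone who wants to reproduce these, the encoder invocations would look roughly like this (assuming the stock cwebp and flif command-line encoders; "in.png" is a placeholder, and the flif non-interlace flag is from memory, so double-check it against your version):

$ cwebp -lossless -m 6 -q 100 in.png -o out.webp
$ flif in.png out.flif        # interlaced (progressive) by default
$ flif -N in.png out-ni.flif  # -N should disable interlacing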
User uploads a PNG, you perform the quickest compression pass but then queue up a Zopfli compression. Up front you're only returning the less-compressed file, but over time you begin serving the smaller one.
If the uploaded file or associated post is deleted then you can wipe it from the queue.
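A minimal shell sketch of that flow, assuming zopflipng is installed and files can simply be replaced in place (the queue file, the quick optipng pass, and all filenames are placeholders for whatever your stack actually uses):

# upload handler: quick/cheap pass only (or none at all), then enqueue
optipng -o1 avatar.png                      # fast optimizer as the "quickest compression"
echo avatar.png >> /var/spool/zopfli-queue  # stand-in for a real job queue

# background worker, run whenever there's spare CPU
while read -r f; do
  [ -f "$f" ] || continue                          # file/post already deleted? skip it
  zopflipng -m "$f" "$f.tmp" && mv "$f.tmp" "$f"   # swap in the smaller file once ready
done < /var/spool/zopfli-queue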
> PNGOUT also performs automatic bit depth, color, and palette reduction where appropriate.
Assuming the Zopfli numbers were created by recompressing the original, I wonder if there are any further savings to be had by recompressing the output of PNGout?
Alternatively, PNGcrush can also do the same sort of lossless bit depth and palette reduction, so I'd be curious about the combination of PNGcrush + Zopfli as well.
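Chaining them is cheap to try, assuming the pngout and pngcrush binaries are available (filenames are placeholders):

$ pngout in.png reduced.png            # PNGOUT's lossless bit-depth/color/palette reduction
$ zopflipng -m reduced.png final.png   # then Zopfli's deflate on the reduced image

$ pngcrush -reduce in.png reduced.png  # or pngcrush's lossless reductions instead
$ zopflipng -m reduced.png final.png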
(Why the down vote?)
That's not free anymore, but it's technologically easy to drastically reduce the amount of resources we use. What holds us back is that it's hard to get other people to do things like support new file formats, or even improve the output of their image manipulation programs.
No arguing needed. It's a different format. It uses a different extension ("woff2" instead of "woff"), a different magic number, a different Uniform Type Identifier string, and a different @font-face format.
> it's a relatively easy change
But it's not. Even WOFF2 isn't supported everywhere (according to http://caniuse.com/#feat=woff2). And that's a format that's fairly recent (the W3C Recommendation doc for WOFF 1.0 is dated December 2012) and only has a handful of implementations to begin with.
PNG is a format that's a lot older and is decodable by practically everyone. Even if there's only a handful of distinct implementations (and I don't actually know how many implementations there are), and even if every single implementation updated immediately, it would still take an incredibly long time for it to get deployed widely enough to actually use as a general-purpose format.
It's also worth pointing out that web fonts have built-in fallback behavior (e.g. if a browser can't handle a WOFF2 font, you can provide a WOFF font as a backup), and they're also things that are typically set up once (so generating multiple font formats is reasonable). Images don't really have fallback behavior. On the web, the WHATWG HTML Living Standard defines a <picture> element that provides fallback but it's not supported everywhere. Outside of the web there's typically no way to do fallback either (if your browser can render an image but nothing else can, saving that image to disk isn't very useful, and sending it to someone else isn't very useful either). Also, while font files are created very rarely, images are created very frequently, and most people aren't going to want to create "PNG2" images if they also have to create PNGs and deal with fallback (just look at WebP, which was released 5 years ago and still AFAIK is not used by very many people outside of Google).
If you're a designer/developer on Mac I would strongly recommend ImageOptim (https://imageoptim.com/). It supports Zopfli and has a simple drag-n-drop user interface.
Personally I've found this tool is faster and does a better job than most others and it's free:
For some image types, lossy png has a huge advantage over jpg at the same file size: there are no jpg artifacts.
584.677 zopflipng -m
580.180 zopflipng -m --lossy_transparent
576.637 pngwolf --max-stagnate-time=0 --max-time=300 --normalize-alpha --strip-optional
190.598 pngquant --speed 1
179.638 pngquant + pngwolf
Theoretically one could use one of the indexed PNG formats and only change the palette. I don't think those avatar images use that many colors (even with anti-aliasing), so 8-bit indexed PNG should be more than enough.
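That's easy enough to test, e.g. with pngquant followed by zopflipng (note pngquant will quantize, so this is only truly lossless when the image already has 256 or fewer distinct colors; filenames are placeholders):

$ pngquant 256 --speed 1 --output indexed.png in.png   # force an 8-bit palette
$ zopflipng -m indexed.png indexed-z.png               # then squeeze the deflate stream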
That would solve basically everything wrong with today's decades old formats.
$ brew install zopfli
That's assuming you have Homebrew installed; if not: http://brew.sh
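Once it's installed, basic usage is along these lines (whether the same package also ships the zopflipng binary may depend on the formula, so treat that as an assumption):

$ zopfli style.css              # writes style.css.gz, an ordinary gzip/deflate stream
$ zopflipng -m in.png out.png   # lossless PNG recompression; -m = more iterations, slower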
Listen, I'm all about inventive ways to lighten the yoke of static media on the web today.
But, in two important ways, this is not "literally free bandwidth":
1) The weaker: Despite the tone of obviousness in this article, it acknowledges that the choice of which technology to use is not made for you: there are edge cases where other methodologies are indeed superior. So, far from being free, these sorts of solutions do have a time cost.
2) The stronger: We live in a world where, on a great day, the user's realized downstream bandwidth is 20% their LAN connection; their upstream 5% or less.
Connecting to a next-door neighbor via a conventional web application served through a typical corporate ISP probably means pushing packets a thousand miles or more, only for them to come back into our community.
Complicating this issue: our name service and certificate distribution are implemented in a way that is reasonably called "incorrect."
Our ISPs have a "speak when spoken to" mentality about connectivity, and competition is rare.
A solution bragging "literally free bandwidth" needs to address this concern - let me transfer a piece of media to a next-door neighbor using the other 95% of my network interface's upstream capacity. That I'll call free bandwidth.