

WebP, JPEG XR, and Progressive JPG Support with Auto Content Negotiation - zacman85
http://blog.imgix.com/post/90838796454/webp-jpeg-xr-progressive-jpg-support-w-auto

======
billyhoffman
There are 2 huge problems with this blog post.

First, the source image is a photograph, _saved as a PNG24_ file! They then
show how the JPEG XR and WebP file sizes compare against it.

This is a worthless comparison. Spoiler alert: lossless PNG24 isn't very good
at storing photographic data, because it's not designed to.

Had imgix known how to do a proper image comparison study, they would have
used the PNG24 to generate a normal, non-progressive JPEG at the "Save for
web" quality setting of 70. That's your baseline. They should then generate
the JPEG XR, WebP, and Progressive JPEG from the source PNG24, and compare
those sizes to the size of the baseline, regular quality 70 JPEG.
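The baseline step described above can be sketched in a few lines of Python with Pillow (an assumption here; the file name and quality setting are for illustration):

```python
# Sketch of the methodology above: encode the PNG24 source as a normal,
# non-progressive JPEG at quality 70 and measure that as the baseline.
# Assumes Pillow is installed.
from io import BytesIO

from PIL import Image


def baseline_jpeg_size(png_path, quality=70):
    """Return the byte size of a non-progressive JPEG encoding of the
    PNG source; candidate formats (WebP, JPEG XR, progressive JPEG)
    should be compared against this number, not the lossless PNG."""
    img = Image.open(png_path).convert("RGB")  # drop alpha for JPEG
    buf = BytesIO()
    img.save(buf, format="JPEG", quality=quality, progressive=False)
    return buf.tell()
```

The point is that every candidate encoding is measured against this quality-70 JPEG, never against the size of the lossless PNG24.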

I have seen great performance benefits from using WebP where it makes sense,
and I discuss them in this video about Warby Parker [0]. But the imgix guys
are going about this the wrong way to explain the benefits.

Second, the use of Content Negotiation is a terrible idea as well. You don't
want to serve different file types from the same URL, because then the web
server uses the Accept header, and potentially the User-Agent header, to
determine the response. This means it must send a Vary: Accept or Vary:
Accept, User-Agent header in the response, which renders the response
essentially uncacheable by shared caches. I discuss this problem here [1],
but in the context of the User-Agent header.

It's clear that imgix is trying to help people, which is awesome. But it's
also clear from their advice and analysis that they really don't understand
what they are talking about, or can't express themselves properly. Either
way, this is bad performance information, and we really don't need any more
of that.

[0] - [http://zoompf.com/blog/2013/07/how-fast-is-warby-parker](http://zoompf.com/blog/2013/07/how-fast-is-warby-parker)

[1] - [http://zoompf.com/blog/2012/02/lose-the-wait-http-compression](http://zoompf.com/blog/2012/02/lose-the-wait-http-compression)

~~~
Sephr
Vary: Accept does _not_ make resources uncacheable. If you are experiencing
this problem, either your caching headers are misconfigured or your client is
behaving incorrectly.

~~~
acdha
You're forgetting that Accept headers vary among clients: each version that
isn't bit-for-bit identical will be cached separately. Vary: User-Agent takes
that problem and raises it exponentially. You can play games trying to
normalize things, but that makes life hard when using a CDN and increases the
risk of buggy proxies creating very hard to diagnose problems.
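The normalization game mentioned above can be sketched like this: collapse the near-infinite space of raw Accept values into a handful of cache-key buckets before the cache lookup (the bucket names here are invented for illustration):

```python
# Sketch of Accept-header normalization at the CDN/proxy edge.
# Raw Accept strings differ between browsers and versions; caching on
# the raw value creates one cache entry per variant. Normalizing first
# keeps the cache key space tiny.

def normalize_accept(accept_header):
    """Map any Accept header to one of a few cache-key buckets."""
    accept = (accept_header or "").lower()
    if "image/webp" in accept:
        return "webp"
    if "image/vnd.ms-photo" in accept:
        return "jxr"
    return "default"
```

The cache key then becomes (URL, normalize_accept(accept)) instead of (URL, raw accept string), so at most three variants of each image are ever stored.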

The alternative of creating unique URLs is incredibly simple and has worked
perfectly since the start of the web. Content negotiation is an interesting
idea but it's just not worth the support cost.

~~~
zacman85
We deliver images out of a CDN where we already have handled the proper
request normalization. There is no support cost to implementing content
negotiation in this case unless you want to put us behind a proxy. At that
point, we can work with you to vary correctly without incurring the complexity
you are focusing on.

\- Chris

~~~
acdha
Normalizing the values used in Vary at the CDN level is definitely the right
way to go. However, that still leaves problems with transparent proxies at
large companies, ISPs, mobile carriers, etc. unless you also have something
like Cache-Control:private which is correctly handled.
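As a sketch, the opt-out described above would look something like the following response headers (values are illustrative only): the private directive tells shared and transparent caches to stay out entirely, while browsers may still cache.

```http
Cache-Control: private, max-age=31536000
Vary: Accept
```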

------
liveoneggs
[http://cloudinary.com/](http://cloudinary.com/) has a similar auto-detect
feature, but sells it mostly as saving space with WebP for Chrome.

