For example, FirefoxOS has a UA string of the form User-Agent: Mozilla/5.0 Mobile Gecko/28 Firefox/32.0. There's an example in Bugzilla of a major site that not only depends on the fine details of that format, but will send the mobile version of the site if and only if the number after Firefox is even and in the range 18 <= x <= 34.
I'm really not sure the cons of User-Agent cleanup (or complete deprecation) outweigh the pros. At the very least I think it's not obvious, and it's debatable.
Also, there would be some privacy improvement from having less data available for browser fingerprinting.
Also, if we considered dropping the User-Agent header altogether, there would be fewer ways to detect a UA, so it would be a bit harder to show "sorry, your browser is unsupported, this site is IE^H^H Chrome-only".
Obviously, dropping just the UA header isn't even remotely sufficient for any of those reasons, but it would be a good start.
JPEG2000 has been available in some browsers for years and years now, but I've yet to see any site selectively sending it to supporting browsers to save bandwidth, or anybody advocating for doing that. It first became available in browsers 10+ years ago, when connection speeds were much slower than today, so the benefit of using it was even greater back then, yet it saw very limited adoption.
Also I note that the saving from using webp compared to the optimised jpeg is only 18kb, meanwhile the page wastes 40kb of bandwidth on a Google analytics script. What Google gives with one hand it takes away with the other.
Apart from Safari (https://en.wikipedia.org/wiki/JPEG_2000#Application_support), I'm not sure it's supported at all.
Anyway, it would clearly have had one very significant advantage: support for transparency.
In this case PageSpeed generated the JPEG at significantly higher quality than the WebP, so most of the file size difference is due to the quality difference, not codec compression efficiency (distortion measured with DSSIM is 0.009 for the JPEG and 0.015 for the WebP).
If you make the comparison fair by creating a JPEG (using mozjpeg) at the same quality as the WebP, it's 22539 vs 20158 bytes (an 11% saving).
Testing now with this image, to bring the DSSIM-measured distortion on the JPEG up to 0.015 I need to encode it at quality=69, which gets me a file size of 24317 bytes compared to the 20158 I had for the WebP. This is a 17.1% improvement for WebP over JPEG, as opposed to the 47.7% improvement I found before. This is all with libjpeg-turbo as included by PageSpeed 1.9.
Running mozjpeg (jpegtran) with no arguments, built at commit f46c787, on this image, I get 23870 bytes with no change to the SSIM. This is a 15.6% WebP-over-JPEG improvement.
It looks like:
1) We should run some timing tests on the mozjpeg encoder, and if it's in the same range as the WebP encoder, or not too much worse, switch PageSpeed from libjpeg-turbo to mozjpeg.
2) We should check that quality-80 with WebP is correct for getting similar levels of distortion to quality-85 with JPEG. Is this image just a poor case for WebP, or is it typical and something's wrong with our defaults?
Technically, 0.014956 with JPEG compared to 0.015060 for the WebP.
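For clarity, the percentages above are relative to the JPEG size, i.e. (jpeg - webp) / jpeg; a trivial sketch (TypeScript, using the byte counts quoted above):

    // Byte counts quoted above
    const webpBytes = 20158;
    const jpegTurboBytes = 24317;   // libjpeg-turbo at quality=69 (DSSIM ~0.015)
    const jpegMozTranBytes = 23870; // after a mozjpeg jpegtran pass

    // WebP improvement relative to the JPEG it's compared against
    const improvement = (jpegBytes: number) =>
      ((jpegBytes - webpBytes) / jpegBytes) * 100;

    console.log(improvement(jpegTurboBytes).toFixed(1));   // 17.1
    console.log(improvement(jpegMozTranBytes).toFixed(1)); // 15.6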
If you only run mozjpeg's jpegtran on a file created with another JPEG library, you won't get the benefit of trellis quantization. Try creating JPEGs with mozjpeg's cjpeg (and -sample 2x2 to match WebP's limitation).
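For example, a minimal sketch of that re-encode (assuming mozjpeg's djpeg/cjpeg binaries are on the PATH and the source is source.jpg; the flags are standard cjpeg options):

    import { execFileSync } from "child_process";

    // Decompress the original JPEG to PPM, then re-encode from pixels with
    // mozjpeg's cjpeg so trellis quantization is applied during quantization,
    // per the suggestion above. -sample 2x2 forces 4:2:0 chroma subsampling
    // to match WebP's limitation.
    execFileSync("djpeg", ["-outfile", "source.ppm", "source.jpg"]);
    execFileSync("cjpeg", [
      "-quality", "69",
      "-sample", "2x2",
      "-outfile", "mozjpeg-out.jpg",
      "source.ppm",
    ]);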
Here are the files I've been testing (one is same size, one is same quality based on my DSSIM tool v0.5):
Talking to some people here, they think your DSSIM tool isn't what I should use. Specifically, they said it runs blur and downscale steps that aren't part of the SSIM metric. They suggested using Mehdi's C++ implementation, which I understand yours is a rewrite of.
Presumably you think I should use your tool instead? What makes the (D)SSIM numbers from yours a better match for human perception than those from Mehdi's? Or should they be giving the same numbers?
I have two issues with Mehdi's implementation:
* It works on raw RGB data, which is a poor model for measuring perceptual difference (e.g. the black-to-green range is very sensitive, while green-to-cyan is almost indistinguishable, yet numerically they're the same distance in RGB). Some benchmarks solve that by testing grayscale only, but that allows encoders to cheat by encoding color as poorly as they want to.
* It's based on OpenCV, and when I tested it I found OpenCV didn't apply gamma correction. This makes a huge difference on images with noisy dark areas (and photos have plenty of underexposed areas). Maybe it's a matter of OpenCV version or settings - you can verify this by dumping `ssim_map` and seeing if it shows high difference in dark areas that look fine to you on a well-calibrated monitor.
I've tried to fix those issues by using a gamma-corrected Lab colorspace and including a score from the color channels, but computed at lower resolution (since the eye is much more sensitive to luminance).
However, I have tested my tool against the TID2008 database and got an overall score lower than expected for SSIM (0.73 instead of 0.80), but still better than most of the other tools they've tested.
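To illustrate the gamma point: equal steps in raw 8-bit RGB are very unequal steps in linear light, so a metric computed on gamma-encoded values treats dark areas differently from one computed after correction. A minimal sketch of just the sRGB-to-linear conversion (standard sRGB transfer function, not the tool's actual code; the SSIM computation itself is omitted):

    // Convert one 8-bit sRGB channel value to linear light (standard sRGB EOTF).
    // A DSSIM computed on linear (or Lab) values weights dark-area differences
    // differently from one computed on raw gamma-encoded RGB.
    function srgbToLinear(v: number): number {
      const c = v / 255;
      return c <= 0.04045 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
    }

    // The same 1-level step in raw RGB is roughly 25-30x larger in linear light
    // near white than near black.
    console.log(srgbToLinear(11) - srgbToLinear(10));   // ~0.0003
    console.log(srgbToLinear(246) - srgbToLinear(245)); // ~0.0085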
"To emit html that references either a JPEG or a WebP depending on the browser, you need some way that the server can tell whether the browser supports WebP. Because this feature is so valuable, there is a standard way of indicating support for it: include image/webp in the Accept header. Unfortunately this doesn't quite work in practice. For example, Chrome v36 on iOS broke support for WebP images outside of data:// urls but was still sending Accept: image/webp. Similarly, Opera added image/webp to their Accept header before they supported WebP lossless. And no one indicates in their Accept whether they support animated WebP."
It's not a good argument for crafting html that bypasses content negotiation.
There's some progress with "Vary: Accept" support, but "Vary: User-Agent" is probably never going to be supported.
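The server side of that negotiation is small anyway; a rough sketch (Node/TypeScript, hypothetical file names, assuming a .webp alternative sits next to the .jpg):

    import { createServer } from "http";
    import { existsSync, readFileSync } from "fs";

    // Content negotiation sketch: serve image.webp to clients that advertise
    // image/webp in their Accept header, image.jpg to everyone else.
    // "Vary: Accept" tells caches that the response depends on Accept.
    createServer((req, res) => {
      const accept = String(req.headers.accept ?? "");
      const wantsWebP = accept.includes("image/webp") && existsSync("image.webp");
      res.setHeader("Vary", "Accept");
      res.setHeader("Content-Type", wantsWebP ? "image/webp" : "image/jpeg");
      res.end(readFileSync(wantsWebP ? "image.webp" : "image.jpg"));
    }).listen(8080);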
....what the heck is an animated WebP
It does clarify that it's at least not quite WebM, but it doesn't explain how far away it is.
Why did it happen in the first place? Where did the UA sniffing fall apart?
A poorly designed RegExp looking for "60" in Safari's UA without a word boundary or EOL check. The new Safari UA includes "600" in the relevant section, and suddenly SharePoint sites were sending markup intended for (now) ancient browsers - browsers that weren't even on the compatibility list for SharePoint in the first place.
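To make the failure mode concrete (hypothetical pattern; the actual SharePoint expression isn't shown here), a check for "60" without a word boundary also matches the "600" build number in newer Safari UAs:

    // Hypothetical illustration of the bug described above.
    const newSafariUA =
      "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10) AppleWebKit/600.1.4 " +
      "(KHTML, like Gecko) Version/8.0 Safari/600.1.4";

    console.log(/60/.test(newSafariUA));     // true  - false positive on "600"
    console.log(/\b60\b/.test(newSafariUA)); // false - the boundary check avoids it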
UA sniffing does need to go away for determining what structure your markup sends to the user agent.
I suspect that one of their regular expressions has "\m" where they meant "\n".
User-Agent sniffing really does need to die. Part of me wonders what use UA headers serve beyond sniffing and analytics. EDIT: or the Accept (and Content-Type) header, but that's a next-to-worthless header these days :-\
Let's see what Chrome sends when it requests an image:
Alas this isn't really exploited enough, since you'd end up with ridiculous request sizes:
Accept-Encoding is the one that's used most often. Compression on the server side is very useful to reduce latency, and for static resources it is often free - servers can pre-compress them.
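As a sketch of the "often free" part (Node/TypeScript, hypothetical file names): compress static assets once at build/deploy time, and let the web server hand out the pre-made .gz whenever the request's Accept-Encoding includes gzip (e.g. nginx's gzip_static), instead of compressing on every request:

    import { gzipSync } from "zlib";
    import { readFileSync, writeFileSync } from "fs";

    // Pre-compress a static asset once at deploy time. A server configured to
    // prefer the .gz sibling can then serve it to clients that advertise gzip
    // in Accept-Encoding, with no per-request CPU cost.
    const source = readFileSync("app.js");
    writeFileSync("app.js.gz", gzipSync(source, { level: 9 }));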
Accept-Language is sadly under-utilised, for the same reason we send dates from the server rather than using the user's clock - people often have misconfigured machines, or are using a friend's computer and don't want to change the language.
Isn't the problem more that the browser can't be given multiple sources and types and allowed to choose what it believes is best?
<img srcset="medium.jpg 1000w image/jpeg,
medium.webp 1000w image/webp,
large.jpg 2000w image/jpeg,
large.webp 2000w image/webp">
Browser support isn't really there yet, but it's coming, and since it has a built-in fallback you can use it now without requiring JS workarounds.
The debate will never end but it's nice to understand the perspectives.
PageSpeed, plus other WPO tools and complex sites, want the first view to be fast and small. We measure our effectiveness on first-view speedup. The most effective way to get this is by UA sniffing. We know about the other mechanisms and use them too at times, but they don't produce results that are as good for the metrics we care about. However, we are pretty serious about doing a good job of robust UA parsing; we put more energy into getting this right than a web developer should be expected to expend. The downside (as Microsoft points out) is that when we have to make an update we can't push it to all our users instantly. We should consider mechanisms to dynamically load an updated UA mapping from a server we control when PageSpeed starts up, but we haven't started such an effort.
Browser vendors have their own legitimate motivations. Microsoft IE11 developers don't want to be punished for the shortcomings of earlier versions of IE, and want users of IE11 to get the best, most modern web sites possible, not the IE6 version. Microsoft justifiably wants servers to send mobile versions of sites to Windows Phones. I totally sympathize with their perspective. Chrome was in the same boat when it first launched so it has all kinds of masquerading in its UA and unfortunately still does. Same story, different decade.
Webp may be great for displaying in a web page, but it's TFU if I save it to my hard drive and try to view it with default viewing or editing software.
It may not be a proprietary format, but it might as well be.
I'll admit that support for the format isn't perfect, but at least in your case it can be remedied pretty easily with a quick search. That said, format adoption has to start somewhere. It'd be pretty sad if we were still using GIFs for all of our lossless images because PNGs were never allowed to catch on before they could become universally supported.
Is WebP support added in Qt 5?
BTW: gthumb opens it A-OK. So, maybe the issue isn't the openness of the format, but the openness of your platform?
WebP is in a lot of the same boat as WebM legally. It may be "open", whatever the fuck that means anymore, but that doesn't make it patent-free or free of licensing concerns. VP8/9/etc... aren't some public-domain web format. Even WebM infringed on H264 patents as of 2013. Do you really think that is an encouraging thing for lawyers of companies with legal liability to want to risk?
Where's your proof?
I'm surprised preview doesn't do WebP, that's too bad.
Does anyone know if there is an authoritative database of user agent strings, their associated browsers, and the features of those browsers?
I've spent a lot of time optimizing pages for video. Some browsers work just fine, some work but look like crap, so you use a flash/silverlight plugin, then a new version comes out and looks less crappy.
I also have a lot of code I turn off just for IE7...the code works, but doesn't render properly. There is no way to detect "renders ugly"
So, "YES", feature detection is the future...but it isn't quite there yet (about 98%).
So for the time being, the most effective method to optimize your website for the widest variety of clients is to employ both User-Agent parsing (server side) and feature detection (client side).
You can just cache the result in localStorage; you take a few-millisecond hit once.
Here's how the feature detection is done:
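One common way to do it (a sketch; not necessarily the exact snippet referenced above) is to ask a canvas element for a WebP data URL, which silently falls back to PNG in browsers that can't produce WebP, and cache the answer in localStorage as suggested above:

    // Sketch: detect WebP by asking a canvas to encode one, caching the result
    // in localStorage so the check only runs once per browser. Note this tests
    // canvas *encode* support, a common but imperfect proxy; decoding a tiny
    // WebP data URI is the more thorough check.
    function supportsWebP(): boolean {
      const cached = localStorage.getItem("supportsWebP");
      if (cached !== null) return cached === "1";
      const canvas = document.createElement("canvas");
      canvas.width = canvas.height = 1;
      const ok = canvas.toDataURL("image/webp").indexOf("data:image/webp") === 0;
      localStorage.setItem("supportsWebP", ok ? "1" : "0");
      return ok;
    }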
HOWEVER, in this case (webp), it's likely better to detect it on the server:
https://github.com/igrigorik/webp-detect (More info: http://www.stucox.com/blog/client-side-vs-server-side-detect...)
The only time you really should be looking at the User Agent is if you're trying to do something like show a "Download for (Chrome|Firefox|etc)" button.
Doing the check itself is fast. It's just decoding a very small image in memory. Storing the result of that check persistently just adds more cost.
It would be awesome if this idea could be extended to HTML. I'd love to see content negotiation in an Accept header, or some other kind of feature-list header that the browser sends to notify my server of what features it supports. I could then pass that data down to JS or, since the browser already knows that information, why not have a `navigator.features` object that developers can look in and see exactly what is supported? Would eliminate the need for Modernizr or performing your own feature detection...and save a lot of code.
The only way to do this is browser sniffing... although minimal... for the most part, if "phone" or "mobile" appears in the UA string I can send the lighter phone version; otherwise they initially get the desktop view. It isn't perfect, and to be honest, I'm doing more detection than just that.
There are also issues with where/when I can cache rendered output, and when/where I can cache certain results. For example, for a given search the first 10k record IDs are cached so that paging becomes very fast, with the results limited to that set. Other queries will also be cached.
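That ID caching is roughly this pattern (hypothetical names; an in-memory map standing in for whatever cache store is really used): cache the first 10k matching IDs per query once, then serve pages by slicing that list:

    // Sketch: cache the first 10k record IDs per search query, then page by
    // slicing the cached list instead of re-running the search every time.
    const ID_CACHE = new Map<string, number[]>();
    const MAX_CACHED_IDS = 10_000;

    function getPage(query: string, page: number, pageSize: number): number[] {
      let ids = ID_CACHE.get(query);
      if (!ids) {
        // runSearch() stands in for the real (expensive) search backend.
        ids = runSearch(query).slice(0, MAX_CACHED_IDS);
        ID_CACHE.set(query, ids);
      }
      return ids.slice(page * pageSize, (page + 1) * pageSize);
    }

    function runSearch(query: string): number[] {
      return []; // placeholder
    }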
It really depends on what you are trying to do... growing from a few thousand users to millions takes some effort, planning, and thoughtfulness that isn't always the same.
The way I would design an app for a few hundred internal users isn't the same as for a public site, which isn't the same as for a very busy public site. You have to make trade-offs or collapse.
Why should a phone user on a strained 3g connection get the content for a desktop browser.... By the same token, I HATE when I get a search result for a page on my phone, and that site redirects me to m.site.com/ (no path after /)... I don't just design/develop web based applications, I'm also a user...
I mean, I sure couldn't slap an "Internet Explorer" or "Netscape Navigator" splash screen on my shareware, but that's what they're doing in the User-Agent field.
Client-side JS for feature detection can work well; I have a simple example in the code of http://http-echo.com
Client-side feature detection can also be spoofed, but in practice that's not a big problem either.