How do I use it to detect whether I can use gradients?
1) Describe the device that the agent is coming from (operating system)
2) Describe the capabilities of the agent (this browser, those plugins)
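For example, a (roughly 2012-era, illustrative rather than exact) Chrome string splits into exactly those two parts:

    Mozilla/5.0 (X11; Linux x86_64)                 <- 1) the device / OS
    AppleWebKit/536.11 (KHTML, like Gecko)
    Chrome/20.0.1132.47 Safari/536.11               <- 2) the agent: engine, browser, versions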
One of the things I loathe about the User-Agent header is the lack of a reasonable maximum length, and the inconsistent way in which developers have overloaded the value. Parsing it is difficult (especially since the unbounded length leaves a lot of scope for bad input).
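To illustrate how fragile that parsing gets, here's a minimal sketch (the function and regex are mine, not from any library):

    // A typical "Name/Version" extraction. It only works by accident:
    // every browser claims Mozilla/5.0, Chrome also advertises Safari/...,
    // and Safari's real version hides behind "Version/...".
    function guessBrowser(ua) {
      // Cap the input first; the header has no sane maximum length.
      ua = String(ua).slice(0, 512);
      var m = /(Firefox|Chrome|Safari|MSIE|Opera)[\/ ]([\d.]+)/.exec(ua);
      // For Safari this returns the WebKit build, not Safari's version.
      return m ? { name: m[1], version: m[2] } : null;
    }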
I would love to see user agent be a virtual header comprised of other headers.
The other headers would not be mandatory, but since most browsers would provide them, you could reasonably rely on them in most cases.
These other headers might be things like client-version, along with similar fields for the browser name, platform, and plugins. Since headers take up uncompressed space, it would also be helpful if shorthand names were accepted: c-v for client-version, etc.
This is me thinking aloud, and perhaps it's an idea that has been thought of before and rejected... but by offering User-Agent as a virtual header comprised of all of the other headers, you maintain backward compatibility whilst providing something easier to parse, use, and trust for developers.
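To make the idea concrete, a purely hypothetical sketch -- none of these header names exist anywhere, and the snippet assumes a Node.js-style request object:

    Client-Name: Chrome              (shorthand: C-N)
    Client-Version: 20.0             (shorthand: C-V)
    Client-Platform: Linux x86_64    (shorthand: C-P)

    // A server could then read one field directly instead of parsing a blob:
    function clientVersion(req) {
      // Long form first, then the hypothetical shorthand.
      return req.headers['client-version'] || req.headers['c-v'] || null;
    }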
In an ideal world, web developers should be testing whether individual pieces of functionality exist rather than inferring what is supported from the browser.
I think JS does a fairly good job of allowing developers to test for functionality; unfortunately, CSS does not. I am well aware that it is meant to "fail gracefully", but a lot of developers want to supply alternative looks where functionality isn't available, and CSS doesn't lend itself to that.
So you wind up inferring CSS support from JS support, which is just as broken as inferring JS support from the browser's version/name/platform.
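For what it's worth, the gradient question upthread can be answered from JS without touching the UA. A minimal sketch of the usual circa-2012, Modernizr-style test:

    // Assign a gradient to a detached element; unsupported values are
    // dropped by the browser, leaving the property empty.
    function supportsLinearGradient() {
      var el = document.createElement('div');
      var values = [
        'linear-gradient(to right, #000, #fff)',      // unprefixed syntax
        '-webkit-linear-gradient(left, #000, #fff)',  // vendor prefixes
        '-moz-linear-gradient(left, #000, #fff)',
        '-o-linear-gradient(left, #000, #fff)'
      ];
      for (var i = 0; i < values.length; i++) {
        el.style.backgroundImage = values[i];
        if (el.style.backgroundImage.indexOf('gradient') !== -1) return true;
      }
      return false;
    }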
Having said that, the flip side is just as bad: getting completely rejected from sites because "We haven't tested this site for your browser or operating system". It's a website, for crying out loud. I don't get too mad at sites which implement this as long as they provide a way to continue "at my own risk". However, it's the final straw when they flat out refuse to serve me anything other than a page telling me they haven't tested the website for my browser/OS combination... sigh
Edit: This might give some food for thought for AshleysBrain. I like your suggestion, but am curious if we can find a way to send OS and architecture to sites so that they can give me nice download links...
It doesn't make sense to use the User-Agent string for anything that someone might have a reason to game, since it's so trivial to change.
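Any HTTP client can claim to be anything -- a minimal Node.js sketch (the UA value is obviously made up):

    var http = require('http');
    http.get({
      host: 'example.com',
      path: '/',
      headers: { 'User-Agent': 'Mozilla/5.0 (TotallyLegitBrowser 1.0)' }
    }, function (res) {
      console.log(res.statusCode); // the server is none the wiser
      res.resume();
    });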
(I guess this assumes that the huge user-agent that my Chrome is currently sending is necessarily bad, and in the real world maybe no one really cares...)
It's much better to resist adding things to the UA in the first place, since removing anything later on is a huge pain and inevitably breaks things for users. Mozilla has managed to keep the UA relatively minimal (and successfully reduced it a bit in Firefox 4): https://developer.mozilla.org/en/Gecko_user_agent_string_ref...
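For reference, the desktop Firefox 4 string from that page is short enough to quote in full (the platform token varies by OS):

    Mozilla/5.0 (Windows NT 6.1; rv:2.0) Gecko/20100101 Firefox/4.0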
The two instances of UA spoofing I know of in Chrome are for large sites -- Hotmail and Yahoo Mail. My vague memory of the Hotmail case is that Microsoft agreed to fix their code but said it'd take months to make the push. (http://neugierig.org/software/chromium/notes/2009/02/user-ag... , http://neugierig.org/software/chromium/notes/2009/02/user-ag... )
Even a relatively flexible company like Google gets UA sniffing wrong for many of its domains. At one point (as an author of Chrome and an employee of Google) I tried to track down the right people to get things fixed and ran into more or less the above problems. (The non-Chrome, non-Safari WebKit browsers these days must spoof Chrome to avoid falling into some "other" browser bucket.)
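The bucketing usually looks something like this on the server (illustrative, not any real site's code):

    // Sniffing that shunts lesser-known browsers into "other" --
    // exactly why they end up spoofing Chrome.
    function bucket(ua) {
      if (/Firefox/.test(ua)) return 'firefox';
      if (/Chrome/.test(ua))  return 'chrome';
      if (/Safari/.test(ua))  return 'safari';
      return 'other'; // degraded or blocked experience
    }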
2. Graceful degradation. We've sniffed UAs from the minute they were invented. Any change whatsoever would create untold problems for untold millions of people. The UA is just an arbitrary string, so… who cares? Very few people (you and I are amongst these "very few") have to be concerned with this, compared to the number of people such a change would affect.
The UA string is just one example of the unfortunate hacks that evolved in the web's protocols. Compared to everything else in HTML, it's probably not even worth considering fixing. We'll always need the old string for compatibility, so fixing it would really only save a few lines of parsing. Compared to the nightmare of parsing rules for HTTP and HTML, it's not even relevant.
Mike Taylor gave a talk about this and more at yesterday's GothamJS conference:
It's one of the most important headers for clients, since if you don't include it you might not get a 200.
bbot@magnesium:~> wget http://en.wikipedia.org/wiki/Japanese_yen
--2012-07-15 13:54:29-- http://en.wikipedia.org/wiki/Japanese_yen
Resolving en.wikipedia.org... 126.96.36.199, 2620:0:861:ed1a::1
Connecting to en.wikipedia.org|188.8.131.52|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 203481 (199K) [text/html]
User-agent blocking is completely braindead. It does nothing at all. The fact that somebody in 2012 can possibly think it works is astounding to me.
All this could have been avoided if webmasters had used <noframes>, but I'm not sure when it was added to HTML.
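For anyone who never wrote a frameset, a hypothetical sketch (file names made up) of what that would have looked like:

    <frameset cols="200,*">
      <frame src="nav.html">
      <frame src="main.html">
      <noframes>
        <body>
          This site uses frames. Start at <a href="main.html">main.html</a>.
        </body>
      </noframes>
    </frameset>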
It's a good depiction of the issues you have with trying to write code once, and have it work the same in many different environments, though. It's just with browsers, rather than operating systems or hardware.
Such is the evolution of the internet.