A lot of people here are calling for the death of UAs, but this would be bad for video, because capabilities are inconsistent across browsers, and because the APIs for detecting them are incomplete or lie.
For example, the "canPlayType" API for checking whether a codec is supported returns only "probably" or "maybe", never a definitive answer. So we sometimes need to hardcode which browsers support which new codecs.
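A minimal sketch (the codec strings are illustrative, but those return values are the API's entire vocabulary):

    // canPlayType never answers "yes" -- only "probably", "maybe", or "".
    const v = document.createElement('video');
    v.canPlayType('video/mp4; codecs="avc1.42E01E"');      // "probably" on most browsers
    v.canPlayType('video/mp4; codecs="hev1.1.6.L93.B0"');  // "maybe" on some browsers that can't actually decode it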
Also, there are several bugs in past and present decoder implementations, which are only discovered through manual testing, or by observing quality-of-service metrics split by browser version (Firefox does not handle bad audio packets as well as Chrome, for example). In IE and old Edge, the video readyState would always be "4" after playback begins, even during buffering, which is a blatant violation of the spec that Microsoft refused to fix (as stated in their bug tracker).
Browsers like Safari also fire HTMLVideoElement events in subtly different orders, which requires a burdensome workaround that those of us working in video prefer to keep scoped to Safari rather than letting it poison other implementations.
A final fun quirk is that not all browsers gave accurate HTTP timing information until recently (notably, Safari < 14), which makes download timing for the purpose of estimating bandwidth very inaccurate. This is why Twitch only recently shipped low-latency playback for Safari. There is no way to ask "do you support accurate timing" other than for an engineer to test it and hardcode an exception.
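To make that concrete, here's a rough sketch of the kind of bandwidth estimate players do (segmentUrl is a placeholder; the Resource Timing fields are real):

    // Estimate throughput from the Resource Timing API. On browsers with
    // inaccurate timing (e.g. Safari < 14), these fields can be coarse or
    // zeroed -- and nothing in the API tells you that, hence the hardcoded exceptions.
    const [entry] = performance.getEntriesByName(segmentUrl);
    const seconds = (entry.responseEnd - entry.responseStart) / 1000;
    const bitsPerSecond = (entry.transferSize * 8) / seconds;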
I feel like these arguments always break down along the same lines:
- People who have recently run into an issue that required browser sniffing
- People who haven’t and feel sure that “things are better now” and the practice is no longer needed.
I’ve gotten into these kinds of scrapes many times, and I can assure you all that philosophical arguments about browser-detection-vs-feature-detection don’t work very well when you’re trying to explain to a client why their web app that worked a week ago doesn’t work today (due to a WebRTC bug in Chrome, for instance) and won’t be fixed until the next Chrome version comes out in six weeks.
To folks advocating “ripping off the bandage” as a solution, I’d note that the wound underneath has not stopped bleeding.
Counterpoint: as a user, often the only way to get a video to play in any browser on Linux is to modify the user agent. And then, when you do spoof the user agent, it works just fine.
This is probably due to DRM, which is poorly supported on Linux (lazy devs will probably just exile Linux completely). But I do remember some Linux-only decoder issues in the past. Decoder and codec support is just poorly signaled overall: even if an API says a codec is supported, the browser may not support a specific profile, or may not support it well.
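For example (the codec strings here are just illustrative):

    // A "true" here only means the engine recognizes the MIME/codec string;
    // it doesn't guarantee a given profile/level decodes correctly or smoothly.
    MediaSource.isTypeSupported('video/webm; codecs="vp9"');
    MediaSource.isTypeSupported('video/mp4; codecs="av01.0.08M.08"');  // a specific AV1 profile may still stutter in software decode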
Some of our video test grid machines are Ubuntu, so from our end Twitch video should work pretty well on Linux :-)
Firefox has a built-in list of websites that need UA tweaks to work in Firefox. (In fact, Firefox's UA override feature was designed by ex-Opera engineers at Mozilla.) You can review the list (and toggle them) in Firefox's about:compat page.
A recent success story: Mozilla recently added a UA tweak to work around Slack's Chrome-only check for video calls. Slack engineers reached out to Mozilla and, just a few days later, Slack removed their Chrome check and now support video calls in Firefox without a UA tweak. :)
A very curious case: huddles work for me under Wayland (sway) and don't work under X11 (i3), with the exact same configuration otherwise. I have not investigated what exactly the browser sends.
I was told Slack has some Linux-specific issues that they're working on with the Firefox team, such as some complications around screen sharing depending on the window manager (as mentioned in some other comments below about Wayland vs X11).
I couldn't use those through Safari on a Mac either. Nothing really tells you that it's unsupported. I could search or figure it out but I don't wear my programmer/software tester hat all the time.
Welp, better spend a few minutes downloading the same thing but in a Chrome sandbox.
Can't say I've seen that often, but there was a short period of time when Twitch refused to play on a FreeBSD user-agent. Spoofing as Linux or Windows worked. They did fix it.
Not too long ago, browser vendors decided to fuck up cookie backwards compatibility with the "SameSite" attribute so badly that you are literally forced to do browser sniffing to have your site work in both modern browsers and older versions of Safari.
Getting a browser vendor to fix something takes a lot of effort, and doesn't immediately alleviate any impact on users. "if (browser.name === 'safari' && browser.version < 14)" does.
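For reference, the actual incompatibility is that Safari on iOS 12 / macOS 10.14 parses SameSite=None as SameSite=Strict, so the sniff has to omit the attribute entirely. A rough server-side sketch (the regexes are rough approximations, not an exhaustive client check):

    // Older Safari misreads SameSite=None as Strict, so omit SameSite for it.
    function sessionCookie(userAgent) {
      const buggySafari = /iP(hone|ad|od).+OS 12_/.test(userAgent) ||
                          /Macintosh.+Mac OS X 10_14.+Safari/.test(userAgent);
      return buggySafari
        ? 'session=abc123; Secure'                  // no SameSite attribute at all
        : 'session=abc123; Secure; SameSite=None';
    }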
I go back and forth. I do web development, and I do occasionally rely on the user agent, I completely agree that there are some situations where there isn't really an alternative we can use. It's not necessarily just a problem of specification, sometimes there are browser-version specific bugs that just have to be accommodated, and there is no way to test for those bugs and there is no way to get a signal for those bugs because they're not specified behavior.
However, I also browse the web on Linux, and there are also situations where sites just kind of decide that they do or don't support me based on whether or not I'm in an allow-list that was lazily cobbled together and hasn't been updated in over a year. This seems to me to be the same category of problem; it should be better, sites should know not to do that, but they don't. And sure, I can solve the problem specifically for myself by going through an annoying process to lie about my User-Agent header, but it's a big damper on all of the other "users" (i.e., random family members/friends) that I support who aren't able to do that. And it defeats the primary purpose of the user agent if I'm lying to sites about it because they block off capabilities, for no good reason, from agents they don't recognize. If I'm constantly lying about my browser as the "solution" to that problem, I'm already kind of breaking user agents in the exact way you're worried about.
And there are also obviously the privacy problems that come along with user-agents, which I'm not going to get into, but they're substantial and there is no way to solve them without making user-agents much less useful for website operators.
So I don't know what the solution is, or even if there is a better solution available than what we currently have, but there are real downsides to the current setup. I think it's silly to pretend that browser vendors won't ever have quirks that make user-agents necessary. But I also think it's equally silly to pretend that website operators will ever use them responsibly, and equally silly to pretend that people won't get locked out of sites for no reason because of them.
There is definitely some naivete in getting rid of user-agents, but there's also some naivete in keeping them, and the arguments for "well, they're still necessary" sometimes ignore just how much wasted effort and hacky crud goes into making the web work with them. At the same time though, there are bugs I've fixed in my day-job that could not be fixed without them. It's just, that doesn't mean the downsides aren't also there and that they don't also matter.
I'm personally interested in how client hints progress. They're not perfect, but they seem to be decent, and for all of my criticism of Chrome's Privacy Sandbox concept, I think this is an area where it makes a lot of sense -- make certain hints possible to check, but have a cost associated. It's still not the exact browser version though, there are still bugs that I wouldn't be able to fix with that system. But there are some bugs that I use user-agents for that this system would work for, and I'm willing to make my job slightly harder if it makes a bunch of other things better. Or at least, I'm willing to see how client hints play out and see how much harder they do or don't make my job.
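For reference, the JS side of UA Client Hints (Chromium-only at the moment) looks roughly like this:

    // Low-entropy hints are free; high-entropy ones, like the full version
    // list, must be requested explicitly (and the browser may decline).
    if (navigator.userAgentData) {
      console.log(navigator.userAgentData.brands);  // e.g. [{ brand: "Chromium", version: "110" }, ...]
      navigator.userAgentData
        .getHighEntropyValues(['fullVersionList', 'platformVersion'])
        .then(hints => console.log(hints.fullVersionList));
    }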
Yes. Agreed. Sites should be doing their best to avoid having to check the UA String.
For some tightly-controlled environments (medical, aerospace, etc.) where all software components & tooling have to be qualified for use, I can see checking the UA in order to throw up a "your browser is not supported" banner ("not supported" usually means "we haven't tested it, so we can't assert this web app will work properly", not so much "our app won't work"). But sites should really avoid responding with different content/HTML, etc. for different browsers (though I have encountered sites that serve images as WebP to Chrome and JPEG to IE...)
These are places where you don't automatically upgrade browsers but keep them at one specific version. No rocket/airplane wants to stop working because something changed.
These are also places that are good at writing contracts that ensure such things keep working, so they should not be a concern.
It's likely cheap shoddy sites that will be a problem. The ones that think jQuery is too advanced tech.
That's incredibly arrogant, IMO. Imagine the amount of pointless work that would cause world-wide, and the cumulative amount of pain experienced by readers of the web for years.
It's pure value destruction on a global scale, just making things worse to prove a point.
A better solution is to realize that the UA header has been completely broken for decades and do yet another compat hack, whilst transitioning to some more futureproof mechanism.
The web needs a Linus occasionally screaming: don't break things for shits and giggles.
Yes. Your software doesn't exist in isolation; you have users. Your #1 goal is not to make your software ideologically perfect. Your #1 goal is to make your software useful to its users. Sometimes you have to break things, yes, but these events should be strongly justified and impossible in practice to avoid. Bumping a version number in a user-agent string doesn't rise to that level.
Some people will rely on fixed-width version digits no matter how you send the version. There's decades of experience with this, and it has nothing to do with the User-Agent header as such.
With the Chrome "backup plan" literally everyone will have to modify their code to make it work, making it more convoluted in the process. With the "well, just break stuff then" only the small number of people who relied on a fixed width of the version will have stuff break. This seems like a decent enough trade-off, and will almost certainly save programmer time globally.
> With the Chrome "backup plan" literally everyone will have to modify their code to make it work
I don't understand. If you need to detect a >99 version, then yes, you'll need to update your code, but you'll need to do that anyway, to add your >99 logic in the first place. If you don't need to detect that, then no changes are needed, and as a bonus no poorly-coded sites will break.
The point is not necessarily that this hack is too much, but it's yet another hack on a system that is already so full of hacks that parsing it is incredibly complicated and error-prone, requiring more hacks in the future. It's a self-reinforcing loop. Do we ever say enough is enough?
Why are there version numbers for web browsers at all when version bumps are forced? To the average user, the number is not meaningful at all. Breaking sites as a side effect of bumping a version number that can't not be bumped seems totally insane to me.
> If those internal sites are so crucial, then maybe they shouldn't build them like a house of cards
They were built by a PhD candidate in 2012 and the source has been lost to the mists of time. It's still critical to the research group's workflow though, so they switch to Edge instead of rewriting it. A month later they encounter an article on HN asking "Why is Firefox losing marketshare and how would you save it?"
The topic of software quality has been brought up several times on HN, and here we are. I'm ashamed that, in our industry, we are still tripping over pebbles and writing code so utterly shitty that it breaks when a number goes from 2 to 3 digits, of all the complex things that happen in our world.
The bar was so extremely low that I'd be OK letting parsers break due to their shortsightedness. Not even in the embedded space (where I've got enough experience to know that computing resources are scarce) would this pass as understandable.
I really hate this narrative. Why should the browsers give a damn? There should be nothing required of the browsers, the only ones to worry are the idiots writing faulty UA parsers and baking in their assumptions.
Last I heard, Chrome was phasing out user-agent strings anyways[0]. There's already a feature flag, and turning it on makes the user-agent "generic" so that specific browser and OS versions are rounded off and end with 0's. In other words, relying on browser version numbers is deprecated by Google fiat.
In all seriousness, can we please replace the user agent and add a new header that is meant to supersede it? We can then freeze the current user agent at v99, with all the other rubbish that's still in it, and put the real, correct data in a standard format in a new header.
I know there are arguments that a new version will be abused just as much, but I think we should be able to find ways to alleviate that by, for example, limiting the length of different fields, so that doubling up information to mask the truth or to be detected as two different browsers isn't possible.
We could even design this new system specifically to be fingerprint-proof: just report the essentials, which is pretty much just the browser engine (WebKit/Blink/Gecko) and version. We don't even need the actual browser and OS.
Everyone knows it's so easy to migrate a HTTP header that has been in use since basically forever (in internet time). It'll never be possible to completely change it to something else, just imagine all the software that assumes that header to be there.
And for in-browser stuff (via JS), you already have an alternative that works perfectly fine, `window.navigator`, that looks something like this:
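    // Values from Firefox 97 on Linux; exact strings vary by platform.
    navigator.appCodeName   // "Mozilla"
    navigator.appName       // "Netscape"
    navigator.appVersion    // "5.0 (X11)"
    navigator.platform      // "Linux x86_64"
    navigator.product       // "Gecko"
    navigator.userAgent     // "Mozilla/5.0 (X11; Linux x86_64; rv:97.0) Gecko/20100101 Firefox/97.0"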
Looking at this, the actually useful information is still heavily obscured and hides in the same "userAgent" string - the "Firefox/97.0" at the end. One would expect this information to be in appName, appCodeName, and appVersion by semantics - but those fields are useless, arguably misleading, with Netscape having ceased to exist a long time ago.
I think there should be a separate 'browserVersion' field that is numeric and is what gets checked. Then again, browsers are such complex beasts, with multiple components mixed and matched together by different vendors, that it would be great to have separate user agent fields for those components as well.
I think you’ll find all kinds of ‘libraries’ for doing that, especially in small embedded devices with a web front-end.
I also fear there's software that correctly extracts all the digits of the version from the user agent, but then compares it as a string against a given version. That would get you "100" sorting before "52".
Slack's message buttons (such as "Add reaction" or "Reply in thread") stopped working for Firefox versions >= 100 and <= 519. They mysteriously started working again for versions >= 520. The version string "100" compared as smaller than "52" in Slack's code, which activated some kind of webcompat workaround intended for Firefox versions < 52 that breaks on more recent Firefox versions.
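The footgun in miniature:

    // Lexicographic comparison of version strings goes wrong at three digits.
    '100' < '52'               // true -- compared character by character
    parseInt('100', 10) < 52   // false -- the numeric comparison that was intended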
True, but user-agent was 1996(?). By then, we weren't talking about limited mainframe storage devices.
It just baffles that someone can look at a monotonically increasing number, defined and incremented by a third party, and say "That will never be 3 digits."
I'd wager it's mostly fixed string indexes, regular expressions ("\d{2}"), and things like that. Maybe also some people with a "varchar(2)" column in the database.
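Hypothetical sketches of parsers that silently break at version 100:

    const ua = 'Mozilla/5.0 (X11; Linux x86_64; rv:100.0) Gecko/20100101 Firefox/100.0';
    ua.match(/Firefox\/(\d{2})/)[1];          // "10" -- grabs only the first two digits
    ua.slice(ua.indexOf('Firefox/') + 8,
             ua.indexOf('Firefox/') + 10);    // "10" -- fixed-width slice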
A lot of software is written by junior programmers, not infrequently in a hurry. It's an easy mistake to make and if you've been bitten by it once you'll remember it, but many people have never been bitten by it.
Well, keep in mind that major version numbers used to change slowly. In the first ten years Firefox existed, it went from version 0.1 to version 5.0. At that pace, it would have taken two hundred years to hit a three-digit major version. In my opinion, browser vendors brought this upon themselves by changing what "major version" meant.
> To be fair, in 1970 those 2 digits were very expensive.
More than just "very expensive". Often, you were limited to exactly 80 columns for each record. If you had for instance three dates on one record, using 2 digits instead of 4 digits saved 6 columns, which is over 7% of the available space.
I wouldn't be so confident that the code is coming from a library. We take code distribution for granted these days, but some fraction of the web today was built when the only way to get anything done was hand-rolled perl scripts.
I admit it's crap, but I once added a temporary fix to a system of mine that blocked clients with a user agent matching "Chrome 54" (at the string level) due to a bot (I know it's not ideal, but it worked in the moment, and genuine users were no longer using such an old version). I doubt Chrome 540 would turn up any time soon, but I can picture other folks having done similar things with "Chrome 12", say, and wondering why things fail with Chrome 120(!)
That's the speculated reason why Microsoft had to skip Windows 9: in the late 90s, software was checking for «starts with "Windows 9"» to identify Windows 95 and 98... and some of those checks still exist and would have identified Windows 9 as Windows 95/98.
USB went from 1.0, to 1.1, to 2.0, to 3.0. 3.0 got renamed 3.1 Gen 1, and 3.1 Gen 2 got introduced. Then 3.2 came around, which renamed 3.1 Gen 2 into 3.2 Gen 2x1, and introduced 3.2 Gen 1x2 and 3.2 Gen 2x2. USB 4.0 introduces USB 4.0 Gen 2x2 and USB 4.0 Gen 3x2.
This is ignoring the battery charging, alternate modes, etc. Just the main USB versions.
A pet peeve of mine is when corporations try to use "One" as a brand. Not only is it horribly uncreative, it's been done countless times by countless companies and it never sticks. They're trying to get across the idea that this one product does everything, or that different products/offerings are now unified. But "One" is lazy and unoffensive in the corporate world.
Steve Ballmer wanted a "One Microsoft": SkyDrive being renamed to OneDrive was partly due to this, and he was hoping people would call the Xbox One "The One".
I haven't parsed a user agent in about 15 years. What is the actual use case for doing so these days? It seems like it's only become less and less valuable a thing to do over time.
I have to reduce the size/resolution of my canvas for iPad, as Mobile Safari gets really slow with HTML canvases around 10k x 10k. It's tricky because Mobile Safari has exactly the same UA as desktop Safari.
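The common workaround I've seen (a heuristic, not an official signal) is to combine platform with touch support:

    // iPadOS Safari reports a macOS user agent, so the UA alone can't distinguish them.
    const probablyIPad = navigator.platform === 'MacIntel' &&
                         navigator.maxTouchPoints > 1;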
If you want to show the user a list of active sessions (and let them revoke one), it makes sense to try and parse the UA and show them a more readable device name.
Me neither. But some do, especially if they 100% need features working that are half-supported/different for different browsers.
Though I don't find many examples where it's needed, and checking individual features (to determine whether an API is available, for example; it doesn't apply to everything, yes I know) is the right, non-hacky way to go.
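i.e. the textbook pattern, for the cases where it does work:

    // Feature detection: test the API, not the browser.
    if ('IntersectionObserver' in window) {
      // use it
    } else {
      // fall back or polyfill
    }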
Regarding the dropping of the user agent string, are folk not concerned that it would lead to future web development coalescing towards a Chrome "world view"?
Development seems to be moving in that direction already (see market share per browser and the number of "Chrome only" sites). How would removing UAs continue or hinder that trend?
I just wonder if it would lead to further cementing the Chrome-first approach to web dev - in the context of browser sniffing in JS libs and even backend code. I guess this doesn't drop support for JavaScript navigator properties so maybe JS is fine.
Huh, this is the first time I have heard about interventions. Checking about:compat on Firefox for Android reveals (perhaps unsurprisingly) multiple fixes for Google sites. I wonder what constitutes deploying an intervention versus letting the site break. I understand that part of the reason Google sites are being fixed by Mozilla is because they are popular.