User agent sniffing is poor practice anyway. Try to use the feature and react according to the browser's abilities. Analytics and similar can sniff all day, though.
To push browsers to get rid of UA string, we should all use a UA string extension that uses the same string like "DOG-SHIT". That way it'll start showing up in analytics.
And if you're trying to date the "data science" girl, spam the app/website with UA string like "hi-amy-will-you-go-out-with-me--sincerely-jack-who-sits-behind-you."
So it turned out that Chrome 93-95 had a weird issue with the video element: you couldn't replace the source attribute reliably. Unfortunately, that's something that mattered to us, so I had a hack in place that tested the browser version. It's still there in case someone hasn't updated their browser. Testing the feature was simply not possible. It was either that or just let (a large) part of our users get stuck on a frozen screen.
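A minimal sketch of what that kind of version gate looks like (the regex, the version bounds, and the swap-the-element workaround are illustrative assumptions, not the actual code):

    // Illustrative: gate a workaround on Chrome majors 93-95. Note the
    // regex also matches Chromium derivatives that carry "Chrome/" in
    // their UA, which may or may not be what you want.
    function chromeMajor(ua) {
      var m = ua.match(/Chrome\/(\d+)/);
      return m ? parseInt(m[1], 10) : null;
    }

    var major = chromeMajor(navigator.userAgent);
    var needsVideoSrcHack = major !== null && major >= 93 && major <= 95;

    function setVideoSource(video, url) {
      if (needsVideoSrcHack) {
        // Hypothetical workaround: replace the whole element instead
        // of swapping the src attribute in place.
        var fresh = video.cloneNode(false);
        fresh.src = url;
        video.replaceWith(fresh);
        return fresh;
      }
      video.src = url;
      return video;
    }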
Yup, the "test for feature availability" attitude doesn't properly account for features which are available yet buggy.
I recently ran into an issue where a media-heavy blog post of mine would sporadically crash iOS Safari. I ended up narrowing it down to my use of ImageBitmap to accelerate rendering images into 2D canvases on the page.
My fix ended up being to modify my ImageBitmap feature check to pretend it wasn't supported on Safari.
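Presumably something like this (the Safari heuristic and the fallback path are a sketch, not the commenter's exact fix):

    // Treat createImageBitmap as unsupported on Safari, even though
    // the function exists. The UA test is the usual fragile heuristic.
    var isSafari = /Safari\//.test(navigator.userAgent) &&
                   !/Chrome|Chromium|Android/.test(navigator.userAgent);

    var canUseImageBitmap =
      typeof createImageBitmap === 'function' && !isSafari;

    async function toDrawable(blob) {
      if (canUseImageBitmap) return createImageBitmap(blob);
      // Fallback: decode via an <img>; drawImage() accepts both.
      // (A real implementation should also revoke the object URL.)
      var img = new Image();
      img.src = URL.createObjectURL(blob);
      await img.decode();
      return img;
    }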
Feature testing is a lot harder when attempting to work around CSS bugs in coughcough Safari, unless you use weird things like @supports (<insert feature that vCurrent mobile Safari doesn't support but all other current browsers do>) and then hope it is fixed in a future Safari.
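The pattern looks roughly like this (the probe property is a placeholder for whatever current Safari happens to lack):

    /* Default: ship the Safari-safe workaround. */
    .gallery { /* workaround styles */ }

    /* Probe: any declaration every current non-Safari browser
       supports but Safari doesn't. Placeholder shown here. */
    @supports (some-newer-property: value) {
      .gallery { /* intended styles, undoing the workaround */ }
    }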
Does anyone know of (or work on) an app still doing user-agent sniffing to determine browser features (as in, the sniffing code has been touched in the last ~decade)? If so, why do it that way?
Lots of `isIOS` and `isSafari` tests in my code. Safari 15 for example cannot correctly apply orientation metadata, and will choke on very big images when using `createImageBitmap` in a thread. No way to find out.
I could probably detect the orientation issue with a small dataURI of an image, but the big image issue is more difficult.
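A sketch of that dataURI probe idea (the base64 payload is elided; a real probe needs actual JPEG bytes with EXIF Orientation=6):

    // If the browser honors EXIF orientation, a 2x1 source decodes
    // as 1x2, so comparing dimensions is enough; no pixel reads.
    async function honorsExifOrientation() {
      var probe = 'data:image/jpeg;base64,...'; // 2x1 px, Orientation=6
      var blob = await (await fetch(probe)).blob();
      var bmp = await createImageBitmap(blob);
      var ok = bmp.width === 1 && bmp.height === 2;
      bmp.close();
      return ok;
    }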
Client Hints are the recommended replacement for UA strings. I believe the default behavior for Chromium browsers is already changing (looking for schedule link).
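For reference, the client-side shape of it in today's Chromium (a sketch; availability varies by browser):

    if (navigator.userAgentData) {
      // Low-entropy hints are available synchronously.
      console.log(navigator.userAgentData.brands);  // [{brand, version}, ...]
      console.log(navigator.userAgentData.mobile);  // boolean
      // High-entropy hints must be requested explicitly.
      navigator.userAgentData
        .getHighEntropyValues(['platformVersion', 'fullVersionList'])
        .then(function (hints) { console.log(hints); });
    }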
"recommended"? That's just something that Chrome team created and no other browser vendor is interested in supporting it. Google seems to think that they can use their market dominance to force the idea.
If it’s an improvement, why is that a bad thing? I do expect them to lead if they’re in a dominant position. Sitting idly would be the wasted opportunity.
The issue is that you often want to work around bugs in a browser. For instance, Safari technically supports a lot of features, but very often there are slight inconsistencies with other browsers. Would Safari’s capability be something like “capture-stream-with-bugs”?
In retrospect, wasn't putting "Mozilla" in your user-agent string when you aren't the Mozilla Foundation or project a trademark violation, that they could have enforced?
I’m on mobile so kinda limited search capacity, but it looks like IE was using “Mozilla” in its UA string for a couple years before the trademark was published (granted it was filed a few months prior to that IE release, and granted I have no idea how to read uspto.gov with any confidence that I understand it, and also granted I have no idea how much prior usage of trademarks overlaps with concepts like prior art for patents and copyright).
I get that. I’m just questioning (with some admitted ignorance) whether that matters if the pretense predates the trademark registration. At least for related copyright and patent law, and at least in the US (which influences a lot of IP law), prior usage can carve out exemptions, limit or even invalidate the claimed IP.
I just don’t know how much of this applies to trademark because, well, I never got curious about it until now. But I would assume it’s been reviewed by people far more qualified than myself, given the very litigious context around IE in the intervening period. My bet is it’s moot for trademark because it would’ve been unenforceable at the time of registration, or that it’s such low stakes that no one cared, or that it’s such high stakes that technical leadership fought to retain the status quo.
You can't use trademark in that way if they would need it for compatibility. Nintendo made games have a "nintendo" title scroll on Gameboy and checked for it as "genuine cartridge" protection in a special chip, and sued third parties for trademark violations if they included the title scroll to get around the protection chip.
Courts ruled Nintendo couldn't use trademark in that way, that it was laundering in more rights than trademark provides.
ah, righto. thanks for the specific case-history example. and indeed i'm glad of this.
It's weird in this case because it's not Mozilla itself that was requiring Mozilla in the user-agent string for a site to allow or support the browser, but third-parties, who Mozilla didn't encourage or particularly want to do that, as far as I know.
But it's still probably true that it would not be enforceable under trademark for this and other reasons, okay.
I wonder if Firefox has ever considered taking "Mozilla" _out_ of their user-agent! At this point it's probably impossible. We're just stuck with this mess.
There definitely was a case involving the GB logo in ROM. The internet seems to have forgotten it, though. Whenever I've tried to look it up in the past the results always point to the unrelated Game Genie lawsuit.
No, the idea was right from the beginning that you could name (additionally) other user agents conforming to the same or lower specs as your client.
(Without this, it would have been nearly impossible to introduce a new browser. Mind that browsers were much more dissimilar in the early years. E.g., Netscape Navigator could access virtual hosts, while NCSA Mosaic, which didn't know about the Host header field, could not. While some browsers came with JS, others, like ViolaWWW, followed a different scheme, more similar to HyperCard, etc. Therefore, a server may well have checked the user agent string in order to decide how to serve what content.)
On the other hand, the whole reason UA strings are such a mess is because they have a long history of confusing humans who program computers. And being used sometimes intentionally for that purpose. And being used sometimes intentionally to mitigate that.
I know it would be a temporary disaster but I think at this point I wish Safari, FF, Chrome and Edge would just decide on a flag day to get rid of user agents.
From March 17th, 2023 (or whatever) all user agent strings are now “WebBrowser/1.0” until the end of time.
I think there's still use for it. E.g., I recall when Chrome 32 would fail to play a newly created Web Audio BufferSource unless the playback call came from inside a decodeAudioData() callback. At the same time, you couldn't use this approach to play back sound from e.g. Safari iOS, since the callback lacked the user-interaction blessing, resulting in muted audio. Try-catch wasn't an option either: as soon as playback failed over any attempt to play that node in the traditional way with a null-buffer exception, the source node had already expired and wouldn't play at all. The only way to handle this situation was user agent sniffing, in order to detect Chrome and handle it as an edge case. (Mind that the API was still the same and there was no way to feature-detect this behavior.) There's no guarantee that something like this won't happen again.
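A rough reconstruction of what that edge case forces you into (2014-era style; the details are illustrative, not the original code):

    var isChrome32 = /Chrome\/32\./.test(navigator.userAgent);
    var pendingBuffer = null;

    function load(ctx, arrayBuffer) {
      ctx.decodeAudioData(arrayBuffer, function (buffer) {
        if (isChrome32) {
          // Chrome 32: the node only plays when start() is called
          // from inside this callback.
          var src = ctx.createBufferSource();
          src.buffer = buffer;
          src.connect(ctx.destination);
          src.start(0);
        } else {
          // iOS Safari: starting here lacks the user-interaction
          // blessing and plays muted, so defer to a user gesture.
          pendingBuffer = buffer;
        }
      });
    }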
That just sounds like a browser bug, though. Client-side code shouldn't try to work around it; the website should just notify all users that Chrome is broken and won't play audio properly.
That's ridiculously naïve. You can't just decide to let your product be broken for 80% of your users because it's technically their browser that's wrong.
I see these kinds of suggestions from technologists surprisingly frequently, and I always wonder if it's a serious suggestion and if the person has ever worked on anything with non-programmers as users.
At that time, about every audio library was broken for 80+% of users. If you happened to maintain such a library, you received bug reports for sure (on your library). Chrome claimed their new implementation was within spec, so it was unclear whether there would be a fix. As it happened, the specs were amended/clarified and Chrome returned to the previous behavior a few versions later.
> While the double cookie approach is what is recommended by Google, the ASP.NET team concluded otherwise - they found that going with a user-agent sniffing approach would be the safer approach.

User-agent sniffing is hard to get right. It's hard to cover all the relevant browsers, and it's hard to do so in a way where you can be reasonably sure it won't break in user-agents of the future. But when someone gets it right, that becomes a readily copiable solution that anyone can use.
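That kind of sniffing looks roughly like this (a sketch modeled loosely on the published incompatible-clients heuristics for SameSite=None; the regexes and version bounds are from memory and worth verifying):

    function disallowsSameSiteNone(ua) {
      // iOS 12 and macOS 10.14 Safari treat SameSite=None as Strict.
      if (/\(iP.+; CPU .*OS 12[_\d]*.*\) AppleWebKit\//.test(ua)) return true;
      if (/\(Macintosh;.*Mac OS X 10_14[_\d]*.*\) AppleWebKit\//.test(ua) &&
          /Version\/.* Safari\//.test(ua)) return true;
      // Old Chrome rejected cookies carrying the then-unknown value.
      var m = ua.match(/Chrom(e|ium)\/(\d+)/);
      if (m) {
        var v = parseInt(m[2], 10);
        if (v >= 51 && v <= 66) return true;
      }
      return false;
    }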
Isn't the issue here that a breaking change was introduced in a new standard version that only some browsers implement?
Why can this not be solved with HTTP version numbers, instead of a method that allows Google to maintain a separate standard that it forces developers to switch between with user agent strings?
That's not possible: it assumes a perfect implementation. As I wrote in another comment: Chrome 93-95 had a weird issue with the video element. You can't test that in JavaScript. The feature exists, it just doesn't work correctly. You sometimes have to work around bugs in certain versions of certain browsers.
> Just force the switch to better methods.
I always thought software developers should have the needs of users in mind, but it turns out it's the other way around.
* Why do you think we can make our users upgrade? We're small, just one of dozens of websites they use per day. And in our line of business, this usually means losing clients (who, BTW, are usually not the end-users).
* How would we even be able to tell them they should upgrade if we can't test the version of their browser?
* It was Chrome 93 to 95. For three months, the bug existed in every current version of Chrome. They couldn't update their way out of it.
Last time I had to resort to UA sniffing, Firefox had issues with clickable horizontal/vertical lines in SVG. There was no version that supported it properly. So should I just display a "Switch to Chrome" banner for Firefox users? Is it that simple?
HTTP/2 is more like a transport envelope around HTTP/1; the goal of HTTP/2 was to optimize transmission while avoiding breaking HTTP/1 semantics for servers and clients.
That doesn't work with render-only bugs, and Safari has tons of them. In that case, JS is useless, because everything reports as correct; only the screen is messed up.
Another example: Android WebView 69 on certain devices has a bug that will take down the whole app. There is no way to work around it without UA sniffing, because a single attempt will take the whole app down.
It can be difficult to check for bugs in this manner, especially if the feature you want to use worked fine before, then has one or two versions where it crashes, and then works again.
If you feature test for browser crashes, people are going to think it's your site, not their browser.
This is the correct answer. Allow 3rd party analytics or whatever to do as they choose, but your javascript should pretty much never believe a UA string.
This would be sufficient if features were a yes/no binary of supported or not, but sometimes browsers implement features but get them wrong, or incompletely.
Then the features aren't supported in the browser and you should file a bug report to the vendor. Having the developers that use a tool work around the bugs in that tool is a recipe for having millions of repeated lines of code that don't actually solve the problem upstream.
The problem is that browser vendors tend to implement Web APIs early, before the API and the respective specifications are frozen, while they are still likely to vary. They may even implement a certain (still valid) interpretation of the specs which turns out to be not exactly what everybody else eventually implements, and may stick with it for some time. (There are examples of this for about every browser vendor on the planet. Without it, if everybody had waited for finalized specs, we wouldn't have had web sockets, web databases, or WebRTC for years.) How do you handle this without version numbers?
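The prefix era is the visible half of this. Presence checks like the ones below were easy, but when the behavior (rather than the name) varied between vendors' interpretations, they told you nothing:

    // Same API, different names while the specs were in flux.
    var AudioCtx = window.AudioContext || window.webkitAudioContext;
    var ctx = AudioCtx ? new AudioCtx() : null;

    var raf = window.requestAnimationFrame ||
              window.webkitRequestAnimationFrame ||
              window.mozRequestAnimationFrame;

    // When two implementations expose the identical name but behave
    // differently, no presence check can tell them apart.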
"Webmasters" were told to use XHTML and they massively cried how that they just can't write correct code so they absolutely need sloppy parsers that would analyze the mess of unclosed tags and weirdly placed elements and try to understand the intent.
And the industry gave up. This is essentially the same story.
I guess, this is targeted at Apple's butterfly key switches, which were used in their 2015-2019 MacBooks and prone to fail due to dust and other small particle intrusions.
Looks like that's been replaced by "Reduce User-Agent request header":

> Reduce (formerly, "freeze") the amount of information available in the User-Agent request header. See https://www.chromium.org/updates/ua-reduction for more info. – Mac, Windows, Linux, ChromeOS, Android, Fuchsia, Lacros
> #reduce-user-agent
User agent strings aren't reliable for fingerprinting users or devices already. Shouldn't be using them as high fidelity IOCs. But yeah, some still do. Much better ways to fingerprint users or devices on HTTP/websites.
Yeah, all browser developers have tried to essentially freeze the UA, if nothing else to stop keying site behaviour off random subsets of it. For example here is my UA:
Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/16.2 Safari/605.1.15
Which is all sorts of frozen (old OS version, Intel on an ARM device, etc.)
Are they talking about only MacBook Pros with Intel chips, since they reference butterfly keyboards, or is that sarcasm that sort of feels out of place?
I strongly suspect they’re referring to the aspects of the UA string specific to their own device. I.e., they used an MBP whose vintage can be intuited from its notorious keyboard problems, and which has evidently stopped receiving OS updates past 10.15. Edit to add: I also suspect they thought this was a clever way to avoid some nerd inevitably chiding them to upgrade their OS, but it’s probably too clever to achieve that.
Mozilla froze the macOS version in User-Agent strings past Catalina too, also due to web compatibility issues (particularly with older versions of the Unity game engine).
That honestly seemed a little unnecessary and made the whole thing confusing. I read it as: "This is only an issue on a MacBook, assuming that your s key sticks and the date is December 29th, 2022".
I suspect that the only part of that which is true is that the bug only occurs on macOS. The date or a sticky s key isn't a prerequisite.
What would be more interesting is: Why isn't this an issue on Windows?
I presume that would unfortunately break websites in _other_ ways, if they're UA-sniffing Firefox in unconventional ways. At that point they might as well remove it altogether instead of drastically changing the UA string.
Stop sending User-Agent. Replace it with nothing. Not client hints. Nothing.
If sites start rejecting Firefox clients because of that, then instead change it to send exactly the same User-Agent (and other software identification) that Chrome sends, and commit to exactly faking Chrome's signals in the future.
The UA header never had any business existing to begin with. Servers guessing client capabilities from the software they're running, or trying to work around client bugs, is architecturally insane, concentrates power excessively, and guarantees that morons will write bugs like the one this story is about. And the other uses of the information are simply evil.
If you need to signal specific capabilities, which you generally should not be doing because you shouldn't have punted all real attempts at standardization years ago, then signal specific capabilities. Or let the server try stuff and give only success or failure feedback.
I'm with you for the most part. Today, with modern browsers on almost all devices and relatively high memory, compute and bandwidth capabilities - for certain.
However I can't agree with this:
> The UA header never had any business existing to begin with.
The early mobile web, and devices that were just too low-powered and/or low-bandwidth to support even early CSS/JS-based capabilities detection, absolutely depended on server-side rendering to serve them mobile-HTML-friendly sites.
Yeah, it was terrible, it was painful and insane to manage - but was necessary.
By default, I do not send a UA header. Mind you I use a text-only browser so I am not concerned with "responsive design", which is arguably one legit use for the UA header. The number of sites or hosting providers^1 that actually require a UA header in order to retrieve a webpage, let alone require a specific UA string, is relatively small. IME, they will fit in a short text file.^2 From firsthand experience I know that thousands of sites, i.e., from amongst the ones submitted to HN, do not require it. I suspect the true number of sites that do not require a UA header on the www is in the hundreds of millions. IMO, this is yet another incredible "tech" company stunt that there is currently so much commercially-related reliance on what is truthfully an optional header, and one that routinely contains "fake", i.e., arbitrary, data.
1. Or "website building companies" such as Squarespace.
2. The localhost-bound forward proxy adds a UA header for those few sites automatically.
> so I am not concerned with "responsive design", which is arguably one legit use for the UA header.
Responsive is specifically designed to work without client hints/detection, so you'd be good there anyway. Specific mobile-only versions of websites are what breaks, and a large part of that is (some?) WordPress sites.
At first I was like "Yeah buddy, preach on". Then came the next thought:
Then what? Site providers will still want to know, and this likely results in a browser detection escalation arms race pushed to JavaScript.
I still think servers shouldn't get to know so much about browser clients, certainly not PII-level details (PII-level being the norm as it exists today).
Does the user’s browser support WebDoodle2? Why try to parse a weird history of nonsense and combine that with a table of what versions of what browser supports what?
Just check if document.doodle2 is defined. Done. Reliable. Easy.
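In full (doodle2 being the commenter's hypothetical API):

    if (typeof document.doodle2 !== 'undefined') {
      document.doodle2.draw();  // illustrative call on the fake API
    } else {
      // hide the feature or fall back
    }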
FF pretends to be Chrome pretends to be Safari pretends to be IE pretends to be Netscape pretends to be Mosaic pretends to be god only knows.
Actually asking via JS instead of guessing is a massive improvement.
You can't stop fingerprinting. I wish you could, but you can't.
But you CAN make it enough of a pain to cut down on the number of half-qualified Web monkeys who try to use the information in ham-handed ways. You can stop just handing the information over for free to people who might want to casually exploit you. You might even make it harder for some more sophisticated and/or committed actors to do it; for example, if I'm an ISP running a middlebox and trying to fingerprint all the traffic that runs through me, I can't use JavaScript. And you can save some bandwidth in the process.
Can we presume a premise that having JavaScript disabled is the dominant condition, and therefore does not make a browser stand out from the herd? What if having it disabled is true of only a small percentage of browsers?
If we assume that the user-agent string is set to whatever is "most common", and JavaScript is disabled, are there no other identifiable bits of information? What about CSS fingerprinting?
Perhaps the expected adversaries are not sufficiently motivated to try to sort somebody into a cohort based on lack of JavaScript, and the remaining observable data?
My point with my comment was addressing the GP's claim about fingerprinting being pushed to JavaScript in an arms race, which is what's happening and is something that NoScript can't really stop unless millions/billions of people start using it.
NoScript is great, I use it, but isn't a solution to fingerprinting via JS if you actually want to talk to the servers that fingerprint that way.
> results in a browser detection escalation arms race
Countries should start enforcing their laws with regard to equal access for disabled users. Invoke hefty fines for denying service to people using assistive software that doesn't masquerade as a mainstream browser and the problem will disappear.
We live in the 21st century and have proper APIs to detect features now; we should not be relying on parsing this user-agent string, which is 90% legacy garbage anyway.
I mean, sites blocking compatible browsers is such a common problem that not only do user-agent switcher extensions exist, they're also some of the most popular extensions out there!
A side-effect of this will be web servers that won't allow scraping because your non-browser HTTP client won't have the same features that browsers have that also aid in profiling users for advertising.
We're already at a point where scraping can be hard without a full-fledged JS engine, a move towards feature detection will mean that you're going to have to use a browser that servers can easily fingerprint for ad targeting if you want to scrape data.
Chrome already deprecated UA strings 3 years ago. Google uses browser fingerprinting to detect if you're using Chrome so they can send you to working versions of their products, instead of the versions they send to other browsers that don't work as well.
I can see browsers being excluded if they aren't of the 'blessed' variety that either follows Chrome's implementation, or if they don't have features that advertisers want that allow for easy user identification/tracking/fingerprinting/etc.
I don't think their suggestion was "just do what Chrome does", it was "make your browser indistinguishable from Chrome, so that Google trying to create a separate browser standard breaks websites for other browsers".
If a website can't tell my browser apart from Chrome, it can't attempt to support Chrome-specific features based on the user agent string.
No, it's just standard web developer laziness. They haven't made it IE-only; they just broke the less popular browser, and likely anything else that is not Chrome/Blink, because Chrome/Blink is the new IE: its bugs define "correct behavior", its shipping something means "it's standard". No different from the "IE is the internet" era (and just like Chrome, there was a long period where IE was the best browser, before devs started targeting it alone).
I believe Firefox doesn't expose the audio output device picker, only letting you select a single output device (you can select one in the permission prompt, but changing it is difficult). Last time I looked into why Teams wasn't working right, that was the reason Firefox was still listed as unsupported, because this can easily mess up setups with conference-call PCs hooked up to some fancy proprietary audio-conferencing system.
Many aeons ago I worked on webkit, and the first step for a great many site compatibility bugs is "does it work with a Firefox ua string".
Because it is always easiest to just check IE/new-IE and then user-agent-gate everything else, that is what happens. Which is why we keep getting sites requiring Chrome (new IE) or IE (old IE): it doesn't matter whether the site is broken due to reliance on Chrome behavior vs. the spec; what matters is devs coding to a single browser and considering any deviation to be a bug in anything else.
Nope. A web browser browses the world wide web. A WebDAV file browser just speaks HTTP. No hypertext, no web.
edit: Wow, it's not really that complicated. Please take a moment to stop taking a crowbar to each and every crack and crevice of this argument. It's not about hypertext specifically, or hypermedia, or HTTP, or HTML, that's missing the point. The point is, a web browser is software that browses websites. A WebDAV browser browses WebDAV. WebDAV is built on web technology and can be used inside a web browser, but it is no more "the web" than anything else. A WebDAV file browser is absolutely no more a web browser than the Deno JavaScript runtime is. I can't browse HN with a WebDAV browser, or Deno. Speaking HTTP does not make you a web browser.
Congratulations on finding the word you were looking for. The only problem is, when you use WebDAV, the payloads aren't hypertext. The acronym doesn't matter.
Tar stands for tape archive, but thankfully it works on disk drives too.
It's called that, but the program in question doesn't understand it. The fact that the protocol's name contains "hypertext" is irrelevant, just as using http to download a .zip file doesn't make that file a "hypertext".