Hacker News

But that's what this proposal is about. Instead of using the user agent string (which lies and needs to be parsed), you have some well-specified variables to check.

Also, I think you're missing why the user agent lies now. It wasn't done on a lark. It's because it has been abused so much by web devs, both accidentally through sloppy regular expressions and deliberately. For example, even big names like Microsoft and Google have at times used it to unnecessarily deliver poorer browsing experiences to people not using their browser.

Sure browsers could have stuck to their guns and been "honest" about their UA. But honesty is no comfort to users of the browser when they find websites break for no reason. The average user is more likely to blame the browser than the website.

I am completely aware of why the user agent lies now. Didn't I explain exactly how it is broken in my post? I was basically there for when it broke; I remember when IE came out claiming to be "Mozilla" because otherwise a surprising number of sites wouldn't serve them the latest whizbang Netscape 3 HTML. (I thought it was a bad idea then, but with much less understanding of why.) This is why I kept calling what I'm asking for a new field; the User Agent itself can't be rehabilitated.

The parent of my post is correct; in practice we're still going to need the occasional ability to shim in browser-specific fixes, because even if the browsers do their best, they're going to inadvertently lie in the future and claim to support WebVR1.0 in Firefox 92, but, whoops, actually it crashes the entire browser if you try to do anything serious in it. Or, whoops, Firefox 92 does do a pretty decent job of WebVR1.0, but I need some attribute they overlooked. Or any number of similar little fixups. We know from real-world experience that we're sometimes talking about crashing bugs here; this is a real thing that has happened. Whatever proposal gets implemented should deal with this case too.

If we standardized on a format like the one I suggested at the end of my post, it would go a long way toward preventing future browsers from mucking up the field. If you just get "$BROWSER $VERSION $OS" in a rigid specification, and if the major browsers conform to it and the major frameworks enforce it, that will be enough to keep the field from becoming a problem in the future. It won't stop Joe Bob's Bait Shack & Cloud Services from giving their clients a custom browser and/or server that abuses it, but there's no stopping them from doing things like that no matter what you do, so shrug.
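To make the rigidity concrete, here's a sketch of what consuming such a field could look like. The three-token format and the strictness rules are my illustration of the idea above, not any existing specification:

```typescript
// Hypothetical parser for a rigid "$BROWSER $VERSION $OS" field.
// The format itself is the proposal under discussion, not a real header.

interface BrowserId {
  browser: string;
  version: string;
  os: string;
}

// Accepts exactly three space-separated tokens; anything else is rejected
// rather than guessed at, which is the point of a rigid specification.
function parseBrowserId(field: string): BrowserId | null {
  const parts = field.trim().split(" ");
  if (parts.length !== 3) return null;
  const [browser, version, os] = parts;
  // A strict version shape (digits and dots only) keeps vendors from
  // smuggling extra tokens into the field.
  if (!/^\d+(\.\d+)*$/.test(version)) return null;
  return { browser, version, os };
}
```

With that rule, `parseBrowserId("Firefox 92.0 Linux")` parses cleanly, while a legacy soup like `"Mozilla/5.0 (X11; Linux x86_64)"` is simply rejected instead of being half-parsed by a fragile regex.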

Then I'm not sure I understand you. The proposal is clearly proposing new fields that are less susceptible to abuse (whether intentional or not). Your idea of parsing a "$BROWSER $VERSION $OS" string seems inferior to client hints that use structured headers.
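For comparison, the client hints approach sends the browser identity as a structured list of quoted brand/version pairs (the Sec-CH-UA header), which is far easier to consume than a free-form string. The sketch below is a deliberately simplified extractor, not a full RFC 8941 Structured Fields parser (it ignores escapes and extra parameters):

```typescript
// Simplified reader for a Sec-CH-UA-style value such as:
//   "Chromium";v="106", "Google Chrome";v="106"
// Not a complete Structured Fields implementation; illustration only.

interface Brand {
  brand: string;
  version: string;
}

function parseSecChUa(header: string): Brand[] {
  const brands: Brand[] = [];
  // Each list member looks like: "Brand";v="Version"
  const re = /"([^"]*)";v="([^"]*)"/g;
  let m: RegExpExecArray | null;
  while ((m = re.exec(header)) !== null) {
    brands.push({ brand: m[1], version: m[2] });
  }
  return brands;
}
```

The point is that each brand and version arrives pre-delimited, so there is nothing to sniff with a regex over an opaque string.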

I'm saying we still need a browser version field in addition to a feature field. Features would suffice on their own if they were perfect, but we shouldn't plan on them always being perfect. We have a demonstrated, in-the-field history of browsers claiming to support features that they don't quite support, sometimes with crashing bugs. In the real world, supporting WebVR1.0 is more than just putting "web-vr/1.0.0" in the feature string.

Culturally, you should prefer feature detection; most developers would never need anything else. But when Amazon builds its new whizbang WebVR1.0 front-end in 2024, they may need the ability to blacklist a particular browser. Lacking that ability may actually prevent them from shipping, if some non-trivial fraction of the browsers claiming "web-vr/1.0.0" will in fact crash and there is nothing they can do about it.
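The escape hatch argued for above can be sketched as a narrow, version-keyed blocklist sitting behind feature detection. The browser names and version ranges here are entirely hypothetical examples:

```typescript
// Hypothetical blocklist for browsers whose feature claim is known to be
// broken. Feature detection remains the default path; this is the rare
// override for "claims web-vr/1.0.0 but crashes" cases.

interface BlockRule {
  browser: string;
  minVersion: number; // inclusive
  maxVersion: number; // inclusive
}

const webVrBlocklist: BlockRule[] = [
  // e.g. "Firefox 92 advertises web-vr/1.0.0 but crashes under load"
  { browser: "Firefox", minVersion: 92, maxVersion: 92 },
];

function isBlocked(browser: string, version: number, rules: BlockRule[]): boolean {
  return rules.some(
    (r) => r.browser === browser && version >= r.minVersion && version <= r.maxVersion
  );
}
```

The key design point is that the blocklist is an exception list keyed on exact versions, not a general sniffing mechanism: Firefox 93 falls straight through to feature detection the moment the bug is fixed.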

Besides... they will find a way to blacklist the browser. Honestly "prevent anyone from ever knowing what version of the browser is accessing your site" is not something you can accomplish. If you don't give them some type of user agent in the header, it doesn't mean the Amazon engineers are just going to throw their hands up and fail to ship. They will do something even more inadvisable than user agent sniffing, because you "cleverly" backed them into a corner. If necessary, they will examine the order of headers, details of the TLS negotiation, all sorts of things. See "server fingerprinting" in the security area. You can't really stop it. Might as well just give it to them as a header. But this time, a clean, specified, strict one based on decades of experience, instead of the bashed-together mess that is User-Agent.

Or, to put it really shortly: the fact that a bashed-together User-Agent header has been a disaster is not sufficient proof that the entire idea of sending a user agent is fundamentally flawed. You can't tell from the current facts whether the problem is that user agent sniffing is always, 100%, black-and-white, no-shades-of-grey mega-bad, or whether the bashed-together nature of the field is the problem.
