
Blink-Dev – Intent to Deprecate and Freeze: The User-Agent string - jasonvorhe
https://groups.google.com/a/chromium.org/forum/m/#!msg/blink-dev/-2JIRNMWJ7s/yHe4tQNLCgAJ
======
jchw
I wish people would stop trying to forcibly inject cynicism into this, because
this is actually a pretty amazing idea imo. If you look at browsers like
Falkon or QuteBrowser they are perfectly serviceable browsers but actually
using them can be annoying due to UA sniffing, _even by Google_. In theory
this is the solution to that problem. Freezing and unifying the UA will
prevent everyone, including Google, from using it to gate features.

I am a bit worried, though, about how we will continue to count marketshare.
If the intent were to remove the UA it would be worse, but it does seem like
browser marketshare analytics based on user agents may be coming to an end.

(Disclosure: I work for Google on unrelated projects.)

~~~
jackjeff
I have been on the receiving end of browser bugs while developing cutting-edge
JavaScript applications.

It's _VERY_ useful to be able to use the UA string to do something like:

if (Chrome version X) then { do this crazy workaround because Chrome is broken
}
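
In real JavaScript, that kind of version-gated workaround is usually a small helper along these lines (a sketch; the version range and the idea of a "broken range" are hypothetical, learned from bug reports):

```javascript
// Extract Chrome's major version from a UA string; returns null if the
// browser does not present itself as Chrome.
function chromeMajorVersion(ua) {
  const match = /Chrome\/(\d+)/.exec(ua);
  return match ? Number(match[1]) : null;
}

// Hypothetical gate: only apply a workaround for a known-broken range
// of versions.
function needsWorkaround(ua, brokenFrom, brokenTo) {
  const major = chromeMajorVersion(ua);
  return major !== null && major >= brokenFrom && major <= brokenTo;
}
```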

No. People don't even necessarily update to the latest version. The "Samsung
Browser" was notorious for seemingly _always_ being some old fork of Chrome or
Chromium that would _never_ update.

~~~
henryfjordan
In the short-term this might be annoying, but in the longer term this is going
to force browsers to adhere more closely to standards. When sites are less
able to write different code for different browsers, the onus to fix
inconsistencies will transfer from site-maintainers to browser-maintainers.

In 5 years you might look back and realize you no longer write browser-
specific code any more.

~~~
bhk
In such a world, it is not standards that win. It is the _predominant_ browser
that wins.

Are you old enough to remember the "best viewed with Netscape" badges that
were everywhere in the 90s?

~~~
necovek
Haha, brings back the old days. I had my sites plastered with "best viewed
with your eyes" badges, but that never took hold :)

------
ChrisSD
> On top of those privacy issues, User-Agent sniffing is an abundant source of
> compatibility issues, in particular for minority browsers, resulting in
> browsers lying about themselves (generally or to specific sites), and sites
> (including Google properties) being broken in some browsers for no good
> reason.

> The above abuse makes it desirable to freeze the UA string and replace it
> with a better mechanism.

UA sniffing should have died out a long time ago. It's frustrating that it's
2020 and I'm still having my browsing experience broken for no other reason
than the site doesn't like my UA.

~~~
sebazzz
User agent sniffing is still sometimes required. Take the SameSite bugs that
Safari had. On the server you needed to act on that based on User Agent
sniffing.
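
A simplified sketch of that kind of server-side check, based on the publicly documented Safari `SameSite=None` incompatibility on iOS 12 and macOS 10.14 (the regex patterns here are illustrative, not exhaustive):

```javascript
// Sketch of a server-side check for clients that mishandled
// SameSite=None (they treated it as SameSite=Strict): Safari on
// iOS 12 and on macOS 10.14 (Mojave).
function dropsSameSiteNoneCookies(ua) {
  const ios12 = /\(iP.+; CPU .*OS 12[_\d]* like Mac OS X\)/.test(ua);
  const macosMojaveSafari =
    /\(Macintosh;.*Mac OS X 10_14[_\d]*.*\)/.test(ua) &&
    /Version\/.* Safari\//.test(ua);
  return ios12 || macosMojaveSafari;
}
```

A server would then omit the `SameSite=None` attribute (or send a legacy fallback cookie) when this returns true.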

~~~
vdnkh
Agreed. For media streaming UA sniffing is required because of quirks each
browser has in decoding media. Safari has a few quirks about the MP4 packaging
it likes via MediaSourceExtensions. Firefox's network stack includes a delay
not related to the actual request because of its implementation[0] which makes
timing of small downloads very inaccurate. None of these differences are
discernible except by UA.

[0] I believe FF includes in the download length the time a request spends
sitting in the request queue, but I can't exactly remember.

~~~
anoncake
Browsers have those quirks because web designers work around and hide them.

------
buu700
There are many, many cases where UA sniffing is correct and required. Browsers
are buggy, complicated pieces of software that don't always behave 100% as
specified or expected.

Feature detection is well and good, and should be the first line of attack;
but sometimes you need to account for things like certain versions of Safari
having WebRTC "support" that's actually completely broken, certain versions of
Chrome crashing when certain WebAssembly features are used, and Firefox-
specific CSS bugs. (All real examples I've run into.)

It may not be the worst thing in the world if UA sniffing is broken for all
existing web properties though, since anything not well maintained enough to
migrate to a new API is probably either working off of outdated information or
abusing UA sniffing where feature detection would have been more appropriate
anyway.

That being said, requiring a server to use User Agent Client Hints is stupid.
What are client-side libraries like webrtc-adapter
([https://github.com/webrtcHacks/adapter/issues/1017](https://github.com/webrtcHacks/adapter/issues/1017))
supposed to do? I don't see any goals listed that wouldn't be addressed
equally well while providing an equivalent JS API.

~~~
xtracto
Just a couple of weeks ago we had a customer complain that their customers
were having problems using our technology in IE11 because it was too slow...

We had to resort to identifying IE11 and writing workarounds so that it works
there...

~~~
darekkay
Did you really require UA sniffing, though? There are other methods for
detecting IE11, both for JS and CSS.
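
For reference, a commonly cited UA-free IE11 check (a sketch; the exact objects sniffed vary) relies on IE-only globals:

```javascript
// IE exposes document.documentMode, and IE11 additionally exposes
// MSInputMethodContext. Written against a passed-in global object so
// the logic is testable outside a browser.
function isIE11(win) {
  return Boolean(win.MSInputMethodContext) &&
         Boolean(win.document && win.document.documentMode);
}
```

On the CSS side, hacks based on IE-only media queries such as `-ms-high-contrast` were commonly used for the same purpose.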

------
tsegratis
I'd still like a way to feature detect, rather than make a round-trip to the
browser. This would let me embed webasm rather than js+branch to begin a
second download if feature found... etc

I suggest the UA string be a bitmask of features. Then feature detection
should stop being broken

Extra bits could be used for js-on/js-off, and is-bot/is-human

--

Ah I see they're kind of doing the bitmask, but keeping a round-trip, and
making things complicated (though I realize latest http standards can probably
remove those round-trips in the average case)

I'd still suggest the bitmask for non-sensitive information, and have
everything else simply js-tested as it currently is

Maybe is-user-blind might be a nice bit too, since canvas based websites could
switch to the dom, or whatever
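
A minimal sketch of what such a bitmask could look like (the bit positions are entirely invented for illustration):

```javascript
// Fixed, public registry of feature bits; positions are illustrative.
const FEATURE_BITS = { wasm: 0, webrtc: 1, js: 2, isBot: 3 };

// Encode a set of supported features as an integer mask.
function encodeFeatures(features) {
  let mask = 0;
  for (const name of features) mask |= 1 << FEATURE_BITS[name];
  return mask;
}

// Test a single feature bit in a mask.
function hasFeature(mask, name) {
  return (mask & (1 << FEATURE_BITS[name])) !== 0;
}
```

A server could then branch on a single small header value without any round-trip, which is the appeal; the obvious downside is that every feature bit is also a fingerprinting bit.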

~~~
tsegratis
Please could we also have a couple more privacy setting bits for i-accept-
your-cookies and i-want-to-be-told-about-cookies-on-every-single-website-
because-i-forget-what-they-are-and-really-want-to-click-through-to-your-
privacy-settings

If we have those bits, then the user can make a set of choices once, for every
site, and we get rid of cookie pop-ups

-- Websites could still ask if they want/need to do something that violates
those choices

~~~
zeven7
Or we can just assume like reasonable adults that websites are going to put
cookies in your browser and promote privacy-oriented tech to users rather than
trying to pretend that having every website ask for permission in order to
enable basic functionality solves anything.

~~~
pjc50
> promote privacy-oriented tech to users

Like what?

> every website ask for permission in order to enable basic functionality

I don't believe that purely functional cookies require GDPR permission -
that's covered by "provide services to the user". It's the ones which are
functionality to third parties _not_ the user which are the problem.

~~~
zeven7
> I don't believe that purely functional cookies require GDPR permission -
> that's covered by "provide services to the user". It's the ones which are
> functionality to third parties not the user which are the problem.

Ah, I didn't realize that. Well, that does sound much more reasonable.

~~~
pjc50
Actually the ICO page itself presents a great example: if you go to
[https://ico.org.uk/for-organisations/guide-to-data-
protectio...](https://ico.org.uk/for-organisations/guide-to-data-
protection/guide-to-the-general-data-protection-regulation-gdpr/principles/)
you get:

> Necessary cookies

> Necessary cookies enable core functionality such as security, network
> management, and accessibility. You may disable these by changing your
> browser settings, but this may affect how the website functions.

> Analytics cookies [toggle On/Off]

> We'd like to set Google Analytics cookies to help us to improve our website
> by collecting and reporting information on how you use it. The cookies
> collect information in a way that does not directly identify anyone. For
> more information on how these cookies work, please see our 'Cookies page'.

The implication is that a consent dialog would not be required if they weren't
using Google Analytics or any other third-party.

------
zerotolerance
I think the proposal is well-intentioned and shortsighted. Nobody ever plans
for a junk drawer feature, but certain features are destined to be fragile
fuzzy contracts regardless. Unless this proposal seeks to eliminate the
differences between different user agents, it is proposing moving to a
different but equivalent junk drawer. The change would create real pain for
service providers and consumers alike and provide little benefit.

From the implementation side, I would have preferred to see an OPTIONS request
style solution similar to CORS to allow complying UAs to detect what if any UA
information will be required.

~~~
Izkata
> I think the proposal is well-intentioned and shortsighted.

Could also be evil and long-term: user agents won't matter anymore once
everything is auto-updated Chrome, right?

------
dijit
Google recently introduced a new “secure” login page which forces my
Chromium-based browser (qutebrowser) to fake a Mozilla Firefox user agent in
order to work.

As such, I think if detection comes in some other form, it might be harder to
trick some sites into working properly.

~~~
zzzcpan
Despite all the PR about "privacy", they don't seem to intend to prevent
detection at all and are introducing even more surface for detection with
client hints. Basically the opposite of what the title implies.

~~~
rowan_m
Client Hints ([https://wicg.github.io/ua-client-
hints/](https://wicg.github.io/ua-client-hints/)) move a passive
fingerprinting vector to an active one, i.e. information must be explicitly
requested by the site and then the browser can choose how to respond.

The default level of information exposed drops to just the browser name and
major version, which is only sent to sites on HTTPS and with JavaScript
enabled.

Additional hints are only sent on subsequent requests by the browser if the
site sends the matching header in its initial response and the browser chooses
to send a value. The current set of proposed hints defines the same amount of
information as is exposed in Chrome's User-Agent string.
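
Concretely, the opt-in flow described above looks roughly like this (header names taken from the draft at the time; the values and exact serialization are illustrative and were still subject to change). The site's first response requests additional hints:

```http
HTTP/1.1 200 OK
Accept-CH: Sec-CH-UA-Full-Version, Sec-CH-UA-Platform
```

A subsequent request from a browser that chooses to comply might then carry:

```http
GET /app HTTP/1.1
Sec-CH-UA: "Chromium"; v="81"
Sec-CH-UA-Full-Version: "81.0.4044.113"
Sec-CH-UA-Platform: "Linux"
```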

~~~
HorstG
Yes, but the fear remains that all sites will just always request everything
from ua-client-hints. It is also totally unclear how browsers will handle
this. I think making this permissioned will just add to the
PrivacyNagOverload. Also, browsers will continue to lie in the hints because
sites will always make broken assumptions or even try to do mischief with the
info.

I think the only winning move is not to play: Freeze the User-Agent and do not
provide a replacement. Or at the very least, make the replacement based on
actual feature bits, not version numbers and device models.

~~~
rowan_m
This is part of what the Privacy Budget ([https://github.com/bslassey/privacy-
budget](https://github.com/bslassey/privacy-budget)) proposal aims to tackle.
Freezing the User-Agent string reduces the amount of information exposed by
default. UA Client Hints means the site has to explicitly request the
additional information. The browser makes a choice about how to allocate /
enforce budget. You're right, though, that how that works and how it would be
exposed to the user in their browser are still open questions! More
permission pop-ups certainly aren't the answer.

~~~
HorstG
That would still cause pages to do evil things if users set their privacy
budget to "0/paranoid" or anything below "11/just gimme all".

Just as with adblockers users will be nagged about "please turn that dial to
11". On average nothing will improve except for users who are able enough to
get around those shenanigans even now.

~~~
danShumway
I think there's a fundamental principle in privacy/security that we don't
really understand broadly enough across the industry -- that if you allow
someone to know whether or not you're hiding/disabling something, they can
often just force you to change the setting.

Just as one example, active-permissions that can be revoked after being
granted aren't perfect, but are a big step up over manifests, because they're
more work to exploit and often allow users to retroactively change permissions
after an app checks if they're allowed.

Not to pick on the Privacy Budget specifically, but I worry that proposals
like this don't really _get_ that larger principle yet -- that it's still
something we haven't quite internalized in the privacy community. If a site
exceeds the privacy budget, it shouldn't get told. It should just get
misinformation.

It's like autoplay permissions. Autoplay permissions on web audio are awful,
because you can just keep trying things until you get around the restriction.
What would be better is to auto-mute the tab, because that would be completely
invisible to code running on the page.

~~~
HorstG
Agreed, for things like autoplay. But dual-use features like feature detection
that also enables fingerprinting cannot be replaced by randomized
misinformation because that would really randomly break legitimate stuff.

The only privacy-conscious way would be no feature-detection at all or a very
coarse-grained approach like "I support HTML2021".

~~~
danShumway
Sort of.

You can't lie that you do support something, but you can lie in the opposite
direction. And for sites that legitimately need that feature to function, you
don't get much benefit -- if a site genuinely needs Chrome's Native File
access, saying that you don't have it just means the site won't work.

But there's a grey area past that, which is that sites that don't need a
feature, but are just using it to fingerprint, can have that feature broken
without repercussion. If a news site requests Native File Access, and I say "I
don't support that", then whatever.

This puts engineers in an interesting position. You can't just break your site
whenever the full range of features that every browser supports aren't
available, because:

A) You want to support a wide range of browsers, and if your news site doesn't
work with multiple browsers you're just losing potential market.

B) A fingerprinting metric that just rejects every browser that doesn't
support _everything_ is not an effective fingerprinter. At that point, we
basically have the coarse-grained approach you're talking about.

The problem with this approach is that when a site requests capabilities, you
need some way to figure out whether or not they're actually required, and
whether or not you can lie about them. Permission prompts are... well, there
are probably UXs that work, but most of them are also probably too annoying to
use. In practice, I suspect that manually tagging sites is not an
insurmountable task -- adblockers already kind of do this today.

One thing to push for with Client Hints is that it really, really needs to be
an API that's interceptable and controllable by web extensions.

The same thing is true of fonts today -- if you lie and say you already have a
font that you don't, congratulations, your text rendering is broken. But you
can still lie about _not_ having fonts, and you can still standardize your
installed fonts to a smaller subset to make your browser less unique.

~~~
anoncake
And all of this incidental complexity wouldn't exist if we had a sane
document-based web that doesn't allow webmasters to run scripts in the
browser.

~~~
danShumway
I've written about this in the past, but we do really need at least one user-
accessible, general computing environment that protects against these kinds of
privacy attacks. It doesn't need to be the web, but I don't know of a better,
currently-usable platform.

I often hear proposals that the web should just be for static documents, and
I'm fine with that, but very rarely are those proposals followed up with
alternative ways for ordinary people to run untrusted code. The assumption
seems to be that if the web didn't exist, users would instead be responsibly
vetting every binary on their computer, rather than downloading them en masse
from dozens of sources. And just looking at the smartphone app market, I don't
think that assumption is true.

Again, not to say that a better alternative platform _couldn't_ exist, but
who's working on it? The native desktop platforms I see almost all do a worse
job than the web at protecting against fingerprinting. It's almost universally
better for privacy to use Facebook in a browser instead of downloading their
native phone app.

~~~
anoncake
I don't think sandboxing is the solution. It gets in the way of functionality
and you still have to trust the developer: They can abuse the permissions that
their program legitimately needs and they can use dark patterns. You should
never run untrusted code so there is no need for a platform that facilitates
it.

What we need are standardized protocols, strict consumer protection laws and
trustworthy software repositories so users can get software they can trust
without having to vet it.

> Again, not to say that a better alternative platform couldn't exist, but
> who's working on it?

~Nobody is working on such a platform because the app web exists. Remove it
and there is a lot more incentive to create a replacement. Regression to the
mean alone practically guarantees that it will be superior.

~~~
danShumway
> You should never run untrusted code so there is no need for a platform that
> facilitates it.

I don't believe this is a practical philosophy given the way that ordinary
people use both the web and apps today. If you can convince me that you have a
plan to make everyone en masse stop installing the Facebook App, I might be
persuaded to change my mind. But I regard the advice, "don't run untrusted
code" to be a bit like saying, "stick to abstinence to avoid pregnancy". The
advice isn't technically wrong, we just have good evidence that it doesn't
work for society in general.

I also think that "untrusted" is being used as a really broad catch-all here.
Trust isn't binary. I trust a calculator app to give me the correct answer to
a multiplication problem, I don't trust it to store my banking information.

What sandboxes do is allow us to set up boundaries for apps that we
partially trust. Of course, sandboxes don't remove the need for consumers to
be taught not to blindly trust everything. But they're not designed to solve
that problem, just to make it easier to manage. There is no way to remove the
need to educate consumers; trust is too broad of a topic to divide every app
into a single "trusted" or "untrusted" bucket. So what sandboxes do is embrace
that grey area rather than ignore it.

In the physical world, if I'm securing a bank or an organization, there are
going to be people I distrust so much that they can't even enter the premises.
There are going to be people I trust enough to walk around while supervised.
There are going to be people I trust enough to be unsupervised, but not enough
to give them keys to my vault or server room. And finally there will be a
minuscule number of people I trust to have full access to everything.

Imagine if instead, our policy was, "you shouldn't let untrustworthy people
into your building in the first place, so security measures past that point
are useless." Would we be able to build a database of 'trustworthy' people who
could have access to the vaults of every bank they walked into?

When we get rid of sandboxes, we're still using a permissions system -- all
we've done is made that permissions system less granular and harder to
moderate, because we've removed our ability to say nuanced things like, "I
trust Facebook to connect to the Internet, but not to read my contacts."

------
saagarjha
Safari tried to do this but had to walk back some of their changes:
[https://bugs.webkit.org/show_bug.cgi?id=180365](https://bugs.webkit.org/show_bug.cgi?id=180365).
Currently, it truthfully reports the marketing OS version and Safari version,
while lying about the bundled version of WebKit. For example, my user agent is
"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_3) AppleWebKit/605.1.15 (KHTML,
like Gecko) Version/13.1 Safari/605.1.15", but I am running Safari 13.1,
WebKit 15609.1.13.4 on macOS Catalina 10.15.3.

~~~
dangerface
Safari is a broken mess, I have to detect and fix it more than any other
browser, makes sense they would make it even more broken.

~~~
realusername
I agree with that, I don't know if it's a lack of manpower on Safari and
Safari mobile or what but it's severely far behind the other browsers.

~~~
dmitriid
Chrome published a draft of User-Agent Client Hints only a month ago. They are
already moving ahead to deprecate the User-Agent string and saying "Where
feature detection fails developers, UA Client Hints are the right path
forward" (a thing that's not even a standard yet).

Safari has been very wary of the breakneck speed at which Chrome imposes
Google's view of standards onto the world. The WebKit team has even walked away
from proposals when concerns were not addressed (sadly, I can't find the link
in GitHub issues right now. Edit: [1]).

And yes, lack of manpower is another pressing concern.

[1] [https://github.com/WICG/construct-
stylesheets/issues/45#issu...](https://github.com/WICG/construct-
stylesheets/issues/45#issuecomment-521096423)

"Now I consider this issue as an absolute show stopper. I don't think we want
to ever implement this feature in WebKit unless this issue is resolved."

~~~
ksec
And that is what I like about Safari: it doesn't blindly follow every single
proposal from Google. Many of them are not really well thought out.

------
LyndsySimon
For context, as it’s not immediately obvious at the link: Blink is
Chrome/Chromium’s rendering engine.

~~~
ShinTakuya
Thanks, I assumed so, but it's useful to have this confirmation without
looking it up. Honestly this is a little surprising to me. Sites pulling shit
matching UA usually works out in Chrome's favour so it's nice to see them
throwing the minor web browsers an olive branch.

~~~
close04
Developers will still mostly target the most popular browser(s), particularly
Chrome. It would be great if browsers like Firefox managed to implement a
"feature detection spoof" that you can enable to still present yourself as
Chrome (or other browsers) regardless of the actual features requested. More
or less like changing the UA does now.

~~~
GoblinSlayer
That's Go Faster addon.

~~~
close04
This looks like it's intended to fix specific issues, not for fingerprint
blocking. I was thinking more like being able to present yourself as generic
Chrome on Windows 10 or Firefox on Android if you chose to, even with the risk
of breaking the site.

~~~
GoblinSlayer
Firefox resists fingerprinting by default, and go faster addon provides
mitigations for sites that break in this mode, like providing a specific UA
string.

------
shortercode
This seems particularly unhelpful for bug reporting systems. Being able to see
that a specific bug only happens on Chromium 78 for Linux can save a lot of
time and frustration. In theory the user can fill in the details on a bug
report themselves, but it's hard enough to persuade them to copy paste the
crash message we generate for them at the moment.

~~~
penagwin
There should be better ways to identify the browser than the user agent. The
user agent is a legacy artifact; have you seen how many "compatibility" parts
are in there? Like, why does the Chrome user agent say "Mozilla" in it?

~~~
samastur
Yes, but do you know any?

UA is the only thing I get in server logs to discover how much attention I
should pay to support particular browsers.

~~~
penagwin
There's not really a good server side way to do it unfortunately. All the
current alternatives to my knowledge require js.

It depends on your application, but the only real alternative is to use a JS
analytics library to collect the statistics. The good news is that they're
more accurate, can give you more information, and there are plenty of
self-hosted options like Matomo.

But it is yet another thing the client has to load, and adblockers will likely
skew your metrics.

------
anilshanbhag
The trigger for this was most likely the Brave/Vivaldi browsers. Brave used to
append 'Brave/X.y' to the end of the user agent. WhatsApp didn't work with
that user agent, so now Brave uses Google Chrome's user agent.

In my app Dictanote ([https://dictanote.co](https://dictanote.co)) - which
uses Chrome's speech-to-text API, I have no way to distinguish Brave/Vivaldi,
and the user doesn't understand why it's not working :/

~~~
DarkCrusader2
Shouldn't you be using feature detection[1] here anyway? Making browsers
unsupported in a blanket fashion via user-agent is the exact thing people
should stop doing.

[1] [https://developers.google.com/web/updates/2014/01/Web-
apps-t...](https://developers.google.com/web/updates/2014/01/Web-apps-that-
talk-Introduction-to-the-Speech-Synthesis-API#feature_detection)
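
The usual feature-detection pattern for this API looks something like the following (written against a passed-in global so the logic can be exercised outside a browser; `webkitSpeechRecognition` is the prefixed name Chrome ships):

```javascript
// Returns the speech-recognition constructor if present, else null.
function speechRecognitionCtor(win) {
  return win.SpeechRecognition || win.webkitSpeechRecognition || null;
}
```

As the reply below explains, though, this check also passes in Brave and Vivaldi: the constructor exists, but recognition fails as if the network were down, so feature detection alone can't distinguish the cases.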

~~~
anilshanbhag
The webkitSpeechRecognition object shows up in both browsers. When you start
recognition, it acts like you are not connected to the network. "Not connected
to the network" is a common error, so you cannot pin it on Brave specifically.

This specific API only works in Google Chrome unfortunately, so we need to
stop people from trying in another browser and getting frustrated as to why
it's not working.

~~~
ss3000
We should of course default to feature detection whenever possible, but non-
standard behavior like this in certain browsers is exactly why feature
detection alone is never going to cover 100% of current use cases for UA
detection.

To add to OP's point, this thing in Safari also comes to mind as an example of
something that isn't easy to detect and address outside of UA detection:
[https://github.com/vitr/safari-cookie-in-
iframe](https://github.com/vitr/safari-cookie-in-iframe)

Deprecating UA Strings and moving towards UA Client Hints seems like a move in
the right direction though.

~~~
GoblinSlayer
3rd party cookies can be assumed to be always blocked, no need to detect
anything.

~~~
ss3000
Some pages are intended to be embedded in iframes (which have their own
security context isolated from the embedding page), and happen to use cookies
for authentication. Safari not allowing cookies from third party domains in
iframes is the issue here.

Assuming these iframes will never work means degrading the experience for all
users, when only users on Safari are actually affected. Detecting the UA and
branching based on that is a much more pragmatic solution.

------
devit
How about doing this but also NOT providing the client hints API and NOT
providing any way for JavaScript to explicitly ask for OS and browser product
name and version?

While browser and JavaScript engines are likely to continue being detectable
due to behavior and performance for the foreseeable future, it's probably
possible to make the OS, browser frontend and exact engine version
undetectable.

~~~
marcusarmstrong
Browser vendor/version is hugely important information for triaging issues and
implementing progressive enhancement approaches on large scale websites.

~~~
minitech
Triaging issues, sure. Progressive enhancement? If you’re parsing User-Agent
to implement that, you’re doing it wrong. (Feature detection is the correct
approach, when necessary.)

~~~
marcusarmstrong
Can’t feature detect during SSR, so making an educated guess and falling back
gracefully (when possible—many times it just isn’t, as with ES6 syntax) is
important to get initial page load right by default.

~~~
zamadatix
Because so many people abused the user agent header for things that could
gracefully fall back (or they just made invalid assumptions from it), it
became an unreliable indicator for the times you actually want to send shimmed
content on first load.

------
qwerty456127
> And finally, starting fresh will enable us to drop a lot of the legacy
> baggage that the UA string carries (“Mozilla/5.0”, “like Gecko”, “like
> KHTML”, etc) going forward.

Why not drop just that and leave nothing but the exact HTML and JavaScript
engine names and versions like "Blink a.b, V8 x.y"?

And what about robots? Will the Googlebot give the UA up as well?

~~~
richthegeek
There are plenty of unmaintained sites out there that do stupid things in
reaction to UA strings, and one of the stupid things is using a regex that
expects specific strings to exist.

Using a completely new string format in the same field (or removing it
entirely) breaks a lot of sites that'll never be fixed.

Freezing it prevents this. And if we're freezing and creating a new system
then why not go for something queryable without all the baggage?

~~~
userbinator
I tried using _no_ UA header at all for a period of a few weeks, many years
ago when "appsites" weren't as common, and yet a lot of sites failed to load
mysteriously, showed odd server errors, or even banned my IP for being a bot.

I expect no UA header to be even less usable now that sites are more paranoid
and app-ified, so instead I use a random one. That still confuses some
sites...

~~~
Kaiyou
A random one makes you unique and thus identifiable across sites.

~~~
userbinator
I meant random as in "randomly picked from list of common UAs", not as in
"randomly generated GUID".

------
danShumway
Requires some complicated conversations about how client hints should work in
a privacy-respecting way, but this is still unquestionably the right move.

The arguments we're seeing are around what the replacement should be (or if
there should be a replacement at all). But either of those scenarios are still
better than keeping user agents as they are.

------
dbetteridge
While the intent is good, didn't we recently see from the roll-out of
secure/strict same-site cookies that feature detection isn't mature enough? Or
does that not apply here?

Getting rid of user agent strings is great, as long as we get a better way to
determine browser capabilities that doesn't require some kind of special
feature checking library...

~~~
rowan_m
For older browsers, the UA string remains - so that's still viable for
compatibility issues. [https://wicg.github.io/ua-client-
hints/](https://wicg.github.io/ua-client-hints/) will provide the cleaner,
opt-in approach in the future.

~~~
1_player
What's cleaner about this new approach? I can't see the point of it.

It's exactly the same as the User-Agent header we had, but worse.

UA was used for tracking? With this new standard, just ask the user agent to
include all details in its Accept-CH header.

UA was used for feature detection? People will use this new standard to do
feature detection.

And it's worse because there are legitimate uses of UA sniffing, and JS won't
have access to it anymore - TFA wants to deprecate navigator.userAgent, so
only the webserver would have access to user agent details? Why?

~~~
rowan_m
> With this new standard, just ask the user agent to include all details in
> its Accept-CH header.

That becomes an explicit choice by the site to request more information, it's
up to the client/browser how it responds to that. Fewer bits of information
are exposed by default.

> JS won't have access to it anymore - TFA wants to deprecate
> navigator.userAgent, so only the webserver would have access to user agent
> details? Why?

I should have linked to the top-level repo with the explainer
([https://github.com/WICG/ua-client-hints](https://github.com/WICG/ua-client-
hints)) as it's not immediately clear from the spec, but access to the hint
values is provided via getUserAgent().

------
at_a_remove
I have mixed feelings on this. On the one hand, I knew some people above me
who are still stuck on user-agent sniffing as the "way to go."

On the other hand, I did use it in combination with other techniques to
provide some real information. At one point I had a hard-to-even-track kind of
problem, with difficulty even looking for common trends. I created a help form
for the user. Please fill out when this occurred, where you were (this
mattered), what OS were you running, what browser, and so forth. What I found
using the UA (and backed by other tricks) was that many users, young users
enrolled in college, were unaware not only of their browser or operating
system version but of the OS itself.

Its utility began its decline when the browser makers began copying one
another wholesale. It's kind of a shame. Capability-testing _is_ better but
for some troubleshooting applications, having something particular to pin your
problems to is handy.

~~~
miohtama
Please see UA Client Hints linked in the proposal

[https://wicg.github.io/ua-client-hints/](https://wicg.github.io/ua-client-
hints/)

You should still get this information, but in a more privacy-oriented manner.

~~~
earthboundkid
I know no one RTFAs (I usually don't even RTFA), but in this case, it's really
egregious. The whole discussion is going back and forth about UA sniffing with
no one mentioning that the damn proposal is entirely based on adding UA Client
Hints as a replacement. It makes this whole multi-hundred comment discussion
pretty much valueless. Please, other commenters: read this link before
discussing whether you're caremad about "like Gecko" in the UA string!

~~~
mkl
In this case, it seems like you didn't read the discussion either. Many people
have been talking about UA Client Hints here, before you commented.

~~~
earthboundkid
This was the nearest top-level comment I saw mention it. I don't think it's
crazy to collapse a huge thread where the first several levels of comments
aren't relevant.

~~~
mkl
Sure, but you could have done a find before accusing all other commenters of
an "egregious" failure to read the article.

------
leeoniya
[https://wicg.github.io/ua-client-hints/](https://wicg.github.io/ua-client-
hints/)

The proposed implementation leaves me questioning how useful the CH headers
are for initial impressions of a web property if only the browser and version
are sent by default, and more info is only provided on follow-up requests,
assuming the user agrees to some permission popup they don't understand. It'll
be the "this site uses cookies" nonsense/nuisance all over again, and it'll
finally push end users to install the native Reddit app without those constant
annoyances.

I have quite a few UX enhancements that need to know or guesstimate the screen
size, whether the device is touch capable, and the OS (especially for form
input styling). I don't see how it is possible to deliver a good experience in
the brave new CH world without an additional delay, a fetch request, and some
annoying permission popup.

~~~
HorstG
There is a CSS media query for exactly your use case. No need for sniffing,
round trips or any server-side activity at all.

~~~
leeoniya
A media query will not tell me what OS it is so I can infer the metrics of
native fonts and form controls. I would have to use JS and create sentinel
elements to make the measurements, which would still be a form of slower,
shittier sniffing.

Also, a media query or any other client-side method prevents me from
delivering exactly the final content that's necessary, and forces additional
reflow-inducing JS to get the end result.

------
LeonM
Good.

The only legitimate use case I can think of is exception tracking, it is
valuable to know which browser caused the exception.

Beyond that, a website should never, ever rely on the UA for anything.

~~~
Tade0
Last time the UA string was useful to me was to show my SO's uncle that iOS
Edge is just a "repackaged and watered down version of Safari".

~~~
saagarjha
My user agent is "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_3)
AppleWebKit/605.1.15 (KHTML, like Gecko) Version/13.1 Safari/605.1.15".
Clearly I'm running a repackaged Firefox browser?

------
kerkeslager
One problem with User-Agent strings is that they actually don't work for their
intended use case. Nothing is stopping user agents from lying about who they
are to the server, and many user agents actively do this.

I've been on teams a few times that tried to use UA strings to try to serve up
features per-browser. Trying to do this directly falls apart quickly. There
are some libraries that handle the most common problems, but that really only
delays the inevitable: eventually some critical user is using a browser that
you are detecting wrong, and you end up lost in endless hard-coded edge cases.
Using UA strings to determine functionality isn't an effective strategy.
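
The classic failure mode is substring matching, since nearly every browser
embeds its competitors' tokens. A quick illustration (the Edge UA string is
real; the checks are deliberately naive):

```javascript
// Chromium-based Edge embeds "Chrome" and "Safari" tokens in its UA.
const edgeUA =
  "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 " +
  "(KHTML, like Gecko) Chrome/79.0.3945.130 Safari/537.36 Edg/79.0.309.71";

// Naive check: misidentifies Edge (and Opera, Brave, ...) as Chrome.
const naiveIsChrome = (ua) => ua.includes("Chrome");

// A less-wrong check must exclude every known impostor, and that
// exclusion list is never complete: the endless edge cases above.
const isChrome = (ua) =>
  ua.includes("Chrome/") && !/\b(Edg|OPR|SamsungBrowser)\b/.test(ua);

console.log(naiveIsChrome(edgeUA)); // true (wrong)
console.log(isChrome(edgeUA));      // false
```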

------
sheerun
Client Hints won't work with SSR because they are available only on second
request... I hope they fix it before deprecating.

------
jamescun
Feature detection isn't the only use case User Agent strings serve; two I've
seen frequently are:

* Exception Tracking \- The User Agent string is usually attached to exceptions to aid in reproduction.

* Outdated Client Detection \- Primarily in internal dashboards in BYOD environments, I've seen the server display an error when a known outdated/insecure browser connects.

~~~
kalleboo
When Safari froze their UA, another big one that came up was browser bugs.
Specific versions of browsers on specific platforms have bugs, and you need to
be able to tell your user to upgrade.

~~~
clarry
Please use the latest supported browser.

There, I just told you to upgrade without sniffing UA.

~~~
kalleboo
No user is going to see that sentence. They're going to read right past it,
click some button, it doesn't work, "this site sucks" and go somewhere else.

~~~
clarry
I'll take that over a feature that just gets abused for all kinds of things
and will create breakage which makes users just go elsewhere because it's
broken.

It is ridiculous for every website to be babysitting its users' software,
when we already have operating systems that keep software up to date, package
managers that keep software up to date, and software that keeps itself up to
date.

If I need to make a _really important website_ that can't lose users like
that, I just need to make damn sure it works with any relevant browser
_without any UA sniffing hacks that will bite back down the line._

If I'm making some bleeding edge crap that really needs a bleeding edge
browser, I'm sure I can find a place for that sentence where _most users_ will
find it, and if I lose a few users, it probably isn't that big of a deal.
Chances are I'd lose more users anyway due to many not wanting or being able
to update for any number of reasons.

------
chrisweekly
Must-read bit of webdev lore:

[https://webaim.org/blog/user-agent-string-
history/](https://webaim.org/blog/user-agent-string-history/)

------
jonarnes
> The opt-in based mechanism of Client Hints currently suffers from the fact
> that on the very-first view, the browser have not yet received the opt-in.
> That means that requiring specific UA information (e.g. device model,
> platform) on the very-first navigation request may incur a delay. We are
> exploring options to enable the opt-in to happen further down the stack to
> avoid that delay.

I'm surprised Google is going ahead with this without a concrete plan to
handle the first request to a page. I fear that will become an ugly mess of
redirects, JS hacks and broken pages. I think Google is underestimating how
widespread server-side device detection using the User-Agent is...

------
arendtio
As a KDE user, I am so happy every time I see 'KHTML' in one of those User-
Agent strings; it would be horrible if Konqueror's legacy were removed one
day ;-)

So it is good that they don't want to remove it, just freeze it :D

------
Ayesh
This can have a huge impact.

\- Server log analytics will not be able to provide OS/Browser stats.

\- Default download pages (Detect OS automatically and redirect to the
platform-specific download pages) would not work.

\- "m." sites: there are still some sites that sniff the UA string and
redirect users to a mobile site. These look for patterns like "Mobile" in the
UA and, contrary to what the post argues, don't require a lot of updates.

~~~
onion2k
I think you'll still be able to tell what OS and browser it is, just not the
specific major or minor version because they'll be 85.0.0 (if that's the
version when this happens) forever. All of the examples you've put there will
still be achievable with the out-of-date UA string.
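
To make that concrete, here's a sketch against a frozen desktop UA (the exact
frozen string is an assumption based on the intent's description; platform
and browser tokens remain, only the version digits are pinned):

```javascript
// A hypothetical frozen UA: note the pinned "85.0.0.0" version.
const frozenUA =
  "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 " +
  "(KHTML, like Gecko) Chrome/85.0.0.0 Safari/537.36";

// Coarse OS and browser detection keeps working on the frozen string...
const os = /Windows/.test(frozenUA) ? "Windows"
  : /Mac OS X/.test(frozenUA) ? "macOS"
  : /Linux/.test(frozenUA) ? "Linux"
  : "unknown";
const browser = /Edg\//.test(frozenUA) ? "Edge"
  : /Chrome\//.test(frozenUA) ? "Chrome"
  : "other";

console.log(os, browser); // Windows Chrome

// ...but the version digits are meaningless from now on.
const version = frozenUA.match(/Chrome\/([\d.]+)/)[1];
console.log(version); // 85.0.0.0, forever
```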

------
toast0
I may be cynical, but the Accept-CH proposal seems to me like something
proposed for appearances only, not with any intent to actually implement.

It just doesn't make sense from the perspective of HTTP. The only existing
flow where an HTTP server passes context to the client and the client makes a
new request based on that is HTTP authentication; but that's not intended to
be optional --- generally servers don't provide good enough content with HTTP
401 and expect clients to display it in case they don't feel like
authenticating. In this proposed flow, the server would need to, at least,
start sending the content on the original request, and then send it again in
case the client actually sends a new request.

If this is an HTTPS only feature, the server to client request could be moved
up into the TLS handshake; the client could include an empty Accept-CH
extension, and if present, the server could include an Accept-CH extension
with the fields it wants in HTTP requests. Then the client would send the
fields, or not, according to user preference. Zero added round trips.

Otherwise, it might be better if the server were able to indicate to clients
what its matching rules were. If the initial response returns best case data,
and a list of exception rules, the client could send a second request only if
the rules matched. Then you could say if it's a specific build of Chrome where
important feature is present but broken[1], or Chrome version before X, or
Chrome on mobile if ARMv5 or whatever, please disregard the content and
refetch with more details provided to the server, so it could provide an
appropriate page.

[1] Let's say that particular version crashes when the feature is attempted to
be used; it's plausible, and not really detectable from Javascript.

------
tjelen
And at the same time they recommend using UAs to detect agents incompatible
with SameSite=None cookies (see [https://www.chromium.org/updates/same-
site/incompatible-clie...](https://www.chromium.org/updates/same-
site/incompatible-clients)), including certain Chrome versions.
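
For context, the workaround that page recommends is itself UA sniffing. A
simplified sketch of one of its checks (the 51-66 range comes from the
Chromium page; its real pseudocode also covers UC Browser and Safari on
iOS 12 / macOS 10.14):

```javascript
// Chrome 51-66 rejects cookies that carry SameSite=None, so servers
// are told to sniff the version and omit the attribute for them.
function dropsSameSiteNone(ua) {
  const m = ua.match(/Chrom(?:e|ium)\/(\d+)/);
  if (!m) return false; // non-Chromium checks omitted in this sketch
  const major = parseInt(m[1], 10);
  return major >= 51 && major <= 66;
}

console.log(dropsSameSiteNone("Mozilla/5.0 ... Chrome/60.0.3112.113 Safari/537.36")); // true
console.log(dropsSameSiteNone("Mozilla/5.0 ... Chrome/80.0.3987.122 Safari/537.36")); // false
```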

------
tracker1
I'm not sure I get the point of specifically requesting it... there should
also be a DOM API to get this information, spelled out; not sure where that
is...

Let alone requiring a round trip for the server to even see the information...
does that mean a 302 to itself to get the initial data? What about a POST? The
first render is where this kind of crap is most useful for server-side
rendering.

The fact is, in general, it would be nice to get the actual browser/version
and os/version, and maybe have an engines list for cross-compatibility
detection as a fallback.

When you come across a bug that only affects a specific browser/version (or
other browsers using that engine as a base), it's a pain to deal with...
cleaning it up when fixed is one issue, or worse, when you target a specific
version and the next release doesn't fix the issue, you're scrambling to
update your range for the fix.

It isn't nearly as bad as the late 90's, but it's still pretty bad.

------
CodeWriter23
> It provides the information in small increments, so servers are only exposed
> to the information they need and request, rather than being exposed to the
> full gamut of the UA string even if they are just trying to figure out one
> detail about the browser.

So, an extra round trip if your app needs to sense which UA it's talking to.

------
peterwwillis
It's going to be fun to watch all the product teams for all the web products
stop all their active development so they can push through emergency fixes
when this breaks their products. Just like Chrome 80's breaking of an
established cookie use pattern is doing.

------
ksec
Google's _good_ intentions were to support _privacy_ , and

>the web should be is an _abundant source of compatibility issues. in
particular for minority browsers, resulting in browsers lying about themselves
(generally or to specific sites), and sites (including Google properties)
being broken in some browsers for no good reason._

Doesn't that mean every web feature would be Google's way or the highway?
Google, using Blink and Chrome, would be dictating the web standards. And if
something is incompatible, it would now officially be the browser vendor's
fault and not the designer's.

~~~
trilliumbaker
> Google's Good Intention were to support privacy

I find this intent very difficult to believe. Chrome's privacy
policy[0] already lists a ton of information that Chrome sends to Google.

I am cynical and simply do not trust Google. I see this as a move for control
rather than privacy.

0\.
[https://www.google.com/chrome/privacy/](https://www.google.com/chrome/privacy/)

------
JohnFen
Removing the user agent string is something that I've wished for for decades.
While we're at it, the browser should not reveal anything else about itself
or our machines at all unless we consent to it.

------
EGreg
This is a terrible idea unless the better mechanism includes a way to
determine at least basic things like whether this is a mobile browser!
Determined web developers will still be able to detect the browser using
Javascript and send the info to the server. And meanwhile, we won’t be able to
send resources optimized for e.g. mobile browsers vs desktop ones. Now we have
to ship lots of crap from the server and hope JS is enabled to load the
minimum. The whole deprecation movement has to stop breaking the web for
activist reasons.

~~~
zzo38computer
The resource optimization is mostly needed only because the stuff is too
complicated already. Simplify the document and then it should work just as
well either way.

------
superkuh
I can see why the commercial web interests would want to get rid of user-agent
strings. But to non-corporate persons, at least to techies, they're very
useful and fun things. It's just text, so it's easy to deal with. Adopting
some complex JavaScript standard in its place is no substitute.

------
jmurda
There is a live demo of client hints at [https://browserleaks.com/client-
hints](https://browserleaks.com/client-hints). I don't see how this should
reduce fingerprinting if all sites start sending Accept-CH headers by default.

------
dochtman
Quite happy that the market leader is taking this step! Hopefully this will
reduce the problems that come from sites lacking proper feature detection.

I was surprised that the post doesn't contain any example user agent strings
for a sample of how they are expected to look starting from the different
milestones.

------
close04
> It provides the required information only when the server requests it

In the interest of fingerprinting a server would request this every time.

> It provides the information in small increments, so servers are only exposed
> to the information they need and request

Then the server would need and request the most comprehensive list possible in
order to fingerprint someone with better granularity than a UA string could.
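
And that "most comprehensive" request is a single header line; the hint names
here follow the draft and are illustrative:

    Accept-CH: Sec-CH-UA, Sec-CH-UA-Full-Version, Sec-CH-UA-Platform,
        Sec-CH-UA-Platform-Version, Sec-CH-UA-Arch, Sec-CH-UA-Model,
        Sec-CH-UA-Mobile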

I'm not against this, as I appreciate the value of this kind of information
for developers; it would be done one way or another. But why is this billed as
mainly a privacy move? Nothing suggests it intrinsically offers better privacy
against a website configured to fingerprint you. It actually looks like it
gives even more granular info, over which the user has less control than they
used to have with the UA.

------
sergeykish
How much of the web depends on User Agent header?

Maybe it can be removed altogether, with a small whitelist of exceptions.

    
    
        Firefox
        about:config
        general.useragent.override, String, leave empty
    

Looks fine so far

------
TheGoddessInari
This is rich coming from Google, which this week started blocking access to
Gmail based on a client's user agent!

------
smashah
Is this going to make spoofing the UA harder or redundant? If so, it's bad
news for a lot of projects.

------
darrinm
How will this affect caniuse.com? I use it every day.

------
josteink
Ok. So how do we do OS detection then?

How do I know which (binary) download to offer my users?

Edit: How do I provide reasonable defaults when the user’s OS actually
matters?

~~~
zzzcpan
Don't. That's an anti-pattern in UX. People download binaries for different
OSes all the time, so list links for all OSes you support.

~~~
josteink
Giving people a reasonable default is bad UX?

That’s a load of horseshit if I’ve ever heard it.

~~~
abjKT26nO8
Websites presenting me with a big button to download a "reasonable" default
and hiding everything else behind a small link that I have to go hunting for
is really annoying. There is nothing reasonable about it. Don't think that you
know better what your users want than the users themselves.

~~~
josteink
Optimizing for the 99% use-case is fairly normal _and_ reasonable.

iOS users will almost always install apps via the AppStore. Most Windows-users
are probably not interested in a DMG. Are you really going to argue against
that?

I agree that _taking away_ options based on OS-detection is a seriously nasty
UX anti-pattern though.

~~~
abjKT26nO8
_> iOS users will almost always install apps via the AppStore. Most Windows-
users are probably not interested in a DMG. Are you really going to argue
against that?_

It may be true in the case of iOS and Android, because they are so locked
down. However, on more powerful platforms like Windows, Mac and Linux it
isn't. I may want to run it in a VM, or not install it but place it somewhere
on a shared drive, or do anything else that a non-handicapped OS is capable of
facilitating, and many of these things mean I will want a binary not meant to
be run by my native OS. Sometimes one of my devices will break, and I want to
use another one to download something that will help me fix the issue. But now
I'm going to have to go full Sherlock Holmes on a website that thinks it knows
better what I'm looking for.

~~~
AgloeDreams
In fairness, highlighting the right button for your OS and showing an 'other
downloads' button is really a 'you' problem that probably affects less than
half of 1% of users. Almost all sites also show an 'other OS downloads'
button. But this is all meaningless since, as shown above, the UA will be
replaced by a client hint property.

------
ronancremin
The irony of Google purporting to protect users' privacy while at the same
time:

\- Chrome is still shipping with 3rd party cookies turned on by default
(Safari and Firefox have them off, by default)

\- Chrome usage stats are sent to Google including button clicks. This is
admitted in the Chrome privacy policy.

\- Chrome on mobile _automatically_ shares your location with your default
search engine i.e. Google

\- Chrome sort of forces a login …which shares browser and user details
history with Google

\- Google redirects logins through the youtube.com domain to enable them to
set a cookie for YouTube as well as Gmail or whatever, every time you login.
Naughty stuff.

So the stated reason for the change doesn't appear to make sense, suggesting
that something else is going on.

It amazes me that more people aren't calling Google out on this.

~~~
jasonvorhe
> \- Chrome sort of forces a login …which shares browser and user details
> history with Google

This doesn't get any more true by repeating it over and over. If you log in
to Google, it'll show up in Chrome next to the address bar, but it doesn't
enable any syncing to Google servers. That's a separate step and it requires
opt-in. You can also use Chrome without logging in to any Google services.
You can also use Chrome without logging in to any Google services.

I don't get why privacy advocates, who often have a point when talking about
Google, have to rely on FUD.

~~~
techntoke
Because most of the negative attention that Chromium receives is FUD by people
that rely on feelings and not facts.

~~~
allovernow
Invasion of privacy is a valid and serious concern. The fact is that Google is
collecting sensitive information semi-consensually and semi-transparently and
arguably shouldn't be.

------
dangerface
This is a backwards step. I get that the user agent reveals things it has no
right to, like the OS, but not all browsers are made equal; I need to know
what a browser is capable of.

~~~
AgloeDreams
They are adding a property for this. A browser will expose a bunch of 'I
support this' flags and client hints rather than a browser version.
