
Chrome phasing out support for User-Agent - oftenwrong
https://www.infoq.com/news/2020/03/chrome-phasing-user-agent/
======
jorams
The weird thing about this is that the only company I've seen doing
problematic user-agent handling in recent years is Google themselves. They
have released several products as Chrome-only, which then turned out to work
fine in every other browser if they just pretended to be Chrome through the
user agent. Same with their search pages, which on mobile were very bad in
every non-Chrome browser purely based on user agent sniffing.

~~~
blntechie
Every single Google product is slower on Firefox and it’s hard to not call
this malice and artificial. Many people check out Gmail and GMaps on Firefox
and go back to Chrome because of their clunkiness on Firefox.

~~~
ricktdotorg
> malice and artificial

are you actually asserting that Google is purposefully adding code/"tweaking"
their web apps to run slowly on browsers other than Chrome?

do you have any evidence at all for this other than anecdotes about people
experiencing Google web app clunkiness on Firefox?

~~~
izolate
It could also be a passive, malicious de-prioritization of bugfixes for
Firefox that would cause the same effect. It seems like this would be a more
likely scenario.

~~~
sudosysgen
I would believe that if changing the user agent or toggling some flags didn't
fix it.

------
vxNsr
> _[https://github.com/WICG/ua-client-hints](https://github.com/WICG/ua-client-hints)_

I don't really understand how this will result in any real difference in
privacy or homogeneity of the web. Realistically every browser that implements
this is gonna offer up all the info the server asks for because asking the
user each time is terrible UX.

Additionally this will allow Google to further segment out any browser that
doesn't implement this, because they'll ask for it, get `null` back and respond
with "sorry, we don't support your browser". Only now you can't just change
your UA string and keep going; now you actually need to change your browser.

And if other browsers do decide to implement it, they'll just lie and claim to
be Chrome to make sure sites give them the best experience... so we're back to
where we started.

~~~
untog
> I don't really understand how this will result in any real difference in
> privacy or homogeneity of the web.

It does a little: sites don't passively receive this information all the time,
instead they have to actively ask for it. And browsers can say no, much like
they can with blocking third party cookies.

In any case I'm not sure privacy is the ultimate goal here: it's intended to
replace the awful user agent sniffing people currently have to do with a
sensible system where you query for what you actually want, rather than infer
it from what's available.
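Roughly, the proposed flow is: the server opts in with an `Accept-CH` response
header, and the browser can then send the requested `Sec-CH-UA-*` hints on
later requests. A minimal sketch of the server side, assuming Express and the
header names from the current draft:

    import express from "express";

    const app = express();

    app.get("/", (req, res) => {
      // Opt in: ask the browser to send these hints on subsequent requests.
      res.set("Accept-CH", "Sec-CH-UA-Platform, Sec-CH-UA-Arch");

      // Low-entropy brand/major-version info may come by default; the
      // higher-entropy hints only arrive after the opt-in above.
      const brands = req.get("Sec-CH-UA");            // e.g. '"Chrome"; v="74"'
      const platform = req.get("Sec-CH-UA-Platform"); // e.g. '"macOS"'

      res.send(`brands: ${brands ?? "n/a"}, platform: ${platform ?? "n/a"}`);
    });

    app.listen(3000);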

~~~
jefftk
Switching it from passive to active means you can count it towards
[https://github.com/bslassey/privacy-budget](https://github.com/bslassey/privacy-budget). Yes, sites can ask for
all sorts of things, but if they ask for enough that they could plausibly be
fingerprinting you then they start seeing their requests denied.
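(For the curious, the budget idea boils down to something like this toy model.
The costs, limit, and mechanism here are invented purely to illustrate the
idea, not how Chrome implements it:)

    // Toy model: each origin gets a budget of "identifying bits"; each hint it
    // asks for costs some amount; once the budget is spent, further requests
    // are denied. Costs and limit are made up for the example.
    const BIT_COST: Record<string, number> = {
      "Sec-CH-UA": 1,              // brand + major version: low entropy
      "Sec-CH-UA-Platform": 1,
      "Sec-CH-UA-Full-Version": 5,
      "Sec-CH-UA-Arch": 2,
    };
    const BUDGET = 6;

    const spentByOrigin = new Map<string, number>();

    function requestHint(origin: string, hint: string): boolean {
      const spent = spentByOrigin.get(origin) ?? 0;
      const cost = BIT_COST[hint] ?? 3;
      if (spent + cost > BUDGET) return false; // over budget: deny the hint
      spentByOrigin.set(origin, spent + cost);
      return true;
    }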

(Disclosure: I work at Google, speaking only for myself)

~~~
vxNsr
Is the "privacy budget" an actual feature of chrome or just an idea? I've
never heard of it until now.

~~~
jefftk
It's a proposal for how to prevent fingerprinting:
[https://blog.chromium.org/2019/08/potential-uses-for-privacy...](https://blog.chromium.org/2019/08/potential-uses-for-privacy-sandbox.html)

~~~
dirtydroog
It prevents others from fingerprinting, though not Google. Isn't there that
X-Client-Data header that Chrome only sends to Google domains?

~~~
jefftk
The X-Client-Data header is documented in
[https://www.google.com/chrome/privacy/whitepaper.html#variat...](https://www.google.com/chrome/privacy/whitepaper.html#variations)
and Chrome uses it to run experiments to make the browser better. It's not
used for fingerprinting.

(Still speaking only for myself)

------
surround
Good. User-agent strings are a mess. Here is an example of a user-agent
string. Can you tell what browser this is?

    Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US) AppleWebKit/525.13
    (KHTML, like Gecko) Chrome/0.2.149.27 Safari/525.13

How did they get so confusing? See: _History of the browser user-agent string_
[https://webaim.org/blog/user-agent-string-history/](https://webaim.org/blog/user-agent-string-history/)

Also, last year, Vivaldi switched to using a user-agent string identical to
Chrome’s because websites refused to work for Vivaldi, but worked fine with a
spoofed user-agent string. [https://vivaldi.com/blog/user-agent-changes/](https://vivaldi.com/blog/user-agent-changes/)

~~~
rplnt
If companies like Google didn't abuse the user agent string to block
functionality, serve ads, and force their users onto specific browsers, then
companies like Google wouldn't have to use fake UA strings, and then maybe
companies like Google wouldn't have to drop support for it.

------
ravenstine
This is a good idea, and is something I've thought of for a while; the user
agent header was a mistake from both a privacy and a UX perspective.

Ideally, web browsers should attempt to treat the content the same no matter
what device you are on. There shouldn't be an iOS-web, and a Chrome-web, and a
Firefox-web, and an Edge-web; there should just be the web. In which case, a
user-agent string that contains the browser and even the OS only encourages
differences between browsers. Adding differences to your browser engine
shouldn't be considered safe.

Beyond that, the user agent is often a lie to trick servers into not
discriminating against certain browsers or OSes. Enough variability is added
to the user-agent string that a server can't reliably discriminate, but it
still remains useful for some purposes in JavaScript and as a fingerprint for
tracking.

Which brings me to privacy. It's not as if there aren't other ways to try and
fingerprint a browser, but the user agent is a big mistake for privacy. It'd
be one thing if the user-agent just said "Safari" or "Firefox", but there's a
lot more information in it beyond that.

If the web should be the same web everywhere, then the privacy trade-off
doesn't make much sense.

~~~
ldoughty
I agree, but this also is incredibly dependent on the major players (e.g.
Google) not going off on their own making changes without agreement from other
browsers...

There are still issues today where Chrome, Edge, and Firefox render slightly
differently. I certainly agree the user agent isn't terribly necessary, but
it's literally the only hook to identify when CSS or JavaScript needs to
change... or to support people on older browsers (e.g. Firefox ESR). How can I
know when I can update my website to newer language versions without metrics
_confirming_ my users support the new ES version?

I would argue for simplifying the UA to product + major revision, or maybe
only information relevant to rendering and JavaScript.
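One way to get that kind of metric without the UA is to feature-detect in the
client and beacon the result back; a sketch (the `/metrics` endpoint is made
up):

    // Probe for a newer syntax feature (optional chaining here): the Function
    // constructor throws a SyntaxError on browsers that can't parse it.
    let supportsOptionalChaining = true;
    try {
      new Function("const o = {}; return o?.x;");
    } catch {
      supportsOptionalChaining = false;
    }

    // Report the result; "/metrics" is a hypothetical collection endpoint.
    navigator.sendBeacon("/metrics", JSON.stringify({ supportsOptionalChaining }));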

~~~
_bxg1
Thinking cynically, it could be a power-move by Google to strengthen their
hold on the ecosystem.

Right now when they go out and make their own API changes without consensus
(which already happens), it's possible to distinguish the "for Chrome" case
and still support the standard. But if there were no User-Agent, and Google
wanted to strongarm the whole group into something, and 90% of browsers are
Chromium-based, devs would likely just support the Chromium version and
everyone else would have no choice but to fall in line.

~~~
ldng
You mean, like SPDY/HTTP2 ?

~~~
klodolph
As far as I can tell, HTTP/2 is such a major improvement that no strong-arming
is necessary. Speaking as a consumer of the web, as an individual who runs
their own website, and as a developer working at a company with a major web
presence.

The web suffers a ton from the “red queen” rule in so many different ways
anyway—you have to do a lot of work just to stay in the same place.

~~~
ldng
But, is it really such an improvement? Or is it just an improvement for cloud
providers that keep pushing the Kool-Aid?

I still see a lot of contradictory benchmarks and, apart from some Google apps,
personally I have not seen a lot of sites actually leveraging HTTP/2 (including
push).

But maybe you did deploy and leverage HTTP/2 on your own website? At your
company? Did you use push? Do you use it with a CDN?

~~~
klodolph
> But, is it really such an improvement?

Yes, unequivocally. It’s amazing, even without push. The websites that use it
are faster, and the development process for making apps or sites that load
quickly is much more sane. You don’t have to resort to the kind of weird
trickery that pervades HTTP/1 apps.

> Or is it just an improvement for cloud providers that keep pushing the
> Kool-Aid?

I don’t see how that makes any sense at all. Could you explain that?

> But maybe you did deploy and leverage HTTP/2 on your own website? At your
> company? Did you use push? Do you use it with a CDN?

From my parent comment,

> Speaking as a consumer of the web, as an individual who runs their own
> website, and as a developer working at a company with a major web presence.

My personal web site uses HTTP/2. It serves a combination of static pages and
web apps. No push. HTTP/2 was almost zero effort to set up, and instantly
improved performance. With HTTP/2, I’ve changed the way I develop web apps,
for the better.

My employer’s website uses every technique under the sun, including push and
CDNs.

~~~
jefftk
_> My employer’s website uses every technique under the sun, including push
and CDNs._

Are you actually seeing good results from push? I have seen many projects try
to use it, but am not aware of _any_ that have ended up keeping it.

(Disclosure: I work at Google)

~~~
collinmanderson
For comcenter.com I push CSS, except if the referrer is same origin.

I _think_ it's working pretty well as far as I can tell.
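A rough sketch of that pattern, assuming an Express app behind a front end
that turns `Link: ...; rel=preload` headers into pushes (e.g. nginx with
`http2_push_on_preload on`); the host and file names are illustrative:

    import express from "express";

    const app = express();
    const OWN_HOST = "comcenter.com"; // illustrative

    app.get("*", (req, res) => {
      const referer = req.get("Referer");
      let sameOrigin = false;
      if (referer) {
        try {
          sameOrigin = new URL(referer).host === OWN_HOST;
        } catch {
          // malformed referrer: treat as cross-origin
        }
      }

      // External or missing referrer: probably a first visit, so hint the CSS
      // for pushing. Same-origin navigation likely has it cached already.
      if (!sameOrigin) {
        res.set("Link", "</static/main.css>; rel=preload; as=style");
      }

      res.send("...page...");
    });

    app.listen(3000);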

~~~
jefftk
If you were up for running an A/B test (diverted per-user, since cache state
is sticky) and writing up the results publicly I'd love to see it!
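("Diverted per-user" generally means bucketing on a stable user ID rather than
per request, so each user stays in the same arm across visits; something like
this sketch:)

    import { createHash } from "crypto";

    // Hash a stable user ID into one of two sticky experiment arms. The
    // experiment name salts the hash so different experiments divert
    // independently of each other.
    function assignArm(userId: string, experiment: string): "push" | "no-push" {
      const digest = createHash("sha256").update(`${experiment}:${userId}`).digest();
      return digest[0] % 2 === 0 ? "push" : "no-push";
    }

    // The same user always lands in the same arm across visits.
    console.log(assignArm("user-123", "h2-push-css"));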

------
olsonjeffery
At my employer we are using the UserAgent to detect the browser so that we can
drive SameSite cookie policy for our various sites (e.g. IE11 and Edge, which
we still support, don't support SameSite=None).

There are a variety of scenarios where this comes up (e.g. we ship a site that
is rendered, by another vendor, within an iframe; so we have to set
SameSite=None on our application's session cookie so that it's valid within
the iframe, thus allowing AJAX calls originating from within the iframe to
work under our current auth scheme... BUT only in Chrome 70+ and Firefox, NOT
in IE, Safari, etc.).

Just providing this as an example of backend applications needing to deal with
browser-specific behavior, since most of the examples cited in other comments
are about rendering/css/javascript features on the client and how UserAgent
drives that.
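For reference, the usual approach is a small check against the clients known
to mishandle `SameSite=None` (older Safari/iOS, Chrome 51-66, old UC Browser).
A sketch along those lines; the regexes are illustrative approximations, not
the exact published list:

    // Decide whether it's safe to send "SameSite=None; Secure" to this client.
    function allowsSameSiteNone(ua: string): boolean {
      // iOS 12 and Safari on macOS 10.14 treat SameSite=None as Strict.
      if (/\(iP.+; CPU .*OS 12[_\d]*.*\) AppleWebKit\//.test(ua)) return false;
      if (/\(Macintosh;.*Mac OS X 10_14[_\d]*.*\) AppleWebKit\//.test(ua) &&
          !ua.includes("Chrome/")) return false;

      // Chrome/Chromium 51-66 reject cookies carrying SameSite=None outright.
      const chrome = ua.match(/Chrom(e|ium)\/(\d+)/);
      if (chrome && Number(chrome[2]) >= 51 && Number(chrome[2]) <= 66) return false;

      // Older UC Browser versions have the same problem.
      const uc = ua.match(/UCBrowser\/(\d+)\./);
      if (uc && Number(uc[1]) < 12) return false;

      return true;
    }

    // Usage: only append the attribute when the client can handle it.
    // cookie += allowsSameSiteNone(req.get("User-Agent") ?? "") ? "; SameSite=None; Secure" : "";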

~~~
jt2190
The proposed User Agent Client Hints API would replace this:
[https://wicg.github.io/ua-client-hints/](https://wicg.github.io/ua-client-hints/)

~~~
anthonyrstevens
The User Agent Client Hints API looks like a very early draft. I could not see
any proposed timeline for implementation or estimate of when this might become
a supported standard.

I would not personally rely on this as a substitute or replacement for User
Agent by September (Google Chrome 85).

------
derefr
These days, it feels like the sole use of User-Agent is as a weak defence
against web scraping. I've written a couple of scrapers (legitimate ones, for
site owners that requested machine-readable versions of their own data!) where
the site would reject me if I did a plain `curl`, but as soon as I hit it with
-H "User-Agent: [my chrome browser's UA string]", it'd work fine. Kind of
silly, when it's such a small deterrent to actually-malicious actors.

(Also kind of silly in that even real browser-fingerprinting setups can be
defeated by a sufficiently-motivated attacker using e.g.
[https://www.npmjs.com/package/puppeteer-extra-plugin-stealth](https://www.npmjs.com/package/puppeteer-extra-plugin-stealth), but I
guess sometimes a corporate mandate to block scraping comes down, and you just
can't convince them that it's untenable.)
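(The same trick from Node, for what it's worth; fill in your own browser's UA
string:)

    // Node 18+ ships a global fetch. Sites that refuse a bare client UA often
    // respond normally once the header looks like a regular desktop browser.
    async function fetchAsBrowser(url: string): Promise<Response> {
      return fetch(url, {
        headers: { "User-Agent": "<your browser's UA string here>" },
      });
    }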

~~~
jaywalk
Preventing scraping is an entirely futile effort. I've lost count of the
number of times I've had to tell a project manager that if a user can see it
in their browser, there is a way to scrape it.

Best I've ever been able to do is implement server-side throttling to force
the scrapers to slow down. But I manage some public web applications with data
that is very valuable to certain other players in the industry, so they _will_
invest the time and effort to bypass any measures I throw at them.
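That kind of throttling is usually just a token bucket per client; a minimal
in-memory sketch (a real deployment would key it on more than the IP alone and
keep state in a shared store):

    // Minimal in-memory token bucket: each client gets `capacity` requests,
    // refilled at `refillPerSec`. Returns false when the request should be
    // slowed down or answered with HTTP 429.
    const buckets = new Map<string, { tokens: number; last: number }>();

    function allowRequest(clientKey: string, capacity = 60, refillPerSec = 1): boolean {
      const now = Date.now();
      const b = buckets.get(clientKey) ?? { tokens: capacity, last: now };
      b.tokens = Math.min(capacity, b.tokens + ((now - b.last) / 1000) * refillPerSec);
      b.last = now;
      const allowed = b.tokens >= 1;
      if (allowed) b.tokens -= 1;
      buckets.set(clientKey, b);
      return allowed;
    }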

~~~
pocket_cheese
As a person who scrapes sites (ethically), I think it's impossible or pretty
damn near impossible to prevent a motivated actor from scraping your website.
However, I've avoided scraping websites because their anti-scraping measures
made it not worth the effort of figuring out their site. I think it's still
worth doing minimal things like minifying/obfuscating your client-side JS and
using some type of one-time-use request token to restrict replayability. The
difference between knowing I can figure it out in 30 minutes vs 4 hours vs a
few days is going to filter out a lot of people.

Of course, sometimes obfuscating how your website works can make it needlessly
more complicated, so it's a trade off.
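The one-time request token mentioned above can be as simple as an HMAC over a
nonce and an expiry, with the nonce remembered until it's used; a sketch, with
secret handling and storage simplified:

    import { createHmac, randomBytes, timingSafeEqual } from "crypto";

    const SECRET = process.env.TOKEN_SECRET ?? "dev-only-secret"; // illustrative
    const usedNonces = new Set<string>(); // a real system would expire these

    // Issue a token to embed in the page; valid for one request within `ttlMs`.
    function issueToken(ttlMs = 60_000): string {
      const payload = `${randomBytes(16).toString("hex")}.${Date.now() + ttlMs}`;
      const sig = createHmac("sha256", SECRET).update(payload).digest("hex");
      return `${payload}.${sig}`;
    }

    // Verify once; replays of the same nonce and expired tokens are rejected.
    function consumeToken(token: string): boolean {
      const [nonce, expires, sig] = token.split(".");
      if (!nonce || !expires || !sig) return false;
      const expected = createHmac("sha256", SECRET).update(`${nonce}.${expires}`).digest("hex");
      if (sig.length !== expected.length ||
          !timingSafeEqual(Buffer.from(sig), Buffer.from(expected))) return false;
      if (Date.now() > Number(expires) || usedNonces.has(nonce)) return false;
      usedNonces.add(nonce);
      return true;
    }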

------
stirner
Meanwhile, you can still use youtube.com/tv to control playback on your PC
from your phone—but only if you spoof your User-Agent to that of the Nintendo
Switch [1]. Sounds like they are more interested in phasing out user control
than ignoring the header entirely.

[1]
[https://support.google.com/youtube/thread/16442768?hl=en&msg...](https://support.google.com/youtube/thread/16442768?hl=en&msgid=16717689)

~~~
ahmedalsudani
Oh wow. I used that in the past and it worked great. I didn’t realize Google
broke it only to force us to use their app.

What a bunch of turds.

Thank you for the Nintendo Switch pro-tip.

~~~
nofunsir
Yes. I firmly believe this is an attack on user control.

For example, I believe they REALLY want us to use the youtube app:

- Viewing youtube.com on a new iPad pro, Goolag lies and says "your browser
doesn't support 1080p."

- Ok, change to desktop version in app. Goolag once again lies and says "your
browser doesn't support full screen." They also lie and say they've redirected
you to the "desktop version", and nag you with a persistent banner that you
should return to the safety of the mobile website.

\- Ok, change to "request desktop version" via user agent. Full functionality.
Full screen is DEFINITELY possible with a javascript bookmark. 1080p+ is
DEFINITELY possible. Ads blocked in browser.

If I were to use the app, they would have FULL CONTROL.

~~~
true_religion
Are you deliberately misspelling Google as Goolag to make it sound like gulag?

------
eric_b
This feels very ivory tower. It reminds me of the "You should never need to
check user agent in JavaScript because you should just feature detect!!". Well
in the real world that doesn't work every time.

The same is true for server side applications of user-agent. There are plenty
of non-privacy-invading reasons to need an accurate picture of what user agent
is visiting.

And a lot of those applications that need it are legacy. Updating them to
support these 6 new headers will be a pain.

~~~
jacobr1
Chrome will support legacy apps by maintaining a static user agent. It just
won't be updated when Chrome updates. If you want to build NEW functionality
where you need to test support for new browser features, you do that via
feature detection.
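I.e. the usual pattern of probing for the capability itself instead of parsing
the UA; a trivial example:

    // Probe for the API you actually need instead of parsing the UA string.
    if ("IntersectionObserver" in window) {
      const observer = new IntersectionObserver((entries) => {
        entries.forEach((e) => e.target.classList.toggle("visible", e.isIntersecting));
      });
      document.querySelectorAll("img[data-lazy]").forEach((img) => observer.observe(img));
    } else {
      // No API: just show everything eagerly.
      document.querySelectorAll("img[data-lazy]").forEach((img) => img.classList.add("visible"));
    }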

------
Humphrey
Interesting. We don't use the UA to track customers, but it has been invaluable
information for tracking down specific bugs. E.g., twice in the past 2 months,
I've had to fix weird bugs that didn't make sense. The only way I was able to
solve them was to look for patterns in which browsers and versions those who
reported the bugs were using. Both turned out to be due to different iOS Safari
cookie-related bugs that only occurred in specific versions. Without logging
the UA there is no way I would have been able to discover those bugs and
create workarounds for those iPhone users.

I'm all for preventing tracking, but I can't imagine a time where all browsers
behave so similarly that we won't have to write workarounds for browser bugs
and differences. As a developer I can't imagine caring about Edgium vs Chrome,
but it's important to know what the underlying engines are.

------
hartator
New proposed syntax adds even more noise:

    
    
        User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) 
        AppleWebKit/537.36 (KHTML, like Gecko)
                Chrome/71.1.2222.33 Safari/537.36
        Sec-CH-UA: "Chrome"; v="74"
        Sec-CH-UA-Full-Version: "74.0.3424.124"
        Sec-CH-UA-Platform: "macOS"
        Sec-CH-UA-Arch: "ARM64"
    

Why not get rid of the `User-Agent` completely?

It's already bad infrastructure design to have the server do different
renderings depending on the `User-Agent` value.

~~~
magicalhippo
Why the hell does a regular website need to know what OS and CPU architecture
I got?

~~~
kabacha
I can already see the permission pop-ups for those:

> for best performance this website would like to know what type of device you
> are using?

One pop-up for every single "hint", with a big "ok" button and a tiny
greyed-out "read more or decline this request" line.

------
floatingatoll
This was recently discussed on HN:

3 months ago:
[https://news.ycombinator.com/item?id=21781019](https://news.ycombinator.com/item?id=21781019)

1 year ago:
[https://news.ycombinator.com/item?id=18564540](https://news.ycombinator.com/item?id=18564540)

------
DevKoala
From the git repo:

> Blocking known bots and crawlers Currently, the User-Agent string is often
> used as a brute-force way to block known bots and crawlers. There's a
> concern that moving "normal" traffic to expose less entropy by default will
> also make it easier for bots to hide in the crowd. While there's some truth
> to that, that's not enough reason for making the crowd be more personally
> identifiable.

This means that consumers of the Google Ad stream have one less tool to
identify bots, and will pay Google for more synthetic traffic, impressions and
clicks; this could be a huge revenue boost for Google. A considerable amount
of their traffic is synthetic. I doubt this was overlooked.

------
leeoniya
does this mean there will no longer be a way of determining if the device is
primarily touch (basically all of "android", "iphone" and "ipad") or
guesstimating screen size ("mobile" is typical for phones in the UA) on the
server?

[https://developer.chrome.com/multidevice/user-agent](https://developer.chrome.com/multidevice/user-agent)

i wonder what Amazon will do. they serve completely different sites from the
same domain after UA-sniffing for mobile.

is the web just going to turn into blank landing pages that require JS to
detect the screen size and/or touch support and then redirect accordingly?

or is every initial/landing page going to be bloated with both the mobile and
desktop variants?

that sounds god-awful.

~~~
bdcravens
Presumably you'll grab the dimensions (could cache after first load) and then
render dynamically based on that. If you're doing some sort of if statement on
the server to deliver content based on screen size you're probably doing it
wrong. Obviously I can't speak for every mobile user, but for myself, it's
infuriating to have a completely different set of functionality on mobile.
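The "grab the dimensions and cache them" part can be as small as a cookie the
server reads on the next request; a sketch (the cookie name is arbitrary):

    // Client side: remember the viewport width so the server can tailor the
    // next response. "vw" is an arbitrary cookie name for the example.
    document.cookie = `vw=${window.innerWidth}; path=/; max-age=${60 * 60 * 24}`;

    // Server side (Express with cookie-parser): fall back to a neutral layout
    // on the very first request, before the cookie exists.
    // const width = Number(req.cookies.vw) || null;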

~~~
leeoniya
> If you're doing some sort of if statement on the server to deliver content
> based on screen size you're probably doing it wrong. Obviously I can't speak
> for every mobile user, but for myself, it's infuriating to have a completely
> different set of functionality on mobile.

there's not a "right" and a "wrong" here; it's about trade-offs.

you're either stripping things down to the lowest common denominator (and
leaving nothing but empty space on desktop) or you're wasting a ton of mobile
bandwidth by serving both versions on initial load (the most critical first
impression).

you frequently cannot simply squeeze all desktop functionality from a 1920px+
screen onto a 320px screen - unless you have very little functionality to
begin with. Amazon (or any e-commerce/marketplace site) is a great example
where client-side responsiveness alone is far from sufficient.

[https://www.walmart.com/](https://www.walmart.com/) does it okay, but you can
see how much their desktop site strips down to use the same codebase for
desktop and mobile.

------
StillBored
Can't happen soon enough. As a frequent user of various non-mainstream
browsers, I'm sick and tired of seeing "your browser isn't supported" messages
with download links to Chrome/etc. At least in the case of Falkon it has a
built-in user agent manager, and I can't remember the last time flipping the
UA to Firefox/whatever actually caused any problems. Although I've also gotten
annoyed at sanctimonious web sites that tell me my browser is too old because
the FF version I've got the UA set to isn't the latest.

~~~
y_nk
If your browser isn't supported, it's not the browser's fault; it's the fault
of the website you're visiting for not supporting your browser.

------
Roboprog
I log this for coarse statistics about what our user base is running, but that
is about it.

The good news: IE use is down over the last year to only about 40%.

The bad news: the growth elsewhere is all Chrome, with less than 1% Firefox or
Safari. There’s a tiny sprinkling of Edge, as well, but I forget the numbers
on that.

Our users are state and county offices and medical facilities, rather than
private individuals, so the users are somewhat captive to whatever their
organization mandates.

The only browser detection we do is in client-side scripting, to detect whether
the browser can directly display a PDF inline (or not, in the case of IE11).

------
manigandham
I would much prefer a new version of the user-agent string. Normalize basic
information (like OS and browser versions) without revealing too much (build
numbers).

That would let servers still get necessary info without having to run even
more JavaScript. It can just be in querystring format to simplify parsing on
both client and server.

~~~
recursive
Any user agent string will eventually be forced down the same path. Web sites
use them to deny content. And the browsers will continue to try to match more
patterns so their users see the content.

As long as they exist, I can see no escaping this arms race.

------
ErikAugust
Larry Page no longer wants to be a “good net citizen”?

[https://groups.google.com/forum/m/#!msg/comp.lang.java/aSPAJ...](https://groups.google.com/forum/m/#!msg/comp.lang.java/aSPAJO05LIU/ushhUIQQ-ogJ)

------
nofunsir
I view this as an attack on the web as it stands.

Google wants to create a walled-garden net. Goog-net. All ads, shopping,
videos, documents, email, locations, articles flowing through THEIR protocols
and THEIR servers and THEIR fiber. No possibility of blocking ads they don't
want blocked. No URLs. No Agent strings. No user control. Only user
consumption.

If they have to allow some small chump players to have a piece of this cake (a
la: "Oh trust us, AMP is an 'open' protocol and anyone can host it. it's not
just for our own benefit in the end. Trust us.") in order for the entire
population to accept their changes bit by bit, so be it in their eyes. They
know we would reject an outright takeover.

------
gumby
This is OK...I guess? I mean it's great to get rid of that overloaded
carbuncle of user-agent, but that will just lead to a new round of
interpreting "hints". _shrug_

Google is a serial abuser of user-agent already so this is somewhat ironic.

------
guyn
The first time one of my articles appears on HN, I'm kinda excited

------
jakeogh
Fantastic. Thank you Chrome team! Especially for those who don't execute
arbitrary JS, this is a huge +.

Personally, I would like to drop the line completely and not send the key at
all, but it's a start.

~~~
maverick74
totally agree!!!

------
varelaz
So Google found a good way to fingerprint users without the user agent, and
found that a lot of user agents are forged so the UA stopped working for that
anyway. Now it's time for the forging to move to the new API.

------
superkuh
User-agent is super useful to human people. But corporate people don't have a
use for it. They will get that information via running arbitrary code on your
insecure browser anyway. So, because mega-corps now define the web (instead of
the w3c) this is life.

But it doesn't have to be. We don't have to follow Google/Apple web standards.
Anyone that makes and runs websites has a choice. And every person can simply
choose not to run unethical browsers.

~~~
recursive
> User-agent is super useful to human people.

For what? Honest question. You have to be like a 5th-level user agent wizard
to make any sense of user agent strings, since every browser now names every
other browser. How do you do anything useful with this in a way that's
forward-compatible?

~~~
superkuh
I look at the logs of my websites with my eyeballs, manually, after a Perl
script winnows them down (i.e. removes hits from me, hits from Tor, etc).

------
badrabbit
This is insane. You know, no HN post to a Google Blogspot site works for me,
because these jerks are the only ones that discriminate on UA.

Google engineering is sooooo disconnected from the rest of the world, I think
we need legal regulation to stop them from doing stupid things like this. Do
they have any idea how many things need it?

HNers with a position of power at work, I plead with you: please advocate
banning Chrome at work and replacing it with any one of the WebKit-based
alternatives or Firefox. These people are insane. Every month I hear of some
ridiculous thing. They took out navbar URL parameters, added links to in-page
words, now this!

For those who think this is good for privacy... it is not! This is the same old
sneaky, evil thing they do. The UA can be used to fingerprint you, but it's
very easy to set a generic user agent. Actually, if you look at user agents,
most of them have the latest string for Chrome, IE or Firefox, so it isn't
useful without a whole lot of other details correlated with it. You know what
the exception is? Android and iPhone browsers that include your device make
and model in the UA, and apps that include a whole lot more, like Facebook's
apps.

Do you know what a "flexible" API like the one they're talking about allows?
More fingerprintable data points! The fact that you even use that API is a
privacy issue. Let's say Client Hints allows for 10 different variations of
responses from clients; your specific client's details might have just 3 things
different from the mean and bam, now they can track your specific device. With
the UA, all versions of a client have the same exact detail and most people
need an extension to change it, so it's much harder to fingerprint.

This is the same old sneaky bait-and-switch Google pulls. The content of your
UA is not the privacy concern (although it contains too much at times); it is
the fact that it can be correlated with timing info, IP (especially v6), and if
they already know your UA they will also use your client's default HTTP header
options to identify and track you without consent.

------
KingOfCoders
Any idea how to identify devices then? We currently check the user agent to
send a new code when a user logs in from a new device. How would you do this
without the user agent?

~~~
SifJar
Use a cookie?

------
fpoling
This change does not remove the user agent. In practice it just hides the OS
and the browser version, but the user may opt in to sending those to a
particular site.

------
intsunny
Ah, the end of the countless references to KHTML :)

As a long time KDE user I'm a little sad, but also fully aware this day would
come.

~~~
marcosdumay
How can we use a browser that doesn't pretend to be Netscape Navigator? This
will never work :)

------
2400
if you want to go to the source of that story:

[https://groups.google.com/a/chromium.org/forum/#!msg/blink-d...](https://groups.google.com/a/chromium.org/forum/#!msg/blink-dev/-2JIRNMWJ7s/yHe4tQNLCgAJ)

------
abhishekjha
I was wondering. Isn't the page rendered on mobile and desktop based on user-
agents? How would that work now?

~~~
tenebrisalietum
I thought it used Javascript to detect screen size. At least it should react
to resize events and if the dimensions are something that align with mobile,
it should switch to mobile mode.

~~~
wlesieutre
In a lot of cases you shouldn't even use Javascript for this, responsive
layouts can be built using CSS media queries based on viewport size.

More advanced webapps might occasionally need to do something fancier than
that if the mobile vs desktop functionality is (for some reason) substantially
different instead of just rearranged.

[https://developer.mozilla.org/en-US/docs/Web/CSS/Media_Queri...](https://developer.mozilla.org/en-US/docs/Web/CSS/Media_Queries/Using_media_queries)
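And for the "something fancier" case, `window.matchMedia` lets script share
the same breakpoint the CSS uses; a small example:

    // Mirror a CSS breakpoint in script and react when it changes, instead of
    // guessing the device class from the User-Agent.
    const mobileLayout = window.matchMedia("(max-width: 600px)");

    function applyLayout(isMobile: boolean): void {
      document.body.classList.toggle("mobile", isMobile);
    }

    applyLayout(mobileLayout.matches);
    mobileLayout.addEventListener("change", (e) => applyLayout(e.matches));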

------
maverick74
Finally someone stepped up to stop the UA madness!!!

Now, all we'll need is a way to not send anything at all!!!

------
CKN23-ARIN
> While removing the User-Agent completely was deemed problematic, as many
> sites still rely on them, Chrome will no longer update the browser version
> and will only include a unified version of the OS data.

So, nearly all of the information that makes User-Agent strings problematic
will remain. They're just phasing out precise version information.

------
y_nk
Isn't it concerning that Google decides to go ahead and implement this even
though the conversation on GitHub concluded with "it should be rejected by W3C"?

------
classified
So basically, Google shat their own bed and is just now beginning to realize
that it stinks. Attempts to invent the universal internet user toll booth are
back on track.

------
smashah
Stupidity. User-agent spoofing is a fact of life for many projects. Whatever
feature they come out with to replace the UA will be spoofable soon enough too.

------
mcs_
Sorry, does anyone know the link to the original source of this?

------
dheera
No! I loved User-Agent because I could fake other user agents, e.g.

- being the Google crawler to get past paywalls

- being a Mac user agent to get free internet access at some hotels

------
yu_chen
ahhh

------
justlexi93
More specifically, Google thinks they're the central authority as to what
Chrome will do.

------
PaulHoule
If they're the dominant web browser people will assume you are using Chrome
anyway.

------
baggy_trough
Annoying, as I just added a user agent based workaround for another Chrome
compatibility problem (the increased security on same-site cookies, which
can't be handled in a compatible way with all browsers).

------
zmix
Because, there can be _only one user agent_...!

------
gregoriol
As usual, this will fuck up the users: not the techy nerds making such
decisions, but the average Joe, because things on the internet will be broken
for them.

~~~
untog
How will things be broken? Google is not removing the user agent, they're just
freezing it. So all sites that currently depend on the user agent will
continue to do just fine. New sites can use client hints instead, which are a
much more effective replacement for user agent sniffing.

This solution very specifically places the burden on "techy nerds" and not
users, so I'm not sure where you're coming from.

~~~
henriquez
Right, using user agent on the client side has been unsalvageably broken for a
long time. Other things, like checking the existence of window.safari or
window.chrome are more reliable.
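For example (though these globals aren't standardized either, so treat the
result as a hint only):

    // Rough engine checks via vendor-specific globals rather than the UA string.
    const isChromeLike = typeof (window as any).chrome === "object";
    const isSafariLike = typeof (window as any).safari !== "undefined";
    console.log({ isChromeLike, isSafariLike });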

For the server side, I'm not aware of many cases where it's useful other than
analytics, and there is too much info leakage and fingerprinting happening
anyway.

So killing user agent doesn’t really seem user-hostile, save for the fact that
the company doing it has near monopoly market share and doesn’t _need_ to
provide a user agent, as it’s assumed that everyone is writing code to run on
Google’s browser. In that sense it’s a flex.

