
Explainer: Reducing User-Agent Granularity - jimws
https://github.com/WICG/ua-client-hints
======
mikl
Why not replace the User-Agent with _nothing_?

I’ve been a full-time web dev since 2005, and I have never needed to sniff
around in the User-Agent string. Having the _server_ care about what browser
is on the other end is an anti-pattern, makes life difficult for browser-
developers (and caching systems), and helps creepy tracking software
fingerprint people.

So instead of weighing down HTTP requests with a bunch of new headers next to
no one will need, I think we should move to end browser-sniffing. Fix the
User-Agent string like OP proposes, but add nothing to replace it.

~~~
daxterspeed
There are two arguments I've heard against removing the User-Agent field:

"It helps us understand browser share" and "UA sniffing is faster than feature
testing".

I believe both of these could be remedied with some minor work. Websites
shouldn't have access to browser version, CPU architecture, model name, etc.
by default. Websites with legitimate needs for this information can request
access from the user, and the user would be in charge of deciding whether the
website has earned the right to it (useful for e.g. sites that want to report
"last login at [date] from [browser] on [platform]").

In terms of UA sniffing being faster than feature detection, old browsers will
continue to send their outdated User-Agent strings. Websites will simply have
to opt for feature detection in actively developed browsers.
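For concreteness, feature detection usually amounts to probing for the
capability itself rather than parsing the UA string. A minimal sketch (the
`globalLike` parameter is just a stand-in for `window`/`globalThis` so the
probe is easy to test outside a browser):

```javascript
// Probe for a capability directly instead of guessing from the User-Agent.
// Here: does the environment expose IntersectionObserver?
function supportsIntersectionObserver(globalLike) {
  return typeof globalLike.IntersectionObserver === "function";
}

// In a real page this would be called as:
//   if (supportsIntersectionObserver(window)) { /* use it */ }
//   else { /* load a fallback */ }
```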

In terms of the server determining whether it _should_ serve a "lite" version
of the site, there's potential in headers like "Save-Data: on". There's also
research into headers that would send incredibly coarse information about the
device's capabilities (e.g. available memory rounded down to the nearest
1024^n bytes).
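As a rough sketch of what the server side could look like, assuming lowercase
header keys (as Node's `req.headers` provides) and the proposed Save-Data and
Device-Memory header names; the 1 GiB cutoff below is an arbitrary
illustration, not from any spec:

```javascript
// Decide whether to serve a "lite" page from coarse, capability-style hints.
// Save-Data is an explicit user preference; Device-Memory (per the proposal)
// reports approximate RAM in GiB, rounded to a power of two.
function shouldServeLite(headers) {
  if ((headers["save-data"] || "").toLowerCase() === "on") return true;
  const mem = parseFloat(headers["device-memory"]);
  // Arbitrary illustrative threshold: treat under 1 GiB as low-memory.
  return Number.isFinite(mem) && mem < 1;
}
```

Note how little the server learns here compared to a full UA string: a
boolean preference and one heavily rounded number.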

~~~
mikl
> potential in headers like "Save-Data: on"

Yes, headers, if added, should be about the _capabilities_ of the browser, not
the browser vendor, the underlying OS or hardware.

~~~
eyelidlessness
The problem with expressing capabilities is that they will never be binary in
the real world, and the feature set is growing constantly.

Often with emerging functionality, one vendor will run an experiment, other
vendors will refine it, and it will go through the standards process. If
Chromium first introduces `Whargarbl: On` and WebKit implements it
differently, the header may become something like
`Whargarbl: -webkit-frobnicate`. Dozens of headers like that may end up being
sent.

This is almost certainly better suited to feature testing in the runtime.
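The Fullscreen API is a real historical case of exactly this vendor
divergence, and runtime feature testing handles it in a few lines (sketch;
the prefixed method names below are the actual ones browsers shipped):

```javascript
// Find whichever fullscreen method this element's environment supports,
// across the vendor-prefixed implementations that preceded standardization.
function getFullscreenMethod(el) {
  return (
    el.requestFullscreen ||        // standard
    el.webkitRequestFullscreen ||  // WebKit/Blink
    el.mozRequestFullScreen ||     // old Firefox (note the capital S)
    el.msRequestFullscreen ||      // old IE/Edge
    null
  );
}
```

Shipping the same information as request headers would mean every request
carries the union of all such quirks, for every feature, forever.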

~~~
mikl
No doubt, feature detection is much better. There are a few things that need
to be at the header level, like compression standards and encoding, but the
less data is attached to a request by default, the better.
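Accept-Encoding is a good model for what a capability header should look
like: the client lists what it can decode, the server picks one. A naive
sketch of the server side (it ignores q-values, which a real implementation
must honor; the preference order is an assumption, not a spec requirement):

```javascript
// Pick a response encoding from an Accept-Encoding header value.
// `supported` is the server's preference-ordered list of encodings.
function pickEncoding(acceptEncoding, supported = ["br", "gzip"]) {
  const offered = acceptEncoding
    .split(",")
    .map((part) => part.split(";")[0].trim().toLowerCase());
  return supported.find((enc) => offered.includes(enc)) || "identity";
}
```

Note that the header says nothing about who made the browser or what hardware
it runs on, only what the response needs to look like.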

