
DoH Privacy Enhancement: Do Not Set the User-Agent Header for DoH Requests - generalpass
https://bugzilla.mozilla.org/show_bug.cgi?id=1543201#c4
======
DyslexicAtheist
I don't see the benefit of the built-in DoH in Firefox, since it shifts trust
from my existing DNS provider to Cloudflare[1] (based in the US).

Today I discovered that I couldn't kill the disqus.com requests with my usual
method of sink-holing them in /etc/hosts: for some reason Firefox ignored the
/etc/hosts file, while Chrome worked fine and local dig queries also returned

    disqus.com.  0 IN A 0.0.0.0

I was left dumbstruck.
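
For reference, the sinkhole entry itself is the standard /etc/hosts line:

    0.0.0.0 disqus.com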

I was able to override this in the about:config as described in [1].
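
For anyone skipping the link: the override boils down to a single about:config
pref, where 5 means "off by choice" among Mozilla's TRR modes:

    network.trr.mode = 5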

I really don't like that Mozilla ignores what is defined in /etc/nsswitch.conf,
because this breaks all the blacklisting rules a user has set up themselves.
At the very least they should have shown a huge warning telling users that
this ignores any local blacklists that are in place.

[1] https://ungleich.ch/en-us/cms/blog/2019/09/11/turn-off-doh-firefox/

~~~
Spivak
Firefox is doing the correct thing here. Firefox cannot read /etc/hosts itself
without breaking some setups, since NSS is a series of black-box modules that
Firefox can't assume work any particular way. There's nothing special about
/etc/hosts on a modern Linux system.

The files module on your system could read /etc/dns or the blarg module could
read /etc/hosts but in JSON format.
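
For reference, a typical hosts line on a desktop Linux system looks something
like:

    hosts: files mdns4_minimal [NOTFOUND=return] dns myhostname

and nothing guarantees that "files" means plain old /etc/hosts on any given
box.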

The thing you're annoyed about is that Firefox _with DoH enabled_ doesn't use
NSS at all when resolving names, but there is absolutely no way for Firefox to
do both: it can either send requests through the NSS gauntlet and use whatever
that returns, or not use it at all. "Respecting /etc/nsswitch.conf" while also
doing DoH is impossible to do correctly.

Linux systems don't provide the right facilities to supplement application
level DNS in the way you want. If you want to do DNS in your app you must
ignore libc DNS.
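
A minimal Python sketch of the split (assuming a Linux box with glibc): the
first lookup rides the NSS pipeline, while anything the application resolves
itself does not:

    import socket

    # Goes through libc's getaddrinfo(), i.e. the full NSS pipeline:
    # /etc/nsswitch.conf decides which modules (files, dns, mdns4, ...)
    # run, so an /etc/hosts sinkhole is honored here.
    addrs = socket.getaddrinfo("disqus.com", 443, proto=socket.IPPROTO_TCP)
    print(sorted({info[4][0] for info in addrs}))

    # A resolver the application implements itself -- DoH included --
    # never calls getaddrinfo(), so NSS and /etc/hosts simply never run.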

~~~
throwaway2048
Firefox should not be ignoring libc DNS without prompting.

------
Someone1234
I'm glad this happened. I feel like it is one of those "if you don't do it
now, you won't be able to do it later." As more DoH implementations spring up,
at least one of them will require the User-Agent for no reason.

Plus it is useless, may allow hacky discrimination, and wastes bytes.

~~~
desc
I wonder if the best approach might be to specify the precise request format
required for DoH and actively deny requests with extra information.

'Thou shalt provide exactly this, no more and no less'
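
A sketch of what that active denial could look like on the server side (the
allowlist here is illustrative, not taken from any spec):

    # Hypothetical DoH front end: reject any request carrying headers
    # beyond the bare minimum the protocol needs.
    ALLOWED_HEADERS = {"host", "accept", "content-type", "content-length"}

    def is_conforming(headers: dict) -> bool:
        return {name.lower() for name in headers} <= ALLOWED_HEADERS

    print(is_conforming({"Host": "doh.example",
                         "Accept": "application/dns-message"}))  # True
    print(is_conforming({"Host": "doh.example",
                         "User-Agent": "Firefox"}))              # False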

Depends on whether we want to consider later 'improvements' to be silent
extensions or an entirely new protocol version. Given how easily abused HTTP
headers have been in the past, might be worth the inertia to limit
extensibility by design in order to limit MITM tracking.

But MITM middleboxes will of course start to rely on that...

I suppose that if TLS is broken for this part of the system you're rather
screwed anyway, but defence in depth is always worth considering.

~~~
anoncake
What's the point of using HTTP for DNS if you aren't going to use its
abilities?

~~~
Someone1234
Its raison d'être is largely to mitigate the issues that limited DoT's (DNS
over TLS) widespread adoption, specifically "Middlebox Bypass."

Meaning: many public WiFi networks will only allow their captive DNS and block
other protocols, including DoT. DoH is difficult to distinguish from other
HTTPS traffic and is more likely to resolve successfully (since they cannot
block HTTPS and still be useful).
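
To make that concrete: a DoH lookup is just an ordinary HTTPS GET. A minimal
sketch against Cloudflare's documented JSON endpoint:

    import json
    import urllib.request

    # On the wire this is indistinguishable from any other HTTPS request,
    # which is exactly why middleboxes struggle to single it out.
    req = urllib.request.Request(
        "https://cloudflare-dns.com/dns-query?name=example.com&type=A",
        headers={"Accept": "application/dns-json"},
    )
    with urllib.request.urlopen(req) as resp:
        answer = json.load(resp)
    print([record["data"] for record in answer.get("Answer", [])])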

Therefore you've created a secure, flexible, and reliable DNS replacement with
only mild technical downsides on modern hardware (a lot of the complaints are
ideological, or concern incompatibilities with existing setups, like the lack
of HOSTS-file support in the initial offerings).

So I guess everything I just said is "the point," rather than trying to use
every single possible combination of HTTP headers.

~~~
ocdtrekkie
DoH as a protocol has no issue with HOSTS files. The issue is browsers
implementing DoH instead of operating systems. If browsers stayed in their
lane and let operating systems implement DoH, in line with corporate IT
policies and mechanisms like the HOSTS file, everything would be fine.

Microsoft has already committed to implementing DoH in Windows itself.
Browsers just need to stop trying to be their own DNS clients.

~~~
damnyou
Mozilla's operating system bombed horribly, so the only leverage they have is
with the browser.

~~~
ocdtrekkie
Why is a browser developer trying to apply "leverage"? In that question, we've
already located the problem.

------
ghostpepper
To save everyone else from googling, DoH is DNS over HTTPS.

~~~
burundi_coffee
Not to be confused with DNS over TLS (DoT)

------
g82918
I feel like User-Agent in general is not well suited to the modern web. Most
browsers pretend to be some version of Mozilla or Netscape, and most things
that scrape sites, like Apple's messaging app, pretend to be something equally
outdated. Deprecating User-Agent seems like the best course of action in the
long run.

~~~
tzs
I'd be OK with getting rid of User-Agent if some kind of "capabilities
supported" header were added. Something like

    Agent-Capabilities: 1; 97464bde5d94a54f6e199309489a4b60

where the 97464bde5d94a54f6e199309489a4b60 is a set of bit flags in hex, each
representing some feature, and the 1 before the semicolon is a version number
for the bit definitions. The assignment of these flags would have to be
standardized. When new versions of HTML, CSS, or JavaScript are standardized,
that would include assigning bits for their new features, and bumping the bit
definition version number.
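
A sketch of encoding and checking such a header; the bit assignments are
invented here, since the whole point is that a standards body would own them:

    # Hypothetical feature bits; a real registry would standardize these.
    CSS_GRID = 1 << 0
    ES2017 = 1 << 1
    WEBP = 1 << 2

    def encode_capabilities(version: int, flags: int) -> str:
        return f"Agent-Capabilities: {version}; {flags:x}"

    def supports(flags_hex: str, feature: int) -> bool:
        return bool(int(flags_hex, 16) & feature)

    print(encode_capabilities(1, CSS_GRID | WEBP))  # Agent-Capabilities: 1; 5
    print(supports("5", ES2017))                    # False: bit 1 not set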

You might say that it is better to do run-time checks on the client for
features you want, and have the client then load workarounds if necessary for
missing features, and that indeed is a good way to go in many cases (maybe
even most cases).

That method, though, only works going forward. Sometimes I want to know what
capabilities past visitors to my site have had. For example, I'm going to be
redoing some pages at work and want to know if I can require certain CSS and
JavaScript and HTML features.

I could make a list of the features I want to use, add some JavaScript to
those pages to test for those, and send the results back to the server...and
then I'd have to wait weeks or months or longer to get a good idea of what the
consequences would be of requiring those features.

With User-Agent strings, all I had to do was pull up the site logs. Our
shopping cart logs the User-Agent string and the page flow through the cart.
It was an easy matter to pull up the last 18 months of those logs and write a
script to find all the successful orders and make a table showing how many
there were for each browser. (I limited it to successful orders because those
would almost certainly all be real people in regular browsers that are
reasonably honest about their User-Agents.) I could then look up the features
I was interested in on https://caniuse.com/ and see which of them were missing
from browsers that still send us significant traffic.
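
The script is only a few lines; a sketch, assuming a hypothetical
tab-separated log of timestamp, User-Agent, and order status:

    from collections import Counter

    # Tally completed orders per User-Agent string.
    orders_by_browser = Counter()
    with open("cart.log") as log:
        for line in log:
            _timestamp, user_agent, status = line.rstrip("\n").split("\t")
            if status == "order_complete":
                orders_by_browser[user_agent] += 1

    for user_agent, count in orders_by_browser.most_common():
        print(count, user_agent)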

Dropping User-Agent without adding something to allow that kind of analysis,
such as Agent-Capabilities, would be very annoying.

------
3xblah
The Cloudflare TRR endpoint is mapped to a subdomain indicating that the user
agent, i.e., the browser, was distributed by Mozilla: mozilla.cloudflare-dns.com.
The IP address for the subdomain is the same as for cloudflare-dns.com. Would
this subdomain be leaked in SNI?^1 Presumably it serves to let someone, maybe
Cloudflare, track queries as coming from Firefox.
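
The same-IP claim is easy to check (results will vary by vantage point, since
these are anycast addresses):

    import socket

    # Compare the address sets the two names resolve to.
    for host in ("mozilla.cloudflare-dns.com", "cloudflare-dns.com"):
        ips = sorted({info[4][0] for info in socket.getaddrinfo(host, 443)})
        print(host, ips)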

Will the endpoints for other TRRs have special subdomains?

Perhaps another "privacy enhancement" would be to avoid using a browser-
specific subdomain.

1. ESNI is still experimental and, if I am not mistaken, disabled by default
in Firefox.

~~~
M2Ys4U
That's still better than sending the entire UA, though, which includes OS and
browser version numbers.

------
badrabbit
Just curious, would your employers let you use DoH? I was told to stop using
it.

~~~
treve
Employers are typically not afraid of the protocol, but afraid of not being
able to control it. Companies can run their own filtered DoH just like they
can with DNS.
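
In Firefox that amounts to two prefs, settable via about:config or pushed
through enterprise policy (mode 3 = DoH only, no fallback to system DNS); the
resolver URL below is a made-up corporate endpoint:

    network.trr.mode = 3
    network.trr.uri = https://doh.corp.example/dns-query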

------
cocoa19
> Instead of setting it to an arbitrary string it would be great if the UA
> header was not set at all.

Didn't someone post their experience of getting more expensive flights because
they used a Mac?

Yes! Please, take UA out.

~~~
jrockway
Are we sure this wasn't a session cookie that just happened to pick experiment
A or experiment B?

The user agent isn't very important. What we should start being worried about
is that ISPs are now offering API access that maps your IP address to your
phone number and demographic information. (Verizon does this right now, but
you can opt out in your account settings.)

Maybe all those VPN providers have a point. Delete the user agent, share an IP
with thousands of people, and maybe you can browse in peace.

~~~
DownGoat
Then you get stuck in ReCaptcha hell because you are not "human".

------
nif2ee
It's all about money, wrapped in the "security" marketing buzzword. Yes, DoH
is an improvement over old plaintext UDP DNS. But why should I use a nobody
service over Cloudflare or Google, which update their databases all over the
world within seconds and rarely if ever face downtime?

~~~
generalpass
> It's all about money, wrapped in the "security" marketing buzzword. Yes,
> DoH is an improvement over old plaintext UDP DNS. But why should I use a
> nobody service over Cloudflare or Google, which update their databases all
> over the world within seconds and rarely if ever face downtime?

It is not so straightforward as always being an improvement. For example, a
user can be uniquely identified through the TLS session resumption ID, whereas
with plaintext UDP behind a firewall, with many others doing lookups from the
same address, uniquely identifying a user can be close to impossible.
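
One client-side mitigation sketch in Python: refuse session tickets, so
successive connections to the resolver can't be linked through resumption
state (ssl.OP_NO_TICKET stops the client from requesting tickets):

    import socket
    import ssl

    # Disable ticket-based session resumption so the server cannot link
    # this connection to earlier ones via resumption state.
    ctx = ssl.create_default_context()
    ctx.options |= ssl.OP_NO_TICKET

    with socket.create_connection(("cloudflare-dns.com", 443)) as raw:
        with ctx.wrap_socket(raw, server_hostname="cloudflare-dns.com") as tls:
            print(tls.version(), tls.session_reused)  # fresh handshake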

~~~
nif2ee
Yes, that's one more reason to trust Cloudflare and Google over a nobody
service, since their motives with their free DNS services (including DoH and
DNS over TLS) relate more to up-to-date synchronization, load balancing, etc.
with the servers they host themselves. Spying on users via DNS queries by
Cloudflare or Google is the kind of joke that can bait only the ignorant.
Cloudflare terminates TLS connections and even issues TLS certs, so it can
"theoretically" see all the sensitive data that could compromise both
businesses and users, and yet it is trusted by countless public companies and
startups alike. The same goes, if not more so, for Google with its own
services and its GCP business.

------
rasvj
What if there's a bug in Mozilla's implementation at some point and DoH
servers have to return a slightly different response for certain versions of
Firefox? How will they achieve that?

~~~
Kuinox
Then Mozilla needs to fix their implementation. If the client is buggy, you
don't fix the bug in the server.

~~~
rasvj
Welcome to the real world.

