Is this really the general expectation?
From what I've read so far, I'd expect that both Chrome and Firefox will simply hardwire this to dedicated resolver endpoints selected by them - and maybe provide group policy or about:config options as an override, with strong communication that ordinary users should not use them.
Simply put, I still don't buy the idea that the browser needs a DNS client. The OS network stack can, and should, be the provider of that service.
This is just, "We know what's best for everybody. It's easier to ignore router vendors, DNS server vendors, ISPs, IEEE standards, and so on if we just do it ourselves in spite of the problems it's sure to cause. That other way is hard, anyway. Damn the consequences, we're doing it!"
What I see as more concerning is that this could hugely shift how the whole system of domains runs. In the worst case, if a single (or a small group of) DoH services turn out to be dominating, they will effectively have full control over all domains - even local ones.
If you want to make a domain inaccessible, no need to bother with the registrar, simply stop serving it.
Want to add a new top-level domain without handing the IANA billions of dollars? No problem, just start to serve it tomorrow - "technically", it won't be a valid domain, but who cares if all browsers resolve it?
Think another domain is rightfully yours? You can file a dispute with the registrar and "proactively" redirect it to your own servers while the dispute is processed.
I know that both Cloudflare and Google are attempting to become the central recursive resolver for everyone, but it seems to me that if we are abandoning the current system of having the ISP as your automatic recursive resolver, then the step to just doing it yourself has gotten very small. The only argument I have heard against it is that users will rebel against the extra milliseconds from not getting a cached response from a local cache, compared to the anycast response that most large sites have for DNS (which, given the practical nature of anycast, will be more or less the same as a local provider).
There is no need to keep refreshing DNS records in the background and maintain a massive cache. The content delivery system does that all day long. Because this is a service that ISPs want to sell, there is also a huge incentive for them to get large companies to pay for the CDN rather than expect them to run background jobs with large memory costs for free.
But even if there were added latencies for some subset of sites, we are talking about milliseconds. With DoH there has been a simultaneous claim that with HTTPS they can push multiple responses for a single request (something the DNS protocol supports but which has never been used in any implementation), which means that latency can now be reduced to a single round trip. The difference between a single round trip to Cloudflare's CDN recursive resolver and a single round trip to the official anycast DNS resolver with DoH is marginal, if any. If the site is hosted on Cloudflare, the whole question is also moot.
If you're configuring your own DNS server you're waaaay out of scope for the problems this is trying to address.
Yes, because you have to trust one less third party if you use whatever internet connection DHCP provides you. ISPs can see IP addresses you connect to anyway and can figure out what domains and services they belong to. If you don't trust that DHCP, you have to use at least a trustworthy VPN to have some privacy improvement, DoH to a third party will only make it worse.
At home, I block all the public resolvers on my router and intercept DNS, tunnel it through a vpn mesh.
For most people at public open WiFi spots, DoH could certainly help.
Yes. Because that's my network, my DHCP and something I have 100% control over.
There's literally nothing in the universe which better protects my privacy.
Maybe I am not up to date, but Mozilla has always been explicit that it would not be activated by default (and that one way of activating it would have the system DNS as a fallback).
> The OS network stack can, and should, be the provider of that service.
I understand why it is important not to mess with the underlying network stack, but I would also like not to be redirected to random websites when on a public wifi (even if sometimes blocking data-intensive domains like youtube is a good thing). I feel that the original Mozilla blog had a pretty good explanation of the advantages.
Here's the old thread by a Chrome dev on why they wanted their own resolver: https://plus.google.com/+WilliamChanPanda/posts/FKot8mghkok
I don't really agree. If the host's DNS client is using a good caching algorithm, then it should have a cache hit rate in the 80s or 90s if your hostnames are remotely sensible. Most people go to the same sites over and over, and those sites typically feature content from the same networks. In that case, over 80% of the DNS traffic isn't actually network traffic. It's just querying the host's cache database. Furthermore, the browser knows every link the user has access to on a page or bookmark. They can be (and probably already are) pre-fetching DNS for those sites.
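As a rough illustration of the point above, a host-side DNS cache is little more than a TTL-expiring map. This is a minimal sketch (not how any particular OS or browser actually implements it): a hit costs a dictionary lookup instead of any network traffic.

```python
# Minimal sketch of a host-side DNS cache with TTL expiry.
# Illustrative only; real resolvers handle record types, negative
# caching, and concurrent refresh.
import time

class DnsCache:
    def __init__(self):
        self._entries = {}  # hostname -> (ip, expires_at)

    def put(self, hostname, ip, ttl):
        # Store the answer along with its absolute expiry time.
        self._entries[hostname] = (ip, time.monotonic() + ttl)

    def get(self, hostname):
        entry = self._entries.get(hostname)
        if entry is None:
            return None  # miss: a real client would query upstream
        ip, expires_at = entry
        if time.monotonic() >= expires_at:
            del self._entries[hostname]
            return None  # expired: must re-resolve
        return ip        # hit: no network traffic at all

cache = DnsCache()
cache.put("example.com", "93.184.216.34", ttl=300)
assert cache.get("example.com") == "93.184.216.34"  # served from cache
assert cache.get("unseen.example") is None          # cold miss
```

With most users revisiting the same handful of sites, the hit path dominates, which is why the latency argument for a dedicated remote resolver is weaker than it sounds.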
In that case, they need to work with OS vendors to optimize that. OS vendors should clearly wish to do this; nearly every user wants a faster Internet experience, and if Google's dev is right that a DNS query is a significant bottleneck, then they should be interested in improving that aspect of the system.
It sounds to me like what they want DNS to do is return not just the IP of the actual address, but all the domain names and IPs of all known resources at that domain name. Of course, if you've got a CDN or other system, that makes DNS suddenly very complicated compared to the current system.
No, the more I think about this the more it feels like a way to move the Internet away from open DNS towards some siloed proprietary system where the core databases that control who is on the Internet name system suddenly aren't under IANA control.
That sounds like a bullshit argument right there.
Name one example, and then help me understand why it should be fixed again and again in every browser instead of once in core DNS.
Note that this is still an experimental feature, so when this ships it won’t necessarily default to Cloudflare nor be enabled by default.
When you connect to a network that needs its own DNS to access its resources, you are not going to change that setting each time you connect to that network.
An even more interesting thing happens when you physically roam between several such networks (for example: your company and your customer). What was until now transparently handled for you by DHCP, you now have to configure manually each time you connect to another network.
We moved away from static network config decades ago, and for good reasons.
At work though, probably the opposite.
Which an overwhelming part of the world does. Forced DoH creates problems for the clear majority while trying to solve a minor problem for a tiny minority.
Clearly that's not going about things the right way.
Anyhow: If you don't trust a network... Why are you connecting to it?
Why should I have to trust the network I'm connecting to? That seems like a major violation of the end-to-end principle. It's also just impractical.
If the overwhelming majority of the world trusts the networks they are connecting to, that seems like pretty good evidence that DoH or something like it has value.
I suspect, few users will have anything useful to put in that field, except the default.
The browser doesn't have access to the DHCP data because it usually happened during the OS startup way before the desktop even becomes visible.
That doesn't stop you from using a custom DHCP option.
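As a sketch of what a custom DHCP option could look like: at the time of this thread there was no standard DHCP option for advertising a DoH endpoint, but ISC dhcpd lets you define one in the site-local code range (224–254). The option name, code, and URL below are arbitrary placeholders, not a standard.

```
# Hypothetical ISC dhcpd snippet: define a site-local option carrying a
# DoH URL. Code 250 is an arbitrary pick from the site-local range.
option doh-uri code 250 = text;

subnet 192.168.1.0 netmask 255.255.255.0 {
    option domain-name-servers 192.168.1.1;
    option doh-uri "https://dns.internal.example/dns-query";
}
```

A client would of course have to know to request and honor that option, which is exactly the coordination the browser vendors are sidestepping.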
> "Moving forward, we are working to build a larger ecosystem of trusted DoH providers, and we hope to be able to experiment with other providers soon."
More in <https://blog.mozilla.org/futurereleases/2018/11/27/next-step...
I'd rather have less people to trust, thank you.
Most notably, this doesn't hold true for Europe, where you do have a proper contract with your ISP and GDPR protects you from lots of the shenanigans that are possible in the US.
Maybe browsers and OSs could look at the local search domain, and send queries matching to DHCP servers, and everything else over DoH, kinda like split dns with a VPN?
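The split policy suggested above is simple to state as a suffix match. A minimal sketch (the search domain and resolver names are placeholders, not anything a browser actually ships):

```python
# Sketch of split resolution: names under the DHCP-provided search
# domain go to the local resolver, everything else goes over DoH.
def pick_resolver(hostname, search_domain, local_resolver, doh_resolver):
    host = hostname.rstrip(".").lower()
    suffix = search_domain.rstrip(".").lower()
    # Match the search domain itself or any name beneath it.
    if host == suffix or host.endswith("." + suffix):
        return local_resolver   # internal name: use DHCP-provided DNS
    return doh_resolver         # external name: encrypted DoH

assert pick_resolver("printer.corp.example", "corp.example",
                     "dhcp-dns", "doh") == "dhcp-dns"
assert pick_resolver("news.ycombinator.com", "corp.example",
                     "dhcp-dns", "doh") == "doh"
```

The catch is that the client has to trust the DHCP-provided search domain to draw that line, which reintroduces part of the trust problem DoH is trying to remove.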
I have local DNS servers set to resolve both our intranet and all external queries. Some external queries are blocked for various reasons. Either a browser uses the local network settings or it gets banned from the network. This is not a discussion, because the real world intrudes on what browser vendors think is best.
This tradeoff has always existed, but everyone settled on a "good enough" where security isn't actually strong, clients aren't really treated as internal or external, and protocols are ossified because the firewall expects wxyz to implement those two failures.
The user doesn't control the client, because it's locked down.
The admin doesn't control the client, because DoH is tamper-resistant.
Who exactly does control it?
If it's not your device: them. It's their device; you don't (and shouldn't be able to) control their traffic.
Secondly, again, none of this has anything to do with DoH. Users still set the system DoH/DoT DNS server the same way they always set the system DNS server, and assume system apps and installed apps aren't using custom sockets/transports/encodings to get around it. Users still control the browser feature enablement and can change the destination server.
DoH changes none of this and is irrelevant to it, DoH is just a standard serialization of something anyone has been able to do for decades now. What DoH DOES do is prevent every random node inbetween a device and the DNS server from inspecting/modifying the traffic unless the end station is explicitly configured to allow it.
How would you detect that a browser isn't using the local network settings?
If I'm on a trusted network, use the network resolvers (either via UDP 53 or DoH, or whatever) and if I'm not use preconfigured resolvers like 220.127.116.11 or something, part of not trusting the network should also be not trusting the resolver. Before DoH this was a moot point since UDP 53 can be trivially captured and redirected, now the client OS can actually do something about it.
I don't like the idea of individual applications overriding system resolver settings.
Back in the days we used to call this DHCP option 6, i.e. DNS.
I'm not sure why you want to replace a DHCP provided DNS server with a DHCP provided DoH server.
Because why on earth would the DHCP-server provide different DNS-servers for those two use-cases? The idea itself makes no sense.
Can we please just get back to regular DNS, please? It works. It scales.
It's possibly the single last thing on the internet which is still decentralized, and I'd hate to see this become another centralized, single point of failure, walled-garden bullshit.
Just watched a UKNOF presentation on DNS changes. Sounds like google/mozilla were fed up with the sluggishness of standards bodies, network operators, and OSes in securing DNS, so have gone for their own version. Charitably I'd like to think that this is a wakeup call from Mozilla, "sort out your shit or this is the future, but with google owning it"
The browsers are not respecting DHCP in this case, but that doesn’t mean that other resolvers can’t be configured via DHCP to try DoH or DoT.
Internal DNS I think is largely not a good thing and I'd be happy to see it go.
Being able to host my own authoritative servers for my domains inside my org is a fantastic feature of DNS.
It lets me do things like split-horizon, which lets me deal with clients coming from different origins that may reach certain servers with or without NAT.
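As a sketch of how that split-horizon setup is typically expressed, BIND's "views" serve different zone contents depending on where the client comes from. The zone name, file paths, and client range below are placeholders:

```
// BIND named.conf sketch: internal clients get the internal zone data,
// everyone else gets the public zone. Illustrative only.
view "internal" {
    match-clients { 10.0.0.0/8; };
    zone "myorg.example" {
        type master;
        file "zones/myorg.internal.db";  // includes intranet-only records
    };
};

view "external" {
    match-clients { any; };
    zone "myorg.example" {
        type master;
        file "zones/myorg.public.db";    // only publicly visible records
    };
};
```

A browser that bypasses the local resolver and asks a public DoH server will only ever see the external view, which is exactly why this breaks.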
I'm also not keen on putting all my records on public name servers, for everyone to discover.
Second, my network filters DNS rebinding, except from plex.com. I guess I could add my domain to it, but that's an extra point of failure.
ci.myorg-int.com -> 10.11.12.13
The downside is that now all the internal services are discoverable using DNS scanning techniques. It means that competitors can see what services the organisation is using. Or attackers can better prepare themselves for infiltration.
Another downside of DoH is that it's not possible to filter out DNS rebinding attacks. For example, an attacker can trick your browser into requesting a resource from xxx.somedomain.com that points to 10.11.12.13. If the CI is vulnerable to CSRF, then the attacker can use the browser to exfiltrate information or perform actions on the CI.
Your web server should be configured to not serve content by bare IP, and to require the Host header to be a domain you control (e.g. using server_name in nginx). Otherwise they can just point to 10.11.12.13 directly anyway.
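A minimal nginx sketch of that advice, reusing the hostname and address from the example above (the backend address is a placeholder): a catch-all default server drops any request whose Host header isn't one you serve, so a rebound attacker-controlled name never reaches the app.

```nginx
# Catch-all: refuse requests for any unrecognized Host header.
server {
    listen 10.11.12.13:80 default_server;
    server_name _;
    return 444;   # close the connection without sending a response
}

# Only requests with Host: ci.myorg-int.com reach the actual CI.
server {
    listen 10.11.12.13:80;
    server_name ci.myorg-int.com;
    location / {
        proxy_pass http://127.0.0.1:8080;  # placeholder backend
    }
}
```

This blunts rebinding for HTTP services, though it does nothing for non-HTTP daemons on the same network.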
This is an internal client using an internal IP address to communicate with an internal service... just so happens that a malicious user made the internal client talk to it maliciously.
DNS rebinding attacks being stopped by the resolver are a great place to start and something we can do. Bypassing that protection in the name of resolving using DoH just means you've made things less secure.
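For concreteness, here is what resolver-level rebinding protection looks like, assuming dnsmasq as the local resolver (the plex.com exception mirrors the one mentioned earlier in the thread):

```
# dnsmasq.conf sketch: reject upstream answers that point into private
# address space (classic rebinding), with a named exception.
stop-dns-rebind
rebind-localhost-ok
rebind-domain-ok=/plex.com/
```

A browser that resolves via DoH skips this resolver entirely, so the protection simply never runs for its traffic.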
I don't think it should be put in the browser. It would actually make my setup less private, since I use DoH over Tor.
Though configuring the http servers (like printers or whatever), and/or putting them behind a proxy on a separate network if they are sensitive or not configurable, should be done too.
I think you underestimate the number of things on your network that run unconfigurable web servers.
If some crazy group does hardcode it then they have made your job even easier.
(1) Some kind of load-balancing thing and/or edge DoH servers. It gives you an opportunity to connect to a DoH server near you. Latency matters for DNS, and for traditional DNS, DHCP takes care of hooking you up with a nearby server. This could give comparable functionality for DoH. (This could probably also be done at the IP routing layer. But it's nice to have options.)
(2) Decoupling. You can change DoH server IP addresses without releasing a new browser build. And anyway, if you did try to do it by releasing new browser builds, you'd have users who don't bother to update.