Hacker News
A cartoon intro to DNS over HTTPS (hacks.mozilla.org)
400 points by johannh 5 months ago | hide | past | web | favorite | 134 comments

As a cynic I would say this is an attempt by Google and Cloudflare to collect DNS data. Why else would they provide this service for free?

Both Google's [1] and Cloudflare's [2] DNS privacy policies prohibit them from storing personally identifiable information or from correlating DNS information with other data coming from the same IP/account, but they do allow storing information about which domains are popular, from which locations, and from which types of device.

TLS (and therefore HTTPS) provides a very useful fingerprint based on accepted cipher suites, extensions, compression methods...

[1] https://developers.google.com/speed/public-dns/privacy

[2] https://developers.cloudflare.com/

[3] https://devcentral.f5.com/articles/tls-fingerprinting-a-meth...

> Why else would they provide this service for free?

Cloudflare runs the largest authoritative DNS server for their customers. The best way to make the DNS server faster is to make users query it directly.

For Cloudflare-hosted domains, instead of:

   User → ISP's DNS resolver → ns.cloudflare.com.
you get:

   User → [ 1.1.1.1 → ns.cloudflare.com. ]
where the latter two are on the same machine.

I work at Cloudflare; this is correct. 1.1.1.1 runs on our existing hardware deployed around the world, so it costs us very little. When you use it, it improves performance for the 8 million or so sites we sit in front of, and that's our actual business.

Mozilla sends people to https://mozilla.cloudflare-dns.com/dns-query.

Can you explain why this site is blocked by uMatrix?

Strange, uMatrix doesn't block that site for me. It just doesn't have any content.

For Google it makes sense. The faster you resolve DNS, the more webpages with ads you visit. Small price to pay to increase impressions.

DNS over HTTPS is actually a lot slower to resolve than traditional UDP DNS.

Yes, but only because of TCP and TLS connection overhead.

Once the connection is established, response time is similar to UDP.

Does the connection get reused?


Not for long.

If the browser is controlling the resolver in question, there's no reason not to hold long-running connections, or reconnecting on disconnect.

DNS is not the bottleneck for page load speeds, especially now that 99% of the internet has images or video (even if the images are not a main focus of the webpage, such as a news article's image header).

Bandwidth bottleneck, no. Latency, time to first usable content on screen, absolutely.

> time to first /usable/ content on screen

If we are going by time to first /usable content/, then I would blame javascript for most of it.

Also consider that every shitty webpage needs to load resources from a dozen other domains.

Google is in the business of displaying ads, by means of collecting data to build user profiles.

Cloudflare is in the business of making websites run really fast, and it subsidizes a free offering through paying customers.

Which of those has a conflict of interest in running a DNS server while promising to protect privacy?

Running websites and selling users' data brings you more profit than just running websites. With DNS Cloudflare can also learn what non-Cloudflare websites users are visiting and when.

Why would Cloudflare do that? It's not in their business model, and "it makes more money" is hardly a thing that motivates corporations all the time, otherwise Google and Facebook would be offering $20/year subscriptions to go ad- and tracking-free.

Why would Cloudflare even want to know what websites you visit? They don't operate an ad network, they operate a CDN. At best they could use it to pre-cache websites in regions before demand rises. But they can already do that without DNS...

> "it makes more money" is hardly a thing that motivates corporations all the time

Really? The primary purpose of any corporation is earning as much money as possible.

Okay, let's try an analogy.

You are given advice on how to safely cross a four-way intersection by two companies.

One is an insurance company specialised in insuring people who get run over by semi trucks at four-way intersections.

The other is a contractor that designs, builds and maintains four way intersections for the government and private entities.

Of course, yes, the latter could collude with the former to make extra money.

But it's also not their business model. They build intersections, people pay them to make those safe and reliable. People do not pay them to collude with shady insurance companies which try to kill people by semi truck.

People would actively not pay them if they did that.

Same with Cloudflare. If CF sold data to ad networks, a lot of websites would simply jump ship and use one of the other CDNs with free offerings. People pay CF a shitload of money for ensuring the connection is private and safe (notably banks, governments, etc.)

Your analogy is bad because it describes behavior which is illegal. If you want to make as much money as possible, you might avoid decisions like the one you present which will cost you more money when it is exposed.

It's not a bad analogy because the behavior is illegal; it's a good analogy because it helps people understand what is happening by mapping it onto something they encounter regularly and understand reasonably well.

See recent behavior of Wells Fargo before you dismiss this. They have built the largest consumer bank on practices that were not legal. Even after some of those practices have been exposed they are #1. Millions of people with multiple semi-truck tire marks on their bodies still bank with them.

I'm not saying that every for-profit company will abide by the law, but that they may have an inclination to do so because penalties would reduce their profits. So your comparison doesn't quite hold: a for-profit company might still do something ethically wrong (but legal) to make money, while avoiding illegal behavior that would also make money (but cost them more if caught). For a recent example of this: Facebook.

It doesn't mean taking the shortest route toward the nearest stack of money in sight.

If Cloudflare started selling DNS info after emphasizing that their DNS service protects privacy, people would presumably stop using their resolver, and the impression that they are willing to lie could also hurt their main business.

Cloudflare itself never made sense to me. What possible incentive do they have to stop the thing their primary purpose (DDoS protection) depends on? They get value from promoting the behavior.

What's worse is that everyone and their dog is using them. What happens when they push a bad config to their core routers, or foobar their anycast?

It probably doesn't make sense because you misunderstand their primary purpose. It's not DDoS protection. Cloudflare has a pretty wide-spanning platform of products and services, but if you had to pick one out as their "primary", it would be their CDN product. The DDoS protection is just more visible because of the nature of the product (a good CDN will never make you aware it even exists), and because mitigating DDoS attacks makes for good news headlines.

Even if DDoS was their main business driver, what you're saying is similar to "doctors don't make any sense to me. what possible incentive do they have for keeping people healthy? they have incentive for promoting bad health."

As someone who works in security, believe me, there are plenty of cyber attackers out there that will easily keep companies like Cloudflare in business, no "promotion" of bad behavior required.

> what you're saying is similar to "doctors don't make any sense to me. what possible incentive do they have for keeping people healthy

People do say this, all the time!

It doesn't make sense to you to do the right thing and protect people at the expense of profit?

It costs money to do the things they do. If there's no profit, the service has to beg for money, or die for lack of resources. CloudFlare is not a charity, and if it were one, it would be ineffectual, because their services are too behind-the-scenes and technical to attract a donor base wide enough to support them.

Profit is not necessarily anathema to doing the right thing. If you can align your interests with your cash flow, you can do the right thing without begging for money, which imho is even better than doing the right thing while subsisting on money donated by profitable enterprises that aren't as noble (either directly, or by their employees). But of course, aligning those interests is a challenge.

Um .. you remember how they spammed random password data and memory all over the Interwebs right?


As an ISP, I'm skeptical of the motivations of big CDNs and Google in general, but it's becoming an ietf standard. I run recursive resolvers for clients numbering in the hundreds of thousands, with an ACL that allows only our ARIN IP blocks to query them.

It is not hard to put a dns-over-https frontend in place for my clients which pulls queries from my own trusted bind9 servers.

Any ISP with a clue can do the same.

For people who know how, why not just run this stuff locally? Set up your own recursive resolver on an OpenWrt router? Or maybe in a hosted VM close to where you live?

I know Google and CF claim they don't track this DNS information, but why even use them when you can run your own. Keep in mind CF did have a software bug that spewed SSL traffic and passwords all over the Internet[1], and they took down a website once because their CEO didn't like it[2].

[1] https://blog.cloudflare.com/incident-report-on-memory-leak-c...

[2] https://fightthefuture.org/article/the-new-era-of-corporate-...

When you run an off-the-shelf router at home that can't act as its own resolver, you have to host one somewhere; but since plain DNS can't do client authentication, it's hard to keep that resolver private.

I'd like to know a way to host your own resolver but keep it private even when you're on mobile IP.

What’s your ISP’s web address?

I share some controversial opinions on here semi-anonymously and wouldn't want my personal positions on certain topics to be confused with an official position held by the companies I contract for. I can say that it's not a huge one, it's a mid sized regional ISP.

Oh! I thought you were the CEO of an ISP. I'm curious about starting my own someday so I take notes of smaller operations as inspiration.

Whenever Mozilla puts out one of their nerd cartoons, I instinctively look over my shoulders and tighten my sphincter. Of course, it's always nice to know the reason behind a reflex.

There are 3 major protocols available for DNS privacy:

* DNSCrypt

* DNS over TLS

* DNS over HTTPS

DNSCrypt is the one with the best client support and a long list of providers available. If you pick DNS over TLS or DNS over HTTPS, you will be restricted to 3 or 4 major players (Google, Quad9, Cloudflare and CleanBrowsing). If you trust them, you are good.

For example, this is the list of providers with DNSCrypt support: https://download.dnscrypt.info/dnscrypt-resolvers/v2/public-...

For DNS over (HTTPS|TLS), there are very few client tools available for troubleshooting. The best ones I found were these two in PHP:

https://github.com/dcid/dns-over-tls-php-client https://github.com/dcid/doh-php-client

DNSCrypt is also the fastest and most secure.

It doesn't require sessions (uses UDP by default, like regular DNS, but prevents amplification), enforces safe cryptography and pinned certificates, is trivial to implement, doesn't need OpenSSL, implements padding without inventing yet another DNS extension, and can use unique keys for each question (so that DNS providers can't fingerprint clients, unlike other options due to TCP sessions and TLS tickets).

If it's the fastest and most secure, why are people throwing their weight behind DNS-over-HTTPS? There must be a reason for it.

Because it's much more complicated to implement, whereas DNS-over-TLS and DNS-over-HTTPS are far simpler to integrate into existing software and operations.


Both HTTPS and TLS implementations require custom software in order to work, as no OS supports this natively (yet).

It boils down to installing a stub resolver that your local resolver will use instead of the upstream directly.

Well, basically because there are TLS libraries available in nearly every language. DNSCrypt is a custom protocol.

For example here is my implementation over rustls in TRust-DNS: https://github.com/bluejekyll/trust-dns/blob/master/rustls/s...

Basically that’s a thin wrapper over the TLS library, and I was able to do three different libraries. DNSCrypt on the other hand was a much larger project, and I gave up on implementing it when I saw the DNS-over-TLS RFC complete.

Quite the opposite, actually. A DNSCrypt client can be implemented in a couple lines of Python: https://github.com/tresni/dnspython-dnscrypt/blob/master/dns...

It probably took about 15 minutes to write these. Writing a fully functional client in Go, which is the core of dnscrypt-proxy 2, took about the same time: https://github.com/jedisct1/dnscrypt-proxy/commit/b076e01f7a...

Correctly implementing DNS-over-TLS is way more complicated.

It has to use TCP. So in order to avoid being vulnerable to the most trivial slowloris attack, you need to implement a connection pool, connection reuse, and timers to enforce timeouts.

If you want half-decent performance, you need to make sure that multiple, out-of-order queries and responses can be sent over the same connection. This requires tracking query identifiers, making sure that there are no ID collisions in inflight queries, and if you are just building a proxy, you can’t expect any upstream server to support this.
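For the curious, the bookkeeping described above can be sketched in a few lines of Python (the names here are mine, purely illustrative): RFC 1035 mandates a 2-byte big-endian length prefix for DNS over TCP, and a multiplexing client must hand out query IDs that never collide among in-flight queries.

```python
import struct

def frame_tcp(msg: bytes) -> bytes:
    """Prefix a DNS message with its 2-byte big-endian length (RFC 1035, DNS over TCP)."""
    return struct.pack("!H", len(msg)) + msg

def unframe_tcp(buf: bytes) -> bytes:
    """Strip the length prefix, checking the advertised length matches."""
    (length,) = struct.unpack("!H", buf[:2])
    if length != len(buf) - 2:
        raise ValueError("short or oversized DNS/TCP frame")
    return buf[2:]

class IdAllocator:
    """Hand out query IDs unique among in-flight queries (a real client
    would randomize; a linear scan keeps the sketch simple)."""
    def __init__(self):
        self.in_flight = set()
    def acquire(self) -> int:
        for qid in range(0x10000):
            if qid not in self.in_flight:
                self.in_flight.add(qid)
                return qid
        raise RuntimeError("all 65536 IDs in flight")
    def release(self, qid: int):
        self.in_flight.discard(qid)
```

None of this exists in plain UDP DNS, where one datagram is one query and the kernel does the demultiplexing for you.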

TLS session tickets allow DNS operators to track devices no matter what their IP is. TCP sessions allow DNS operators to fingerprint devices sharing the same external IP. From a privacy perspective, this is effectively a regression over plain DNS. So for people who care about this, you need to add the ability to disable these. Performance will be terrible, but that’s what you get for using a transport protocol that was never designed for DNS. This can be partially mitigated with DoH using forthcoming HTTP/2 extensions. But for raw TLS, which doesn’t allow much beyond send() and receive(), there’s no hope without reinventing HTTP.

Encrypted DNS requires padding. The way to do padding in DNS-over-TLS is to add extra records to DNS packets. So you need to parse and modify DNS packets. Which is slow and painful to write, if only because of name compression. Instead of that lousy hack, DNS-over-HTTP/2 can simply use existing HTTP/2 mechanisms: HTTP/2 frames already support padding. DNSCrypt doesn’t require packets to be parsed or modified either; padding bytes are simply appended to raw DNS packets before encryption, and are trivial to remove after decryption.
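To show how cheap the DNSCrypt approach is, here is a toy sketch (my own code, assuming the ISO/IEC 7816-4 padding scheme that DNSCrypt specifies): padding is appended after the raw packet and stripped without ever parsing DNS.

```python
def pad(packet: bytes, block: int = 64) -> bytes:
    """ISO/IEC 7816-4 padding as used by DNSCrypt: one 0x80 marker byte,
    then zeros up to the next multiple of `block`."""
    padded = packet + b"\x80"
    if len(padded) % block:
        padded += b"\x00" * (block - len(padded) % block)
    return padded

def unpad(padded: bytes) -> bytes:
    """Strip trailing zeros, then the 0x80 marker. The packet itself is
    never inspected, so no DNS parsing (or name decompression) is needed."""
    trimmed = padded.rstrip(b"\x00")
    if not trimmed.endswith(b"\x80"):
        raise ValueError("bad padding")
    return trimmed[:-1]
```

Compare that with EDNS(0) padding, where the padder has to parse the message far enough to append an OPT pseudo-record.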

As we recently saw, DNS-over-TLS is virtually useless against attacks such as BGP hijacks, unless certificates are pinned. So, you need to implement pinning. Figuring out how to do it using the OpenSSL API is going to keep you busy for quite some time. DNSCrypt only requires one function call to verify a signature. DNS-over-HTTP/2 can leverage what browsers and modern HTTP library already do.

So, implementing DNS-over-TLS is hard. It’s not just about sticking stunnel in front of a stub resolver. Even just the TLS part is hard to implement securely. Validating TLS certificates in non-browser software remains the most dangerous code in the world: https://crypto.stanford.edu/~dabo/pubs/abstracts/ssl-client-...

But it’s also pointless. Other protocols are easier to implement and more efficient.

And from a server perspective, proposing DNS-over-TLS means there is yet another thing to do certificate management for. Key management is hard. It’s the root cause of virtually all DNSSEC outages, and why people gave up on DNSSEC or didn’t even try. Software that is supposed to automate this exists, but the reality remains the same.

In contrast, key management has been solved in the HTTP world, through built-in support in web servers, ACME clients, CDNs and proxies. People already run web sites. Let them leverage what they already have and what works well, instead of forcing them to go back to square one and figure out how to do key management for DNS. Ditto for authentication and logging. Which is why DNS-over-HTTP/2 makes way more sense than DNS-over-TLS.

DNS-over-TLS also requires a dedicated port, which is not even reachable from many restricted network environments, such as the WiFi network I am currently on. This kinda defeats the whole point of the protocol. DNS-over-HTTP/2 uses the one port that is least likely to be blocked, and is fully compatible with proxies, including transparent ones from mobile carriers. DNSCrypt also uses port 443 by default and can use TCP if required, but it doesn’t need a dedicated port either; DNSCrypt and regular DNS can share the same port, as done by Cisco’s servers.

So, DNS-over-TLS is hard to implement. Hard to deploy. Difficult to connect to. Slow. Won’t get any better without reinventing HTTP. Feels like it was invented 20 years ago, but it doesn’t really make sense any more today.

I disagree. In my TLS implementation, I support multiplexing, timeouts, and generally everything you said is hard, over standard TLS impls.

I found this much more straightforward to implement than DNSCrypt. See my response to the sibling comment for a link to the code.

AFAIK it isn't possible to combine it with Pi-Hole though.


Do the dnscrypt providers actually work, though? I tried setting it up from my machine, and it seemed like many of them were gone. I eventually managed to find a working provider in Iceland, but being that I'm in a country on the opposite side of the planet, the increased latency made the internet markedly more sluggish.

Unless you are still running version 1, dnscrypt-proxy will automatically pick the fastest, working servers for you.

Neat, thanks. I'll try and get it going again this weekend.

The location doesn't matter much. Most of them are anycast, serving from all over the world.

Might you or anyone else have suggestions/feedback of which providers supporting DNSCrypt you have good luck with?

Are OS implementations planning to switch to DNS over TLS or DNS over HTTPS anytime soon?

Because if not, any requests made by non-browsers are still susceptible to snooping and tampering, so browser-only DoH will give users a false sense of security.

Because it's better than doing nothing in the short term, and OSes can switch over as time moves on. Browsers have a much faster cadence and automatic updates (mostly).

I kind of hate this. Taking a decentralised service, and replacing it with a service provided by a small handful of tech giants.

"But this doesn’t mean you have to use Cloudflare. Users can configure Firefox to use whichever DoH-supporting recursive resolver they want. As more offerings crop up, we plan to make it easy to discover and switch to them."

Only defaults matter. Your average web user won't be interested in knowing about or configuring this, no matter how simple the explanation/choice is made.

If only defaults matter, then it's already a dead horse, as the majority of users don't know what DNS even is, and are using their ISP's servers by default.

There is no decentralized DNS where the default for most users is their monopoly ISP.

Why does the amount of people knowing about DNS matter? Especially in the context of decentralization?

Depending on your ISP and/or country of origin, it can matter a lot.

It does not need to be centralized at all. Any internet service provider with a modicum of Clue can install a DNS over https frontend listening on the IPs of their recursive resolvers, and pull data from their existing bind servers.

This does not involve any sort of proprietary or non-free software. People are free to ignore the content delivery network's recursive resolvers and set up their own.

That is beside the point. What Firefox is doing is to actively distrust the DNS the ISP is advertising because of the bad practice of some ISPs. Even if the ISP would advertise a DoH endpoint, the same reasons for distrust would still exist (they only mention attacks at the ISP's DNS server or between the ISP's DNS server and the authoritative DNS servers).

Also note that DNS is one of those dinosaur protocols like email and usenet that have persisted from the early days of the internet, back when we could buy interoperable services from decentralized parties. Every service we buy today is centralized or even walled garden only, see Slack, Facebook, App Stores, AWS, etc. We currently just don't know how to build successful distributed ecosystems.

People are understandably highly suspicious of DNS services and privacy issues with giant companies like Comcast, Verizon, Centurylink, etc. But I'd like to point out that there's a large number of small to mid sized ISPs where the final business management decisions rest with the individuals who also have 'enable' on the routers and core Linux/BSD server infrastructure.

There is such a thing as ethics in network engineering, and that term encompasses things like not attempting to MITM your customers' recursive DNS resolution queries, or monitoring/tracking/selling the data.

Well, we never knew how to build a secure successful ecosystem. Among the various ways to secure DNS, this seems reasonable.

In this blog post, Firefox should encourage people who know how to do so to run their own. Maybe even provide/maintain some Docker images for us tech heads.

I agree that immediately promoting CF doesn't seem like the best genuine idea for those who are still a part of the Firefox/Mozilla community.

If the vast majority of users today are using DNS servers that are wiretapped for harmful purposes, such as advertising on NXDOMAIN pages or maliciously rewriting DNS, then switching them to one of a few DoH providers is no more concentrated than things are now. This isn’t as significant a change relative to the horror of today’s plaintext, MitM’d, user-hostile reality.

Then put it as part of your startup process, whether that's first-time startup or just-upgraded-from-a-previous-version startup.

Do not select any default. Randomize the selections.

We have the NTP pool groups as a model for how to organize groups to offer services like DNS-over-HTTPS.

There was a good chunk of time during which my ISP (Verizon FIOS at the time) was suffering some kind of DNS hijacking attack, where many CDN IPs were being replaced with the IP of a server that was injecting ad-serving javascript into many pages (and god knows what else; I still have the payload laying around somewhere, as I saved it for future curiosity).

At the time my only real recourse was to pump my whole house through a VPN, as even Google's DNS (8.8.8.8) was being hijacked, but ONLY when the query came from my home IP. (Full disclosure: I'm not very well versed in the networking stack. I know enough to get myself in trouble, but not much more. This was what I understood to be happening, but I could be way off base. However, it was happening on multiple devices, multiple OSs, multiple Verizon IPs, multiple DNS servers, both with and without a router, and would stop instantly if any of those machines were pointed at a wireless hotspot or a VPN was turned on. At one point I even sent my router's WAN connection through my phone's hotspot and the problem went away.)

After talking with verizon many times and each time having to spend an hour or so trying to get through to someone that knew even remotely what I was talking about, all they were able to do was reset my IP, which fixed nothing.

Now that DNS-over-HTTPS is becoming more common, I'm going to use it everywhere I can. Yes, DNSSEC might be a "better" solution, but I can use DoH right now to protect myself on all sites and (hopefully soon) all devices.

Just the other day I discovered Intra [0] a (still unreleased) app by Google for android which has your whole android phone use DNS-over-HTTPS.

I've been running it the last few days and I'm quite pleased with it. Does anyone know of a way to force all DNS queries in Windows to use DoH?

[0] https://play.google.com/store/apps/details?id=app.intra&hl=e...

I don't think DNS-over-HTTPS precludes the use of DNSSEC - I think the intent is that eventually, you will in fact use both in tandem. DNSSEC alone would only give you the ability to check the integrity of a record, but DNS-over-HTTPS makes the transaction confidential and prevents third parties from censoring the request.

I guess I was just heading off the flurry of comments along the lines of "Why use DoH when we have DNSSEC?" that always seem to come up when discussing DoH.

DNSSEC has no encryption. It's not for privacy at all.

Right, DNSSEC is about validating the authenticity of the DNS Record in a DNS Message, whereas DNS-over-TLS/HTTPS is about establishing authenticity and privacy with the upstream resolver.

In theory if the upstream resolver is using DNSSEC to validate all the Records, then the client over the TLS session can be fairly confident in the Records it receives.

> Does anyone know of a way to force all DNS queries in windows to use DoH?

I think you could use pi-hole to do this. https://docs.pi-hole.net/guides/dns-over-https/

You could also run your own DNS server as well, like Core DNS, and configure it to resolve through DNS-over-HTTPS. I'm sure this is about the same thing, but it's worth noting that you could possibly use your existing router or NAS to run the software.

Thanks a ton, this looks fantastic! Do you know if it's possible to set up Pi-hole to use this (and possibly other features) but not do any adblocking?

I'm using cloudflared [0] for this. It allows me to have system-level DoH, and everything uses it (unless explicitly configured not to). It works on Linux machines (amd64 and aarch64) and macOS.

The documentation is not great / accurate, but with a bit of fiddling I have it running as a systemd service (launchd on macOS). I'm using the /metrics endpoint to get details into Prometheus on the stats.

0. https://github.com/cloudflare/cloudflared

Sure, just deselect the blocklists in the GUI of your pi-hole.

Personally, I hijack all DNS requests made on my network at my router, then use a VPN tunnel to resolve them on a server that I control that runs unbound. My guess is that FIOS was doing the same to you, just without your interests in mind.

A similar setup to mine could be deployed at your network edge, and it could then force all of your port 53 DNS requests to go over a more secure protocol. Of course you would have to figure out how to set this up, and it wouldn't protect your devices anywhere except your home network.

>My guess is that FIOS was doing the same to you, just without your interests in mind.

It wasn't FIOS doing it, the IP was in Israel and was known as a malware serving IP.

Could be your router was hacked too...

I had tried 2 different routers and got the same result, including bypassing the router at one point, and I even ran my router's WAN through a wireless hotspot on my phone at one point and saw the problem stop.

Great find with Intra. Installed and working well on Pixel XL 2.

I am very conflicted about DNS-over-HTTPS vs. DNS-over-TLS.

Most of DNS-over-HTTPS' interesting use-cases start coming into play when you're using the same HTTPS session as the one being used to serve the site you're visiting. Otherwise, DNS-over-TLS is sufficient for the same level of privacy.

At that point though, DNS-over-HTTPS has a provenance issue that I don't fully grok how we're going to avoid. What I mean by that: if the site you're visiting supports DNS-over-HTTPS and serves DNS records itself, what happens when it decides to issue custom responses to DNS queries that ignore or supplement the actual data in a zone? Won't that lead to a bifurcation of the DNS, where websites can start issuing custom responses to DNS queries?

Cloudflare and Quad9 both offer DNS-over-TLS, which will be preferable for non-HTTP use cases. Some of the points in the article imply that DNS-over-HTTPS can't be used for tracking you, but really that just means you're passing that trust to Cloudflare, Quad9, or Google. I suppose the choice is open to you at that point.

I'm not sure I understand.

I was under the impression that DNS-over-HTTPS was nothing more than just an alternative DNS protocol just like DNS-over-TLS, where you perform an HTTPS request in order to query for a DNS name, and that DNS-over-TLS was just plain old DNS wrapped in TLS.
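For what it's worth, that mental model can be made concrete with a toy sketch (my own illustrative code, not from any of these specs verbatim): the very same wire-format bytes travel bare over UDP, wrapped in TLS for DoT, or as the body of an HTTPS request for DoH.

```python
import struct

def build_query(name: str, qtype: int = 1, qid: int = 0) -> bytes:
    """Minimal RFC 1035 wire-format query: 12-byte header, one question,
    QTYPE 1 (A) and QCLASS 1 (IN). RFC 8484 recommends ID 0 for DoH so
    responses are cache-friendly."""
    header = struct.pack("!HHHHHH", qid, 0x0100, 1, 0, 0, 0)  # flags: RD set
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in name.rstrip(".").split(".")
    ) + b"\x00"
    return header + qname + struct.pack("!HH", qtype, 1)

# The same bytes serve all three transports:
#   plain DNS: sent as a bare UDP datagram to port 53
#   DoT:       same bytes, length-prefixed, inside a TLS session (port 853)
#   DoH:       same bytes as an HTTPS body, Content-Type: application/dns-message
```

So DoH isn't a different query language; it's the same message with HTTP semantics bolted on top.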

You seem to be implying that DNS-over-HTTPS would enable sites themselves to deliver DNS records. I don't see how that is possible, because connecting to HTTPS with a hostname requires resolving a DNS record. Am I misunderstanding?

You are correct for the initial request. I've seen many arguing for taking this to another level of actually sending DNS requests over the same HTTPS session being used with a site the browser is currently connected to.

Is this standardized/drafted? I am curious how one might implement this.

See this thread with one of the authors of the RFC: https://news.ycombinator.com/item?id=16728600

Everybody is right in this thread :)

First, just to avoid confusion, the post linked to this HN article is just about the classic recursive resolver model. That's the scope of what is being experimented with actively.

Second, the notion of resolverless DNS (where DNS records are obtained from somewhere other than your recursive resolver) is indeed something DoH contemplates but does not yet allow. That's because issues around tracking, correctness, and attacks haven't been fully explored. So unsolicited DNS is interesting, but it's not something any browser would accept yet.

There are some other opinions on how HTTPS matches the needs of DNS here: https://bitsup.blogspot.com/2018/05/the-benefits-of-https-fo...

Also notice how the plan is to push not only DNS entries but also TLS certificates:

"Right now, people are really keen to get HTTP/2 “out the door,” so a few more advanced (and experimental) features have been left out, such as pushing TLS certificates and DNS entries to the client — both to improve performance. HTTP/3 might include these, if experiments go well."


Some of those things could be used for bootstrapping SNI encryption as well:


"Threats to users' privacy and security are growing."

s/privacy/&, autonomy/

Case in point about autonomy is on HN front page at present: https://news.ycombinator.com/item?id=17196888

The author cites a hypothetical example where a user shopping at Megastore is blocked from accessing her preferred source of DNS data in order to prevent her from checking a price.

Extending this hypothetical, imagine if in response to her request for an unbiased price quote the user was shown unwanted ads with inflated, customised pricing (informed by data gathered about her through tracking).

Choice of DNS data is an effective way for users to block advertising and tracking.

The issue with user control over DNS also arises with mobile and other devices (e.g. Chromecast/Google Cast/Google Home) that discourage or prevent a user from using her preferred source of DNS data, forcing her to use a commercially-oriented source which may block certain lookups.

This is relevant with any computer that connects to the internet.

It is an issue of autonomy.

There is a long tradition of HOSTS files and later non-commercial DNS where users can autonomously determine where on the network they want to "go". They have the final control over the source of DNS data the computer will use. They can delegate DNS service to someone else, however, following that long tradition, they still retain the autonomy to choose the source of the DNS data, whether it is another third party, their own DNS servers or perhaps /etc/hosts in place of DNS.

When an organization (e.g. running an "app store") seeks to circumvent the ability of the user to choose her own DNS data source on her own computer, that is an attack on autonomy.

The author mentions that Firefox will allow users to choose their own "DOH DNS" servers. If so, this respects users' autonomy.

(No one seems to be mentioning one obvious advantage of DOH DNS for browsers: bulk DNS "prefetch" lookups. One can use HTTP/1.1 pipelining to retrieve the IP addresses for every hostname contained in an HTML page, with a single HTTP request, instead of numerous, simultaneous DNS requests. As for privacy problems with TLS fingerprints, HTTP requests can be secured by CurveCP as an alternative to TLS - example is in my profile.)
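For reference, RFC 8484 DoH carries ordinary DNS wire-format messages over HTTPS; a GET request puts the base64url-encoded query (padding stripped) in the `dns` parameter. A minimal sketch in Python, constructing the message only, no network I/O:

```python
import base64
import struct

def build_dns_query(hostname: str, qtype: int = 1) -> bytes:
    """Build a minimal DNS query in RFC 1035 wire format (qtype 1 = A record)."""
    # Header: ID=0 (RFC 8484 suggests 0 to help HTTP caching), flags=0x0100
    # (recursion desired), 1 question, 0 answer/authority/additional records.
    header = struct.pack(">HHHHHH", 0, 0x0100, 1, 0, 0, 0)
    # Question section: name as length-prefixed labels, then QTYPE, QCLASS=IN.
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in hostname.rstrip(".").split(".")
    ) + b"\x00"
    return header + qname + struct.pack(">HH", qtype, 1)

def doh_get_url(resolver_url: str, hostname: str) -> str:
    """Base64url-encode the message for an RFC 8484 GET request."""
    dns_param = base64.urlsafe_b64encode(build_dns_query(hostname)).rstrip(b"=")
    return f"{resolver_url}?dns={dns_param.decode('ascii')}"

print(doh_get_url("https://mozilla.cloudflare-dns.com/dns-query", "example.com"))
```

Sending that URL over an existing HTTPS connection (with `Accept: application/dns-message`) is what lets many lookups share one connection.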

I see an additional problem with this, one that actually endangers autonomy.

Resolving is not done only for user-initiated actions; it is done by many programs, including ones you might not want doing it at all. For the same reason, many users run a local firewall, like Little Snitch, to block outgoing connections.

(Sidenote: if you are using MS Office 2016 for Mac, and are not satisfied with the choice of telemetry that Microsoft offered you in the last update, and you are interested in third option, "None", the hostnames to block are nexusrules.officeapps.live.com and nexus.officeapps.live.com)

With apps using DoH and ignoring the local resolver, that firewall will now have a problem, especially if multiple, separate hostnames resolve to the same IP. Until now, Little Snitch used a guess (last resolved hostname that matches the IP); now it won't have that chance.

That's why, if users want any chance of seeing who their local processes talk to, apps must be forced to use a local resolver under the user's control rather than implementing their own private resolvers. And of course, on non-public networks, that resolver should be suppliable via DHCP or RA.

As long as you can configure DoH, you can set up your own resolver and do what you want. In the end DoH will probably become an option at the OS level (or not, for lighter OSes). I think having it at the application layer is a nudge in the OS developers' direction.

s/single HTTP request/single connection/

I tried DNS over TLS (somewhat similar) and it has some potential. But not with those strict timeouts. One resolver closes the TCP connection almost instantly after the query response; another waits a bit longer, about 10 seconds (need to check again).

So every time you want to make a query, you have to wait several RTTs before getting a response.

The connection needs to be open for as long as possible, at least 5 minutes.

I used stubby as a forwarder with idle_timeout: 6500000 (the idle timeout in ms). The connection gets closed by the remote party, not by stubby.
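A stubby configuration along these lines expresses that (excerpt only; the keys are stubby's, but the values here are illustrative):

```yaml
# stubby.yml (excerpt)
resolution_type: GETDNS_RESOLUTION_STUB
dns_transport_list:
  - GETDNS_TRANSPORT_TLS
idle_timeout: 6500000          # ms; how long to keep an idle upstream open
upstream_recursive_servers:
  - address_data: 1.1.1.1
    tls_auth_name: "cloudflare-dns.com"
```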

Because DNS servers were never designed to keep many open TCP connections.

Doesn't matter what they were designed for. With TCP they need to behave that way. Otherwise this is a solution only for people with latency <10 ms to the server, which is not a whole lot of people.

I'll argue that the TCP and TLS handshakes take more processing power than keeping the connection open.

The limiting resource with large numbers of idle sockets on the server side is memory, not processing power.

Which I doubt is a problem for Cloudflare or Quad9. Anyway, a TCP based DNS service needs to consider those things. Otherwise it is becoming unusable due to very high response times.

A standard 8 GB system with Debian 9 gives me 1048576 max file descriptors. I am sure this can be optimized still.

The default socket receive and send buffers are ~200KB each, so you would actually need 400 GB of memory in order to have each of those 1048576 file descriptors connected to a unique socket.

And if you were keeping them open for 5 minutes as suggested, that would still limit you to only 3400 clients / second.

I do actually agree that they need a longer idle timeout on these connections, but I just wanted to point out that comparisons with the processing power required to set up a TLS connection aren't apt.
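As a sanity check on the figures in this thread (a back-of-the-envelope sketch; the fd limit and buffer sizes are the numbers quoted above, not universal defaults):

```python
fds = 1_048_576               # default max file descriptors quoted above
buf_per_socket = 400 * 1024   # ~200 KB receive + ~200 KB send buffers
idle_seconds = 5 * 60         # the suggested 5-minute idle timeout

memory_gb = fds * buf_per_socket / 2**30
clients_per_second = fds / idle_seconds

print(f"~{memory_gb:.0f} GB of socket buffers")    # ~400 GB
print(f"~{clients_per_second:.0f} new clients/s")  # ~3495, i.e. the ~3400 above
```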

I'm pretty sure that they don't HAVE to use the defaults, and for something like DNS they probably shouldn't. The buffer should probably be limited to the largest request segment needed to create the TLS/HTTPS connection in the first place, which at a guess would be closer to 1K.

Seems feasible with some tweaking. Or confirms that this approach of using TCP is not worth the effort.

> "Threats to users’ privacy and security are growing."

Website won't load without allowing a call out to googleadapis.l.google.com

Yeah, you're right they are growing.

Was just wondering... what value will DNS over HTTPS provide if/when we all move to IPv6 and presumably everything could potentially be identified by IP address directly? Will datacenters/ISPs be incentivized to do NAT with IPv6 or have some other way of introducing indirection into the routing?

Note that for now, if you're sniffing packets, you can learn hostnames anyway due to TLS sending SNI in the clear. That may or may not change in the future...

Think about Cloudflare itself. Millions of websites hosted behind a handful of IP addresses.

So we go back to re-centralizing for privacy? I love Cloudflare, but... if that's really the answer to this... sigh.

Well, about 8M websites are already behind Cloudflare... if you add the top 50 hosting providers, that's probably 95% of the internet. Traffic is already relatively centralized.

Have fun remembering every IP by heart.

No, I mean that simply by observing the IP address of packets, you can know which hosts are being requested, since there are enough IP addresses to go around.

That's a reason to get rid of the TLS SNI extension and the HTTP Host header, but it's entirely unrelated to how DNS messages are transmitted.

Now just wait for your browser (or any other random application) to stop using your OS's resolver completely, accessing its desired DNS-over-HTTPS server and bypassing your carefully set up DNS filtering/monitoring. (Chrome already does something like this at times, simply accessing DNS services via port 53 when it considers the configured OS DNS 'not good'; I have no idea about the exact criteria.)

Notes: 1. I have NO idea whether Chrome (or any other random application) uses DNS-over-HTTPS already, since I have not paid too much attention to it.

2. At least Chrome (on OSX) likes to access your configured DNS server on port 53 (happy eyeballs). This might only be on flaky networks like mine, where I tend to run all sorts of configuration experiments.

Encrypted DNS is an awesome idea. But routing all of your DNS requests to a private US company should not be enabled by default.

And what about SNI, which shows the domain name in clear text for HTTPS connections? Please do something about that too.

I applaud the efforts to increase privacy, reduce data collection, and harden security. Do we really want a SPOF in Cloudflare for this, though? A single outage (or AT&T snafu) and many millions of users would be affected.

Definitely don't want a SPOF. Firefox has both soft-fail and hard-fail modes: for a soft fail it will fall back to traditional port 53 DNS. That is likely to be the most common deployment; you need it to deal with captive portals and other split-horizon issues as well as cloud uptime incidents. But there is a hard-fail mode if that is suitable for your environment.

and of course defaults matter a lot, but you will be able to select your preferred DoH endpoint (or not use it at all). Firefox wouldn't lock something like that down.

The article clearly states a desire to ship more providers as soon as more providers exist. If you know of any other providers who meet the declared privacy choices (e.g. deleted after 24 hours) and protocol choices (e.g. DoH, TRR, QNAME min), please do let us know!

In fact, it already happened between this Mozilla announcement and now: https://www.cloudflarestatus.com/incidents/2mz3wly2g7dy

I think encrypting DNS transport is as important as the next guy (though DoH is bad), but am super unhappy about Mozilla apparently signing on with Cloudflare's ongoing fairly successful attempts to centralize the internet. Sure, they say they'll delete your data "within 24 hours" (they shouldn't be keeping it at all), but pretty soon they'll get a Nat'l Security Letter like everyone else does.

Which raises the question: do they have a canary page?

In any case, it would be unreasonable to require logging for more than that... even a week would be too much data for many ISPs. Also, they have to have some logging to even try to troubleshoot a problem.

There is no need for Cloudflare to be a single point of failure. Any ISP capable of operating a high-availability bind9 cluster has sysadmins with the knowledge to implement DNS over TLS and DNS over HTTPS. The software is all either GPL, BSD, LGPL or Apache licensed.

Why is DNS taking so long to have security patched in? It's as if governments are pressuring to make sure they can snoop on things easily. Same goes for email.

Having an opt-in security mechanism is easy to deploy, e.g. keeping the HTTP version of a site available while running HTTPS on a new port for clients that want to use it.

Doesn't TCP, TLS, HTTP, and finally DNS seem like overkill? Why not DTLS + plain DNS requests?

Standard HN response: because my corporate firewall does not allow me to use UDP! Which is nowadays the excuse to use 80/443 for everything. Customers at home don't have this problem.

But there are alternatives: DNS over TLS (essentially the same without HTTP) and DNSCrypt, which uses UDP.

This is why I run an openvpn server on port 443 in tcp mode, not UDP, for places like shitty airport captive portal wifi.

As a good and responsible parent, DNS over HTTPS will never be an option for me. I run a DNS server on my local network.

You could still have that DNS server use an upstream DNS over HTTPS or other encrypted channel that unifies traffic, which has the effect of anonymizing.

DNS over HTTPS uses DNS in the protocol. Does that make it extra-recursive DNS?

Not necessarily.

You can connect using an IP address. At least to bootstrap the process. This is where DNS Stamps come in handy https://github.com/jedisct1/dnscrypt-proxy/wiki/stamps

Isn't the expectation that the hostname header will be hardcoded for DoH requests?

>That means that your ISP can still figure out which sites you’re visiting, because it’s right there in the server name indication. Plus, the routers that pass that initial request from your browser to the web server can see that info too.

Well there goes the interest I had in this.

We're coming after SNI too. One step at a time.

(Also: 1] DNS leaks are worse than SNI leaks, as typically more parties are exposed to the DNS query, and 2] HTTP/2 can carry more than one hostname on a connection, so some hostnames that appear in DNS are never leaked through SNI.)

The TLS WG currently has only a problem statement for Encrypted SNI. Even the weak selection of two possible ways forward didn't achieve consensus as I understand it.

I don't see any way to have encrypted SNI without paying a price of one additional round trip. That's a fair price for something you must have, but for anybody to benefit we must insist everyone use it always, or adversaries will simply block it. And a round trip is a high price for users who don't (believe they) need this.

Well, when there's two issues, one needs to be fixed before the other.
