TLS (and therefore HTTPS) provides a very useful fingerprint based on accepted cipher suites, extensions, compression methods...
Cloudflare runs the largest authoritative DNS server for their customers. The best way to make the DNS server faster is to make users query it directly.
For Cloudflare-hosted domains, instead of:
User → ISP's DNS resolver → ns.cloudflare.com.
User → 1.1.1.1 → ns.cloudflare.com.
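To make the direct path concrete, here's a minimal sketch of how a DoH client encodes such a query per RFC 8484 (the GET variant). The resolver URL is Cloudflare's public endpoint; the helper names are illustrative, not from any particular library:

```python
import base64
import struct

def build_dns_query(hostname: str, qtype: int = 1) -> bytes:
    """Build a minimal DNS query in wire format (qtype 1 = A record).
    RFC 8484 suggests a zero query ID for GET requests, for cacheability."""
    header = struct.pack(">HHHHHH", 0, 0x0100, 1, 0, 0, 0)  # ID=0, RD=1, 1 question
    qname = b"".join(
        bytes([len(label)]) + label.encode() for label in hostname.split(".")
    ) + b"\x00"
    question = qname + struct.pack(">HH", qtype, 1)  # qtype, qclass IN
    return header + question

def doh_get_url(hostname: str,
                resolver: str = "https://cloudflare-dns.com/dns-query") -> str:
    """RFC 8484 GET: base64url-encode the wire-format query, strip '=' padding."""
    query = build_dns_query(hostname)
    dns_param = base64.urlsafe_b64encode(query).rstrip(b"=").decode()
    return f"{resolver}?dns={dns_param}"

print(doh_get_url("example.com"))
```

The response comes back as a raw DNS message with content type `application/dns-message`, so the same wire-format parsing applies on the way out.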
1.1.1.1 runs on our existing hardware deployed around the world, so it costs us very little. When you use it, it improves performance for the 8 million or so sites we sit in front of, which is our actual business.
Can you explain why this site is blocked by uMatrix?
Once the connection is established, response time is similar to UDP.
Cloudflare is in the business of making websites really fast and subsidizes a free offering through paying customers.
Which of those has a conflict of interest in running a DNS server while promising to protect privacy?
Why would cloudflare even want to know what websites you visit? They don't operate an adnetwork, they operate a CDN. At best they could use it to pre-cache websites in regions before demand rises. But they can already do that without DNS...
Really? The primary purpose of any corporation is earning as much money as possible.
You are given advice on how to safely cross a four-way intersection by two companies.
One is an insurance company specialised in people being run over by semi trucks at four way intersections.
The other is a contractor that designs, builds and maintains four way intersections for the government and private entities.
Of course, yes, the latter could collude with the former to make extra money.
But it's also not their business model. They build intersections, people pay them to make those safe and reliable. People do not pay them to collude with shady insurance companies which try to kill people by semi truck.
People would actively not pay them if they did that.
Same with Cloudflare. If CF sold data to ad networks, a lot of websites would simply jump ship to one of the other CDNs with free offerings. People pay CF a shitload of money to ensure the connection is private and safe (notably banks, governments, etc.)
If Cloudflare started selling DNS info after emphasizing that their DNS service protects privacy, people would stop using their resolver, and the impression that they are willing to lie could also hurt their main business.
What's worse is that everyone and their dog is using them. What happens when they push a bad config to their core routers, or foobar their anycast?
Even if DDoS was their main business driver, what you're saying is similar to "doctors don't make any sense to me. what possible incentive do they have for keeping people healthy? they have incentive for promoting bad health."
As someone who works in security, believe me, there are plenty of cyber attackers out there that will easily keep companies like Cloudflare in business, no "promotion" of bad behavior required.
People do say this, all the time!
It is not hard to put a DNS-over-HTTPS frontend in place for my clients that pulls queries from my own trusted BIND9 servers.
Any ISP with a clue can do the same.
I know Google and CF claim they don't track this DNS information, but why even use them when you can run your own? Keep in mind CF did have a software bug that spewed SSL traffic and passwords all over the Internet, and they took down a website once because their CEO didn't like it.
I'd like to know a way to host your own resolver but keep it private even when you're on mobile IP.
* DNSCrypt
* DNS over TLS
* DNS over HTTPS
DNSCrypt is the one with better client support and a long list of providers available. If you pick DNS over TLS or DNS over HTTPS, you will be restricted to 3 or 4 major players (Google, Quad9, Cloudflare and CleanBrowsing). If you trust them, you are good.
For example, this is the list of providers with DNSCrypt support: https://download.dnscrypt.info/dnscrypt-resolvers/v2/public-...
For DNS over (HTTPS|TLS), there are very few client tools available for troubleshooting. The best ones I found were these two in PHP:
It doesn't require sessions (uses UDP by default, like regular DNS, but prevents amplification), enforces safe cryptography and pinned certificates, is trivial to implement, doesn't need OpenSSL, implements padding without inventing yet another DNS extension, and can use unique keys for each question (so that DNS providers can't fingerprint clients, unlike other options due to TCP sessions and TLS tickets).
Both HTTPS and TLS implementations require custom software in order to work, as no OS supports this natively (yet).
It boils down to installing a stub that your local resolver will use instead of contacting the upstream directly.
For example here is my implementation over rustls in TRust-DNS: https://github.com/bluejekyll/trust-dns/blob/master/rustls/s...
Basically that’s a thin wrapper over the TLS library, and I was able to do three different libraries. DNSCrypt on the other hand was a much larger project, and I gave up on implementing it when I saw the DNS-over-TLS RFC complete.
It probably took about 15 minutes to write these. Writing a fully functional client in Go, which is the core of dnscrypt-proxy 2, took about the same time: https://github.com/jedisct1/dnscrypt-proxy/commit/b076e01f7a...
Correctly implementing DNS-over-TLS is way more complicated.
It has to use TCP. So, in order to avoid being vulnerable to the most trivial Slowloris attack, you need to implement connection pools, reuse of old connections, and timers to enforce timeouts.
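The TCP requirement also means every message has to be framed: RFC 7766 prefixes each DNS message with a 2-byte big-endian length. A minimal sketch of that framing (helper names are mine); real code would wrap the read loop in the timeouts discussed above, since a Slowloris peer can stall mid-message:

```python
import struct

def frame_dns_message(msg: bytes) -> bytes:
    """DNS over TCP/TLS prefixes each message with a 2-byte big-endian
    length (RFC 7766)."""
    return struct.pack(">H", len(msg)) + msg

def deframe_stream(buf: bytes):
    """Yield complete DNS messages from a byte stream. An incomplete tail
    is left for the next read; this is exactly where a slow peer can make
    you hang without read timeouts."""
    offset = 0
    while offset + 2 <= len(buf):
        (length,) = struct.unpack(">H", buf[offset:offset + 2])
        if offset + 2 + length > len(buf):
            break  # incomplete message: wait for more data (or time out)
        yield buf[offset + 2:offset + 2 + length]
        offset += 2 + length
```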
If you want half-decent performance, you need to make sure that multiple, out-of-order queries and responses can be sent over the same connection. This requires tracking query identifiers, making sure that there are no ID collisions in inflight queries, and if you are just building a proxy, you can’t expect any upstream server to support this.
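A minimal sketch of that ID bookkeeping, with hypothetical names; a real client would also store a timestamp per entry so stale in-flight queries can be expired:

```python
import secrets

class InflightTable:
    """Track in-flight query IDs on one TCP/TLS connection so out-of-order
    responses can be matched back to their queries without ID collisions."""

    def __init__(self):
        self._pending = {}  # query ID -> original question

    def assign_id(self, question) -> int:
        """Pick a random 16-bit ID that doesn't collide with any query
        currently awaiting a response on this connection."""
        if len(self._pending) >= 65536:
            raise RuntimeError("no free query IDs on this connection")
        while True:
            qid = secrets.randbelow(65536)
            if qid not in self._pending:
                self._pending[qid] = question
                return qid

    def match_response(self, qid: int):
        """Return the question a response belongs to; unknown IDs yield None
        and should be dropped."""
        return self._pending.pop(qid, None)
```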
TLS session tickets allow DNS operators to track devices no matter what their IP is. TCP sessions allow DNS operators to fingerprint devices sharing the same external IP. From a privacy perspective, this is effectively a regression over plain DNS. So for people who care about this, you need to add the ability to disable these. Performance will be terrible, but that's what you get for using a transport protocol that was never designed for DNS. This can be partially mitigated with DoH using forthcoming HTTP/2 extensions. But for raw TLS that doesn't allow much except send() and receive() of packets, there's no hope without reinventing HTTP.
Encrypted DNS requires padding. The way to do padding in DNS-over-TLS is to add extra records to DNS packets. So you need to parse and modify DNS packets. Which is slow and painful to write, if only because of name compression. Instead of that lousy hack, DNS-over-HTTP/2 can simply use existing HTTP/2 mechanisms: HTTP/2 frames already support padding. DNSCrypt doesn’t require packets to be parsed or modified either; padding bytes are simply appended to raw DNS packets before encryption, and are trivial to remove after decryption.
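For contrast, a sketch of the DNSCrypt-style approach, assuming the ISO/IEC 7816-4-style padding (a 0x80 marker followed by zeros, to a 64-byte boundary) that the DNSCrypt v2 spec describes; note that no DNS parsing is needed at any point:

```python
def pad_query(packet: bytes, block: int = 64) -> bytes:
    """DNSCrypt-style padding: append a 0x80 marker, then zero bytes until
    the length is a multiple of the block size. The raw DNS packet is
    treated as an opaque blob."""
    padded = packet + b"\x80"
    if len(padded) % block:
        padded += b"\x00" * (block - len(padded) % block)
    return padded

def unpad_response(padded: bytes) -> bytes:
    """Strip padding after decryption: drop trailing zeros, then the 0x80
    marker. The marker protects any zeros the packet itself ends with."""
    trimmed = padded.rstrip(b"\x00")
    if not trimmed.endswith(b"\x80"):
        raise ValueError("invalid padding")
    return trimmed[:-1]
```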
As we recently saw, DNS-over-TLS is virtually useless against attacks such as BGP hijacks unless certificates are pinned. So, you need to implement pinning. Figuring out how to do it using the OpenSSL API is going to keep you busy for quite some time. DNSCrypt only requires one function call to verify a signature. DNS-over-HTTP/2 can leverage what browsers and modern HTTP libraries already do.
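Once you have the peer's DER-encoded SubjectPublicKeyInfo in hand (extracting it from the certificate is exactly the OpenSSL-API slog mentioned above), the pin check itself is small. A sketch with illustrative names, using the HPKP-style convention of pinning a base64 SHA-256 digest of the SPKI:

```python
import base64
import hashlib

def spki_pin(der_spki: bytes) -> str:
    """HPKP-style pin: base64 of the SHA-256 digest of the DER-encoded
    SubjectPublicKeyInfo from the server certificate."""
    return base64.b64encode(hashlib.sha256(der_spki).digest()).decode()

def verify_pin(der_spki: bytes, pinned: set) -> bool:
    """A BGP hijacker presenting a fraudulently issued certificate still
    fails this check, because the key hash won't match the pinned set."""
    return spki_pin(der_spki) in pinned
```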
So, implementing DNS-over-TLS is hard. It’s not just about sticking stunnel in front of a stub resolver. Even just the TLS part is hard to implement securely. Validating TLS certificates in non-browser software remains the most dangerous code in the world: https://crypto.stanford.edu/~dabo/pubs/abstracts/ssl-client-...
But it’s also pointless. Other protocols are easier to implement and more efficient.
And from a server perspective, proposing DNS-over-TLS means there is yet another thing to do certificate management for. Key management is hard. It's the root cause of virtually all DNSSEC outages, and why people gave up on DNSSEC or didn't even try. Software that is supposed to automate this exists, but the reality remains the same.
In contrast, key management has been solved in the HTTP world, through built-in support in web servers, ACME clients, CDNs and proxies. People already run web sites. Let them leverage what they already have and know works well instead of forcing them to go back to square one and figure out how to do key management for DNS. Ditto for authentication and logging. Which is why DNS-over-HTTP/2 makes way more sense than DNS-over-TLS.
DNS-over-TLS also requires a dedicated port, which is not even reachable from many restricted network environments such as the WiFi network I am currently on. This kinda defeats the whole point of the protocol. DNS-over-HTTP/2 uses the one port that is least likely to be blocked, and is fully compatible with proxies, including transparent ones from mobile carriers. DNSCrypt also uses port 443 by default and can use TCP only if required, but it doesn't even need a dedicated port; DNSCrypt and regular DNS can share the same port, as done by Cisco servers.
So, DNS-over-TLS is hard to implement. Hard to deploy. Difficult to connect to. Slow. Won’t get any better without reinventing HTTP. Feels like it was invented 20 years ago, but it doesn’t really make sense any more today.
I found this much more straightforward to implement than DNSCrypt. See my response to the sibling comment for a link to the code.
Because if not, any requests made by non-browsers are still susceptible and will only give users a false sense of security.
"But this doesn’t mean you have to use Cloudflare. Users can configure Firefox to use whichever DoH-supporting recursive resolver they want. As more offerings crop up, we plan to make it easy to discover and switch to them."
Only defaults matter. Your average web user won't be interested in knowing about or configuring this, no matter how simple the explanation/choice is made.
This does not involve any sort of proprietary or non-free software. People are free to ignore the CDN-provided recursive resolvers and set up their own.
Also note that DNS is one of those dinosaur protocols like email and usenet that have persisted from the early days of the internet, back when we could buy interoperable services from decentralized parties. Every service we buy today is centralized or even walled garden only, see Slack, Facebook, App Stores, AWS, etc. We currently just don't know how to build successful distributed ecosystems.
There is such a thing as ethics in network engineering, and that term encompasses things like not attempting to MITM your customers' recursive DNS resolution queries, or monitoring/tracking/selling the data.
I agree that immediately promoting CF doesn't seem like the most genuine move for those who are still a part of the Firefox/Mozilla community.
Do not select any default. Randomize the selections.
At the time my only real recourse was to pump my whole house through a VPN, as even Google's DNS (8.8.8.8) was being hijacked, but ONLY when it was coming from my home IP. (Full disclosure, I'm not very well versed in the networking stack. I know enough to get myself in trouble, but not much more. This was what I understood to be happening, but I could be way off base. However, it was happening on multiple devices, multiple OSs, multiple Verizon IPs, multiple DNS servers, both with and without a router, and would stop instantly if any of those machines were pointed at a wireless hotspot, or a VPN was turned on. At one point I even sent my router's WAN connection through my phone's hotspot and the problem went away.)
After talking with Verizon many times, and each time having to spend an hour or so trying to get through to someone who knew even remotely what I was talking about, all they were able to do was reset my IP, which fixed nothing.
Now that DNS-over-HTTPS is becoming more common, I'm going to use it everywhere I can. Yes, DNSSEC might be a "better" solution, but I can use DoH right now to protect myself on all sites and (hopefully soon) all devices.
Just the other day I discovered Intra, a (still unreleased) app by Google for Android which makes your whole Android phone use DNS-over-HTTPS.
I've been running it the last few days and I'm quite pleased with it. Does anyone know of a way to force all DNS queries in Windows to use DoH?
In theory, if the upstream resolver is using DNSSEC to validate all the records, then the client on the TLS session can be fairly confident in the records it receives.
I think you could use pi-hole to do this. https://docs.pi-hole.net/guides/dns-over-https/
The documentation is not great/accurate, but with a bit of fiddling I have it running as a systemd service (launchd on macOS). I'm using the /metrics endpoint to get details into Prometheus on the stats.
A similar setup to mine could be deployed at your network edge, and it could then force all of your port 53 DNS requests to go over a more secure protocol. Of course you would have to figure out how to set this up, and it wouldn't protect your devices anywhere except your home network.
It wasn't FIOS doing it, the IP was in Israel and was known as a malware serving IP.
Most of DNS-over-HTTPS' interesting use-cases start coming into play when you're using the same HTTPS session as the one being used to serve the site you're visiting. Otherwise, DNS-over-TLS is sufficient for the same level of privacy.
At that point, though, DNS-over-HTTPS has a provenance issue that I don't fully grok how we're going to avoid. What I mean by that: if the site you're visiting supports DNS-over-HTTPS and serves DNS records itself, what happens when it decides to issue custom responses that ignore or supplement the actual data in a zone? Won't that lead to a bifurcation of the DNS network, where websites can start issuing custom responses to DNS queries?
Cloudflare and Quad9 both offer DNS-over-TLS, which will be preferable for non-HTTP use cases. Some of the points in the article imply that DNS can't be used for tracking you when using DNS-over-HTTPS, but really that just means you're passing that trust to Cloudflare, Quad9, or Google. I suppose the choice is open to you at that point.
I was under the impression that DNS-over-HTTPS was nothing more than just an alternative DNS protocol just like DNS-over-TLS, where you perform an HTTPS request in order to query for a DNS name, and that DNS-over-TLS was just plain old DNS wrapped in TLS.
You seem to be implying that DNS-over-HTTPS would enable sites themselves to deliver DNS records. I don't see how that is possible, because connecting to HTTPS with a hostname requires resolving a DNS record. Am I misunderstanding?
First, just to avoid confusion, the post linked to this HN article is just about the classic recursive resolver model. That's the scope of what is being experimented with actively.
Second, the notion of resolverless DNS (where DNS records are obtained from somewhere other than your recursive resolver) is indeed something DoH contemplates but does not yet allow. That's because issues around tracking, correctness, and attacks haven't been fully explored. So unsolicited DNS is interesting, but it's not something any browser would accept yet.
There are some other opinions on how HTTPS matches the needs of DNS here:
"Right now, people are really keen to get HTTP/2 “out the door,” so a few more advanced (and experimental) features have been left out, such as pushing TLS certificates and DNS entries to the client — both to improve performance. HTTP/3 might include these, if experiments go well."
Some of those things could be used for bootstrapping SNI encryption as well:
Case in point about autonomy is on HN front page at present:
The author cites a hypothetical example where a user shopping at Megastore is blocked from accessing her preferred source of DNS data in order to prevent her from checking a price.
Extending this hypothetical, imagine if in response to her request for an unbiased price quote the user was shown unwanted ads with inflated, customised pricing (informed by data gathered about her through tracking).
Choice of DNS data is an effective way for users to block advertising and tracking.
The issue with user control over DNS also arises with mobile and other devices (e.g. Chromecast/Google Cast/Google Home) that discourage or prevent a user from using her preferred source of DNS data, forcing her to use a commercially-oriented source which may block certain lookups.
This is relevant with any computer that connects to the internet.
It is an issue of autonomy.
There is a long tradition of HOSTS files and later non-commercial DNS where users can autonomously determine where on the network they want to "go". They have the final control over the source of DNS data the computer will use. They can delegate DNS service to someone else, however, following that long tradition, they still retain the autonomy to choose the source of the DNS data, whether it is another third party, their own DNS servers or perhaps /etc/hosts in place of DNS.
When an organization (e.g. running an "app store") seeks to circumvent the ability of the user to choose her own DNS data source on her own computer, that is an attack on autonomy.
The author mentions that Firefox will allow users to choose their own "DOH DNS" servers. If so, this respects users' autonomy.
(No one seems to be mentioning one obvious advantage of DOH DNS for browsers: bulk DNS "prefetch" lookups. One can use HTTP/1.1 pipelining to retrieve the IP addresses for every hostname contained in an HTML page with a single HTTP request, instead of numerous, simultaneous DNS requests. As for privacy problems with TLS fingerprints, HTTP requests can be secured by CurveCP as an alternative to TLS — example is in my profile.)
The resolving is not only done for user-initiated actions; it is being done by many programs, even ones you might not want doing it. For the same reason, many users use a local firewall, like Little Snitch, to block outgoing connections.
(Sidenote: if you are using MS Office 2016 for Mac, and are not satisfied with the choice of telemetry that Microsoft offered you in the last update, and you are interested in third option, "None", the hostnames to block are nexusrules.officeapps.live.com and nexus.officeapps.live.com)
With apps using DoH and ignoring the local resolver, that firewall will now have a problem, especially if multiple, separate hostnames resolve to the same IP. Until now, Little Snitch used a guess (the last resolved hostname matching the IP); now it won't have that chance.
That's why, if users want a chance to know who their local processes talk to, apps must be forced to use a local resolver under the user's control, not implement their own private resolver. And of course, on non-public networks, it should be suppliable by DHCP or RA.
So every time you want to make a query, you have to wait several RTTs before getting a response.
The connection needs to be kept open for as long as possible, at least 5 minutes.
I used Stubby as a forwarder with idle_timeout: 6500000 (the idle timeout in milliseconds). The connection gets closed by the remote party, not by Stubby.
I'll argue that the TCP and TLS handshakes take more processing power than keeping the connection open.
A standard 8 GB system with Debian 9 gives me 1048576 max file descriptors. I am sure this can be optimized still.
And if you were keeping them open for 5 minutes as suggested, that would still limit you to only 3400 clients / second.
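The arithmetic behind that figure, for the record:

```python
# With the descriptor limit quoted above and each connection held open for
# the suggested 5 minutes, the steady-state accept rate is bounded by:
max_fds = 1048576          # file descriptors on the 8 GB Debian 9 box
hold_seconds = 5 * 60      # each connection occupies a descriptor this long
clients_per_second = max_fds / hold_seconds
print(round(clients_per_second))  # prints 3495, i.e. roughly the 3400/s quoted
```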
I do actually agree that they need a longer idle timeout on these connections, but I just wanted to point out that comparisons with the processing power required to set up a TLS connection aren't apt.
Website won't load without allowing a call out to googleadapis.l.google.com
Yeah, you're right they are growing.
1: I have NO idea if Chrome (or any other random application) accesses DNS-over-HTTPS already since I have not paid too much attention to it.
2: At least Chrome (on macOS) likes to access 8.8.8.8 & 8.8.4.4 & your configured DNS server on port 53 (happy-eyeballs-style probing). This might only be on flaky networks like mine, where I tend to make all sorts of configuration experiments.
And what about SNI, which shows the domain name in clear text for HTTPS connections? Please do something about that too.
and of course defaults matter a lot, but you will be able to select your preferred DoH endpoint (or not use it at all). Firefox wouldn't lock something like that down.
I think encrypting DNS transport is as important as the next guy (though DoH is bad), but am super unhappy about Mozilla apparently signing on with Cloudflare's ongoing fairly successful attempts to centralize the internet. Sure, they say they'll delete your data "within 24 hours" (they shouldn't be keeping it at all), but pretty soon they'll get a Nat'l Security Letter like everyone else does.
In any case, it would be unreasonable to require logging for more than that... even a week would be too much data for many ISPs. Also, they have to have some logging to be able to even try and troubleshoot a problem.
Having an opt in security mechanism is easy to deploy as in keeping the http version of a site available while running https on a new port for clients that want to use it.
But there are alternatives, DNS over TLS (essentially the same without HTTP) and dnscrypt which uses UDP.
You can connect using an IP address. At least to bootstrap the process. This is where DNS Stamps come in handy https://github.com/jedisct1/dnscrypt-proxy/wiki/stamps
Well there goes the interest I had in this.
(also, 1] dns leaks are worse than sni leaks as typically more people are exposed to the dns query and 2] HTTP/2 can carry more than one hostname on a connection so some hostnames that appear in dns are never leaked through sni.)
I don't see any way to have encrypted SNI without paying a price of one additional round trip. That's a fair price for something you must have, but for anybody to benefit we must insist everyone use it always, or adversaries will simply block it. And a round trip is a high price for users who don't (believe they) need this.