The root servers already use DNSSEC (which is not DNS query encryption). Try
$ dig rrsig . @c.root-servers.net
if you want to check this. (I chose c-root just for fun.)
DNS query encryption has become an increasingly popular idea because of things like spy agencies conducting surveillance of DNS queries for espionage purposes, ISPs collecting them for commercial purposes, and network censors using them for site blocking. If you can query resolvers in a way that the network can't see, you make it less likely that observers will know what sites you visit or be able to block particular ones (especially when an adversary is on-path for DNS queries but not for the resulting HTTP connection, which is sometimes, though not always, the case).
This letter seems to respond to suggestions that the root servers ought to enable a query encryption mechanism of some sort so that you could query them without revealing (to the network) what the content of the query was. This is of varying interest and relevance depending on your threat model (and who and where your recursive resolver is), but it could be important in some threat models.
The letter basically argues that it isn't really necessary for the root servers to do this, because (1) in principle the information they serve isn't that sensitive (and even the fact that someone is interested in a particular part of it might not be that sensitive), (2) other people are in a position to mitigate the privacy issues, and (3) other people can reasonably do this right away without requiring the root servers to change anything.
The QNAME minimization point in particular works like this: if you want to look up the IP address of controversialsite.example.com, this might end up generating a query "IN A controversialsite.example.com." (and/or "IN NS controversialsite.example.com.", "IN SOA controversialsite.example.com.") that goes unencrypted from you/your network/your recursive resolver to the root servers. But this is unnecessary. Instead, the resolver could ask the root servers just for "IN NS com.", and then neither the root server operators, nor your ISP, nor governments tapping Internet backbones, would know that you were interested in com because of a lookup for controversialsite.example.com as opposed to google.com. (The TLD servers such as gtld-servers.net would still need to offer query encryption in this case.)
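As a rough illustration of what minimization changes, here is a toy sketch (not a real resolver) computing the name a minimizing resolver would actually send to a server authoritative for a given zone cut, using the example names above:

```python
# Toy sketch of QNAME minimization (not a real resolver): compute the
# query name sent to a server authoritative for a given zone cut.
def minimized_qname(full_name: str, zone_cut: str) -> str:
    labels = full_name.rstrip(".").split(".")
    cut_labels = zone_cut.rstrip(".").split(".") if zone_cut else []
    # Reveal exactly one more label than the zone being queried.
    reveal = len(cut_labels) + 1
    return ".".join(labels[-reveal:]) + "."

# The root (empty zone cut) learns only the TLD:
assert minimized_qname("controversialsite.example.com", "") == "com."
# The com servers learn only the next label down:
assert minimized_qname("controversialsite.example.com", "com") == "example.com."
```

A non-minimizing resolver would instead send the full controversialsite.example.com name to every server in the delegation chain, including the root.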
A further argument made by the root server operators is that recursive resolvers can do better caching to make it much less likely that they'll need to query the root servers directly for the huge majority of individual queries that are sent to the recursive resolver.
Possibly a lookup for google.com isn't a completely innocent action, depending on the country, especially on this topic.
This debate has happened in various places (here on HN; on mozilla.dev.security.policy; on the Let's Encrypt forum; on IETF WG mailing lists; presumably inside browser vendors' security teams) but I haven't seen an authoritative statement of both sides' views about the status quo and their visions of the future for the web PKI.
I was involved in setting up Let's Encrypt, and I noticed that many of my colleagues were, to varying extents, themselves PKI skeptics (people who had been critical of existing CAs' practices, but also of the fragility of what certificates can even really attest to).
I've also seen (partly from a colleague who went to a bunch of meetings about these things) that e-mail people generally really like DNS and somewhat mistrust X.509, while web people commonly grudgingly respect X.509 and somewhat mistrust DNS. This was especially visible in the debate about DANE (a mechanism for putting public keys and statements about them in the DNS), and also in related debates about the role of DNSSEC. Reputedly, many of the avid DNSSEC users and DANE developers and proponents are e-mail software operators or implementers. We've also seen Thomas Ptacek here criticizing DNSSEC and arguing that it has no important role for Internet security (especially in terms of cryptographic protections against government eavesdropping).
During the DANE standardization, if I understand correctly, the developers hoped that it would eventually be on par with X.509 certificates as a way of validating public keys for TLS connections (both STARTTLS and HTTPS, as well as other applications). But I think the Chrome developers at some point made a clear statement to the effect of "nope, sorry, we're not going to do that"!
My intuition supporting any or all of "DNS registrars should be [name-constrained] CAs"/"DNS registries should be [name-constrained] CAs"/"DANE should replace X.509" was basically that DNS records are the fundamental source of ground truth for CAs issuing DV certificates. If you look in the CA/B Forum's list of approved methods for doing domain validation, they're all about cross-checking information provided by DNS registrars, and similarly, Let's Encrypt's domain validation methods (https://letsencrypt.org/docs/challenge-types/) all directly rely on DNS, while the most secure method (the DNS-01 challenge) is literally about the ability to place a TXT record into a specified DNS zone. So DNS registrars and registries are directly being trusted in the DV issuance path -- every time.
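To make the DNS-01 point concrete, here is a sketch of how the published TXT value is derived under RFC 8555 (the token and thumbprint below are made-up placeholders, not real ACME values):

```python
# Sketch of the DNS-01 challenge value (per RFC 8555): the TXT record
# holds base64url(SHA-256(token "." account-key-thumbprint)).
import base64
import hashlib

def dns01_txt_value(token: str, key_thumbprint: str) -> str:
    key_authorization = f"{token}.{key_thumbprint}"
    digest = hashlib.sha256(key_authorization.encode("ascii")).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")

# The applicant publishes this value at _acme-challenge.<domain>, and the
# CA checks it with an ordinary DNS query -- so whoever controls the zone
# (ultimately the registrar/registry) controls issuance.
txt = dns01_txt_value("made-up-token", "made-up-thumbprint")
assert len(txt) == 43 and "=" not in txt
```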
The main counterarguments I remember were along these lines:
* DNS registries and registrars are (despite their huge importance) less security-conscious than CAs in numerous ways, and operate under less stringent security precautions and procedures. [But this argument is a little weird when CAs will issue certificates based on whatever the registries tell them.]
* DNS is less transparent than CAs, especially given the mandatory use of Certificate Transparency, which allows detection and investigation of misissuance. (Also if we relied on DANE without X.509, there would be no public or permanent record of the DANE DNS RRs that were used in a particular trust decision, so fraudulent ones might never be detected either contemporaneously or retrospectively, and a root or TLD keyholder could construct a completely fraudulent DNS signature chain that would be accepted by a particular relying party but never stored anywhere else.)
* You can choose a relatively security-conscious registrar, and then the registry's security precautions should ensure that attacks against other registrars (or negligence or malice on their part) don't affect your domain's zone contents; but if registrars could directly issue certificates, arbitrary registrars could misissue for domains that aren't even registered through them. And registries usually couldn't issue certificates themselves, because they have no relationship with subscribers, no means of authenticating them, and no user interface through which certificate issuance could be requested or performed.
I think there are others. Maybe I could approach some people and get them to write up their different views of all of this for the record, not as part of a debate or flamewar.
Edit: the sibling comment reminds me that there are also different proposed use cases -- one is the belt-and-suspenders method where you need an X.509 certificate and a valid DNSSEC-signed DANE record confirming that the subject key is legitimate. Other proposals have been that the DANE record could be just-as-good as the X.509 certificate, or that it could eventually supplant it entirely.
It's worth keeping in mind that proof of domain ownership doesn't have to depend on the DNS; you can also just ask registrars directly, via something like RDAP. If we're going to keep pinning Internet trust to name ownership, there's probably no reason we have to involve the DNS at all. Moreover: if you can accomplish WebPKI trust using an RDAP-type lookup, that pretty much obviates the need to forklift in a cumbersome, pre-outmoded signing system for the DNS itself.
> It's worth keeping in mind that proof of domain ownership doesn't have to depend on the DNS; you can also just ask registrars directly, via something like RDAP. If we're going to keep pinning Internet trust to name ownership, there's probably no reason we have to involve the DNS at all.
Do you think this would be useful? I'm not part of the Let's Encrypt team anymore, but I still know almost all of them (but, to my knowledge, nobody from any registrars).
I've kind of wished for something like this in the past (on the basis that it would obviate other kinds of attacks against the domain validation process, such as routing or DNS attacks), but I'm not sure what the user interaction flow would look like, or whether it could be made compatible with Let's Encrypt's desire to automate almost all certificate issuance and renewal steps.
It seems like maybe the registrars would have to give out proof-of-ownership RDAP challenge API credentials (that a server could use to ask the registrar to serve a particular value via RDAP?).
Curl has the same complaint that I do:
$ curl https://126.96.36.199/domain/root.zone
curl: (60) SSL: no alternative certificate subject name matches target host name '188.8.131.52'
In the days of ye olde internet it used to be called "ftp.internic.net". Today, I believe it is still "internicftp.vip.icann.org".
To make curl work:
$ curl -k https://184.108.40.206
The IP address isn't directly validated, but you get something even better.
And it can't go out of date as easily. Like you said, the IP changes sometimes. internic.net doesn't change.
Am I understanding this correctly. You are concerned that the IP address might change. As I said, if that happened, they would not change it without notifying the public in advance. This IP address is used to bootstrap DNS. Thus, no one should need DNS to find it. AFAIK, outside of EV certificates, CAs rely on domain name registration as their "verification" mechanism. Seems like one has to trust the DNS in order to trust a CA. And why trust a CA.
I am quite certain I will be dead before this IP address ever changes again. It used to be 220.127.116.11. This is going to be the "right" IP address for the foreseeable future. TLS cert or not. I use FTP to get the root.zone, not TLS. Verisign's zone file access program used to offer .com and .net zone files only via FTP. Even if you can use TLS to get them now, I'll bet you can still use FTP.
> Seems like one has to trust the DNS in order to trust a CA. And why trust a CA.
You're arguing that certificates are useless?
They're not, because you might be on a hostile network, and it's much easier to attack one person than to attack domain verification.
And even if certificates are 99% useless, that doesn't affect my argument about which kind of certificate is better.
> You are concerned that the IP address might change.
That was only one of the things I said. Other than hostile networks, you might make a typo, and there are various reasons you might not immediately notice getting the wrong file, which a nice verifiable "internic.net" label could help with.
In this case you are downloading a file that is served at a number of other known, unchanging IP addresses (the root servers). Even more, the RRs in the file have been signed (DNSSEC).
All I am saying is that a "bare IP address" in this case is still useful, even without a domain name and certificate.
I meant useless in this scenario. So same.
> All I am saying is that a "bare IP address" in this case is still useful, even without a domain name and certificate.
What? Then we don't disagree at all. You need to reread my earlier comments. I said a certificate with a name is better than a certificate with an IP. I never said anything about the value of the IP address itself, only what's validated.
I think that's debatable. IMO, it depends in part on the perceived value of the ICANN "domain name business" as some sort of vetting mechanism. In this case the party being vetted is ICANN itself. Although they did pay for "EV".
Does Globalsign still offer certificates that are tied to IP addresses.
Here is a question for the TLS certificate fanatics: Why do CAs provide non-HTTPS URLs to their CA certificates in the certificates they sell. For example,
I suspect it is a bootstrapping issue. At the top of the chain there is some notion of implicit trust. No different than with DNS. Trust should be decided by the end user, not by developers, nor by some self-appointed "authority".
TLS 1.3 standardizes the use of X25519, Ed25519, X448, and Ed448 (though it does not strictly mandate them).
Who is responsible for designing X25519, Ed25519, X448 and Ed448 and introducing these algorithms to TLS developers. The same person who designed and introduced DNSCurve. More diplomatically, a team led by the same author as DNSCurve.
The observation I make from the facts is that while DoH may win a "popularity contest" over DNSCurve (VHS vs Betamax) in terms of what most people will use, TLS and therefore DoH nonetheless is, or soon will be, relying on the work of the author of DNSCurve. Whether anyone else besides me thinks this is noteworthy, I have no idea. I mean, the author could have just intended the work to be used in DoT or whatever was the current trend (VHS), but the fact that he demonstrated how it could be used in a different way (Betamax), encrypting each packet -- to me that is not a coincidence. It was not meant to be ignored, IMO.
As an end user, I am not interested in what is popular, I am interested in what is best. But that's only me. Readers can decide for themselves.
It was not that long ago that the same thing was being said about TLS.
I like my DNS to be validated. I don't see what the big problem with that is. Just don't enable the checkbox in your DNS resolver if you hate it that much.
But the bigger difference is, TLS gets you an encrypted secure channel between the client and the server, and DNSSEC gives you... a signed name lookup.
DNSSEC's risks are larger, and the rewards far smaller.
A TLS certificate error shows a scary warning page that users can often click through; a DNSSEC outage shows a message saying "this site cannot be reached at the moment, try again later". I'd much rather have people see the latter than the former.
Signed DNS lookups are quite important for any use of DNS for domain metadata storage, for example in systems like DKIM or SPF. Neither is encrypted, neither is signed in any other way, yet together they control whether you can send emails at all, or whether your server will get (near) permanently banned from Outlook's spam filters. There are also various PTR records for auto-configuration of various protocols that could easily redirect traffic to other hosts if a victim were served a spoofed version.
We might have abandoned DANE, but there's still important information in DNS. Until we abandon DNS somehow (and I shudder to think what the alternative would be with the way the internet is ruled by Google and Facebook right now), I disagree that the risks are far smaller as you say.
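To make the DKIM/SPF dependence concrete: both policies are plain, unsigned TXT records, so whoever controls (or spoofs) the DNS answer controls mail policy. A minimal sketch, with illustrative record contents:

```python
# SPF lives in an ordinary, unsigned TXT record at the domain apex;
# DKIM public keys live at <selector>._domainkey.<domain>. A verifier
# just picks the record with the right version tag out of the answer.
def pick_spf(txt_records):
    for record in txt_records:
        if record.startswith("v=spf1"):
            return record
    return None  # no SPF policy published

# What a resolver might return for a domain's TXT records (illustrative):
answers = [
    "some-site-verification=abc123",
    "v=spf1 include:_spf.example.com -all",
]
assert pick_spf(answers) == "v=spf1 include:_spf.example.com -all"
```

An attacker who can substitute their own TXT record in that answer rewrites the domain's mail policy; without DNSSEC, nothing in the lookup itself would reveal the substitution.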
Meh. If you handle your site correctly, you have at least HSTS, in which case, TLS errors are also game-over. If you don't have HSTS, well I have bad news for you.
The other point stands though (risks are higher and rewards lower for DNSSEC).
I respectfully disagree with that second part. The risks for DNSSEC/DANE might be higher, the rewards are bigger too.
TLS gives me a secure channel only when I connect to the right (i.e. expected) server. A TLS encrypted and validated connection to a wrong party is the threat here.
As a user, I don't know which is the CA for any given domain. And Chrome only caches a small subset. Otherwise we wouldn't need either CAs or DANE; self-signed certificates would suffice ;-)
Either I have to trust that there are vigilant parties that monitor all CT logs for fake certificates, or my user agent does that for every connection, blocking when it finds duplicate certificates.
With DNSSEC and DANE, my agent fetches the address and CA for the site and validates these against the TLS handshake.
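A minimal sketch of that handshake check, assuming the common DANE-EE case (TLSA usage 3, selector 0, matching type 1, i.e. SHA-256 over the full certificate); the certificate bytes below are placeholders:

```python
# Sketch of DANE-TLSA matching for usage 3 / selector 0 / matching type 1
# (per RFC 6698): hash the certificate seen in the TLS handshake and
# compare it to the association data from the (DNSSEC-signed) TLSA record.
import hashlib

def tlsa_matches(cert_der: bytes, tlsa_assoc_data: bytes) -> bool:
    # Matching type 1 = SHA-256 of the selected data (here, the full cert).
    return hashlib.sha256(cert_der).digest() == tlsa_assoc_data

cert = b"placeholder DER-encoded certificate bytes"
published = hashlib.sha256(cert).digest()  # what the signed zone publishes
assert tlsa_matches(cert, published)
assert not tlsa_matches(b"attacker cert", published)
```

The TLSA record itself lives at a name like _443._tcp.example.com, and the whole check only means something if the record arrived through a validated DNSSEC chain.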
From there, HSTS will protect future lookups so it becomes a TOFU issue.
Besides, the middle-box problem is being resolved with DoH over TLS 1.3, isn't it?
Domain registration and name serving are pretty similar, yet we treat them very differently. For both security reasons and functional reasons, it would be nice if we had one unified way to manage them both.
Other services depend greatly on DNS names, like PKI. It would be very advantageous, again for security and functional reasons, if there were better (not necessarily tighter, but better) integration between the two.
Secure connections all pretty much share the same properties. Virtually every system you can think of requires, at some point, authentication, authorization, encryption, and data integrity. So perhaps there is a unified way we can provide all of this, the same way TCP and UDP provide a unified way of transporting either streams or datagrams. Maybe even making them OS primitives the way the TCP/IP stack is today [it didn't used to be!].
If we step back and start from first principles, and design protocols that provide all the functionality we know we need (now and in the future), we could start getting ready for computing in the next century and beyond. It doesn't have to be a pipe dream, especially if we start building support for experimental protocols into devices today.
Somebody ought to tell the root zone operators about DJB's DNSCurve. Evidently from this PDF they are not aware of it, since it does not suffer from any of the three issues they enumerate as blockers (it is UDP-based, connectionless, and requires no server state).
The only additional overhead relative to plain DNS is two ECC operations and two Salsa20 operations per query. Hardware capable of doing this at line rate is really not a budget-buster for them -- if you can't afford four crypto ops per packet, you ought to reconsider whether you should be running one of the root servers.
Contrast that with DNSSEC, where you can sign zones offline. Not only is this more secure as hacking the server doesn't expose your private key, but you can also keep serving the same immutable data. It really doesn't matter how fast Curve25519 and Salsa20 are, they're significantly more work than just spitting out the same answer to everybody.
These benefits are easy to dismiss for leaf zones, but mean everything when you're serving the root or top-level zones in an increasingly hostile environment.
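A toy contrast of the two serving models (the crypto below is a hash-based stand-in, not real X25519/Salsa20, and the zone contents are placeholders):

```python
import hashlib

# DNSSEC model: RRsets and their RRSIGs are produced offline, so the
# signing key never touches the server and the hot path is a lookup.
signed_zone = {"com. NS": b"<precomputed RRset + RRSIG bytes>"}

def serve_dnssec(question):
    # Same immutable bytes go to every querier; no crypto per packet.
    return signed_zone.get(question, b"NXDOMAIN")

# DNSCurve model: each response is encrypted to the querier's key, so
# every packet costs a key agreement plus stream-cipher work online.
def serve_dnscurve(question, client_pubkey):
    shared = hashlib.sha256(client_pubkey + b"<server private key>").digest()
    answer = signed_zone.get(question, b"NXDOMAIN")
    keystream = hashlib.sha256(shared + b"<per-query nonce>").digest() * 4
    return bytes(a ^ k for a, k in zip(answer, keystream))

assert serve_dnssec("com. NS") == signed_zone["com. NS"]
```

The point of the sketch is only the shape of the hot path: a dictionary lookup versus per-packet key agreement plus cipher work, with the private key necessarily resident on the online server in the second model.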
The "wrapping" and "unwrapping" in DNSCurve is done by a forwarder, a separate server. People have written such forwarders many years ago. No DNS software needs to be rewritten.
Yes, I know. I was contrasting their use of crypto and the degree to which they fit traditional name server architectures.
> The "wrapping" and "unwrapping" in DNSCurve is done by a forwarder, a separate server
Whether the wrapping is done on a [reverse] forwarder or integrated into the authoritative name server is entirely irrelevant. (Perhaps you were thinking of the querier, which would be even more irrelevant from the perspective of root and TLD servers.)
I like DNSCurve. But I was contesting the point that DNSCurve was effectively zero cost. It definitely is not zero cost, neither in terms of CPU nor operationally. The cost may be de minimis in most contexts, but root and TLD zones are certainly the exception.
Has anyone, e.g., at Verisign, ever debated this computational/operational cost of using DNSCurve at a large TLD.
Although this is a big "just" because of the amount of fanfare and (literal) ceremony, DNSSEC support on the server side is just about signing zones and being willing to serve the associated RR types.
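Concretely, "signing zones and serving the associated RR types" amounts to publishing records like these alongside the ordinary data (an illustrative zone fragment; the key and signature fields are placeholders, not real values):

```
; Illustrative signed-zone fragment (placeholder key/signature data)
example.com.     3600 IN DNSKEY 257 3 13 <base64 public key>
www.example.com. 3600 IN A      192.0.2.10
www.example.com. 3600 IN RRSIG  A 13 3 3600 (
                      20300101000000 20250101000000
                      12345 example.com. <base64 signature> )
www.example.com. 3600 IN NSEC   example.com. A RRSIG NSEC
```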