* It's complicated to deploy and misconfigurations cause outages, and those outages get more severe the more people deploy DNSSEC.
* It sucks up all the oxygen from the effort to actually mitigate flaws in the DNS. The most important DNS security flaw is the last-mile problem between browsers and nameservers, and DNSSEC has practically nothing to say about that. DNSCurve, as a counterexample, does solve this problem, and it solves it regardless of whether 1 person deploys it or 300 million do. But all the oxygen has been stolen by DNSSEC.
* It provides a setting for us to transition the CA system from untrustworthy companies directly to world governments, with the most commercially important domain names giving CA-like authority to (wait for it) the US government.
* Any way you try to project the math out, it will be ludicrously expensive to deploy (the deployment numbers we see today are, effectively, trial/pilot deployments, since virtually no end-user software cares about DNSSEC).
* Speaking of expensive: since virtually no modern networking software is built on the assumption that there can be (a) transient (b) security failures for DNS lookups, actually deploying DNSSEC is going to require forklift upgrades to huge amounts of already deployed code. Just to make that clear: imagine you're still using gethostbyname() to look up names, like lots of code does. How does your lookup code change to accommodate the fact that a query can, under DNSSEC, (a) fail (b) despite the fact that there was a response to the query with a usable record? TLS solves this with a pop-up dialog. Where does the dialog go?
* The most common mode of deployment for DNSSEC leaks hostnames; it essentially re-enables public zone transfers (a sketch of the enumeration appears after this list). To avoid this problem, you can theoretically deploy minimally covering NSEC records ("white lies"), but despite the fact that this is the only "safe" way to deploy DNSSEC, it's not the default. Why? Because white lies require online keys, and the original premise of DNSSEC was to keep keys offline. Net result: many, many deployers of DNSSEC --- should we be so unfortunate as to have many deployers of DNSSEC --- will accidentally leak the contents of their zones to the Internet.
This is a subset of the reasons I don't like DNSSEC (a more significant one to me is that I believe it's cryptographically obsolete); it's just the subset that, off the top of my head, I think demonstrates the harm DNSSEC would do beyond simply not solving the problem it ostensibly solves.
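To make the zone-enumeration point concrete, here is a rough sketch of walking an NSEC chain. It assumes the dnspython package and a zone signed with plain NSEC rather than NSEC3; the record limit is arbitrary.

    import dns.name
    import dns.resolver

    def walk_nsec_chain(zone, limit=100):
        """Enumerate owner names in a DNSSEC-signed zone via its NSEC chain.

        Every NSEC record names the next owner name in the zone, so asking for
        the NSEC record of any name you already know leaks its successor;
        repeating the query walks the zone. Works against plain NSEC only."""
        apex = dns.name.from_text(zone)
        seen = [apex]
        current = apex
        for _ in range(limit):
            try:
                answer = dns.resolver.resolve(current, "NSEC")
            except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
                break
            nxt = answer[0].next          # the following owner name in the zone
            if nxt in seen:
                break                     # the chain wrapped back to the apex
            seen.append(nxt)
            current = nxt
        return [name.to_text() for name in seen]

Minimally covering NSEC ("white lies") defeats this walk, because each synthesized NSEC covers only the name that was queried; but, as noted above, it requires keeping signing keys online.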
> It's complicated to deploy and misconfigurations cause outages, and those outages get more severe the more people deploy DNSSEC.
It needs to be maintained, of course, just like any deployment, but is it worse in this regard than other protocols? I would say no. If you can get a TLS certificate for your web server, you can get your DNS records signed.
The whole infrastructure is in place. Your operations team should know this already. There are books, courses, certifications. The whole shebang.
> DNSCurve, as a counterexample, does solve this problem, and it solves it regardless of whether 1 person deploys it or 300 million do. But all the oxygen has been stolen by DNSSEC.
DNSCurve solves none of the problems that DNSSEC solves.
That's the problem with DNSCurve adoption. Not some conspiracy by the DNSSEC cabal.
> It provides a setting for us to transition the CA system from untrustworthy companies directly to world governments, with the most commercially important domain names giving CA-like authority to (wait for it) the US government.
This is just false. DANE shifts the authority from some 300 individual organisations, private and otherwise, into your parent zone, which you must trust anyway under every sane DNS model.
The authority for the system as a whole is delegated to a transparent organization, currently indirectly appointed by the US government. If the word "government" here makes you see red, you're missing out on the bigger picture.
> it essentially re-enables public zone transfers
This is a true problem with DNSSEC. But it has been discussed over and over again for 15 years and no one really thinks it is a showstopper. It's well covered in the literature.
> a more significant one to me is that I believe it's cryptographically obsolete
Well, show that then. There is a whole working group who'd love to hear something more concrete. That's the way standards should be set, not by personal beliefs.
"Operations teams should know by now how to handle DNSSEC" isn't a rebuttal to "DNSSEC is complex".
DNSCurve converges to the same protection DNSSEC provides; the difference is that during the decade or two in which DNS security isn't fully deployed, DNSCurve actually does something useful, and DNSSEC doesn't. Don't get hung up on the tactical value of DNSCurve just because DNSSEC has no such value. It's a long-term win too.
"The CA system is worse" isn't a rebuttal to "DNSSEC/DANE gives the USG direct control of certificates".
> "The CA system is worse" isn't a rebuttal to "DNSSEC/DANE gives the USG direct control of certificates".
Again, please stop the overly broad statements and explain precisely HOW "DNSSEC/DANE gives the USG direct control of certificates". I've not yet been able to have someone explain to me how this is true. I put my cert (or a fingerprint) in a TLSA record signed by my DNSSEC key in my DNS zone. Where does the USG get involved?
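For concreteness, here's roughly what that record contains; a minimal sketch in Python, assuming you already have the DER-encoded certificate in hand (the _443._tcp.example.com name is just a placeholder):

    import hashlib

    def tlsa_rdata(der_cert: bytes) -> str:
        """Build the RDATA for a DANE-EE TLSA record: usage 3 (end-entity),
        selector 0 (full certificate), matching type 1 (SHA-256 of the DER)."""
        return "3 0 1 " + hashlib.sha256(der_cert).hexdigest()

    # Published (and DNSSEC-signed) in the zone as something like:
    #   _443._tcp.example.com. IN TLSA 3 0 1 <sha-256 hex digest>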
Maybe it's just that I'm deliriously tired, but I'm still missing how "the DNS root" can alter the data in MY zone sitting on my little authoritative server somewhere. Can you please walk me through the actual attack?
As someone else pointed out, all the root can do is potentially redirect a TLD to a controlled TLD registry, which could then conceivably serve out NS records for my domain pointing to an owned authoritative DNS server... which could then serve out bogus data. BUT... I have to think someone would notice the redirected TLD!
> "Operations teams should know by now how to handle DNSSEC" isn't a rebuttal to "DNSSEC is complex".
It's not meant to be. "Is it worse than comparable protocols? I would say no." is the argument. This is all a matter of personal opinions of course, but it's not more complicated than the TLS/CA system.
The operating procedures are modelled on regular DNS, on purpose, and should fit well into your existing workflow. But this is all beside the point, since the standard works and is in place all around the world.
> DNSCurve converges to the same protection DNSSEC provides
Please don't. This discussion has been had a million times on IETF mailing lists and I don't know why it keeps popping up. Perhaps djb has some sort of fan base out there who wants him to "win" some imaginary discussion.
No, DNSCurve is designed to secure your DNS questions and answers from prying eyes. DNSSEC is designed to authenticate the existing DNS system and protect it from tampering, both at resolving and at authoritative servers.
One of the early design goals of DNSSEC was to be backwards compatible with the existing DNS system, protocols and implementations.
Any divergence from these principles will be very difficult to get implemented. The reason DNSCurve, or any other proposed standard in the DNS space, is without implementations should be sought here, not in a conspiracy.
> DNSCurve actually does something useful, and DNSSEC doesn't.
That's simply not true. DNSSEC is implemented around the world and solves a problem. Whether it is a useful problem or not depends on your point of view.
But there is no reason to come up with all these straw man arguments against a technical standard. Please join the relevant mailing lists and speak your mind in more technical terms instead. There is no cabal in your way, but you must be prepared to argue design goals, operating procedures, implementation, and Internet governance.
> "The CA system is worse" isn't a rebuttal to "DNSSEC/DANE gives the USG direct control of certificates"
That's not what I said.
No, DANE does not in any way give the US Government direct control over certificates. It does, however, make them dependent on the DNS root, which is administered by an organization indirectly appointed by the US Government.
However, that is not a problem, given that the US Government doesn't have much actual say over daily operations -- and even if it did, that would not be a practical attack vector, for the reasons laid out above.
"The CA system is worse" is a rebuttal to not doing anything. If you want the CA model gone, you need to get your TLSA records out there and you need to get them signed, now.
The reason for that is that there is no alternative. DNSCurve does not solve this problem. TACK does not solve this problem. No other system has support in any popular DNS software, and designing any contender takes ten years to get implementation, support and operations right.
Please keep attacking this problem, and please keep testing new ways to solve this. Not by rehashing old arguments against DNSSEC that have already been rebutted, but by embracing it and seeing what can be done better.
That's because you haven't actually responded to anything in this thread. You just repeat your standpoint that you'd personally prefer DNSCurve and DNSSEC adoption holds it back.
You write that as if there weren't thousands of words of my comments, none of which involve a mere personal preference for DNSCurve, that you literally need to wade through to get to this comment. Which says more about the strength of your argument than mine.
> How does your lookup code change to accommodate the fact that a query can, under DNSSEC, (a) fail (b) despite the fact that there was a response to the query with a usable record?
Hmm... I'm not clear how a failed query could have a "usable" record. If DNSSEC validation fails then the response to the query can't be trusted. So that would not be "usable" to me. Or am I missing something here?
What percentage of failed TLS certificate validations are the result of attacks, versus benign operational failures? My guess is that whichever one has the majority has it to the tune of 99.9999%.
That doesn't answer the question I asked. If you issue a DNS query to your resolver and DNSSEC validation fails, then the resolver can't trust the result it received as that info could be bogus. I can't see how there is a "usable result". Am I missing something?
If TLS had used the same model, it would never have succeeded. You are missing something: 99.9999% of the time, DNSSEC resolver failures will be benign, and the data returned in the query not only useful but required for connectivity.
When TLS was implemented, it had the benefit of being entirely new; every piece of software that TLS secured had to be modified to accommodate it, and so almost everything that implements TLS has some kind of policy switch for how to handle verification failures.
But part of the pipe dream of DNSSEC is that it's a switch server operators can flip on behalf of all their millions of users. And of course that's not going to work, because different users and different sites are going to have radically different policies for when to "click through" a resolver failure. But because the code for all this stuff was written back in the 1980s, none of it supports any kind of policy lever for this problem.
When a TLS certificate fails to validate, you know something went wrong with TLS. When a DNSSEC resolution fails, gethostbyname() just returns NULL, and the host falls off the Internet. You're a programmer, right? What are the implications of this problem for your users?
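To put that in concrete terms, here's a minimal sketch using Python's thin wrapper over the same lookup interface; the shape is what most deployed resolution code looks like:

    import socket

    def lookup(hostname):
        """Resolve a name the way most deployed code does: one call, no
        DNSSEC awareness, and no policy hook for validation failures."""
        try:
            return socket.gethostbyname(hostname)
        except socket.gaierror:
            # The application can't tell a transient resolver problem from a
            # DNSSEC validation failure (a usable record existed, but the
            # validating resolver threw it away), and most callers don't even
            # distinguish either of those from "the name doesn't exist".
            # There is no dialog to put up and no switch to flip; the host
            # simply falls off the Internet.
            raise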
This, incidentally, is something DNSCurve got entirely right. Unlike DNSSEC, where virtually every failure is going to be benign (because DNSSEC is even harder to administrate than TLS, where 99.9999% of failures are also benign), DNSCurve fails when attackers fuck with connections and in basically no other situations.
Ah, so you don't like that DNS resolvers just return a regular SERVFAIL when DNSSEC validation fails, without some kind of indication as to why the failure occurred.
I agree. I wish a failure of DNSSEC validation returned a different code or set a bit somehow, so that the querying client could then know that the failure was because of DNSSEC validation. The client could then take action based on that knowledge.
A couple of us (who were not involved in the early discussions of DNSSEC) were discussing re-introducing this idea to see what kind of traction it might get now that we've seen a good bit of DNSSEC deployment and have more operational experience.
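In the meantime, about the closest a client can get is a heuristic: re-ask the same question with the CD (Checking Disabled) bit set and compare the outcomes. A rough sketch, assuming the dnspython package; the resolver address is only an example.

    import dns.flags
    import dns.message
    import dns.query
    import dns.rcode

    def looks_like_validation_failure(name, resolver="8.8.8.8"):
        """Heuristic: if a normal query SERVFAILs but the same query with the
        CD (Checking Disabled) bit set succeeds, the data was probably there
        and was rejected by DNSSEC validation rather than lost to an outage."""
        plain = dns.message.make_query(name, "A", want_dnssec=True)
        nocheck = dns.message.make_query(name, "A", want_dnssec=True)
        nocheck.flags |= dns.flags.CD

        r_plain = dns.query.udp(plain, resolver, timeout=5)
        r_nocheck = dns.query.udp(nocheck, resolver, timeout=5)

        return (r_plain.rcode() == dns.rcode.SERVFAIL
                and r_nocheck.rcode() == dns.rcode.NOERROR)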
> * It's complicated to deploy and misconfigurations cause outages, and those outages get more severe the more people deploy DNSSEC.
I would dispute that. Pretty much all of the authoritative name servers (BIND, NSD, Windows, Knot) have made the signing service a few lines in a configuration file. YES, there are operational steps you need to put in place, primarily related to KSK rollovers, but the actual deployment is pretty simple these days.
Similarly, on the validation side, deploying DNSSEC validation is basically one line in a config file for BIND and Unbound and a little bit more in Windows Server 2012. It's simple to deploy.
> * It sucks up all the oxygen from the effort to actually mitigate flaws in the DNS. The most important DNS security flaw is the last-mile problem between browsers and nameservers
That may be the most important issue to YOU, but to others ensuring the integrity of the DNS info is more important.
> * It provides a setting for us to transition the CA system from untrustworthy companies directly to world governments
Huh? I keep hearing DNSSEC critics bring this up and I've yet to have anyone truly explain how this can happen.
> * Any way you try to project the math out, it will be ludicrously expensive to deploy (the deployment numbers we see today are, effectively, trial/pilot deployments, since virtually no end-user software cares about DNSSEC).
There are 18 million customers of Comcast in North America who are receiving the integrity protection of DNSSEC. Every Comcast user is having every DNS query validated by DNSSEC. Please take a moment to look at the stats off of this page: http://gronggrong.rand.apnic.net/cgi-bin/ccpage?c=XA&x=1&g=0... (which I know you've seen because we've discussed this on Twitter). There are very real deployments of DNSSEC validation happening around the globe.
I've run out of time right now to answer... and the reality is that you and I could probably just go back and forth on this for quite some time. You don't like DNSSEC. I like DNSSEC for solving the problems it does (and also for providing additional capabilities such as DANE).
Trust in DNSSEC rolls up the DNS hierarchy, the top of which is overwhelmingly controlled by governments. I'm at a loss to see why this isn't an obvious problem to you.
If you don't want to trust DNS then don't trust DNS. All DNSSEC does is verify that the response you get from a DNS server is the response the domain owner wanted you to receive.
It's also a bit of a stretch to say that DNS is controlled by governments. I don't even know how to unpack that statement as it seems overly broad and unnuanced. When the USG wants to seize a domain they can seize a domain. DNSSEC has nothing to do with that. The USG seizes domains now, and they'll most likely seize domains if DNSSEC reaches full deployment. Whether or not DNSSEC is deployed is entirely irrelevant to any seizure of domain names by any government.
I'm all for working on non-hierarchical naming systems for the Internet, but DNS is already rooted in hierarchy. We might as well have a hierarchy we can trust, so why not DNSSEC?
DNSSEC is a hierarchical PKI, like the CA system. Just off the root of the DNSSEC hierarchical PKI are a series of branches that are controlled entirely by world governments. I don't know how to say this any more clearly without sounding patronizing. In a DNSSEC/DANE world, the Libyan government can successfully publish a fake certificate for BIT.LY.
I absolutely do not accept the premise of the question in your last sentence. No, let's not bake a trusted hierarchy into the core of the Internet, please.
> DNSSEC is a hierarchical PKI, like the CA system. Just off the root of the DNSSEC hierarchical PKI are a series of branches that are controlled entirely by world governments.
I am assuming you are talking about the country-code TLDs (ccTLDs) here, correct?
I agree that certainly many of those ccTLDs are directly controlled by governments while many others are operated on behalf of governments.
The generic TLDs (gTLDs) are different and are mostly operated by private companies operating registries under contract with ICANN. With the "new gTLD" program there are now MORE gTLDs than there are ccTLDs.
> I don't know how to say this any more clearly without sounding patronizing. In a DNSSEC/DANE world, the Libyan government can successfully publish a fake certificate for BIT.LY.
Simple answer - if you don't trust the Libyan gov't, don't use a .LY domain! I would argue that in DNS in general you do need to trust your parent zone. If you don't trust them, don't use them. Period.
If you are a service provider, then yes, you can avoid using any particular TLD. If you're a user, you can't use bit.ly services with DNSSEC without trusting the .ly TLD, right?
> If you're a user, you can't use bit.ly services with DNSSEC without trusting the .ly TLD, right?
True, although you can remove the "with DNSSEC" part. You can't use bit.ly (or any other .ly) without trusting the .ly TLD.
Interesting, I wasn't thinking about it from the user point of view - I was thinking about it from the domain registrant who is publishing a DNS zone under .LY. But you're right that it equally applies to the end user from the client perspective.
DNSSEC doesn't create or diminish any trust in ccTLDs that wasn't already there. The Libyan government has all the authority to mess with .ly, it's their ccTLD. Without DNSSEC the Libyan government can mess with .ly, and with DNSSEC the Libyan government can mess with .ly. The main difference is that with DNSSEC, if someone does mess with .ly you know it was the Libyan government. Without DNSSEC attribution of the messing-with becomes much more difficult.
I think we both agree that there are innate problems with hierarchies of trust. Unfortunately, for better or for worse, we're stuck with hierarchies until something better comes along. Let's also not make perfection the enemy of the good: Namecoin, or other massively distributed naming systems, might eventually develop into really interesting technologies. However, for the immediate future, we're stuck with DNS and we should make the most of it.
No. The Libyan government does not have the authority to surreptitiously control BIT.LY. That's not how the Internet trust model works. Even in the badly broken implementation we have today, there are things BIT.LY can do to override Libya; for instance, they can have their certificate pinned to a specific trusted CA.
The general belief that TLD managers "already" control sites is probably behind a lot of otherwise-inexplicable DNSSEC boosterism. Because if you really believe that, then sure, giving still more control to the operators of those TLDs is just a cosmetic change.
But, thankfully: no. No, no, no. The operators of .COM don't get to monitor Google mail. The government of the British Indian Ocean Territory doesn't get to patch Redis.
The schemes we have now to fence Internet trust off from TLDs are imperfect. The right response to that is to make them better. DNSSEC advocates act like this is a pipe dream, but there are surprisingly simple things we can do right now to massively improve the situation, like adopting TACK or HPKP to turn everyone's (or at least everyone running Chrome and Firefox's) browsers into a surveillance system for attempts to compromise Internet trust.
"But we can do this in a DNSSEC world, too", the DNSSEC advocates say. That's not quite right; they mean to say, "but we still have to do this in a DNSSEC world". Two problems. First, if we're going to rely on HPKP/TACK and CT as a bridge to a decentralized reasonable trust system, why waste the time and effort on DNSSEC in the first place? Answer: there is no compelling reason to do that. Secondly: DNSSEC actually makes it harder to do those things; among the reasons, when TLD operators misbehave, there is no recourse at all for rectifying the situation. How long do you think it takes the Chrome team to flip the switch to remove a rogue CA? Now, how long will it take them to remove .COM?
>In a DNSSEC/DANE world, the Libyan government can successfully publish a fake certificate for BIT.LY.
They can also do this in a non-DNSSEC/DANE world, at the crudest, lowest level, by hijacking the domain and then buying a new cert and clicking the confirmation email.