Against DNSSEC (sockpuppet.org)
186 points by tptacek on Jan 15, 2015 | hide | past | favorite | 144 comments

There are at least two more arguments against DNSSEC:

1) DNSSEC materially makes DDoS reflection attacks worse. djb has gone on about this at length, so I won't.

2) DNSSEC also degrades a DNS server's ability to respond to requests. A regular non-DNSSEC DNS server can look up responses in O(1) time; see djb's TinyDNS for an example (it uses a CDB hash table to do it). A DNSSEC server must either use an ordered index to find "nearest match" for NSEC(3) ... and hence is O(log N), or it must sign the response on demand (which is against DNSSEC religion) and takes something like quadratic time.
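The difference the parent describes can be sketched in a few lines (toy in-memory zone; tinydns's CDB is a constant-time on-disk hash, and the names here are hypothetical):

```python
import bisect

# Toy zone: a plain DNS server can answer from a hash table in O(1),
# which is roughly what tinydns's CDB gives you on disk.
zone = {"a.example.": "192.0.2.1",
        "m.example.": "192.0.2.2",
        "z.example.": "192.0.2.3"}

def plain_lookup(name):
    return zone.get(name)          # single hash probe

# NSEC-style authenticated denial must name the closest existing records
# around the query, which forces an ordered index and an O(log N) search.
ordered = sorted(zone)

def nsec_interval(name):
    i = bisect.bisect_left(ordered, name)
    prev_name = ordered[i - 1] if i > 0 else ordered[-1]  # wraps, like NSEC
    next_name = ordered[i % len(ordered)]
    return prev_name, next_name    # "nothing exists between these two"

print(plain_lookup("m.example."))   # 192.0.2.2
print(nsec_interval("b.example."))  # ('a.example.', 'm.example.')
```

The denial answer for a nonexistent name is the signed interval around it, which is exactly what requires the sorted index.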

It is a meaningful increase in both the computational and space complexities of serving a core infrastructural protocol - but for no meaningful benefit.

Well, I'm sure plain DNS lookup responses are faster, but I was taught that the difference between O(1) and O(log N) is really just theoretical, and that in practice the constant factor typically matters more. log N grows really slowly as a function of N.

O-notation lacks utility. It isn't sufficient for a thorough analysis of the space/time complexity of algorithms, and profiling is much easier anyway.

In this case: The constant coefficient in a DNS lookup is very small, whilst the constant coefficients in a DNSSEC response are very big.

On-the-fly signing (with "white lies") is also required to prevent enumeration of the zone.
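The enumeration problem in miniature: each static NSEC answer names the next record in the zone, so a walker can recover every hostname by following the chain (toy sketch with hypothetical names; real walking happens over the wire):

```python
# Toy NSEC chain: each name points to the next name in canonical order,
# wrapping at the end -- exactly what lets an attacker "walk" the zone.
nsec_next = {"apex.example.": "mail.example.",
             "mail.example.": "secret-dev.example.",
             "secret-dev.example.": "apex.example."}  # wraps to the start

def walk_zone(start):
    found, name = [], start
    while True:
        found.append(name)
        name = nsec_next[name]
        if name == start:          # chain wrapped: we've seen the whole zone
            return found

print(walk_zone("apex.example."))
```

With "white lies," the server instead signs a fresh, minimal interval around each queried name on the fly, so no response ever reveals a real neighboring record.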

1) Is not a valid argument, or at least shouldn't be. By that logic we should completely drop UDP.

I agree with everything Thomas has said here, but I also think that it might be a good idea to explain why proponents believe DNSSEC is a good idea for those who don't have a good handle on it.

1. DNS is vulnerable to man-in-the-middle attacks. Any authority with access to a root server or another router in the DNS resolution chain can tell a computer that a web page goes anywhere they want. This actually happens - for example, Turkey tried to do this with Twitter, redirecting Twitter's domain name to a government IP address for anyone within the country.

2. There is a dangerous vulnerability called DNS cache poisoning where UDP packets can be used to manipulate DNS servers to incorrectly respond when queried for a domain's IP address until their cache expires. This attack was described in 2008 by Dan Kaminsky, and on the whole, a methodology of preventing this sort of vulnerability would be good enough motivation for repairing the failings of present-day DNS.

The motivation is great, but DNSSEC is largely the wrong tool for the job - it's a blunt instrument with no support for nuance and it will make DNS harder. What Thomas explains in greater technical detail is that DNSSEC is a move towards even further internet centralization that will make implementation more difficult without really solving the principal issues.

It's worth keeping in mind that the Turkish government hijacked DNS resolvers; as-deployed DNSSEC would in no way have helped.

The 2008 attack was not novel; it was described by DJB about 7 years before Kaminsky, and DJB's utilities included a mitigation: randomized source ports. At present there is an equivalent vulnerability: UDP/IP fragments can be spoofed in such a way that poison records can be injected into resolvers. It is possible to mitigate this, but many DNSSEC proponents - including the BIND maintainers - refuse to consider mitigations because they feel that people should be moving to DNSSEC instead. It bears repeating that, as deployed, DNSSEC doesn't help protect against cache poisoning towards stubs - only towards caching resolvers.
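The back-of-envelope arithmetic behind that mitigation: a blind spoofer must guess the 16-bit transaction ID, and source-port randomization adds roughly another 16 bits (a sketch ignoring birthday effects from parallel in-flight queries):

```python
# Expected number of forged packets a blind attacker must send, on
# average, to match one in-flight query (geometric mean = 1/p).
txid_space = 2 ** 16             # DNS transaction ID
port_space = 2 ** 16 - 1024      # usable ephemeral source ports, roughly

without_port_rand = txid_space               # fixed port: guess TXID only
with_port_rand = txid_space * port_space     # guess TXID and source port

print(without_port_rand)   # 65536
print(with_port_rand)      # 4228186112, about 4.2 billion
```

Kaminsky's refinement attacked the left-hand number by letting the attacker retry continuously; port randomization restored the right-hand one.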

DNSSEC was also the reason why source ports weren't randomized, and thus the reason that Kaminsky's attack was briefly so viable. Somewhere there's a NANOG post with the BIND maintainers talking about the "scoreboarding resolver" they'd implemented but wouldn't release, because (even back in 1997) the real answer was DNSSEC.

I'm not sure I agree about the novelty of Kaminsky's attack. Ruefully: I remember arguing with him about that on Twitter, him getting me on the phone to explain how his attack refined the older cache poisoning attacks, me conceding it, and then bad stuff happening a few months later.

> and then bad stuff happening a few months later.

Are you referring to http://www.gonullyourself.org/ezines/ZF0/zf0%205.txt perchance?


And DNSSEC probably did look good back in 1997, before the day of gTLDs and URL shorteners for example.

> It's worth keeping in mind that the Turkish government hi-jacked DNS resolvers; as-deployed DNSSEC would in no way have helped.

Entirely false!!! Please get your facts right. [Not talking about the 2nd part of your comment]

Yes, the Turkish government hijacked the Turkish DNS resolvers, so they were returning incorrect IPs for the twitter.com domain.

Yes, DNSSEC would have helped. The Turkish government does not have any of:

- The keys for the root zone

- The keys for the .com TLD

- The keys for the twitter.com domain

As a result, the Turkish government had no way to forge signatures for the twitter.com domain. All they could do was send a fake AD bit.

Using DNSSEC to validate answers for twitter.com from Turkey during the hijack would have failed. That is the exact purpose of DNSSEC.

My facts are right. As-deployed, clients don't validate DNSSEC; so the resolvers can pass on any response and the stubs/browsers will accept it. The signatures aren't being checked.

Edit to update: The Turkish Government forged DNS to block Twitter. Even if clients did start validating DNSSEC - if the failure mode is that Twitter is still blocked, just for a different reason, is that really useful? A government could simply drop all queries for twitter.com too. How does DNSSEC meaningfully help?

The signatures should be checked client-side. As I pointed out in another comment on this page (https://news.ycombinator.com/item?id=8896318), recent studies find that more and more resolvers send the RRSIGs to the clients, who then should check them.

In the case of the Turkish hijacking, of course hijacked resolvers would not send RRSIGs. This should prompt the client to request another resolver.

It depends on what you mean by clients. The most popular Firefox plugin to validate DNSSEC does in fact validate responses properly, and would have caught the Turkish MITM.

I understand DNSSEC may not be the perfect solution, but many arguments here are misguided. Worse yet, no real solution has been proposed in its stead that works perfectly. But here I go again. In fairness, there are some valid arguments. I'm cherry-picking the pieces I find questionable.

DNSSEC is a Government-Controlled PKI - DANE is not DNSSEC. Why conflate the two?

DNSSEC is Cryptographically Weak - Wait, what? That's like saying openssl is cryptographically weak, since it allows you to create weak keys. You can use ECDSA with DNSSEC. Minimal research would have revealed that.

DNSSEC is Expensive To Adopt - It is up to the software to decide how to handle failure cases. Software with security in mind should do the proper checks. Others should display warnings. Others should probably just accept the answer.

DNSSEC is Expensive To Deploy - What? DNSSEC is braindead simple to deploy. It literally takes a handful of minutes.

DNSSEC is Incomplete - Unclear...not positive how DNSSEC relates to browser implementation.

DNSSEC is Unsafe - I'll possibly give you this one, but stand by that you shouldn't hide in plain sight. That is, why put 'hidden' records in a public zone?

DNSSEC doesn’t have to happen. Don’t support it. - DNSSEC has already happened. People ask for it. However, largely, I support the argument. Start a working group, propose a better alternative, and replace it. I'm all for that, believe it or not.

There is a real alternative solution, and it has the virtue of being exceptionally simple: do nothing. The DNS doesn't need to be secured, just like raw IP isn't, nor every individual BGP4 update.

DANE (using DNS as an alternative to TLS CAs) is the primary real-world use case for DNSSEC.

No site in the world is secured purely using ECC DNSSEC, because the roots and TLDs rely on RSA. Moreover: find the most popular site that uses ECC DNSSEC and report back what it is. (It'll be tricky, because very few of the most popular sites use DNSSEC at all). And, of course, the ECC variants DNSSEC uses are already outmoded. Which is why I was careful to refer to modern deterministic Edwards-curve signature schemes. Also: did you really look at the links in that post and think it lacked even "minimal research"?

I don't see any other arguments in this response, but you can call me out if I missed any.

I'm using DANE on most of mine, but the public-key pinning + CA usage of it - so an MITM would need to either steal signing keys, or a pocket CA and to swap my DS record. Not impossible but it's an extra barrier (instead of a replacement for the PKIX). With free certs on the near horizon from Let's Encrypt as well, then the two together allow two independent barriers, one with certificate transparency, and that may work quite well for domain validation in practice.
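The "pinning + CA" combination the parent describes corresponds to the PKIX-constrained TLSA usages, where normal CA validation still applies and the record pins a specific key on top. A minimal sketch of building the record data (the key bytes here are hypothetical stand-ins for a real SubjectPublicKeyInfo blob):

```python
import hashlib

# TLSA record fields: usage, selector, matching type, association data.
# Usage 1 (PKIX-EE) keeps the usual CA chain validation AND pins this
# particular key -- the "extra barrier" rather than a PKIX replacement.
def tlsa_rdata(spki_der: bytes, usage=1, selector=1, matching=1):
    digest = hashlib.sha256(spki_der).hexdigest()  # matching type 1 = SHA-256
    return f"{usage} {selector} {matching} {digest}"

fake_spki = b"hypothetical-spki-der-bytes"
print(tlsa_rdata(fake_spki))
```

An attacker then needs both a mis-issued certificate and a swapped DS/TLSA record to go unnoticed, which is the two-independent-barriers argument.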

The country that owns/operates the TLD I'm under, or the registrar, can already swap my DNS, but DNSSEC makes it harder for others to. I don't know what more we can do about that - probably more in the realms of I2P or Tor, etc.

DNSSEC has definitely got some problems. The reflection problem, for one, which needs careful mitigation. And yes, it could use modern crypto badly, but that's waiting on standardisation; signatures are a way off in CFRG and then there's all the HSMs. It may be better as part of a nutritious layering breakfast; DNSCurve on top, maybe?

You won't see me arguing for plaintext anything, however, so if unsigned plaintext is the alternative, bugger that. QUANTUMDNS is just way too easy as a man-on-the-side otherwise.

BGP is a total clusterfuck and needs a lot of security it doesn't have - that's about all I can say as to that.

IIRC when I crawled the alexa top million's DNSKEY records none of them used ECC. I did see a few using DSA.

DANE is not DNSSEC, and was not and is not the motivation behind it. That thought is incorrect.

As for algorithms, you're insulting DNSSEC as it's been implemented, not as a protocol. DNSSEC purposely reserves up to 255 slots for algorithms, and has added them as it's evolved. But claiming it depends on RSA and doesn't support modern elliptic-curve schemes is at worst wholly false, at best half false, depending on the definition of modern.

All I can say is that I've read, I believe, every dnsext and namedroppers post and the archives of the old mailing lists from the 1990s, when DNSSEC was a government funded project of my then employer, for whom I was busy writing DNS security testing tools, and I believe you're wrong: DANE is the motivating use case for DNSSEC.

It's a pretty weird rhetorical spot to put yourself in, by the way. Without DANE as a motivation for deployment, there is no reason to deploy DNSSEC. DNSSEC can't provide end-to-end security, and TLS and SSH already authenticate themselves without DNSSEC.

I don't know what your second paragraph means. You can see what I'm referring to empirically by playing with DNSviz. Like I said: go find the largest DNSSEC site that uses ECC records. Then, for fun, and not counting the TLDs themselves (sigh), the largest that use RSA-1024.

Note that only SHA-1 and RSA must be supported by conforming DNSSEC implementations; see RFC 4034. That's probably why support for other ciphers or hash algorithms is inconsistent at best.

So just to be clear, the thesis here is that we should not secure DNS in any way, correct? Or is part 2, in a series of articles going to detail a different solution?

In other words, is the objection to DNSSEC the idea, or the implementation?

Lastly, what is our best hope for getting rid of CA's?

The thesis here is in fact that we should not secure DNS in any way, just like we don't secure raw IP and won't for the foreseeable future secure BGP4. We instead do what we've done for 20+ years: assume the network is untrustworthy (because it is, with or without DNSSEC) and build security on top.

Thanks for clarifying! I have followed many of the discussions in which you participated talking about this issue, but this is the most concise way this has been put.

If I may suggest something, the article talks a lot about the implementation flaws in DNSSEC, giving the impression that if only it was designed better, or even redesigned, that we could actually have a secure DNS.

DNSSEC is probably OK if all you use it for is within an organization for your own stuff (ie, with your corporate root CA, for your corporate domains only, where you control for and vouch for every node connected), and then using it to enhance other things like distributing SSH keys through SSHFP (again, totally internally, not internet facing).

But everything else said here I totally agree with - if you don't control the root, you don't control a whole lot of anything.

If you want something that hides the contents of DNS packets in transit, DNSCurve at least does that, and is relatively easy to deploy - the current best server appears to be the curvedns in the freebsd ports (updated to use libsodium, etc.): https://svnweb.freebsd.org/ports/head/dns/curvedns/

> if you don't control the root, you don't control a whole lot of anything.

Which is of course just as valid for TLS. Do you suggest people stop using HTTPS?

> DNSSEC is Expensive To Adopt/Deploy

I live in the .cz domain, and almost every registrar there offers DNSSEC in one click (without changing the price) - adoption here is 40%! - see https://stats.nic.cz/stats/domains_by_dnssec/ . I am not experiencing any of the mentioned problems at all.

Maybe DNSSEC is not the best DNS protocol in the world, but these arguments are not entirely based on facts.

This is like saying "TLS is easy if you just let GoDaddy manage your keys for you".

So ?

It might be worse than securing it yourself, but it's still way better than no security at all.

Incorrect. You don't have to let CZ.NIC manage your keys for you.

That's not what he's saying. DNSSEC is a pain in the ass to operate on your own, and that's with guides like http://users.isc.org/~jreed/dnssec-guide/dnssec-guide.html (which is itself relatively new). BIND, at least, has a lot of bad defaults (e.g., NSEC), and the docs make poor recommendations about algorithms and key sizes (SHA-1? 1024-bit RSA keys? and what's this about RSA being deprecated after September 2015?) - even NIST's guidelines make stronger recommendations than that (http://csrc.nist.gov/publications/nistpubs/800-131A/sp800-13..., and that still has Dual EC DRBG on the books). Furthermore, ISC's guide downplays the importance of DNSSEC on private networks, which is probably the most important (in terms of opsec) and most difficult (in terms of complexity) place to implement it.

DNSSEC doesn't offer any privacy guarantees, either, so at this point DNSSEC doesn't really do much for me.

That's not what I am saying at all.

I am saying "almost every registrar [in .cz] have DNSSEC in one click (without changing price)" is nothing like "letting GoDaddy manage your keys for you".

The first is a matter of cost. The second a matter of convenience.

It's not at all like letting a third party manage your keys in any reasonable way to interpret it.

Securing those DNS lookups therefore enables no meaningful security.

When you're reasonably sure that the DNS response you're seeing is actually what the zone owner meant it to be, you can have records for SSL certificates (DANE, mentioned later), or GPG keys for users with email in that domain, all kinds of things which were not previously possible. Each of those may or may not be a good idea but it clearly enables meaningful uses.

DNSSEC is a Government-Controlled PKI

Sure. But once you're willing and able to forge registry information about a domain, you can also usually get a valid (DV) certificate for it. Assuming you don't control a CA to begin with. So at least we're being honest with ourselves and stop paying for the security we don't get.

And DNSSEC doesn't get in the way of any replacements, does it? You can sign the certificate with your personal key. Or you can use some sort of convergence mechanism to see if you're seeing the same public key as everyone else.

Authenticated denial. Offline signers. Secret hostnames. Pick two.

The first two, right? (Secret hostnames?)

Articles like this (whole lists of criticisms) sound lawyery and unconvincing to me. I can't tell which ones are the meat and which are garnish. Not that a committee design cannot be flawed in multiple ways (death from 1000 cuts style) but this feels as if the author didn't like the idea first, perhaps for a perfectly valid reason, and came up with a bunch of additional reasons later.

Full disclosure: I signed one of my zones and I run validating resolvers for my private needs (unbound, both on servers and my home network). Nothing blew up. Yet.

Governments today have too much control over Internet crypto keys. I am baffled by arguments that suggest it might be a good thing to just give up and give them total control over them.

Today many many governments can create TLS certs for any website domain.

With DNSSEC, only the USA can, and other governments can only create TLS certs for a small and distinct subset of website domains. That's better.

(It would be even better if dns resolvers included certs for all TLDs they knew about, and did not rely on the root keys for those TLDs. Just like pinning TLS certs. Then, you can choose a TLD run by an organization you trust, and even the US won't be able to forge your certs.)

> With DNSSEC, only the USA can, and other governments can only create TLS certs for a small and distinct subset of website domains. That's better.

I fail to see how that is better.

Fewer organisations that can forge = fewer weak points = better security

And if the government or anyone creates a bogus cert for a TLS domain, you can exclude it from your cert store. DNSSEC, not so much.

Thomas - I'm still looking for the walk-through of how you see the attacks by a government against DNSSEC. I don't see them!

And to your repeated refrain both in your post and here on HN that "DNSSEC has virtually no real-world deployment", I would again refer you to various DNSSEC statistics sites that do indeed show a good amount of deployment happening in different parts of the world.

Here's a list of those sites: http://www.internetsociety.org/deploy360/dnssec/statistics/

I know you wish it were NOT happening, but the reality is that DNSSEC deployment is going on... and many of us see it as a way to provide another solid layer of security on the Internet.

Yes, it's always great to be exploring other alternatives, too. My personal interest is in "securing the DNS"... and right now DNSSEC is to me the best tool we have going today. If we can develop other tools over time that are even better, that will be outstanding. Meanwhile, I want to see what we have today get better deployed.

I keep seeing a claim that something like 10% of DNS requests are signed with DNSSEC, but that's an awfully hard number to square with the top sites on the Internet, virtually none of whom are DNSSEC-signed. Reconcile for us, please?

"signed with DNSSEC" is an ambiguous statement. You need to take into account that, while nameservers may enable DNSSEC, resolvers may:

(a) validate (include the AD bit) but strip the RRSIGs (i.e. regular DNSSEC resolver)

(b) omit the AD bit but include the RRSIGs (i.e. you have to validate yourself)

(c) omit the AD bit and omit the RRSIGs (i.e. regular non-DNSSEC resolver)

(d) validate (i.e. include the AD bit) and include the RRSIGs (i.e. alleluia!)
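The four cases reduce to two flags on the response, which makes the ambiguity of "signed with DNSSEC" easy to see (toy classifier; in reality `ad_bit` and `has_rrsigs` would be read off a real DNS response):

```python
def classify_resolver(ad_bit: bool, has_rrsigs: bool) -> str:
    # Maps the (AD bit set, RRSIGs present) pair to the four cases above.
    return {
        (True, False):  "a: validates upstream, strips RRSIGs",
        (False, True):  "b: passes RRSIGs through, client must validate",
        (False, False): "c: plain non-DNSSEC resolver",
        (True, True):   "d: validates and passes RRSIGs through",
    }[(ad_bit, has_rrsigs)]

print(classify_resolver(True, False))
print(classify_resolver(False, True))
```

Only (b) and (d) give a client any material to validate itself; (a) requires trusting the resolver and the last mile to it.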

My own study [0] (using the Atlas network [1]) found 30% of resolvers doing (a), and 65% doing (b), including some overlap. There are people way more qualified than me doing this kind of stuff, namely APNIC's Geoff Huston, see [2] for instance.

[0] https://www.os3.nl/_media/2013-2014/courses/rp2/p14_report.p...

[1] https://atlas.ripe.net

[2] http://www.potaroo.net/presentations/2014-06-03-dns-measurem...

Hmm... I'm not sure where you are seeing that claim that 10% of DNS requests are signed with DNSSEC. I've certainly promoted the statistic that 10-12% of DNS requests are performing DNSSEC validation - and that is based on Geoff Huston's DNSSEC validation metrics out of APNIC - see: http://stats.labs.apnic.net/dnssec/XA?c=XA&x=1&g=1&r=1&w=7&g... (although the measurement seems broken for the past few days)

> I'm still looking for the walk-through of how you see the attacks by a government against DNSSEC. I don't see them!

Assume we are attacking example.tld

For the US government: 1a) Use the root zone keys -- that are essentially already under their control -- to sign fake zone keys for fake .tld nameservers. 2a) Continue with 2b) below.

For the government under whose jurisdiction the .tld zone is operated: 1b) Get the zone keys for .tld using legal measures (e.g. the local equivalent of national security letters). 2b) Use the .tld zone key to sign fake example.tld zone keys for fake example.tld nameservers. 3) Serve signed records for example.tld records (via man-in-the-middle attacks).
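The steps above can be modeled with a toy chain of trust (every key, name, and record here is hypothetical; real DNSSEC uses DS/DNSKEY/RRSIG records rather than tuples):

```python
# Toy model: a "signature" is valid iff it was made with the key the
# parent zone vouches for.
def sign(key, data):
    return ("sig", key, data)

def verified(sig, key, data):
    return sig == ("sig", key, data)

root_key, tld_key = "root-k", "tld-k"

def validate_chain(tld_k, tld_sig, zone_k, zone_sig, records, rec_sig):
    return (verified(tld_sig, root_key, tld_k)       # root vouches for .tld
            and verified(zone_sig, tld_k, zone_k)    # .tld vouches for zone key
            and verified(rec_sig, zone_k, records))  # zone key signs records

# Steps 1b/2b: an attacker holding the .tld key mints a fake zone key.
evil_zone_key = "evil-k"
evil_records = "example.tld A 198.51.100.66"
chain_ok = validate_chain(
    tld_key, sign(root_key, tld_key),                  # genuine delegation
    evil_zone_key, sign(tld_key, evil_zone_key),       # forged (step 2b)
    evil_records, sign(evil_zone_key, evil_records))   # step 3
print(chain_ok)   # True: the forged chain validates up to the root

# Without the .tld key, the forged delegation does not verify.
print(validate_chain(
    tld_key, sign(root_key, tld_key),
    evil_zone_key, sign("random-k", evil_zone_key),
    evil_records, sign(evil_zone_key, evil_records)))  # False
```

The point of the sketch: possession of any ancestor key is sufficient, and the resulting answers are indistinguishable from legitimate ones to a validator.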

This is in practice a redelegation of the zone.

If you redelegate a zone to a new owner, that owner owns the domain for every intent and purpose, including generating signed TLS keys from pretty much any CA.

There are legitimate ways to delegate a zone, including selling it to someone else. It's not really an attack, and it's definitively not something you can protect against.

Except for the fact that there really are no owners of any zone in DNS and DNSSEC, beside the owner of the root zone. Every other zone is just a temporary delegation that lives as long as the delegation record is cached. If you go that far down, then sure, this does not constitute an attack, because the protocol does not claim to protect your "ownership" in any sense (except for the ownership of the root zone).

People do have mental models of domain ownership, though, which is founded in the contractual agreements they have with their registry. To them it feels like an attack when for a select group of people their domain lookups result in different records than for everybody else. And it makes it worse that selective (or tailored) man-in-the-middle attacks don't leave any traces behind.

> It's not really an attack, and it's definitively not something you can protect against.

Sure you can. See how namecoin cryptographically reserves domains for a certain owner. It is just a pretty big step away from the current practice of how the DNS is run.

> Except for the fact that there really are no owners of any zone in DNS

That's only true using your very own definition of ownership.

A zone has an owner in a strict juridical sense of the word. You can read in detail what this means in the relevant agreements for registrars.

> attacks by a government against DNSSEC

When people talk about government attacks against DNSSEC, I assume that means co-opting the TLD operator and feeding alternate, signed, validated-to-the-DNS-root copies of a particular domain to some target. Is that not the mode of attack being described?

How much more centralized does DNSSEC make the Internet? It's already bad enough that people are calling for mandatory TLS for all websites (though with free certs from the EFF/mozilla that will be a bit less painful) but if I want to have my own DNS root, or play by my own rules on the net, will running my own infrastructure be harder with DNSSEC?

Yes. If a CA egregiously misbehaves today, Google can kneecap it with an update to Chrome, and end-users can remove it from their configuration entirely (this should be easier to do, by the way, and there's no technical reason it can't be).

If the signers for .LY or .COM misbehave, nobody has any solid recourse. .COM's role in the post-DANE certificate hierarchy is baked into the fabric of the Internet.

DNSSEC represents a massive move towards centralization of Internet trust, which is a baffling thing to get behind in 2015.

Isn't the signer for .com the same as the owner of .com, so if they misbehave then your DNS resolution could be broken regardless of DNSSEC?

If you add DANE in the mix, they can set any certificate as trusted for your domain (if clients don't require it to be also signed by a trusted CA). So they can't just redirect your domain somewhere else, they can also feed clients "trustworthy" certs for the new target.

Sure, but without DANE, they can get a domain validated certificate quickly from any number of CAs that your clients probably already trust.

(edit) I understand DANE puts explicit trust in the registry, registrar, and the DNS root; but given the common use of domain-validated certificates, that trust is already there, and I think it is better to have it explicit. Also, there are fewer parties to watch out for: the Belgium Root CA can issue a cert for my domain, but Belgium is unlikely to compel my registry/registrar unless I've chosen to have a .be domain. (My apologies to Belgium Root, if they're not affiliated with the government of Belgium.)

Also, I don't think cert issuance can scale without domain validation or a large expense.

Coincidentally, VeriSign (responsible for DNSSEC for the root zone and .com) also runs a major CA in the existing CA infrastructure. Even worse, it is such a major player that removing it from the certificate store would invalidate the certificates from a huge amount of sites and it would be impossible for a browser to remove them without breaking a significant part of the internet. Thus while the situation is horrible now, it won't get any worse with DANE.

Verisign controls .com and is on pretty much every root CA list, so they can do what you describe today, no DANE required.

Do you not see a problem with a massive deployment of new crypto infrastructure that leaves Verisign in cryptographic control of any site in .COM?

Oh, I definitely do. I was just trying to say that Verisign could already do a similar attack even without DANE. As controllers of .com they could easily redirect example.com to an evil server, and as a root CA they could give the evil server an EV certificate for example.com.

If anything, I suppose that should be an argument against consolidating DNS and TLS powers into single entities, which is exactly what DNSSEC and DANE do.

Relying on Google to mitigate problems sounds a lot like centralization to me.

There are in fact organizations that scrupulously audit the certificates in all browsers/cert stores in their organization. One I know of is etsy.

Here is a reference well worth reading about how they approach security: http://www.slideshare.net/zanelackey/attackdriven-defense

Wow. That’s hardcore.

You know, it's possible to simply lift the guts out of the TLS protocol and wrap them around a directory service, or to take the architecture of DNSSEC and kludge it into TLS. Basically we can keep TLS's good points and support an alternative to CAs within the protocol itself. All a service provider'd have to do is pick one or both to support, and advertise in the protocol what combination they use. That'd be a fun weekend project.

Non-expert here, why are secret hostnames important? Aren't the machines either reachable by IP Address or not regardless of whether the hostname is known or not?

I guess it's a form of defense in depth. Hostnames often contain information about the role of a server that wouldn't be obvious from an IP scan.

Companies want security through obscurity, in this case for internal-network servers. Makes CTOs feel better, and security consultants are in the business of making CTOs feel good, so they want it too.

Staging sites which need to interoperate with a third-party are one place I've seen obscured hostnames used.

eg, You want to test some front-end changes in conjunction with changes to the CDN config. If the production site is origin.example.com, the staging site might be at origin-stage-asdgwdse.example.com because you (hope!) the changes don't leak early.

IPv4 literal addresses are often inconvenient / not feasible for larger stacks or cloud setups where you don't control the underlying infrastructure.

I don't think DNSSEC ensures the hostnames are secret. From RFC 4033 [0]:

"The DNS security extensions provide origin authentication and integrity protection for DNS data, as well as a means of public key distribution. These extensions do not provide confidentiality."

[0] http://tools.ietf.org/html/rfc4033

Not secret, Secure. Secret is overrated.

I think people are referring to this: http://en.wikipedia.org/wiki/Zooko%27s_triangle

> Zooko's triangle is a diagram named after Zooko Wilcox-O'Hearn which sets out a conjecture for any system for giving names to participants in a network protocol. At the vertices of the triangle are three properties that are generally considered desirable for such names:[1]


> Human-meaningful: The quality of meaningfulness and memorability to the users of the naming system. Domain names and nicknaming are naming systems that are highly memorable.

> Decentralized: The lack of a centralized authority for determining the meaning of a name. Instead, measures such as a Web of trust are used.

> Secure: The quality that there is one, unique and specific entity to which the name maps. For instance, domain names are unique because there is just one party able to prove that they are the owner of each domain name.

While DNSSEC certainly deserves a lot of criticism, I think some of the points are unfair.

> Had DNSSEC been deployed 5 years ago, Muammar Gaddafi would have controlled BIT.LY’s TLS keys.

(DNSSEC's, and thus) DANE's security can break down at every parent DNS zone. For bit.ly the parent zones are the .ly top level domain and the root zone. Each of them (but only these two) can forge valid DNSSEC records. With the current X.509 certificate infrastructure there are more than a hundred root CAs who can issue malicious certificates for use with TLS. Even though I certainly dislike the centralized nature of DNSSEC, it is hard to argue that reducing the attack surface from over a hundred institutions to two or three doesn't constitute a marked improvement. Specifically, domains like tumblr.com whose domain path is completely US hosted will no longer have to worry about "obscure" foreign CAs like "Unizeto Certum".

> In fact, it does nothing for any of the “last mile” of DNS lookups: the link between software and DNS servers. It’s a server-to-server protocol.

DNSSEC is decidedly not a server-to-server protocol. The signatures are created offline and can be verified by any party, even when they come out of a cache. In contrast, DNSCurve is a server-to-server protocol, which only secures the transport layer, not the data itself. The post refers to the fact that stub resolvers (the software parts of your OS that send queries to your ISP's caching DNS server) were not envisioned by the designers of DNSSEC to verify the DNSSEC records themselves. Nonetheless, there is nothing that hinders them from doing so, and I would expect that feature to be included once DNSSEC is widespread enough to justify the effort. In fact, you can already check how this would work using the request `dig @ +dnssec example.com`, which replies with the A record and the signature on the A record (you still need to get and verify the zone key of example.com, though).

> DNSSEC changes all of that. It adds two new failure cases: the requestor could be (but probably isn’t) under attack, or everything is fine with the name except that its configuration has expired. Virtually no network software is equipped to handle those cases. Trouble is, software must handle those cases.

I'm curious why software would have to handle these cases. For the end user, a binary "lookup went well, here is the result" / "there were some errors during the lookup, sorry, I can't or won't give you a result" should be totally sufficient. Firefox should not confuse the user with the detailed difference between "I can't verify this record, even though the parent zone says it is DNSSEC enabled" and "I can't resolve the name at all", so why does Firefox need this information?

Look at the code in Firefox that handles certificate failures, presents user interface to end users to resolve the ambiguity, and then completes the operation once that happens. Then look at the sample code I linked to. The code I have there is a decent approximation of "idiomatic". The code I'm referring to in browsers is larger by a factor of, what, 100? 1000?

Browsers and operating systems aren't going to add full DNSSEC resolving caches. They're going to use stub resolvers and delegate this problem to "DNS servers" (caches), like they always did. DNSSEC does not protect the last mile. You can dodge this point by redefining the concept of "last mile", but that's not a persuasive argument.

The US NSA controls both the CA hierarchy and .COM. It is amusing to see people try to rescue DANE from criticism by evoking the CA system to protect it.

I have a question. Your article's reference for "governments control the DNS" just describes public in rem legal actions by the Justice Department to Verisign to seize domains. How does that imply control? By that logic, wouldn't it be a valid statement to say "Matasano is controlled by the US Government (i.e., the NSA)" because it's only one in rem legal action away from injunction?

I'm just asking for clarification for the statement; I'm in agreement with the crux of your argument. I just don't understand how the Government(s) control DNS more or less than any other aspect of the internet that is vulnerable to meatspace attacks like injunctions and arrests.

EDIT: I think I understand the distinction now. Your argument is saying "the CAs are bad but we mustn't allow DNSSEC to replace them." And a domain's TLS certificate is safe from an individual government's subversion if the CA resides outside of that government's jurisdiction.

Am I understanding this correctly?

> Browsers and operating systems aren't going to add full DNSSEC resolving caches.

Fedora 22 feature change request: https://fedoraproject.org/wiki/Changes/Default_Local_DNS_Res...

> Look at the code in Firefox that handles certificate failures, presents user interface to end users to resolve the ambiguity, and then completes the operation once that happens. Then look at the sample code I linked to. The code I have there is a decent approximation of "idiomatic". The code I'm referring to in browsers is larger by a factor of, what, 100? 1000?

(The link tptacek is referring to is https://gist.github.com/tqbf/e8d82d614d1fea03476b btw.)

If I had my say, the code you link to would not have to be changed at all, because the `gethostbyname` interface would stay the same and return a lookup failure on unsuccessful DNSSEC verifications, if the parent zone reports the domain to be DNSSEC enabled.
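To make that concrete, here's a minimal Python sketch (the resolver is simulated, and all names and addresses are illustrative) showing that code written against the old success/failure contract needs no changes if validation failures surface as ordinary lookup errors:

```python
import socket

def resolve(hostname, resolver):
    """Return an address, or None on any failure.

    The application cannot tell -- and does not need to tell -- whether
    the failure was NXDOMAIN, a network error, or a failed DNSSEC
    validation: the resolver collapses all of them into the same error
    the gethostbyname() interface already reports.
    """
    try:
        return resolver(hostname)
    except socket.gaierror:
        return None

# A stand-in for a validating resolver. It simulates a DNSSEC validation
# failure by raising the same error gethostbyname() raises for a
# nonexistent name.
def fake_validating_resolver(hostname):
    records = {"example.com": "93.184.216.34"}
    if hostname == "badsig.example.com":   # simulated bogus signature
        raise socket.gaierror("validation failed")
    if hostname not in records:
        raise socket.gaierror("NXDOMAIN")
    return records[hostname]

print(resolve("example.com", fake_validating_resolver))         # 93.184.216.34
print(resolve("badsig.example.com", fake_validating_resolver))  # None
```

The calling code is identical either way; only the resolver behind the interface changes.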

> Browsers and operating systems aren't going to add full DNSSEC resolving caches. They're going to use stub resolvers and delegate this problem to "DNS servers" (caches), like they always did. DNSSEC does not protect the last mile.

You've said so in your blog post, but you don't say why. Everyone who is seriously considering using DANE as a replacement for the existing CA hierarchy assumes that clients include a DNSSEC-enhanced resolver (I don't see why it has to be a "full" and caching resolver, though).

> DNSSEC does not protect the last mile. You can dodge this point by redefining the concept of "last mile", but that's not a persuasive argument.

When the "last mile" is on my PC (i.e. when my local resolver checks the DNSSEC signatures) then it is as protected as my OS.

> The US NSA controls both the CA hierarchy and .COM. It is amusing to see people try to rescue DANE from criticism by evoking the CA system to protect it.

While I find your enthusiasm to protect against the NSA encouraging and important, my point actually was that DANE improves upon the security of the existing CA system against attacks by non state-sponsored attackers and foreign state-sponsored attackers beside the NSA.

Let me state it differently: At the moment, example.de has to worry about attackers controlling the .de zone (and the root zone) and attackers controlling any of the >100 CAs (including many nation states). With DANE, example.de only has to worry about attackers controlling the .de zone and the root zone. As a more graphic example, consider how at the moment the "Deutscher Sparkassen Verlag GmbH", the CA of a German umbrella organization of local banks, can issue certificates for amazon.com, wikipedia.org or any other domain worldwide. This kind of nonsense is solved by DANE, not NSA spying.

(My apologies to German readers for not inflecting "Deutscher" in the above sentence. I keep the identifier verbatim to allow readers to look it up via non-fuzzy search.)

> Browsers and operating systems aren't going to add full DNSSEC resolving caches.

You phrase it like a fact. In practice we're seeing quite the opposite. Firefox clearly sees a fully validating client as the only reliable way to implement TLSA.

No. Firefox has no current plans to implement DANE. The plans on the wiki date from 2011, at roughly the same time Chrome played with DANE. The Chromium team subsequently removed the code for DNSSEC validation, and has since then declared that they're not moving forward with DNSSEC. The Bugzilla ticket for DANE support in Firefox features a member of the Mozilla team declaring that DANE support is no longer an active project there.

That's not what I said. If you look at said ticket, you will see that in fact plans were to make it fully validating.

What's your point? One dude on the Mozilla team played with the idea of implementing DANE. Then they gave up on it. Now that idea is a rebuttal to my point that browsers won't benefit from DNSSEC?

Is the point that it's possible to implement DNSSEC on end systems? Of course it's possible; everything is possible. But that's not what's going to happen, in large part because the DNSSEC operational community doesn't believe it should happen, and designed the protocol to make it easier not to.

No. That is not what I responded to.

You wrote that browsers, if they implemented DNSSEC, would choose to implement stub resolvers without full validation. In fact, both browsers who looked at implementing it clearly did so with the intention of doing full validation.

I did not say that they implemented it, or even that they will. I did say that the claim that there were plans to only implement a non-validating stub resolver in common browsers is not true.

You write as if what I said about browsers and DNSSEC wasn't right there in the thread above your comment. An odd rhetorical strategy.

> Browsers and operating systems aren't going to add full DNSSEC resolving caches.

Actually they already did. OS X for instance has this baked into mDNSResponder.

> Actually they already did. OS X for instance has this baked into mDNSResponder.

That's not altogether true and now also irrelevant.

mDNSResponder has DNSSEC support that isn't quite baked and was not enabled by default. The only way to use the support it did provide was by passing specific flags to a relatively low-level API. (You'd have to configure the system to use a DNSSEC enabled resolver as well of course.)

mDNSResponder has been replaced by discoveryd which does not have any DNSSEC support (other than silently accepting the validate flags). Perhaps it'll gain further support in the future. If it does I'd not bet on it being enabled by default any time soon.

Fire up a terminal, run "tcpdump -s0 -i en0 udp", open Safari, resolve "www.enteract.com", and tell me if you see a full recursive DNS lookup happening, or if instead you see Safari's stub resolver asking the DHCP-configured DNS cache server for help over the insecure Internet.

mDNSResponder supports DNSSEC validation, please look at the source code:


This is like saying "everyone can just run their own DNS server". Of course, as I said, they won't. The fact that Apple has a DNSSEC-resolving recursive lookup server and Safari doesn't use it strengthens my point instead of weakening it.

That doesn't really respond to the point. Does Safari use DNSSEC or not?

Of course it does, Safari uses the resolver provided by OS X, which is mDNSResponder. (It superseded the stub resolver in libSystem.dylib starting with 10.6.)

How's that saying go about only proving the code correct, but not testing it? I think there's a reason tptacek specifically asked about tcpdump of on wire traffic.

I've never understood the criticism that DNSSEC places control of TLS keys with governments - it seems like they already have that control. For example, Libya could change the dns records for bit.ly and get a certificate. I suppose an EV certificate with offline verification might help that, but you could use those with DANE, but DANE doesn't seem any worse than domain or email validated certificates.

I think the issue isn't that this is somehow worse but simply that DNSSEC is redundant to the fixes which we already need to make (e.g. public key pinning) so you're left with the question of deploying something which doesn't make you more secure and does provide a new way for things to break.

Public key pinning seems to be heading in a direction that relies on the current CA model. For example, see https://tools.ietf.org/html/draft-ietf-websec-key-pinning-20, or https://code.google.com/p/chromium/codesearch#chromium/src/n... (Chrome's pinned certificate list -- which has a total of _9_ non-CA entries).

Can you expand further on the problem which you see? Pinning appears to address the biggest problem with the current CA model, where any of the CAs are equally valid, by allowing you to pick the CA(s) used and, if desired, even locking it to a specific host certificate:


> To perform Pin Validation, the UA will compute the SPKI Fingerprints for each certificate in the Pinned Host's validated certificate chain, using each supported hash algorithm for each certificate. (As described in Section 2.4, certificates whose SPKI cannot be taken in isolation cannot be pinned.) The UA MUST ignore superfluous certificates in the chain that do not form part of the validating chain. The UA will then check that the set of these SPKI Fingerprints intersects the set of SPKI Fingerprints in that Pinned Host's Pinning Metadata. If there is set intersection, the UA continues with the connection as normal. Otherwise, the UA MUST treat this Pin Validation Failure as a non-recoverable error. Any procedure that matches the results of this Pin Validation procedure is considered equivalent.

Unless I'm missing something, that allows you both to limit the number of CAs which you trust and if you go to the trouble of pinning specific certificates you can even limit damage from a compromised CA.
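For what it's worth, the validation step quoted above is mechanically simple. Here's a rough Python sketch (the SPKI extraction is stubbed out with placeholder bytes; real code would parse the SubjectPublicKeyInfo out of each DER certificate), following the draft's base64-of-SHA-256 pin encoding:

```python
import base64
import hashlib

def spki_fingerprint(spki_der: bytes) -> str:
    """Pin as in the draft: base64 of the SHA-256 of the DER-encoded SPKI."""
    return base64.b64encode(hashlib.sha256(spki_der).digest()).decode()

def pin_validates(chain_spkis, pinned_fingerprints) -> bool:
    """True if any certificate in the validated chain matches a pin.

    This is the quoted set-intersection check: hash every SPKI in the
    chain, intersect with the pinned set, hard-fail on empty intersection.
    """
    observed = {spki_fingerprint(spki) for spki in chain_spkis}
    return not observed.isdisjoint(pinned_fingerprints)

# Toy example with made-up SPKI bytes: the site pinned its intermediate CA.
leaf, intermediate = b"leaf-spki", b"intermediate-spki"
pins = {spki_fingerprint(intermediate)}

print(pin_validates([leaf, intermediate], pins))  # True: intermediate matches
print(pin_validates([leaf], pins))                # False: non-recoverable error
```

Pinning the intermediate (rather than the leaf) is what lets you rotate host certificates without breaking the pin, while still locking out every other CA.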

It doesn't scale, as the above spec points out. Pinning could be used to eliminate CAs (as most people initially expect) but this isn't viable at the moment. The specification, which proposes a method to validate the certificate chain, also accomplishes very little: it's fully vulnerable to MITM attacks on the first client connection. There are also privacy concerns about storing pinning data; and a browser which is configured to clear all data at exit would not see any benefit.

It's also wrong to say that public key pinning (which addresses PKI/TLS weaknesses) makes DNSSEC redundant. I suppose the proper comparison would be to the (optional) DANE?

> It's also wrong to say that public key pinning (which addresses PKI/TLS weaknesses) makes DNSSEC redundant. I suppose the proper comparison would be to the (optional) DANE?

It wasn't so much specific to pinning as a general thought that most sites will be in the situation where they need to deploy the TLS-specific measures because they have too many clients which can't assume DNSSEC but if you're already doing that, it's not clear to me that many places would see enough additional value deploying something which is less mature and harder to manage. This is particularly a big deal for anyone who doesn't control all of their infrastructure or works at a large organization.

They can do that because of domain-validated certificates, and they can do that after the DNS is secured.

And the CA that issues the cert will be flamed and possibly removed from Chrome/Firefox.

(Edit: If they are caught.)

Why would they be removed? If the CA correctly follows all of their procedures and approves a domain-validated cert, why punish them for approving what is a legitimate request?

If the CA is coerced into issuing a cert, however, I agree with you.

You’re right. I misunderstood the scenario we were discussing.

So isn't that a criticism of domain-validated certificates? This is a serious question. I haven't heard a compelling argument for how DNSSEC/DANE gives governments any more power than existing DNS delegation.

Domain validated certificates do not require a new protocol however.

There are better DNS security proposals circulating already.


Perhaps he is referring to http://dnscurve.org/

DNSCurve solves a different problem. It does not implement offline signing. It is not something you could trust your cryptographic keys with.

These are just the same old arguments all over again.

1. DNSSEC is unnecessary, we have SSL.

Do you really want to argue that the CA PKI infrastructure is working? Placing your trust in your top domain solves 99.5% of the problem, roughly (considering there are about 500 CAs that you trust right now; that is not a made-up number).

2. DNSSEC is a government-controlled PKI.

No, it's not. It is ICANN-controlled, which is under government mandate but not government control. You already trust ICANN for the root zone, and they have so far stayed out of politics and have never meddled with domains, even when the US administration wants them shut down. Because if they did, the Internet would route around them in a heartbeat.

This is actually one thing that DNSSEC got absolutely correct. If you want a cryptographic assurance that a domain belongs to an owner, who better to make the assertion than the TLD responsible for delegating it? Sure, if you get a Libyan domain, you have to trust the Libyan TLD to sign it. But they are already trusted to delegate it! They run the whois. If they decide to transfer the ownership to Gaddafi himself, there is absolutely nothing you can do about it. If this is a problem for you, don't get a Libyan domain name.

3. DNSSEC is insecure.

It's ugly, but it's not insecure in any practical way. It could be a lot more modern, but it is in no way worse than SSL (see 1 above).

4. DNSSEC is expensive.

It is already deployed. You are already paying the price for this, whatever it was.

5. DNSSEC is hard.

It adds complexity to your operations, but your tools already support it and it's included in courses already. No competing standard is.

6. DNSSEC makes for DDoS.

That wasn't actually included in the list, but this one is true. DNS is a DDoS vector, and DNSSEC strengthens it. That should be addressed. Better deployment of filters would do much to improve the situation.

7. DNSSEC allows for enumeration of zones.

Also not included in the list. Yes, this is a real issue you need to be aware of. Don't store private data in public zones. Use split zones for this.

All technical decisions are a matter of pros and cons. The pros here are that DNSSEC builds on a proven infrastructure and is already deployed in most of the world. You could absolutely build a more modern standard, but these things take at least ten years to roll out. Start today and build the successor to DNSSEC!

But remember that DNSSEC did get a few important things right: Assurance follows delegation. Your DNS operator does not have your key. It authenticates negative answers. Make sure your successor inherits those.

First, I'd be a little happier if you didn't misquote me for the convenience of your comment. These weren't my subheds. You've rewritten some of them, condensed others, added one, and misread another, which lead you to suggest that I hadn't "allowed for enumeration of zones" when in fact there's a whole subhed dedicated to it. You could just clarify at the top of your comments that your numbered list is your own, not mine. I'd then strike this part of my comment.

Now then:

1. I do not argue that the CA PKI infrastructure is working. In fact, I state explicitly that it is not. The problem is that DANE's replacement for the TLS PKI is controlled by world governments. At a broader level, the problem is that centralized PKIs don't work against nation-state adversaries. The solution for the TLS CA problem is (a) a race to the bottom for CA pricing, led hopefully by EFF, (b) widespread deployment of public key pinning, which serves both to thwart CA attacks and to create a global surveillance system against them, and (c) the eventual deployment of alternate opt-in TLS CA hierarchies, such that users could for instance subscribe to a hypothetical EFF CA hierarchy. DNSSEC accomplishes none of this.

2. Yes, DNSSEC is a government-controlled PKI. Your whole argument here is that if governments abused it, the Internet would "route around it". You can't make that argument without conceding my point.

3. I didn't write "DNSSEC is insecure" (though I think it is). I wrote that it's cryptographically weak, which it is: it's PKCS1v15 RSA-1024. Crawl signatures back to the roots: you'll see RSA-1024 all the way at the top of the hierarchy. TLS has already deployed forward-secure ECC cryptography and will by the middle of this year see widespread deployment of Curve25519 ECDH and Ed25519 deterministic signatures. DNSSEC simply will not: it will be RSA-1024 for the foreseeable future.

4. "DNSSEC is expensive" is not a subhed in my post. "DNSSEC is expensive to adopt" was, and it is: virtually no networking software currently handles soft failures from lookups, which are a universal shared experience among HTTPS users, and will occur just as frequently with DNSSEC (should it ever be deployed). "DNSSEC is expensive to deploy" is another point I made, which is also true: of the top 50 Alexa sites, why not go look how many are DNSSEC-signed?

5. "DNSSEC is hard" isn't something I wrote at all, so I don't know how to respond to it.

6. "DNSSEC makes for DDoS" is something I think is probably true, but also an argument I wasn't willing to go to bat for, because amplification attacks using ANY queries against normal DNS also work. But sure, if you want to add fuel to the fire.

7. "DNSSEC allows for enumeration of zones" is captured under "DNSSEC is unsafe" in my post. If you answer to this is "don't use DNSSEC for important zones", then we agree, modulo that I think you shouldn't use it at all.

> The problem is that DANE's replacement for the TLS PKI is controlled by world governments.

I think that the concept (though not DANE itself) could be made to work, e.g. by using k-of-n systems to verify that a majority of governments agree to a particular fact. If the US, UK, Russia, Iran and Ghana all agree on a set of facts, those facts are likely to be true; if enough nation-states agree on a set of facts, then those facts might as well be true.

This could be used to secure the root for a nation-state's DNS, since that root is, after all, just a fact.

As an example, a majority of member states could certify that the ICANN board has authority for the root; a majority of the ICANN board could certify that is authoritative for .example; in exactly the same way, could then use its authority to delegate responsibility for example.example to, and so forth.

A similar scheme could be used for IP address ownership.

Yeah, any nation state is going to be able to lie about the identity of machines it is responsible for—but it's a government; it can do that anyway. Other approaches like TOFUPOP help there, but at the end of the day the guys who can point guns at a certificate holder have the ability to make him do anything anyway.
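If anyone wants to see how little machinery the k-of-n idea needs at the policy level, here's a toy Python sketch. Signature verification is stubbed out entirely (a real system would check actual signatures, or use a threshold signature scheme so one combined signature suffices), and the country names and "fact" strings are made up:

```python
def accepted(fact, endorsements, trusted_signers, k):
    """Accept a fact only if at least k of the n trusted signers vouch for it.

    endorsements: mapping of signer name -> the fact that signer attests to.
    In a real system each endorsement would be a verified signature over
    the fact, not a plain string comparison.
    """
    votes = sum(1 for signer, claimed in endorsements.items()
                if signer in trusted_signers and claimed == fact)
    return votes >= k

trusted = {"US", "UK", "Russia", "Iran", "Ghana"}
claims = {
    "US": "root=ICANN",
    "UK": "root=ICANN",
    "Russia": "root=ICANN",
    "Iran": "root=other",     # one dissenter cannot block or forge the fact
    "Ghana": "root=ICANN",
}

print(accepted("root=ICANN", claims, trusted, k=3))  # True: 4 of 5 agree
```

The interesting property is symmetric: no single nation-state can forge the fact alone, and no single one can veto it either, as long as k is chosen sensibly.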

> DNSSEC is a government-controlled PKI.

This needs to be better explained in depth. Do world governments control DNS because of ICANN's GAC? Does the USG control DNS because ICANN is contracted to the NTIA? Are the root key creators USG agents? What is your reasoning behind world governments controlling DNS and/or DNSSEC?

I can't argue against this statement because it's not clear to me what you actually mean.

I thought I was perfectly clear in which points were not yours. My point with including the whole list of standard arguments against DNSSEC, not only yours, was that they are well known to DNS implementors for over ten years.

1. Since you decide that the whole of DNS is "government controlled" (which is the kind of statement that is useless to discuss further), you must also accept that "a race to the bottom in CA pricing" means certificates generated without human intervention. That would be a very complex and error-prone way to implement the same security model DANE gives you, where domain ownership is cryptographically assured.

2. That is only because "government controlled" is a meaningless phrase. ICANN is not more government controlled than any other organization operating in a jurisdiction. They have a mandate from the US Department of Commerce, but that does not mean they are "controlled" by them. It could certainly improve, but the proposals for a multi-stakeholder model have so far been thinly veiled attempts to politicize ICANN. That does not mean it cannot be done. The fact that the root zone so far has a spotless record, free of political meddling, should count for something.

3. This last statement is incorrect. DNSSEC is not RSA-1024 for the foreseeable future. Implementing an alternative way to secure DNS with more modern cryptography is many orders of magnitude more difficult than switching algorithms in DNSSEC, which is completely supported.

4. Again a statement so general it holds true for implementing any world wide standard. If you want every software under the sun to handle lookup failures correctly, that is expensive in itself. Is DNSSEC more expensive than any reasonable alternative? No, probably less so. Failure modes are well understood.

Since the rest of the points weren't raised by you directly (but then again, many times by others) I think we can concede them from any further discussion.

The difference between the insecurity of the CA system and the parallel, comparable insecurity of DNSSEC is that the CA system already exists and DNSSEC TLSA does not in any meaningful way. If DNSSEC TLSA solved real security problems, it would be worth the expense, insecurity, and instability of deploying it. It doesn't though, and thus isn't worth it.

ICANN is not the only problem. Even if ICANN is scrupulously honest, the most popular TLDs are controlled by other organizations.

Not only am I right about RSA-1024 in DNSSEC, but I'm obviously and empirically right about it, as a trip to DNSviz will show you. I'm amused by all the arguments that presume that I'm just Googling random stuff and posting them to see what sticks. Unfortunately, no: I'm an implementor and I've been writing about DNSSEC for more than 7 years. I'm also amused by the objection that because there's an algid somewhere in the DNSSEC RRs, RSA isn't a problem. DNSSEC in its current deployment trajectory is PKCS1v15 RSA.

I am also obviously right that a secure DNS system designed in 2015 would not look like DNSSEC. It would instead be based on online-signing, would use deterministic ECC signatures, and would do full end-to-end encryption of DNS requests instead of trying to shoehorn the whole protocol into standard DNS RRs.

You've also missed the point about software reliability. Software today gets away with a shortcut in which it can assume that failed DNS lookups are the result either of user error or lack of connectivity. Software does not generally have a recovery state machine for lookup failures. That cheat stops working when DNSSEC failures are introduced to the model.
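To illustrate the shortcut, here's a toy Python sketch (state names are illustrative, loosely following DNSSEC validator terminology) of how legacy software collapses the new failure modes:

```python
from enum import Enum

class LookupResult(Enum):
    SECURE = "validated answer"
    INSECURE = "answer from an unsigned zone"
    NXDOMAIN = "name does not exist"
    SERVFAIL = "no connectivity / server error"
    BOGUS = "signature invalid: misconfiguration or attack?"
    EXPIRED = "signatures fine, but past their validity window"

def legacy_app_view(result):
    """Pre-DNSSEC software collapses everything into success/failure."""
    ok_states = (LookupResult.SECURE, LookupResult.INSECURE)
    return "ok" if result in ok_states else "fail"

# BOGUS and EXPIRED both look like plain failures to legacy code, even
# though one may be an active attack and the other a fixable config
# error -- each really needs its own recovery path.
print(legacy_app_view(LookupResult.BOGUS))    # fail
print(legacy_app_view(LookupResult.EXPIRED))  # fail
print(legacy_app_view(LookupResult.SECURE))   # ok
```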

> the CA system already exists and DNSSEC TLSA does not in any meaningful way.

Well, DNS exists. Most of what makes you uncomfortable with DNSSEC are really issues with DNS. But as long as DNS names are what you wish to secure, no alternative system could do that in a way you would be comfortable with. Unless you wish to replace the domain name system itself?

I guess what we disagree about here is if we need an alternative to the CA model. I believe we do, and I believe the most sound way is to cryptographically assert domain ownership. Such a model would be transparent in operation, not add any failure points except for the ones we already have in the domain name system, and be very clear to the end user in what it is we really secure.

(A common point made about the SSL CA system is that we want to secure "entities" and not domain names, and that is why a world-wide PKI system should never sign keys based on domain names alone. I believe that is completely and utterly wrong. Most users actually do want to know that they are talking to the domain name bigbank.com, not the entity "Big Bank Co. Ltd.".)

Given that premise, DNSSEC at least solves the right problem. One could argue that its cryptographic implementation is not great, and it's not, but it's pretty much identical to TLS and IPsec, which is what we will use for the foreseeable future.

> Not only am I right about RSA-1024 in DNSSEC, but I'm obviously and empirically right about it, as a trip to DNSviz will show you.

I do not follow at all. We will be "stuck with RSA-1024 for the foreseeable future" because that is what people use today? That does not follow at all.

You could just as easily have said "we will be stuck with SHA1 for the foreseeable future" one year ago, but reality just 12 months later looks very different.

> It would instead be based on online-signing,

This is a point you can make, but I disagree. We have seen more and more centralization in DNS serving infrastructure. Amazon and Cloudfront and the other big players serve more and more of the domain name space.

I think giving them complete powers to fully sign replies on behalf of the domain owners would be a mistake. It would clearly be a step to a more centralized PKI.

(And if government interference is what you fear, you should absolutely want offline signing. Amazon has yielded to government interference more times than I can count, while ICANN has not. Not that an online signing model would free you from ICANN of course.)

> Software today gets away with a shortcut in which it can assume that failed DNS lookups are the result either of user error or lack of connectivity.

No, you missed the point here, which is that all reasonable alternatives are worse off. Unless your idea is to swap out the domain name system or stay with the broken CA model, of course.

Yes. The alternative is to stay with the broken CA model, as opposed to adding a second, even more broken CA model to run alongside it.

> a race to the bottom for CA pricing, led hopefully by EFF

Also Startcom. https://startssl.com (free TLS certs, each good for one subdomain + your base domain, for one year)

Startcom is great (I use it for a bunch of domains), but there are some caveats:

* They are only free for personal, non-commercial use. You can't (contractually) use them for your startup.

* They are free to acquire, but they charge to revoke them. So don't lose your key (even accidentally, via something like heartbleed).

* Their roots are not in the Windows XP base install. They are included in an optional update, but my experience with a web site that had old machines in its demographics showed that practically no one had it installed. Given that XP is no longer supported by Microsoft, this point is getting less and less relevant as the days go by.

I think even WinXP had auto root update by default, and even Win7 doesn't have the StartCom root installed by default.

Sorry, but no. Their response to Heartbleed - refusing to revoke certs without charge even in an emergency - means I can never recommend Startcom again.

Failing to revoke keys on any notification that they've been compromised is, in fact, grounds under the CA/B Forum guidelines to revoke their trust as a CA. They still have valid, unrevoked signatures on thousands of Heartbleed-compromised keys right now, because they issued them free but won't revoke them without charge.

Fortunately, Let's Encrypt will replace them.

This is the second thing I've seen you say that surprised me, given your background (the first being that you actually use DANE). TLS certificate revocation is, of course, theater --- and will be until widespread deployment of some kind of "must-staple" extension --- and Start's refusal to issue revocations had virtually no operational impact on the Internet or on privacy.

For now, but it did make me think about short-lived certificates.
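A toy Python sketch of why short-lived certificates sidestep revocation (the dates and lifetime below are made up): once the validity window passes, a leaked key's certificate simply stops validating on its own, with no CRL or OCSP lookup involved:

```python
from datetime import datetime, timedelta, timezone

def still_valid(not_before, lifetime, now=None):
    """A certificate is accepted only inside its validity window.

    With a lifetime of days rather than years, a compromised key is a
    short-lived liability even if nobody ever publishes a revocation.
    """
    now = now or datetime.now(timezone.utc)
    return not_before <= now < not_before + lifetime

issued = datetime(2015, 1, 10, tzinfo=timezone.utc)
short = timedelta(days=4)  # illustrative short lifetime

print(still_valid(issued, short, now=datetime(2015, 1, 12, tzinfo=timezone.utc)))  # True
print(still_valid(issued, short, now=datetime(2015, 1, 20, tzinfo=timezone.utc)))  # False
```

The trade-off is operational: you need automated reissuance, which is exactly the niche projects like Let's Encrypt aim at.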

> Better deployment of filters would do much to improve the situation.

No, it wouldn't. People have been calling for an end to spoofing for 15 years with virtually zero progress.

Filters don't help in volumetric attacks.

I think the internet would benefit from more competition in protocols, especially those to do with security, and especially those that have historically done a bad job (TLS, DNSSEC, PKIX).

These are obviously hard to design, with a bunch of trade-offs and decisions to make along the way. I'd argue that this makes practical research and competition more important, not less.

"Had DNSSEC been deployed 5 years ago, Muammar Gaddafi would have controlled BIT.LY’s TLS keys."

No, he would have had their TLS public key, assuming they were deploying DANE, which he could have gotten by going to their website.

I think you're a little unclear on the concept here. The problem isn't that Libya gains the ability to read the TLS keys.

Since Libya can't write a valid certificate to DNS, the ability to make changes doesn't help them. At worst, they might be able to make browsers reject the certificate (as the one served by the server won't match the one in DNS, or the one served by the server and in DNS (if your A records and DNS key gets changed) isn't signed by a CA), but they can already do that. If you don't trust your DNS Zone, you can already be trivially taken offline by them deleting your DNS records.

Once DANE is deployed widely, browsers should require both a certificate from a CA and that the certificate be in DNS. Two-factor authentication for SSL.

If the CA signature means anything, you don't need DNSSEC.

If it doesn't, you've given control over the certificate to Libya.

It is easier to see the problem when you look at the real issue, which is that the NSA controls both the CA hierarchy and .COM.

Perseids made his counter-arguments clear in many places on this page, but your comment allows me to summarize.

> If the CA signature means anything, you don't need DNSSEC.

If DNSSEC is fully deployed and supported, you don't need CAs.

> If it doesn't, you've given control over the certificate to Libya.

If it (DNSSEC) isn't, you've given control over the certificate to any trusted CA in the world.

> It is easier to see the problem when you look at the real issue, which is that the NSA controls both the CA hierarchy and .COM.

Neither DNSSEC nor the CA system can prevent the NSA from doing their evil stuff.

The problem with the CA system is that it fails to resist nation-state attacks. DNSSEC not only has that problem, it has it by design. That's the point made by the post. All you've done is restate it.

You are right, so let me sum up:

Centralized architecture leaves DNSSEC vulnerable to nation-state attacks. This is by design.

Decentralized architecture leaves the CA system vulnerable to attacks coming from any trusted CA. This is by design.

National Security Letters (and their non-US equivalents) leave the CA system vulnerable to nation-state attacks.

DNSSEC 2 - 1 CAs

I think the point is that once we have DNSSEC, there is no way around it. With the CA system there is lots of room to improve on it, without more centralisation.

I'm not an expert, but that's my take so far.

We've had CAs for two decades already and they haven't improved much, why would their third decade be different?

The demand for change is growing, and the many projects working on this show it. There is a lot going on, much more than I can see going on in the DNS space. People are deploying more and more HTTPS, and browser vendors, researchers and the open source community are working on it.

Projects like Let's Encrypt and CertCA on the CA side, Certificate Transparency on the standards side. Inside the browser you have HTTPS Everywhere, the SSL Observatory and things like Convergence.

Are this many people actively working on innovating in DNSSEC and DANE? If they exist, I don't see them.

Also, even if they exist, once the system is centralised it's almost impossible to move it forward. In the CA system, I as an individual can do more for my own security.

Why is DNSSEC not vulnerable to NSLs?

I am distinguishing on the attacker, not the means. Sorry if that was not clear.

Of course DNSSEC is vulnerable to NSLs, but that is not relevant. What is relevant is:

- DNSSEC is vulnerable to nation-state attacks.

- The CA system is vulnerable to nation-state attacks.

- The CA system is vulnerable to attacks from any CA.

Missing points:

- I, as a user, have means to circumvent or mitigate CA issues (using Certificate Patrol as one possibility, certificate pinning as another, ...)

- There is no user workaround for the DNSSEC vulnerabilities

Furthermore, I'd guess that the majority of CA attacks are nation-state attacks, so both boil down to the same thing. I don't know of any criminal attacks (such as attacks on online banking) on the CAs. Conclusion: I, as a user, don't gain anything from DNSSEC.

It obviously is vulnerable.

An NSL is secret, right? Some nation-states have the power to compel changes to the root zone, but none have the ability to do it in secret.

Reaction when @tqbf, again, claims DNSSEC is NEVER GOING TO HAPPEN


Why do we get so many articles with misleading text arguing against DNSSEC recently?

> DNSSEC is Unnecessary

That's opinion.

> DNSSEC is a Government-Controlled PKI

And TLS is an all-governments-controlled PKI, where any of them can subvert it when they want.

> DNSSEC is Cryptographically Weak

Nope, it isn't. But yes, there are people using it with weak crypto... Exactly like TLS.

> DNSSEC is Expensive To Adopt

A bunch of overblown technical issues that won't make any difference in practice. If the domain is being attacked, the lookup fails; if there's a configuration problem, the lookup fails.

> DNSSEC is Expensive To Deploy

Oh, that's correct. TLS needs an entire hour to deploy for the first time; DNSSEC needs some 3 or 4.

> DNSSEC is Incomplete

Yep, its goal is to do the same as the TLS PKI. There's DANE for the rest of it.

> DNSSEC is Unsafe

Yes, also because it's DANE's job to do that.

> DNSSEC is Architecturally Unsound

I can't make any sense out of that.

Your rebuttal would be a lot more convincing if it were more specific. For example:

> DNSSEC is Cryptographically Weak

> Nope, it isn't.

So you're saying that 1024-bit RSA keys are just dandy? You're saying that PKCS1v15 padding is a good idea?

DNSSEC does not mandate any cryptography algorithm. Why are you associating it with 1024 bit RSA? (I know, because some article appeared here that claimed it was limited to this. It isn't, and most roots do not use it.)
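For context on the algorithm-agility point: the DNSKEY record carries an IANA-registered algorithm number, and signers can (and do) use ECDSA or Ed25519 instead of RSA. The sketch below lists a subset of the registry I'm confident of; the `is_rsa` helper is purely illustrative:

```python
# Subset of IANA DNSKEY algorithm numbers (RFC 4034 registry).
# The protocol itself does not pin zones to 1024-bit RSA.
DNSKEY_ALGORITHMS = {
    5:  "RSASHA1",
    8:  "RSASHA256",
    10: "RSASHA512",
    13: "ECDSAP256SHA256",
    14: "ECDSAP384SHA384",
    15: "ED25519",
}

def is_rsa(alg: int) -> bool:
    """Illustrative helper: does this algorithm number denote RSA?"""
    return DNSKEY_ALGORITHMS.get(alg, "").startswith("RSA")

print(is_rsa(8))   # True  - RSASHA256
print(is_rsa(13))  # False - ECDSA P-256
```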

No article claimed that DNSSEC was limited to RSA-1024, and you clearly haven't looked at the roots recently.

puts on his compliance hat

Many organizations are mandated (by statute etc.) to use elliptic-curve-based cryptography already. It is not because RSA is broken, but because the key sizes would have to be uneconomically large.

The key size requirements are picked based on the advancement rate of technology (mathematics, calculation power per price) and the time the data should stay protected. Many security-related standards aim for several decades, and for RSA key sizes that does not look so good.

If the system is not already in wide usage, rolling out a major protocol without ECC support at this time is a very, very bad choice. Many organizations have already had to disable it because it violates their requirements. (Oddly, I have seen leaving things unencrypted not violate anything, because "that's the way DNS is"... Go figure.)
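To make the "uneconomically large" point concrete, here are the approximate NIST SP 800-57 strength equivalences between RSA modulus sizes and ECC key sizes; the comparison table is standard, the loop is just for printing:

```python
# Approximate NIST SP 800-57 comparable strengths:
# bits of security -> RSA modulus size vs. ECC key size.
COMPARABLE_KEY_SIZES = {
    112: {"rsa": 2048,  "ecc": 224},
    128: {"rsa": 3072,  "ecc": 256},
    192: {"rsa": 7680,  "ecc": 384},
    256: {"rsa": 15360, "ecc": 512},
}

for security, keys in sorted(COMPARABLE_KEY_SIZES.items()):
    ratio = keys["rsa"] / keys["ecc"]
    print(f"{security}-bit security: RSA-{keys['rsa']} "
          f"vs ECC-{keys['ecc']} ({ratio:.0f}x larger)")
```

At the multi-decade protection levels the standards aim for, the RSA keys (and the signatures that bloat every DNS response) grow far faster than their ECC equivalents.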

> Why do we get so many articles with misleading text arguing against DNSSEC recently?

I don't know, but perhaps because DNSSEC deployment IS happening... and the critics want to stamp that down. :-(

Dan: which of the parent commenter's arguments do you buy?
