1) DNSSEC does materially make DDoS reflection attacks worse. djb has gone on about this at length, so I won't repeat it here.
2) DNSSEC also degrades a DNS server's ability to respond to requests. A regular non-DNSSEC DNS server can look up responses in O(1) time; see djb's TinyDNS for an example (it uses a CDB hash table to do it). A DNSSEC server must either use an ordered index to find the "nearest match" for NSEC(3) denials, and hence is O(log N), or it must sign responses on demand (which is against DNSSEC religion), which takes something like quadratic time.
It is a meaningful increase in both the computational and space complexities of serving a core infrastructural protocol - but for no meaningful benefit.
In this case: The constant coefficient in a DNS lookup is very small, whilst the constant coefficients in a DNSSEC response are very big.
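A toy sketch of that complexity gap (nothing here is real DNS; a dict stands in for TinyDNS's CDB table, and a sorted list with binary search stands in for the ordered index a signed zone must maintain):

```python
import bisect

# Illustrative only: exact-match lookup is a hash probe, O(1);
# NSEC-style authenticated denial needs the pair of existing names
# that "cover" a nonexistent name, found via binary search, O(log N).
zone = {"a.example": "192.0.2.1", "m.example": "192.0.2.2", "z.example": "192.0.2.3"}
names = sorted(zone)  # the ordered index a DNSSEC server must keep

def lookup(name):
    return zone.get(name)  # CDB-style hash lookup: O(1)

def nsec_interval(name):
    """Return the two existing names that bracket a nonexistent one,
    which is what an NSEC record proves. O(log N)."""
    i = bisect.bisect_left(names, name)
    return names[i - 1], names[i % len(names)]

print(lookup("m.example"))         # exact hit
print(nsec_interval("q.example"))  # proof that q.example does not exist
```

The wrap-around indexing mirrors how the last NSEC record in a zone points back to the first.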
1. DNS is vulnerable to man-in-the-middle attacks. Any authority with access to a root server or another router in the DNS resolution chain can tell a computer that a web page goes anywhere they want. This actually happens - for example, Turkey tried to do this with Twitter, and redirected Twitter's domain name to a government IP address for anyone within the country.
2. There is a dangerous vulnerability called DNS cache poisoning where UDP packets can be used to manipulate DNS servers to incorrectly respond when queried for a domain's IP address until their cache expires. This attack was described in 2008 by Dan Kaminsky, and on the whole, a methodology of preventing this sort of vulnerability would be good enough motivation for repairing the failings of present-day DNS.
The motivation is great, but DNSSEC is largely the wrong tool for the job - it's a blunt instrument with no support for nuance and it will make DNS harder. What Thomas explains in greater technical detail is that DNSSEC is a move towards even further internet centralization that will make implementation more difficult without really solving the principal issues.
The 2008 attack was not novel and was described by DJB about 7 years prior to Kaminsky, and DJB's utilities included a mitigation: randomized source ports. At present there is an equivalent vulnerability: UDP/IP fragments can be spoofed in such a way that poison records can be injected into resolvers. It is possible to mitigate this, but many DNSSEC proponents - including the Bind maintainers - refuse to consider mitigations because they feel that people should be moving to DNSSEC instead. It bears repeating that, as deployed, DNSSEC doesn't help protect against cache poisoning towards stubs - only towards caching resolvers.
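The arithmetic behind the source-port mitigation is simple: a blind off-path spoofer must guess both the 16-bit transaction ID and the 16-bit source port, roughly 32 bits of entropy instead of 16. A minimal stdlib sketch (the header layout is real DNS; the rest is illustrative):

```python
import secrets
import socket
import struct

# Random 16-bit transaction ID, as any modern resolver should use.
qid = secrets.randbelow(1 << 16)
# DNS header: ID, flags (RD=1), QDCOUNT=1, AN/NS/AR counts = 0.
header = struct.pack("!HHHHHH", qid, 0x0100, 1, 0, 0, 0)

# Binding to port 0 lets the kernel pick an ephemeral source port;
# a fixed source port would hand the attacker 16 of those 32 bits.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 0))
port = sock.getsockname()[1]
sock.close()

print(f"id={qid} sport={port}: a blind spoofer guesses 1 in ~{2**32:,}")
```

(The kernel's ephemeral port range is narrower than a full 16 bits in practice, so the real entropy is somewhat less, but the order of magnitude holds.)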
I'm not sure I agree about the novelty of Kaminsky's attack. Ruefully: I remember arguing with him about that on Twitter, him getting me on the phone to explain how his attack refined the older cache poisoning attacks, me conceding it, and then bad stuff happening a few months later.
Are you referring to http://www.gonullyourself.org/ezines/ZF0/zf0%205.txt perchance?
Entirely false!!! Please get your facts right. [Not talking about the 2nd part of your comment]
Yes, the Turkish government hijacked the Turkish DNS resolvers, so they were returning incorrect IPs for the twitter.com domain.
Yes, DNSSEC would have helped. The Turkish government does not have any of:
- The keys for the root zone
- The keys for the .com TLD
- The keys for the twitter.com domain
As a result, the Turkish government had no way to forge signatures for the twitter.com domain. All they could do was send a fake AD bit.
Using DNSSEC to validate answers for twitter.com from Turkey during the hijack would have failed. That is the exact purpose of DNSSEC.
Edit to update: The Turkish government forged DNS to block Twitter. Even if clients did start validating DNSSEC - if the failure mode is that Twitter is still blocked, just for a different reason, is that really useful? A government could simply drop all queries for twitter.com too. How does DNSSEC meaningfully help?
In the case of the Turkish hijacking, of course hijacked resolvers would not send RRSIGs. This should prompt the client to request another resolver.
DNSSEC is a Government-Controlled PKI -
DANE is not DNSSEC. Why conflate the two?
DNSSEC is Cryptographically Weak -
Wait, what? That's like saying openssl is cryptographically weak, since it allows you to create weak keys. You can use ECDSA with DNSSEC. Minimal research would have revealed that.
DNSSEC is Expensive To Adopt -
It is up to the software to decide how to handle failure cases. Software with security in mind should do the proper checks. Others should display warnings. Others should probably just accept the answer.
DNSSEC is Expensive To Deploy -
What? DNSSEC is braindead simple to deploy. It literally takes a handful of minutes.
DNSSEC is Incomplete -
Unclear... I'm not positive how DNSSEC relates to browser implementation.
DNSSEC is Unsafe -
I'll possibly give you this one, but I stand by the position that you shouldn't hide in plain sight. That is, why put 'hidden' records in a public zone?
DNSSEC doesn’t have to happen. Don’t support it. -
DNSSEC has already happened. People ask for it. However, largely, I support the argument. Start a working group, propose a better alternative, and replace it. I'm all for that, believe it or not.
DANE (using DNS as an alternative to TLS CAs) is the primary real-world use case for DNSSEC.
No site in the world is secured purely using ECC DNSSEC, because the roots and TLDs rely on RSA. Moreover: find the most popular site that uses ECC DNSSEC and report back what it is. (It'll be tricky, because very few of the most popular sites use DNSSEC at all). And, of course, the ECC variants DNSSEC uses are already outmoded. Which is why I was careful to refer to modern deterministic Edwards-curve signature schemes. Also: did you really look at the links in that post and think it lacked even "minimal research"?
I don't see any other arguments in this response, but you can call me out if I missed any.
The country that owns/operates the TLD I'm under, or the registrar, can already swap my DNS, but DNSSEC makes it harder for others to. I don't know what more we can do about that - probably more in the realms of I2P or Tor, etc.
DNSSEC has definitely got some problems. The reflection problem, for one, which needs careful mitigation. And yes, it could use modern crypto badly, but that's waiting on standardisation; signatures are a way off in CFRG and then there's all the HSMs. It may be better as part of a nutritious layering breakfast; DNSCurve on top, maybe?
You won't see me arguing for plaintext anything, however, so if unsigned plaintext is the alternative, bugger that. QUANTUMDNS is just way too easy as a man-on-the-side otherwise.
BGP is a total clusterfuck and needs a lot of security it doesn't have - that's about all I can say as to that.
As for algorithms, you're insulting DNSSEC as it's been implemented, not as a protocol. DNSSEC purposely reserves up to 255 slots for algorithms, and has added new ones as it's evolved. But claiming it depends on RSA and doesn't support modern elliptic-curve schemes is at worst wholly false, at best half false, depending on the definition of modern.
It's a pretty weird rhetorical spot to put yourself in, by the way. Without DANE as a motivation for deployment, there is no reason to deploy DNSSEC. DNSSEC can't provide end-to-end security, and TLS and SSH already authenticate themselves without DNSSEC.
I don't know what your second paragraph means. You can see what I'm referring to empirically by playing with DNSviz. Like I said: go find the largest DNSSEC site that uses ECC records. Then, for fun, and not counting the TLDs themselves (sigh), the largest that use RSA-1024.
In other words, is the objection to DNSSEC the idea, or the implementation?
Lastly, what is our best hope for getting rid of CA's?
If I may suggest something, the article talks a lot about the implementation flaws in DNSSEC, giving the impression that if only it was designed better, or even redesigned, that we could actually have a secure DNS.
But everything else said here I totally agree with - if you don't control the root, you don't control a whole lot of anything.
If you want something that hides the contents of DNS packets in transit, DNSCurve at least does that, and is relatively easy to deploy - the current best server appears to be the curvedns in the freebsd ports (updated to use libsodium, etc.): https://svnweb.freebsd.org/ports/head/dns/curvedns/
Which is of course valid for TLS too. Do you suggest people use HTTPS?
I live under the .cz domain, and almost every registrar there offers DNSSEC in one click (at no extra cost) - adoption here is 40%! - see https://stats.nic.cz/stats/domains_by_dnssec/ - and I am not experiencing any of the mentioned problems at all.
Maybe DNSSEC is not the best DNS protocol in the world, but these arguments are not entirely based on facts.
It might be worse than securing it yourself, but it's still way better than no security at all.
DNSSEC doesn't offer any privacy guarantees, either, so at this point DNSSEC doesn't really do much for me.
I am saying "almost every registrar [in .cz] has DNSSEC in one click (without changing price)" is nothing like "letting GoDaddy manage your keys for you".
The first is a matter of cost. The second is a matter of convenience.
It's not at all like letting a third party manage your keys in any reasonable way to interpret it.
When you're reasonably sure that the DNS response you're seeing is actually what the zone owner meant it to be, you can have records for SSL certificates (DANE, mentioned later), or GPG keys for users with email in that domain, all kinds of things which were not previously possible. Each of those may or may not be a good idea but it clearly enables meaningful uses.
DNSSEC is a Government-Controlled PKI
Sure. But once you're willing and able to forge registry information about a domain, you can also usually get a valid (DV) certificate for it. Assuming you don't control a CA to begin with. So at least we're being honest with ourselves and stop paying for the security we don't get.
And DNSSEC doesn't get in the way of any replacements, does it? You can sign the certificate with your personal key. Or you can use some sort of convergence mechanism to see if you're seeing the same public key as everyone else.
Authenticated denial. Offline signers. Secret hostnames. Pick two.
The first two, right? (Secret hostnames?)
Articles like this (whole lists of criticisms) sound lawyery and unconvincing to me. I can't tell which ones are the meat and which are garnish. Not that a committee design cannot be flawed in multiple ways (death from 1000 cuts style) but this feels as if the author didn't like the idea first, perhaps for a perfectly valid reason, and came up with a bunch of additional reasons later.
Full disclosure: I signed one of my zones and I run validating resolvers for my private needs (unbound, both on servers and my home network). Nothing blew up. Yet.
With DNSSEC, only the USA can, and other governments can only create TLS certs for a small and distinct subset of website domains. That's better.
(It would be even better if dns resolvers included certs for all TLDs they knew about, and did not rely on the root keys for those TLDs. Just like pinning TLS certs. Then, you can choose a TLD run by an organization you trust, and even the US won't be able to forge your certs.)
I fail to see how that is better.
And to your repeated refrain both in your post and here on HN that "DNSSEC has virtually no real-world deployment", I would again refer you to various DNSSEC statistics sites that do indeed show a good amount of deployment happening in different parts of the world.
Here's a list of those sites: http://www.internetsociety.org/deploy360/dnssec/statistics/
I know you wish it were NOT happening, but the reality is that DNSSEC deployment is going on... and many of us see it as a way to provide another solid layer of security on the Internet.
Yes, it's always great to be exploring other alternatives, too. My personal interest is in "securing the DNS"... and right now DNSSEC is to me the best tool we have going today. If we can develop other tools over time that are even better, that will be outstanding. Meanwhile, I want to see what we have today get better deployed.
(a) validate (include the AD bit) but strip the RRSIGs (i.e. regular DNSSEC resolver)
(b) omit the AD bit but include the RRSIGs (i.e. you have to validate yourself)
(c) omit the AD bit and omit the RRSIGs (i.e. regular non-DNSSEC resolver)
(d) validate (i.e. include the AD bit) and include the RRSIGs (i.e. alleluia!)
My own study (using the Atlas network) found 30% of resolvers doing (a), and 65% doing (b), including some overlap. There are people way more qualified than me doing this kind of stuff, namely APNIC's Geoff Huston, for instance.
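Those four cases are easy to mechanize. A minimal sketch, assuming you've already parsed the response's 16-bit header flags and checked the answer section for RRSIG records (the AD bit is 0x0020 in the DNS flags field):

```python
def classify(flags: int, has_rrsigs: bool) -> str:
    """Map a DNS response to cases (a)-(d) above.
    `flags` is the 16-bit header flags word; AD is bit 5 (0x0020)."""
    ad = bool(flags & 0x0020)
    if ad and not has_rrsigs:
        return "a"  # resolver validates for you but strips the proof
    if not ad and has_rrsigs:
        return "b"  # hands you the proof; you validate yourself
    if not ad and not has_rrsigs:
        return "c"  # plain old non-DNSSEC resolver
    return "d"      # validates *and* forwards the RRSIGs

# 0x81A0 = QR|RD|RA|AD, a typical validating resolver's reply flags;
# 0x8180 = QR|RD|RA, no AD bit.
print(classify(0x81A0, has_rrsigs=False))  # -> a
print(classify(0x8180, has_rrsigs=True))   # -> b
```

In a real probe you would fill `flags` and `has_rrsigs` from an actual response (e.g. a `dig +dnssec` query against each resolver under test).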
Assume we are attacking example.tld
For the US government: 1a) Use the root zone keys -- that are essentially already under their control -- to sign fake zone keys for fake .tld nameservers. 2a) Continue with 2b) below.
For the government under whose jurisdiction the .tld zone is operated: 1b) Get the zone keys for .tld using legal measures (e.g. the local equivalent of national security letters). 2b) Use the .tld zone key to sign fake example.tld zone keys for fake example.tld nameservers. 3) Serve signed records for example.tld records (via man-in-the-middle attacks).
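The mechanics of those steps can be sketched with stub signatures (HMACs standing in for real DNSKEY/DS/RRSIG crypto; every key here is made up): whoever holds a parent zone's key can mint a child key that validates perfectly.

```python
import hashlib
import hmac

def sign(key: bytes, data: bytes) -> bytes:
    # Stand-in for a real DNSSEC signature over the child's key material.
    return hmac.new(key, data, hashlib.sha256).digest()

def verify(key: bytes, data: bytes, sig: bytes) -> bool:
    return hmac.compare_digest(sign(key, data), sig)

root_key = b"root-zone-key"          # illustrative
tld_key = b"legit-tld-key"           # illustrative
ds_record = sign(root_key, tld_key)  # root vouches for .tld's key

# An attacker holding the *root* key mints a brand-new .tld key plus
# a matching "DS record" -- and validators cannot tell the difference.
evil_tld_key = b"attacker-tld-key"
evil_ds = sign(root_key, evil_tld_key)

assert verify(root_key, tld_key, ds_record)     # real chain validates
assert verify(root_key, evil_tld_key, evil_ds)  # ...and so does the forgery
```

The same step repeats one level down: whoever holds (or forges) the .tld key can mint example.tld keys the same way.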
If you redelegate a zone to a new owner, that owner owns the domain for every intent and purpose, including generating signed TLS keys from pretty much any CA.
There are legitimate ways to delegate a zone, including selling it to someone else. It's not really an attack, and it's definitively not something you can protect against.
People do have mental models of domain ownership, though, which is founded in the contractual agreements they have with their registry. To them it feels like an attack when for a select group of people their domain lookups result in different records than for everybody else. And it makes it worse that selective (or tailored) man-in-the-middle attacks don't leave any traces behind.
> It's not really an attack, and it's definitively not something you can protect against.
Sure you can. See how namecoin cryptographically reserves domains for a certain owner. It is just a pretty big step away from the current practice of how the DNS is run.
That's only true using your very own definition of ownership.
A zone has an owner in a strict juridical sense of the word. You can read in detail what this means in the relevant agreements for registrars.
When people talk about government attacks against DNSSEC, I assume that means co-opting the TLD operator and feeding alternate, signed, validated-to-the-DNS-root copies of a particular domain to some target. Is that not the mode of attack being described?
If the signers for .LY or .COM misbehave, nobody has any solid recourse. .COM's role in the post-DANE certificate hierarchy is baked into the fabric of the Internet.
DNSSEC represents a massive move towards centralization of Internet trust, which is a baffling thing to get behind in 2015.
(edit) I understand DANE puts explicit trust on the registry, registrar, and the DNS root; but given the common use of domain-validated certificates, that trust is already there, and I think it is better to have it explicit. Also, there are fewer parties to watch out for: the Belgium Root CA can issue a cert for my domain, but Belgium is unlikely to compel my registry/registrar unless I've chosen to have a .be domain. (My apologies to Belgium Root, if they're not affiliated with the government of Belgium.)
Also, I don't think cert issuance can scale without domain validation or a large expense.
If anything, I suppose that should be an argument against consolidating DNS and TLS powers into single entities, which is exactly what DNSSEC and DANE do.
E.g., you want to test some front-end changes in conjunction with changes to the CDN config. If the production site is origin.example.com, the staging site might be at origin-stage-asdgwdse.example.com, because you hope (!) the changes don't leak early.
IPv4 literal addresses are often inconvenient / not feasible for larger stacks or cloud setups where you don't control the underlying infrastructure.
"The DNS security extensions provide origin authentication and integrity protection for DNS data, as well as a means of public key distribution. These extensions do not provide confidentiality."
> Zooko's triangle is a diagram named after Zooko Wilcox-O'Hearn which sets out a conjecture for any system for giving names to participants in a network protocol. At the vertices of the triangle are three properties that are generally considered desirable for such names:
> Human-meaningful: The quality of meaningfulness and memorability to the users of the naming system. Domain names and nicknaming are naming systems that are highly memorable.
> Decentralized: The lack of a centralized authority for determining the meaning of a name. Instead, measures such as a Web of trust are used.
> Secure: The quality that there is one, unique and specific entity to which the name maps. For instance, domain names are unique because there is just one party able to prove that they are the owner of each domain name.
> Had DNSSEC been deployed 5 years ago, Muammar Gaddafi would have controlled BIT.LY’s TLS keys.
(DNSSEC's, and thus) DANE's security can break down at every parent DNS zone. For bit.ly the parent zones are the .ly top level domain and the root zone. Each of them (but only these two) can forge valid DNSSEC records. With the current X.509 certificate infrastructure there are more than a hundred root CAs who can issue malicious certificates for use with TLS. Even though I certainly dislike the centralized nature of DNSSEC, it is hard to argue that reducing the attack surface from over a hundred institutions to two or three doesn't constitute a marked improvement. Specifically, domains like tumblr.com whose domain path is completely US hosted will no longer have to worry about "obscure" foreign CAs like "Unizeto Certum".
> In fact, it does nothing for any of the “last mile” of DNS lookups: the link between software and DNS servers. It’s a server-to-server protocol.
DNSSEC is decidedly not a server-to-server protocol. The signatures are created offline and can be verified by any party, even when they come out of a cache. In contrast, DNSCurve is a server-to-server protocol, which only secures the transport layer, but not the data itself. The post refers to the aspect that stub resolvers (those software parts of your OS that look up the query at your ISP's caching DNS server) were not envisioned by the designers of DNSSEC to verify the DNSSEC entries themselves. Nonetheless, there is nothing that hinders them from doing so, and I would expect that feature to be included once DNSSEC is spread enough to justify the effort. In fact, you can already check how this would work using the dig request `dig @188.8.131.52 +dnssec example.com`, which replies with the A record and the signature on the A record (you still need to get and verify the zone key of example.com, though).
> DNSSEC changes all of that. It adds two new failure cases: the requestor could be (but probably isn’t) under attack, or everything is fine with the name except that its configuration has expired. Virtually no network software is equipped to handle those cases. Trouble is, software must handle those cases.
I'm curious about why software would have to handle these cases. For the end-user a binary "lookup went well, here is the result" / "there were some errors during the lookup, sorry I can't or won't give you a result" should be totally sufficient. Firefox should not confuse the user with detailed difference between "I can't verify this record, even though the parent zone says it is DNSSEC enabled" and "I can't resolve the name at all", so why does Firefox need this information?
Browsers and operating systems aren't going to add full DNSSEC resolving caches. They're going to use stub resolvers and delegate this problem to "DNS servers" (caches), like they always did. DNSSEC does not protect the last mile. You can dodge this point by redefining the concept of "last mile", but that's not a persuasive argument.
The US NSA controls both the CA hierarchy and .COM. It is amusing to see people try to rescue DANE from criticism by evoking the CA system to protect it.
I'm just asking for clarification for the statement; I'm in agreement with the crux of your argument. I just don't understand how the Government(s) control DNS more or less than any other aspect of the internet that is vulnerable to meatspace attacks like injunctions and arrests.
I think I understand the distinction now. Your argument is saying "the CAs are bad but we mustn't allow DNSSEC to replace them." And a domain's TLS certificate is safe from an individual government's subversion if the CA resides outside of that government's jurisdiction.
Am I understanding this correctly?
Fedora 22 feature change request: https://fedoraproject.org/wiki/Changes/Default_Local_DNS_Res...
(The link tptacek is referring to is https://gist.github.com/tqbf/e8d82d614d1fea03476b btw.)
If I had my say, the code you link to would not have to be changed at all, because the `gethostbyname` interface would stay the same and return a lookup failure on unsuccessful DNSSEC verifications, if the parent zone reports the domain to be DNSSEC enabled.
> Browsers and operating systems aren't going to add full DNSSEC resolving caches. They're going to use stub resolvers and delegate this problem to "DNS servers" (caches), like they always did. DNSSEC does not protect the last mile.
You've said so in your blog post, but you don't say why. Everyone that is seriously considering using DANE as a replacement to the existing CA hierarchy assumes they do include a DNSSEC enhanced resolver (I don't see why it has to be a "full" and caching resolver, though).
> DNSSEC does not protect the last mile. You can dodge this point by redefining the concept of "last mile", but that's not a persuasive argument.
When the "last mile" is on my PC (i.e. when my local resolver checks the DNSSEC signatures) then it is as protected as my OS.
> The US NSA controls both the CA hierarchy and .COM. It is amusing to see people try to rescue DANE from criticism by evoking the CA system to protect it.
While I find your enthusiasm to protect against the NSA encouraging and important, my point actually was that DANE improves upon the security of the existing CA system against attacks by non state-sponsored attackers and foreign state-sponsored attackers beside the NSA.
Let me state it differently: At the moment, example.de has to worry about attackers controlling the .de zone (and the root zone) and attackers controlling any of the >100 CAs (including many nation states). With DANE, example.de only has to worry about attackers controlling the .de zone and the root zone. As a more graphic example, consider how at the moment the "Deutscher Sparkassen Verlag GmbH", the CA of a German umbrella organization of local banks, can issue certificates for amazon.com, wikipedia.org or any other domain worldwide. This kind of nonsense is solved by DANE, not NSA spying.
(My apologies to German readers for not inflecting "Deutscher" in the above sentence. I keep the identifier verbatim to allow readers to look it up via non-fuzzy search.)
You phrase it like a fact. In practice we're seeing quite the opposite. Firefox clearly sees a fully validating client as the only reliable way to implement TLSA.
Is the point that it's possible to implement DNSSEC on end systems? Of course it's possible; everything is possible. But that's not what's going to happen, in large part because the DNSSEC operational community doesn't believe it should happen, and designed the protocol to make it easier not to.
You wrote that browsers, if they implemented DNSSEC, would choose to implement stub resolvers without full validation. In fact, both browsers who looked at implementing it clearly did so with the intention of doing full validation.
I did not say that they implemented it, or even that they will. I did say that the claim that there were plans to only implement a non-validating stub resolver in common browsers is not true.
Actually they already did. OS X for instance has this baked into mDNSResponder.
That's not altogether true and now also irrelevant.
mDNSResponder has DNSSEC support that isn't quite baked and was not enabled by default. The only way to use the support it did provide was by passing specific flags to a relatively low-level API. (You'd have to configure the system to use a DNSSEC enabled resolver as well of course.)
mDNSResponder has been replaced by discoveryd which does not have any DNSSEC support (other than silently accepting the validate flags). Perhaps it'll gain further support in the future. If it does I'd not bet on it being enabled by default any time soon.
> To perform Pin Validation, the UA will compute the SPKI Fingerprints for each certificate in the Pinned Host's validated certificate chain, using each supported hash algorithm for each certificate. (As described in Section 2.4, certificates whose SPKI cannot be taken in isolation cannot be pinned.) The UA MUST ignore superfluous certificates in the chain that do not form part of the validating chain. The UA will then check that the set of these SPKI Fingerprints intersects the set of SPKI Fingerprints in that Pinned Host's Pinning Metadata. If there is set intersection, the UA continues with the connection as normal. Otherwise, the UA MUST treat this Pin Validation Failure as a non-recoverable error. Any procedure that matches the results of this Pin Validation procedure is considered equivalent.
Unless I'm missing something, that allows you both to limit the number of CAs which you trust and if you go to the trouble of pinning specific certificates you can even limit damage from a compromised CA.
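For reference, the SPKI Fingerprint itself is just base64(SHA-256(DER-encoded SubjectPublicKeyInfo)), per RFC 7469. A minimal sketch; the input bytes below are a placeholder, not a real key (with a real certificate you'd extract the DER SPKI first, e.g. via openssl):

```python
import base64
import hashlib

def spki_pin(der_spki: bytes) -> str:
    """HPKP-style pin: base64 of SHA-256 over the DER SubjectPublicKeyInfo."""
    return base64.b64encode(hashlib.sha256(der_spki).digest()).decode()

# Placeholder bytes standing in for a real DER SPKI extracted from a
# certificate chain element.
fake_spki = b"\x30\x82\x01\x22placeholder-not-a-real-key"
pin = spki_pin(fake_spki)
print(f'pin-sha256="{pin}"')  # SHA-256 pins are always 44 base64 chars
```

Pin Validation then reduces to a set-intersection check between these fingerprints and the host's pinning metadata, as the quoted spec text describes.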
It's also wrong to say that public key pinning (which addresses PKI/TLS weaknesses) makes DNSSEC redundant. I suppose the proper comparison would be to the (optional) DANE?
It wasn't so much specific to pinning as a general thought that most sites will be in the situation where they need to deploy the TLS-specific measures because they have too many clients which can't assume DNSSEC but if you're already doing that, it's not clear to me that many places would see enough additional value deploying something which is less mature and harder to manage. This is particularly a big deal for anyone who doesn't control all of their infrastructure or works at a large organization.
(Edit: If they are caught.)
If the CA is coerced into issuing a cert, however, I agree with you.
1. DNSSEC is unnecessary, we have SSL.
Do you really want to argue that the CA PKI infrastructure is working? Placing your trust in your top domain solves roughly 99.5% of the problem (considering there are about 500 CAs that you trust right now; that is not a made-up number).
2. DNSSEC is a government-controlled PKI.
No, it's not. It is ICANN-controlled, which is under government mandate but not government control. You already trust ICANN for the root zone, and they have so far stayed out of politics, and have so far never meddled with domains even when the US administration wanted them shut down. Because if they did, the Internet would route around them in a heartbeat.
This is actually one thing that DNSSEC did absolutely correct. If you want a cryptographic assurance that a domain belongs to an owner, who better to make the assertion than the TLD responsible for delegating it? Sure, if you get a Libyan domain, you have to trust the Libyan TLD to sign it. But they are already trusted to delegate it! They run the whois. If they decide to transfer the ownership to Ghaddafi himself, there is absolutely nothing you can do about it. If this is a problem for you, don't get a Libyan domain name.
3. DNSSEC is insecure.
It's ugly, but it's not insecure in any practical way. It could be a lot more modern, but it is in no way worse than SSL (see 1 above).
4. DNSSEC is expensive.
It is already deployed. You are already paying the price for this, whatever it was.
5. DNSSEC is hard.
It adds complexity to your operations, but your tools already support it and it's included in courses already. No competing standard is.
6. DNSSEC makes for DDoS.
That wasn't actually included in the list, but this one is true. DNS is a DDoS vector, and DNSSEC strengthens it. That should be addressed. Better deployment of filters would do much to improve the situation.
7. DNSSEC allows for enumeration of zones.
Also not included in the list. Yes, this is a real issue you need to be aware of. Don't store private data in public zones. Use split zones for this.
All technical decisions are a matter of pros and cons. Pro: it builds on a proven infrastructure and is already deployed in most of the world. You could absolutely build a modern standard, but these things take at least ten years to roll out. Start today and build the successor to DNSSEC!
But remember that DNSSEC did get a few important things right: Assurance follows delegation. Your DNS operator does not have your key. It protects against negative answers. Make sure your successor inherits those.
1. I do not argue that the CA PKI infrastructure is working. In fact, I state explicitly that it is not. The problem is that DANE's replacement for the TLS PKI is controlled by world governments. At a broader level, the problem is that centralized PKIs don't work against nation-state adversaries. The solution for the TLS CA problem is (a) a race to the bottom for CA pricing, led hopefully by EFF, (b) widespread deployment of public key pinning, which serves both to thwart CA attacks and to create a global surveillance system against them, and (c) the eventual deployment of alternate opt-in TLS CA hierarchies, such that users could for instance subscribe to a hypothetical EFF CA hierarchy. DNSSEC accomplishes none of this.
2. Yes, DNSSEC is a government-controlled PKI. Your whole argument here is that if governments abused it, the Internet would "route around it". You can't make that argument without conceding my point.
3. I didn't write "DNSSEC is insecure" (though I think it is). I wrote that it's cryptographically weak, which it is: it's PKCS1v15 RSA-1024. Crawl signatures back to the roots: you'll see RSA-1024 all the way at the top of the hierarchy. TLS has already deployed forward-secure ECC cryptography and will by the middle of this year see widespread deployment of Curve25519 ECDH and Ed25519 deterministic signatures. DNSSEC simply will not: it will be RSA-1024 for the foreseeable future.
4. "DNSSEC is expensive" is not a subhed in my post. "DNSSEC is expensive to adopt" was, and it is: virtually no networking software currently handles soft failures from lookups, which are a universal shared experience among HTTPS users, and will occur just as frequently with DNSSEC (should it ever be deployed). "DNSSEC is expensive to deploy" is another point I made, which is also true: of the top 50 Alexa sites, why not go look how many are DNSSEC-signed?
5. "DNSSEC is hard" isn't something I wrote at all, so I don't know how to respond to it.
6. "DNSSEC makes for DDoS" is something I think is probably true, but also an argument I wasn't willing to go to bat for, because amplification attacks using ANY queries against normal DNS also work. But sure, if you want to add fuel to the fire.
7. "DNSSEC allows for enumeration of zones" is captured under "DNSSEC is unsafe" in my post. If your answer to this is "don't use DNSSEC for important zones", then we agree, modulo that I think you shouldn't use it at all.
I think that the concept (though not DANE itself) could be made to work, e.g. by using k-of-n systems to verify that a majority of governments agree to a particular fact. If the US, UK, Russia, Iran and Ghana all agree on a set of facts, those facts are likely to be true; if enough nation-states agree on a set of facts, then those facts might as well be true.
This could be used to secure the root for a nation-state's DNS, since that root is, after all, just a fact.
As an example, a majority of member states could certify that the ICANN board has authority for the root; a majority of the ICANN board could certify that 192.0.2.27 is authoritative for .example; in exactly the same way, 192.0.2.27 could then use its authority to delegate responsibility for example.example to 203.0.113.153, and so forth.
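The threshold logic of such a k-of-n scheme is simple to state. A minimal sketch (signature verification elided; the function name and the 3-of-5 quorum are made up for illustration):

```python
def fact_accepted(endorsements: set, quorum_members: set, k: int) -> bool:
    """Accept a fact iff at least k distinct quorum members endorse it.

    Real deployments would verify a cryptographic signature per member;
    here only the counting rule is shown.
    """
    return len(endorsements & quorum_members) >= k

states = {"US", "UK", "Russia", "Iran", "Ghana"}

# 3-of-5 majority: the delegation is accepted...
print(fact_accepted({"US", "UK", "Ghana"}, states, k=3))  # True
# ...but no single state (or pair) can assert it alone:
print(fact_accepted({"US", "Russia"}, states, k=3))       # False
```

The design point is that no single member of the quorum, however powerful, can unilaterally forge the fact.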
A similar scheme could be used for IP address ownership.
Yeah, any nation state is going to be able to lie about the identity of machines it is responsible for—but it's a government; it can do that anyway. Other approaches like TOFUPOP help there, but at the end of the day the guys who can point guns at a certificate holder have the ability to make him do anything anyway.
This needs to be better explained in depth. Do world governments control DNS because of ICANN's GAC? Does the USG control DNS because ICANN is contracted to the NTIA? Are the root key creators USG agents? What is your reasoning behind world governments controlling DNS and/or DNSSEC?
I can't argue against this statement because it's not clear to me what you actually mean.
1. Since you decide that the whole of DNS is "government controlled" (a kind of statement that is useless to discuss further), you must also accept that "a race to the bottom in CA pricing" means certificates generated without human intervention. That would be a very complex and error-prone way to implement the same security model DANE gives you, where domain ownership is cryptographically assured.
2. That is only because "government controlled" is a meaningless phrase. ICANN is no more government controlled than any other organization operating in a jurisdiction. They have a mandate from the US Department of Commerce, but that does not mean they are "controlled" by it. It could certainly improve, but the proposals for a multi-stakeholder model have so far been thinly veiled attempts to politicize ICANN. That does not mean it cannot be done. The fact that the root zone has a spotless record, free of political meddling so far, should count for something.
3. This last statement is incorrect. DNSSEC is not RSA-1024 for the foreseeable future. Implementing an alternative way to secure DNS with more modern cryptography is many orders of magnitude more difficult than switching algorithms in DNSSEC, which the protocol fully supports.
4. Again, a statement so general it holds true for any worldwide standard. If you want every piece of software under the sun to handle lookup failures correctly, that is expensive in itself. Is DNSSEC more expensive than any reasonable alternative? No, probably less so. Its failure modes are well understood.
Since the rest of the points weren't raised by you directly (though they have been, many times, by others), I think we can set them aside for the remainder of this discussion.
ICANN is not the only problem. Even if ICANN is scrupulously honest, the most popular TLDs are controlled by other organizations.
Not only am I right about RSA-1024 in DNSSEC, but I'm obviously and empirically right about it, as a trip to DNSviz will show you. I'm amused by all the arguments that presume that I'm just Googling random stuff and posting them to see what sticks. Unfortunately, no: I'm an implementor and I've been writing about DNSSEC for more than 7 years. I'm also amused by the objection that because there's an algid somewhere in the DNSSEC RRs, RSA isn't a problem. DNSSEC in its current deployment trajectory is PKCS1v15 RSA.
I am also obviously right that a secure DNS system designed in 2015 would not look like DNSSEC. It would instead be based on online-signing, would use deterministic ECC signatures, and would do full end-to-end encryption of DNS requests instead of trying to shoehorn the whole protocol into standard DNS RRs.
You've also missed the point about software reliability. Software today gets away with a shortcut in which it can assume that failed DNS lookups are the result either of user error or lack of connectivity. Software does not generally have a recovery state machine for lookup failures. That cheat stops working when DNSSEC failures are introduced to the model.
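The missing state machine can be sketched. A hypothetical outline (the enum and action strings are illustrative, not any real resolver API) of the extra state DNSSEC introduces, which is neither "user typo" nor "you're offline":

```python
from enum import Enum, auto

class LookupResult(Enum):
    OK = auto()
    NXDOMAIN = auto()             # name genuinely doesn't exist
    NETWORK_ERROR = auto()        # no connectivity: retry later
    VALIDATION_FAILURE = auto()   # DNSSEC bogus: neither of the above

def next_action(result: LookupResult) -> str:
    """What should the application do? Most software today collapses every
    non-OK case into 'typo or offline'; DNSSEC adds a fourth state."""
    if result is LookupResult.OK:
        return "connect"
    if result is LookupResult.NXDOMAIN:
        return "report: no such host"
    if result is LookupResult.NETWORK_ERROR:
        return "retry with backoff"
    return "hard-fail: signatures did not validate"

print(next_action(LookupResult.VALIDATION_FAILURE))
```

Writing that last branch, and deciding what the UI should actually say for it, is the work that essentially no deployed software has done.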
Well, DNS exists. Most of what makes you uncomfortable about DNSSEC is really a set of issues with DNS itself. But as long as DNS names are what you wish to secure, no alternative system could do that in a way you would be comfortable with. Unless you wish to replace the domain name system itself?
I guess what we disagree about here is if we need an alternative to the CA model. I believe we do, and I believe the most sound way is to cryptographically assert domain ownership. Such a model would be transparent in operation, not add any failure points except for the ones we already have in the domain name system, and be very clear to the end user in what it is we really secure.
(A common point made about the SSL CA system is that we want to secure "entities" and not domain names, and that is why a worldwide PKI system should never sign keys based on domain names alone. I believe that is completely and utterly wrong. Most users actually do want to know that they are talking to the domain name bigbank.com, not the entity "Big Bank Co. Ltd.".)
Given that premise, DNSSEC at least solves the right problem. One could argue that its cryptographic implementation is not great, and it's not, but it's pretty much identical to TLS and IPsec, which are what we will use for the foreseeable future.
> Not only am I right about RSA-1024 in DNSSEC, but I'm obviously and empirically right about it, as a trip to DNSviz will show you.
I do not follow at all. We will be "stuck with RSA-1024 for the foreseeable future" because that is what people use today? That does not follow at all.
You could just as easily have said "we will be stuck with SHA1 for the foreseeable future" one year ago, but reality just 12 months later looks very different.
> It would instead be based on online-signing,
This is a point you can make, but I disagree. We have seen more and more centralization in DNS serving infrastructure. Amazon, CloudFlare, and the other big players serve more and more of the domain name space.
I think giving them complete powers to fully sign replies on behalf of the domain owners would be a mistake. It would clearly be a step to a more centralized PKI.
(And if government interference is what you fear, you should absolutely want offline signing. Amazon has yielded to government interference more times than I can count, while ICANN has not. Not that an online signing model would free you from ICANN of course.)
> Software today gets away with a shortcut in which it can assume that failed DNS lookups are the result either of user error or lack of connectivity.
No, you missed the point here, which is that all reasonable alternatives are worse off. Unless your idea is to swap out the domain name system or stay with the broken CA model, of course.
Also Startcom. https://startssl.com (free TLS certs, each good for one subdomain + your base domain, for one year)
* They are only free for personal, non-commercial use. You can't (contractually) use them for your startup.
* They are free to acquire, but they charge to revoke them. So don't lose your key (even accidentally, via something like heartbleed).
* Their roots are not in the Windows XP base install. They are included in an optional update, but my experience with a web site that had old machines in its demographics showed that practically no one had it installed. Given that XP is no longer supported by Microsoft, this point is getting less and less relevant as the days go by.
Failing to revoke keys on any notification that they've been compromised is, per CA/B Forum guidelines, actually grounds for revoking their trust as a CA. They still have thousands of valid, unrevoked certificates for Heartbleed-compromised keys right now, because they issued them for free but won't revoke them without charging.
Fortunately, Let's Encrypt will replace them.
No, it wouldn't. People have been calling for an end to spoofing for 15 years with virtually zero progress.
Filters don't help in volumetric attacks.
These are obviously hard to design, with a bunch of trade-offs and decisions to make along the way. I'd argue that this makes practical research and competition more important, not less.
No, he would have had their TLS public key, assuming they were deploying DANE, which he could have gotten by going to their website.
Once DANE is deployed widely, browsers should require both a certificate from a CA and that the certificate be in DNS. Two-factor authentication for SSL.
If it doesn't, you've given control over the certificate to Libya.
It is easier to see the problem when you look at the real issue, which is that the NSA controls both the CA hierarchy and .COM.
> If the CA signature means anything, you don't need DNSSEC.
If DNSSEC is fully deployed and supported, you don't need CAs.
> If it doesn't, you've given control over the certificate to Libya.
If it (DNSSEC) isn't, you've given control over the certificate to any trusted CA in the world.
> It is easier to see the problem when you look at the real issue, which is that the NSA controls both the CA hierarchy and .COM.
Neither DNSSEC nor the CA system can prevent the NSA from doing their evil stuff.
Centralized architecture leaves DNSSEC vulnerable to nation-state attacks. This is by design.
Decentralized architecture leaves the CA system vulnerable to attacks coming from any trusted CA. This is by design.
National Security Letters (and their non-US equivalents) leave the CA system vulnerable to nation-state attacks.
DNSSEC 2 - 1 CAs
I'm not an expert, but that's my take so far.
Projects like Let's Encrypt and CertCA on the CA side. Certificate Transparency on the standards side. Inside the browser you have HTTPS Everywhere, the SSL Observatory, and things like Convergence.
Are as many people actively innovating on DNSSEC and DANE? If they exist, I don't see them.
Also, even if they do exist, once the system is centralized, it's almost impossible to move it forward. In the CA system, I as an individual can do more for my own security.
Of course DNSSEC is vulnerable to NSLs, but that is not relevant. What is relevant is:
- DNSSEC is vulnerable to nation-state attacks.
- The CA system is vulnerable to nation-state attacks.
- The CA system is vulnerable to attacks from any CA.
- I, as a user, have means to circumvent or mitigate CA issues (using Certificate Patrol as one possibility, certificate pinning as another, ...)
- There is no user workaround for the DNSSEC vulnerabilities
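Certificate pinning is a good illustration of a user-side mitigation, because it needs no cooperation from any CA or registry. A minimal sketch of the core check (the pin comparison; the dummy bytes below stand in for a real DER certificate):

```python
import hashlib
import socket
import ssl

def cert_matches_pin(der_cert: bytes, pinned_sha256_hex: str) -> bool:
    """Compare the SHA-256 fingerprint of a DER certificate to a stored pin."""
    return hashlib.sha256(der_cert).hexdigest() == pinned_sha256_hex

def fetch_der(host: str, port: int = 443) -> bytes:
    """Fetch the peer's certificate in DER form (requires network access)."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert(binary_form=True)

# Offline demonstration with placeholder bytes instead of a real certificate:
dummy = b"not a real certificate"
pin = hashlib.sha256(dummy).hexdigest()
print(cert_matches_pin(dummy, pin))  # True
```

A client that stores the pin on first contact and refuses connections when the fingerprint changes detects a swapped certificate regardless of which CA, honest or coerced, signed the replacement.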
Furthermore, I'd guess that the majority of CA attacks are nation-state attacks, so the two largely boil down to the same thing. I don't know of any criminal attacks (such as attacks on online banking) on the CAs. Conclusion: I, as a user, don't gain anything from DNSSEC.
> DNSSEC is Unnecessary
> DNSSEC is a Government-Controlled PKI
And TLS is an all-governments-controlled PKI, where any of them can subvert it when they want.
> DNSSEC is Cryptographically Weak
Nope, it isn't. But yes, there are people using it with weak crypto... Exactly like TLS.
> DNSSEC is Expensive To Adopt
A bunch of overblown technical issues that won't make any difference in practice. If the domain is being attacked, the lookup fails; if there's a configuration problem, the lookup fails.
> DNSSEC is Expensive To Deploy
Oh, that's correct. TLS needs an entire hour to deploy for the first time, DNSSEC needs some 3 or 4.
> DNSSEC is Incomplete
Yep, its goal is to do the same as the TLS PKI. There's DANE for the rest of it.
> DNSSEC is Unsafe
Yes, also because it's DANE's job to do that.
> DNSSEC is Architecturally Unsound
I can't make any sense out of that.
> DNSSEC is Cryptographically Weak
> Nope, it isn't.
So you're saying that 1024-bit RSA keys are just dandy? You're saying that PKCS1v15 padding is a good idea?
Many organizations are mandated (by statute, etc.) to use elliptic-curve cryptography already. It is not because RSA is broken, but because the key sizes would have to be uneconomically large.
The key size requirements are picked based on the advancement rate of technology (mathematics, calculation power / price), and the time the data should stay protected. Many security related standards aim for several decades, and for RSA key sizes that does not look so good.
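The "uneconomically large" point can be made concrete with the comparable-strength table from NIST SP 800-57 Part 1 (approximate figures; sizes in bits):

```python
# Approximate comparable strengths per NIST SP 800-57 Part 1.
# security level : (RSA modulus size, ECC key size)
COMPARABLE = {
    80:  (1024,  160),   # RSA-1024: where common DNSSEC deployments sit
    112: (2048,  224),
    128: (3072,  256),
    192: (7680,  384),
    256: (15360, 512),
}

for level, (rsa, ecc) in sorted(COMPARABLE.items()):
    print(f"{level}-bit security: RSA-{rsa} vs ECC-{ecc}")
```

ECC key sizes grow roughly linearly with the security level while RSA moduli grow much faster, which is why standards targeting multi-decade protection push toward elliptic curves.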
If the system is not already in wide use, rolling out a major protocol that does not support ECC at this point is a very, very bad choice. Many organizations have simply had to disable it already because it violates their requirements. (Oddly, I have seen leaving traffic unencrypted not violate anything, because "that's the way DNS is"... go figure.)
I don't know, but perhaps because DNSSEC deployment IS happening... and the critics want to stamp that down. :-(