Standing reminder that friends don't let friends use DNSSEC, because it is crazy and provides no real security. With that in mind, it's fun to dive in and see where the problems are ...
Fundamentally, this DANE idea swaps out the problems of the PKI CA hierarchy (the kind your browser uses when you actually log into Gmail to read and send your e-mail) for the problems of DNSSEC.
The biggest problems with PKI are that there are too many CAs, who can each go rogue, and that X.509 is a horrifically awful format that takes a gigantic TCB to parse. Mitigations include browsers firing CAs and severely impacting their business (so effectively, there is regulation), certificate transparency, and more and more testing. Either way you pretty much have to deal with these problems today though: an awful lot of e-mail is sent/received over HTTPS.
The biggest problems with DNSSEC are that DNSSEC doesn't actually protect DNS queries at their most vulnerable points; that you have to trust your TLD provider and the root operators completely, with no effective means to fire them; and finally that they are almost certainly using embarrassingly stupid crypto that even most ICOs could avoid. This very deck has several 1024-bit RSA keys, right there on a slide with a February 2018 timestamp, as if that hasn't been an incredibly obviously stupid thing to do ever since the Logjam paper was published, or, you know, since everyone stopped using such keys for TLS years ago.
Actually wait, there's even a 512-bit key on another slide, using alg-5. That's RSA + SHA1!! What's the point of using all of those SHA256 MACs at the TLS layer if everything depends on utterly breakable RSA plus SHA1 anyway? It's hard to take this seriously.
PS. Yes, this is also my best impression of tptacek
What's the energy/time cost of breaking 512-bit RSA keys in 2018? I wonder if we could sign a message that says "DON'T USE DNSSEC" with their private key.
> This instance type has two Intel Xeon E5-2666 v3
processor chips, with 36 vCPUs in a NUMA configuration with 60 GB of RAM
A team built a configuration that could factor a 512-bit RSA key in four hours for $75. That was August 2015, so it would clearly cost less and run faster today. As a plus, they call out DNSSEC explicitly in their paper.
> Either way you pretty much have to deal with these problems today though: an awful lot of e-mail is sent/received over HTTPS
Nitpick: I believe you meant to say TLS rather than HTTPS.
The majority of sending indeed takes place over TLS, at least between top-tier senders and ISPs. Google's Safer Email Transparency Report shows how many inbound and outbound connections to/from Gmail are protected by TLS, and the numbers have been climbing steadily in recent years.
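You can see the opportunistic part for yourself. A minimal sketch with Python's smtplib that checks whether a given MX advertises STARTTLS (the host name is just an example); note that the default starttls() call encrypts but does not authenticate the peer, which is exactly the gap DANE and MTA-STS try to close:

    import smtplib

    # Does this MX advertise STARTTLS? (host name is an example)
    with smtplib.SMTP("gmail-smtp-in.l.google.com", 25, timeout=10) as s:
        s.ehlo()
        if s.has_extn("starttls"):
            s.starttls()  # encrypts, but does NOT verify the certificate by default
            s.ehlo()
            print("STARTTLS negotiated")
        else:
            print("no STARTTLS offered; mail would go in the clear")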
> The biggest problems with PKI are that there are too many CAs, who can each go rogue, and that X.509 is a horrifically awful format that takes a gigantic TCB to parse. Mitigations include browsers firing CAs and severely impacting their business (so effectively, there is regulation), certificate transparency, and more and more testing.
Those are problems with the X.509 PKI, not with PKI in general. SPKI (RFCs 2692 & 2693) fixed them both; it really should have been picked up. It could be used to ensure that you really are talking to someone authorised to use a particular IP address and host name; it also uses a much lighter format than X.509's ASN.1 (and it's human-readable to boot!).
I encourage anyone building a modern protocol to take a look at SPKI. Some parts of it need to be modernised, but that's easily enough done.
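For anyone who has never seen one, here is a rough, illustrative sketch of what an SPKI-style certificate looks like as an S-expression (the shape follows RFC 2693; the hashes and the tag are placeholders I made up):

    (cert
      (issuer (hash sha1 |TLCgPLFlGTzgUbcaYLW8kGTEnUk=|))
      (subject (hash sha1 |Ve1L/7MqiJcj+LSa/l10fl3tuTQ=|))
      (tag (dns (* prefix example.com)))
      (not-after "2018-12-31_00:00:00"))

Compare that with dumping an X.509 certificate through an ASN.1 decoder and the "lighter format" point makes itself.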
I'm not so worried about DNSSEC needing to trust TLD & root servers, because with domain-verified certs that's all you get anyway. I contend that domain verification (and IP verification, which to my knowledge no-one does) is good enough for most of what most of us need or want, and the customised SPKI certs could be used for the edge cases.
Beyond that, if I take the insanely bad track record of CAs as a group, and compare it to the (as far as I know) complete lack of serious attacks on DNSSEC (including even those zones with 1024-bit keys), then I would put my money on DNSSEC+DANE.
For DANE you need to do client-side validation. But that has been known for quite a while now, and there is code (for example getdns) to do it.
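getdns gives you full validation in the stub. As a rough sketch of the weaker but common alternative, here is the same check with dnspython, asking a validating resolver (assumed to run on 127.0.0.1) for a TLSA record with the DO bit set and refusing anything that comes back without the AD bit:

    import dns.flags
    import dns.message
    import dns.query
    import dns.rdatatype

    # Query the TLSA record for an SMTP server, with the DO bit set
    # so the resolver performs DNSSEC validation on our behalf.
    q = dns.message.make_query("_25._tcp.mx.example.com",
                               dns.rdatatype.TLSA, want_dnssec=True)
    r = dns.query.udp(q, "127.0.0.1", timeout=5)

    # AD means the resolver asserts the answer validated under DNSSEC.
    if r.flags & dns.flags.AD:
        for rrset in r.answer:
            print(rrset)
    else:
        raise RuntimeError("TLSA answer did not validate; do not use it")

The AD bit is only worth anything if you trust the path to the resolver, which is why it should be on localhost; that's the "most vulnerable point" argument upthread.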
For e-mail there is the additional complexity that DANE seems to be the only way to transition to SMTP over TLS without downgrade attacks.
Why would there have been any attacks on DNSSEC? Literally no real-world system relies on it right now. It's "deployed" on a small subset of large networks, but not in ways where those networks are materially exposed if DNSSEC fails.
As for SMTP: the IETF is pursuing STS, which breaks that downgrade attack the same way HSTS does for the web (except far more powerfully, since the most important MTA relationships are far more stable than the web's), with an explicit design goal of not relying on DANE --- which nobody uses.
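For reference, deploying MTA-STS (RFC 8461) amounts to one TXT record plus a short policy file served over HTTPS; a sketch with placeholder names:

    _mta-sts.example.com. IN TXT "v=STSv1; id=20180301T000000"

    # served at https://mta-sts.example.com/.well-known/mta-sts.txt
    version: STSv1
    mode: enforce
    mx: mx1.example.com
    mx: mx2.example.com
    max_age: 604800

The first-ever fetch is the leap of faith the sibling comment objects to; after that, the cached policy pins TLS for max_age seconds.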
Plenty of published attacks are not all that practical, and there are plenty of security researchers who would like to have some nice attacks on their CVs.
If you like the security of HSTS, then I guess STS is for you. Personally I'd like something better than just blindly trusting the network the first time you try to connect to something.
Security researchers generally do not in fact spend a lot of time attacking systems that nobody uses. But in the sense that you're referring to, BIND, the de facto standard Internet DNS server, has one of the worst records in all of software security.
You keep saying that nobody uses it, which isn't true. It's been deployed in the real world for some time now, securing some real systems. The threat model is different than for public CA TLS and not any less secure.
The fact that real world public systems such as SSH and SMTP rely on it has led to people relying on it in other protocols as well, where data is transferred between two parties and it is convenient to rely on established infrastructure for key rollovers and distributed trust.
It's not a bad idea where the trust model is a little more complex than two parties exchanging keys on a regular schedule. Public implementation leaves a little to be desired, but that doesn't mean the protocol is ill designed.
I did answer with two specific examples of public protocols that can rely on DNSSEC. The fact that I will not point out specific endpoints where those or similar policies are in use should not be too surprising. Real world people use SSHFP and DANE, and those would stop working if zones suddenly didn't validate.
All you did was say that there are two protocols that have some kind of DNSSEC integration. That wasn't the question I asked. Lots of things have DNSSEC integrations that nobody relies on. Is the answer to my question "I don't know"?
No, it is not. I don't understand why you are so confrontational. The answer is that, at the very least, those who rely on SSHFP and DANE will rely on DNSSEC. There are also proprietary protocols which have chosen to rely on it.
Unless your real argument is that no one has ever used that in production, anywhere, that should be plenty by way of example.
Most (maybe all?) people who use SSHFP would be no worse off were SSHFP to suddenly stop working, since SSH relies on key continuity, and the introduction-connection security risk is one that virtually everyone who uses SSH already mitigates manually.
So, no, you have not actually established that anyone would be endangered were the root KSK private bits published on Pastebin. I suspect, in all seriousness, that the answer to my question is "in reality, nobody would be endangered".
OpenSSH does not rely on key continuity when used with SSHFP; it will not store key data at all in that case, since that would make the whole endeavour rather useless. Neither does Postfix when used with DANE, for example.
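Concretely, the server side publishes records generated with ssh-keygen -r and the client opts in via ssh_config; a sketch with placeholder host name and fingerprint (SSHFP algorithm 4 is Ed25519, fingerprint type 2 is SHA-256):

    # On the server, ssh-keygen -r host.example.com emits lines like:
    host.example.com. IN SSHFP 4 2 0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef

    # In the client's ~/.ssh/config:
    Host host.example.com
        VerifyHostKeyDNS yes

OpenSSH only treats the record as authoritative when the resolver reports the answer as DNSSEC-secure; with an insecure answer it falls back to the usual fingerprint prompt.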
It's still an improvement over the way things were before (nothing at all, and complete blind faith in every part of the DNS and internet routing) and arguably better than the CA model, which isn't that useful for email.
Furthermore, the browsers merely replace the TLD with CAs in who you have to trust, and in many cases it's practically difficult (or impossible) to "fire" your browser.
I will agree that no one seems to use good crypto with DNSSEC though. The only people I know using it are using 512-bit RSA, which is pretty horrifying.
Survey says: 5.2 million DNSSEC domains, and ~12,000 with 512-bit RSA keys, concentrated at just two small DNS providers. So good job knowing only the few stuck with early-90s export crypto. :-) They'll be shamed into fixing this soon enough. An even larger provider that had 512-bit RSA keys no longer does.
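If you want to check a zone yourself, the RSA modulus size is right there in the DNSKEY RDATA (RFC 3110 wire format: a 1- or 3-byte exponent length, the exponent, then the modulus). A quick sketch with dnspython, using a placeholder domain:

    import dns.resolver

    RSA_ALGS = {5, 7, 8, 10}  # RSASHA1, RSASHA1-NSEC3-SHA1, RSASHA256, RSASHA512

    for rdata in dns.resolver.resolve("example.com", "DNSKEY"):
        if rdata.algorithm not in RSA_ALGS:
            continue
        key = rdata.key
        # RFC 3110: the first octet is the exponent length, unless it is
        # zero, in which case the next two octets hold the length.
        if key[0] != 0:
            explen, offset = key[0], 1
        else:
            explen, offset = int.from_bytes(key[1:3], "big"), 3
        modulus = key[offset + explen:]
        print(rdata.flags, "alg", rdata.algorithm,
              "~%d-bit modulus" % (8 * len(modulus)))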
And even a 512-bit RSA key is slightly better than nothing. One needs to remember that DANE in SMTP is a mechanism for downgrade-resistant transition from unauthenticated opportunistic STARTTLS to authenticated required STARTTLS.
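For context on what that mechanism actually pins: the usual DANE-EE(3) SPKI(1) SHA-256(1) TLSA record is just a digest of the server's public key. Computing the matching data from a PEM certificate takes a few lines with the pyca/cryptography package (file name and host names are placeholders):

    import hashlib

    from cryptography import x509
    from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

    # The "3 1 1" TLSA matching data is the SHA-256 digest of the
    # certificate's DER-encoded SubjectPublicKeyInfo.
    pem = open("mx.example.com.pem", "rb").read()
    cert = x509.load_pem_x509_certificate(pem)
    spki = cert.public_key().public_bytes(Encoding.DER,
                                          PublicFormat.SubjectPublicKeyInfo)
    print("_25._tcp.mx.example.com. IN TLSA 3 1 1",
          hashlib.sha256(spki).hexdigest())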
The frequency table for RSA key lengths in zone signing keys shows primarily 1024-bit RSA keys.
In many cases (such as Let's Encrypt) the CA verification process is merely "controls some DNS resource" or "controls some HTTP response that directly depends on DNS". In that case it is theoretically impossible for the cert to be more trustworthy than DNS(SEC). [The cert's untrustworthiness is DNS's untrustworthiness plus the CA's untrustworthiness plus some other misc factors, such as TLS library untrustworthiness].
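Concretely, for Let's Encrypt's dns-01 challenge the entire proof of control is the ability to publish one TXT record (everything here is a placeholder):

    _acme-challenge.example.com. 300 IN TXT "<base64url(SHA-256(key authorization))>"

Anyone who can spoof that single answer towards the CA's resolvers walks away with a browser-trusted certificate, which is the sense in which the cert cannot be more trustworthy than the DNS it was bootstrapped from.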
Not only that, but Firefox and Chrome have "fired" some of the largest, "most important" CAs on the Internet. We know there's no TBTF problem with the Web PKI. But there is literally no way to fire a DNS TLD operator. In this scheme, they are prêt-à-porter TBTF.
> We know there's no TBTF problem with the Web PKI.
Hardly.
Comodo is still around, and Symantec took years to be gradually distrusted. Hell, even the WoSign/StartSSL debacle started in Jan 2015, and they weren't removed until Jan 2017.
That depends entirely on best practice among CAs, something we've seen multiple CAs bend the rules on. Any one of them is enough for the attack to go undetected.
An attack against the global public DNSSEC would be much harder to mount undetected, as it is logged and cached by nature of its design.
People say this constantly, as if attacks against DNSSEC would require intelligence services to alter the DNS across the entire Internet. But of course, that's not how states attack Internet connections and never has been: they subvert ISPs (which are often state-owned, and almost invariably state-controlled) and pinpoint individual connections to manipulate. There's a hundred-million-dollar business of tools to facilitate this.
It's not one of my best arguments against DNSSEC, but it is a valid one: the fact that the people designing and advocating for it don't appear to understand the Internet threat model.
No one has made that statement. I pointed out that DNS by nature is cached and distributed. Not that it's impossible to manipulate, but that in every circumstance it is no worse than a TLS intercept, and in some cases less bad.
DNSSEC did make one important design decision: Signing is offline. That alone makes the threat model different than with TLS, and makes for some interesting use cases.
Signing was offline, until everyone realized that offline signing was unworkable. Now it's a bastardized version of both online and offline signing, with an insane NSEC3 white-lies system of distributed password hashes and fake zone entries that is de facto BCP for real-world deployment.
I don't know how pure offline signing would have made anything better for DNSSEC (I think it makes a lot of things about it worse), but let's at least be clear that the offline signing design failed.
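To make the "password hashes" point concrete: NSEC3 owner names are just salted, iterated SHA-1 hashes of the zone's names (RFC 5155), and the salt and iteration count are published, so enumerating a zone is a plain offline dictionary attack. A minimal sketch in Python (salt, iteration count and guesses are made up):

    import base64
    import hashlib

    def nsec3_hash(name: str, salt: bytes, iterations: int) -> str:
        """RFC 5155 NSEC3 hash: iterated SHA-1 over the wire-format name."""
        wire = b"".join(
            bytes([len(label)]) + label.encode("ascii")
            for label in name.lower().rstrip(".").split(".")
        ) + b"\x00"
        h = hashlib.sha1(wire + salt).digest()
        for _ in range(iterations):
            h = hashlib.sha1(h + salt).digest()
        return base64.b32hexencode(h).decode().lower()  # needs Python >= 3.10

    salt, iterations = bytes.fromhex("aabbccdd"), 10
    # Hash candidate labels and compare against the NSEC3 owner names the
    # zone hands out; any match reveals an existing host name.
    for guess in ("www", "mail", "vpn", "intranet"):
        print(guess, nsec3_hash(guess + ".example.com", salt, iterations))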
You didn't really address what I said. My argument, which I think is pretty straightforward, is that it is not in fact harder to carry out pinpoint attacks on DNSSEC than it is to do so on TLS. In fact: it's easier; in many ways, modern browsers act as hundreds of millions of sensors in a distributed CA surveillance system, one that has gotten multiple CAs killed. No DNS software works that way.
Your experience is different from mine then. I have seen a few real world deployments and they are all either completely offline signed or with the signer completely hidden.
Are we talking about the public CA system or TLS in general? It should be fairly obvious that the former is useless in the hands of end users, as Google even had to hard-code policy for their own services in their client. The latter argument would depend on what type of attack we're discussing. A wide public attack where you have "hundreds of millions" of clients affected would be impractical to keep undetected in any system. Manipulating a single connection with stolen keys is likely to stay undetected in any case, though with DNS the information being cached at least makes detection a possibility.
Somehow, you claimed, DNSSEC's offline signing design had something to do with how hard it was to conduct pinpoint attacks against DNSSEC.
No, that's not true.
But also: DNSSEC isn't an offline-signing system anymore: the BCP mode of deployment requires online signing; otherwise, dumping all the hosts in a zone is about as hard as cracking a 1990s password file.
You've seen zones where not even that level of protection is employed. Ok? What does that have to do with whether it's easy to launch pinpoint attacks against DNSSEC?
Oh, but pinpoint attacks are possible against TLS. Yes? And? The premise of this subthread is your (false) claim that it's not straightforward to do that against DNSSEC too.
Again: not only is it straightforward to do that, but it's actually easier, because, once again, no end-user DNS resolver also functions as security telemetry surveilling the DNS PKI, unlike the Web PKI where that is in fact the case.
You stated as fact that DNSSEC was badly designed, specifically because its designers do not understand the Internet threat model.
I pointed out that the design for offline signing makes for interesting use cases. You now claim that every real world deployment is online signed, which is not my experience. At least you acknowledge real world deployments.
That has nothing to do with the complexity of pinpointing a connection to break, and nowhere did I make the claim that any attacks are straightforward. I did make the point that it is not easier and that it may very well be more complex. Pinpointing a TLS stream and inserting a false certificate is trivial if you possess a valid CA key; pinpointing the DNS query corresponding to a specific TLS stream is an added complexity (especially where multiple resolvers can be involved, and where you also run the risk of being stuck in a cache).
I'm not sure what kind of TLS stack ever delivered telemetry on valid signatures post facto, unless you refer to web browsers on the public Internet which is a different question entirely.
> Mitigations include browsers firing CAs and severely impacting their business (so effectively, there is regulation)
Nit: Not when the browser developer and CA are part of one entity. Excepting some major change to corporate structure, there are presumably no or very nearly no circumstances under which Chrome will permanently remove the Google CAs. Their behavior is therefore essentially unregulated WRT Chrome users.
I agree with that. But that point is orthogonal to the (in)validity of the "CAs are effectively regulated by browser vendors" argument I was commenting on.
> DNSSEC doesn't actually protect DNS queries at their most vulnerable points
That would make it useless if true. Are you talking about trusting an external resolver? I haven't seen any real world deployments like that.
> you have to trust your TLD provider and the root operators completely
You already have to trust them. They are the authoritative source of who owns a domain. In a perfect world, they could assert that ownership cryptographically.
The bit about weak crypto deployments, however, is completely correct. That needs to be fixed. Improving the situation takes time, and there are aligned economic and national interests in maintaining the global CA model with its numerous trust points.
> DNSSEC doesn't actually protect DNS queries at their most vulnerable points
> That would make it useless if true.
Correct. Clients do not validate DNSSEC and clients cannot be made to do so until DNSSEC deployment is universal (at which point you still have several other problems).
You are saying that the ssh client that I have that locally validates SSHFP records using getdns doesn't actually work because DNSSEC is not universally deployed?
Strange if something I've been using for a couple of years cannot exist.
You are correct that using previously obtained information over a secure channel will allow you to continue connecting to an unchanged endpoint. Again, not helpful when connecting to new or changed endpoints over an untrusted path, unless you assume that all endpoints implement the mechanism (and, of course, that you trust the CA).
SSH does not operate with trust-on-first-use when using DNSSEC. It will not store key information locally in that case, specifically to be able to operate with changed endpoints over untrusted networks.
Because of the architecture of DNSSEC, it's not in fact available until every DNSSEC resolver on the Internet supports it; until then, you have to keep bad crypto available, and expose users to downgrade attacks.
No, that's not true, and the way you know it's not true is that there's such a thing as a Qualys SSLLabs grade, where it makes sense to talk about individual systems being secured against specific cryptographic attacks.
DNSSEC is all-or-nothing. You can publish Ed25519 records (in theory), but for anyone to talk to you, you'll have to publish equivalent RSA records --- more importantly, every parent zone also needs to publish secure records, or else your Ed25519 records are just theater. TLDs still have RSA-1024 keys!
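To spell out the "equivalent RSA records" bit: a zone signed with two algorithms carries both DNSKEY sets (and signatures) side by side, something like this sketch (key material elided to placeholders; algorithm 8 is RSA/SHA-256, 15 is Ed25519; flags 257 mark a KSK, 256 a ZSK):

    example.com. 3600 IN DNSKEY 257 3 8  <base64 RSA KSK>
    example.com. 3600 IN DNSKEY 256 3 8  <base64 RSA ZSK>
    example.com. 3600 IN DNSKEY 257 3 15 <base64 Ed25519 KSK>
    example.com. 3600 IN DNSKEY 256 3 15 <base64 Ed25519 ZSK>

And the chain is only as strong as its weakest link: a validator that accepts an RSA-1024 path in the parent never benefits from the Ed25519 keys below it.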
DNSSEC advocates really want DNSSEC's security to asymptotically approach that of TLS, but in fact DNSSEC is far worse.
How is that not equivalent to TLS? The fact that you can talk about individual domains and their key length should be evidence enough that Qualys, or anyone else, could do the same.
You can get an Ed25519 certificate for your web server today too, but hardly anyone would be able to communicate securely with you. Until deployment reaches critical mass you are in need of a fallback. Sunsetting protocols is even harder, so there's complete equivalence there. TLS is just orders of magnitude more widely deployed than DNSSEC.
There's also no need to suggest I am an advocate of something when stating basic facts about it. Real world systems are dependent on DNSSEC today, and nothing you or I believe will change that. I will not discuss specifics, but I am sure you recognize its use within well known public protocols such as SMTP and SSH.
How is BEAST relevant to any of this? It might be to some theoretical encrypted DNS, but not to the signature my CA provides over my certificate ownership, just as it is not to the signature my registrar provides over my domain ownership.
Again: individual site owners can reliably enable modern curve crypto in TLS. Nobody can do that with DNSSEC; all DNSSEC crypto is lowest-common-denominator, where the LCD is 1024-bit or even 512(!)-bit RSA.
The 512-bit RSA keys are a distraction. You're allowed to not care about the security of your own domain, but if your DNS hosting provider does not care and you do, get a better DNS hosting provider; all but two providers (of any size) use stronger keys.
ECDSA P-256 and P-384 are widely deployed. New algorithms will take some time to be widely supported by peer resolvers, just as TLS 1.3 will take time to be widely supported by web sites (browsers are more agile, but many websites still don't do forward-secrecy with TLS 1.2).
I will refrain from begging you to answer the question.
It is sufficient to note that every PKI is as secure as its weakest link. If my CA uses 512-bit RSA, that is the security I am going to get.
That public DNSSEC infrastructure is neglected in places is not really controversial, nor what we're discussing here, but the use of a singular trust root in place of hundreds should not be taken as evidence of incompetence from the protocol designers.
Those are public web browsers and not really relevant to the matter here. They do not, as far as I know, implement DANE and if they did they would likely not trust 512-bit RSA certificates either. (Who knows? That might even have been a good way to bootstrap a good policy baseline, had they supported it.)
* DV certificate issuance is based on a leap of faith (TOFU) by the CA. It is vulnerable to BGP hijack, DNS cache poisoning, ... as amply illustrated by the "domain control" "proofs" in ACME.
* The CA ecosystem is dying, with Let's Encrypt's free certs removing the raison d'être for most of the other CAs. Actual verification of identity does not scale, and DV "domain control" "verification" does not pay.
Crypto:
* The use of SHA1 in DNSSEC is not subject to collision attacks; only 2nd pre-image resistance is required, and there's no hint of SHA1 (or even MD5!) 2nd pre-images any time soon. While moving to SHA256 makes sense, there's no real problem with SHA1 in DNSSEC; it is adequately secure.
* The domains using 512-bit keys are a tiny minority hosted by a couple of DNS hosting providers (one of them "free", so you get what you pay for).
* Most zones have 2048-bit KSKs and 1024-bit ZSKs. This can be addressed as more domains deploy ECDSA or 1280-bit and 1536-bit RSA keys (which have estimated work-factors of 89 and 96 bits respectively). The root zone keys are 2048 bits for both KSK and ZSK, and 1 in 3 TLDs has 1280-bit ZSKs; more will do that or switch to EC.
* Logjam researchers estimated the cost for a single widely used DH key (embedded as a constant in software): "The researchers calculated the cost of creating logjam precomputation for one 1024-bit prime at hundreds of millions of USD". This is somewhat specific to DH rather than RSA, and leverages the fact that many users used the same fixed DH groups. ZSKs are not fixed and vary across providers.
Finally: the talk is about improving on unauthenticated opportunistic TLS in SMTP. The right comparison is with downgrades to cleartext, not with the latest fashionable crypto in browsers. The browsers moving from 1024-bit RSA directly to 2048-bit RSA was a bit of power-of-two numerology and was overkill (better optics more than better engineering). They did it because signing and verification are still adequately fast, so they can get away with turning it up to 11. That does not mean that more modest key sizes are not effective. Since DNSSEC is used only for authentication, not key agreement, there's no risk from future key compromise once a key is retired, so keys don't need to remain secure for decades; just a few months (for a ZSK) is enough.
Generally, real-world attacks bypass the crypto and attack far weaker defenses. Why break the crypto when you can subpoena the private key material, install malware on the end-systems, break TOFU certificate issuance...
Smug attacks on DNSSEC ignore many deficiencies of the alternatives, and are, I suggest, a case of letting the perfect be the enemy of the good.