Instead, because DNSSEC makes security policy decisions at a very low layer, in a protocol that virtually all applications expect to work transparently so long as you have connectivity, you get this: instead of a browser popup, a connectivity outage so complete that it is indistinguishable to users from "the site you are trying to get to doesn't exist and never has".
DNSSEC is a terrible, terrible idea. Comcast should stop validating it, immediately. If someone from Comcast's network engineering team is reading this: I was a network engineer, and then worked for 4 years with tier-1 engineering getting Netflow monitoring deployed and scalable. I've been writing DNS security software since I was 19 years old, in the 1990s. You have reasons for whatever configuration you've decided on, but I will take as much time out of my day as is productive for you to convince you to stop working on rolling this out.
What happened yesterday with HBO Now, which had half of Twitter screaming "NET NEUTRALITY OHNOZ!", is going to keep happening over and over again.
See also: https://news.ycombinator.com/item?id=8894902
DNS has two inherent issues:
- Hijacking traffic via spoofing responses.
- Private information leakage.
With DNSSEC combined with HTTPS it is very reasonable to say both that you're connecting to the party you expect to connect to and that no third party listening on the line knows what host name or page you're visiting (only the IP address of the box).
A lot of people dismiss these limitations or just point wildly at certificate pinning to solve all of our problems (while ignoring that HTTP isn't the only type of traffic the internet was designed for).
Plus with NSA mass surveillance, DNS makes seeing what domains you're visiting and building a "picture" of you as an individual absolutely trivial. The IP addresses still may help them do that to some extent, but it certainly becomes very easy if they can see your DNS packets.
No, because clients don't validate DNSSEC responses. You'd need to be running a full validating resolver on your machine, which OSes usually don't ship.
Also, with SNI (Server Name Indication) they'll know the hostname anyway.
These problems don't call for a general solution. We should respect the fundamental truth behind the end-to-end principle and solve these problems as close to the application as possible. The likely threats, appropriate default choices and trust model are all best understood by each application developer and building these solutions into Layer-3/4 is bound to cause short-term pain and reduce our long-term flexibility to meet new risks.
For example, EFF's STARTTLS Everywhere project takes into account the fact that a huge percentage of the world's mail moves between a small enough number of providers that human verification of announcements is possible. It also recognizes that the MX configuration for these providers is reasonably static, meaning that changes that propagate in minutes do not need to be accommodated. Small specific solutions like this can be rolled out and provide a real, tangible benefit to users much faster than we can upgrade the entire DNS system to provide a more general solution.
I agree that we need a DNS privacy solution, one that hopefully doesn't eliminate the existence of caching infrastructure.
Please do take a look at what the folks are doing within the DPRIVE working group of the IETF:
They are working on mechanisms to bring privacy / confidentiality to the "last mile" of DNS connections. Any input you have would be useful. (There's a link there to a mailing list to which you can subscribe.)
From tptacek's FAQ ( http://sockpuppet.org/stuff/dnssec-qa.html ):
> What’s the alternative to DNSSEC?
> Do nothing. The DNS does not urgently need to be secured.
> All effective security on the Internet assumes that DNS lookups are unsafe. If this bothers people from a design perspective, they should consider all the other protocol interactions in TCP/IP that aren’t secure: BGP4 advertisements, IP source addresses, ARP lookups. Clearly there is some point in the TCP/IP stack where we must draw a line and say “security and privacy are built above this layer”. The argument against DNSSEC simply says the line should be drawn somewhere higher than the DNS.
I want a DNS we can trust.
I see DNSSEC as one of the many layers in any defense-in-depth security plan.
Yes, I acknowledge that there are some challenges with DNSSEC deployment... but those are what a good number of us are working on fixing.
I don't see "Do nothing" as a viable alternative.
You keep walking face-first into this rhetorical brick wall. If you're going to say "doing nothing isn't viable", you need to have a ready response to the extremely obvious observation that the Internet seems to function pretty OK without DNS security. It's not 1994 anymore. You can't wave your hands and suggest that the Internet is going to get more serious in 10 years and need better security. It already needs the best security it can possibly get. It's just that DNSSEC isn't part of that mix.
> It already needs the best security it can possibly get.
It's just that I believe that DNSSEC should be part of the mix.
For instance, here's an example from research out of CERT/CC back in September 2014 where hijacking of MX records redirected email through someone else's servers:
As far as I can see, deploying DNSSEC validation on the networks of the affected mail servers - and receiving DNSSEC-signed MX records - would prevent them from delivering mail to servers in the middle.
It's things like this that I want to prevent.
I want a more secure Internet - and in my view DNSSEC helps.
Once again: the 2015 Internet functions with no DNS security whatsoever. BGP announcements, themselves unencrypted, aren't protected with DNSSEC. Browsers don't use DNSSEC in any way and are in fact blind to it. Email will remain insecure with or without it. Credit card transactions are protected at a layer higher than DNS, one designed to assume that the DNS would always be insecure.
Why, specifically, is doing nothing to secure the DNS "not viable"?
Hmmm... you said "you need to have a ready response to the extremely obvious observation that the Internet seems to function pretty OK without DNS security".
I gave you one example. Here are some more:
- There are now over 800 XMPP servers with DNSSEC-signed SRV records that can be used to ensure they are talking to the correct servers. https://xmpp.net/reports.php#dnssecsrv
- On a related note, there are over 300 XMPP servers using DANE to provide a higher level of trust to TLS certs: https://xmpp.net/reports.php#dnssecdane
- There are now over 1,000 email servers using TLSA records (DANE) to provide a higher level of security to the TLS connections between email servers. (Viktor Dukhovni of exim)
These are very real cases where adding DNSSEC is, to me, increasing the security of DNS.
Because I'm around examples like these, I see value in securing the DNS. So to me, "doing nothing" is not an option.
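The DANE check these mail and XMPP servers perform can be sketched roughly as follows. This is a minimal illustration of the common "3 1 1" record type from RFC 6698, assuming you have already extracted the DER-encoded SubjectPublicKeyInfo from the peer's certificate (real code would use a TLS/X.509 library for that); the byte strings below are stand-ins, not real key material.

```python
import hashlib

def tlsa_matches(spki_der: bytes, usage: int, selector: int,
                 matching_type: int, record_data: bytes) -> bool:
    """Check a presented key against a TLSA record (RFC 6698).

    Only handles the common "3 1 1" case: DANE-EE (usage 3), the
    SubjectPublicKeyInfo (selector 1), matched by SHA-256 (type 1).
    """
    if (usage, selector, matching_type) != (3, 1, 1):
        raise NotImplementedError("only DANE-EE / SPKI / SHA-256 shown here")
    return hashlib.sha256(spki_der).digest() == record_data

# Illustrative only: stand-in bytes, not a real key.
spki = b"example-subject-public-key-info"
record = hashlib.sha256(spki).digest()
print(tlsa_matches(spki, 3, 1, 1, record))  # True
```

The record data itself would of course come from a DNSSEC-validated TLSA lookup; without validation, the check adds nothing.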
And one of the ways you suggest it could help is to allow clients to override TLS certificates and instead trust the government-controlled DNS.
That is an amazing concession.
Another example is the email service providers in Germany, many of which see DNSSEC / DANE as a way to provide a more secure email environment.
The list can go on...
> And one of the ways you suggest it could help is to allow clients to override TLS certificates and instead trust the government-controlled DNS.
:-) Where did I say override, Thomas? You can, if you wish, use a different trust anchor than the current (broken) CA system, but the beauty of DANE is that it gives you a way to add another layer of trust to existing systems. So you can use a CA-issued TLS cert and put a fingerprint in a TLSA record as an added check during an SMTP transaction. You could also check certs with CT... and if it were HTTP you could do pinning as well.
As many layers as you want!
This is true, but only because BGP announcements don't involve DNS, and so all the DNS security in the world won't help. Agreed that there is a lot of scope for doing better on BGP security, though - and indeed DNS security.
Yes... and there we go down the path toward BGPSEC, RPKI and other tools that people are developing to help secure the routing infrastructure.
(Yes, I know DNSSEC doesn't try to address query privacy, so that problem isn't an argument for DNSSEC itself.)
No it does not work. I have been the victim of DNS poisoning with a flood of requests that almost took my server down.
If DNS poisoning is so easy then DNS is not working correctly. If you want to get rid of DNSSEC then you need to say how you would fix DNS instead.
That would not help my server - the DNS poisoning is acting like a DDOS. Sure the random victims know they are on the wrong page, but it doesn't help my server for them to know that.
FYI, there is a working group within the IETF working on the issue of providing confidentiality to DNS transactions. It's called DPRIVE and more info can be found here:
People compare it to SSL but it's not like SSL. It does NO encryption of the communications. It's only validation (arguably the most broken part of SSL).
A better analog to SSL would be DNSCrypt, which we've been using now for years, is slowly gaining traction and adoption by others, and provides real protection against DNS messages being intercepted or manipulated.
In short: with total objectivity, what are all of the possible, theoretical advantages, for the people using order.hbonow.com, of HBO using DNSSEC + HTTPS + (maybe) HSTS rather than ONLY HTTPS + HSTS?
I'm posting this (I think) leading question so that it might be possible to cut through VOLUMES of related discussion, and see a relatively simple, straightforward answer.
PS: I have read through your Against DNSSEC post, and the large amounts of related discussion, but I'm still seeking a technical yet succinct response that I can easily 'carry around' with me.
Toward the end of the article you'll note I point out that this is a great example of a hole in the operational process that has been identified as more and more people have deployed DNSSEC:
The process of updating (or removing) the DS record NEEDS to be automated. There is a group within the industry working on ideas around this now and I think we'll see that work happen inside the DNSOP Working Group within the IETF.
There is a public mailing list open to anyone interested to join.
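For context on why this lends itself to automation: the DS record a parent zone publishes is essentially just a digest over the child's DNSKEY, plus a 16-bit key tag. A minimal sketch of both computations, following RFC 4034 (digest type 2, SHA-256); the owner name and key bytes here are stand-ins for illustration:

```python
import hashlib

def key_tag(dnskey_rdata: bytes) -> int:
    """RFC 4034 Appendix B key tag: a 16-bit checksum-style sum
    over the DNSKEY RDATA, used to identify which key a DS refers to."""
    acc = 0
    for i, b in enumerate(dnskey_rdata):
        acc += (b << 8) if i % 2 == 0 else b
    return (acc + ((acc >> 16) & 0xFFFF)) & 0xFFFF

def ds_digest(owner_wire: bytes, dnskey_rdata: bytes) -> bytes:
    """RFC 4034 section 5.1.4: digest = SHA-256(owner name | DNSKEY RDATA)
    for digest type 2."""
    return hashlib.sha256(owner_wire + dnskey_rdata).digest()

# Illustrative: "example.com." in DNS wire format, stand-in key RDATA.
owner = b"\x07example\x03com\x00"
rdata = bytes([0x01, 0x01, 3, 8]) + b"fake-public-key-bytes"
print(key_tag(rdata), ds_digest(owner, rdata).hex())
```

Since both values are deterministic functions of data the child already publishes, a registrar or parent operator can in principle derive and refresh the DS record without any manual step, which is exactly the gap the automation work aims to close.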
Thomas, I agree that having the DNS resolver only return a regular SERVFAIL without a hint of WHY there was the failure is a challenge. I wasn't around the DNS part of the IETF when this design decision was made. It seems to me that a separate error message would have been preferable... but I don't know the discussions that were had at that time.
There have been several suggestions about ways to include additional diagnostic information with the SERVFAIL so that browsers and applications could take better action. One such proposal was from Evan Hunt at ISC (makers of BIND):
The draft expired back in 2014 but he indicated recently on the DNSOP mailing list that he would be open to reviving that draft if people thought it would be useful.
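To make the idea concrete: structured failure information could ride in an EDNS option alongside the SERVFAIL. The wire layout below is purely hypothetical (a 16-bit code followed by optional UTF-8 text), sketched for illustration and not taken from Evan Hunt's actual draft:

```python
import struct

def parse_error_option(option_data: bytes) -> tuple:
    """Parse a hypothetical EDNS option body carrying a 16-bit
    failure code followed by optional human-readable text.
    (Illustrative layout only, not any specific draft's format.)"""
    (code,) = struct.unpack("!H", option_data[:2])
    return code, option_data[2:].decode("utf-8")

# A browser or stub could then distinguish "validation failed"
# from other SERVFAIL causes and show a meaningful error:
code, text = parse_error_option(struct.pack("!H", 6) + b"DNSSEC bogus")
print(code, text)  # 6 DNSSEC bogus
```

With something like this, an application could tell "the resolver couldn't reach the authoritative servers" apart from "the answer failed validation" and react accordingly, rather than treating every SERVFAIL as a connectivity outage.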
But dropping DNSSEC leaves the problem of securing DNS unhandled. What would you propose in its place to address securing DNS?
Some protocols are well-situated to make security policy decisions, and some aren't. This is an obvious engineering point: however you secure BGP, you're going to have TCP delivering records. Should we then secure TCP, so that we can ensure availability and prevent attackers from exploiting it to introduce byzantine routing failures? And, if we create TCPSEC, some form of unencrypted IP traffic will be used to deliver those segments to end stations. Should we then secure IP? And then ARP? And then Ethernet headers?
Secure BGP makes a lot of sense. BGP is already more of a policy expression framework than a routing protocol (the routing algorithms used by BGP are basic, and virtually all of the complexity comes from two decades worth of hacks designed to express policy). Virtually every router running defaultless BGP is managed by a team of people engaged intimately with BGP security policy.
DNS, however, runs on every single Internet user's computer, often in different places, and at a layer that isn't fully exposed by APIs, because those APIs were designed with the assumption that DNS fails only due to connectivity failures. DNS is a terrible place to enforce policy.
That will, for obvious reasons, never be the case with BGP. Because it is far too low a layer. Hence my question.
Step 2: Return to drawing board
Step 3: Wait for better solution to emerge
Step 4: Repeat until value of new DNS security system exceeds cost of deployment
Step 5: Profit
I expect never to reach Step 5, but who knows?
> Step 3: Wait for better solution to emerge
I, and I suspect many others, eagerly await the better solution to the DNS security issues we see. If you have one (beyond "do nothing"), please explain it.
Lacking that, we focus on deploying DNSSEC because that is what we have AVAILABLE today.
DNSSEC is a terrible, terrible implementation. All of your criticisms are valid but none of them are inherent to the idea of authenticating DNS resolution.
That basically means we need to start over from scratch, which is unfortunate, but now tell me if we did we couldn't address your complaints:
1) It isn't necessary. Sure, we can work around the lack of DNS authentication, but then we end up with the horrible CA system. Can we really not do any better than that?
2) It's centralized and government-controlled. So don't sign the root. Instead of hard-coding the root key in the resolvers, hard-code the keys for each TLD. Being able to pick which government can forge your signatures is the best you're going to be able to do; the authority doing the signing is going to exist in somebody's jurisdiction.
3) DNSSEC's cryptography is weak. So use different cryptography. Elliptic curve keys and signatures are an order of magnitude smaller than RSA anyway, which also reduces the DNS DoS amplification potential by that amount.
4) Resolver APIs don't provide good information about why resolution failed. So provide a new API that provides better errors. Put it right in the new RFC. Then you only get validation if you use the new API and nothing changes for existing applications.
5) Deployment is expensive. Why isn't deployment automatic? The default configuration for a DNS server should be to generate a signing key for each domain on first run and then automatically sign all the records with it. If you're paranoid and you want to keep your signing keys offline then you can configure that manually, but nobody has to. And the higher level domain should be able to get the signing key from the lower level domain as soon as you add the NS record, and then confirm with the administrator that it's the right key the same way as with ssh host keys.
6) DNSSEC isn't validated by the endpoints. As far as I can tell there is no actual reason for this even with existing DNSSEC. The client can ask its DNS cache for each of the signing keys up to the root and then check the signatures. A new pseudo-RR that would return the entire chain would make this more convenient, and couldn't really be used for DoS because only recursive resolvers and not authoritative servers would answer that query. Validating clients without validating caches could still fall back to asking for each individual record.
7) Authoritative denials leak information. NSEC5 is supposed to fix this, but there is a much easier way: Sign a denial key which can itself only be used to sign denials and keep it on the authoritative server. The idea that someone who compromises your authoritative servers will be able to deny service to your DNS clients is already sort of implied.
The problem is, even though you could theoretically do all of those things to DNSSEC itself, those aren't even the only problems, and trying to patch all the warts in something nobody is even really using is only going to make something which is already unnecessarily complicated even worse. What is needed very much is a clean slate.
But that doesn't mean it isn't worth doing.
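The chain walk in point 6 is simple to express: the client only needs to know which zone cuts to fetch DNSKEY/DS records for on the way down from the root. A simplified sketch (it assumes every label is a zone cut, which real code would discover from DS records instead):

```python
def zones_to_validate(name: str) -> list:
    """Return the chain of zones from the root down to the queried
    name, i.e. the points where a validating client would fetch
    DNSKEY/DS records from its cache. Simplified: treats every
    label as a zone cut."""
    labels = name.rstrip(".").split(".")
    chain = ["."]
    for i in range(len(labels) - 1, -1, -1):
        chain.append(".".join(labels[i:]) + ".")
    return chain

print(zones_to_validate("www.example.com"))
# ['.', 'com.', 'example.com.', 'www.example.com.']
```

Each step is an ordinary query the cache can answer, so endpoint validation doesn't require any new server-side machinery, only client code willing to do the walk (or a chain-returning pseudo-RR to batch it).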
Moreover, how can it possibly be sane for us to deploy a security system that protects end-users from NSA only if Google is willing to move Gmail off of COM?
If NSA subverts a CA today and uses it to MITM Gmail, a substantial fraction of all browsers on the Internet will detect that and alert Google, because of key pinning. When that happens, Google will nuke that CA from orbit. If NSA is dumb enough to subvert a CA that's hard to nuke, Google will start a process of employing code-level restrictions on that CA that will for a substantial portion of all Internet users make that CA asymptotically approach "useless" for NSA's purposes.
If NSA does a QUANTUM INSERT-type attack to selectively poison .COM lookups in order to use TLSA to get a target to eat a fake certificate, what does Google do? Nuke COM from orbit?
DNSSEC is a terrible, terrible idea.
But I'm trying to understand your objection here. DNSSEC/DANE replaces domain validated certificates. I understand your objection to be that we don't want the registrar to be in the chain of trust; but they already are. If you can forge the target's DNS records from the registrar's servers then you can get a domain validated certificate from any CA. The ability to control the DNS records of the domain is the thing they're verifying. The difference with DANE isn't that the registrar is in the chain of trust, it's that the CA isn't. It causes you to have to trust strictly fewer third parties. There is no less vulnerability to or recourse against the registrar than there is now.
To do better than that you need to do something more than domain validation. But how does replacing domain validated certificates with DANE prevent any such additional checks from being done?
> DNSSEC/DANE replaces domain validated certificates.
DNSSEC/DANE can be used to replace CA-issued certs, but it can also be used to add an extra layer of validation to existing CA-issued certs. To me this is actually the strongest use-case for DANE, as it provides a means to use DNSSEC to ensure that you are using the correct TLS certificate.
More info is here:
The four modes are:
0 – CA specification – The TLSA record specifies the Certificate Authority (CA) who will provide TLS certificates for the domain. Essentially, you are able to say that your domain will ONLY use TLS certificates from a specific CA. If the browser or other application using DANE validation sees a TLS cert from another CA the app should reject that TLS cert as bogus.
1 – Specific TLS certificate – The TLSA record specifies the exact TLS certificate that should be used for the domain. Note that this TLS certificate must be one that is issued by a valid CA.
2 – Trust anchor assertion – The TLSA record specifies the “trust anchor” to be used for validating the TLS certificates for the domain. For example, if a company operated its own CA that was not in the list of CAs typically installed in client applications this usage of DANE could supply the certificate (or fingerprint) for their CA.
3 – Domain-issued certificate – The TLSA record specifies the exact TLS certificate that should be used for the domain, BUT, in contrast to usage #1, the TLS certificate does not need to be signed by a valid CA. This allows for the use of self-signed certificates.
Modes 0 and 1 work with current CA-issued certs and assume that normal PKIX X.509 validation is occurring.
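As a concrete illustration of what publishing one of these looks like: a TLSA record lives at a port- and protocol-prefixed owner name. A small helper that renders the zone-file presentation form per RFC 6698 (the certificate bytes below are a stand-in, and only SHA-256 matching is shown):

```python
import hashlib

def tlsa_record(host: str, port: int, proto: str,
                usage: int, selector: int, mtype: int,
                cert_or_spki_der: bytes) -> str:
    """Render a TLSA record in zone-file presentation format
    (RFC 6698). Matching type 1 = SHA-256 of the selected data
    (selector 0 = full certificate, 1 = SubjectPublicKeyInfo)."""
    assert mtype == 1, "only SHA-256 matching shown here"
    digest = hashlib.sha256(cert_or_spki_der).hexdigest()
    return (f"_{port}._{proto}.{host}. IN TLSA "
            f"{usage} {selector} {mtype} {digest}")

# Usage 1: pin the exact CA-issued certificate (stand-in DER bytes).
print(tlsa_record("www.example.com", 443, "tcp", 1, 0, 1, b"fake-cert-der"))
```

So a usage-1 record for HTTPS ends up at `_443._tcp.www.example.com.`, and clients doing DANE checks compare the server's presented cert against that digest in addition to normal PKIX validation.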
People involved in DNS standardization clearly believe this isn't the case, and that there's a spectrum of different ways DNSSEC will interact with the CA system. They also believed in Interdomain IP Multicast and SNMPv3. The track record of DNS standards people on browser technology is not good. In this case: I suggest taking AGL's word for it.
> It can not be used this way.
Actually, it can be. There's a modified version of Firefox maintained by the team at the DNSSEC-Tools project called "Bloodhound" that does DNSSEC validation of every link and does DANE checks on TLS certs:
> If there are 4,392 trusted CAs today, DNSSEC will make it 4,393.
Hmmm... I guess I see that only if you were using modes 2 and 3 of DANE. If you are using 0 and 1 you are just using DANE as an additional check for the CA-issued cert.
The value to me is that I am in control of the TLSA record in that I am publishing that in my own zone file on my own DNS servers. I can specify there precisely which TLS cert I want to use or which CA I want to be trusted for my domain.
My choice is then cryptographically signed via DNSSEC and bound into the global chain of trust via DS records going back up to the root of DNS.
The "versus being redirected to bogus sites for phishing or malware" part here is funny. Because that generally happens when some scammer registers hb0now.com. Not when someone is intercepting your DNS.
And actually, in case of the latter, all bets are off anyways. Because client resolvers DO NOT CHECK DNSSEC.
(Yes, I know you are on your custom configured Linux box or OpenBSD firewall and that DNSSEC works wonderful for you. But the majority of the internet using world with OS X or Windows behind a $15 DSL router or WiFi AP does not.)
Stop. The. DNSSEC. Nonsense.
> And actually, in case of the latter, all bets are off anyways. Because client resolvers DO NOT CHECK DNSSEC.
... please visit APNIC's DNSSEC Statistics site where you will see that about 12% of all DNS queries globally ARE being validated by DNSSEC:
In Sweden this is about 71%:
Slovenia 67%, Estonia 55%, Denmark 48% ... on down to the USA at 23%:
So DNSSEC validation very definitely *IS* happening out there!
It's all incremental ways of reducing the attack surface. Get DNSSEC validation happening at large public DNS services... then at ISPs... then on network edge devices... then into operating systems (in stub resolvers)... then perhaps into applications themselves.
Each step reduces the attack surface a bit more. I wrote about this at:
This is why some people are using libraries like the GetDNS API to build DNSSEC validation directly into apps:
And solving the "last mile" problem is exactly why the DPRIVE working group was chartered within IETF:
(And anyone is welcome to contribute to the group.)
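Short of a full validating stub, an application can at least ask whether its upstream resolver validated: set the DO bit on the query and look for the AD ("authentic data") bit in the reply. A minimal header-level sketch using only the standard DNS header layout from RFC 1035/4035 (note the AD bit is only trustworthy if the path to the resolver is itself trusted, which is the last-mile problem DPRIVE is chartered around):

```python
import struct

def build_header(qid: int, rd: bool = True) -> bytes:
    """DNS header with RD set and one additional record slot; a real
    query would append the question section and an EDNS0 OPT record
    carrying the DO ("DNSSEC OK") bit."""
    flags = 0x0100 if rd else 0
    return struct.pack("!HHHHHH", qid, flags, 1, 0, 0, 1)

def ad_bit_set(response: bytes) -> bool:
    """True if the resolver set AD (0x0020 in the flags word),
    i.e. it performed DNSSEC validation successfully."""
    (flags,) = struct.unpack("!H", response[2:4])
    return bool(flags & 0x0020)

# Fake reply header with QR and AD set, for illustration only:
fake = struct.pack("!HHHHHH", 0x1234, 0x8020, 1, 1, 0, 0)
print(ad_bit_set(fake))  # True
```

That's the cheap intermediate step: the app doesn't validate signatures itself, but it can refuse to trust answers its (validating) resolver couldn't authenticate.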
I've updated the post to more clearly note the fact that this was not just an issue on Comcast networks. (I mentioned that it would be an issue on any network performing DNSSEC validation, but then only gave Comcast as an example.)
Google Public DNS has been performing full DNSSEC validation since May 2013: http://www.internetsociety.org/deploy360/blog/2013/05/confir...
However, when I do "dig +dnssec @220.127.116.11 ds hbonow.com" I don't get any DS records back, indicating that the issue should be fixed on Google's PDNS.
Can you tell what DNS servers you are using on your Android device? It may be that your ISP is doing DNSSEC validation and experiencing a similar TTL issue.
And you're right - HBO Now is only on Apple devices right now... so getting to the website from Android wouldn't matter.