The IETF fervently requests that you enroll your domain names in their DNSSEC program. Doing so will provide your users with essentially no security improvement, even at the margin, that they wouldn't get from simply using Let's Encrypt.
At the same time, once enrolled, the weird little pockets of the Internet that actually look for DNSSEC records when they do DNS lookups will instantly become susceptible to forgetting that your domain exists entirely if any misconfiguration occurs. That's not a hypothetical concern: the week HBO launched HBO Now, their cord-cutting all-online streaming service, it vanished for Comcast users whose DNS servers at the time enforced DNSSEC.
Now, faced with the requirement to perform what needs to be a totally routine management operation for the DNSSEC PKI --- periodically rotating keys --- ICANN is postponing it for months because ISPs can't reliably handle the rollover. DNSSEC usage is growing every day, the DNSSEC advocates say. Huge portions of the Internet now resolve DNSSEC records. But if ICANN tries to rotate keys, the whole system will break.
The reality is that if the root DNSSEC private keys were leaked to Pastebin today, not a single mainstream site on the Internet would be jeopardized. To a first approximation, nobody on the Internet relies on DNSSEC. TLS was designed to assume the DNS was insecure. No browser implements DNSSEC. Your operating system --- thankfully --- probably won't disappear sites based on DNSSEC misconfigurations, or, for that matter, check DNSSEC at all.
DNSSEC is 1990s crypto. Every new DNSSEC resolver that gets deployed makes it that much harder to eventually deploy modern cryptography to solve real security problems in the DNS, whatever they may be. Enough is enough! Kill DNSSEC and go back to the drawing board.
Interestingly, Comcast had their own DNSSEC outage earlier today:
> Enough is enough! Kill DNSSEC and go back to the drawing board.
Yeah, at this point it boggles the mind how people can continue to support this foot cannon -- even the main backers can't get it right. I don't blame them for the outages, because DNSSEC is absurdly overcomplicated, to the point that it almost seems designed to fail. Part of me wonders whether elements of the consensus design process made it that way, trying to prevent secure DNS from ever coming about. So far they've succeeded.
Instead, they took a dented-up whiffle ball of a protocol that some rando at TIS Labs hucked over to them in 1995 and have spent the last 20 years trying to play Major League Baseball with it by wrapping more and more duct tape around it.
I don't think a lot of DNSSEC advocates really appreciate how clearly the lineage of this protocol traces back to a DoD contract project from the '90s. That really is what they're trying to deploy.
It's only a route to the same thing, but with weaker crypto, no Certificate Transparency, and no ability to distrust bad actors (distrusting Symantec's Verisign CA is feasible, but dropping Verisign's `.com` is not).
You cannot get the .tk operator to sign a .jp domain (unless both are controlled by the same authority, which is usually not the case).
Being one of those "DNS people" (having given a few talks etc. on the subject), I too have been telling people that I cannot privately deploy DNSSEC, simply because the cost of failure is too high.
Currently, with BIND and Knot, one can set up automatic signing of zones, which makes things a bit easier, but it relies on a master-slave relationship. My current deployment of authoritative DNS servers is simply a bunch of master nsd instances that get rsynced with their configs and reloaded (after a config test :). If one of them breaks, the others keep running happily. With master-slave, though, if the master (the sole holder of the current keys) breaks -- box dead, IP down, routing issue -- after a while the slaves no longer know what to do either...
Thus, after all your hard work, you go on vacation for a bit... and you come back to find everything gone: your domain no longer resolves properly -- no valid keys, not properly signed, not properly rotated...
The failure mode is too severe: even if you run a 24/7 NOC, they will only notice problems once the problems hit you, and they will have to monitor DNSSEC validation specifically to catch them, rather than finding out about them on Twitter.
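The bulk of that specific monitoring boils down to watching RRSIG expiration timestamps, since expired signatures are the classic way a signed zone silently goes dark. A minimal sketch of such a check (the timestamps and the three-day margin here are made up for illustration):

```python
from datetime import datetime, timezone

def rrsig_seconds_left(expiration: str, now: datetime) -> float:
    """Seconds until an RRSIG expiration field (YYYYMMDDHHMMSS in UTC,
    per RFC 4034) lapses; negative means the signatures already expired."""
    exp = datetime.strptime(expiration, "%Y%m%d%H%M%S").replace(tzinfo=timezone.utc)
    return (exp - now).total_seconds()

def needs_alert(expiration: str, now: datetime, margin_days: float = 3.0) -> bool:
    """Page someone while there is still time to re-sign, not after
    validating resolvers have already started returning SERVFAIL."""
    return rrsig_seconds_left(expiration, now) < margin_days * 86400

now = datetime(2017, 9, 28, 12, 0, 0, tzinfo=timezone.utc)
print(needs_alert("20170930120000", now))  # expires in 2 days   -> True
print(needs_alert("20171130120000", now))  # expires in ~2 months -> False
```

A real check would also pull the RRSIGs from each authoritative server over the network, but the expiry arithmetic is the part that ordinary "is it up" monitoring misses.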
From a client perspective the failure model is a whole lot worse: when DNSSEC verification fails the answer is a flat-out denial of service. At least with TLS when a cert is invalid the client gets a chance to peek at it and go 'meh, looks okay to me' (even though that is a 'badidea' for chrome users ;) )
The problem with DNSSEC, though, is that given how the system works, with delegation in mind, it is hard to come up with something 'better' than NSEC3: you want to prevent people from enumerating your whole DNS zone, but you also want to be able to delegate subdomains to other folks.
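For the curious, the mechanism NSEC3 uses is simple enough to sketch: the owner name in wire format gets salted and SHA-1 hashed, re-hashed some number of extra iterations, and the result is base32hex-encoded to form the NSEC3 owner name (RFC 5155). A rough illustration of that hash, not validated against any production signer:

```python
import base64
import hashlib

_B32STD = "ABCDEFGHIJKLMNOPQRSTUVWXYZ234567"
_B32HEX = "0123456789ABCDEFGHIJKLMNOPQRSTUV"

def nsec3_hash(name: str, salt: bytes, iterations: int) -> str:
    """NSEC3 owner-name hash per RFC 5155: SHA-1 over the wire-format
    (lowercased, length-prefixed) name plus salt, then re-hashed
    `iterations` more times, encoded in base32hex."""
    wire = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in name.lower().rstrip(".").split(".")
    ) + b"\x00"
    digest = hashlib.sha1(wire + salt).digest()
    for _ in range(iterations):
        digest = hashlib.sha1(digest + salt).digest()
    b32 = base64.b32encode(digest).decode("ascii")
    return b32.translate(str.maketrans(_B32STD, _B32HEX)).lower()
```

The trade-off is visible right in the construction: delegations can still be proven absent without listing real names, but the hashing is only a speed bump, since an attacker can run the same cheap SHA-1 loop offline against a dictionary of guesses.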
And that is also what makes the root special: the keys have to be deployed in resolvers world-wide, and everybody needs them before they can be used (similar to TLS root certs). The same goes for crypto options: everyone needs the upgrade before anyone can rely on it. For the web, that is 'solved' by aggressive browser updates, though as SSL Labs shows, browsers are not the only clients, and people primarily update Chrome and Firefox. There is a long tail there too, which is why people do not normally deploy the Mozilla Modern TLS configuration: it breaks those other, not-updated clients.
For resolvers, getting new crypto in -- let alone new keys -- is even worse: they are typically embedded in the OS (macOS's mDNSResponder, which has had bunches of problems over the years; Windows has its own; on Linux it depends on the day which one you get). Worse still, large swaths of people rarely run upgrades on those systems...
And to add more pain: there are people running Docker and other container images that never, ever get updates. Oh, and then there is this magic called Android -- congrats on 15% deployment for a one-year-old OS...
To finish my rant: unless somebody figures out an easy way to 'upgrade the world' in a relatively short time frame (~2 to 3 months), we'll always be stuck with older software and configs (keys, etc.).
And older software means broken implementations that do not rotate keys properly, do not have the latest keys, do not have the latest TLS certs, and do not have security fixes applied.
And thus also, that even if somebody replaced/fixed DNSSEC: there will always be clients that will not work along...
Just like TLS with HSTS headers or similar stuff.
That's what security is made to look like today. It is wrong -- things shouldn't simply disappear or break -- but it is by no means a DNSSEC-only problem.
Forgive my obtuseness but I still do not understand.
With the availability of high-speed encryption of individual DNS packets between stub resolver and authoritative nameservers via DNSCurve[FN1], why would I want to encrypt "connections" to caches with TLS? (Names in a shared cache will be deemed pseudo-authoritative because of DNSSEC? And third parties get the ability to censor any name?)
FN1. Some still doubt DNSCurve. Elsewhere in this thread someone posted a pointer to ianix.com. That site runs DNSCurve. For the doubters, here is what end-to-end encryption of DNS looks like:
# already have root.zone from ftp.internic.net so we already have addresses for tld's such as com
# resolution of ianix.com takes 2 queries (nonrecursive, RD bit unset)
# com servers are not running DNSCurve (yet) so first query is unencrypted
# however com.zone is public data that anyone can request from verisign
# thus there are ways to get ianix's authoritative nameservers w/o using DNS
197 bytes, 1+0+2+2 records, response, noerror
query: 1 ianix.com
authority: ianix.com 172800 NS uz5cjwzs6zndm3gtcgzt1j74d0jrjnkm15wv681w6np9t1wy8s91g3.ianix.com
authority: ianix.com 172800 NS uz5pn8hy1fy1d2nn445s2m1udbvtytp5kp65mutgn9nggq9njvfg7f.ianix.com
additional: uz5cjwzs6zndm3gtcgzt1j74d0jrjnkm15wv681w6np9t1wy8s91g3.ianix.com 172800 A 184.108.40.206
additional: uz5pn8hy1fy1d2nn445s2m1udbvtytp5kp65mutgn9nggq9njvfg7f.ianix.com 172800 A 220.127.116.11
# 2nd query is end-to-end encrypted, no third parties needed
1 ianix.com - streamlined DNSCurve:
229 bytes, 1+2+2+2 records, response, authoritative, noerror
query: 1 ianix.com
answer: ianix.com 3600 A 18.104.22.168
answer: ianix.com 3600 A 22.214.171.124
authority: ianix.com 259200 NS uz5cjwzs6zndm3gtcgzt1j74d0jrjnkm15wv681w6np9t1wy8s91g3.ianix.com
authority: ianix.com 259200 NS uz5pn8hy1fy1d2nn445s2m1udbvtytp5kp65mutgn9nggq9njvfg7f.ianix.com
additional: uz5cjwzs6zndm3gtcgzt1j74d0jrjnkm15wv681w6np9t1wy8s91g3.ianix.com 259200 A 126.96.36.199
additional: uz5pn8hy1fy1d2nn445s2m1udbvtytp5kp65mutgn9nggq9njvfg7f.ianix.com 259200 A 188.8.131.52
DNSCurve is not really deployed, so alternatives will keep springing up.
DNSCurve is also not end-to-end, until all authoritative servers support it.
Google has their DNS-over-HTTPS thing, btw, which is scary but is another alternative if you want to hide what you're doing (except from the server you ask the questions of, but you can do that over Tor ;) ).
The best alternative you currently and for a long time will have is VPN/Tor though: get a tunnel to a host/net you trust to not betray the content of your connections (be that logging or network analysis).
Passive DNS will always exist (it happens in the recursor, so DNSCurve does not help). And due to the caching and scalability properties of DNS, it will never be internally encrypted; otherwise those two properties would be gone. The moment they are gone it won't be DNS anymore -- and maybe that is a good thing, and even feasible in today's world, where bandwidth is less of an issue and most people use Google to search for things.
Heck, google could just include the IP addresses of the servers in the HTTPS response, that way, one only needs to know where Google lives, the rest will be transported over HTTPS....
The web is not the only thing on the Internet, though. And I think there is a great future ahead for .onion-like sites once their usability and accessibility improve; currently it is mostly like the BBS days: you need to know the correct number, whereas DNS is human-readable and Google is what most people use to find sites.
You do trust the origin site to send you to the correct next site right? :)
The big problem here is that you'll still need DNS in a lot of cases, as web pages have long since stopped being single-origin resources; most have to load all those tracking resources. This would also require all web pages to adopt that method, and it only works for the web -- the Internet is more than that.
I am looking forward to "DNS" pointing to more than just IPv4 and IPv6 though like in the above silly example ;)
I, for one, welcome our new DNS overlords.
> it vanished for Comcast users whose DNS servers at the time enforced DNSSEC.
I would hardly call Comcast a "weird little pocket of the Internet".
I saw the postponement late last night. My immediate but completely untested pet theory for why they're seeing so many 'broken' resolvers running such new software (RFC 8145 was published in April 2017; Unbound 1.6.4 was released June 27, 2017) is short-lived container instances that only have the old root trust anchor, reporting in before they've had a chance to obtain the new KSK -- if they even stay up long enough to get it.
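(For reference, RFC 8145 has validators report the key tags of the trust anchors they hold -- 19036 for the 2010 root KSK, 20326 for the 2017 one. The tag itself is just a checksum over the DNSKEY RDATA, per RFC 4034, Appendix B; a quick sketch, using made-up RDATA rather than a real key:)

```python
def dnskey_key_tag(rdata: bytes) -> int:
    """Key tag over DNSKEY RDATA, per RFC 4034, Appendix B: sum the
    RDATA as big-endian 16-bit words, then fold the carry back in."""
    acc = 0
    for i, byte in enumerate(rdata):
        acc += byte if i & 1 else byte << 8
    acc += (acc >> 16) & 0xFFFF
    return acc & 0xFFFF

# Synthetic 4-byte "RDATA", not a real key: 0x0101 + 0x0308 = 0x0409
print(dnskey_key_tag(bytes([0x01, 0x01, 0x03, 0x08])))  # 1033
```

So a container baked with only the old anchor will keep signaling tag 19036, and nothing distinguishes that from a long-lived resolver that genuinely failed to fetch the new key.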
From the root-dnssec-announce mailing list, it sounds like more information is due Friday:
"Duane Wessels started the research on this and has done a great job. He's presenting it at DNS-OARC in San Jose on Friday."
(Aside: this was submitted previously but got no traction: https://news.ycombinator.com/item?id=15362516
Is this the dupe-detector-avoider in action?)
The whole point is that at least CAs validate DNSSEC, so no one can get a certificate for my site except me.
Who can, the site owner? Then you missed the point of trust agility as described by moxie.
You have to trust the site owner, who has to trust his hoster, his DNS provider, his domain registry.
Transitively, you already trust all these people.
DANE with DNSSEC does not add any requirement for new trust and it does not require that you trust any involved party more than before.
It merely removes one entity from the equation.
TLS today depends entirely on DNS, because anyone who controls your DNS can obtain a certificate for your domain.
Convergence distributes this verification, so the provider would have to give the same false result to everyone, otherwise a notary will catch the discrepancy.
What? Any trust point being compromised would lead to the same result. Why would anyone use a non-binary comparison?
The provider can already request a Let's Encrypt certificate for the same domain, and then give Let's Encrypt a wrong IP.
This allows the same provider to get a fake certificate, and MitM only specific users.
I don't see any additional attack surface.