DNSSEC KSK Rollover Postponed (icann.org)
47 points by tptacek on Sept 29, 2017 | 42 comments



Let's just be clear:

The IETF fervently requests that you enroll your domain names in their DNSSEC program. Doing so will provide your users with essentially no security improvements, even at the margin, that they wouldn't get from simply using LetsEncrypt.

At the same time, once enrolled, the weird little pockets of the Internet that actually look for DNSSEC records when they do DNS lookups will instantly become susceptible to forgetting that your domain exists entirely if any misconfiguration occurs. That's not a hypothetical concern: the week HBO launched HBO Now, their cord-cutting all-online streaming service, it vanished for Comcast users whose DNS servers at the time enforced DNSSEC.

Now, faced with the requirement to perform what needs to be a totally routine management operation for the DNSSEC PKI --- periodically rotating keys --- ICANN is postponing it for months because ISPs can't reliably handle the rollover. DNSSEC usage is growing every day, the DNSSEC advocates say. Huge portions of the Internet now resolve DNSSEC records. But if ICANN tries to rotate keys, the whole system will break.

The reality is that if the root DNSSEC private keys were leaked to Pastebin today, not a single mainstream site on the Internet would be jeopardized. To a first approximation, nobody on the Internet relies on DNSSEC. TLS was designed to assume the DNS was insecure. No browser implements DNSSEC. Your operating system --- thankfully --- probably won't disappear sites based on DNSSEC misconfigurations, or, for that matter, check DNSSEC at all.

DNSSEC is 1990s crypto. Every new DNSSEC resolver that gets deployed makes it that much harder to eventually deploy modern cryptography to solve real security problems in the DNS, whatever they may be. Enough is enough! Kill DNSSEC and go back to the drawing board.


> it vanished for Comcast users whose DNS servers at the time enforced DNSSEC

Interestingly, Comcast had their own DNSSEC outage earlier today:

https://ianix.com/pub/dnssec-outages/20170928-comcast.com/

> Enough is enough! Kill DNSSEC and go back to the drawing board.

Yeah, at this point it boggles the mind how people can continue to support this foot cannon -- even the main backers can't get it right. I don't blame them for the outages, because DNSSEC is absurdly over-complicated, to the point it almost seems designed to fail. Part of me wonders if elements of the consensus design process made it that way, trying to prevent secure DNS from coming about. So far they've succeeded.


It seems patently obvious to me that if they had the luxury of a clean slate design (which they do, but have denied themselves in as clear an instance of what Sartre would call "bad faith" as I've seen since I was forced to read Sartre in high school), the IETF would come up with a very different system than DNSSEC. They'd eliminate the "offline signer" requirement and come up with a massively simpler system.

Instead, they took a dented-up whiffle ball of a protocol that some rando at TIS Labs hucked over to them in 1995 and have spent the last 20 years trying to play Major League Baseball with it by wrapping more and more duct tape around it.

I don't think a lot of DNSSEC advocates really appreciate how clear the lineage of this protocol is back to a DoD contract project from the 90's. That really is what they're trying to deploy.


Forgive an uninformed comment, but isn't DNSSEC the only viable route away from globally trusted CAs? And the CA model is just fundamentally wrong in many ways (still better than nothing or trust on first use though).


DNSSEC has a CA-like trust model.

It's only a route to the same thing, but with weaker crypto, no Certificate Transparency, and no ability to distrust bad actors (distrusting Symantec's Verisign CA is feasible, but dropping Verisign's `.com` is not).


>DNSSEC has a CA-like trust model.

Not really.

You cannot get the .tk operator to sign a .jp domain (unless they're both controlled by the same authority, which usually isn't the case).


Who cares? The majority of commercially important domain names live under parts of the DNS hierarchy that are de jure controlled by the Five Eyes governments.


The TLS CA system has some functionality specified for non-global trust. It's just that nobody implements it.


Well said.

Being one of those "DNS people" (having given a few talks etc. on the subject), I have also been telling people that I cannot privately deploy DNSSEC, simply because the cost of failure is too high.

Currently, with BIND and Knot, one can set up automatic signing of zones, which makes things a bit easier, but it relies on setting up a master-slave relationship. My current deployment of authoritative DNS servers is simply a bunch of master NSDs that get rsynced with their configs and reloaded (after a config test :). If one of them breaks, the others keep on running happily. In the master-slave case, though, if the master (the sole thing with the current keys) breaks (box dead, IP down, routing issue), after a while the slaves no longer know what to do either....

Thus, after all your hard work, you go on vacation for a bit... you come back and everything is gone, as your domain no longer resolves properly: no valid keys are there, nothing properly signed, nothing properly rotated...

The cost of failure is too big; even if you run a 24/7 NOC, they will only notice problems when they hit you, and they will have to monitor DNSSEC validation specifically to catch them, rather than get notified about it through Twitter.
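
A minimal sketch of what that kind of monitoring could look like, assuming Python with dnspython 2.x and Google's 8.8.8.8 as the validating resolver to test against (both my choices); the zone name is just a placeholder:

  import dns.flags
  import dns.message
  import dns.query
  import dns.rcode

  ZONE = "example.org"             # placeholder: your own signed zone
  VALIDATING_RESOLVER = "8.8.8.8"  # any validating resolver you trust to test against

  def dnssec_looks_healthy(zone, resolver):
      # Ask for the zone's SOA with the DO bit set so the resolver validates.
      query = dns.message.make_query(zone, "SOA", want_dnssec=True)
      response = dns.query.udp(query, resolver, timeout=5)
      if response.rcode() == dns.rcode.SERVFAIL:
          # A validating resolver answers SERVFAIL when the chain is broken.
          return False
      # AD ("authenticated data") means the resolver validated the answer.
      return bool(response.flags & dns.flags.AD)

  print("OK" if dnssec_looks_healthy(ZONE, VALIDATING_RESOLVER) else "ALERT")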

From a client perspective the failure mode is a whole lot worse: when DNSSEC validation fails, the answer is a flat-out denial of service. At least with TLS, when a cert is invalid the client gets a chance to peek at it and go 'meh, looks okay to me' (even though that is a 'badidea' for Chrome users ;) )
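
The closest DNS equivalent of that "peek at it anyway" button is the CD (checking disabled) bit, which nothing user-facing actually exposes. A sketch, again assuming Python with dnspython and a validating resolver of my choosing; dnssec-failed.org is, as far as I know, a deliberately mis-signed test zone run for exactly this purpose:

  import dns.flags
  import dns.message
  import dns.query
  import dns.rcode

  BROKEN_ZONE = "dnssec-failed.org"  # deliberately mis-signed test zone (my example)
  RESOLVER = "8.8.8.8"               # a validating resolver that honours the CD bit

  def lookup(zone, checking_disabled):
      query = dns.message.make_query(zone, "A", want_dnssec=True)
      if checking_disabled:
          # CD asks the resolver to hand back the data even if validation fails:
          # the DNS analogue of clicking through a TLS warning.
          query.flags |= dns.flags.CD
      response = dns.query.udp(query, RESOLVER, timeout=5)
      return dns.rcode.to_text(response.rcode())

  print("validated:", lookup(BROKEN_ZONE, False))  # typically SERVFAIL
  print("with CD:  ", lookup(BROKEN_ZONE, True))   # typically NOERROR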

The problem with DNSSEC, though, is that given how the system works, with delegation in mind, it is hard to come up with something 'better' than NSEC3: you want to keep people from listing your whole DNS zone, but you also want to be able to delegate subdomains to other folks.
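
To make the NSEC3 trade-off concrete: the hash is just iterated SHA-1 over the wire-format owner name plus a salt (RFC 5155), which is why it only slows zone enumeration down rather than preventing it. A rough sketch in Python, with the salt and iteration count invented for illustration:

  import base64
  import hashlib

  # Base32 "extended hex" alphabet used for NSEC3 owner names.
  _B32_TO_B32HEX = str.maketrans(
      "ABCDEFGHIJKLMNOPQRSTUVWXYZ234567", "0123456789ABCDEFGHIJKLMNOPQRSTUV"
  )

  def nsec3_hash(name, salt_hex, iterations):
      # RFC 5155: iterated SHA-1 over the wire-format owner name plus salt.
      salt = bytes.fromhex(salt_hex)
      wire = b"".join(
          bytes([len(label)]) + label.lower().encode("ascii")
          for label in name.rstrip(".").split(".")
      ) + b"\x00"
      digest = hashlib.sha1(wire + salt).digest()
      for _ in range(iterations):
          digest = hashlib.sha1(digest + salt).digest()
      return base64.b32encode(digest).decode("ascii").translate(_B32_TO_B32HEX).lower()

  # Invented parameters: 10 extra iterations, salt aabbccdd.
  print(nsec3_hash("www.example.com", "aabbccdd", 10))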

And that is also what makes the root special: the keys have to be deployed in resolvers world-wide, and everybody needs them before they can be used (similar to TLS root certs). The same goes for crypto options: everybody needs to support them before you can upgrade. For the Web, that is 'solved' by aggressive browser updates, though as can be seen from SSL Labs, browsers are not the only clients, and people primarily update their Chrome and Firefox, so there is a long tail there too; hence why people do not normally configure the Mozilla Modern TLS configuration, as it breaks those other, not-updated clients.

For resolvers, getting new crypto in there, let alone keys, is even worse: they are typically embedded in the OS (OS X's mDNSResponder, which has had bunches of problems over the years; Windows has its own; on Linux it depends on the day which one you get); and worse still: large swaths of people rarely run upgrades on those systems...

Plus, to add more pain to this: there are people running Docker and other container images that never, ever get updates. Oh, and then there is this magic called Android; congrats on 15% deployment for a 1-year-old OS....

To finish off my rant: unless somebody figures out an easy way to 'upgrade the world' in a relatively short time frame (~2 to 3 months), we'll always be stuck with older software/configs (keys, etc.).

And older software means: broken implementations that do not rotate keys properly, that do not have the latest keys, that do not have the latest TLS certs, that do not have security properties fixed.

And thus, even if somebody replaced or fixed DNSSEC, there will always be clients that will not play along...


Check the CAA specification (https://tools.ietf.org/html/draft-ietf-acme-caa-02): DNSSEC gives you additional security against BGP hijacking if you use CAA records with DNS as the only allowed ACME validation method, or if you limit the ACME account URIs that can renew the certificate.
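
For anyone who wants to look at what a domain actually publishes, CAA records are easy to inspect. A small sketch assuming Python with dnspython 2.x; google.com is just an arbitrary domain that, to my knowledge, publishes CAA:

  import dns.resolver

  DOMAIN = "google.com"  # arbitrary example; most large sites publish CAA these days

  # Each CAA record carries a flags byte, a tag (issue / issuewild / iodef)
  # and a value naming the CA allowed to issue for the domain. Per the draft,
  # a hypothetical record like
  #   example.com. CAA 0 issue "letsencrypt.org; validationmethods=dns-01"
  # would additionally pin issuance to the DNS-01 method only.
  for rdata in dns.resolver.resolve(DOMAIN, "CAA"):
      print(rdata.to_text())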


> DNS lookups will instantly become susceptible to forgetting that your domain exists entirely if any misconfiguration occurs

Just like TLS with HSTS headers or similar stuff.

That's what security is made to look like today. It is wrong. Things shouldn't simply disappear or get broken, but it's by no means a DNSSEC-only problem.


No, that's not what happens with HSTS. TLS misconfigurations come with dialogs explaining that the site exists but is unreachable due to misconfiguration. DNSSEC misconfigurations make sites vanish from the Internet as if they never existed at all.


Besides DNSSEC, there are also those who are still pushing DNS over SSL/TLS for www domain names.

Forgive my obtuseness but I still do not understand.

With the availability of high-speed encryption for individual DNS packets between stub resolvers and authoritative nameservers via DNSCurve[FN1], why would I want to encrypt "connections" to caches with TLS? (Names in a shared cache will be deemed pseudo-authoritative because of DNSSEC? And third parties get the ability to censor any name?)

https://dnsprivacy.org/wiki/x/E4AT

https://dnsprivacy.org/wiki/download/attachments/1277971/dns...

FN1. Some still doubt DNSCurve. Elsewhere in this thread someone posted a pointer to ianix.com. That site runs DNSCurve. For the doubters, here is what end-to-end encryption of DNS looks like:

# already have root.zone from ftp.internic.net so we already have addresses for tld's such as com

# resolution of ianix.com takes 2 queries (nonrecursive, RD bit unset)

# com servers are not running DNSCurve (yet) so first query is unencrypted

# however com.zone is public data that anyone can request from verisign

# thus there are ways to get ianix's authoritative nameservers w/o using DNS

   1 ianix.com:
   197 bytes, 1+0+2+2 records, response, noerror
   query: 1 ianix.com
   authority: ianix.com 172800 NS uz5cjwzs6zndm3gtcgzt1j74d0jrjnkm15wv681w6np9t1wy8s91g3.ianix.com
   authority: ianix.com 172800 NS uz5pn8hy1fy1d2nn445s2m1udbvtytp5kp65mutgn9nggq9njvfg7f.ianix.com
   additional: uz5cjwzs6zndm3gtcgzt1j74d0jrjnkm15wv681w6np9t1wy8s91g3.ianix.com 172800 A 69.195.157.182
   additional: uz5pn8hy1fy1d2nn445s2m1udbvtytp5kp65mutgn9nggq9njvfg7f.ianix.com 172800 A 104.207.139.192
# ianix.com authoritative nameservers are running DNSCurve

# 2nd query is end-to-end encrypted, no third parties needed

   1 ianix.com - streamlined DNSCurve:
   229 bytes, 1+2+2+2 records, response, authoritative, noerror
   query: 1 ianix.com
   answer: ianix.com 3600 A 104.207.139.192
   answer: ianix.com 3600 A 69.195.157.178
   authority: ianix.com 259200 NS uz5cjwzs6zndm3gtcgzt1j74d0jrjnkm15wv681w6np9t1wy8s91g3.ianix.com
   authority: ianix.com 259200 NS uz5pn8hy1fy1d2nn445s2m1udbvtytp5kp65mutgn9nggq9njvfg7f.ianix.com
   additional: uz5cjwzs6zndm3gtcgzt1j74d0jrjnkm15wv681w6np9t1wy8s91g3.ianix.com 259200 A 69.195.157.182
   additional: uz5pn8hy1fy1d2nn445s2m1udbvtytp5kp65mutgn9nggq9njvfg7f.ianix.com 259200 A 104.207.139.192


DNSSEC is primarily, like TLS, about message integrity. TLS adds encryption of content so that sniffers can't read along. With DNSSEC everybody still reads along.

DNSCurve is not really deployed, thus there will always be alternatives springing up.

DNSCurve is also not end-to-end, until all authoritative servers support it.

Google has their DNS over HTTPS thing btw, which is scary but is another alternative if you want to hide what you're doing (except from the server you ask questions to, but you can do that from Tor ;) ).

The best alternative you currently have, and will have for a long time, is VPN/Tor though: get a tunnel to a host/net you trust not to betray the content of your connections (be that through logging or network analysis).

Passive DNS will always exist (as it happens in the recursor, hence DNSCurve does not help). And due to the caching and scalability properties of DNS, it will never be internally encrypted, otherwise those two properties would be gone. The moment they are gone it won't be DNS anymore, and maybe that is a good thing, and also possible in today's world, where bandwidth is less of an issue and most people use Google to find things.

Heck, Google could just include the IP addresses of the servers in the HTTPS response; that way one only needs to know where Google lives, and the rest is transported over HTTPS....

Long live the fact that the Internet is not only the web, though. And I think there is a great future ahead for .onion-like sites once their usability and accessibility improve; currently it is mostly the BBS days: you need to know the correct number, whereas DNS is human-readable and Google is what most people use to find sites.


TLS is not "primarily about message integrity". To see why this isn't true, observe the targets of most (all?) of the recent TLS attacks: recovery of session tokens.


You'd still need to somehow encode both the IP and the host in the URL to skip the DNS lookup from Google, but it's not even that far-fetched.


  <a href="https://www.example.com" addr="ip/192.0.2.1 ip/2001:db8::1 tor/examplecomrewwwi.onion">Example</a>

You do trust the origin site to send you to the correct next site right? :)

The big problem here is that you'll still need DNS in a lot of cases, as web pages have long since stopped being single-origin resources; most have to load all those tracking pages. Also, this would require every web page to include that method, and it only works for the web; the Internet is more than that.

I am looking forward to "DNS" pointing to more than just IPv4 and IPv6, though, like in the silly example above ;)


Absolutely, not to mention the load balancing that many do via short-lived DNS entries, and other subtleties. It's not an easy one :)


s/Names/Records/


A lot of technology is stupid and unnecessary. Doesn't mean people won't keep on using it. Just look at a web browser.

I, for one, welcome our new DNS overlords.


> the weird little pockets of the Internet

> it vanished for Comcast users whose DNS servers at the time enforced DNSSEC.

I would hardly call Comcast a "weird little pocket of the Internet".


AFAIK Comcast has about 0.8% of all Internet users.


Are they the largest ISP? What's your source? It doesn't seem like something one could guess.



Thanks, 4th largest according to https://gigaom.com/2010/07/28/top-ten-broadband-providers/, with China Telecom being the largest at ~60M.


Obviously some resolvers are misconfigured and a depressingly large number of production DNS servers around the internet will be running remarkably outdated software that couldn't handle the KSK rollover via the RFC 5011 method properly (though how many of these actually have DNSSEC enabled is an open question). For example, I've heard of resolvers breaking because they mishandled the transition and replaced KSK-2010 with KSK-2017 as soon as they got it.

I saw the postponement late last night, and my immediate but completely untested pet theory is that the reason they're getting so many 'broken' resolvers running such new software (RFC 8145 was published in April 2017, unbound 1.6.4 was released June 27, 2017) is short-lived container instances that only have the old root trust anchor reporting in before they've had a chance to obtain the new KSK - if they even stay up long enough to get it.
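
For the curious, the published half of the rollover is at least easy to check from the outside: during the overlap period the root's DNSKEY RRset carries both key-signing keys. A small sketch assuming Python with dnspython 2.x; 19036 (KSK-2010) and 20326 (KSK-2017) are the well-known key tags:

  import dns.dnssec
  import dns.resolver

  KSK_TAGS = {19036: "KSK-2010", 20326: "KSK-2017"}

  # Fetch the root zone's DNSKEY RRset via the default resolver and list the
  # key-signing keys (flags 257 = ZONE + SEP). This shows what is published,
  # not what any given resolver actually trusts.
  for rdata in dns.resolver.resolve(".", "DNSKEY"):
      if rdata.flags == 257:
          tag = dns.dnssec.key_id(rdata)
          print(tag, KSK_TAGS.get(tag, "unknown KSK"))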

From the root-dnssec-announce mailing list, it sounds like more information is due Friday:

"Duane Wessels started the research on this and has done a great job. He's presenting it at DNS-OARC in San Jose on Friday."


I'm looking forward to the talk on Friday (and the easy conversation starter at NANOG... we can't keep talking about dying clock silicon forever)... but I absolutely agree with tptacek on this one: DNSSEC is worse than useless (it solves no actual security problems, introduces many new ones, and is absurdly complex and poorly implemented across the board), and this doesn't help the optics one bit. I've gotten myself down to one DNSSEC zone I have to run (a .gov, lord help us all) and have been trying to get the re-signing going since the new key became available, with literally zero forward movement. Lots of fun.

(Aside: this was submitted previously but got no traction: https://news.ycombinator.com/item?id=15362516 Is this the dupe-detector-avoider in action?)


So, how do you suggest replacing DNSSEC, then, assuming that your enemy is a nation state intercepting your DNS queries and responding with NSA QUANTUM before the legitimate server does?


Replace it with nothing.


Great, then my entire TLS setup is useless, and I can just use plaintext HTTP.

The whole point is that at least CAs enforce DNSSEC, so no one can get a certificate for my site except for me.



[flagged]


> you can switch in seconds to another if you need to.

Who can, the site owner? Then you missed the point of trust agility as described by moxie.


Then moxie missed the point of how trust with websites works.

You have to trust the site owner, who has to trust his hosting provider, his DNS provider, and his domain registry.

Transitively, you already trust all these people.

DANE with DNSSEC does not add any requirement for new trust and it does not require that you trust any involved party more than before.

It merely removes one entity from the equation.


No, the owner of a site does not have to trust their DNS provider and registry. SSL and TLS were designed from the beginning to assume that the DNS was insecure --- they had to be, since the DNS was insecure when they were invented (and, of course, remains so today).


That’s completely wrong.

TLS today depends entirely on DNS – because anyone can receive a certificate for your domain if they control your DNS.


This is addressed in the second paragraph of the post I cited.


No? It’s not addressed at all how the current CA + DNS validation model is supposedly better than DANE+DNSSEC.


Treating trust as binary is folly. DANE means the provider could send the correct certificate to most people, but a fake one (along with a fake IP) to specific victims.

Convergence distributes this verification, so the provider would have to give the same false result to everyone, otherwise a notary will catch the discrepancy.


> Treating trust as binary is folly.

What? Any trust point being compromised would lead to the same result. Why would anyone use a non-binary comparison?


That quote needs the context of the rest of the post to be understood. My point is that you can trust someone more if you increase the cost of violating your trust.


And?

The provider can already request a Let's Encrypt certificate for the same domain, and then give Let's Encrypt a wrong IP.

This allows the same provider to get a fake certificate, and MitM only specific users.

I don't see any additional attack surface.


Yes, DANE is not necessarily worse than what we have right now. But what we have right now is what moxie is criticizing. His trust agility argument is for a new proposal (called Convergence), not to defend CAs vis-a-vis DANE.




