Some bad news for DANE and DNSSEC (apnic.net)
81 points by okket 5 months ago | 56 comments

> certificate transparency logs appear to be completely ineffective in catching CA-related name hijack events in real time.

In what sense is this true? They're effective because browsers only accept certs which can first prove that they were submitted to a CT log, and it's easy to search the CT logs for e.g. gmail.com to find potential misissuance events.

This is the first time I've heard CT described as ineffective. What gives?

It's unclear what Huston means by "real time", but in the comments section he cites the fact that it took 6 months for the misissued Symantec certificates for example.com to be noticed.

That's because the owner of example.com wasn't monitoring CT. Had they been monitoring, they would have been alerted quite quickly. As a case in point, when Certinomis misissued a certificate for test.com, my monitor notified me 16 minutes after issuance, I filed a report with Mozilla two hours after issuance, and a representative of Mozilla responded less than 3.5 hours after issuance.[1] That's pretty fast.
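For the curious, a minimal sketch of what such a monitor can look like, here polling crt.sh's public JSON endpoint rather than the CT logs directly. The watchlist-matching helper is the only part that isn't plumbing; all names are illustrative, not Cert Spotter's actual code:

```python
import fnmatch
import json
import urllib.request

def matches_watchlist(dns_names, watchlist):
    """Return the watched patterns that any of a cert's SAN dNSNames match."""
    hits = set()
    for name in dns_names:
        for pattern in watchlist:
            if fnmatch.fnmatch(name.lower(), pattern.lower()):
                hits.add(pattern)
    return hits

def check_crtsh(domain, watchlist):
    """Query crt.sh (a public CT search engine) for certs under a domain
    and flag any that match the watchlist. Illustrative only."""
    url = f"https://crt.sh/?q=%25.{domain}&output=json"
    with urllib.request.urlopen(url, timeout=30) as resp:
        entries = json.load(resp)
    for entry in entries:
        names = entry.get("name_value", "").split("\n")
        hits = matches_watchlist(names, watchlist)
        if hits:
            print(f"cert {entry['id']} matches {hits}: {names}")
```

A real monitor would instead tail the logs' `get-entries` endpoints continuously and alert rather than print, but the matching step is the same.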

And now Mozilla is kicking Certinomis out, providing yet another example of how CT is improving the Web PKI. CT works.

[1] https://bugzilla.mozilla.org/show_bug.cgi?id=1496088

That was really interesting to read. As a relative layman in this area, do you have any recommendations for reading material? What is your monitoring setup?

https://blog.cloudflare.com/introducing-certificate-transpar... is a very good overview of Certificate Transparency.

I've written my own Certificate Transparency monitor called Cert Spotter. I use both the standalone open source version (https://github.com/SSLMate/certspotter) and the hosted service (https://sslmate.com/certspotter) to monitor my own domains as well as several test/example domains (example.com, test.com, etc.).

An earlier edit of your post suggested you didn't understand how X.500 Distinguished Names work well enough to search the DNs (presumably in crt.sh), so learning about that might not be time wasted if you care about what's inside a certificate other than domain names. Be warned that although CAs are required by CA/B Forum rules to get most of those other details right when they're included, there is limited appetite for enforcing such a rule.

There are Google Groups (because a lot of this is a Google project or spins out of Google projects) for a lot of this, including mozilla.dev.security.policy (the public discussion of Mozilla's root trust store policy), certificate-transparency (general discussion of CT) and crtsh (discussion of Rob's crt.sh monitor database and site). They're all usually quiet enough that you could read back a few months and/or subscribe and have them in your background.

Yes, sorry about the edit. I thought I was quick enough to remove it before anyone read it, as I figured out what I was doing wrong.

Thank you very much for the links.

He doesn't understand what CT does.

Yeah, it's technically true that CT isn't designed to directly prevent most attacks.

The largest effect of CT is that CAs get pressured to get their stuff in order. It has been a major milestone in improving the ecosystem, particularly because it also led to many real distrusts, including of one of the largest CAs of all (Symantec).

Yes, CT doesn't try to prevent most attacks directly. It tries to create an ecosystem where CAs fear getting distrusted when they don't prevent attacks in the first place. And it works very well.

Huston has reset the bar at "real-time", and implies that DANE would provide such real-time assurance, when in fact even DANE advocates acknowledge that, were DANE to see wide use, we'd need to develop a DANE equivalent of CT.

I guess I don't even buy that CT is ineffective in real-time. If attacks don't happen because of the knowledge that misissuance would necessarily be publicly noticed and then severely punished, then it seems your users have been protected against real-time attacks, even if that's an emergent social phenomenon rather than a protocol guarantee.

In practice I've found new certs become available for download from CT logs hours after they've been issued. That's more than enough time to perform an attack.

Another point is that CT alone won't prevent attacks or inform you about a problem - it needs a monitoring system.

(yes I know that's not the problem with CT - it's great, just trying to justify a strong opinion of "it doesn't work")

> The efforts of the CAB Forum to instil some level of additional trust in the system appear to be about as effective as sticking one’s fingers into a leaking dam. The number of trusted CAs has extended conventional credibility well beyond the normal boundaries and has pushed the unsuspecting user into a fragile state of credulity.

> Efforts to improve this mess, such as Extended Validation (EV) certificates, have gained no traction with users, as they are largely immune to subtle changes in the content and colours of the browser’s navigation bars, and certificate transparency logs appear to be completely ineffective in catching CA-related name hijack events in real time.

That’s a pretty extreme opinion. I am surprised it appeared on the official apnic.net site. I do wonder if it has anything to do with the fact that the major browsers, who actually decide who gets to be a CA, are US-based.

It's hardly extreme: your browser quite literally trusts hundreds of CAs.

How much do you trust:


Turkiye Bilimsel ve Teknolojik Arastirma Kurumu - TUBITAK

Unizeto Sp. z o.o.

XRamp Security Services Inc [1]


etc., to have not been compromised, or to not be selectively delivering certs to "interesting" targets. Because they can tell you practically any host on the internet is a valid SSL domain.

Stuff like DNS CAA records, and certificate transparency help somewhat, but are either trivially stripped by MITM attacks or not implemented at the CA or software level.

[1] Appears to have gone out of business entirely; how safe is that SSL cert now?

> How much do you trust ...

I trust them no less than I trust the domain resellers who administer DNS.

It used to be that all you had to do to be a CA was knock on some browsers' doors and say "I'm a CA, plz add me, kthxbai." But that hasn't been true for quite some time; now you have to go through audits and several other hoops to be added to the lists, and noncompliance results in browsers dropping often-fatal ban hammers, as DigiNotar and WoSign can attest. Browsers have also developed fairly stringent policies enforcing minimum cryptographic standards, along with audit mechanisms that allow for strong policies.

The goal of many DANE proponents is to replace the entire CA infrastructure and PKI system, but they haven't really put in the work to actually demonstrate the ability of clients to enforce strong policies in practice. The root keys are no longer unacceptably weak, but there is still a proliferation of weak keys. And it's not like DNS providers have done a good job of actually enforcing policies around things like name spoofing or enforcing UTS#46 validity.

Instead, we're just supposed to take it at their word that DNS providers make better CAs than our current CAs, and drop all of the auditing and the like that happens today. And if the Libyan government decides it wants to MITM a popular link shortener, we're supposed to just shrug our shoulders and say "nothing we can do," I guess.

At least the Libyan government can only MITM .ly in DANE world, and not a Libyan CA and consequently every other domain in existence.

Also, if the Libyan government decides they want to seize the domain of a popular link shortener (like the US government has seized some domains under US TLDs), they can just get a certificate from Let’s Encrypt. The Web PKI system isn’t really set up to stop them from doing that. The only scenario where the current CA system possibly has an advantage is if they apply their MitM covertly, only to a subset of clients. If this is discovered, CAs who participate in the scheme can be blacklisted, whereas with centralized control you’d have to basically blacklist all of .ly…

Let’s Encrypt participates in CT, so no, they won’t. This will be discovered, and (I guess) they will revoke the cert really quickly

They might, but if so it would be a special exception from their usual policies. In general, deciding who owns a particular domain name is a matter for the corresponding TLD's registry. If NIC.LY decides to seize bit.ly, there's nobody that can declare that illegitimate per se, or force them to give it back. And to quote Wikipedia on the semantics of DV certificates (which are the normal kind):

> The sole criterion for a domain validated certificate is proof of control over whois records, DNS records file, email or web hosting account of a domain.
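That criterion is automatable. As a rough illustration (simplified, not Let's Encrypt's actual code), ACME's HTTP-01 challenge boils down to the CA fetching a token-derived string from the domain and comparing it to what the account holder should be able to produce:

```python
import base64
import hashlib

def key_authorization(token: str, account_jwk: bytes) -> str:
    # RFC 8555 key authorization: token || "." || base64url(SHA-256 thumbprint
    # of the account key's canonical JWK). account_jwk here stands in for the
    # canonical JWK bytes; a real client builds it per RFC 7638.
    digest = hashlib.sha256(account_jwk).digest()
    return token + "." + base64.urlsafe_b64encode(digest).rstrip(b"=").decode()

def validate_http01(served_body: str, token: str, account_jwk: bytes) -> bool:
    # The CA fetches http://<domain>/.well-known/acme-challenge/<token>
    # and compares the response body to the expected key authorization.
    return served_body.strip() == key_authorization(token, account_jwk)
```

The point being: whoever controls the web host (or DNS, for the DNS-01 variant) passes, regardless of who "owns" the name in any legal sense.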

The big difference is that in the DANE world, they get to MITM forever. In the current world, the MITM will be discovered and the offending CAs distrusted pretty quickly.

As Adam Langley has observed, fully implementing DANE doesn't get rid of these "hundreds of CAs", but rather adds a hundred-and-first --- the DNS hierarchy.


Meanwhile, CT has done more to protect the Web PKI in a couple years than DNSSEC ever has, and without forklifting out infrastructure. In fact, if rumors are true and at least some of the CA misissuance was detected due to broken pins, you can say the same thing about HTTP certificate pinning, and that technology wasn't even successful.

It adds one more, yes, but it is exclusive. That is, if a domain wants to use DNSSEC and DANE, then that is mandatory for clients that support it, and for them WebPKI becomes a non-issue.

The fact that DNSSEC is a true PKI, and the only true Internet-scale PKI, and one that is 100% aligned with the domainnames that users understand and which are embedded in URIs and email addresses, and the fact that DANE gives one a way to opt out of the WebPKI mess -- that all makes DNSSEC/DANE superior.

If someone today wanted to use DANE exclusively, opting out of the rest of the Web PKI, what percentage of Internet users would be able to make HTTPS requests to their site?

If browsers someday decide to implement some DANE-like feature, it would quickly become a fairly high percentage. There would still be a rather long period where browsers would have to continue to trust Web PKI certificates in general, even assuming they decided to phase them out. But... during that time, anyone using DNS over HTTPS with a trustworthy resolver would presumably get the right TLSA records, so their browser would reject attackers’ certificates even if validly signed by the Web PKI, no?

That's an answer to the "hypothetical someday" question, but I'm interested in what the number would be today.

Then the answer is approximately 0%, right? But what does that prove? Perhaps DANE is not ready yet, or perhaps it’s struggled to get adoption because its design is fundamentally flawed; you’re versed on the details, I’m not. But to me that seems pretty orthogonal to whether “adding one more” is good or bad, or whether DANE’s design is fundamentally superior to the CA model as cryptonector claims. Or at least it’s a very oblique way to make a point about that.

My objection, which, mea culpa, I stated obliquely, is to 'cryptonector's subtext that DANE is a reasonable thing for an actual operator to pursue in 2019. We can't even generate a workable draft for how a browser would implement DANE in 2019, let alone see it implemented in a browser. Huston is probably right that the failure of this draft takes DANE off the table for another couple years† --- a couple more years in which DNSSEC can't do the one thing DNSSEC people think it's most valuable for.

How many years have to pass before we give up on this boondoggle? We're coming up, within the next 12 months, on twenty-five. Do we wait for year thirty to finally, formally declare time of death? I'm fine either way. But I'd say we passed the mark where people can casually pretend that DANE is something anyone should expect to get value out of at least five years ago.

So I take umbrage at the tone of that comment above, that "it's OK that DANE adds one more CA to the mess of CAs we already have because some people will lock their sites down to DANE-only". Well, no, no they won't, because that's akin to taking your site off the Internet, and that will remain the state of play indefinitely (probably forever).

† And that assumes that after the IETF figures out how to specify stapled DANE, the major browser vendors will adopt it; the evidence points the other way.

That's a very strange thing to be interested in.

Doing anything to detect or prevent cert mis-issuance would have resulted in better outcomes, because DNSSEC has nothing to do with HTTPS. All the previous attempts were lame, optional hacks that had obvious workarounds.

I keep repeating the following and people ignore it, but it's a guaranteed way to stop mis-issuance cold - not detect it after a few hours, not make it less likely when a domain and website opt to jump through five separate hoops.

The registrar should coordinate between domain owners and certificate authorities to provide a second factor, second party, cryptographic verification of the intent to issue a cert. Rather than say "ok, this organization says who owns this domain", and then "ok, this other organization controls who can make domain certs", you actually validate the second thing based on the first thing. A hacker can control a domain, but that doesn't mean they own it, or that they control the private key(s) used to purchase or transfer the domain.

And you can automate this a dozen different ways for large organizations, it's not difficult. The difficult part is getting the registrars, CAs, and browsers to accept it.
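A minimal sketch of that hand-off, with HMAC standing in for a real digital signature and every name hypothetical (this is the commenter's proposal, not any deployed protocol):

```python
import hashlib
import hmac

# Hypothetical flow: at registration time the domain owner deposits a
# credential (in practice a public key) with the registrar. Before a CA
# issues, it must present an approval tag over the CSR that only the party
# holding the owner's credential could have produced -- so controlling the
# web host or DNS alone is not enough to get a cert.

def approval_tag(owner_secret: bytes, domain: str, csr_fingerprint: bytes) -> bytes:
    msg = domain.encode() + b"|" + csr_fingerprint
    return hmac.new(owner_secret, msg, hashlib.sha256).digest()

def ca_verifies(owner_secret: bytes, domain: str,
                csr_fingerprint: bytes, presented_tag: bytes) -> bool:
    expected = approval_tag(owner_secret, domain, csr_fingerprint)
    return hmac.compare_digest(expected, presented_tag)
```

With public-key signatures instead of a shared secret, the registrar would only ever hold the verification key, so it couldn't forge approvals either.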

Several of the most engaged and well-known people in the DNSSEC community, including Geoff Huston, who wrote this blog post and said as much in it, not only believe that DNSSEC does have something to do with HTTPS, but that HTTPS is essentially DNSSEC's raison d'être.

I trust that the first time those would try to do something bad, CT will catch them, and in a few weeks, they will no longer be trusted.

Entering this list costs hundreds of thousands of dollars; an attack using a different channel is going to be more economical.

CT only covers a small subset of CAs, and an even smaller subset of TLS clients.

> CT only covers a small subset of CAs

I’m curious what you mean by this. CT is required for a cert to be trusted by Chrome. It seems like most CAs would participate, considering the low value of a cert not trusted by Chrome.

All major public CAs participate. There are some smaller subCAs which do not participate at all because they've defined their problem space such that Chrome + Safari is irrelevant, e.g. GlobalSign is responsible for some legacy subCAs that issue for a single company, that company doesn't care about web browsers for these certs and so logging (which isn't required by policy) is not interesting to them. GlobalSign expects these legacies to go away over the next year or so.

Some of the large public CAs offer, either explicitly in their ordinary "purchase" journey or by special arrangement, not to log pre-certificates. And a few in-house CAs at very big outfits like Google deliberately don't log pre-certificates.

You do NOT need to log pre-certificates to be trusted by Chrome; the reason to do that is to produce a drop-in experience for non-technical subscribers. If you have technical staff (which obviously Google does), you can log certificates retrospectively and staple the SCTs to the TLS handshake as needed, allowing you to log on a "just in time" basis at the cost of needing a bunch more smart people to run your web servers. If those servers serve google.com and gmail.com, you already needed a large international team anyway. So these latter certs do get logged, but it may literally be seconds before they're used. (And inevitably Google has screwed up and not logged one, which briefly broke some of their sites in Chrome. Closed course, professional drivers, do not attempt.)

I don't think Geoff Huston argued for more CAs or a different distribution. His view is, in my experience, shared by many, regardless of nationality, though not as vocal. There are simply too many examples of compromised CAs, and the webpki system makes anyone vulnerable to the weakest of them.

I find it hard to square the paranoia about CAs with the imposed death penalty on Symantec's certificate business. If no one is watching the CAs, how did Symantec get caught and punished? And for what seemed like relatively minor infractions?

I think it's possible that the average web developer underestimates the degree to which the Web PKI is monitored by private and government organizations. And public cert logging only makes that easier.

Are compromised CAs worse than other software bugs though?

Yes, we have too many CAs, and some of them are pretty suspicious. It would be great to be able to give them TLD limits. But becoming a CA is still very slow and expensive, and certificate transparency will catch bad behavior. So there is a very strong incentive for each CA to be good.

Compare this to a random server exploit, which is deployed anonymously and causes no direct monetary harm to the company that made the software.

No wonder there have been very few cases of CA-based compromise compared to good old software exploits.

"More CAs" == worse. I don't know what "different distribution" means.

Anything that doesn't have a strong hierarchical binding to the names users understand (domainnames) is simply not going to have better security than WebPKI.

DNSSEC is the only true PKI we have. It has one set of roots and is workable even if we have competing roots, provided they sign roughly the same TLDs. The DNSSEC PKI is also truly hierarchical, while the WebPKI is absolutely not.

The web is absolutely built on naming of services that users can mostly understand -- that is, domainnames. Users cannot understand X.500 naming. So the web absolutely needs a reliable way to authenticate domainnames. WebPKI is not reliable, no way, not even with CT. That leaves DNSSEC as the only remaining alternative.

It's DNSSEC or WebPKI, and WebPKI is no good. So it's DNSSEC or nothing.

> The web is absolutely built on naming of services that users can mostly understand -- that is, domainnames.

Users don't actually understand domain names. If Google were to make the result of searching for "facebook" return a facebook.google.com that MITM'd Facebook, they could capture ¼-⅓ of Facebook's traffic without any software complaining about it.

Your comments have a strong sense of "if we just made sure that the server is truly using the domain name, then everything would be hunky-dory." In fact, as far as I can tell from actual security problems, fully securing the domain name (or even authenticating the server properly in the first place) wouldn't improve security meaningfully for most people.

By your logic, why use HTTPS in the first place? (Or at least why care about certificate validation at all?)

Hint: Because ISPs are in a position to intercept and even MitM traffic, but generally not in a position to decide what URLs you go to.

Only what is possible can be done.

We have the technology to actually ensure that https://news.ycombinator.com/ is really https://news.ycombinator.com/ but we do not have technology that could ensure this is where you get when you type "Hacker News" into Bing.

Part of what has to happen to civilisations around technology is that they adjust to what is possible. Different adjustments are possible, the egg washing thing is an example: in some countries you purchase eggs in the state they're laid (which means sometimes there is dirt on the shell), and you don't keep them in a refrigerator; in other countries you purchase eggs "washed" clean and they're refrigerated. Either of these approaches works fine, but mixing them up doesn't work.

So, for example, teaching people to bookmark sites is an adjustment that works with our technology, whereas teaching them to get everywhere by following sponsored links on Google not so much.

WebAuthn is another example. A Security Key doesn't care that you thought mybnak.example was your bank, doesn't care if you got there from a phishing email, from a bogus advert or you just can't spell when typing. It will resolutely refuse to provide mybnak.example with your mybank.example credentials because those strings are not the same. You can press the button over and over, swear, throw things, but because it's based on the URL it won't let you screw yourself.
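That origin binding can be modeled in a few lines. This toy sketch (not the actual WebAuthn API; names are illustrative) shows why the lookalike domain gets nothing:

```python
# Toy model of WebAuthn's scoping rule: a credential is stored under the
# exact relying-party ID it was registered for, and lookup is a strict
# string match. There is no fuzzy matching and no user override.

credential_store = {}  # rp_id -> credential (stand-in for the authenticator)

def register(rp_id: str, credential: str) -> None:
    credential_store[rp_id] = credential

def get_assertion(rp_id: str):
    # Wrong string, no credential -- the user cannot be tricked into
    # handing mybank.example's credential to mybnak.example.
    return credential_store.get(rp_id)

register("mybank.example", "cred-123")
assert get_assertion("mybank.example") == "cred-123"
assert get_assertion("mybnak.example") is None  # phishing domain gets nothing
```

The real protocol additionally signs the origin into each assertion, so even a credential that somehow leaked couldn't be replayed from the wrong site.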

> By your logic, why use HTTPS in the first place? (Or at least why care about certificate validation at all?)

Privacy is valuable and relevant, and you need some modicum of authentication to get privacy. The main threat here is "someone who's watching all the traffic go by on the coffee shop wifi".

I work in IT and can confidently say that if you MITM'd/phished Facebook traffic to resolve to notFacebook.imstealingyourshit.com, you'd get your 1/4 to 1/3 results.

If you spoofed an IDN that looked like Façebook.com, you'd get damn near 100%.

This is a long and interesting summary of a DNS operations research workshop. The "bad news" bit is just one part of it.

The "bad news" here is that the TLS dnssec-chain-extension was dropped by tls-wg. Chain-extension staples DNSSEC records to TLS handshakes, so that you don't need to do extra DNS lookups to get them. This is important because a pretty big fraction of Internet users are on networks that can't reliably look up DNSSEC records using the actual DNS protocol.

If you're hungry for the details (of course you are!), I believe they can be summed up as follows:

1. The point of dnssec-chain-extension is to enable browsers to use DANE, either as an alternative or an enhancement to the X.509 CA hierarchy.

2. That means the threat model for dnssec-chain-extension has to include attackers with valid certificates; define them out of the threat model and you've defined DANE out of having a point.

3. An attacker with a valid certificate can strip dnssec-chain-extension out of a TLS handshake.

4. So they had to reinvent certificate pinning to make dnssec-chain-extension make sense.

5. But certificate pinning is already a dead letter as a browser standard, because of operational concerns around abuse and also consequences of misconfiguration.
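For context, the DANE check that those stapled records would feed is itself simple. A sketch of RFC 6698 TLSA matching, with certificate parsing omitted and inputs assumed to already be DER bytes:

```python
import hashlib

def tlsa_matches(cert_der: bytes, spki_der: bytes, selector: int,
                 matching_type: int, assoc_data: bytes) -> bool:
    """Check a presented certificate against one TLSA record (RFC 6698).
    selector: 0 = full certificate, 1 = SubjectPublicKeyInfo only.
    matching_type: 0 = exact bytes, 1 = SHA-256, 2 = SHA-512."""
    data = cert_der if selector == 0 else spki_der
    if matching_type == 0:
        return data == assoc_data
    if matching_type == 1:
        return hashlib.sha256(data).digest() == assoc_data
    if matching_type == 2:
        return hashlib.sha512(data).digest() == assoc_data
    return False  # unknown matching type: treat the record as unusable
```

The hard part was never this comparison; it's getting the TLSA records to the client untampered, which is exactly what points 2-4 above are about.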

If you're just here for another dose of my DNSSEC snark, I will observe for you this: Geoff Huston, a giant in the Internet/DNS operations research community, concedes in this post that DANE is the whole "reason DNSSEC is worth the effort". He also believes that dnssec-chain-extension was vital to getting it deployed in browsers (I'm skeptical Google, Microsoft, or Apple were going to dip their toes in this swamp-water again, but whatever). And Huston apparently didn't notice until today that the tls-wg killed this draft last year.

If you're keeping score, the DNSSEC "stick-a-fork-in-it-ometer" is currently registering:

* The elimination of DNSSEC from macOS, Mozilla, and Chrome (in that last case accompanied by a statement about why the team doesn't believe DANE is workable).

* The success of DNS-over-TLS and DNS-over-HTTPS, both of which accomplish 96% of the bottom-up, fuck-DNSSEC goals of DNSCurve (or whatever it was Dan Bernstein called it).

* The success of Certificate Transparency and, more broadly, Google's success at cracking down on CAs and getting CT deployed, which further deflates the impetus for DANE.

* LetsEncrypt (and, I guess, Amazon's Certificate Manager), which took money out of this whole contest (I don't think money was ever the big deal in the real world that IETF people thought it was, but it sure drove a lot of dumb mailing list posts).

* MTA-STS, the SMTP strict transport security system that locks down TLS between MTAs, which was standardized by all the major email providers, specifically (stated in the draft!) to avoid the need for DNSSEC, and which is now being rolled out at Gmail.

* And now I guess we'll pretend that DANE in browsers was somehow on the table and that it just died. I'm just happy we agree it's not happening.
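The MTA-STS item above is deliberately low-tech: the policy is just a text file fetched over HTTPS from a well-known URL. A sketch of parsing one (RFC 8461 `key: value` format; illustrative only):

```python
def parse_mta_sts_policy(text: str) -> dict:
    """Parse an MTA-STS policy body (RFC 8461). Keys are 'version', 'mode',
    'max_age', and 'mx'; 'mx' may repeat, so it's collected into a list."""
    policy = {"mx": []}
    for line in text.splitlines():
        line = line.strip()
        if not line or ":" not in line:
            continue
        key, _, value = line.partition(":")
        key, value = key.strip(), value.strip()
        if key == "mx":
            policy["mx"].append(value)
        else:
            policy[key] = value
    return policy
```

Because the policy rides over the Web PKI rather than DNSSEC, a sending MTA needs nothing but an HTTPS client to enforce it.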

I don't think I'm quite ready to stick a fork in it, but it's getting close.

My favorite detail from this writeup, by the way, is the fact that Sweden and the Netherlands got their DNSSEC adoption by paying people to use the protocol.

Money is a big deal for ease-of-use and access. Tracking down a credit card and the necessary approvals to use it, even if the amount of money is no big deal, is a lot of friction for many people. Could be corporate red tape, could be that you don't have a credit card. Could be because you're in a country where financial sanctions prevent you from transacting with a foreign CA.

All of this friction would lead to many sites not using HTTPS because it's just too much of a hassle. And it's the visitors to those sites that suffer most.

TLS certificates are free now and that is unlikely ever to change again.

You stated that it wasn't a problem before they were free:

"I don't think money was ever the big deal in the real world that IETF people thought it was, but it sure drove a lot of dumb mailing list posts"

I am pointing out that I disagree, it was a very big deal and it's why I created Let's Encrypt.

I think Lets Encrypt is a big deal. I think making TLS certificates free is a big deal in terms of getting people to deploy HTTPS. I don't think they're a big deal in terms of getting people to use some other protocol like DNSSEC instead. Sorry, I worded that imprecisely.

The point of dnssec-chain-extension is to enable browsers to use DANE without performing additional DNS record lookups and incurring the associated latency penalty. All of those counterpoints would be invalidated if DNS servers supported additional answers or push.

With DoH we get DNS push, and a browser could do DANE without incurring the associated latency penalty. It could peek ahead in the stream and see if a DANE (or SRV) record exists. No latency, no middlebox, no additional secure connection, no dnssec-chain-extension, no certificate pinning. DANE support in the browser would then be purely an optimistic code path that improves security if the DoH server happens to send the record.

We'll be reviving draft-dnssec-chain-extension as an individual submission.

With a pinning mechanism? Also, out of curiosity, what’s taking so long?

The draft "pinning mechanism" was not like HPKP and insisting on portraying it this way was a recurring part of the bad faith effort to derail the working group efforts earlier.

It's a "pinning mechanism" like HSTS, so it doesn't work as a ransom tool, and the risk of a foot-gun is much more manageable. You're apparently very enthusiastic about MTA-STS, which has roughly the same pinning.

I like pinning. It's the browser vendors who don't.

Not sure I see how making easily-hijackable DNS the authority for TLS certificate trust would solve these problems. In fact, I would expect that would make it worse, by putting all your eggs into one basket.

It’s already the authority for TLS trust. How do you think domain validation works?

Yep. Let's just cut out the middlemen already.

Any change in how PKI is used in a browser must be made by the browser vendors; no change has been their decision.

The idea, spread after Snowden, that TLS should be used everywhere is a bizarre antipattern. HTTP cache appliances have been rendered obsolete, and domain-validated certificates only provide assurance that you are connecting to the same server that LetsEncrypt had connected to.
