
Note that these attacks involve compromised accounts with authority servers, so despite being the most visible and impactful DNS attacks of the last few years, DNSSEC would have done little to defend against them; in fact, even in the DNSSEC fantasy-world where DANE replaces X.509 CAs, these attackers would still have accomplished their goals.

After reading the headline I immediately thought of "14 DNS Nerds Don't Control the Internet" [0].

[0] https://sockpuppet.org/blog/2016/10/27/14-dns-nerds-dont-con...

That article is peddling bullshit.

Yes, DNSSEC is not adopted. But the intention behind it is to stop people hijacking DNS requests (re-routing them to rogue servers, for instance) and returning spurious answers.

That’s a relatively simple attack, and it can have fairly serious repercussions. Just return an A record for the domain and host straight HTTP, for example. Or re-divert email with MX records. Publish fake CAA records to bypass that safety lock if you want to supply a cert obtained elsewhere.
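To make the CAA "safety lock" concrete, here is a minimal sketch of the issuance check a CA performs, simplified from RFC 8659 (real validators climb the DNS tree toward the root and handle wildcards and parameters more carefully). The CA names are illustrative.

```python
# Toy model of the CAA check a CA performs before issuing a certificate
# (simplified from RFC 8659). An attacker who controls the zone can
# simply publish a CAA record naming whichever CA they plan to use,
# which is the bypass described above.

def caa_permits(caa_records, ca_domain):
    """caa_records: list of (tag, value) tuples found for the domain.
    Returns True if the CA identified by `ca_domain` may issue."""
    issue_values = [v for tag, v in caa_records if tag == "issue"]
    if not issue_values:
        return True  # no "issue" records present: any CA may issue
    # 'issue ";"' forbids all issuance; otherwise match the CA's domain,
    # ignoring any parameters after the first ";".
    return any(v.split(";")[0].strip() == ca_domain for v in issue_values)

print(caa_permits([("issue", "letsencrypt.org")], "letsencrypt.org"))  # True
print(caa_permits([("issue", "letsencrypt.org")], "evil-ca.example"))  # False
print(caa_permits([], "any-ca.example"))                               # True
```

The point of the example: CAA only constrains honest CAs consulting honest DNS; whoever writes the zone writes the policy.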

The stuff about the US Govt controlling sites is the most facetious of all. As the original (non) story above shows, controlling the DNS is all you need to control a site in the X.509 world. Extended validation is a joke, controlling the domain is the only barrier to getting TLS certs issued for any domain.

We implicitly need to trust the root DNS. That’s a given. So why couldn’t it be the root of trust for secure browsing? Browsers trust something like 1500 CAs out of the box these days, is it really better to create a system where that many orgs need to be honest, and not get hacked, to be effective?

To claim that the current system, with no way for people to know that the DNS answers they receive are valid, poses no security risk, is extremely foolish.

Here's a story about a DNS hijacking attack unprecedented in scale for which DNSSEC is powerless, and your conclusion is that DNSSEC is an important priority.

If you believe control of the DNS is straightforward without DNSSEC, and that control of the DNS is all you need to get an X.509 certificate issued, go get a GOOGLE.COM certificate misissued. Or FACEBOOK.COM. If you actually manage to do it (you won't), turn the timer on your iPhone on so we can measure how long it takes for Google to kill the CA you got it from, with no notification or further intervention from you.

We do not implicitly trust the DNS roots. In fact, it's a core feature of modern Internet security (modern since the late 1990s) that we do not trust DNS at all. It is a small faction of standards zealots, whose pet standard failed for almost 30 years to either gel or get traction in the market, who have decided that their spurned work turns out to be critical to all Internet security, and they're the ones revisiting that long-decided question.

You made this argument in, I think, 3 other places in this thread, and I'd just like to say that I put some effort into making sure my rebuttals relied on different arguments each time. Collect them all! I wrote them I think a little snarkily, but I tried to exceed the bar you set by claiming I'm "peddling bullshit".

Sure for google.com it’ll fail. But you could do it for many, many others. The reality is that control of a zone is all it really takes for someone to get a cert issued for it. In that context you are most certainly dependent on the accuracy of the DNS.

I didn’t for one minute suggest DNSSEC would help in relation to the attack detailed in the article.

I am just saying that to claim securing the DNS is pointless is, in my opinion, a fallacy.


This response breaks down as follows:

1. An incivility directed at me.

2. Another incivility directed at the people whose sites were hijacked.

3. The concession that a misissuance of GOOGLE.COM or FACEBOOK.COM would be detected and unlikely to be successful.

4. The claim that that's only true for sites like GOOGLE.COM and FACEBOOK.COM without further refinement or evidence.

5. Five paragraphs of irrelevant detail about the mechanics of Google's response to a misissuance that have nothing to do with his or my argument.

6. A repeat of the concession from (3).

7. A final claim that a CA getting killed, as Google recently did to the largest, best-known CA in the market, is a "Hollywood Action Thriller style sequence of events", to which I will only respond, check out "First Man", it's great, and a much more interesting show than watching Google respond to misissuance.

> misissuance of GOOGLE.COM or FACEBOOK.COM would be detected and unlikely to be successful

Eventually detection is almost certain, but whether it's "successful" would depend very much on what somebody was doing with it and why.

We have some examples to work with in analysing this, where certificates for Facebook or Google names were issued at various times without Facebook or Google knowing about it - and maybe I'll do that analysis later, but for now I want to focus on your Hollywood Action Thriller scenario.

Google did not "kill" the "largest best-known CA in the market".

Back in January 2017 Andrew Ayer wrote to m.d.s.policy about some certificates Symantec had issued for names like example.com (sic) which Andrew had verified were not asked for by example.com's legitimate owners. This gradually spiralled, with Mozilla producing a fairly substantial document listing well over a dozen distinct problems, both newly discovered and dating back a little way, with Symantec. Overall the impression we got was that Symantec management were not delivering the oversight role needed to ensure their CA achieved what a relying party should expect.

Symantec management didn't like where this was going and tried to "go over our heads". I have no idea whether this worked for Microsoft and Apple, and for me there isn't anyone "over my head", but at Google it appears to have made things worse.

In summer 2017 Google's plan asked Symantec to replace their infrastructure and institute bottom-up change to their organisation in order to restore our confidence in the CA. For practical reasons (it's hard to stall your customers for perhaps 1-2 years while you fix things) Symantec would have needed to continue selling certificates during the period when we did not trust their management to operate a CA, and so they'd need to find another large CA to provide us with the assurances we need while retaining Symantec (or Thawte, Verisign, etcetera, all brands of Symantec) branding.

Symantec negotiated with DigiCert to provide this capability over summer 2017 (very small Certificate Authorities would not have been able to practically do what was needed) but at some point during that negotiation they pivoted to instead selling the business to DigiCert.

Once the initial agreement existed in October 2017, DigiCert and Symantec sought permission to go ahead, and received it on some simple conditions. (Mozilla's concern was that this might be something akin to a "reverse takeover" in which Symantec would dodge the intended management changes and instead seize a new brand; key people at DigiCert were able to assure us that this was not going to happen.) Then all the usual business stuff happened, and in parallel DigiCert began building a new issuance infrastructure for the ex-Symantec brands, more or less as they would have under the original concept but with them keeping the profits.

In practical terms Symantec chose to exit the CA business a bit less than a year after Andrew's original post to m.d.s.policy, after many months of discussion about all the issues raised.

Now, if you want you can speculate about how _hard_ it is for incompetent and untrustworthy people to become competent and trustworthy, but Symantec decided they weren't interested in that path so we'll never know. Nobody killed them, they decided they weren't interested in reform.

This is just more irrelevant detail. Your essential rhetorical strategy here is to concede the argument I've made, but pretend otherwise by marshaling hundreds of words of details that don't address the point you're claiming to rebut.

Nobody cares who wrote to m.d.s.policy about the misissuance or the precise dynamics of Symantec getting out of the CA business --- though surely you'll want to claim otherwise to preserve the notion that you've rebutted me.

The simple facts:

* Symantec issued a full thirty percent of all TLS certificates in 2015.

* Google was made aware (through multiple channels) of misissuance.

* Google arranged with Mozilla to distrust Symantec.

* Symantec is now out of the CA business.

If you're trying to claim that Symantec is out of the CA business because it simply wanted to be, and so somehow gracefully exited by selling to Digicert, no, that is not what happened.

Otherwise, none of the detail you're offering has anything to do with this thread.

Your claim was that Google would "kill the CA you got it from" if somebody obtains a certificate for the name GOOGLE.COM and that they'd need to "turn the timer on your iPhone on so we can measure how long it takes" with "no notification".

I've explained this is ludicrously far from reality, spelling everything out so that people can see this imaginary lightning fast reaction doesn't exist. Would the GOOGLE.COM certificate itself get revoked? Yeah, probably. Might even happen the same day if you're lucky.

Would anything at all happen to the CA, ever? Probably not, though it would depend on what exactly the sequence of events was. If it did, as we saw with Symantec it would take months to decide what that should be, and it's very unlikely to be a complete distrust.

Your scenario is something that belongs in a thriller, I gave a nice example where a Vernor Vinge novel does almost exactly this, in a fictional future California, and I explained that er, no, that's not how it works. You are welcome to keep living in a dream world, but if you're going to threaten people with imaginary consequences for doing things you don't like, maybe say you'll launch a fireball at them with your mind or something so nobody thinks you're talking about the real world.

I'm pretty comfortable at this point with what this thread says about my argument and your rebuttal and am happy to leave it here.

What a strange article. I thought it was leading up to saying that control of DNSSEC is decentralized, or has a transparency process, or something.

But instead of 14 nerds, it's the US government (for .com).

I need to read up on DNSSEC.

> But instead of 14 nerds, it's the US government (for .com).

So it's much worse than it being 14 random nerds then!


There's a more technical takedown of DNSSEC linked at the bottom of that post.

Much better than (CA0 || CA1 || ...). All it takes is one CA out of tens of independent CAs to misbehave to make the whole of TLS insecure.

In DNSSEC/DANE, the world only has to watch one entity rather than tens of entities.

No. You have to trust all the CAs, and the governments that control the DNS.


> No. You have to trust all the CAs, and the governments that control the DNS.

Not in DNSSEC. .xxx need only trust dnsroot. yyy.xxx need only trust .xxx and dnsroot.

firefox/chrome/etc, with support from important orgs with high-value names (google.com/bankofamerica.com/etc), would then make sure that dnsroot/.com/etc do not abuse the trust. They have incentives and methods of punishment. There is no legal authority requiring clients to map the DNS . to the existing root keys. A client can map a.b.c to any key it wants.
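The delegation model being argued over here (a name need only trust its parent zones up to a single configured root anchor) can be sketched as a toy chain validator. HMAC stands in for real DNSSEC cryptography (DS/RRSIG records); all key material below is made up for illustration.

```python
import hmac, hashlib

# Toy model of DNSSEC-style delegation: each zone's key is vouched for
# by its parent, so a validator needs only ONE configured trust anchor
# (the root key) to validate any name. HMAC is a stand-in for the real
# DS/RRSIG machinery; keys here are illustrative byte strings.

def sign(key, data):
    return hmac.new(key, data, hashlib.sha256).digest()

root_key = b"root-trust-anchor"        # the single configured anchor
com_key = b"com-zone-key"
example_key = b"example.com-zone-key"

# Parent zones "sign" their children's keys (like DS records).
chain = [
    (b".com", com_key, sign(root_key, b".com" + com_key)),
    (b"example.com", example_key, sign(com_key, b"example.com" + example_key)),
]

def validate(chain, anchor):
    """Walk the delegation chain downward from the trust anchor."""
    parent_key = anchor
    for name, key, sig in chain:
        if not hmac.compare_digest(sig, sign(parent_key, name + key)):
            return False
        parent_key = key               # the child key becomes the next link
    return True

print(validate(chain, root_key))       # True: only the root key was trusted
```

This is the "fewer entities" claim in code form: the validator configuration holds one key, and everything else is derived per lookup.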

The risk of government overreach is the same for both TLS and DNSSEC. DNSSEC just trusts fewer entities. The only people who benefit from the current system are the CAs, who are getting $$$ for nothing.

> https://www.imperialviolet.org/2015/01/17/notdane.html

This is orthogonal. Weak keys are not a required or implied characteristic of DNSSEC.

If browsers start mapping cert trust to something besides the DNS roots... it’s not DNSSEC, it’s something else entirely, it’s “our current system, maybe with some slight tweaks”

I am not suggesting every client do its own mapping; that would not be a naming system at all. There has to be very large consensus for a naming system to be effective. I just pointed that out to show that the DNS is not under any government's control. It's under the control of an entity that can be punished.

However, who gets to be dnsroot is just a config value in DNSSEC. The value itself should not be used to criticize DNSSEC, because it's changeable.

How do you punish .com if they misbehave? Move every site off .com?

No. You just map .com to another key, with an agreement that the new .com owner pre-signs and maps the existing .com subdomains the right way. An unaware xxx.com does not need to do anything. As long as it's done publicly, with a bang and enough consensus, disruption should be minimal.
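The "remap .com to another key" idea amounts to a resolver shipping a table of pinned zone keys, where the most specific pinned ancestor wins over whatever the (possibly misbehaving) parent would delegate to. A minimal sketch, with entirely hypothetical key names:

```python
# Sketch of browser-imposed trust-anchor overrides: a table of pinned
# zone keys, longest-suffix match wins. Key values are illustrative
# placeholders, not real DNSSEC key material.

pinned = {
    ".": "root-key-v1",
    "com.": "new-trustable-com-key",   # override imposed after ".com misbehaves"
}

def trust_anchor_for(name):
    """Return the pinned key for the most specific pinned ancestor zone."""
    labels = name.rstrip(".").split(".")
    for i in range(len(labels)):       # try the longest suffix first
        zone = ".".join(labels[i:]) + "."
        if zone in pinned:
            return pinned[zone]
    return pinned["."]

print(trust_anchor_for("example.com."))  # new-trustable-com-key
print(trust_anchor_for("example.org."))  # root-key-v1
```

Whether the ecosystem coordination this requires is plausible is exactly what the rest of the thread disputes; the mechanics themselves are just a lookup table.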

Again, this is unavoidable in any system that needs trust. That's why I like PoW DNS.

Who is "you"? The people we're afraid of manipulating .COM control the DNS. Google can't "map .com to another key". Their option would be to leave .COM; that is the gun DNSSEC would give to the USG to hold against Google's head.

"You" is firefox/chrome/etc. Yes, you can. The ownership of .com is not as exclusive/protected as .xxx or xxx.com, so firefox/chrome/etc can map it to anyone they like. Considering how many high-value .com subnames there are, .com could be transferred to a neutral party or even to dnsroot. The USG does not own the ".com" string. No one does. Just like ".".

Your claim here is that a browser vendor could somehow fork the DNS and use its own .COM? Explain how that could possibly work.

Anyone can fork the DNS. It's just a (name, key) map. As long as it's done with enough consensus, it can be done. Mismanagement of .com is serious enough to demand that kind of change.

Let's say .com gets mismanaged. The community is furious. firefox/chrome/etc demand that . remap .com to a new, more trustable entity. If . does not, firefox/chrome/etc then remap . itself to a new, more trustable entity, because .com must be as trustable as ., and .com is that important. The new . gives back ownership of all TLDs to their previous owners, except for .com: .com goes to the more trustable entity as intended. The new .com then does a similar import of all the good xxx.com names.

In this whole incident, no one loses ownership of their names except for .com and possibly . itself.

Now no government can touch *.com. Though it's different for ccTLDs: those are owned by their respective governments. Same goes for gTLDs. But no one gets to mess with ., .com, .org, or .net.

It sounds like what you’re proposing is for browser vendors to, in unison, overthrow IANA and the related organizations and stage a coup where they start running their own DNS root authority. And then claiming that this would happen without impact to end users / owners-of-individual-domains.

Browser vendors (indeed, all DNS users) have the option. They can do it if IANA fails at the job of being dnsroot. Disruption is inversely proportional to consensus. If everyone does it, there is no disruption. Some disruption is unavoidable. It's a fair price to pay for a stable and solid global naming system.

Ultimately it's about deciding who gets to own the "x.y.z" string brand globally/contextlessly. The world obviously needs a single naming system. Either that, or expect to have multiple owners of "google.com".

My suggestions are required; otherwise, why would someone build a global brand if ownership is not safe or guaranteed enough? The future is way more chaotic. Without crypto, a global naming system is not going to survive.

OK. I don't think we have to debate this any more. We can just leave it here: you think DNSSEC is a workable solution to our problems as long as the browsers can, if they ever need to, create their own alternate DNS for the web.

The linked article is about why you can’t simply trust the DNS roots, even if you were naive enough to want to.

If you can’t trust them then the whole thing crumbles anyway.

All you need to obtain valid TLS certs for any domain is to make a CA think you control the domain. So the CA’s trust in the DNS root is already functioning as the basis of X.509.

You manifestly can't trust them today, couldn't yesterday, or for the last 30 years, despite the rise of e-commerce and the gradual shift of all applications to the web with its domain-validated WebPKI. Google doesn't DNSSEC sign. Facebook doesn't DNSSEC sign. No major bank I've found DNSSEC signs. AT&T doesn't DNSSEC sign, nor does Verizon. Some part of Comcast does, or did, and the net effect was that DNSSEC errors broke HBO NOW on launch day for Comcast users (and only Comcast users).

Tell me more about how the whole thing crumbles away? Because I'm pretty sure I'm typing into a TEXTAREA on the real HN, and not some facsimile a DNS hacker created to fool me. The Internet seems to be working fine without the government-run PKI you're saying we have to have.

That’s not pure DANE being discussed but a hybrid in which CAs are still playing a role.

In pure DANE you need only trust the DNS root.

The whole premise of AGL's article is the fact that you can't have pure DANE. Literally, "a hybrid of DANE and CAs" is just a restatement of the sentence "you have to trust all the CAs and the DNS". You haven't said anything in this comment.

Except if that one entity misbehaves, even if you catch them, you can't do anything about it, because they own the TLD.

Yeah but because they own the TLD they can get X.509 certs issued for any domain under it, because controlling the domain is the only check CAs really perform before issuing a cert for a domain.

The DNS is already acting as the root of trust for X.509. X.509 does not make the scenario of a rogue TLD operator any different.

You have to trust someone under DNS. The only trustless naming system I can think of is one over a PoW chain (e.g. example.btc).

Still, you have 3 choices in DNSSEC/DANE:

  - get a .xxx, trust dnsroot.
  - get a .xxx (when .xxx is as easy to register as xxx.com), trust dnsroot.
  - pick one TLD out of 1000s, get xxx.ttt, and trust .ttt and dnsroot.

I’ve got those choices if I use DNSSEC for my trust, correct. Or I use the existing system, where if a CA misbehaves, we boot them out of the browser trust stores and site operators don’t have to change anything.

The CAs, for the most part, only require you prove you control a domain to issue a cert for it.

So you’re already trusting the DNS, whether protected with DNSSEC or not, in the existing system.

And yet when attackers want to misissue certs for small sites (for big sites, misissuance is detected automatically and gets CAs killed), they don't exploit vulnerabilities that DNSSEC defends against. Why is that? And given that's the case, why pursue DNSSEC?

And how is any of this, any of it all, relevant in a world where registrars can simply speak RDAP to CAs? If you believe the problem is that the Internet will (to use your turn of phrase upthread) crumble away unless we secure the DNS for domain validation, why should we forklift out the entire DNS to do so, when we can just get a small group of organizations to deploy RDAP, something they're planning on deploying anyways, and then add that to the 10 Blessed Methods?

No part of DNSSEC makes any sense.

Because the DNS as it is allows for the potential to do something similar (by getting a CA to accept a fraudulent DNS response, leading it to issue a cert) without someone seizing control of a domain otherwise.

It makes no sense not to try to secure the DNS.

Securing the DNS (a) doesn't fix the underlying problem for TLS (as you can see by the last 2 waves of CA-misissuance takeover attacks, neither of which relied on wire-level DNS hijacking) and (b) adds nothing to any secure protocol, which already has to do end-to-end verification today. Despite that, DNSSEC is already the most expensive proposal we have on the table today, requiring every major site and every major piece of software to upgrade or reconfigure.

Deploying RDAP and adding it to the CA/B Forum Blessed Methods gives CAs themselves an end-to-end ability to validate domains, decisively solving the DV problem, and doesn't require any of that expense.
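For readers unfamiliar with RDAP: registries answer domain queries with structured JSON (RFC 9083). A sketch of what a CA could read out of such a response; the sample body below is hand-written for illustration, not a live registry answer, and a real check would fetch it over HTTPS from the registry's RDAP base URL.

```python
import json

# Hand-written sample shaped like an RFC 9083 RDAP domain response.
# A CA validating a domain could compare this registry-side view
# (registered nameservers, lock status) against what it sees in DNS.
sample = json.loads("""
{
  "objectClassName": "domain",
  "ldhName": "EXAMPLE.COM",
  "status": ["client transfer prohibited", "server update prohibited"],
  "nameservers": [
    {"objectClassName": "nameserver", "ldhName": "A.IANA-SERVERS.NET"},
    {"objectClassName": "nameserver", "ldhName": "B.IANA-SERVERS.NET"}
  ]
}
""")

def registry_view(rdap):
    """Extract the fields a CA-side check would care about."""
    return {
        "domain": rdap["ldhName"].lower(),
        "locked": any("prohibited" in s for s in rdap.get("status", [])),
        "nameservers": sorted(ns["ldhName"].lower() for ns in rdap["nameservers"]),
    }

print(registry_view(sample))
```

The appeal of the argument above is that this path needs only the registries and CAs to move, rather than every zone operator on the Internet.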

Explain to me again why we should choose the former over the latter?

Except there has to be a crypto proof of why Google owns google.com and not me. That means we need to secure the DNS. Then why do we need CAs at all? What's the point?

A group of certifying signers that aren’t directly controlled by the United States Government is the obvious reason.

Current: Google needs to watch all CAs.

DNSSEC: Google needs to watch .com and dnsroot.

Which one is better?


(I am ratelimited so posting here rather than reply to the child post by tptacek https://news.ycombinator.com/item?id=18889809)

Of course they can. There is literally no legal or other difference between Verisign and .com. Chrome can do whatever it wants, because it's Google's browser, not .com's.

In case .xxx becomes dishonest, you can just move to your own gTLD or a more trustable TLD. In the current system, there is no concept of ditching a CA. If a CA decides to mismap a name and you are too small, you are out of luck.

> it’s actually 1, or 1 AND 2

No, you can have DNSSEC without CAs. I have explained that already; it doesn't change much of TLS. Basically, example.com's DNSSEC key becomes the CA for example.com. example.com then creates a TLS cert in the usual way. No pain.

“You can just move to your own <other TLD>” isn’t even remotely plausible. Any site with worthwhile traffic isn’t going to just forklift to a new TLD and convince all their users to switch over. Imagine if .com was considered untrustworthy and suddenly every user in the US had to use google.othertld, facebook.othertld, etc.

Yeah but if .com is untrustworthy then the game is up.

The operator of .com can use their control over it to get a valid TLS cert issued by any number of CAs.

So the situation is no different currently, trust in the DNS is essential.

Again if that's true then the game is up, because the USG obviously controls .COM; they theatrically demonstrate that every time they take down a piracy site. But, spoiler! The game turns out not to be up.

The former, for several reasons, among them the fact that those actually aren’t the options (it’s actually 1, or 1 AND 2), and the fact that Google can’t end .com the way they did Verisign.

But feel free to ask the relevant team at Google, who will give you the same answer.

There are about 1500 entities in the X.509 game, not 10s.

DNSSEC has the unique advantage of permitting offline signing. If you go this route, even somebody controlling your authoritative servers wouldn't be able to modify your records.

It doesn't matter if you use offline signing for your zone if someone owns up the account you log into to control your domain with your registrar, or owns up the registrar. So no, even with offline signing, DNSSEC did nothing here.

But it's worth keeping in mind that most organizations can't use offline signing, because the duct-tape-and-baling-wire solutions DNSSEC applies to people dumping zones with NSEC records all require online signers.
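The enumeration problem being traded away here is easy to demonstrate: a plain NSEC record names the *next* owner name in the zone, so a chain of queries walks every name. A toy walk over static data (a real walk would issue live DNS queries and read the NSEC "next" field from each denial-of-existence answer):

```python
# Why offline-signable plain NSEC leaks the zone: each NSEC record
# points at the next owner name, and the chain wraps back to the apex,
# so following it enumerates every name. Toy data; real zone walking
# drives live queries for names between known owners.

nsec = {  # owner -> next owner, as NSEC exposes it
    "example.com.": "api.example.com.",
    "api.example.com.": "mail.example.com.",
    "mail.example.com.": "www.example.com.",
    "www.example.com.": "example.com.",  # wraps to the zone apex
}

def walk(apex):
    """Follow the NSEC chain from the apex until it wraps around."""
    names, cur = [], apex
    while True:
        names.append(cur)
        cur = nsec[cur]
        if cur == apex:
            return names

print(walk("example.com."))
# ['example.com.', 'api.example.com.', 'mail.example.com.', 'www.example.com.']
```

NSEC3 hashes the names and "white lies" synthesize denials per query, but the latter is exactly the online-signing requirement the comment above objects to.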

Depending on the registrar, updating glue records can be a separate process that requires additional authentication. Not long ago my registrar required me to contact them directly to update glue records.

Offline signing is a very useful feature precisely because it makes it easier to differentiate security domains. For example, I could use offline signing for foo.com (along with a registrar lock) but delegate the subdomain dyn.foo.com to a separate SOA that uses real-time signing (or none at all) for use by internal services.

The problem with the modern web PKI is that, as a practical matter, everybody is forced to put their private keys not only online, but unprotected (because HSM and PKCS#11 support isn't that great, yet).[1] Key rotation and certificate expiration doesn't really solve the problem; in fact, rotation exacerbates the problem by 1) forcing you to keep the CA keys online, and 2) incentivizing increasingly loose authorization policies.

Offline signing makes it easier to manage risk in a more robust manner. It's a tool, not a panacea; a tool conspicuously missing from TLS infrastructure. Some newer projects like Wireguard have effectively turned asymmetric key authentication systems into something that walks and quacks exactly like shared passwords. They do it because key management is a hard problem. But I'm not ready to throw in the towel, and the option (both officially and as a practical choice) of offline key signing in DNSSEC is under appreciated. From a security perspective, allowing people to enumerate my subdomains is a small price to pay for permitting me to keep my private keys offline. I don't expect everybody to make that calculation, but it bothers me that people fail to see the value at all.

[1] People faithfully recite the mantra "encrypt at rest" as if that means something. Data at rest is useless. If your data is worth anything then you're going to actually be, you know, using it, and if it's not protected in use then it's all just security theater. This is most clear with the private keys (e.g. stored "encrypted at rest" in KMS) used by cloud services for acquiring access tokens. It's 2019 and industry is still basically using shared passwords--tons of them, a complex web of passwords dutifully pushed around the network by layers of complex software. As if any of it matters to someone who has figured out how to penetrate your network; as if 5 minute or even 5 second password rotation matters to the guy who already figured out how to automate penetration onto your systems.

With the root account on every registrar I have access to, I can trivially defeat DNSSEC for my zone. Tell me which registrar you're talking about where you believe their security controls would make DNSSEC resilient.

(You addressed the first half of my comment and not the second).

Obviously the signing of example.com by .com must itself be secure. Otherwise no crypto delegation is secure, including TLS signing.

> where DANE replaces X.509 CAs

Much easier migration, actually. Just patch firefox/etc to accept example.com's DNSSEC key as a root CA. Then example.com can create its own TLS cert. A very simple and minor patch to the TLS codebase.
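What this comment describes is roughly DANE's DANE-EE mode (TLSA certificate usage 3, matching type 1, per RFC 6698): the zone publishes a SHA-256 digest of the server certificate, and the client compares it against what the TLS handshake presented. A minimal sketch with stand-in certificate bytes (not real DER):

```python
import hashlib

# Minimal sketch of a DANE-EE (TLSA usage 3, matching type 1) check.
# The zone would publish the digest at _443._tcp.example.com; the
# client hashes the certificate seen in the handshake and compares.
# The "certificate" below is placeholder bytes, not a real DER cert.

def tlsa_matches(tlsa_sha256_hex, presented_cert_der):
    digest = hashlib.sha256(presented_cert_der).hexdigest()
    return digest == tlsa_sha256_hex.lower()

cert = b"stand-in-der-certificate-bytes"
published = hashlib.sha256(cert).hexdigest()  # what the TLSA record would hold

print(tlsa_matches(published, cert))                     # True
print(tlsa_matches(published, b"attacker-substituted"))  # False
```

Note this only holds if the TLSA lookup itself is DNSSEC-validated end to end; an unsigned TLSA answer would just move the spoofing problem, which is the crux of the whole thread.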
