Hacker News | packetized's comments

I dare say that it’s annoying you as a result of cognitive dissonance about your employer paying six figures to migrate to it.

If you’re being paid well, obviously you’ll be annoyed by concepts contrary to your work.

e: edits are hated, but the downvotes prove out: Kubernetes is simply OpenStack for hipsters.


> It is difficult to get a man to understand something when his salary depends upon his not understanding it.

- Upton Sinclair.


The finding in TLS-01-003 is surprising to me, mostly because it presupposes a lack of sophistication among users of this library who are also using X.509 NameConstraints. From RFC5280:

  For example, a name constraint for "class C" subnet 192.0.2.0 is represented as the octets C0 00 02 00 FF FF FF 00, representing the CIDR notation 192.0.2.0/24 (mask 255.255.255.0).
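As a quick illustration of that encoding (a Python sketch using only the standard library, not code from the library under audit), the octets split into an address half and a mask half, which collapse to CIDR notation when the mask is contiguous:

```python
import ipaddress

def decode_ip_name_constraint(octets: bytes) -> str:
    # RFC 5280 encodes an iPAddress name constraint as address || mask
    # (4+4 octets for IPv4, 16+16 for IPv6).
    half = len(octets) // 2
    addr = str(ipaddress.ip_address(octets[:half]))
    mask = str(ipaddress.ip_address(octets[half:]))
    # For a contiguous mask, ip_network normalizes this to CIDR notation.
    return str(ipaddress.ip_network(f"{addr}/{mask}"))

print(decode_ip_name_constraint(bytes.fromhex("C0000200FFFFFF00")))  # 192.0.2.0/24
```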
As they mention in the findings:

  Typically, subnet masks should be contiguous and the presence of a non-contiguous mask might indicate a typo (such as 225.255.255.0 vs. 255.255.255.0), or potentially an attempt to bypass an access control scheme. Therefore, it is recommended to treat certificates containing non-contiguous subnet masks in their name constraints as invalid.
This seems to run counter to the intent in the RFC. By allowing for a four-octet subnet mask, instead of simply an int to represent a contiguous CIDR mask, the RFC authors may have intended that more complex IP-based NameConstraints could be constructed. This would certainly make a huge difference for something like an intermediate (CA:TRUE), where it becomes much more economical to specify a sparse mask for a highly templated network. Think certs for network equipment or VoIP phones with regular, repeatable IP addressing across many locations/networks. E.g., a VoIP provisioning system that has an intermediate issuing CA with the following NameConstraints: IP:172.16.0.0/255.255.1.239.
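For reference, the contiguity property the finding wants enforced is cheap to test: a mask is contiguous exactly when its bitwise complement has the form 2**k - 1. A hypothetical helper (not code from rustls/webpki) might look like:

```python
def is_contiguous_mask(mask_octets: bytes) -> bool:
    """True iff the 4-octet IPv4 mask is a run of ones followed by zeros."""
    m = int.from_bytes(mask_octets, "big")
    inverted = ~m & 0xFFFFFFFF
    # The complement of a contiguous mask is 2**k - 1, and x & (x + 1) == 0
    # holds exactly for integers of that form.
    return inverted & (inverted + 1) == 0

assert is_contiguous_mask(bytes([255, 255, 255, 0]))      # /24
assert not is_contiguous_mask(bytes([225, 255, 255, 0]))  # the typo from the finding
assert not is_contiguous_mask(bytes([255, 255, 1, 239]))  # the sparse VoIP-style mask
```

A flag gating this check would let strict WebPKI clients reject sparse masks while private-PKI deployments like the one described above keep working.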

If any change comes from this specific finding, I would hope that it's simply a flag to allow or disallow the use of discontiguous masks. I do understand that this is specific to WebPKI; having said that, if a client is implemented using rustls (with these recommendations enabled) and it happens across a perfectly valid certificate issued by an intermediate with a discontiguous mask in the NameConstraints, presumably it would fail or otherwise break. And yes, I have previously configured precisely this in an intermediate CA.


Leaving aside the question of whether other software will work reliably with non-contiguous subnet masks (which led to this recommendation), in general, most software does not deal well with NameConstraints. Some libraries ignore it, some libraries fail hard if a constraint exists, and in general, I'd expect a certificate chain involving NameConstraints to be poorly supported at best and insecure at worst.

I wish that NameConstraints were better supported, to make it easier to support intermediate CAs; for instance, prove you own example.com and you could then have a CA restricted to *.example.com. But right now, that just doesn't seem feasible.


I spent significant effort writing the name constraint implementation in mozilla::pkix (used by Firefox) and by webpki (used by Rustls) to fix this situation. The result is summarized by Netflix:

> The result was that every browser (except for Firefox, which showed a 100% pass rate) and every HTTPS client (such as Java, Node.JS, and Python) allowed some sort of Name Constraint bypass.

- https://netflixtechblog.com/bettertls-c9915cd255c0

Since then, Google Chrome has implemented and deployed a new certificate processing library on some (most?) platforms it supports, and I bet they have similar or better name constraint support. I believe Apple has also improved their implementation.

> I wish that NameConstraints were better supported, to make it easier to support intermediate CAs; for instance, prove you own example.com and you could then have a CA restricted to *.example.com. But right now, that just doesn't seem feasible.

Since the aforementioned improvements have shipped in production browsers, it is much more practical, from a technical standpoint, to do that. The real problem now is browsers' CA policies. As I understand it, they do not want you to be able to get your own name-constrained intermediate CA certificate. The CA that issues you the intermediate CA certificate would be on the hook, with the consequence of being removed from the root CA programs, if you issued a malformed certificate. And there are other issues with the policy. I hope there are improvements to the browsers' CA policies to make this practical, but I wouldn't hold my breath.


Thank you for working on this!

> Since the aforementioned improvements have shipped in production browsers, it is much more practical, from a technical standpoint, to do that.

What about non-browser HTTPS/TLS libraries? Even if browsers support it perfectly, this doesn't seem like something CAs should start deploying if widely used libraries have problems with it. And based on the test results from BetterTLS, it looks like widely used libraries still have problems with it.

Also, the BetterTLS article gives an example of a NameConstraints for "192.168.0.0/16"; I don't think that's something a public CA could reasonably issue, for a variety of reasons (conflicts with routers, IoT devices, and many other things). We need some reasonable solution for "local network" devices, and in particular for devices where a user may be able to get hold of the private key, but in the meantime I don't think publicly valid IP-address certificates make sense.

> The CA that issues you the intermediate CA certificate would be on the hook, with the consequence of being removed from the root CA programs, if you issued a malformed certificate.

Given that today, such an intermediate CA could be used for arbitrary MITM, that's entirely understandable. If there are additional constraints that an intermediate certificate needs to have, we need to have enforcement mechanisms to support such policies, so that an intermediate CA can't create certificates that may lead to MITM or similar attacks.

- Root CA policies for proving access to *.yoursite.example such that you can get an intermediate CA.

- Root CA policies for the valid duration of intermediate CA certificates (no longer than X, no longer than the proven ownership of your domain...). This already constrains the valid duration of certificates underneath the intermediate CA.

- Policies for requiring Certificate Transparency (e.g. any such intermediate certificate still has to use CT logs, such that individual certificates must appear in the CT logs to be considered valid). This could be done by policy in browsers, such that any CA opting into issuing intermediate CA certificates must use CT for everything; that's where we're going for all CAs anyway.

- Do we have mechanisms to ensure that an intermediate CA can't issue another intermediate CA certificate?

- Policies for wildcards underneath intermediate CAs. If you have an intermediate CA for *.yoursite.example, under what circumstances can you issue a wildcard certificate underneath it, and for what domains?

- Under what circumstances should you be able to get an intermediate certificate for something like *.yoursite.internal? Should you, or should all names chain up to a domain you prove ownership of (e.g. *.internal.yoursite.example)? There's potential value in internal-only domain names, and this would reduce the scope of attacks (preventing the intermediate CA from issuing certificates for your public site), but ownership issues might become tricky.

- Related: Today, if you want to MITM example.com, you need to get a root CA to sign your certificate. If example.com obtains an intermediate CA, it's potentially easier for an adversary to somehow obtain a certificate for example.com or www.example.com signed by that intermediary, depending on example.com's infrastructure. (It's also potentially easier for an adversarial jurisdiction to force example.com to surreptitiously mint a certificate; CT would help there, but this still seems like substantially more attack surface.) Is there some way we can make that less likely and more difficult as an exploit vector?

- Policies for the duration of certificates underneath intermediate CAs.

- Policies for handling revocation of certificates underneath intermediate CAs.

- Policies for the use of dedicated HSMs for intermediate CAs. Should we make that a policy requirement?

- Eventually, when NameConstraints becomes not only usable but best-practice for intranets, browsers and libraries could start defaulting to only loading certificates from the system certificate store iff they have NameConstraints that meet some appropriate policy.
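The dNSName matching rule all of these policies would ultimately lean on is simple: per RFC 5280, a name satisfies a permitted subtree when it equals the constraint or extends it with labels on the left. A rough sketch (not any library's actual implementation):

```python
def dns_name_within_constraint(name: str, constraint: str) -> bool:
    # Per RFC 5280 section 4.2.1.10, a dNSName satisfies a constraint when
    # adding zero or more labels to the left of the constraint yields the name.
    name, constraint = name.lower(), constraint.lower()
    return name == constraint or name.endswith("." + constraint)

assert dns_name_within_constraint("www.example.com", "example.com")
assert not dns_name_within_constraint("badexample.com", "example.com")
```

The Netflix BetterTLS results showed how many implementations got even this simple rule (and its excluded-subtree counterpart) wrong, which is part of why the policy questions above remain unresolved.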


I'm not completely certain, but my guess is that a lot of software and middleware doesn't work well with non-contiguous subnet masks.

It's one of those (many) things where, in theory, the RFC intends to support this, but in practice it's close to unusable and better to block for most application use cases.

For example, many valid email addresses are in practice unusable, and it's recommended not to support them because today they are basically only used to intentionally cause problems (I mean email addresses using quoting, like `"a b"@example.com`; you should always support internationalized mail addresses, except maybe as a mail provider, where it depends on your target customers).
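To illustrate the gap between the spec and practice: the kind of naive pattern many applications actually use rejects RFC-valid quoted local parts outright (a hypothetical validator, not any particular library's):

```python
import re

# A naive pattern of the sort many applications use; it rejects RFC 5321
# quoted local parts (and non-ASCII internationalized addresses) outright.
NAIVE_EMAIL = re.compile(r"^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$")

assert NAIVE_EMAIL.match("alice@example.com")
assert NAIVE_EMAIL.match('"a b"@example.com') is None  # valid per RFC, rejected here
```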


I commented on this at https://github.com/briansmith/webpki/issues/130. This is nothing to be concerned about, for quite a few reasons. I don't expect any compatibility issues with being stricter since Google Chrome's new X.509 processing does enforce the stricter syntax.


This is absolutely a false dichotomy. A home-cooked meal is not one conventionally thought of as being from items hyperlocally sourced. At least, not in the last 70-100 years. We’ve had Sears Roebuck and the like for quite some time.


> A home-cooked meal is not one conventionally thought of as being from items hyperlocally sourced.

Exactly. To allow the category of "home-cooked meal" to exist at all, beyond professionals and a few dedicated hobbyists, we have to loosen the criteria. Similarly, "home-cooked apps" don't exist in the strict sense, but do exist with looser criteria, where we allow people to work with standardized, mass-market tools (e.g. drag-and-drop website and form builders). Expecting lots of people to start from raw source code is like expecting home barbecue to start from the pig.


That you would have to communicate this to a customer via HN is pretty damning.


Au contraire, I sincerely appreciate the personal touch. The issue probably has nothing to do with him personally, but he's offered a conduit for them to send additional details, and committed to looking into it. That's more than OP seems to have got through official channels.

Every company of some size has “WTF” edge cases, IMO it matters how we handle them, even if sometimes handling it takes an unusual or unofficial form.


The fact that the OP had to resort to unofficial channels to get satisfaction is the problem, full stop.


Actually, it's better than the norm. I've seen many stories like this where nobody reached out.


I would say that is much more damning of their state of customer interest.


I don't disagree.

It's just that I don't recall seeing anyone from Google or PayPal reaching out like that.


Vault is a phenomenal tool, full stop.


On mobile, this site has a “toggle high contrast” widget on the left side that seems to directly contravene the points made in the article.


Not just mobile — it's on desktop too. I also saw this toggle and thought it a little ironic. OTOH, this is an opt-in feature that a reader can activate, as opposed to a presentation that must be absorbed by everyone in the audience as-is.


Agreed, I just found the juxtaposition of the two confusing at first.


This position seems at odds with a recently opened WHATWG issue.

https://github.com/whatwg/html/issues/4986


The first two nota benes explicitly describe this document being outdated and not what most people expect when it comes to “memory controller”. I am not certain that citing this is a great example.


What about this? Seems to do what they want.

https://serverfault.com/a/949045



I’m curious to know what benefits accrue from sending the user’s private key over the wire (even encrypted). It seems a strange concept, at odds with ephemeral key usage.


It helps to keep the whole crypto system simple and understandable.


Arguably, private keys should not ever leave the device on which they were generated.

