Using Let's Encrypt for Internal Servers (2018) (heckel.xyz)
253 points by GordonS on March 10, 2019 | 120 comments



Author here. Just wanted to comment on the Certificate Transparency issues that some of you have raised:

My company (Datto [1]; we're hiring, see [2]) sells backup appliances which our customers place in their company network (much like you would place a router in your own network). Since we do not control anything other than our appliance inside our customers' networks, we went this route to provide a secure web interface to our appliance. And since the 65k servers/appliances (now more like 80k) are located in tens of thousands of different networks, leaking the internal IP isn't bad at all -- especially since there is no way to correlate them with the customer.

For normal "internal servers" within a company, I'd probably recommend using a wildcard cert and an internal DNS server.

Also: Yeyy, I'm on HN!

[1] https://www.datto.com/

[2] https://news.ycombinator.com/item?id=19071727 or https://www.datto.com/careers/


>And since the 65k servers/appliances (now more like 80k) are located in tens of thousands of different networks, leaking the internal IP isn't bad at all -- especially since there is no way to correlate them with the customer.

Maybe not directly (and I say "maybe" because I'm not so sure of that premise), but you're leaking a bunch of data this way that attackers with some level of information or access can use to learn more about the network, the services running on it, etc. The idea that this "isn't bad at all" is pretty much patently false. "Not too bad in most situations" is probably more accurate.

>For normal "internal servers" within a company, I'd probably recommend using a wildcard cert and an internal DNS server.

Why? For internal servers within the company you can run your own PKI and CA and use an internal cert, giving you much more flexibility and control than just using a wildcard cert and internal DNS server would, without much more effort.

>We went this route to provide a secure web interface to our appliance.

The far better option is to give the customer a way to provide their own cert. Or, since you are providing the appliance, have them trust your CA on the machines that administer the appliance.

I'll be frank: Your choices here and general suggestions around DNS would worry me if I was in the market for your appliance. I would not find it at all acceptable that internal details of my network are leaking out, no matter how convinced you are that it's not bad at all.


> Why? For internal servers within the company you can run your own PKI and CA and use an internal cert, giving you much more flexibility and control than just using a wildcard cert and internal DNS server would, without much more effort.

Running an internal CA these days terrifies me; it's a hell of a responsibility to keep that secure. People check medical records, financial data, and god knows what else on their work devices, and if your CA is compromised all of that can be eavesdropped on, either by an attacker or a disgruntled insider. No thanks.


> Running an internal CA those days terrifies me [...]

Better not have Active Directory then, because it has a built-in CA used by every Windows client bound to the domain.

Technically, X.509 allows for constraining things [1], but from a practical perspective [2] it's not really implemented.

[1] https://tools.ietf.org/html/rfc5280#section-4.2.1.10

[2] https://security.stackexchange.com/questions/31376/


Source on that claim? I know you can install ADCS but it is not installed by default.


Not a Windows guy, but I assumed that it was by default. Could be wrong.


>and if your CA is compromised that can all be eavesdropped on. Either by an attacker or disgruntled insider. No thanks.

Get an HSM - they're pretty cheap these days - and store the CA's private key in the HSM.

Or an offline CA, if you don't want to deal with the HSM.

Or, even better, do both.


So to better explain my perspective: I'm coming at this from the angle of a consultant to small and mid-sized organizations. Nearly every time somebody talks about setting up a CA, it's so they can man-in-the-middle all their network traffic to sniff it. Yet the organization struggles to do much more basic things correctly. For better or worse, though, the bias in the comments here is toward competent people working for competent orgs, not smaller shops that exist in a perpetual state of garbage fire.

So absolutely, yes, you can run your own CA reasonably today. But in practice, at your typical business (i.e., outside very large and competent enterprise shops), no. The typical business I bump into struggles with internal DNS and DHCP; the skills just aren't there.


Yep!

If you're an absolute tightarse, it's even possible to use a smartcard/YubiKey as a poor man's HSM.


I think the argument here is given for servers that can only be accessed via an internal network; i.e., those eavesdroppers would have to be on the internal network.


What lostapathy means is, because X.509 Name Constraints don't work [1], there's no way for Example Inc's administrators to limit their corporate CA to only issue certs for subdomains of example.com

So if Example Inc puts their cert on every employee's machine, then a hacker or rogue insider could issue a cert for paypal.com and MITM every employee.

Starting an internal CA introduces the risk of this bad thing happening.

[1] https://security.stackexchange.com/a/31382


It sounds like using Let’s Encrypt makes sense for your use case since people are accessing these appliances via a web browser, but for “normal internal servers” the right answer is almost definitely to use your own internal PKI, not to use wildcard certs.

See:

https://github.com/cloudflare/cfssl

https://github.com/hashicorp/vault

https://github.com/smallstep/certificates
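
As a taste of the last of those, the smallstep flow looks roughly like this (a sketch; hostnames are illustrative, and see their docs for the current commands):

  # bootstrap a CA (interactive; writes config, root and intermediate)
  step ca init

  # run the CA daemon
  step-ca $(step path)/config/ca.json

  # from a client: get a cert for an internal host, then keep it renewed
  step ca certificate git.internal.example.com git.crt git.key
  step ca renew git.crt git.key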


> leaking the internal IP isn't bad at all -- especially since there is no way to correlate them with the customer.

You seem to be making many assumptions here, which some may not agree with.


Agreed. While the secrecy of IPs should never be relied on as a security mechanism, keeping them secret can force the attacker to use a scan, which can trigger IDS and tip their hand. If the attack is going through a proxy host (a vulnerable app server proxying an attack at the database for example), an unknown DB IP address could make the attack infeasible.


I was just about to bring this up. Do none of the customers use public addresses internally? My employer does. And, as far as I know, everyone who runs IPv6 does.


You're going to want to let people use their own certs if you want to sell to financial institutions and the like.

And wildcard certificates imply sharing a private key around, which is terrible advice.

If you're a business, spend the money on certificates. If the business pushes back, ask them: "How much is your data worth? How much will reputational harm cost us?"

Lots of people suggest rolling your own CA but I bet most of the internal CAs out there are terribly secured and not audited.


You should be aware of Certificate Transparency if you go down this route.

While I agree that it probably shouldn't matter, it could help an attacker gather knowledge about your internal network. Perhaps it's of interest that you use a specific monitoring tool or mail server internally; that could make phishing emails easier to craft.


Even without CT, you're still setting public DNS records -- so better yet, don't rely on "internal" hostnames staying private if you want them publicly trusted. Full stop.


It's very common for internal hostnames to be resolvable only internally (such as with split-zone or dedicated internal DNS servers).

Edit: Or do you mean as part of the challenge-response mechanism? Yeah, that would have to be public.


Only needs to be public for the challenge though. We use split DNS with an external server that has an empty zone. Internal clients are limited to setting their respective _acme-challenge.subdomain TXT records and deleting them afterwards.


Out of curiosity, how is this implemented?


Not the person you asked, but:

Besides possibly being a function of a provider's API, DNS server security policies can be used to limit updates to certain domains and/or record types based on a preshared key. Since the DNS-01 challenge only needs to create a TXT record with a predetermined name, you can configure a zone like so (using BIND syntax as an example):

  key "example-key" {
      algorithm hmac-sha512;
      secret <KEY_HERE>;
  };

  zone {
      ...
      update-policy {
          grant "example-key" name _acme-challenge.example.com TXT;
      };
  };
another option is to have a CNAME from _acme-challenge.example.com to a dedicated challenge zone like challenges.example.com that has similar restrictions. This coupled with something like acme.sh makes it easy and relatively secure for machines to generate their own certificates.
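
A sketch of that CNAME variant with acme.sh (host names and key path are illustrative; the dns_nsupdate hook reads NSUPDATE_SERVER and NSUPDATE_KEY from the environment):

  # one-time delegation in the main zone:
  #   _acme-challenge.www.example.com. CNAME _acme-challenge.www.challenges.example.com.
  export NSUPDATE_SERVER="ns1.example.com"
  export NSUPDATE_KEY="/etc/acme/challenges.key"
  acme.sh --issue -d www.example.com --dns dns_nsupdate \
      --challenge-alias www.challenges.example.com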


It is common, but as DNS over HTTPS becomes standard, it won't be able to stay that way.


Why would that preclude the ability to host certain zones only on internal DNS servers?


If the default option allows a fallback then yes, only split DNS (where you get different results internally and externally) will be affected.

However, how long before the default option is HTTPS-only?


I still don't understand. How does DNS over HTTPS differ from DNS over port 53 in a way that would impact the ability to host internal zones (or even split DNS)?


It appears that some clients just universally query third-party servers, thereby giving out a list of every host name that they're looking for to a third party.


Not enterprise devices, though. Some enterprise networks even block third-party DNS servers (yes, even over HTTPS—all traffic must go through the corporate proxy).


Considering how many office networks use 192.168.x.x or 10.x.x.x, I'm not sure that in practice you're REALLY leaking all that much... I mean it's a little bit of a hint, but if you can control a bridged device with internal network access, you'll probably get the same info (roughly) from a port scan. The "breach of info" is overblown imho.


In practice internal only DNS names will always leak to external resolvers in one way or another. Pretending otherwise is very naive.


I'd rather use an internal PKI for internal-only stuff at work, but for my own homelab I do use a wildcard cert that is something like *.homelab.mydomain.com.

Sure, someone can see you have internal servers under *.homelab.mydomain.com, but they aren't going to know what specific hostnames exist there.


This is not exclusive to Let's Encrypt; you'll get this with any official CA that participates in CT.

You could use a private CA or official wildcard certs.


Private CAs are the way to go. There are free tools for this sort of solution. It requires some careful preparation but is certainly possible. One should use a master (root) CA that is used only when the signing CAs need to be renewed, etc.


Yep. Until Google abandons CT or the industry grows a pair (and a brain) we're stuck with not being able to use stock web PKI for confidential internal hosts.


I disagree. CT has exposed a ton of issues in PKI and held CAs accountable.

If your hostnames are sensitive then Web PKI is not the correct solution, as you have identified. At that point you should be running an internal PKI and be responsible for running/maintaining it to whatever standards you deem fit.

Making Web PKI less transparent to fit your use case is not the answer.


No, CT is stupid. Transparency is not necessary for security, but preventing invalid certificate issuance is, and CT doesn't even do that. It just alerts you after the fact, and you have to monitor it for it to be useful anyway, so it's effectively opt-in security. And it creates annoying side effects like this public/private thing, or actually preventing CAs from issuing certs when their CT endpoints go down. It's ridiculous. CT is a bank alarm that goes off after the bank's been robbed, and that you won't hear unless your radio is tuned to pick it up.


> Transparency is not necessary for security, but preventing invalid certificate issuance is.

> CT is a bank alarm that goes off after the bank's been robbed

> There are better solutions

How do you propose to prevent invalid certificate issuance? In particular, how do you propose to observe whether any CA accepted by browsers has issued a certificate to someone other than you for a domain you control, if you presume that there are malicious actors willing and able to compromise certificate authorities? Your threat model is nation-states. Assume someone else is willing to put up the cash, what's your solution that works strictly better than certificate transparency, and in particular (since you called attention to it) prevents invalid certificate issuance rather than just logging it for subsequent audit (and CA trust revocation)?


And that alarm has constantly been going off. CAs have consistently failed to abide by BR policies, failures which would have otherwise gone unnoticed.

Things like certlint have come about to help prevent misissuance, but I would wager that most CAs have not added it to their issuance pipeline.

I agree that CT is not the solution and ideally it would not be necessary; however, the number of issues found and still being discovered justifies it. Trusting CAs to just issue proper certificates has been a failed policy.


Chrome and Safari (the only browsers enforcing CT logs at the moment) can both be configured to disable enforcement for certain URLs.

https://twitter.com/sleevi_/status/1102306640072716288


But you can't stop Let's Encrypt from publishing CT records for your internal-only certificates.


We had a similar need a while back, and open sourced our solution: alley-oop (https://github.com/futurice/alley-oop) is a Dynamic DNS server with an integrated Let's Encrypt proxy, enabling easy HTTPS and WSS for web servers on a local network (LAN).

It effectively automates the process that's described in the article. Once you have it set up (you need a few DNS entries and a tiny Go server running somewhere), using it is as simple as issuing an HTTP API call to your alley-oop server with the LAN IP you have, and the DynDNS domain you want associated with it, and getting a valid cert in return. You're then ready to spin up your HTTPS server with it, and start serving LAN traffic with a valid cert.


If you're writing Go, using the DNS challenge is really easy with CertMagic (disclosure: my library). And if you're running a web server, believe me when I say that the best solution is one that is integrated directly: use Caddy. It will spare you some grief and frustration.

Both Caddy and CertMagic support nearly 3 dozen DNS providers, and adding your own is relatively easy as well.

CertMagic: https://github.com/mholt/certmagic

Caddy: https://caddyserver.com
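
For example, a minimal Caddyfile for the DNS challenge might look like this (a sketch assuming the cloudflare DNS provider plugin is available; credentials come from the CLOUDFLARE_EMAIL and CLOUDFLARE_API_KEY environment variables, and the hostname is illustrative):

  internal.example.com {
      proxy / localhost:8080
      tls {
          dns cloudflare
      }
  }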


Is there any open source self-hosted CA that supports ACME? Boulder (the Let's Encrypt system) has big disclaimers in its docs about it not being a good fit for most companies.

In most large companies, issuing certificates remains a manual process. Generate a CSR, create a ticket, wait for response, download certificate, install on host. ACME seems like it should be the standard API for fixing this.


Since Windows Server 2003, Windows Certificate Services has automated issuance and renewal for domain-joined clients, managed via Group Policy. It just works, and many orgs use this for managing VPN certificates for fleets of Windows clients, as well as internal TLS server certs. It uses Kerberos/AD for validating certificate names.

No ACME of course, but considering the MSFT certificate management protocol pre-dates ACME by more than a decade, that's not surprising. The traditional manual issuance-after-CSR-submission workflow is also supported for any OS.


I'm in the process of writing a tool for automating this: https://github.com/evenh/intercert. It’s still in the early stages though.

EDIT: Based on the fantastic CertMagic library by mholt


Neat, I hadn't known about this!


For those curious why this is so important:

A service I sell is a call-center SaaS. It uses Twilio to make outbound calls, but this won't work on internal servers. (I prefer to deploy the service as appliances that get shipped to the customers.)

So this is a great thing for me, since it means I can start offering my customers the ability to make outbound calls from their appliances.


Why won’t twilio outbound calls work on internal servers?


Because Chrome won't allow microphone access for non-HTTPS sites.


I wish there were a way to just get a wildcard and use it to issue "sub-certificates" of some sort, i.e. *.internal.mycompany.com could be used to issue valid certs for git.internal.mycompany.com...


That would be a CA certificate w/ a nameConstraints extension; from RFC 5280, §4.2.1.10[1]:

> The name constraints extension, which MUST be used only in a CA certificate, indicates a name space within which all subject names in subsequent certificates in a certification path MUST be located.

That is, you get a CA certificate signed by some trusted CA; that essentially makes you a CA; the nameConstraints section restricts it to a certain set of names, so the rest of us are okay w/ you being a CA as we know you can't issue for google.com. E.g., a nameConstraint of ".mycompany.com" allows you to issue under mycompany.com.

Sadly, and this is the key bit, AFAIK this is completely unsupported by browsers¹ and CAs. So it's not possible to get one, and AFAIK, even if you did, it wouldn't work. I really wish this were different, since it would make the whole certificate thing considerably easier for a lot of use cases, IMO, and is a more sensible way of doing things.

[1]: https://tools.ietf.org/html/rfc5280#section-4.2.1.10

¹I think Firefox might support them, but I think that might be it.
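
For the curious, signing such a constrained subCA would look roughly like this with OpenSSL (a sketch; file names are illustrative, and as noted, no public root will actually sign this for you):

  # subca.ext: extensions for a name-constrained intermediate
  basicConstraints = critical, CA:TRUE, pathlen:0
  keyUsage         = critical, keyCertSign, cRLSign
  nameConstraints  = critical, permitted;DNS:.mycompany.com

  # sign the would-be subCA's CSR with a (hypothetical) trusted root
  openssl x509 -req -in subca.csr -CA root.crt -CAkey root.key \
      -CAcreateserial -days 1825 -extfile subca.ext -out subca.crt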


It's mostly policy, not technology, that forbids this.

If you control a name constrained subCA for example.com, then new leaf certificates can appear under there at your whim, but there are a bunch of rules that need to be followed for leaf certificates, so how do those get enforced? The root CA is responsible for ensuring they are.

One option is you could say OK, they just don't. But then certificates under these constrained subCAs are _worse_ than the rest of the Web PKI, so why should we trust these subCAs? Clearly browsers should forcibly distrust them in that case.

Another option is, the root CA has physical control and all issuance goes through them anyway. But if you do this (and a few outfits have done it) then there's no practical benefit to the constrained subCA existing at all. It's just extra baggage.


While I admit I don't know all the rules that govern CAs, can you name one that would be at issue here? I would presume that, at the time you're issuing the name-constrained CA certificate, you would issue a challenge to prove ownership of the domain, much as you would for a non-CA cert. Why would that not suffice?


The "proof of ownership" needs to be fresh enough for each leaf issuance. It's not permitted to just check you have example.com in 2015 and then keep issuing you certificates for the next ten years for names under example.com

But that's not a huge obstacle: account-based issuance exists today in the corporate space; some tech person proves control every so often, and as long as they keep that up to date, all other account users get certs whenever they want.

It's the other rules that we don't trust you to enforce once given your own subCA.

1. Lifetime is an easy example. Having checked you own example.com, the maximum longevity for a leaf certificate is 825 days. A subCA that itself only lasts 825 days would be kind of annoying (you'd need to replace it, say, once per year, then swap all your issuance to the new one each time), so presumably you're going to want it to last longer, whereupon nothing prevents you using it to issue yourself leaf certs that last longer, even though that's forbidden.

2. Key checks are another rule for leaf certificates that the root CA is supposed to enforce, but if you have physical control, how can they?

If you ask Let's Encrypt for a certificate for your Debian weak key (because you forgot that the archaic Debian system you were trying to certify has a broken OpenSSL install out of the box), it will just refuse: that key is no good. But how can the root be sure you run such checks every time on your own subCA?

3. Or how about compatibility. The trust stores and the Baseline Requirements both say only to issue specific types of certificates, e.g. no SHA1 for security reasons, but also no SHA3 because their software doesn't grok that yet. Or equally, no putting arbitrary EKUs in certificates without explaining why. Rules like this also can't be enforced on the subCA.


Interesting, and thanks for taking the time to detail all that; I agree now that it is not exactly straight-forward.

Regarding 1: that's interesting. I figure that part of validation should be to check that the expiry of a cert is within the expiry of the issuing CA certificate. But if this is an issue… what prevents CAs from doing this today, and if it's just "good behavior", why not codify it in the validation rules? (That is, any certificate can't expire after the certificate above it in the chain does.)

Regarding 2 & 3: is saying "it's the responsibility of the domain owner" not sufficient? Issuing poor keys only hurts them (and perhaps those who want to communicate with them, which admittedly might be an issue). I agree this isn't a great response, and it would be better for the world as a whole if these got checked, but is it really our job to force domain owners to do this? Also, I don't know if I really trust that major CAs are doing a good job of checking these sorts of things, given all the other issues we see with them.

SHA1 is going to be ignored by user agents soon (already?), so a subCA won't issue those, solely because they won't work. What's wrong with a subCA adopting SHA3 sooner than major CAs? (Given how long the switch off SHA1 took… that seems like a plus?)


You seem very focused on the approach of just putting all the work in clients to detect things that are a problem and reject/ignore them. But we do not like this, because it's only ever one mistake away from no prevention at all, and you can't tell by looking.

Better then to reject this whole "Let's give subCAs to random people to save them some work" approach altogether.

Can you trust that major CAs are doing a good job? Well, if you suspect that they aren't all the data is there to go look for yourself, please report anything untoward that you find to m.d.s.policy and/or the issuer.

For the final point - the problem is compatibility. We do not want to create myriad non-interoperable systems, that's why "the Internet" won in the first place. If not for the grave security problems with SHA1 (demonstrated collision for a relatively affordable sum of money) we'd be using that forever, even though newer hashes are nicer.

SHA3 in particular might never go anywhere. Its design (a sponge not another Merkle–Damgård construction like MD4, MD5, SHA1 and SHA2) has appealing properties such as preventing length extension attacks, but you can get many similar benefits from things like SHA512/256 (run SHA-512 but throw away bits to keep only 256 of them).

https://www.imperialviolet.org/2017/05/31/skipsha3.html


Not sure what you mean by unsupported by browsers. Chrome, Edge, and Firefox all check and enforce nameConstraints in a root cert.


I mean that they do not properly validate / IIRC, they raise validation errors when they should successfully validate.

> nameConstraints in a root cert

I'm talking about nameConstraints in a non-root certificate; IDK if this has an effect on how browsers would or would not validate it. My comment is simply "last time I tried it, it didn't work." It was a few years back, so perhaps I should try it again.


As opposed to just using the wildcard itself?


If one of Google's servers gets hacked, I'd rather the hacker get the mail-sor-f41.google.com certificate than the *.google.com certificate :)


But certificates are PUBLIC keys...


It's evident that GP is referring to the private keys.


No, it's not.


And how would one use the cert on a server for a secure connection without the corresponding PRIVATE key exactly?


You know a wildcard doesn't work two levels down: you get *.yourdomain.com but not *.*.yourdomain.com.

But yeah, maybe the parent poster could consider a scheme like git-internal.yourdomain.com.


I wrote some shell scripts a while back [0] for almost this purpose. I have a backup MX that does not run any Web services, but I wanted TLS anyway for the SMTP.

It uses the Let's Encrypt client (certbot) and nsupdate, so I can use any RFC2136-compatible DNS server, in my case BIND. This has been running on the backup MX since I wrote it, without any hiccups so far.

This can also be used for internal services, since you do not expose anything on the internal machine.

[0] https://github.com/zyberzero/certbot-rfc2136
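
Worth noting that certbot also ships a dns-rfc2136 plugin these days that does the nsupdate dance itself; a sketch (values are illustrative):

  # /etc/letsencrypt/rfc2136.ini
  dns_rfc2136_server = 203.0.113.2
  dns_rfc2136_name = example-key
  dns_rfc2136_secret = <KEY_HERE>
  dns_rfc2136_algorithm = HMAC-SHA512

  certbot certonly --dns-rfc2136 \
      --dns-rfc2136-credentials /etc/letsencrypt/rfc2136.ini \
      -d mx2.example.com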


To address the security issue of Certificate Transparency mentioned by another comment, you can combine Let's Encrypt wildcard certs and SSL termination at the LB.

Set up a Let's Encrypt wildcard cert and use it on your internal LB. Set up your internal DNS to resolve custom internal hostnames. For example, if you own example.com, set up internal hostnames such as test1.example.com and test2.example.com.

These hostnames are internal, so they won't be resolvable (or known about) outside your private network. The wildcard cert combined with SSL termination at the LB means all your hosts now have SSL certs.
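
A minimal nginx sketch of that termination setup (paths and addresses are illustrative):

  server {
      listen 443 ssl;
      server_name test1.example.com;
      ssl_certificate     /etc/ssl/wildcard.example.com/fullchain.pem;
      ssl_certificate_key /etc/ssl/wildcard.example.com/privkey.pem;
      location / {
          proxy_pass http://10.0.0.11:8080;  # plain-HTTP internal backend
      }
  }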


In this case, all your internal servers would have the same cert with the same private key. If one appliance is compromised, every other one would be too. Not a good idea.


Correct.

The solution I suggested above is for home/personal use. But if you're doing anything for production, you really shouldn't be using LB SSL termination anyway, and might even benefit from running your own internal CA service.


The article states:

>"To increase this number, you have to either request a higher rate limit or get your domain added to the public suffix list (note: adding your domain here has other implications!)."

Can someone say what those other implications are? I'm not sure why the author would mention this and then fail to state what those are.


You can start by reading the bullet points here: https://publicsuffix.org/

In a nutshell, adding your domain there makes the subdomains completely isolated (they can't set cookies for the higher-level domain).

This is a good idea if you're hosting user pages as subdomains. See also this PR: https://github.com/publicsuffix/list/pull/722


Thanks. Your point about the inability to set cookies for a higher-level (shorter) domain is seen as a net positive though, correct?


My understanding is that intermediate CAs can be limited to issuing certificates only for specific domains.

Is it not possible to get a root-signed intermediate CA for someone who can prove control over a domain? This would allow you to issue certificates for "xxx.internal.mydomain.com" without the need for a wildcard certificate, and without using a public CA for every individual certificate.

CT would still be a problem unless the CA could be flagged to "allow non-CT certificates" and browsers ignored those requirements, as they ignore CT requirements for manually installed root certs.

The benefits of this are: 1) there's no need to manually install a root certificate on each client device; 2) your internal domains are not reliant on an external CA.


> My understanding is that intermediate CAs can be limited to issuing certificates only for specific domains.

Technically, X.509 allows for constraining things, [1] but from a practical perspective [2] it's not really implemented.

[1] https://tools.ietf.org/html/rfc5280#section-4.2.1.10

[2] https://security.stackexchange.com/questions/31376/


That thread does indicate things are changing though and it's becoming more and more accepted.


My solution for my own private intranet at home is to use nginx's built-in IP filtering and a VPN, so that only browsers behind my WAN IP can view my internal sites. Another interesting thing is that you can have the non-approved IP addresses "see" a totally different site at a URL than those 'in the know' do. I'm not aware of any downsides to this method. If there is one, it would be good to know.
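
A sketch of that filtering in nginx (addresses and paths are illustrative; the "different site" trick would be a separate default server or an error_page):

  server {
      listen 443 ssl;
      server_name intranet.example.com;
      ssl_certificate     /etc/ssl/intranet/fullchain.pem;
      ssl_certificate_key /etc/ssl/intranet/privkey.pem;
      allow 203.0.113.7;   # home WAN IP
      allow 10.8.0.0/24;   # VPN subnet
      deny  all;           # everyone else gets 403
      root /srv/intranet;
  }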


I'm also using DNS verification for a couple of non-critical production websites which are hosted on GitLab Pages.

It's really nice not having to upload a challenge file.

Tutorial here: https://about.gitlab.com/2016/04/11/tutorial-securing-your-g...


This is a good conceptual overview of how you'd accomplish this with ACME. From a more concrete perspective, I'd suggest using acme-dns[1]. It's fairly easy to set up and is supported by a number of existing ACME clients like ACME.sh and Certbot (via an authentication hook).

[1]: https://github.com/joohoi/acme-dns
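
Roughly, the acme-dns flow is: register once, then delegate the challenge name (host names are illustrative):

  # register an account with the acme-dns instance
  curl -s -X POST https://acme-dns.example.com/register
  # => {"username":"...","password":"...",
  #     "fulldomain":"d420c923.acme-dns.example.com", ...}

  # then a one-time delegation in the main zone:
  #   _acme-challenge.internal.example.com. CNAME d420c923.acme-dns.example.com.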


There is an obvious technical solution that is IMO far superior: get all the browsers to support CA certificates that are scoped to a particular domain and its subdomains. Then get your organization one of these CA certs (which is no more dangerous than a wildcard certificate) and issue your own subdomain certificates.

Sadly, I don’t think any browsers support such a thing.


Isn't this a whole lot of words for installing Traefik with the DNS challenge and letting it do basically all of the work?


Also useful for SMTP/IMAP servers.


A self-signed internal CA for private use all day, plus mandatory client certs. An internally approved CA for business. I never see a need to change this. This Let's Encrypt stuff is faddish to me.


I've set up my configuration management system to drive certbot and to handle certificate distribution:

https://github.com/irtnog/certbot-formula

https://github.com/irtnog/salt-states/blob/production/certbo...

https://github.com/irtnog/salt-states/blob/production/salt/f...

There are a few bits I haven't published mostly due to laziness on my part, like how I handle certificate assignments, but it's all pretty straightforward. The hardest part has been developing the necessary configuration scripts for Windows. It takes some effort to get keying material installed in the appropriate certificate store, which you can see part of here:

https://github.com/irtnog/salt-states/blob/production/wincer...

And then there's how things that use CryptoAPI/CNG reference the desired certificate by SHA-1 thumbprint, which you can see part of here:

https://github.com/irtnog/salt-states/blob/production/iis/ce...

https://github.com/irtnog/salt-states/blob/production/rd-gat...

In some cases there isn't a clean API for certificate installation:

https://github.com/irtnog/salt-states/blob/production/rdp/in...

In other cases one must run the necessary PowerShell cmdlets under user accounts with the correct privileges, which is something I haven't quite figured out (e.g., for Exchange 2007 and newer).

Sometimes the services in question require the keying material be structured in very specific and slightly odd formats (e.g., Splunk). Sometimes, I just stick a load balancer in front of it, terminate the TLS connection there, and call it a day (e.g., Tableau), perfect being the enemy of good and all.

I've given a lot of thought to the question "how would an attacker use CT logs against me", but I think the probable losses from something untoward happening due to Certificate Transparency are extremely small compared to attackers getting onto my networks and eavesdropping on sensitive, internal comms. I'm also not confident in my ability to run my own internal CA, both because it'd be a lot of work to secure and because the bus factor would be so damn high. At least this gives the rest of the team some incentive to tie new things into the configuration management system, because they get certificates for "free". Maybe there's a better way. I don't know.


The article doesn't discuss this; what are the advantages to using Let's Encrypt for internal services over deploying internally signed certificates? Are there any disadvantages to using Let's Encrypt?

The rate limits alone seem to be a potential danger if they need to reissue new certificates for their 65,000 servers.


The obvious advantage is that you don't need to deploy your root cert on client devices, or manage your own PKI.

Reissuing certificates shouldn't be a problem since Let's Encrypt has a rate limit exception for renewals, though if I were managing 65,000 servers I'd be a little hesitant to put that kind of burden on Let's Encrypt's infrastructure without contacting them first, just as a matter of courtesy.


We didn't just contact them beforehand regarding a rate limit increase; we are also a silver sponsor and pay/donate yearly. See https://letsencrypt.org/sponsors/


The disadvantages are:

it’s less secure — you’re trusting (many) third parties with your security and relying on the security of DNS

it’s less flexible — you can’t sign certificates with internal names (e.g., *.cluster.local) and certs must be for 90 days, etc

it’s kind of hacky — you have to work around rate limits and whatnot because Let’s Encrypt wasn’t designed for this use case

The advantage is it’s easier. But that’s arguable. What the article describes isn’t easy. Using something like cfssl (https://github.com/cloudflare/cfssl) or vault (https://github.com/hashicorp/vault) or step certificates (https://github.com/smallstep/certificates) (which I work on) is probably easier and definitely better for internal services.


Installing certs on your own computer is easy. Installing certs on random testing phones? Not so easy. Let's encrypt just works.


>it’s less secure — you’re trusting (many) third parties

You're trusting those third parties regardless of if you use your own PKI, because browsers already trust all those root CAs anyways.


Why do you think trusting third parties is less secure than operating your own pki infrastructure?


Because we are introducing a third party into a situation that didn't require a third party.


And if you just rely on the default trust stores you’re actually introducing over 100 third parties, some of which are known to be controlled by (or at least work with) the NSA, China, etc.


With certificate pinning, that is not a high risk.


Very few sites use certificate pinning. Chrome dropped support for dynamic certificate pinning. Web based certificate pinning is often recommended against because it's very hard to do right.

Also, using Let's Encrypt doesn't stop you from certificate pinning.


Certificate pinning (HPKP) in browsers is effectively dead, and was never a good idea to begin with.

There are far too many ways to accidentally self-DoS with HPKP. Also, an attacker who briefly gains control of your public DNS or web server can DoS a hostname semi-permanently.


If you’re talking about services that you’re accessing via a browser, true. If you actually do pinning (properly).

For service-to-service stuff and APIs the number of TLS clients that respect pinning is approximately zero.

Either way, the other problems remain, and running an internal PKI is really pretty easy.


Won't all the client machines already have hundreds of 3rd party CAs (including Let's Encrypt) loaded onto them anyway?


Yea, by default anyways, but that doesn’t mean you need to use/trust them for a particular request. I’m talking mostly about service-to-service traffic fwiw, not browsers.


Let's Encrypt specifically does not rate limit renewals. See the renewal exemption in the 6th paragraph of [0]. A "renewal" is any certificate that would count as a duplicate, i.e. one using the same set of names as another valid certificate. The keys aren't taken into account, so you can always reissue existing certs.

[0] https://letsencrypt.org/docs/rate-limits/


Also, since last week, recent renewals won't count against your new issuance rate limits.

https://community.letsencrypt.org/t/rate-limits-fixing-certs...


The convenience of not having to manage your own CA (doing so is complex and nuanced).


If you are going to manage your own CA and don't have an expert available, you could do much worse than https://www.vaultproject.io/docs/secrets/pki/index.html

Even if you do have an expert available, it's a sound choice.
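
A quick sketch of the Vault CLI flow (mount path, role name, domains and TTLs are all illustrative):

  vault secrets enable pki
  vault secrets tune -max-lease-ttl=87600h pki
  vault write pki/root/generate/internal \
      common_name="Example Internal Root CA" ttl=87600h
  vault write pki/roles/internal \
      allowed_domains=internal.example.com allow_subdomains=true max_ttl=720h
  vault write pki/issue/internal common_name=git.internal.example.com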


If you use your own CA you have to make sure all your apps trust it. With the public CAs that's not a problem because all the standard libraries usually trust them, and often 'apt-get install ca-certificates' is all you need to do.


There is the alternative approach for internal networks of using a self-signed certificate authority (registered with each machine), and then using that to sign each machine's individual cert. All other devices whose certs are signed by the CA are accepted. Nothing is leaked using this method.

I remember seeing a good tut on this once... but can't find it atm.



Why not set the A record on the external DNS server to something like 1.1.1.1, while the internal DNS server has the correct IP address (or use a view on the same DNS server)? This way you won't leak internal IP addresses to the Internet.


(Deleted. Wrong.)


ACME doesn't give the issuing CA access to your private keys. That'd be a terrible idea.


You would not, but you don't give your private keys to them. Why do you think so?


Since when do you give a CA your private keys to get a certificate signed?


Right, sorry, forgot how the key generation works.


Why would you be sending your private keys to your certificate authority?


Anyone have a decent process/library for renewals? Let's Encrypt is great, but the certs need to be renewed every 3 months, which is annoying.


You should automate the process and not manually request certificates. They've been saying this for a long time. Every client can auto-renew.


I’ve been trying to automate this for a while but don’t know how to do it. Am using split DNS with BIND. The device in question accepts a private key and certificate chain but won’t allow arbitrary software to run so I need to do a manual TXT challenge every 3 months, which is super annoying.

Any pointers?


Lego [1] can do the DNS (dns-01) challenge automatically; you can run it on a different device and transfer the certs to the device which needs them. This way you expose the secret key to some danger (secret keys should never leave the requesting machine), but it might be an acceptable risk; it depends.

[1]: https://github.com/xenolf/lego


Lego can use a CSR, so you don't need to send private keys anywhere.

(Secret keys and private keys aren't the same kind of thing. The lyrics to U2's "The Fly" help me remember: "A secret is something you tell one other person, so I'm telling you." Somebody else needs to know the _secret_ key for it to work, but nobody at all knows the private key.)

A CSR is a signed document which says "I want a certificate for this identity with this public key". The signature proves you know the corresponding private key, but that key isn't transmitted anywhere.

If you're willing to have relatively long lived keys (2-3 years isn't too scary for 2048-bit RSA) you can generate a private key and a CSR once, and have Lego or similar CSR-capable software obtain certificates from Let's Encrypt every couple of months with that CSR.
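
A sketch of that one-time setup (names are illustrative; -addext needs OpenSSL 1.1.1+, and with --csr lego takes the domains from the CSR, but check the current flags):

  # once: long-lived key plus a reusable CSR
  openssl genrsa -out device.key 2048
  openssl req -new -key device.key -subj "/CN=device.example.com" \
      -addext "subjectAltName=DNS:device.example.com" -out device.csr

  # every couple of months (the rfc2136 provider reads RFC2136_* env vars):
  lego --email you@example.com --dns rfc2136 --csr device.csr run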


Thanks! Yes, the risk is acceptable because the appliance can’t generate a key pair by itself. All machines involved (including the DNS server) are on the same LAN and physically under my control.

Took me a while to figure out that Lego indeed supports BIND (via the RFC2136 DNS provider).


AFAIK there are extensions for about every platform and language at this point... You go through the pain of automating once, and let it self-update. From IIS to NGINX and Apache, there are integrated extensions. There are even solutions for reverse-proxy apps/appliances, from Caddy to load balancers.

Just search for: TECHNAME let's encrypt

You should be able to find something. With libraries, you should be able to roll your own. If you're distributing keys to a cluster of servers, you'll need to integrate your own solution, or terminate at the LB and trust your internal connections, or use PKI behind the LB.


The article starts: Let’s Encrypt is a revolutionary new certificate authority

... and in my opinion fails at that precise point.

I think that SSL/TLS itself is the correct starting point and not a particular implementation. The article could have been titled "Using TLS for internal servers". The LE implementation could have been one of a few.

Don't forget that you can manage your own Certificate Authority lists and really ought to do so if you actually give a shit about IT security. Abrogating your responsibility to MS, Apple, Mozilla, Google etc is way too easy, dangerous and probably immoral.



