
Using Let's Encrypt for Internal Servers (2018) - GordonS
https://blog.heckel.xyz/2018/08/05/issuing-lets-encrypt-certificates-for-65000-internal-servers/
======
binwiederhier
Author here. Just wanted to comment on the Certificate Transparency issues
that some of you have raised:

My company (Datto [1]; we're hiring, see [2]) sells backup appliances which
our customers place in their company network (much like you would place a
router in your own network). Since we do not control anything other than our
appliance inside our customers' networks, we went this route to provide a
secure web interface to our appliance. And since the 65k servers/appliances
(now more like 80k) are located in tens of thousands of different networks,
leaking the internal IPs isn't bad at all -- especially since there is no way
to correlate them with a particular customer.

For normal "internal servers" within a company, I'd probably recommend using a
wildcard cert and an internal DNS server.

Also: Yeyy, I'm on HN!

[1] [https://www.datto.com/](https://www.datto.com/)

[2]
[https://news.ycombinator.com/item?id=19071727](https://news.ycombinator.com/item?id=19071727)
or [https://www.datto.com/careers/](https://www.datto.com/careers/)

~~~
forgotmypw
> leaking the internal IP isn't bad at all -- especially since there is no way
> to correlate them with the customer.

You seem to be making many assumptions here, which some may not agree with.

~~~
freedomben
Agreed. While the secrecy of IPs should never be relied on as a security
mechanism, keeping them secret can force the attacker to scan, which can
trigger an IDS and tip their hand. If the attack is going through a proxy host
(a vulnerable app server relaying an attack to the database, for example), an
unknown DB IP address could make the attack infeasible.

------
mjlee
You should be aware of Certificate Transparency if you go down this route.

While I agree that it probably shouldn't matter, it could help an attacker
gather knowledge about your internal network. Perhaps it's of interest that
you use a specific monitoring tool or mail server internally - that could make
phishing emails easier to craft.

~~~
mholt
Even without CT, you're still setting public DNS records -- so better yet,
don't rely on "internal" hostnames staying private if you want them publicly
trusted. Full stop.

~~~
organsnyder
It's very common for internal hostnames to be resolvable only internally (such
as with split-horizon DNS or dedicated internal DNS servers).

Edit: Or do you mean as part of the challenge-response mechanism? Yeah, that
would have to be public.

~~~
zeeZ
It only needs to be public for the challenge, though. We use split DNS with an
external server that has an otherwise empty zone. Internal clients are limited
to setting their respective _acme-challenge.subdomain TXT record and deleting
it afterwards.

~~~
someone13
Out of curiosity, how is this implemented?

~~~
w7
Not the person you asked, but:

Besides possibly being a function of a provider's API, DNS server security
policies can be used to limit updates to certain domains and/or record types
based on a pre-shared key. Since the DNS-01 challenge only needs to create a
TXT record with a predetermined name, you can configure a zone like so (using
BIND syntax as an example):

    
    
      key "example-key" {
          algorithm hmac-sha512;
          secret <KEY_HERE>;
      };
    
      zone {
          ...
          update-policy {
              grant "example-key" name _acme-challenge.example.com TXT;
          };
      };
    

Another option is to have a CNAME from _acme-challenge.example.com to a
dedicated challenge zone like challenges.example.com that has similar
restrictions. This, coupled with something like acme.sh, makes it easy and
relatively secure for machines to generate their own certificates.
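
For context on what that TXT record contains: per RFC 8555 §8.4, the DNS-01
record value is the base64url-encoded SHA-256 digest of the "key
authorization". A minimal sketch in Go (the input values are illustrative;
real ones come from your ACME account key and the server's challenge):

    package main

    import (
        "crypto/sha256"
        "encoding/base64"
        "fmt"
    )

    // txtValue computes what an ACME client publishes at _acme-challenge.<name>
    // for a DNS-01 challenge (RFC 8555, section 8.4): the base64url-encoded
    // SHA-256 digest of "<token>.<base64url JWK thumbprint of the account key>".
    func txtValue(token, accountKeyThumbprint string) string {
        keyAuth := token + "." + accountKeyThumbprint
        sum := sha256.Sum256([]byte(keyAuth))
        return base64.RawURLEncoding.EncodeToString(sum[:])
    }

    func main() {
        // Illustrative inputs only.
        fmt.Println(txtValue("exampleToken", "exampleThumbprint"))
    }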

------
jareware
We had a similar need a while back, and open-sourced our solution: [alley-
oop](https://github.com/futurice/alley-oop) is a Dynamic DNS server with an
integrated Let's Encrypt proxy, enabling easy HTTPS and WSS for web servers on
a local network (LAN).

It effectively automates the process described in the article. Once you have
it set up (you need a few DNS entries and a tiny Go server running somewhere),
using it is as simple as issuing an HTTP API call to your alley-oop server
with the LAN IP you have and the DynDNS domain you want associated with it;
you get a valid cert in return. You're then ready to spin up your HTTPS server
and start serving LAN traffic with a valid cert.

------
archgoon
The article doesn't discuss this; what are the advantages to using Let's
Encrypt for internal services over deploying internally signed certificates?
Are there any disadvantages to using Let's Encrypt?

The rate limits alone seem to be a potential danger if they need to reissue
certificates for their 65,000 servers.

~~~
mmalone
The disadvantages are:

It's less secure — you're trusting (many) third parties with your security and
relying on the security of DNS.

It's less flexible — you can't sign certificates with internal names (e.g.,
*.cluster.local), and certs are limited to 90 days, etc.

It's kind of hacky — you have to work around rate limits and whatnot, because
Let's Encrypt wasn't designed for this use case.

The advantage is it’s easier. But that’s arguable. What the article describes
isn’t easy. Using something like cfssl
([https://github.com/cloudflare/cfssl](https://github.com/cloudflare/cfssl))
or vault
([https://github.com/hashicorp/vault](https://github.com/hashicorp/vault)) or
step certificates
([https://github.com/smallstep/certificates](https://github.com/smallstep/certificates))
(which I work on) is probably easier and definitely better for internal
services.

~~~
ec109685
Why do you think trusting third parties is less secure than operating your own
PKI infrastructure?

~~~
therein
Because we are introducing a third party into a situation that didn't require
a third party.

~~~
mmalone
And if you just rely on the default trust stores you’re actually introducing
over 100 third parties, some of which are known to be controlled by (or at
least work with) the NSA, China, etc.

~~~
ec109685
With certificate pinning, that is not a high risk.

~~~
Thorrez
Very few sites use certificate pinning. Chrome dropped support for dynamic
certificate pinning (HPKP). Web-based certificate pinning is often recommended
against because it's very hard to do right.

Also, using Let's Encrypt doesn't stop you from certificate pinning.

------
mholt
If you're writing Go, using the DNS challenge is really easy with CertMagic
(disclosure: my library). And if you're running a web server, believe me when
I say that the best solution is one that is integrated directly: use Caddy. It
will spare you some grief and frustration.

Both Caddy and CertMagic support nearly 3 dozen DNS providers, and adding your
own is relatively easy as well.

CertMagic:
[https://github.com/mholt/certmagic](https://github.com/mholt/certmagic)

Caddy: [https://caddyserver.com](https://caddyserver.com)
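
For a sense of the API, a minimal sketch (assuming the import path from the
link above; DNS-provider wiring varies by CertMagic version and is left as a
comment):

    package main

    import (
        "fmt"
        "net/http"

        "github.com/mholt/certmagic"
    )

    func main() {
        // certmagic.HTTPS obtains and renews certificates automatically and
        // serves the handler over TLS. For internal hosts that can't answer
        // HTTP or TLS-ALPN challenges, configure a DNS-01 solver for your DNS
        // provider first (field names differ across CertMagic releases).
        handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            fmt.Fprintln(w, "hello over HTTPS")
        })
        if err := certmagic.HTTPS([]string{"internal.example.com"}, handler); err != nil {
            panic(err)
        }
    }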

------
cpitman
Is there any open-source, self-hosted CA that supports ACME? Boulder (the
Let's Encrypt system) has big disclaimers in its docs about it not being a
good fit for most companies.

In most large companies, issuing certificates remains a manual process:
generate a CSR, create a ticket, wait for a response, download the
certificate, install it on the host. ACME seems like it should be _the_
standard API for fixing this.

~~~
tatersolid
Since Windows Server 2003, Windows Certificate Services has automated
issuance and renewal for domain-joined clients managed via Group Policy. It
just works, and many orgs use this for managing VPN certificates for fleets of
Windows clients, as well as internal TLS server certs. It uses Kerberos/AD for
validating certificate names.

No ACME of course, but considering that the MSFT certificate management
protocol pre-dates ACME by more than a decade, that's not surprising. The
traditional manual workflow (issuance after CSR submission) is also supported
for any OS.

------
evenh
I'm in the process of writing a tool for automating this:
[https://github.com/evenh/intercert](https://github.com/evenh/intercert). It’s
still in the early stages though.

EDIT: Based on the fantastic CertMagic library by mholt

~~~
mholt
Neat, I hadn't known about this!

------
blhack
For those curious why this is so important:

A service I sell is a call-center SaaS. It uses Twilio to make outbound
calls, but this won't work on internal servers. (I prefer to deploy the
service as appliances that get shipped to the customers.)

So this is a great thing for me, since it means I can start offering my
customers the ability to make outbound calls from their appliances.

~~~
ec109685
Why won’t twilio outbound calls work on internal servers?

~~~
blhack
Because Chrome won't allow microphone access on non-HTTPS sites (getUserMedia
requires a secure origin).

------
shittyadmin
I wish there were a way to just get a wildcard and use it to issue "sub-
certificates" of some sort, i.e. *.internal.mycompany.com could be used to
issue valid certs for git.internal.mycompany.com...

~~~
deathanatos
That would be a CA certificate w/ a nameConstraints extension; from RFC 5280,
§4.2.1.10[1]:

> _The name constraints extension, which MUST be used only in a CA
> certificate, indicates a name space within which all subject names in
> subsequent certificates in a certification path MUST be located._

That is, you get a CA certificate signed by some trusted CA; that essentially
makes you a CA. The nameConstraints extension restricts it to a certain set of
names, so the _rest_ of us are okay w/ you being a CA, as we know you can't
issue for google.com. E.g., a nameConstraint of ".mycompany.com" allows you to
issue under mycompany.com.
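
To make that concrete, Go's crypto/x509 can express such a constraint. A
minimal self-signed sketch (names are illustrative; in the scheme described
here, a publicly trusted CA would sign this certificate instead):

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "math/big"
        "time"
    )

    func main() {
        key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        if err != nil {
            panic(err)
        }
        // A CA certificate whose nameConstraints extension restricts it to
        // names under mycompany.com.
        tmpl := &x509.Certificate{
            SerialNumber:                big.NewInt(1),
            Subject:                     pkix.Name{CommonName: "mycompany constrained CA"},
            NotBefore:                   time.Now(),
            NotAfter:                    time.Now().AddDate(1, 0, 0),
            IsCA:                        true,
            BasicConstraintsValid:       true,
            KeyUsage:                    x509.KeyUsageCertSign,
            PermittedDNSDomainsCritical: true,
            PermittedDNSDomains:         []string{".mycompany.com"},
        }
        // Self-signed here; verifiers would reject any leaf it signs for a
        // name outside the permitted subtree.
        if _, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key); err != nil {
            panic(err)
        }
    }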

Sadly, and this is the key bit, AFAIK this is completely unsupported by
browsers¹ and CAs. So it's not possible to get one, and AFAIK, even if you
did, it wouldn't work. I really wish this were different, since it would make
the whole certificate thing considerably easier for a lot of use cases, IMO,
and is a more sensible way of doing things.

[1]:
[https://tools.ietf.org/html/rfc5280#section-4.2.1.10](https://tools.ietf.org/html/rfc5280#section-4.2.1.10)

¹I think Firefox might support them, but I think that might be it.

~~~
tialaramex
It's mostly policy, not technology, that forbids this.

If you control a name-constrained subCA for example.com, then new leaf
certificates can appear under there at your whim, but there are a bunch of
rules that need to be followed for leaf certificates, so how do those get
enforced? The root CA is responsible for ensuring they are.

One option is you could say, OK, they just don't. But then certificates under
these constrained subCAs are _worse_ than the rest of the Web PKI, so why
should we trust these subCAs? Clearly browsers should forcibly distrust them
in that case.

Another option is that the root CA keeps physical control and all issuance
goes through them anyway. But if you do this (and a few outfits have done it),
then there's no practical benefit to the constrained subCA existing at all.
It's just extra baggage.

~~~
deathanatos
While I admit I don't know all the rules that govern CAs, can you name one
that would be at issue here? I would presume that, at the time you're issuing
the name-constrained CA certificate, you would issue a challenge to prove
ownership of the domain, much as you would for a non-CA cert. Why would that
not suffice?

~~~
tialaramex
The "proof of ownership" needs to be fresh enough for each leaf issuance. It's
not permitted to just check you have example.com in 2015 and then keep issuing
you certificates for the next ten years for names under example.com

But that's not a huge obstacle: account-based issuance exists today in the
corporate space. Some tech person proves control every so often, and as long
as they keep that up to date, all other account users get certs whenever they
want.

It's the other rules that we don't trust you to enforce once given your own
subCA.

1. Lifetime is an easy example. Having checked that you own example.com, the
maximum longevity for a leaf certificate is 825 days. A subCA that itself only
lasts 825 days would be kind of annoying (you'd need to replace it, say, once
per year, then swap all your issuance to the new one each time), so presumably
you're going to want it to last longer, whereupon nothing prevents you from
using it to issue yourself leaf certs that last longer, even though that's
forbidden.

2. Key checks are another rule for leaf certificates that the root CA is
supposed to enforce, but if you have physical control, how can they?

If you ask Let's Encrypt for a certificate for your Debian weak key because
you forgot that the archaic Debian system you were trying to certify has a
broken OpenSSL install out of the box, it will just refuse. That key is no
good. But how can the root be sure you run such checks every time on your own
subCA?

3. Or how about compatibility? The trust stores and the Baseline Requirements
both say to issue only specific types of certificates, e.g. no SHA1 for
security reasons, but also no SHA3 because their software doesn't grok that
yet. Equally, no putting arbitrary EKUs in certificates without explaining
why. Rules like this also can't be enforced on the subCA.

~~~
deathanatos
Interesting, and thanks for taking the time to detail all that; I agree now
that it is not exactly straightforward.

Regarding 1: that's interesting. I figure that part of validation should be to
check that the expiry of a cert is within the expiry of the issuing CA
certificate. But if this is an issue… what prevents CAs from doing this today,
and if it's just "good behavior", why not codify it in the validation rules?
(That is, no certificate can expire after the expiry of the certificate above
it in the chain?)

Regarding 2 & 3… is not saying "it's the responsibility of the domain owner"
not sufficient? Issuing poor keys only hurts them (and perhaps those who want
to communicate with them, which _that_ might be an issue). (I agree that this
isn't a _great_ response, and it would be better for the world as a whole if
we got these checked, but is it really our job to force domain owners to do
this? Also, IDK if I really trust that major CAs are doing a good job of
checking these sorts of things given all the other issues we see with them.)

SHA1 is going to be ignored by user agents soon (already?), so a subCA won't
issue those, solely because they won't work. What's wrong with a subCA
adopting SHA3 sooner than major CAs? (Given how long the switch off of SHA1
took… that seems like a plus?)

~~~
tialaramex
You seem very focused on the approach of putting all the work in clients to
detect things that are a problem and reject/ignore them. But we do not like
this, because it's only ever one mistake away from no prevention at all, and
you can't tell by looking.

Better then to reject this whole "Let's give subCAs to random people to save
them some work" approach altogether.

Can you trust that major CAs are doing a good job? Well, if you suspect that
they aren't, all the data is there to go look for yourself; please report
anything untoward that you find to m.d.s.policy and/or the issuer.

For the final point, the problem is compatibility. We do not want to create
myriad non-interoperable systems; that's why "the Internet" won in the first
place. If not for the grave security problems with SHA1 (a demonstrated
collision for a relatively affordable sum of money), we'd be using it forever,
even though newer hashes are nicer.

SHA3 in particular might never go anywhere. Its design (a sponge, not another
Merkle–Damgård construction like MD4, MD5, SHA1 and SHA2) has appealing
properties such as preventing length-extension attacks, but you can get many
of the same benefits from things like SHA-512/256 (run the SHA-512
construction but keep only 256 bits of output).

[https://www.imperialviolet.org/2017/05/31/skipsha3.html](https://www.imperialviolet.org/2017/05/31/skipsha3.html)
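
As an aside, Go's standard library ships SHA-512/256 directly; a tiny sketch:

    package main

    import (
        "crypto/sha512"
        "fmt"
    )

    func main() {
        // SHA-512/256 runs the SHA-512 compression function (with its own IV)
        // and truncates the output to 256 bits. Since the full internal state
        // is never revealed, length-extension attacks don't apply.
        sum := sha512.Sum512_256([]byte("hello world"))
        fmt.Printf("%x\n", sum)
    }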

------
zyberzero
I wrote some shell scripts a while back [0] for almost this purpose. I have a
backup MX that does not run any web services, but I wanted TLS anyway for
SMTP.

It uses the Let's Encrypt client (certbot) and nsupdate, so I can use any
RFC 2136-compatible DNS server; in my case, BIND. This has been running on the
backup MX since I wrote it, without any hiccups so far.

This can also be used for internal services, since you do not need to expose
anything on the internal machine.

[0] [https://github.com/zyberzero/certbot-
rfc2136](https://github.com/zyberzero/certbot-rfc2136)

------
Naac
To address the Certificate Transparency issue mentioned by another comment,
you can combine Let's Encrypt wildcard certs with SSL termination at the LB.

Set up a Let's Encrypt wildcard cert and use it on your internal LB. Set up
your internal DNS to resolve custom internal hostnames. For example, if you
own example.com, set up internal hostnames such as test1.example.com and
test2.example.com.

These hostnames are internal, so they won't be resolvable (or known about)
outside your private network. The wildcard cert combined with SSL termination
at the LB means all your hosts now have SSL certs.

~~~
kissgyorgy
In this case, all your internal servers would share the same cert with the
same private key. If one appliance is compromised, every other one would be
compromised too. Not a good idea.

~~~
Naac
Correct.

The solution I suggested above is for home/personal use. But if you're doing
anything for production, you really shouldn't be using LB SSL termination
anyway, and might even benefit from running your own internal CA service.

------
bogomipz
The article states:

>"To increase this number, you have to either request a higher rate limit or
get your domain added to the public suffix list (note: adding your domain here
has other implications!)."

Can someone say what those other implications are? I'm not sure why the author
would mention this and then fail to state what those are.

~~~
Boulth
You can start by reading the bullet points here:
[https://publicsuffix.org/](https://publicsuffix.org/)

In a nutshell, adding your domain there makes its subdomains completely
isolated (they can't set cookies for the higher-level domain).

This is a good idea if you're hosting user pages as subdomains. See also PRs
[https://github.com/publicsuffix/list/pull/722](https://github.com/publicsuffix/list/pull/722)

~~~
bogomipz
Thanks. Your point about the inability to set cookies for a higher-level
(shorter) domain is seen as a net positive though, correct?

------
isostatic
My understanding is that intermediate CAs can be limited to issuing
certificates only for specific domains.

Is it not possible to get a root-signed intermediate CA for someone who can
prove control over a domain? This would allow you to issue certificates for
"xxx.internal.mydomain.com" without the need for a wildcard certificate, and
without using a public CA for every individual certificate.

CT would still be a problem unless the CA could be flagged to "allow non-CT
certificates" and browsers ignored those requirements, as they ignore CT
requirements for manually installed root certs.

The benefits of this are: 1) there's no need to manually install a root
certificate on each client device; 2) your internal domains are not reliant
on an external CA.

~~~
throw0101a
> My understanding is that intermediate CAs can be limited to issuing
> certificates only for specific domains.

Technically, X.509 allows for constraining things, [1] but from a practical
perspective [2] it's not really implemented.

[1]
[https://tools.ietf.org/html/rfc5280#section-4.2.1.10](https://tools.ietf.org/html/rfc5280#section-4.2.1.10)
[2]
[https://security.stackexchange.com/questions/31376/](https://security.stackexchange.com/questions/31376/)

~~~
isostatic
That thread does indicate things are changing though and it's becoming more
and more accepted.

------
ohiovr
My solution for my own private intranet at home is to use nginx's built-in IP
filtering and a VPN, so that only browsers behind my WAN IP can view my
internal sites. Another interesting thing is that you can have the non-
approved IP addresses "see" a totally different site at a URL than those 'in
the know' do. I'm not aware of any downsides to this method. If there is one,
it would be good to know.

------
therealmarv
I'm also using DNS verification for a couple of non-critical production
websites hosted on GitLab Pages.

It's really nice not to have to upload a challenge file.

Tutorial here: [https://about.gitlab.com/2016/04/11/tutorial-securing-
your-g...](https://about.gitlab.com/2016/04/11/tutorial-securing-your-gitlab-
pages-with-tls-and-letsencrypt/#comment-3926513589)

------
Ajedi32
This is a good conceptual overview of how you'd accomplish this with ACME.
From a more concrete perspective, I'd suggest using acme-dns[1]. It's fairly
easy to set up and is supported by a number of existing ACME clients like
acme.sh and Certbot (via an authentication hook).

[1]: [https://github.com/joohoi/acme-dns](https://github.com/joohoi/acme-dns)
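
As I remember the acme-dns README, issuance then boils down to pointing a
CNAME at your acme-dns instance and letting the client push TXT values through
a small HTTP API. A hedged Go sketch (the /update endpoint, header names, and
server URL are assumptions; check your deployment):

    package main

    import (
        "bytes"
        "encoding/json"
        "net/http"
    )

    // updateTXT publishes a DNS-01 TXT value via an acme-dns server's HTTP API.
    func updateTXT(server, user, key, subdomain, txt string) error {
        body, err := json.Marshal(map[string]string{
            "subdomain": subdomain, // acme-dns registration subdomain
            "txt":       txt,       // the DNS-01 digest value to serve
        })
        if err != nil {
            return err
        }
        req, err := http.NewRequest("POST", server+"/update", bytes.NewReader(body))
        if err != nil {
            return err
        }
        req.Header.Set("X-Api-User", user)
        req.Header.Set("X-Api-Key", key)
        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            return err
        }
        return resp.Body.Close()
    }

    func main() {
        // Example call (hypothetical server and credentials):
        _ = updateTXT("https://acme-dns.example.com", "user-uuid", "api-key",
            "registered-subdomain", "dns01-digest-value-goes-here")
    }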

------
amluto
There is an obvious technical solution that is IMO far superior: get all the
browsers to support CA certificates that are scoped to a particular domain and
its subdomains. Then get your organization one of these CA certs (which is no
more dangerous than a wildcard certificate) and issue your own subdomain
certificates.

Sadly, I don’t think any browsers support such a thing.

------
damageboy
Isn't this a whole lot of words for installing Traefik with the DNS challenge
and letting it do basically all of the work?

------
cm2187
Also useful for SMTP/IMAP servers.

------
redsavagefiero
Self-signed internal CA for private use all day, plus mandatory client certs.
An internally approved CA for business. I never see a need to change this.
This Let's Encrypt stuff is faddish to me.

------
xenophonf
I've set up my configuration management system to drive certbot and to handle
certificate distribution:

[https://github.com/irtnog/certbot-formula](https://github.com/irtnog/certbot-
formula)

[https://github.com/irtnog/salt-
states/blob/production/certbo...](https://github.com/irtnog/salt-
states/blob/production/certbot/files/etc/renewal-hooks/deploy/send-event)

[https://github.com/irtnog/salt-
states/blob/production/salt/f...](https://github.com/irtnog/salt-
states/blob/production/salt/files/reactors/certbot-deploy.sls)

There are a few bits I haven't published mostly due to laziness on my part,
like how I handle certificate assignments, but it's all pretty
straightforward. The hardest part has been developing the necessary
configuration scripts for Windows. It takes some effort to get keying material
installed in the appropriate certificate store, which you can see part of
here:

[https://github.com/irtnog/salt-
states/blob/production/wincer...](https://github.com/irtnog/salt-
states/blob/production/wincert/init.sls)

And then there's how things that use CryptoAPI/CryptoNG reference the desired
certificate by SHA-1 thumbprint, which you can see part of here:

[https://github.com/irtnog/salt-
states/blob/production/iis/ce...](https://github.com/irtnog/salt-
states/blob/production/iis/certificates.sls)

[https://github.com/irtnog/salt-states/blob/production/rd-
gat...](https://github.com/irtnog/salt-states/blob/production/rd-
gateway/certificates.sls)

In some cases there isn't a clean API for certificate installation:

[https://github.com/irtnog/salt-
states/blob/production/rdp/in...](https://github.com/irtnog/salt-
states/blob/production/rdp/init.sls)

In other cases one must run the necessary PowerShell cmdlets under user
accounts with the correct privileges, which is something I haven't quite
figured out (e.g., for Exchange 2007 and newer).

Sometimes the services in question require the keying material be structured
in very specific and slightly odd formats (e.g., Splunk). Sometimes, I just
stick a load balancer in front of it, terminate the TLS connection there, and
call it a day (e.g., Tableau), perfect being the enemy of good and all.

I've given a lot of thought to the question "how would an attacker use CT
logs against me", but I think the probable loss from something untoward
happening due to Certificate Transparency is extremely small compared to
attackers getting onto my networks and eavesdropping on sensitive internal
comms. I'm also not confident in my ability to run my own internal CA, both
because it'd be a lot of work to secure and because the bus factor would be so
damn low. At least this gives the rest of the team some incentive to tie new
things into the configuration management system, because they get certificates
for "free". Maybe there's a better way. I don't know.

------
virgakwolfw
Why not set the A record on the external DNS server to something like 1.1.1.1
while the internal DNS server has the correct IP address (or use a view on the
same DNS server)? This way you won't leak internal IP addresses to the
Internet.

------
Animats
(Deleted. Wrong.)

~~~
detaro
Since when do you give a CA your private keys to get a certificate signed?

~~~
Animats
Right, sorry, forgot how the key generation works.

------
misiti3780
Anyone have a decent process/library for renewals? Let's Encrypt is great,
but the certs need to be renewed every 3 months, which is annoying.

~~~
kissgyorgy
You should automate the process and not manually request certificates.
They've been saying this for a long time. Every client can auto-renew.

~~~
Hackbraten
I've been trying to automate this for a while but don't know how to do it.
I'm using split DNS with BIND. The device in question accepts a private key
and certificate chain but won't allow arbitrary software to run, so I need to
do a manual TXT challenge every 3 months, which is super annoying.

Any pointers?

~~~
kissgyorgy
Lego [1] can do the DNS (dns-01) challenge automatically; you can run it on a
different device and transfer the certs to the device that needs them. This
way you expose the secret key to some danger (secret keys should never leave
the requester machine), but it might be an acceptable risk; it depends.

[1]: [https://github.com/xenolf/lego](https://github.com/xenolf/lego)

~~~
tialaramex
Lego can use a CSR, so you don't need to send private keys anywhere.

(Secret keys and private keys aren't the same kind of thing. The lyrics to
U2's "The Fly" help me remember: "A secret is something you tell one other
person, so I'm telling you" - somebody else needs to know the _secret_ key for
it to work, but nobody at all knows the private key.)

A CSR is a signed document which says "I want a certificate for this identity
with this public key". The signature proves you know the corresponding private
key, but that key isn't transmitted anywhere.

If you're willing to have relatively long-lived keys (2-3 years isn't too
scary for 2048-bit RSA), you can generate a private key and a CSR once, and
have Lego or similar CSR-capable software obtain certificates from Let's
Encrypt every couple of months with that CSR.
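
As an illustration of that one-time setup, a minimal sketch with Go's standard
library (ECDSA P-256 for brevity; the hostname is made up):

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "os"
    )

    func main() {
        // Generate the private key once; it never leaves this machine.
        key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        if err != nil {
            panic(err)
        }
        // The CSR binds the requested name to the public key and is signed
        // with the private key, proving possession without transmitting it.
        tmpl := x509.CertificateRequest{
            Subject:  pkix.Name{CommonName: "device.example.com"},
            DNSNames: []string{"device.example.com"},
        }
        der, err := x509.CreateCertificateRequest(rand.Reader, &tmpl, key)
        if err != nil {
            panic(err)
        }
        // PEM-encoded CSR, reusable for every renewal.
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE REQUEST", Bytes: der})
    }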

------
gerdesj
The article starts: _Let’s Encrypt is a revolutionary new certificate
authority_

... and in my opinion fails at that precise point.

I think that SSL/TLS itself is the correct starting point and not a particular
implementation. The article could have been titled "Using TLS for internal
servers". The LE implementation could have been one of a few.

Don't forget that you can manage your own certificate authority lists and
really ought to do so if you actually give a shit about IT security.
Abrogating your responsibility to MS, Apple, Mozilla, Google, etc. is way too
easy, dangerous, and probably immoral.

