
Automatic SSL Certificates for internal IPs for a home k8s setup using Let's Encrypt - gcds
https://www.techprowd.com/automatic-ssl-certificates-for-home-microk8s-setup-using-letsencrypt/
======
lgbr
Cert-manager has great support for a number of providers[0] including AWS,
CloudFlare, Google Cloud, and Azure.

I recommend this not just for internal IP setups, for actually for all setups,
since DNS verification is more robust than HTTP verification, particularly if
you have issues with load balancers, or if Let's Encrypt decides to deprecate
a protocol again [1].

[0] [https://cert-manager.io/docs/configuration/acme/dns01/#supported-dns01-providers](https://cert-manager.io/docs/configuration/acme/dns01/#supported-dns01-providers)

[1] [https://community.letsencrypt.org/t/upcoming-tls-sni-deprecation-in-certbot/76383](https://community.letsencrypt.org/t/upcoming-tls-sni-deprecation-in-certbot/76383)
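
As a sketch, a DNS-01 issuer in cert-manager looks roughly like this (assuming a current cert-manager and Cloudflare as the provider - substitute any of the supported ones; the email and secret names are placeholders):

```yaml
# ClusterIssuer sketch for DNS-01 verification - names/email are placeholders
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-dns
spec:
  acme:
    email: admin@example.com
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-dns-account-key
    solvers:
      - dns01:
          cloudflare:
            apiTokenSecretRef:
              name: cloudflare-api-token
              key: api-token
```

Certificates (or Ingress annotations) referencing this issuer then get verified entirely via DNS, with no port-80 exposure needed.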

~~~
z3t4
Verification via DNS is not without issues. If you have more than one DNS
server, the verification record needs to propagate to all of them. If you for
example use anycast DNS you will run into issues. Letsencrypt uses Google name
servers for lookups, which is problematic because they do not behave well: they
will, for example, not try secondary DNS servers if the first try fails, making
the Letsencrypt verification also fail. Because of these issues, if you have
many domains you will quickly hit Letsencrypt's rate limits.

~~~
throw0101a
> _If you for example use anycast DNS you will run into issues._

You're not wrong, but this assumes that you use your 'service hostname'
for verification as well, rather than using CNAMEs.

So let us say you want to have "svc1.example.com" in your cert: you
_could_ put the ACME challenge under there, but if you have anycast delays
that's a problem (as you mention). (A kludge is putting a 'sleep' somewhere to
allow for propagation.)

So instead what you can do is have "_acme-challenge.svc1.example.com" be a
CNAME that points to (say) "_acme-challenge.svc1.dnsauth.example.com".
This sub-domain is not anycast, and may actually be a single machine that is
used solely for this purpose.

The LE/ACME server goes to your main domain, finds a CNAME, and follows that
to the real record and verification is achieved:

* [https://www.eff.org/deeplinks/2018/02/technical-deep-dive-securing-automation-acme-dns-challenge-validation](https://www.eff.org/deeplinks/2018/02/technical-deep-dive-securing-automation-acme-dns-challenge-validation)

* [https://dan.langille.org/2019/02/01/acme-domain-alias-mode/](https://dan.langille.org/2019/02/01/acme-domain-alias-mode/)

* [https://github.com/acmesh-official/acme.sh/wiki/DNS-alias-mode](https://github.com/acmesh-official/acme.sh/wiki/DNS-alias-mode)

The CNAME has to be set up initially, but can be left lying around otherwise.
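
In zone-file terms (BIND-style syntax; the names and token are illustrative), the standing CNAME plus the per-issuance TXT record might look like:

```
; public example.com zone (anycast) - set up once, never touched again
_acme-challenge.svc1.example.com. IN CNAME _acme-challenge.svc1.dnsauth.example.com.

; dnsauth.example.com zone (single non-anycast box) - rewritten on each issuance
_acme-challenge.svc1.dnsauth.example.com. IN TXT "<acme-challenge-token>"
```

The anycast zone never changes during issuance, so propagation delay stops mattering.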

This is how $WORK deals with getting LE certs for internal domains: we create
a CNAME record (but no A records) for the internal hostname in our external
DNS that points to our "dnsauth" domain, which gets updated by internal clients
via an API.

~~~
e12e
Oh, this is an interesting trick... I think I'll need to investigate further.

Do you use a custom acme/dns updater for automatic renewals?

[ed: ie - if I understand correctly, I could point _acme-challenge.example.com
via CNAME to auth.other.example.net - but then I'd like a command to
check/renew my example.com certs - and it would ideally use an api/dns update
to manipulate the auth.other.example.net TXT (or CNAME to something like
a15ce5b2-f170-4c91-97bf-09a5764a88f6.auth.acme-dns.io) record when I ask for a
check/update of '*.example.com' certificate.

As far as I'm aware, most tooling assumes that you can/will (programmatically
or manually) update the _acme-challenge.example.com record directly when
issuing/updating an example.com certificate?]

~~~
throw0101a
The way it works for us:

First we use standard LE/ACME clients: either certbot or dehydrated. They ask
for something like _svc1.int.example.com_ ($DOMAIN).

In the hook script(s)† we manipulate the $DOMAIN string to put it into
_dnsauth.example.com_ ($AUTH_ZONE) sub-domain and send that new string to the
DNS server that handles the _dnsauth_ zone (and only that).

Before all of this we would have set up, in our public-external DNS, a CNAME
record to point _svc1.int_ to _svc1.int.dnsauth_.

The ACME client only thinks about $DOMAIN and the cert-issuing LE server only
thinks about $DOMAIN. But the "in between" does not: by doing text
manipulation (expr(1) is handy in shell scripts), and DNS redirects, the "in
between" uses not-$DOMAIN for verification, but rather TXT records in
$AUTH_ZONE.

All with standard ACME clients and some jiggery pokery.

We ended up creating some custom scripts called via SSH, but there are (now)
DNS servers written specifically to handle REST API calls [0] and one can use
lexicon [1] for just about any commercial DNS service.

[0] [https://github.com/joohoi/acme-dns](https://github.com/joohoi/acme-dns)

[1] [https://github.com/AnalogJ/lexicon](https://github.com/AnalogJ/lexicon)

† _dehydrated_ has _deploy_challenge()_ and _clean_challenge()_ functions in
its example hook script. I'm sure most ACME clients have something similar.
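
A minimal sketch of what such a hook might look like, using dehydrated's hook-function convention. The AUTH_ZONE name and the ssh/dns-update.sh helper are illustrative stand-ins, not the actual $WORK scripts:

```shell
#!/bin/sh
# Sketch of a dehydrated-style hook: rewrite $DOMAIN into the dnsauth
# sub-domain and push the challenge TXT record there.
AUTH_ZONE="dnsauth.example.com"

deploy_challenge() {
    domain="$1"; token_value="$3"
    # svc1.int.example.com -> svc1.int.dnsauth.example.com
    auth_name="$(expr "$domain" : '\(.*\)\.example\.com').$AUTH_ZONE"
    # A real hook would now push the record to the dnsauth DNS server, e.g.:
    #   ssh dns@authbox dns-update.sh add "_acme-challenge.$auth_name" "$token_value"
    echo "_acme-challenge.$auth_name TXT $token_value"
}

clean_challenge() {
    domain="$1"
    auth_name="$(expr "$domain" : '\(.*\)\.example\.com').$AUTH_ZONE"
    #   ssh dns@authbox dns-update.sh del "_acme-challenge.$auth_name"
    echo "cleaned _acme-challenge.$auth_name"
}

# usage: dehydrated calls deploy_challenge DOMAIN TOKEN_FILENAME TOKEN_VALUE
deploy_challenge "svc1.int.example.com" "ignored" "token123"
```

The ACME client and LE only ever see $DOMAIN; the expr(1) rewrite happens purely inside the hook.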

~~~
e12e
Ok, thank you for the details and managing expectations. This still seems to
warrant some experimentation.

In my case I'm mostly interested in delegating a domain/sub-domain somewhere I
can easily update (be that run my own dns, host it somewhere with an api) -
while having my main domains on a more boring/static dns infrastructure - yet
still easily get certs for things like imap.example.com - which would not run
a web server. And also split cert renewal to vps/container isolated from
things like smtp/imap that need the certs.

~~~
throw0101a
> _yet still easily get certs for things like imap.example.com - which would
> not run a web server._

Well, depending on the OS, you could start up a web server during the LE
verification process and then bring it down once that's done. You'd only have
to run it on port 80 for probably less than a minute.

But yes, you could use this mechanism to have "_acme-challenge.imap.example.com"
(which is what the ACME protocol uses) be a CNAME that points to
_something.auth.example.com_, which is more dynamic. Or even a completely
different domain like _foo.bar.example.ORG_.

In your _example.com_ zone file you'd put NS and A/AAAA records to point to
the DNS server that handles the queries for the _auth_ sub-domain.
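
As a sketch of that delegation (BIND-style syntax; the names and the TEST-NET address are illustrative):

```
; in the example.com zone: delegate the auth sub-domain to its own server
auth.example.com.                 IN NS    ns1.auth.example.com.
ns1.auth.example.com.             IN A     192.0.2.53   ; glue record
; the service's challenge name then points into the delegated zone
_acme-challenge.imap.example.com. IN CNAME something.auth.example.com.
```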

> _And also split cert renewal to vps /container isolated from things like
> smtp/imap that need the certs._

It's easier to run the ACME client on the host in question, and I'm not sure
what it gains you to have it run somewhere else. That being said, there are
ACME clients with a bit of a focus on being run 'remotely' from where the
certs actually live:

* [https://github.com/srvrco/getssl](https://github.com/srvrco/getssl)

This is probably for shared-hosting scenarios where _cron_ is not accessible.

IMHO though, if you have access to the CLI on the host running the TLS
service, it's best to run things there.

~~~
e12e
Re: your latest point - typically I'd like imap/smtpd to run in separate
static containers/vms with _read_ access to the cert, but not write (and a
volume or db to write emails to, etc).

In general I'd prefer the certs be something the services get via
configuration management - while the cert service can run via cron and make
sure certs are valid and present.

In particular, I don't want my smtpd server to have write access to my dns, if
I can help it.

------
windexh8er
I do this with Traefik [0] internally in almost the same way. I use DNS-01 to
get a Let's Encrypt wildcard cert and all my internal A records point to the
ingress IP and Traefik happily proxies the communications to the appropriate
service - container based and non-container based - which is the real win I
was looking to solve for in my home environment. The thing I like about just
using Traefik is it doesn't rely on a lot of extraneous tooling (can just use
Docker without Swarm/K8s) and will automatically consume orchestration
services if I'd like it to. But the reality is the majority of things I want
valid certs for are static mappings. One config file update of a few new lines
of boilerplate is all it takes to get a valid cert fronting any service. And
then to get a dashboard of all my internal services I use Heimdall [1].

[0] [https://docs.traefik.io/](https://docs.traefik.io/) [1]
[https://heimdall.site/](https://heimdall.site/)
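
For reference, a sketch of that kind of setup with Traefik v2's static configuration (Cloudflare is just an example DNS-01 provider here; the email, storage path, and provider are placeholders to adapt, and may differ from the poster's exact setup):

```yaml
# traefik.yml (static config) - illustrative sketch
entryPoints:
  websecure:
    address: ":443"

certificatesResolvers:
  letsencrypt:
    acme:
      email: admin@example.com          # placeholder
      storage: /letsencrypt/acme.json
      dnsChallenge:
        provider: cloudflare            # expects provider credentials via env vars
```

Each router (static file or Docker label) then just sets `certResolver: letsencrypt` and requests the wildcard in its TLS domains.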

~~~
j45
Appreciate this breakdown, I've been using an nginx proxy to hold a few things
over and wanted to move in the direction of a managed service, which Traefik
looks perfect for.

As time goes on, having services run as appliances is becoming more and more
valuable.

------
aforwardslash
Keep in mind, adding local entries to your external DNS will expose internal
details of your network, such as hostnames and IPs. Same goes for Let's
Encrypt, due to Certificate Transparency logging.

~~~
iso1210
While you'll get the hostnames leaked, you could register them with fake
addresses (say, an A record of 192.168.0.1 for every name), and have a
local DNS server overriding them with the real addresses.

Whether this is worthwhile or not is debatable. Is the fact your internal
server 'gubbins.mydomain.com' exists, or even that it exists on 10.0.41.43
really much use?

The other option for internal certificates is to get a wildcard of
*.internal.mydomain.com, and spread that wildcard certificate around your
network.

The final solution is to run your own certificate authority and trust it on
every browser. For some reason, when you import a root certificate you
typically can't restrict that CA to only be used to authenticate a given
subdomain. There are X.509 name constraints you can use when setting up the
CA, but that's rare too, and I'm not sure every tool honours them.
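
Those name constraints can be sketched in an OpenSSL config fragment like this (assuming OpenSSL's x509v3_config syntax; the domain is illustrative, and as noted, client support for nameConstraints is uneven):

```
# openssl.cnf fragment for a constrained internal CA (illustrative)
[ v3_constrained_ca ]
basicConstraints = critical, CA:TRUE
keyUsage         = critical, keyCertSign, cRLSign
# this CA may only issue certs for names under .internal.mydomain.com
nameConstraints  = critical, permitted;DNS:.internal.mydomain.com
```

You'd pass it via `-extensions v3_constrained_ca -config openssl.cnf` when generating the self-signed root.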

In any case, if you go for an internal DNS provision, make sure your internal
DNS server answers NXDOMAIN for use-application-dns.net, to override DoH
too.
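
With Unbound, for example, that canary-domain override is a one-liner (other resolvers have equivalents; use-application-dns.net is Firefox's actual DoH canary domain, not a placeholder):

```
# unbound.conf - answer NXDOMAIN for Firefox's DoH canary domain
server:
    local-zone: "use-application-dns.net." always_nxdomain
```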

~~~
cassianoleal
> Is the fact your internal server 'gubbins.mydomain.com' exists, or even that
> it exists on 10.0.41.43 really much use?

Pretty much this. What does it matter if you know certain hostnames or
internal IPs on my network? It's all firewalled anyway, and if it wasn't it
would be trivial to find them out on your own...

~~~
Xylakant
It may be of interest to attackers who have no visibility into your network,
and it makes cross-site attacks against you easier. For example, if I know that
your router is available at router.network.internal, I might just try to see
if your browser is logged in and send you a link to a page that starts making
requests against that interface. Enumerating network resources can certainly
be done in other ways, but DNS is a particularly easy one.

~~~
cassianoleal
Interesting but 192.168.0.1 or 192.168.1.1 would probably cover 80% of that
use case.

This is just security by obscurity. It doesn't add much if you already
implement actual security in the form of firewalls, network segregation, etc.

~~~
Xylakant
I agree that it's certainly not a substitute for a proper security setup, but
it definitely makes an attacker's life easier - up to the point where it may
get interesting to automate this. So as always, it's a tradeoff. Both options
may be acceptable (and I've run a setup where I had internal names exposed
on public DNS for years), but you need to be aware of the tradeoff to make
that judgement.

------
Schwan
It's TLS, not SSL. It's been TLS for a long time now...

And yes, be aware that while this works fine, you are also exposing your
internal infrastructure details through DNS.

I'm not seeing a big issue, just be aware of it.

~~~
teh_klev
> It's TLS, not SSL. It's been TLS for a long time now...

Sure, that's technically correct but a wee bit overly pedantic. When technical
people speak about SSL/TLS certificates it's common parlance to say "SSL" and
everyone usually knows what you're talking about, which includes TLS, and
whatever other new acronym might come down the pipe in the future.

~~~
Schwan
You know, I do get this, but we are in IT, not in Marketing.

My most used skill is making sure I'm pedantic, aka 'so we need to calculate
this from that after this? and we need accuracy of 0.32? And what should that
button do exactly?'

It is not SSL, it's TLS, and I don't expect everyone to get it, but it's still
wrong.

The weirdest thing in IT is that I don't know any other word which is so
misused as SSL.

------
danShumway
I wrote a similar post about a year ago[0], but even at the time I wasn't the
first to come up with this idea. As someone who doesn't have a lot of
experience with DNS security, seeing other people float similar setups
without significant pushback gives me more confidence that the core idea isn't
horribly unsafe. I'm pretty happy/relieved to see other people playing around
in the same space.

My perspective was (and is) that for portable devices (phones/laptops) that
are interfacing with locally hosted services, having SSL for those services is
really important because your device probably isn't configured to check what
network it's on before automatically pinging 192.168.1.x. This is doubly
important if you have other people occasionally hopping onto your network and
connecting to those same services. It's imo bad practice to ask everyone
connecting to your network to install certificates or set up a certificate
manager. I wouldn't do that for any of my personal devices if someone asked me
to.

To push this a step farther, I imagined a world where my services could handle
not just renewing their own certificates, but also updating their addresses if
they were moved to a different network/address. If I build a physical device
to give to someone, I'd like them to be able to plug it into their network, go
to a web URL, and have everything just work -- no messing around with their
internal DNS settings or worrying about whether they're using DNS over HTTPS
in Firefox.

[0]: [https://danshumway.com/blog/encrypting-internal-networks/](https://danshumway.com/blog/encrypting-internal-networks/)

------
swiley
I’ve tried to set up kubernetes at home a couple of times and I always freak
out at the amount of layers and “just run this” style of tutorials. Am I
crazy?

I’ve heard guix has some kind of container management thing. I’ve been
thinking about trying it anyway.

~~~
pas
k8s needs a control plane, that needs security, hence all the tokens, certs
(which need internal and external IPs and FQDNs), also it needs to set up an
overlay network (so you need to configure the CNI provider, sysctl stuff for
ebtables and iptables/nftables to work correctly), and DNS, and a dashboard
would be nice too. oh, and unless you use k3s or something that bundles a
container runtime (CRI provider) you need to set one up (eg docker).

it's understandably complex, even if many parts are pretty standard (eg. the
sysctl stuff, and installing dependencies is basically dnf/yum/apt/apk or exit
and let the user do it).

since the most error-prone part was/is setting up the TLS stuff, that got
automated first (in the form of kubeadm), and the rest just remains in
"run this" form.

but the k3s installer is just a one liner call to a bash script. though then
you have to make sure to include the magic env vars to get what you want.

~~~
DenseComet
The k3sup project [1] takes this a step further and makes installing k3s even
easier. k3s has been the most useful piece of infra I run at home. It gives me
all the benefits of k8s with none of the complexity.

[1] [https://github.com/alexellis/k3sup](https://github.com/alexellis/k3sup)

------
viro
Honestly this feels overly complex when you can just create a CA and add the
CA to your devices. Still cool though.

~~~
throw0101a
We looked into this at $WORK, but it can be slightly annoying as you have to
create a workflow for each operating system's trust store, but you _also_ have
to deal with many browsers independently as well, since many of them don't use
the OS' trust store.

------
Naac
I just created a wildcard cert with Let's Encrypt in the format of
*.internal.mydomain.com

My public services all run out of *.mydomain.com and all my internal services
run out of *.internal.mydomain.com

I have my internal DNS set to resolve any *.internal calls to an internal load
balancer which hosts the certs.

The downside is that all internal services are SSL-terminated at the load
balancer, but this makes handling internal certs easy as they're rotated in a
single location. This is Good Enough for my homelab.

------
alexellisuk
inlets with the inlets-operator [0] does this by using the HTTP01 challenge,
and gives you a LoadBalancer just like you'd have on AWS. The benefit is that
you get a real IP and routable traffic, there's no tricks required. It would
also work with DNS01 if that's of interest.

[0] [https://github.com/inlets/inlets-operator](https://github.com/inlets/inlets-operator)

Feel free to check it out in this tutorial: [https://docs.inlets.dev/#/get-started/quickstart-ingresscontroller-cert-manager](https://docs.inlets.dev/#/get-started/quickstart-ingresscontroller-cert-manager)

------
guerby
I haven't tried it yet but if you have control of your DNS and want
automation:

[https://github.com/joohoi/acme-dns/](https://github.com/joohoi/acme-dns/)

[https://github.com/joohoi/acme-dns-certbot](https://github.com/joohoi/acme-dns-certbot)

A simplified DNS server with a RESTful HTTP API to provide a simple way to
automate ACME DNS challenges.

------
user5994461
This page is raising a ton of security alerts:

"NoScript detected a potential Cross-Site Scripting attack from
[https://www.techprowd.com](https://www.techprowd.com) to
[https://carbon.now.sh"](https://carbon.now.sh")

Images are failing to load too. Not sure what's going on.

edit: Probably some misuse of DNS rather than an actual attack, but who knows.
The author should fix the site.

~~~
Xylakant
I think that's NoScript being overprotective. carbon.now.sh is a site that
renders nice terminal sessions as HTML that you can include as iframes -
better than screenshots, because you can actually copy the code. As part of
that request, the shell code is passed in the query string. I haven't
investigated, but NoScript may be triggering on that.

------
Hitton
It's not a certificate for an internal IP address, it's a certificate for a
host name. The IP address is irrelevant here.

------
phrygian
I use step-ca [0] for this sort of thing and it works brilliantly. I barely
see the point of having external DNS servers resolve your internal
infrastructure.

[0] [https://smallstep.com/certificates/](https://smallstep.com/certificates/)

~~~
samgranieri
I thought about that but passed because I didn't feel like telling all my
browsers to trust that new CA. Yes, that's incredibly lazy.

I bought a real domain name, told my UBNT USG that was the domain for my
network, set up the DNS servers to use DigitalOcean, used Jetstack's cert-
manager [0] to acquire a wildcard cert using DNS01 instead of HTTP01, and
use kubed [1] to synchronize the TLS cert across namespaces. One key thing to
consider: you really should use the staging Let's Encrypt server to test out
issuance and see your browser complain about warnings before you switch to
the production Let's Encrypt.

Honestly, I don't mind that the cert requests for my domain show up in a CT
log.

[0] [https://cert-manager.io/](https://cert-manager.io/)

[1] [https://cert-manager.io/docs/faq/kubed/](https://cert-manager.io/docs/faq/kubed/)

------
rackforms
No association whatsoever to 'em, but I so dearly love what they do that I'd
encourage users to donate to keep them going healthy and strong!

[https://letsencrypt.org/donate/](https://letsencrypt.org/donate/)

------
ttouch
I did that, the very hard way (I didn’t know better at the time):
[https://whynot.fail/homelab/lets-encrypt-the-house/](https://whynot.fail/homelab/lets-encrypt-the-house/)

------
digitalsanctum
Another alternative is inlets which automates all of the steps necessary and
offers Layer 4 as well as Layer 7:
[https://docs.inlets.dev/#/](https://docs.inlets.dev/#/)

------
varbhat
Instead of using hacky, fragile methods, use

[https://github.com/FiloSottile/mkcert](https://github.com/FiloSottile/mkcert)

to automate setting up local CA and making it trusted.

~~~
Xylakant
mkcert may work fine if you're the only person using those network resources,
on a single machine, ever. Otherwise you've just bought yourself a trust
management problem: now you need to secure the key of that CA and distribute
the CA certificate to all devices that should trust it. This may or may
not be trivial.

The fundamental problem is that this CA that you generated gets basically the
same trust level as a public CA, but it's just sitting there on your machine.
An attacker could use it to generate certificates for almost every site and
your devices would trust them. That's probably ok if only your machine trusts
that CA since if the attacker rooted your box to the point that they gained
access to that CA key, all is lost anyways. In a network with other devices -
maybe even not under your direct control - that tradeoff looks substantially
different.

------
berbec
Why not just get a wildcard LE cert and not worry about it?

------
aasasd
TLDR:

- have a proper worldwide domain

- obtain a certificate for that domain

- point the domain to local IPs in your network and use the certificate on
the local server.

Doesn't change that you'd need to self-sign certs for .local or other funky
domains.

~~~
tialaramex
Probably just give things which need names actual globally unique names from
the Internet's DNS hierarchy.

More controversially I think you should give things globally unique addresses
from the Internet's global address system but it's more important to at least
give them _names_ from the globally unique system even if you insist on using
RFC1918 numbers.

------
jimueller
split dns is typically the solution for this, is it not?

