
RIPE's DNSSEC signer migration - fanf2
https://labs.ripe.net/Members/anandb/dnssec-signer-migration
======
tptacek
_The private keys for DNSSEC signing must be stored safely. It is common to
use a hardware security module (HSM) for this. However, this is not the only
way to store keys safely. Keys can be generated and stored on encrypted disks,
and if access to a server is restricted and limited to only certain functions,
then it is acceptable to use such a setup._

What? No, those are not comparable solutions at all. Rather, the latter
solution, of relying on Linux FDE, _defines away the problem that HSMs solve_.

------
CaliforniaKarl
> When we introduced DNSSEC to all our zones back in 2005, we did this using
> some home-grown perl scripts.

I am simultaneously cringing while nodding my head in approval.

> In 2009, we switched from these perl scripts to a signer product developed
> by a company called Secure64. This is a dedicated signer solution that runs
> on Secure64's SourceT operating system, on HP servers with Itanium
> processors.

I'm not going to malign them for choosing a proprietary product. Back when I
did DNS admin for a company, I didn't trust myself to handle all the vagaries
of DNSSEC, and so I relied on our Infoblox appliances to handle it, which they
did seamlessly.

Itanium, though…

> We settled on standard Knot DNS for our signing solution…

Interesting! I haven't heard of this before, and it looks promising. Will
keep it in mind if I ever end up needing to choose an authoritative DNS
server.

> …and we have found the support from CZ.NIC to be really good.

Very important. Helps keep the product alive.

> …use of an HSM might actually be overkill. We have therefore decided to keep
> our setup simple, by adopting a design that uses a regular Linux server
> running Knot DNS, with keys on an encrypted disk partition.

Hmmmmm, I wonder how they set that up!

Just as a fun thing, I experimented with using a chain of systemd units to
handle the unlocking and mounting of an encrypted EBS volume. It went like
this:

1. A oneshot service which attached an EBS volume on start, and detached it
on stop.
2. A mount unit that mounted the filesystem that contained the encrypted
(wrapped) key.
3. A oneshot service which called KMS to "unwrap" the key into a RAM-based
filesystem, and then called LUKS to unlock the encrypted filesystem.
4. A mount unit that mounted the unlocked filesystem.

(Note that AWS Secrets Manager didn't exist at the time, so I couldn't use it.
Also, I didn't want to rely on external infrastructure like a Vault.)

It all worked because, instead of putting entries into `/etc/fstab`, systemd
lets you create `mount` units that can have the same dependency
relationships as services.
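For illustration, that four-step chain might look something like the units
below. This is only a sketch of the idea, not my original setup: the unit
names, mount points, device paths, volume IDs, and the exact KMS invocation
are all hypothetical.

```ini
# ebs-attach.service -- step 1: attach the EBS volume on start, detach on stop
[Unit]
Description=Attach the signing-key EBS volume

[Service]
Type=oneshot
RemainAfterExit=yes
# vol-0123/i-0456 are placeholders, not real IDs
ExecStart=/usr/bin/aws ec2 attach-volume --volume-id vol-0123 --instance-id i-0456 --device /dev/sdf
ExecStop=/usr/bin/aws ec2 detach-volume --volume-id vol-0123

# srv-wrapped.mount -- step 2: mount the fs holding the wrapped (encrypted) key
# (systemd requires the unit name to match the mount path: /srv/wrapped)
[Unit]
Requires=ebs-attach.service
After=ebs-attach.service

[Mount]
What=/dev/xvdf
Where=/srv/wrapped
Type=ext4

# luks-unlock.service -- step 3: unwrap the key via KMS into tmpfs, open LUKS
[Unit]
Requires=srv-wrapped.mount
After=srv-wrapped.mount

[Service]
Type=oneshot
RemainAfterExit=yes
# /run is a tmpfs, so the plaintext key never touches persistent storage
ExecStart=/bin/sh -c 'aws kms decrypt --ciphertext-blob fileb:///srv/wrapped/key.enc --output text --query Plaintext | base64 -d > /run/luks-key'
ExecStart=/usr/sbin/cryptsetup open /dev/xvdg keys --key-file /run/luks-key
ExecStartPost=/bin/rm -f /run/luks-key
ExecStop=/usr/sbin/cryptsetup close keys

# srv-keys.mount -- step 4: mount the unlocked filesystem
[Unit]
Requires=luks-unlock.service
After=luks-unlock.service

[Mount]
What=/dev/mapper/keys
Where=/srv/keys
Type=ext4
```

Stopping the last unit in the chain tears everything down in reverse order,
which is exactly the cleanup behaviour you want for key material.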

> Our old Secure64 signers do not export their private keys. This means that
> in order to migrate to the new signers, we will have to perform a KSK roll-
> over. Therefore, we will pre-publish DS records of KSKs from our new
> signers.

A KSK rollover? That's always 'fun', and I wish them luck! Although, it'll be
a lot easier for them, compared to (for example) rolling over the root's KSK:
see https://www.apnic.net/manage-ip/apnic-services/dnssec/keyroll and
https://www.icann.org/news/announcement-2017-09-27-en and
https://www.icann.org/resources/press-material/release-2018-09-18-en which
OMG just came out today (/me goes to post it).
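For anyone curious what "pre-publishing DS records" involves mechanically: a
DS record is just a digest of the new KSK's DNSKEY, so it can be computed and
handed to the parent zone before the key is ever used for signing. Here's a
rough sketch of the RFC 4034 key-tag and RFC 4509 SHA-256 DS digest
computations; the DNSKEY RDATA below is a made-up placeholder, not a real key.

```python
import hashlib
import struct

def name_to_wire(name: str) -> bytes:
    """Convert a domain name to lowercased DNS wire format (RFC 4034 canonical form)."""
    out = b""
    for label in name.rstrip(".").lower().split("."):
        out += bytes([len(label)]) + label.encode("ascii")
    return out + b"\x00"

def key_tag(rdata: bytes) -> int:
    """Key tag over the DNSKEY RDATA, per RFC 4034 Appendix B."""
    ac = 0
    for i, b in enumerate(rdata):
        ac += (b << 8) if i % 2 == 0 else b
    ac += (ac >> 16) & 0xFFFF
    return ac & 0xFFFF

def ds_sha256(owner: str, dnskey_rdata: bytes) -> str:
    """SHA-256 DS digest (digest type 2): hash of owner name + DNSKEY RDATA, per RFC 4509."""
    return hashlib.sha256(name_to_wire(owner) + dnskey_rdata).hexdigest().upper()

# Hypothetical KSK: flags=257 (KSK), protocol=3, algorithm=8 (RSASHA256),
# followed by a dummy all-zero public-key blob -- for illustration only.
rdata = struct.pack("!HBB", 257, 3, 8) + bytes(64)
print(key_tag(rdata), ds_sha256("example.net.", rdata))
```

The resulting digest (plus the key tag, algorithm, and digest type) is what
goes into the parent zone as the DS record, which is why it can sit there
ahead of the actual rollover.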

