
Ask HN: In a microservice architecture, how do you handle managing secrets? - kkamperschroer
I'm evaluating solutions for secrets management in a distributed microservice architecture and am curious to hear what everyone else out there does. Some options I've considered:

- Git-crypt and deploying secrets along with binaries

- HashiCorp Vault

- Square Keywhiz

- AWS KMS

- Lyft Confidant

- Roll your own

All seem to have pros and cons depending on use cases and how mission-critical the service you are offering is.

So what do you do to solve this problem in your world?
======
wsargent
You should check out Daniel Somerfield's talk at OWASP, "Turtles All the Way
Down: Storing Secrets in the Cloud and the Data Center"

[https://youtu.be/OUSvv2maMYI](https://youtu.be/OUSvv2maMYI)

~~~
kkamperschroer
Fantastic. Thank you!

------
gtaylor
We use Kubernetes, which includes its own secrets API:

[http://kubernetes.io/v1.1/docs/user-guide/secrets.html](http://kubernetes.io/v1.1/docs/user-guide/secrets.html)

I can't remember which issue this was on, but it seemed like there was some
discussion on their GitHub project about making pluggable secrets backends
(HashiCorp's Vault was mentioned).

Kubernetes' secrets API is still very basic, but I think the fundamental
concept is very sound and has a great foundation to continue building on.
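
For reference, a minimal Secret manifest looks something like this (the name and value are made up; note the `data` values are base64-encoded, which is encoding, not encryption):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
data:
  # base64 of "s3cr3t" -- decode with: echo czNjcjN0 | base64 -d
  password: czNjcjN0
```

You load it with `kubectl create -f` and consume it in pods as a secret volume.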

~~~
theptip
Are the k8s Secrets still stored in plaintext in the etcd datastore? Seems
that this feature is a bit half-baked right now -- though I'd love to be
mistaken on this point. The k8s docs mention shredding your apiserver hard
drives once you're done with them; that's hardly feasible in a cloud
environment.

Also on access control, any process with root on any node in your cluster can
get access to all your secrets (since the kubelet needs to be able to do so).
There are no user access controls either; any cluster admin can dump all the
secrets.

This stuff is clearly documented, so it's not an indictment on k8s; I just get
the feeling that the feature isn't really ready for production use yet.

~~~
ibotty
afaik the only part of kubernetes accessing etcd is the master. Nodes don't
need to, and can't, access etcd directly.

That still leaves the secret in plain view on the nodes that run the pod that
needs the service. It would be great to be able to unmount the secret when
it's not needed anymore.

~~~
theptip
Correct, the etcd instance is only accessed by the master, which uses etcd to
back the apiserver. But any root process on the nodes can access the secrets
through the apiserver (there's no access control at this point).

------
steveb
We have started work on exposing Hashicorp Vault secrets via FUSE and Docker
volumes. The expectation is your containers will just mount secrets via a
mount like /secret in the container.

The project is brand new and we'd love to hear your feedback:
[https://github.com/asteris-llc/vaultfs](https://github.com/asteris-llc/vaultfs)

~~~
kkamperschroer
Nice! The FUSE mounting method for obtaining secrets is similar to how Keywhiz
does this. Very cool and novel solution. Though, if you are unfortunate enough
to still have Windows servers in your architecture, I think you're out of
luck.

------
esher
We run a PHP hosting platform, and this problem has bugged us as well. We were
especially bothered by the common wisdom that storing secrets in ENV vars is a
good idea — in PHP those vars are easily exposed. See our blog post:
[http://blog.fortrabbit.com/how-to-keep-a-secret](http://blog.fortrabbit.com/how-to-keep-a-secret) — there we suggested:

1. Create a secret key and store it with the code of your app.

2. Store the encrypted credentials in env vars.
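
A rough sketch of that pattern with openssl (the names and values here are made up; it's an illustration, not our exact implementation):

```shell
# One time: generate a key; this value ships with the app code, not the env
KEY=$(openssl rand -hex 32)

# Encrypt the credential; only this ciphertext goes into the env var
DB_PASS_ENC=$(printf '%s' 'p4ssw0rd' | openssl enc -aes-256-cbc -pbkdf2 -a -A -pass "pass:$KEY")

# At runtime the app combines the env var with the key deployed with the code
DB_PASS=$(printf '%s' "$DB_PASS_ENC" | openssl enc -d -aes-256-cbc -pbkdf2 -a -A -pass "pass:$KEY")
```

An attacker who can only read the environment (e.g. via phpinfo) sees ciphertext and still needs the key from the code.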

Later on we even launched our own solution for our clients, an app_secrets.yml
file, which can be edited via Dashboard.
[http://help.fortrabbit.com/secrets](http://help.fortrabbit.com/secrets)

The nice thing is that this file is partly managed by the platform for its own
credentials and partly by the user.

That has been running for a while now. The adoption rate has been low so far.
It turned out that not everything fits into that ONE vault: Blackfire.io and
NewRelic run as PHP extensions, so their API keys are stored with the
extension settings.

We also discussed implementing some open source "Secrets as a Service", but
came to the conclusion that it can too easily turn into a SPOF.

I am amazed that this topic is getting discussed again and I have learned
about many new concepts here.

------
jvehent
I wrote some of my thoughts on the topic, and the primary motivation behind
SOPS [1] (which uses PGP and KMS):
[https://jve.linuxwall.info/blog/index.php?post/2015/10/01/In...](https://jve.linuxwall.info/blog/index.php?post/2015/10/01/Introducing-Sops%3A-an-editor-of-encrypted-file-that-uses-AWS-KMS-and-PGP)

The initial trust problem boils down to trusting the API that controls the
provisioning of your infrastructure. Failing that, you have to ask a human to
manually authorize new nodes to retrieve secrets (that's how puppet approves
new agent certs).

[1] [https://github.com/mozilla/sops](https://github.com/mozilla/sops)
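
For the curious, sops can route files to keys via a `.sops.yaml` checked into the repo; a hypothetical example (the ARN and fingerprint are placeholders):

```yaml
# .sops.yaml -- files matching the regex get encrypted to these keys
creation_rules:
  - path_regex: secrets/.*\.yaml$
    kms: arn:aws:kms:us-east-1:111122223333:key/00000000-0000-0000-0000-000000000000
    pgp: 0000000000000000000000000000000000000000
```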

------
room271
A simple solution if you are in AWS is S3 with instance profiles for access.

~~~
samstave
Yes, but is there a succinct howto on this?

~~~
ladon86
The short version is:

1) Create an S3 bucket. Remove all permissions from it

2) Create an IAM role - give it explicit read permissions to just that bucket
(there's a HOWTO at the bottom of this article:
[http://mikeferrier.com/2011/10/27/granting-access-to-a-singl...](http://mikeferrier.com/2011/10/27/granting-access-to-a-single-s3-bucket-using-amazon-iam/)).
When you start an EC2 instance, you can give it one (and only one) IAM
instance role.

3) Put your secrets or configs in a file on that bucket. For example,
config.json or whatever format you choose.

4) On your instance or container, use the aws-cli when your app starts to
copy that file down from S3, read it into memory in your application, and
then delete it.

It's a bit of a hack but you can now easily restrict access to that secrets
bucket, and only your running instances/containers can access it. The secrets
only exist in running app memory. Now don't allow SSH access to those
instances :)
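
Step 4 as a hypothetical startup script (the bucket and field names are made up, and the `aws` call is commented out since it only works with instance credentials; a stand-in file takes its place):

```shell
#!/bin/sh
set -e

# On a real instance, the instance profile supplies the credentials:
# aws s3 cp s3://acme-app-secrets/config.json /tmp/config.json
printf '%s' '{"db_password": "p4ss"}' > /tmp/config.json  # stand-in for the download

# Read the secret into the process environment, then remove the on-disk copy
DB_PASSWORD=$(python3 -c 'import json; print(json.load(open("/tmp/config.json"))["db_password"])')
export DB_PASSWORD
rm /tmp/config.json

# exec ./my-app   # the app reads DB_PASSWORD; nothing is left on disk
```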

~~~
austinjp
I'm somewhat naive regarding S3. If data is in RAM, can you prevent it being
swapped to disk and read by an unauthorised user?

(I guess "RAM" and "disk" are virtual entities, but hopefully the spirit of
the question still applies.)

~~~
ladon86
As the sibling comment to mine points out, the fact that the instance has
access to S3 means it's not actually secure - they could just use the aws-cli
to copy the file back down again. My comment about deleting the file from disk
was a bit silly and doesn't add any true security.

Really, you need to just make sure that the instance is secure. The point of
this whole setup is not to make secrets unobtainable if someone compromises
your app server; it is to prevent you from checking in production database
passwords and secrets to your code repository.

------
TomFrost
We have an open source solution called Cryptex[0] to handle this. It's better
explained by this blog post[1] that gives the thinking and configuration
necessary for most scenarios.

[0]: [https://github.com/TechnologyAdvice/Cryptex](https://github.com/TechnologyAdvice/Cryptex)

[1]: [http://technologyadvice.github.io/lock-up-your-customer-acco...](http://technologyadvice.github.io/lock-up-your-customer-accounts-give-away-the-key/)

------
joslin01
There's also blackbox by StackExchange
[https://github.com/StackExchange/blackbox](https://github.com/StackExchange/blackbox)

------
whisk3rs
For AWS users, KMS's GenerateDataKey is a simple way to store secrets locally
in a way that reuses your IAM policies. You can also use grants and
EncryptionContext to restrict the ability to decrypt secrets in a very fine-
grained manner. As a bonus, all decrypts are logged in CloudTrail. The KMS
docs are awful but if you're on AWS then it is worth checking out!
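
The envelope pattern in a nutshell (the key alias is made up, the KMS calls are shown as comments, and openssl stands in for the local AES step):

```shell
# Ask KMS for a data key; the response contains a Plaintext key plus the
# same key encrypted under the master key (CiphertextBlob). Store only the
# CiphertextBlob alongside your data:
#   aws kms generate-data-key --key-id alias/app-secrets --key-spec AES_256
DATA_KEY=$(openssl rand -hex 32)   # stands in for the Plaintext field

# Encrypt the secret locally, then discard the plaintext data key
printf '%s' 'api-token' | openssl enc -aes-256-cbc -pbkdf2 -a -A -pass "pass:$DATA_KEY" > secret.enc

# Later: `aws kms decrypt` the stored CiphertextBlob to recover the data key
# (an IAM-gated, CloudTrail-logged call), then decrypt locally:
openssl enc -d -aes-256-cbc -pbkdf2 -a -A -pass "pass:$DATA_KEY" < secret.enc
```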

------
transitorykris
Conjur ([https://www.conjur.net/](https://www.conjur.net/)) has been working
well for the users of our PaaS. It's a self-hosted commercial product.

------
austinjp
Is there any detailed opinion on good and bad approaches to this problem? It
is clearly a hot topic.

I'm sure each product listed conforms to one from a small set of design
patterns. Has any credible analysis of these designs been published? Are
competing offerings likely to evolve toward a stable converged solution? Or is
there something in this problem that remains fundamentally unsolved?

I appreciate it's turtles all the way down, but I'm wondering if anyone has
proven the merits of some approaches over others, or components within the
approaches at least.

------
salimane
We use
[https://github.com/meltwater/secretary](https://github.com/meltwater/secretary).
The key difference with Secretary is that plaintext secrets are never stored
on disk or otherwise made visible outside the container.

------
mikljohansson
We ended up creating
[https://github.com/meltwater/secretary](https://github.com/meltwater/secretary)
to allow storing encrypted secrets in config files checked into Git by
devteams. The encrypted secrets are passed as env vars through the continuous
delivery pipeline, Mesos/Marathon and into containers. They're then decrypted
and injected into the app environment at runtime, safely inside the container.

At startup the container reaches out to the Secretary daemon that holds the
master keys, using public key crypto to authenticate itself. The Secretary
daemon uses Marathon to authenticate containers (checking their public keys
stored in env vars) and to validate that they're authorized for the specific
secret in question (checking that the encrypted secret is indeed part of the
container's env vars).

This means Marathon is the single source of truth for which container can
access which secrets. The problem then becomes controlling who can make
changes to the Git repo containing the CD config, and how, which is something
GitHub does well with roles, the status/deployment API, and pull requests.

We had a similar problem as some describe with the distribution of the initial
secret (i.e. Vault token) and one time Vault tokens being cumbersome in
dynamic scaling envs. We didn't want the cleartext token ending up in config
files nor in the
[https://github.com/meltwater/lighter](https://github.com/meltwater/lighter)
config we use to drive our continuous delivery pipelines that go into
Mesos/Marathon. We also had some other aspects like

* Wanting to keep secrets, app config and code versions promoted together throughout our deployment pipelines. Seeing secrets as another type of app config, we wanted to track all config and versions for an app in the same way, in the same place, to avoid mismatches or deployment dependencies.

* Wanting to enable our very independent devteams to easily manage secrets for their services, the same way as they manage the app config, versions and rollout of their services. And to delegate management of which service is authorized for which secrets to devteams (with both automated checks for unencrypted secrets, and some gentle manual coaching post-commit).

* Versioning and rolling upgrades for secrets? E.g. how to roll out a new secret in a Marathon rolling upgrade? Creating and managing versioned keys in Vault seemed somewhat cumbersome.

Perhaps something like that could be used to solve your initial secret
distribution problem, or even to handle the secrets themselves until Vault
has solved the initial secret problem?

------
ahelwer
Azure Key Vault! Disclosure: am dev in Azure, although not on this specific
product.

[https://azure.microsoft.com/en-us/services/key-vault/](https://azure.microsoft.com/en-us/services/key-vault/)

~~~
yodon
Azure Key Vault is a great component, but it's a component, not a solution.
By way of example, Key Vault's hardware "secrets check in but they don't
check out" capability is awesome for preventing disclosure of secrets, but if
you don't have a system for adequately managing who/what can use the
contained key to sign messages, all you've done is add a complex and pricey
piece of security theater. (As I mention elsewhere, our primary concern is
making sure whatever secret management we use helps us defend against at
least the early stages of a compromise of our infrastructure.)

------
johnnycarcin
Because we are already using Consul, we went with Vault. We are using it in a
POC-type setup now (only for a few services/scripts), but so far it's been
pretty easy to work with. The API is fairly easy to use, apart from the fact
that there is no search function (or wasn't last time I checked). The
documentation could be better, but since it's a public project and I haven't
submitted anything, I'm not going to bash it :) The fact that it's a single
binary is another thing we liked: just drop it somewhere and run it.

------
late2part
I'm pretty happy with this solution from Strongauth.

[http://keyappliance.strongauth.com/](http://keyappliance.strongauth.com/)

You can secure the root for it with TPM or HSM.

~~~
kkamperschroer
That's an interesting solution. I guess this would really only work if you
are self-hosted, though, right?

Thanks!

------
vasco
We're using Ansible which means we use ansible-vault to store secrets. We
store the encrypted files in S3 and decrypt them on deploy as needed.

~~~
kkamperschroer
So if you potentially need to roll a secret you would just run your deployment
playbook limited to the secrets task?

------
PaulHoule
When I am building stuff in AWS, most of the secrets are for entities that are
access-controlled by AWS and can be passed in through server roles.

------
ejp
Another option for your list, which you'll have to evaluate for your use case:
[https://wiki.openstack.org/wiki/Barbican](https://wiki.openstack.org/wiki/Barbican)

I've been evaluating most of these same options for my use case, but haven't
made any decisions yet.

~~~
kkamperschroer
Nice, thank you! I haven't heard of that one.

------
Crystalin
It also depends on how secret it needs to be. For most of our secrets (those
used for configuration) we use Consul.

~~~
tptacek
I think this can be sane when you don't have multiple privilege levels
anywhere in the data center you're deploying in. It's less sane if you have
less- and more-privileged machines anywhere in the environment, or less- and
more-privileged applications.

You're putting a lot of faith in a very complex and not-well-tested codebase
if you rely on Consul ACLs to protect secrets.

~~~
eropple
The poor state of its testing is the biggest red flag I have about Consul.
I'm more positive about it, in its way, than about other Hashicorp tools like
Packer and Terraform, if only because Consul seems core enough to how they
want to make money that it's more important to them. But there doesn't seem
to be a culture of correctness and strong testing around those tools, and
trusting my sensitive data to a tool as complex and complicated as Consul
worries me. (I feel like something maintaining my cryptographic secrets
should be at least as well-tested as my web framework...)

Of the tools listed in the OP, I feel really good about Square Keywhiz; I'm
still rolling it out in my first environment, so I can't say for sure, but I
appreciate the level of effort that's gone into _only_ doing secret storage
and making sure it is exhaustively tested to spec.

------
malnick
I used Marathon and Mesos and rolled my own pub/priv key encryption for our
developers' JSON (encrypting the ENV parameters POSTed to Marathon):
[https://github.com/malnick/mantle](https://github.com/malnick/mantle)

------
kt9
You can encrypt and deploy secrets using Distelli.

[https://www.distelli.com/docs/user-guides/securing-your-appl...](https://www.distelli.com/docs/user-guides/securing-your-applications)

Disclaimer: I'm the founder at Distelli

~~~
danesparza
This is not a trivial thing. Why should I trust you with my company's secrets?

How do you manage key storage securely? Can people at your company see my
secrets? If somebody comes with a court order will you give them my secrets
and not tell me? What encryption algorithms do you use? What experience do you
have in reducing attack surfaces from internal and external threats? Is any of
your software open source? Has your software been audited? Is it PCI (or any
other standard) compliant?

~~~
kt9
All good questions. We do not store your secrets. You do not give us your
secrets. Your secrets do not live on our servers. No one at our company can
see your secrets or access them.

We provide you with an agent that you install on your own servers and that
agent is marked as a key management server. That agent is contacted to do
asymmetric key encryption.

Here is a more detailed blog post about this:
[https://www.distelli.com/blog/keeping-your-application-secre...](https://www.distelli.com/blog/keeping-your-application-secrets-safe)

Also we use standard encryption algorithms and have not written our own crypto
(and never will).

------
grandalf
What is the tradeoff matrix for these kinds of services? They all seem pretty
similar to me.

------
yodon
It's a huge pain point for us. We're a .NET shop rolling our own that
mimics/overlays app.config and web.config patterns for both dev and
production use. Our concern is less about how you get the secrets to the box
(though that's obviously important) and more about how you keep an attacker
who has started penetrating your infrastructure from gaining control of the
infrastructure that holds your secrets.

~~~
voltagex_
Have you had a look at the new ASP.NET Configuration classes? [1]

I hate having to manage web.config but I get your point about keeping
attackers at bay (and not providing pivot points).

[1]: [http://docs.asp.net/en/latest/fundamentals/configuration.htm...](http://docs.asp.net/en/latest/fundamentals/configuration.html)

~~~
yodon
Thanks - it's clear MSFT is working hard to get to a place where secret
management is a first-class part of the dev process, and we're attempting to
integrate with the classes you mention, but as I understand it they only work
with ASP.NET 5, so you can't use them in console-app-based test harnesses,
Windows services, etc. That means we end up needing a bunch of provider
mechanisms, all essentially the same in principle but with different
implementations for the different platform details. If it's not super easy
for the dev to drop into a quick test app, they'll "just copy and paste the
secrets for now", which is always the path to darkness.

~~~
dsp1234
It's 100% possible to use the new ConfigurationBuilder class outside of an
ASP.NET site (e.g. in a console application). Example:
[http://stackoverflow.com/questions/31885912/how-to-read-valu...](http://stackoverflow.com/questions/31885912/how-to-read-values-from-config-json-in-console-application)

------
avitzurel
I'd choose the one that I am most comfortable with and that is least
obtrusive to the rest of my stack.

Whatever you choose, make sure you are comfortable with it and that it's easy
to deploy and work with.

~~~
kkamperschroer
Thanks for the tip. I guess the easiest thing to do is use git-crypt with
some encrypted file and have the secrets available at deploy time, but I'm
worried about the long-term disadvantages of this approach. Rolling secrets
would then require a deployment of at least that secrets file and restarting
the services, or writing them in a way that they re-read the file every time
they need the secret.
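
For reference, the git-crypt side of that would be small: a `.gitattributes` entry plus a one-time `git-crypt init` and a `git-crypt unlock` on the deploy side (the path here is made up):

```
# .gitattributes -- matching files are transparently encrypted in the repo
secrets/production.env filter=git-crypt diff=git-crypt
```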

Since our stack isn't on AWS, it kind of throws out AWS KMS and Lyft Confidant
(since it is built on AWS). I'll keep digging into Vault and the other options
put forward in this thread. Thanks again.

