
Hashicorp Vault v1.0 - blopeur
https://www.hashicorp.com/blog/vault-1-0
======
xhrpost
Looking at some of these encryption-as-a-service providers, I'm a bit confused
on one of the selling points. From my limited understanding: with a
traditional system, you encrypt in the database but your encryption key likely
exists on your main server, possibly in an environment variable. Attacker
compromises main system and has access to both encrypted data and encryption
keys. So, you instead use something like Vault to request an encryption key in
real-time from a remote service and thus don't need to store it on your
server. So, one of their selling points on their site is that Vault is better
because two systems would have to be compromised by an attacker in order to
decrypt sensitive data. The part I don't understand though is, if an attacker
has compromised my server, could they not just initiate a request to Vault for
a decryption key at that point? I feel like I'm missing something because this
sounds like it remains a single point of failure.

~~~
jedberg
It's part of security in depth. Yes, if they own the server, they could keep
requesting decryption keys. The main thing this protects against is someone
getting a copy of the encrypted data, then breaking into a server and getting
a key that is good forever. By using this system, it renders their copy of the
encrypted data useless.

Then you have to add in defenses for the active attack, such as rate limits,
anomaly detection on access patterns for decryption keys, and the usual host
and network based intrusion detection.

It's one part of a complete security strategy.
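
A toy sketch of that layered approach (all names are made up, and repeating-key XOR stands in for real encryption -- nothing here resembles Vault's actual API):

```python
import time
from collections import deque

class KeyService:
    """Toy stand-in for a remote decryption service.

    The data key never leaves this class; callers hand over ciphertext
    and get plaintext back, subject to a rate limit and an audit log.
    """

    def __init__(self, key: bytes, max_requests: int = 5, window_s: float = 60.0):
        self._key = key
        self._max = max_requests
        self._window = window_s
        self._recent = deque()   # timestamps of recent decrypt calls
        self.audit_log = []      # (timestamp, caller) pairs for anomaly detection

    def decrypt(self, caller: str, ciphertext: bytes) -> bytes:
        now = time.monotonic()
        # Slide the rate-limit window forward.
        while self._recent and now - self._recent[0] > self._window:
            self._recent.popleft()
        if len(self._recent) >= self._max:
            raise PermissionError("rate limit hit; possible bulk exfiltration")
        self._recent.append(now)
        self.audit_log.append((now, caller))
        # Repeating-key XOR purely for illustration; it is symmetric, so the
        # same call also "encrypts" -- do NOT use this as real crypto.
        keystream = self._key * (len(ciphertext) // len(self._key) + 1)
        return bytes(c ^ k for c, k in zip(ciphertext, keystream))
```

An attacker who steals a database dump gets nothing without the service; an attacker who owns the app server can still request decrypts, but only at the limited rate and with every request logged.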

~~~
_jal
Yep.

Another aspect is separation of responsibility - don't forget insider-risk.
Security is a process, not a product, as the cliche goes.

Where I work, we use Vault for (among other things) authentication between
internal services. The developers do not have access to Vault and never see
tokens/certs/passwords/etc, which are created by a different group. So if you
want to run a rogue service, you need a conspiracy across departments,
increasing your risk of detection.

Same principle you see in accounting, of course. You develop process, loci of
responsibility and audit trails designed to enable the desired outcomes while
at the same time making attempts to defraud the system impossible, obvious, or
investigable after the fact, in descending order of preference.

~~~
nroets
"So if you want to run a rogue service, you need a conspiracy across
departments"

Or a developer who plants a back door in the code and then exploits it on the
production server.

Maybe time is better spent on code reviews ?

~~~
Kalium
You're right! That's one approach to exploiting this.

It's worth considering that multiple approaches, across multiple departments,
can be used. A good environment might use Vault for authentication between
services _and_ require that code _cannot_ go into production without a review,
with that requirement enforced in code. Then you tightly control who can
administer those restrictions and log all usage somewhere else, so that even
if someone _does_ compromise the infrastructure to insert a back door, it can
be detected and the culprit identified.

Again, you're completely right. Code reviews can be a great way to spot
malicious code! You're also right that the Vault usage pattern that parent
pointed to definitely has vulnerabilities. It's perhaps worth considering that
this approach could be used in a context where it might not be the only
defense. Perhaps you could ask parent for a more detailed explanation of their
org's information security practices?

------
honkycat
Vault looks great, but I always balk at the operational overhead. Also the
cost is significant. I'm at a tiny org, though.

For smaller orgs and projects, Mozilla Sops is really great:

[https://github.com/mozilla/sops](https://github.com/mozilla/sops)

It encrypts your secrets at rest using Google KMS, Amazon KMS, and various
other cloud provider key services. You can then put those secrets into your
code repository, cloud file storage, etc. and give your build pipeline a
service account with the ability to decrypt the secret files.

Scales like crap, but is quick and dirty when you need it.
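
For reference, sops is driven by a `.sops.yaml` file at the repo root that maps file paths to keys. A minimal sketch (the KMS ARN and GCP key path are placeholders; substitute your own):

```yaml
# .sops.yaml -- creation rules are matched top to bottom
creation_rules:
  # Anything under secrets/ is encrypted with an AWS KMS key
  - path_regex: secrets/.*\.yaml$
    kms: arn:aws:kms:us-east-1:111122223333:key/placeholder-key-id
  # Everything else falls back to a Google Cloud KMS key
  - path_regex: .*
    gcp_kms: projects/placeholder/locations/global/keyRings/app/cryptoKeys/app-key
```

With that in place, `sops -e` and `sops -d` encrypt and decrypt files, and the build pipeline's service account only needs decrypt permission on the referenced key.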

~~~
sethvargo
I agree on the operational load, but Vault is free. Did you mean Vault
Enterprise? There are two versions, OSS and Enterprise, but a lot of small and
mid-sized companies are able to use the free version at scale. Especially if
you're a smaller org, OSS should be fine, so your costs are the VMs it's
running on and the humans running it.

~~~
atmosx
I think he is talking about the burden of running a SPoF in a highly available
fashion, with backups, DR plans, monitoring, logging, and all the baggage that
comes with it.

~~~
sk5t
Right on. This is also what prevents me from using (unmanaged) Vault--not
having the time to invest in learning the failure modes and internals, backup
and restore strategies, in addition to the best ways to use it for the 5-10
patterns where it would become important.

------
kitotik
Congrats on the big milestone.

I’ve been extremely happy working with HashiCorp tools for the past several
years.

Vault provides sooooo much out of the box, it’s hard for me to imagine
spinning up a new project without it anymore. Which leads to my biggest
fear...my jaded-self is expecting an ‘unfriendly’ acquisition (Microsoft,
Alphabet) and/or some onerous licensing/pricing changes.

~~~
withinrafael
So, speaking of. There are rumors in the community of an imminent Microsoft
acquisition announcement. Just rumors though.

~~~
nodesocket
I was speculating that Amazon might acquire them at HashiConf[1]. Personally
I'd rather see HashiCorp go public eventually than get acquired.

[1] -
[https://news.ycombinator.com/item?id=18118321](https://news.ycombinator.com/item?id=18118321)

------
KerrickStaley
Confidant is another open-source product in this space
[https://lyft.github.io/confidant/](https://lyft.github.io/confidant/)

Disclosure: I work at Lyft.

~~~
athenot
That looks nice, but it does require things to live in DynamoDB, so if you use
multiple cloud providers and/or run on prem, that might be limiting.

~~~
gbrayut
Agreed... Neat project, but a hard dependency on AWS is an issue for us.

------
stevecalifornia
Went to use Vault for Enterprise and heard we got an invoice for half a
million dollars. Went with another solution.

~~~
mitchellh
Hey, I'm one of the founders of HashiCorp.

We'd like to make more of Vault Enterprise available to smaller companies at
lower, more palatable price points. This will be reflected in certain features
being omitted as well as a lower level of support.

For now, please understand that our target _enterprise/commercial_ customers at
the moment are Global 2000-esque companies. We currently have almost 10% of
the global 2K as paying customers of Vault Enterprise. The features we've
built along with the support you get reflect that (dedicated TAMs and so on).
But we recognize that certain features of Vault Enterprise would be useful to
smaller companies (replication and so forth).

To that end, we're currently planning some new packaging/pricing aimed at this
type of user. I have no timelines on when we'd publish that, but it's something
we're actively working on now. This should make Vault Enterprise more affordable
to smaller companies (think 5 figures/year instead of 6 figures/year).

Meanwhile, we're also making more "quality of life" features like auto-unseal
available in Open Source. As we continue to add more features and value aimed
directly at the Enterprise user, this lets us reevaluate and move more
features into OSS and we have continued to do so throughout the life of Vault
Enterprise. We hope this helps smaller companies adopt Vault successfully. And
note that this is a great example of the model working: our success with Vault
has funded growth in Vault staffing; that staffing has directly led to more
OSS features; and the growth in funding has let us more confidently make more
features free. The community plays a huge role here, too. This is exactly how
we intend for it to work!

One thing we learned rather painfully is that selling to the 4-figure vs.
5-figure vs. 6-figure vs. 7-figure/year customer is a _very_ different
company-building exercise at each level. The expectations of the 7-figure
customer (and we have a number of those) are dedicated TAMs, dedicated support reps, quarterly
in-person meetings about the state of the install, high impact on the roadmap,
and much much more. That of course requires a certain kind of staffing. Very
often this staffing scales "down" to lower price points but very rarely does
the staffing at lower price points scale "up" to higher price points.

As a company, we chose to go after the large enterprise deals first. This of
course alienates some of the smaller deals since the large deals suck the air
out of your org a bit. But as we've acquired more and more customers, grown,
acquired more funding, etc. we're moving in that direction rapidly.

So, hopefully we'll get there soon! I'm sorry that you were quoted at a price
point that didn't work for us mutually, and I'm glad you found an alternate
solution that worked for you. For others in a similar position: we're working on it!

~~~
captainperl
Hi Mitchell.

I had to kill a rollout of Vault at a billion-dollar (revenue) company for
the following reasons:

* the engineers doing the PoC could not/would not document how to operate it in production

* the managers did not take the unsealing responsibility seriously ("I'm in mgmt., don't call me on Sundays again.")

* our network was perceived as flaky.

Some cheap solutions are:

* provide some pre-written runbooks for administering Vault that people can cut-and-paste into their wiki

* provide some diagrams and scenarios for unsealing that can be adopted

* have the Vault server monitor and log network health (latency, bad packets, etc.)

~~~
jiveturkey
sounds like you have problems in your org unrelated to vault.

~~~
captainperl
> sounds like you have problems in your org unrelated to vault.

Unfortunately for engineers doing the deployment, Vault magnifies any
weaknesses your organization already has. That's the nature of centralized key
mgmt.

For example, I know one large company ended up using macros to unseal Vault to
solve the key mgmt. problem I mentioned. In other words, the unseal keys are
in plain text on the servers.

Probably happening more often than you would initially expect since nobody
wants to drive down to the data center.

The remarkable thing with AWS KMS is that it's so seamless - it's idiot-proof
compared to a self-hosted distributed system.
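
For context on why unsealing is a ceremony at all: Vault splits its master key with Shamir's secret sharing so that no single person can unseal it alone. A simplified n-of-n XOR split (Shamir proper allows k-of-n thresholds) shows the idea, and why pasting all the shares into a macro on one box defeats it:

```python
import secrets

def split(key: bytes, n: int) -> list[bytes]:
    """Split `key` into n shares; ALL n are needed to rebuild it.
    (Vault uses Shamir's scheme, which allows k-of-n thresholds.)"""
    shares = [secrets.token_bytes(len(key)) for _ in range(n - 1)]
    last = key
    for share in shares:
        last = bytes(a ^ b for a, b in zip(last, share))
    shares.append(last)  # last share = key XOR all the random shares
    return shares

def combine(shares: list[bytes]) -> bytes:
    """XOR all shares together to recover the original key."""
    out = bytes(len(shares[0]))
    for share in shares:
        out = bytes(a ^ b for a, b in zip(out, share))
    return out
```

Any subset of fewer than n shares is indistinguishable from random noise, so the scheme is only as strong as the independence of the people holding the shares.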

~~~
wmf
Obviously that's not ideal, but it's probably still more secure than using no
secret management system at all.

------
aerovistae
My understanding of Vault is not ironclad, but from what I have read, it seems
it provides ephemeral passwords that give your application access to a service
at initialization time, after which the password ceases to be valid. That means
your application has access, but there are no credentials floating around
anywhere that could be compromised later.

If anyone could correct me if I'm wrong, that would be great.

~~~
no_one_ever
You've more or less got it. You authorize to the vault and store the secrets
in memory. So no passwords on-disk/in-source 'floating around'.
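
A minimal model of that lifecycle (class and method names are made up; this mirrors the idea behind short-lived credentials, not Vault's API):

```python
import secrets
import time

class CredentialBroker:
    """Toy issuer of short-lived credentials with lazy revocation."""

    def __init__(self):
        self._live = {}  # token -> expiry (monotonic timestamp)

    def issue(self, ttl_s: float) -> str:
        """Hand out a random token that is only valid for ttl_s seconds."""
        token = secrets.token_hex(16)
        self._live[token] = time.monotonic() + ttl_s
        return token

    def is_valid(self, token: str) -> bool:
        expiry = self._live.get(token)
        if expiry is None or time.monotonic() > expiry:
            self._live.pop(token, None)  # lazily revoke expired tokens
            return False
        return True
```

A token scraped from memory or a log is worthless minutes later, which is what shrinks the window for credentials 'floating around'.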

~~~
xhrpost
But how do you authorize to the vault then? Aren't there credentials for that
part?

~~~
ReidZB
Yes, there is the "initial secret" problem. There are a variety of different
ways Vault handles that (look up the auth plugin docs), but at my org we use
AWS IAM auth. So, app servers authenticate via their IAM instance profile
(provided/managed by AWS) while developers assume a specific IAM role and then
authenticate via that (which is how we enforce MFA for Vault without paying
the crazy enterprise pricing).

Note that with AWS IAM auth, AWS is a trusted third party, and accounts with
high-powered IAM access (think AWS admins) end up having a great deal of
authority in Vault, too. But for us, at least, these assumptions are
reasonable.

------
jteppinette
I spent a few months of side-work time working on a "secure-deployment-seed"
project: [https://github.com/jteppinette/secure-deployment-seed](https://github.com/jteppinette/secure-deployment-seed).
It is a set of Ansible playbooks/roles that put Vault/Consul at the center of a
standard web deployment where privacy/security is taken to the Nth degree of
perfectionist-driven insanity.

I ended up never using it, because it never really felt "perfect" to me.
There are so many circular dependencies between systems
(DNS/Consul-Template/Consul/Vault/Ansible) and bootstrapping is just complete
hell. Dive into that repo and witness it for yourself.

I can see myself using this setup if I was ever just doing Ops work, but when
you are also doing everything else, it is just too much.

Anyways, congrats to the HashiCorp team. Your stuff really is top-notch.

------
baby
There's so much detailed craft and love poured into Hashicorp's codebases,
this is great. Congrats Mitchell!

------
SureshG
We use [https://square.github.io/keywhiz/](https://square.github.io/keywhiz/).
It provides secrets as files in a directory, securely, so no special API or
client libraries are required to access it.

~~~
jefferai
You can use Consul-Template
([https://github.com/hashicorp/consul-template](https://github.com/hashicorp/consul-template))
-- yes, it really needs a rename -- to do this with Vault (or Consul).
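
For the curious, a template looks roughly like this (the path and field names are hypothetical, and it assumes a KV v2 mount, hence the extra `data` segments):

```
# db-creds.tmpl -- rendered to a plain file on disk, much like Keywhiz
{{ with secret "secret/data/myapp" }}
username={{ .Data.data.username }}
password={{ .Data.data.password }}
{{ end }}
```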

------
Dowwie
What are the real-world ephemeral workloads that batch tokens are intended to
address?

~~~
mitchellh
This is a feature we built to specifically address large scale serverless
workloads that are requesting a huge number of short lived tokens. And this
isn’t theoretical: it was directly driven by multiple paying customers. Fun!

------
chiu
"Expanded Alibaba Cloud Integration" haha.

