With Vault, you still need to figure out how to push secret zero (here, a client authentication token). If you can solve that, why not push the needed secret directly and bypass Vault entirely?
It allows you to have a non-sensitive single factor within things like configuration management, and have the second factor come from a machine-local location (instance ID, MAC address, TPM, etc.). The idea is that another team, out of band, sets the second factor, which developers and config management never get to see. The result is full automation without secret-zero issues.
As an additional security parameter, you can bind the two factors together to a single CIDR block, so you can have an additional factor restricting access from that machine's IP.
The URL above goes into a lot more detail, but our beta users are successfully automated in an elastic environment this way.
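As a rough sketch of that CIDR binding (purely illustrative, not Vault's actual implementation), the server just checks the requesting IP against the configured block before the two factors are even evaluated:

```python
# Sketch of binding an auth factor pair to a CIDR block: a request
# from outside the configured block is rejected before the factors
# themselves are checked. Illustrative only.
import ipaddress

def ip_allowed(source_ip, cidr):
    return ipaddress.ip_address(source_ip) in ipaddress.ip_network(cidr)
```

So a pair bound to `10.0.1.0/24` is usable from `10.0.1.7` but worthless if the two factors leak to a machine outside that block.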
There is a typo though: "in client" should probably read "if a client" or "if the client"
Every new instance of an application server is provisioned automatically by a trusted server (which holds the key to a credentials server). During orchestration/deployment, the application server has temporary permission to fetch the secrets it needs, through SSH agent forwarding. Once the deployment is done, the session ends and the application server can never read new secrets until the next deployment.
This way we don't really need a solution like Vault. SSH is mature enough to provide authentication, and encryption is easy once you've figured out how to distribute keys automatically.
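A minimal sketch of that deploy-time fetch, with the hostname and paths made up for illustration:

```python
# During deployment, the app server uses the forwarded SSH agent to
# pull a secret from the credentials server; once the deploy session
# ends, the forwarded agent is gone and this call stops working.
import subprocess

def fetch_cmd(name, credserver="creds.internal"):
    # Build the ssh command separately so it can be inspected/tested.
    return ["ssh", credserver, "cat", f"/secrets/{name}"]

def fetch_secret(name, credserver="creds.internal"):
    out = subprocess.run(fetch_cmd(name, credserver),
                         capture_output=True, check=True)
    return out.stdout.decode().strip()
```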
Would be interesting to hear more details of yours for comparison.
It's not a perfect system or a panacea, but it's far better than having a bunch of passwords directly stored in config files, in my opinion. We also have a wrapper tool that uses your SSH agent access to fetch secrets for the mysql suite of tools as needed, so nobody is ever tempted to create a personal .my.cnf on prod servers. Anybody who actually has access to a given mysql password is still on the honor system not to fetch it and write it down somewhere, but removing the need to ever even see them eliminates the temptation to put it in a .my.cnf or in your clipboard, which eliminates a whole class of mistakes you can make in handling them.
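A hedged sketch of what such a wrapper might look like: the SSH-agent-backed fetch is stubbed out, mysql's `--defaults-extra-file` flag is real, and everything else is illustrative:

```python
import os
import subprocess
import tempfile

def build_defaults(password):
    # mysql client config fragment carrying only the password.
    return f"[client]\npassword={password}\n"

def run_mysql(fetch_password, *mysql_args):
    # fetch_password is the SSH-agent-backed lookup described above,
    # supplied by the caller. The password goes into a 0600 temp file
    # passed via --defaults-extra-file, so it never appears in argv,
    # shell history, or a personal .my.cnf.
    with tempfile.NamedTemporaryFile("w", suffix=".cnf",
                                     delete=False) as f:
        os.fchmod(f.fileno(), 0o600)
        f.write(build_defaults(fetch_password()))
        path = f.name
    try:
        return subprocess.run(
            ["mysql", f"--defaults-extra-file={path}", *mysql_args])
    finally:
        os.unlink(path)
```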
Personally, I think this is your only line of defense anyways. Someone with access to your private network will likely be able to do some harm, and if they get even a non-root user on your server, it's game over, since you are just one local exploit away from them gaining root. Remember, once they gain root on one box, it's just a matter of time before they replace /usr/bin/sudo and /usr/bin/login with a script that emails them your password and/or steals your ssh-agent socket.
In your scenario, if something happens to a front-facing web server, nothing prevents the attacker from simply enumerating all the secrets and retrieving them, whether the server should have had access or not.
Defense in depth is a thing.
This goes doubly so for servers on your network. Either you assume that only authorized users have access to it, or you don't. You have to draw the line in the sand somewhere and say "past this point it's secure".
Now, I am not opposed to layers of security, but there are multiple options for how to define those, and assuming your network is secure isn't a far stretch on some platforms.
Another example: in AWS, IP spoofing is not possible (according to https://ip-saas-infopark23444378-cms.s3-eu-west-1.amazonaws....). So an IP address is a reliable identifier. However, IP addresses of AWS instances are sometimes stable over time and sometimes not, depending on how the instance is configured and used. So again, the IP address is a potential authentication factor, but there's no "one true solution".
And even once a machine is authenticated, it needs to be associated with some kind of role (or a role-like thing). This part is actually even trickier.
They used to publish the security group ids, so that you could whitelist inbound traffic from Heroku. Now they don't do that any more, which makes sense because really, what is the point of whitelisting all of Heroku? :-)
But the first example made me flinch:
$ vault write secret/foo value=bar
Good security software gently herds the non-expert to make good choices. Too many options, especially specialty risky options, don't help that goal.
There's probably a way to do what you have (launching an editor) with stdin, but I'd suggest documenting an example, to avoid the risk of leaving the secret around.
Also +1 to removing the insecure history option. Documenting stdin usage with 'cat' or something else that doesn't end up in the history would probably take care of that one.
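For example, assuming a `value=-` convention where the CLI reads the value from stdin (newer Vault CLIs document this; whether 0.1 does is an assumption), a small wrapper keeps the secret out of argv entirely:

```python
import subprocess

def vault_write_argv(path):
    # The secret is NOT in argv; `value=-` asks the CLI to read the
    # value from stdin (an assumption, as hedged above).
    return ["vault", "write", path, "value=-"]

def vault_write(path, secret):
    argv = vault_write_argv(path)
    # The secret travels over the child's stdin, so it never shows up
    # in `ps` output, /proc/<pid>/cmdline, or the shell history file.
    return subprocess.run(argv, input=secret.encode(), check=True)
```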
jolan 4950 0.0 0.0 46164 8820 pts/12 S+ 16:06 0:00 mysql -uroot -px xxxxxxxx
There's probably still a race condition, and the password can be viewed before it is erased, but at least it would be something.
On FreeBSD I like doing:
Which restricts users to seeing only their own processes. You can make exceptions for a specific group or users with the same primary group.
I still don't put anything sensitive in a command line, of course.
I want to tell the system <look, I have a user foo, and the webapp over there needs access to the mysql over here with that user. Make it so.>
I don't need to know the password my application uses to access the database. I don't WANT that password. No human should ever see this password. And heck, if the system wants to, just switch passwords every 60 minutes. I don't want to care.
When I want to run something on a server and I don't want it to end in the .history file, I always run vi and execute the commands inside it ( <esc>:!command ).
It always shows in the process list; but for short-lived commands the opportunity window is small for an attacker.
It's a tricky question, one that we've spent a lot of time thinking about at my company (Conjur -- shameless plug, one of our use cases is secrets management). Obviously doing this on a server that others can access/pwn is a bad idea, but that's a very rare use case. Most secrets are being added via the command line from an admin's single user laptop.
In the end, we've settled on what seems to be the same solution as Vault: support other sources (stdin, files, etc.), but allow the command line.
That said, I think you're right that we should look into adding a warning for the CLI tools and placing more emphasis on this fact in our documentation.
It persists in your shell history... your notebook is stolen, then...
You can encrypt the filesystem, but it is better not to have your secret information disseminated across your filesystem and computer memory.
Edit: am I wrong? If so, please explain, I really am curious.
This is the first time that I actually feel overwhelmed. That nagging feeling that I'm doing something in some way solely because of habits. And that someone somewhere has created an awesome tool that would have allowed me to go home two hours earlier.
The amount of new tools cropping up in the last couple of years alone is the 'culprit'.
These are awesome times, of course. Still, a part of my brain misses the time when you could ftp crap away to a server (even if automated by a Capistrano tool or similar) and spend the rest of the time arguing about the merits of puppet vs chef.
Personally, I try to be pretty conservative with tools that I start relying on. I know by painful experience that some tools get abandoned, some change drastically, making it impossible to upgrade, and some just turn out to be very buggy and horrible once you start putting them through their paces. In lots of cases, it's better to be critical than permissive when it comes to your infrastructure (and that's what these tools are aimed at). In almost all cases, your edge is in knowing the tools you use, and knowing about other tools. It's not in knowing all and using everything under the sun.
I manage quite a few servers, all running different in-house developed services. Almost all configurations are completely different, with a few exceptions, but I am still automating them. Just leaving it as it is, all manually managed, would probably save me a lot of time, so why do I do it? So it can first be deployed on some test VMs, and so that when someone changes a configuration, I can track it in git/svn/.. and see who committed what change. It would only save me time if I had to deploy service X more than 3 or 4 times (dev/test/qa/prod), but as it stands, nope. Timesink it is.
I avoid using new tools in production. I wait a few years first to see if they last - after all I'm going to have to support the system for years, I don't want the tools to be the weak spot.
We were already using consul as our service discovery engine and as a data store for dynamic configurations via consul template. We feel Vault will help us manage secrets on our infrastructure and inside containers.
Excuse the mess, I'm still working on it: https://github.com/jgrowl/livehq
> Chef, Puppet, etc. all solve this in a similar way: single-key encrypted storage. Chef has encrypted data bags, Puppet has encrypted Hiera, and so on. The encrypted data is always one secret (a password, a key, etc.) away from being decrypted, and this secret is generally not well protected since in an elastic environment, every server needs to somehow get this secret to decrypt the data. Additionally, access to the encrypted data isn't always logged, so if there is an intrusion, it isn't clear what data has been accessed and by whom.
There are a bunch of good ideas in this RFC, https://tools.ietf.org/html/rfc2904
Something like a host-your-own Amazon IAM would be pretty dope. Add support for 2fa and especially u2f and you've got yourself a winner.
> When a backend is unmounted, all of its secrets are revoked and its data is deleted.
If a backend is similar to a filesystem, as the docs say, that's quite a surprise: the secrets (files) should become unavailable when unmounted, but should become available again after remounting the backend (filesystem).
Are the secrets actually _deleted_ (ie, can't be recovered again), or am I misunderstanding?
Yes, when you unmount a backend, all the data is deleted. Vault mostly behaves like a filesystem, but not in this scenario. The raw data is not preserved. If you want to preserve the raw data, you can remount (move the mount point).
We didn't finish it for 0.1, but we already built secret enumeration into the core. We plan to expose this at some point. If you want to unmount and preserve your secrets, you can always enumerate them and back them up elsewhere (unencrypted). This isn't yet available.
I'm not super familiar with Shamir's secret sharing. The post says that the master key is split up across a number of people. One thing I'm confused about: if Vault goes down (possibly in the middle of the night), does it require a quorum of humans to boot it back up?
The solution to this problem is to run Vault in high availability mode, with a set of standbys that can take over if the leader goes down. The idea is to unseal all the standbys; if the leader goes down, a standby takes over, and the sealed one can be unsealed at some future point.
Security-sensitive things have been an interesting UX challenge for us, and this compromise costs some UX in exchange for a much better security promise. As another comment in this thread says: without this feature there would be no real security for the stored data.
This way, even if Vault dies during the night (and you don't have pager duty), at least some clients (e.g. your cloud instances) can live through it?
You want to have a secret shared by n people, any m of whom can decode it. You create a polynomial of degree m-1 and give each of those people the value of said polynomial at a different point. The secret is the value of the polynomial at yet another point.
From the value of the polynomial at any m points you can figure out the polynomial and the secret. But the value at any m-1 points does not let you predict its value anywhere else.
(Detail. You would make the polynomial be over a finite field instead of the real numbers. But the math is all the same.)
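The scheme described above can be sketched in a few lines of Python over GF(p); this is illustrative only, and real implementations (Vault included) should use a vetted library:

```python
import random

P = 2**127 - 1  # a Mersenne prime, so arithmetic is over the field GF(P)

def make_shares(secret, m, n):
    """Split `secret` into n shares; any m of them recover it."""
    # The secret is the constant term; the other m-1 coefficients are
    # random, giving a polynomial of degree m-1.
    coeffs = [secret] + [random.randrange(P) for _ in range(m - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    # Each person gets the polynomial's value at a distinct nonzero x.
    return [(x, f(x)) for x in range(1, n + 1)]

def recover(shares):
    """Lagrange-interpolate the polynomial at x=0 to get the secret."""
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        # pow(den, P-2, P) is the modular inverse, since P is prime.
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret
```

Any m of the n shares reconstruct the constant term; m-1 shares reveal nothing, exactly as described above.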
This is akin to disk encryption where you have to type password on boot, but then data is accessible later.
Love all the projects. Are they contracting out features to companies? Selling support contracts?
Very curious-- I think if people, especially individuals, can make a living from open source projects, that's great for everybody.
But do you have to be "famous" to do it?
//edit My solution focuses more on the sharing aspect between people, but can just as well be used like Vault
I just finished tossing together a Chef cookbook to get Vault installed/configured up to the init/unseal step. Here's the link in case it's useful to anyone: https://github.com/hexedpackets/chef-vault
My biggest security concern with a cloud (or VPS) provider is that they can very easily read any keys or unencrypted data from memory. Still a worry when renting a dedicated box, but it's a damn sight easier when it's virtualised.
I don't see Vault helping me with this... but then I'm probably both paranoid and missing the point.
App 1 needs secrets A, B, and C. App 2 needs secrets A, B, and D.
So we need to set up our paths in a way that App 1 can get A, B, and C but not D, while App 2 can get D but not C. Now whenever you want to modify which secrets your apps can access, you have to rethink how your paths are set up.
When you're instead assigning permissions to roles, this is a lot easier. An example:
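A toy sketch of the role-based approach, with app names, roles, and secret labels all illustrative:

```python
# Permissions hang off roles, and apps are granted roles; changing
# which secrets an app can read means editing a role's set (or
# granting another role), not rethinking a path hierarchy.
ROLE_SECRETS = {
    "web": {"A", "B", "C"},
    "worker": {"A", "B", "D"},
}
APP_ROLES = {
    "app1": ["web"],
    "app2": ["worker"],
}

def can_read(app, secret):
    return any(secret in ROLE_SECRETS[role] for role in APP_ROLES[app])
```

Granting app1 access to D is then a one-line role change, with no restructuring of where the secrets live.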
A few reactions. First: amazing design. Awesome documentation. Second: wow they have an awesome little suite of high-impact devops tools. I spent all day learning about them.
The unified style of their websites, organized into "intro" and "documentation", is awesome and makes it easy to digest what's going on quickly. All of the tools seem powerful, well thought out, and complementary.
Vagrant's been awesome to me and I'm excited to start using some of their tools like Packer, Consul, and Terraform in production!
The security page seems to be a generic one about reporting vulnerabilities:
Edit: Found it under the threat model section here: https://vaultproject.io/docs/internals/security.html
re: naming (and disclaimer): I wrote Ansible and designed Ansible's vault (and no longer work on Ansible), but to be clear, Ansible's vault is a nicer CLI (IMHO) around an idea that Chef users were already used to: encrypted data bags. It was nothing new. It took about two weeks to implement, but we had a few security things to shake out afterward that made it take a bit longer before we got it right. (Some people also got picky about whole-data-document encryption vs leaf-node-in-datastructure encryption, and I see points for doing it both ways, but went with the way that revealed the least amount of data.)
I think calling it vault is perfectly fair, as it's a very generic word, and words are in short supply. This is also doing a LOT more.
I like what Mitchell and crew have done around revocation here, though I do somewhat wonder about the mode in the docs that sets secrets and allows them to show in bash history, which is why ansible vault spawned an editor. I'm sure it's got editor options too though. I'd probably also change the phrasing "HTTP API" to "HTTPS API" :)
The dynamic secret stuff also looks pretty interesting.
One of the interesting ideas might be whether it allows sharing of secret values with some applications without letting anyone read them (asking if a user can USE X without letting the user READ X), which is something we did in Tower -- though it seems the ACL system might allow something like that. An example of that was the system could memorize an SSH key and let a team use it, but never give it to anyone, to avoid rotation scenarios.
Anyway, all the lease ideas are very cool and it seems natural to want this decoupled. For a time I considered decoupling what was in Ansible vault from Ansible, though really it already operates on generic files, so that's easy enough if someone wanted to take a stab at it.
I suspect it wouldn't be impossible to make Ansible Vault have a hashicorp-vault-mode where it encrypts a file "into" the vault, and the contents that remain in your "ansible vault" file are nothing more than a pointer to a secret in Hashicorp Vault. In this way, it would just be another cipher type. Maybe that's crazy though - but I think I like that better than storing secrets in version control, where they exist forever unless you prune history. (And I never thought storing vault files in public source was a good idea). However, avoiding a server or database was always one of Ansible's design goals - for better or worse, that limited some of the things the core app could do, in ways that were both good and also occasionally limiting.
Anyway, keep up the forward progress with the new things! I really envy the time to architect/design something so detailed here.
If only I had time...
Is it simply that the former is more encrypted than the latter?
> While Consul can be used to store secret information and gate access using ACLs, it is not designed for that purpose. As such, data is not encrypted in transit nor at rest, it does not have pluggable authentication mechanisms, and there is no per-request auditing mechanism.
I guess my question now is why not add E2E encryption on top of Consul/Etcd instead of a whole new system?
* At Conjur, we care deeply about three things: (1) the overall user experience for developers, operations, and security personnel; (2) the security of our code; (3) high availability
* Conjur implements full-featured role-based access control for both "carbon" (people) and "silicon" (VMs, servers, containers, and other code)
* Conjur solves not just secrets management, but also SSH access management and software-defined firewall (granular access control for microservices)
* The Conjur core services are not open source, but our cryptography is (https://github.com/conjurinc/slosilo), as are many of our client libraries and connectors (for example: https://github.com/conjurinc/api-ruby, https://github.com/conjur-cookbooks/conjur). And Conjur cryptography has been audited by a respected 3rd party.
* Conjur features packaged integrations with Chef (https://supermarket.chef.io/cookbooks/conjur), Puppet (https://forge.puppetlabs.com/conjur/conjur), Docker (http://developer.conjur.net/product_demo/secrets.html), Heroku (https://github.com/conjurinc/heroku-buildpack-conjur), AWS and more. New integrations are easy to build using the Conjur HTTPS and LDAPS APIs (http://developer.conjur.net/reference/services)
We have also gone to great lengths to make it easy to try Conjur yourself, with automatically provisioned, self-service demo appliances featuring the Conjur server and client, Docker and Git: