Vault – A tool for managing secrets (hashicorp.com)
440 points by eloycoto on Apr 28, 2015 | 107 comments



There are many solutions out there that attempt to solve the secrets distribution problem; Vault is the newest one. None of them solve the fundamental problem of secret zero: provisioning the first secret on an untrusted system in an automated way is hard, and without that secret zero, there is no way to authenticate the new system.

With Vault, you still need to figure out how to push secret zero (here a client authentication token). If you can solve that, why not push the secret you need to push directly, and bypass Vault entirely?


We worked with a handful of companies to help us design the "App ID" authentication backend that does exactly that: https://vaultproject.io/docs/auth/app-id.html

It allows you to have a non-sensitive single factor within things like configuration management, and have the second factor come from a machine-local location (instance ID, MAC address, TPM, etc.). The idea is that another team, out of band, sets the 2nd factor that developers and config management never get to see. The result is you have full automation without secret zero issues.

As an additional security measure, you can bind the two factors together to a single CIDR block, so you can have an additional factor restricting access from that machine's IP.

The URL above goes into a lot more detail, but our beta users have successfully automated this in elastic environments.
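A sketch of how the two factors might be wired up with the CLI of the time (policy name, app ID, and user ID values are hypothetical; see the linked docs for the exact syntax):

```shell
# Enable the App ID auth backend
vault auth-enable app-id

# Factor 1: a non-sensitive app ID, safe to keep in config
# management, mapped to a policy
vault write auth/app-id/map/app-id/my-web-app value=web-policy \
    display_name=my-web-app

# Factor 2: a machine-local user ID (e.g. an instance ID), set out
# of band by another team, optionally bound to a CIDR block
vault write auth/app-id/map/user-id/i-0abc1234 value=my-web-app \
    cidr_block=10.0.1.0/24

# The machine presents both factors to get a token
vault auth -method=app-id app-id=my-web-app user-id=i-0abc1234
```

Neither value is sensitive on its own; only a machine that holds both (and, with the CIDR binding, sits at the right address) can authenticate.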


So if I read that correctly, the UserID (I find that naming confusing since users here aren't necessarily humans) creation has a human in the loop to add the UserID to Vault and therefore make the authorization decision?


That's definitely a trade-off compared to strong authentication, one that I wouldn't want to make in critical environments. But indeed a very nice feature for 99% of low/medium risk applications.


Using an instance id or MAC address as a secret 2nd factor seems surprising to me. This is what banks are asking for? It's nice that there are two factors to the authentication, but neither one seems to be securely random.


No, it is usually one of those values salted with a value that is only available offline.


Hard problem, but you clearly address it.

There is a typo though: "in client" should probably read "if a client" or "if the client"


We've had the same issues at CoinJar, and we used SSH agent forwarding to solve it. This is how it can work:

Every new application server instance is provisioned automatically by a trusted server (which holds the key to a credentials server). During orchestration/deployment, the application server has temporary permission to fetch the secrets it needs, through SSH agent forwarding. Once the deployment is done, the session ends and the application server can never read new secrets until the next deployment.

This way we don't really need a solution like Vault. SSH is mature enough to provide authentication, and encryption is easy once you've figured out how to distribute keys automatically.
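A minimal sketch of the pattern (hostnames and paths are hypothetical):

```shell
# The deployment host opens a session to the new app server with
# agent forwarding; only for the duration of this session can the
# app server reach the credentials server.
ssh -o ForwardAgent=yes deploy@app-server '
  # This inner ssh authenticates via the forwarded agent, so no
  # private key is ever written to the app server itself.
  ssh creds@credentials-server "cat /srv/secrets/db_password" \
      > /home/deploy/app/db_password
  chmod 600 /home/deploy/app/db_password
'
# When the session ends, the forwarded agent socket disappears and
# the app server can no longer fetch secrets.
```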


This is similar to what we do at my company. I wrote about it a bit here: https://news.ycombinator.com/item?id=8826300

Would be interesting to hear more details of yours for comparison.


How does SSH agent forwarding work if there's no interactive user driving the process? Otherwise the concurrency is going to be limited to however many ssh sessions the client can spawn, and the orchestration/deployment cannot be a hands-free process.


I'm not the person you're asking, but in our system, our secret access is tied to an SSH identity, not an SSH agent. This means you have the flexibility to require a personal user's ssh agent to be present for access to some things, or an identity file (optionally encrypted) stored on the server for less-sensitive things. In the case of things that need to happen unattended, we use an in-the-clear identity file. That might seem a little counterintuitive, since security-wise it's basically equivalent to just having the secret sitting there, but the indirection makes rotating secrets easier and gives us revocability if we were to suspect we lost control of such a machine. But for most of our secrets, we tie them to individual users' identities, which must be forwarded via an agent. We consider this an acceptable constraint because deployment and app restarts are always driven by a human in our operations.

It's not a perfect system or a panacea, but it's far better than having a bunch of passwords directly stored in config files, in my opinion. We also have a wrapper tool that uses your SSH agent access to fetch secrets for the mysql suite of tools as needed, so nobody is ever tempted to create a personal .my.cnf on prod servers. Anybody who actually has access to a given mysql password is still on the honor system not to fetch it and write it down somewhere, but removing the need to ever even see them eliminates the temptation to put it in a .my.cnf or in your clipboard, which eliminates a whole class of mistakes you can make in handling them.


Why not do it at the network level? In some infrastructures this is really easy. For example in AWS you can set up all your servers in a VPC. Then put Vault on a machine on a specific subnetwork, and firewall it off such that only the machines from your VPC may access it. You have now created a walled garden where anything inside your VPC has access to the secrets without holding an authentication token, while anything outside does not.

Personally, I think this is your only line of defense anyways. Someone with access to your private network will likely be able to do some harm, and if they get even a non-root user on your server, it's game over, since you are just one local exploit away from them gaining root. Remember, once they gain root on one box, it's just a matter of time before they replace /usr/bin/sudo and /usr/bin/login with a script that emails them your password and/or steals your ssh-agent socket.


Because you want authorization and authentication by application. So a web server / proxy needs the public TLS cert but maybe an application server doesn't. Meanwhile the app server needs DB credentials but the web server doesn't.

In your scenario, if something happens to a front-facing web server, nothing prevents the attacker from just enumerating all secrets and retrieving them, whether the server should have had access or not.

Defense in depth is a thing.


I believe what you are describing can be built using the primitives I described. Vault, or another secrets manager, would only release secrets to servers on specific subnets. The front-end would be on your front-end subnet, the application server would be on the backend subnet, and the application server would not be authorized to access the private TLS key because it's on the wrong subnet for that particular secret.


To some extent, that's true. The granularity is a bit lacking, however. A bigger worry would be that the ability to spin up a host in a specific subnet is the effective authentication. It's not something I'd be comfortable with myself, but I guess some people would be.


If I get on your subnet with a host on which I have root, I can do a few things. First, I can now see any unencrypted traffic. Use MySQL? I can read that or even MITM it. Use unencrypted LDAP? I can do the same. Basically, you either assume your VPC is secure or it's not.

This goes doubly so for servers on your network. Either, you assume that only authorized users have access to it or you don't. You have to draw the line in the sand somewhere and say "past this point it's secure".

Now, I am not opposed to layers of security, but there are multiple options for how to define those, and assuming your network is secure isn't a far stretch on some platforms.


Hmm, how many networks these days are unswitched? How are you going to see traffic that the switch doesn't send to you?


Active attacks, like ARP spoofing? Sure, there are environments where that won't work or will quickly be detected, but in many it will.


There's a lot of variability in cloud architectures, and for that reason there's really no one-size-fits-all solution for machine authentication. Sometimes using factors like the network subnet for machine authentication makes sense and adds something; sometimes it doesn't. It's definitely a factor to consider though.

Another example: in AWS, IP spoofing is not possible (according to https://ip-saas-infopark23444378-cms.s3-eu-west-1.amazonaws....), so an IP address is a reliable identifier. However, IP addresses of AWS instances are sometimes stable over time, and sometimes not. It depends on how the instance is configured and used. So again, IP address is a potential authentication factor, but there's no "one true solution".

And even once a machine is authenticated, it needs to be associated with some kind of role (or a role-like thing). This part is actually even trickier.


Also, when you are talking about container-based cluster deployment, the network (IP addresses, VPC, subnets etc) is useless for machine authentication, because any container in the cluster can run on any host machine at any time. As an example, Heroku is basically a giant compute cluster in a single security group (for each region).

They used to publish the security group ids, so that you could whitelist inbound traffic from Heroku. Now they don't do that any more, which makes sense because really, what is the point of whitelisting all of Heroku? :-)


It sounds very exciting.

But, the first example made me flinch:

    $ vault write secret/foo value=bar
Don't ever pass passwords on the command line. On linux, other users can see your command line (ps aux), and it usually ends up in a shell's history file.


Absolutely true! We support values coming from the command-line, stdin, and files (https://vaultproject.io/docs/commands/read-write.html), so you should use what you feel is most secure given the data you're entering.
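For reference, the three input modes look roughly like this (`@` reads a file and `-` reads stdin in the Vault CLI; the `generate-password` tool here is hypothetical):

```shell
# Riskiest: the secret appears in `ps` output and shell history
vault write secret/foo value=bar

# Safer: read the value from a file ("@" prefix)
vault write secret/foo value=@/tmp/secret.txt

# Safest for pipelines: read the value from stdin ("-")
generate-password | vault write secret/foo value=-
```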


I think the argument is something like, why support a method that is insecure nearly all the time and leads less-experienced users to make the wrong choices.

Good security software gently herds the non-expert to make good choices. Too many options, especially specialty risky options, don't help that goal.


True, but most people will do it on the command line.


This is why ansible-vault (which is not trying to do the same thing) spawned a temporary editor, and immediately upon save encrypted the file - it doesn't exist in history, nor is there a chance of leaving it behind on disk.

There's probably a way to do what you have by launching an editor and passing the result over stdin, but I'd suggest documenting an example, to avoid the risk of leaving the secret around.

Also +1 to removing the insecure history option. Documenting the stdin to use 'cat' or something that's not in the history would probably take care of that one.


Why not use interactive CLI instead of command line CLI (like routers have) to avoid leaks through the shell command history and other related channels?


I think what you mean by "interactive CLI" is reading from stdin, which seems to be supported.


Hopefully they use setproctitle() to erase it. Pretty sure this is what mysql does:

    jolan 4950 0.0 0.0 46164 8820 pts/12 S+ 16:06 0:00 mysql -uroot -px xxxxxxxx

Probably still a race condition and can be viewed before it is erased but at least it would be something.


> On linux, other users can see your command line (ps aux)

On FreeBSD I like doing:

    /boot/loader.conf: mac_seeotheruids_load="YES"
    /etc/sysctl.conf:  security.mac.seeotheruids.enabled="1"

Which restricts users to seeing only their own processes. You can make exceptions for a specific group or users with the same primary group.

I still don't put anything sensitive in a command line, of course.


On Linux you can mount procfs with the option 'hidepid' to similar effect.
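For example (assuming a kernel with hidepid support, 3.3 or later):

```shell
# /etc/fstab entry: with hidepid=2, /proc/<pid> directories of other
# users' processes are invisible entirely
proc  /proc  proc  defaults,hidepid=2  0  0

# Or apply immediately without a reboot (requires root):
mount -o remount,hidepid=2 /proc
```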


Yes FreeBSD got this one right.


Put a space before the command and it's gone from the history, AFAIR.


This is only true in bash if HISTCONTROL contains ignorespace or ignoreboth (ignoreboth implies ignorespace and ignoredups), so you should check that before relying on this functionality.
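A quick sanity check along those lines (bash assumed):

```shell
#!/bin/bash
# Enable both dedup and space-prefix suppression for this shell.
export HISTCONTROL=ignoreboth

# Verify the setting actually includes ignorespace before relying
# on the "leading space keeps it out of history" trick.
case ":${HISTCONTROL}:" in
  *:ignorespace:*|*:ignoreboth:*) echo "leading-space suppression: on" ;;
  *) echo "leading-space suppression: off" ;;
esac
```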


Yes but very easy to forget to do. BTDT.


Could you alias "vault" to " vault"?


I doubt it. My history file shows all the git-related aliases I use (st, gd, gd-) rather than the expansions (git status, git diff, git diff --cached).


Honestly, I dislike the approach. I don't want to tell my mysql <hello, please allow the user foo to authenticate itself with the password bar> and my webapp <hello webapp, please use the user foo and the password bar for the mysql over there>.

I want to tell the system <look, I have a user foo, and the webapp over there needs access to the mysql over here with that user. Make it so.>

I don't need to know the password my application uses to access the database. I don't WANT that password. No human should ever see this password. And heck, if the system wants to, just switch passwords every 60 minutes. I don't want to care.


Good point!

When I want to run something on a server and I don't want it to end up in the .history file, I always run vi and execute the commands inside it ( <esc>:!command ).

It always shows in the process list; but for short-lived commands the opportunity window is small for an attacker.


Unless you're on your personal laptop.

It's a tricky question, one that we've spent a lot of time thinking about at my company (Conjur -- shameless plug, one of our use cases is secrets management). Obviously doing this on a server that others can access/pwn is a bad idea, but that's a very rare use case. Most secrets are being added via the command line from an admin's single user laptop.

In the end, we've settled on what seems to be the same solution as Vault: support other sources (stdin, files, etc.), but allow the command line.

That said, I think you're right that we should look into adding a warning for the CLI tools and placing more emphasis on this fact in our documentation.


> Unless you're on your personal laptop.

It persists in your shell history... if your notebook is stolen, then what?

You can encrypt the filesystem, but it's better not to have your secret information scattered across your filesystem and computer memory.


How many people these days share their Linux machine with anyone else? If you're the only user, the first point doesn't matter. The history file is a concern, but can be mitigated, e.g. by setting HISTIGNORE.


Depends on the situation. A Linux machine running as a server can still be accessed by multiple people. Also, a "user" could be a compromised process that can execute commands. Using stdin or a properly created file means one less possible attack vector.


Doesn't storing the secret in a file first violate half the point of this service? Once it's written to disk in plaintext it's just as recoverable as your shell history.


All it takes is a WordPress vulnerability that lets someone run `exec` on your server.


If you're the sort to worry about Vault's command line being a security risk, you shouldn't be managing your secrets from your WordPress host in the first place.


This is true on shared systems. But for example as I am setting up a brand new server, I might do this from there since I know I'm the only user. The second half of your complaint is solved (at least in bash) by adding a space character before the beginning of the command.

Edit: am I wrong? If so, please explain, I really am curious.


> "Vault is the sixth such tool we've built. We've also built Vagrant, Packer, Serf, Consul, and Terraform. "

Sigh.

This is the first time that I actually feel overwhelmed. That nagging feeling that I'm doing something in some way solely because of habits. And that someone somewhere has created an awesome tool that would have allowed me to go home two hours earlier.

The number of new tools cropping up in the last couple of years alone is the 'culprit'.

These are awesome times, of course. Still, a part of my brain misses the time where you could ftp crap away to a server (even if automated by a Capistrano tool or similar) and spend the rest of the time arguing about the merits of puppet vs chef.


Obligatory quote from that XKCD about automating things. Remember, most of these tools simply automate what you already do. If you don't spend much time performing certain things, you don't need the tool. The cargo cult of "I should use Packer to create my Vagrant boxes, so I can have dev/prod parity" only works to a point. Instead, look at where you spend lots of time, and then look for tools that automate that task.

Personally, I try to be pretty conservative with tools that I start relying on. I know by painful experience that some tools get abandoned, some change drastically, making it impossible to upgrade, and some just turn out to be very buggy and horrible once you start putting them through their paces. In lots of cases, it's better to be critical than permissive when it comes to your infrastructure (and that's what these tools are aimed at). In almost all cases, your edge is in knowing the tools you use, and knowing about other tools. It's not in knowing all and using everything under the sun.


So what I really need to do is automate reading HN. I'll get half my day back!


This is totally true and a great comment, but where I almost always fall into the trap is when thinking about those things that I (or my co-workers) will spend lots of time on in the future, which are much easier to automate today than they will be then. And increasingly, just knowing about other tools is hard to keep up on!


The main reason for automation is not saving time. It is about being able to reproduce and test the thing you're trying to automate. It's also about tracking changes, accountability on a larger scale.

I manage quite a few servers, all running different in-house developed services. Almost all configurations are completely different, with a few exceptions, but I am still automating them. Just leaving it as it is, all manually managed, would probably save me a lot of time, so why do I do it? So it can first be deployed on some test VMs, and when someone changes a configuration, I can track it in git/svn/.. and see who committed what change. It would only save me time if I had to deploy service X more than 3 or 4 times (dev/test/qa/prod), but as it stands, nope. Timesink it is.


I would say that if you are not automating something like what you describe, you would have to spend time manually verifying that it is working correctly, so it is about time after all.


Don't feel that way. Very few tools like this stand the test of time.

I avoid using new tools in production. I wait a few years first to see if they last - after all I'm going to have to support the system for years, I don't want the tools to be the weak spot.


Use the tools that are useful to you. Plenty of people hack out great things on old toolsets. If you don't want to be overwhelmed, don't be.


I'm pretty excited about this release. We just added deploying Vault on top of consul in the microservice/realtime data platform we're building:

https://github.com/CiscoCloud/microservices-infrastructure

We were already using consul as our service discovery engine and as a data store for dynamic configurations via consul template. We feel Vault will help us manage secrets on our infrastructure and inside containers.


Neat! I'll have to see how you ended up using Vault since your project uses a lot of the same tech my personal webrtc server project uses. I'm also using ansible, docker, consul, registrator.

Excuse the mess, I'm still working on it: https://github.com/jgrowl/livehq


I know it is very new, but I would love to read a Vault vs Keywhiz [1] article.

[1] https://square.github.io/keywhiz/


One is implemented in Java (Keywhiz), the other in Go?


Surely there's a little more to it than that.


Got to mention blackbox, it's based on gpg, and meant for teams interacting within version control systems:

https://github.com/StackExchange/blackbox


I'm not terribly familiar with blackbox, but based on the blackbox Readme it seems that this critique from Vault's website [1] addresses why HashiCorp thinks Vault is a better solution.

> Chef, Puppet, etc. all solve this in a similar way: single-key encrypted storage. Chef has encrypted data bags, Puppet has encrypted Hiera, and so on. The encrypted data is always one secret (a password, a key, etc.) away from being decrypted, and this secret is generally not well protected since in an elastic environment, every server needs to somehow get this secret to decrypt the data. Additionally, access to the encrypted data isn't always logged, so if there is an intrusion, it isn't clear what data has been accessed and by who.

[1]: https://vaultproject.io/intro/vs/chef-puppet-etc.html


Any reason why AWS dynamic credentials use IAM users instead of STS and role assumption? That seems to be a somewhat more widely accepted way to do dynamic credentials / role proxying.


That would definitely be more idiomatic in AWS. (Source: I am an Amazon employee.)


came here to see how it was done; was hoping for STS :(



What's coming next from hashicorp? The one infrastructure piece for which good open-source solutions still lack is ACL with solid auditing and variety of permissions approval models.

There are a bunch of good ideas in this RFC, https://tools.ietf.org/html/rfc2904

Something like a host-your-own Amazon IAM would be pretty dope. Add support for 2fa and especially u2f and you've got yourself a winner.


I haven't finished reading the docs yet, but this part[1] caught my attention:

> When a backend is unmounted, all of its secrets are revoked and its data is deleted.

If a backend is similar to a filesystem as mentioned in the docs, that's quite a surprise. If a backend is like a filesystem, the secrets (files) should become unavailable when unmounted, but should become available again after remounting the backend (filesystem).

Are the secrets actually _deleted_ (ie, can't be recovered again), or am I misunderstanding?

[1] http://vaultproject.io/intro/getting-started/secret-backends...


I'm responding a little late here, but hopefully you'll still see this.

Yes, when you unmount a backend, all the data is deleted. Vault mostly behaves like a filesystem, but not in this scenario. The raw data is not preserved. If you want to preserve the raw data, you can remount (move the mount point).

We didn't finish it for 0.1, but we've already built secret enumeration into the core and plan to expose it at some point. Once it's available, if you want to unmount and preserve your secrets, you can enumerate them and back them up elsewhere (unencrypted). This isn't yet available.
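A sketch of the distinction, assuming the `remount`/`unmount` subcommands from the 0.1-era CLI (mount names hypothetical):

```shell
# Moving a mount point: secrets stored under it are preserved
vault remount secret/ secret-moved/

# Unmounting: all secrets under the mount are revoked and the
# data is deleted, not just hidden
vault unmount secret-moved/
```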


Looks interesting. There is definitely a need for something secure to manage secrets easily & efficiently.

I'm not super familiar with Shamir's secret sharing. The post says that the master key is split up across a number of people. One thing that I am confused about is if Vault goes down (possibly in the middle of the night), does it require a quorum of humans to boot it back up?


It does.

The solution to this problem is to run Vault in high availability mode[1], which gives you a set of standbys that can take over if the leader goes down. The idea is to unseal all the standbys ahead of time; if the leader goes down, a standby takes over, and the sealed node can be unsealed at some future point.

Security-sensitive things have been an interesting UX challenge for us; this compromise trades some UX for a much better security promise. As another comment in this thread says: without this feature there would be no real security for the stored data.

[1]: http://vaultproject.io/docs/concepts/ha.html
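A sketch of that flow, assuming the `-key-shares`/`-key-threshold` flags from the docs of the time (the 5/3 split is illustrative):

```shell
# Initialize: master key split into 5 shares, any 3 of which unseal
vault init -key-shares=5 -key-threshold=3

# On each standby, three key holders each supply their share ahead
# of time, so failover needs no humans in the middle of the night
vault unseal <key-share-1>
vault unseal <key-share-2>
vault unseal <key-share-3>
```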


Can you call mlock in go to prevent any secrets from being paged to disk?


We already do this. And yes, you can (using the "syscall" package).


Would you also recommend to set the lease duration to something higher in order to allow people to react?

This way even if Vault dies during the night (and you don't have pager duty) at least some clients (e.g. your cloud instances) can live through?


Looks good. The post did not mention HA (that I saw), which caused the confusion.


Shamir's secret sharing is actually fairly simple.

You want to have a secret shared by n people, any m of whom can decode it. You create a polynomial of degree m-1, and give each of those people the value of said polynomial at a different point. The secret will be the value of the polynomial at yet another point.

From the value of the polynomial at any m points you can figure out the polynomial and the secret. But the value at any m-1 points does not let you predict its value anywhere else.

(Detail. You would make the polynomial be over a finite field instead of the real numbers. But the math is all the same.)
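A tiny worked instance (threshold m = 2, n = 3 shares, over GF(7); real implementations use a much larger field):

```latex
% Secret $s = 4$. Pick a random polynomial of degree $m-1 = 1$ with $f(0) = s$:
\[ f(x) = 4 + 3x \pmod{7} \]
% Shares are evaluations at distinct nonzero points:
\[ f(1) = 0, \qquad f(2) = 3, \qquad f(3) = 6 \pmod{7} \]
% Any two shares recover $f(0)$ by Lagrange interpolation,
% e.g. from $(1,0)$ and $(2,3)$:
\[ f(0) = 0\cdot\frac{0-2}{1-2} + 3\cdot\frac{0-1}{2-1} = -3 \equiv 4 \pmod{7} \]
% A single share is consistent with every possible secret,
% so it reveals nothing.
```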


Yes, otherwise - when keys travel with secrets - there would be absolutely no security.

This is akin to disk encryption where you have to type password on boot, but then data is accessible later.


Haven't tried it yet, but most likely yes. But if you actually need vault and do spread the key to more than 2 people, you likely have more than one vault server running at a time.


Can someone explain hashicorps business model to me?

Love all the projects. Are they contracting out features to companies? Selling support contracts?

Very curious-- I think if people, especially individuals, can make a living from open source projects, that's great for everybody.

But do you have to be "famous" to do it?


To the best of my knowledge it comes from contracting/support and mostly their Vagrant Cloud (now Atlas) features that are business/enterprise only, which they charge money for. For example private hosting of Vagrant box images is a paid feature. The costs for that are per-user/per-month.


What are some of the existing tools/solutions which people use for managing secrets in a production environment?



Slightly different scope, but it solves a similar problem: https://github.com/cloudflare/redoctober. Good writeup about it here: https://blog.cloudflare.com/red-october-cloudflares-open-sou...



Neat! I've been working on something similar for about the past six months. My approach is a little more minimalistic, but it's also based on Shamir's secret sharing. I was planning to release a preview in the next couple of days. If anyone is interested: https://github.com/duse-io/api#how-it-works

//edit My solution focuses more on the sharing aspect between people, but can just as well be used like Vault


This is really awesome! I've been passively looking for something like Vault for a while, can't wait to start integrating.

I just finished tossing together a Chef cookbook to get Vault installed/configured up to the init/unseal step. Here's the link in case it's useful to anyone: https://github.com/hexedpackets/chef-vault


> Vault is one giant leap forward for practical security in a cloud environment.

My biggest security concern with a cloud (or VPS) provider is that they can very easily read any keys or unencrypted data from memory. Still a worry when renting a dedicated box, but it's a damn sight easier when it's virtualised.

I don't see Vault helping me with this... but then I'm probably both paranoid and missing the point.


There must be something in the air regarding tools for managing secrets. First Keywhiz, then Credstash, a little Python utility for managing secrets using AWS KMS: https://github.com/LuminalOSS/credstash


The policies look like they'd be a bear to manage since they are path- and not role-based. In order to implement least privilege with multiple actors you'd have to be really careful with your paths.

Example: 2 apps. App 1 needs secrets A, B, and D. App 2 needs secrets A, B, and C.

So we need to set up our paths in a way that App 1 can get A, B, and D, but not C, and App 2 can get A, B, and C, but not D. Now when you want to change which secrets your apps can access, you have to rethink how your paths are set up.
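For instance, with Vault's path-based HCL policies (policy name and secret paths hypothetical), each app's grant has to enumerate exact paths, and any regrouping of secrets means reworking the path layout:

```shell
# One policy per app, listing every path it may read
cat > app1.hcl <<'EOF'
path "secret/A" { policy = "read" }
path "secret/B" { policy = "read" }
path "secret/D" { policy = "read" }
EOF
vault policy-write app1 app1.hcl
```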

When you're instead assigning permissions to roles this is a lot easier. An example: http://blog.conjur.net/what-is-a-devops-secrets-server


I haven't been to HashiCorp's website basically since they released Vagrant. I use vagrant all the time though.

A few reactions. First: amazing design. Awesome documentation. Second: wow they have an awesome little suite of high-impact devops tools. I spent all day learning about them.

The unified style of websites they have organized into "intro" and "documentation" is awesome and easy to really digest what's going on quickly. All of the tools seem powerful, well thought out, and complementary.

Vagrant's been awesome to me and I'm excited to start using some of their tools like Packer, Consul, and Terraform in production!


Interesting indeed! But I couldn't find anything on the website or documentation about how secrets are encrypted?

The security page seems to be a generic one about reporting vulnerabilities: http://vaultproject.io/security.html

Edit: Found it under the threat model section here: https://vaultproject.io/docs/internals/security.html



Found it, thanks.


The blog post mentioned AES-GCM with a 256-bit key.


Slightly OT, but is there such a tool (to store secrets securely and accessibly) for personal use that runs well on Linux and can be easily moved around (portable)?



https://www.keepassx.org/ but I'm unsure if it has any sort of a CLI. It was mentioned in the EFF's updated SSL article.


[deleted]


Check out the ACL RFC [1] which has been developed and is now available for testing in the first alpha of etcd v2.1.0 [2], we'd love your feedback.

[1] https://coreos.com/etcd/docs/2.0.8/rfc/api_security.html [2] https://github.com/coreos/etcd/releases/tag/v2.1.0-alpha.0


(EDIT: I appear to be replying to an edited comment that was mentioning a possible naming confusion with ansible vault; I'll keep the post here even though the original is different now)

re: naming (and disclaimer): I wrote Ansible and designed Ansible's vault (and no longer work on Ansible), but to be clear, Ansible's vault is a nicer CLI (IMHO) around an idea that Chef users were already used to: encrypted data bags. It was nothing new. It took about two weeks to implement, but we had a few security issues to shake out afterward, which made it take a bit longer to get right. (Some people also got picky about whole-data-document encryption vs leaf-node-in-datastructure encryption, and I see points for doing it both ways, but went with the way that revealed the least amount of data)

I think calling it vault is perfectly fair, as it's a very generic word, and words are in short supply. This is also doing a LOT more.

I like what Mitchell and crew have done around revocation here, though I do somewhat wonder about the mode in the docs that sets secrets and allows them to show in bash history, which is why ansible vault spawned an editor. I'm sure it's got editor options too though. I'd probably also change the phrasing "HTTP API" to "HTTPS API" :)

The dynamic secret stuff also looks pretty interesting.

One of the interesting ideas might be whether it allows sharing of secret values with some applications without letting anyone read them (asking if a user can use X without letting the user READ X), which is something we did in Tower - though it seems the ACL system might allow something like that. An example of that was the system could memorize an SSH key and let a team use it, but never give it to anyone, to avoid rotation scenarios.

Anyway, all the lease ideas are very cool and it seems natural to want this decoupled. For a time I considered decoupling what was in Ansible vault from Ansible, though really it already operates on generic files, so that's easy enough if someone wanted to take a stab at it.

I suspect it wouldn't be impossible to make Ansible Vault have a hashicorp-vault-mode where it encrypts a file "into" the vault, and the contents that remain in your "ansible vault" file are nothing more than a pointer to a secret in Hashicorp Vault. In this way, it would just be another cipher type. Maybe that's crazy though - but I think I like that better than storing secrets in version control, where they exist forever unless you prune history. (And I never thought storing vault files in public source was a good idea). However, avoiding a server or database was always one of Ansible's design goals - for better or worse, that limited some of the things the core app could do, in ways that were both good and also occasionally limiting.
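To make that "pointer cipher" idea concrete, here is a purely hypothetical sketch: the file kept in version control holds only a reference, and the real secret stays in Hashicorp Vault. (The header format, `HASHICORP-POINTER` cipher name, and plugin behavior are all invented for illustration; only the `vault read` command exists.)

```shell
# The on-disk "vault" file under version control contains no secret material,
# just a pointer record a hypothetical cipher plugin would know how to resolve.
mkdir -p group_vars/all
cat > group_vars/all/secrets.yml <<'EOF'
$ANSIBLE_VAULT;1.2;HASHICORP-POINTER
vault_path: secret/prod/db-password
EOF
# At decrypt time, the plugin would fetch the real value, e.g.:
#   vault read -field=value secret/prod/db-password
grep vault_path group_vars/all/secrets.yml
```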

Anyway, keep up the forward progress with the new things! I really envy the time to architect/design something so detailed here.


I do think it would be nice if Ansible Vault could integrate with HC's Vault in some way (especially for secret generation purposes, since that's not something ansible-vault does, really).


Would be cool if there were YubiKey support using the challenge-response mechanism to seal the vault.

If only I had time...


this may be an obvious question, but how do things like Vault/Keywhiz differ from Consul/Etcd?

Is it simply that the former is more encrypted than the latter?


ah, as per http://vaultproject.io/intro/vs/consul.html

> While Consul can be used to store secret information and gate access using ACLs, it is not designed for that purpose. As such, data is not encrypted in transit nor at rest, it does not have pluggable authentication mechanisms, and there is no per-request auditing mechanism.

I guess my question now is why not add E2E encryption on top of Consul/Etcd instead of a whole new system?
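For concreteness, such a layer might look like the sketch below: encrypt client-side, store only ciphertext in Consul's KV store. (The Consul endpoint in the comment is the standard KV API; the key handling is illustrative, and distributing `$PASS` to readers is exactly the secret-zero problem again, which is part of why a dedicated system is attractive.)

```shell
# Client-side encryption over Consul KV: Consul only ever sees ciphertext.
SECRET='db-password-123'
PASS='correct horse battery staple'   # key distribution left as an exercise
CIPHERTEXT=$(printf '%s' "$SECRET" | openssl enc -aes-256-cbc -pbkdf2 -a -pass pass:"$PASS")
# Store the ciphertext (requires a running Consul agent):
#   curl -s -X PUT -d "$CIPHERTEXT" http://127.0.0.1:8500/v1/kv/secrets/db-password
# A reader with the passphrase decrypts locally:
PLAINTEXT=$(printf '%s' "$CIPHERTEXT" | openssl enc -d -aes-256-cbc -pbkdf2 -a -pass pass:"$PASS")
echo "$PLAINTEXT"   # prints db-password-123
```

This recovers confidentiality at rest and in the KV store, but not per-request auditing, pluggable auth, or lease/revocation semantics.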


It's desirable to keep secret management simple and as self-contained and self-sufficient as possible. That way, it's much easier to drastically lock it down without impacting normal users too much. Your organization might have, say, 30 people who routinely need access to the orchestration/deployment infrastructure but maybe 3 who need access to cryptovariables.


How do I know I can trust this?


Read the source code: https://github.com/hashicorp/vault


Thank you. I read the article but it wasn't clear that it was open source.


It's pretty amazing to see how many new secrets tools for DevOps and cloud have suddenly materialized in the last few weeks. At Conjur, we have been in production in this space for almost two years. If you're wondering how to stack up these different tools against each other, here are a few facts about our solution:

* At Conjur, we care deeply about three things: (1) the overall user experience for developers, operations, and security personnel, (2) the security of our code, and (3) high availability

* Conjur implements full-featured role-based access control for both "carbon" (people) and "silicon" (VMs, servers, containers, and other code)

* Conjur solves not just secrets management, but also SSH access management and software-defined firewall (granular access control for microservices)

* The Conjur core services are not open source, but our cryptography is (https://github.com/conjurinc/slosilo), as are many of our client libraries and connectors (for example: https://github.com/conjurinc/api-ruby, https://github.com/conjur-cookbooks/conjur). And Conjur cryptography has been audited by a respected 3rd party.

* Conjur features packaged integrations with Chef (https://supermarket.chef.io/cookbooks/conjur), Puppet (https://forge.puppetlabs.com/conjur/conjur), Docker (http://developer.conjur.net/product_demo/secrets.html), Heroku (https://github.com/conjurinc/heroku-buildpack-conjur), AWS and more. New integrations are easy to build using the Conjur HTTPS and LDAPS APIs (http://developer.conjur.net/reference/services)

We have also gone to great lengths to make it easy to try Conjur yourself, with automatically provisioned, self-service demo appliances featuring the Conjur server and client, Docker and Git:

https://demo-factory-conjur.herokuapp.com/request/secrets

https://demo-factory-conjur.herokuapp.com/request/ssh



