Launch HN: EnvKey (YC W18) – Smart Configuration and Secrets Management
108 points by danenania on March 12, 2018 | 63 comments
Hi HN! I’m Dane, the founder of EnvKey (https://www.envkey.com). EnvKey is an end-to-end encrypted 1Password-like service that lets dev teams manage API keys, credentials, and configuration easily and securely.

For some background, you can check out https://techcrunch.com/2018/02/27/envkey-wants-to-create-a-s... as well as a Show HN from when EnvKey first launched: https://news.ycombinator.com/item?id=15330757. The Show HN is what got EnvKey its first batch of production users, and I probably wouldn’t have gotten into YC without it. So thanks HN! I owe a hell of a lot to this community.

On where the idea came from: I had the first inklings at my last job. We were in the MVP stage, so we ended up with a bunch of separate apps and services as we experimented. These were split between CloudFoundry and Heroku, and we also had an in-house test server running CI for everything on TeamCity. Keeping stuff like API keys, puma server settings, and other environment-specific config in sync everywhere was a serious headache. Bugs and failed CI builds due to missing keys were common, and our Slack quickly filled up with requests for API keys and .env files. We knew this wasn’t secure, but there didn’t seem to be any solution out there that was worth the additional complexity it would introduce.

One day while wrangling with TeamCity build variables, I had the thought that this could all be so much easier. Why were we painstakingly copying big blocks of config from one place to another? It was like dealing with code pre-source control. And sure, our secrets were out of git, but was spraying them all over Slack and email any better? That night, I started typing out some notes for an 'Env Vars Locker' service that would use PGP and environment variables to solve this issue in a minimalistic way.

A bit later, I left that job to do something on my own. After a false start with a different idea, I decided that the 'Env Vars Locker' had potential. I did a round of problem interviews, and people were enthusiastic. It seemed like almost every team had this issue, and it only got worse as companies grew.

6 months later, I had a working beta and some early users. 6 months after that, EnvKey officially launched. Now we have many customers using it happily in production. It’s growing rapidly, and lots of new features are in the pipeline.

So that’s the backstory. Now for the good stuff: how it works.

With EnvKey, configuring any development or server environment becomes as simple as setting a single environment variable (ENVKEY=F4U4jGkZuo24zKxxgJsR-4f1g2w3VpHYpYC2x). It lets you edit configuration and set access levels for all your company's apps, environments, and teams in one place with a user-friendly, cross-platform desktop UI.

It keeps developers and servers in sync securely and automatically so that people don’t resort to sharing secrets over email, Slack, git, spreadsheets, etc. (a serious security risk, even with 'development' secrets, since the line here is fuzzy). It also removes a whole class of config-related bugs, simplifies updates and secrets rotation, and prevents developers from interrupting each other or getting blocked when they don’t have the latest config.

Our servers are not trusted by any EnvKey client and cannot read or modify encrypted configuration (apart from deleting it). Public keys are verified by a web of trust during every crypto operation, and no third party gets access when you invite a new user to the system. The crypto is all vanilla OpenPGP, and all clients are fully open source. The security details are documented here: https://security.envkey.com

Apart from the cloud service, we're also working on an on-prem version and a hybrid option that will let you store the encrypted config in your own S3 account without it ever touching our servers.

For reliability, we run a high availability Kubernetes cluster on AWS, and also back up encrypted config to S3 in a separate region on every update. If any of the client libraries can’t load from the server for any reason, they’ll fail over directly to S3 (you wouldn’t even notice).

Unlike other tools that require heavy lifting on the ops side and complex integrations, EnvKey typically takes less than 15 minutes to set up and integrate. With a line or two of code and an ENVKEY environment variable, all your config can be accessed just like local environment variables in your code.

With Node, for example, it's just:

  $ npm install envkey --save

  // in main.js (the entrypoint of your app)
  require('envkey')

  // anywhere else in your code
  const stripe = require('stripe')(process.env.STRIPE_SECRET_KEY) // this will always be in sync

Other languages (Ruby, Python, Go) work similarly, and there’s also a bash library called envkey-source that lets you set shell environment variables with a single line:

  eval $(envkey-source)

This allows you to use EnvKey with any language. It also pairs well with Docker.
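
For example, a minimal sketch of the Docker pairing might look like this (the image name is a placeholder, and it assumes ENVKEY is already set in the host shell):

  # pass the ENVKEY through to the container at run time; "my-app" is a placeholder image
  $ docker run -e ENVKEY=$ENVKEY my-app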

If you already use 12-factor config or a similar approach, it's extremely easy to switch. There's an importer for bringing in your existing config that accepts bash KEY=VAL, JSON, or YAML format.
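
As an illustration, a typical file in the bash KEY=VAL format imports as-is (the keys and values below are made up):

  # sample .env-style import in KEY=VAL format
  STRIPE_SECRET_KEY=sk_test_xxxxxxxx
  DATABASE_URL=postgres://user:pass@localhost:5432/myapp_dev
  PUMA_WORKERS=2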

EnvKey is designed to be both simple and easy, and to make a previously messy and error-prone part of your system into something you hardly ever have to think about because it just works.

There are a lot of interesting possibilities for the future. Why are we dealing with API keys in the first place? I think this can all be abstracted over. Imagine that when a developer leaves a company, you click once to remove them, and all the API keys and credentials they ever had access to are automatically rotated behind the scenes. Or imagine integrating APIs like Stripe with your whole stack in one click. That's the kind of thing that EnvKey enables, and it's why I believe this approach can have a huge impact. I hope you'll give it a try and tell me what you think! I'm super interested to hear about your ideas and experiences in this area, since HN is obviously one of the places where people are most affected by these issues.




We've been using EnvKey since nearly the first day it was launched, and it has really made key management for our project and team easier on a massive scale.

Prior to that, we were looking at Vault or the AWS baked-in key management solutions, but all of those were extremely tedious to set up and maintain. In total honesty, we migrated over to EnvKey within about 20 minutes (from over 30 secrets stored in server environment variables and .env files). Adding new secrets for new services takes a few seconds, and there is no need for us to redeploy Elastic Beanstalk instances when we do. EnvKey is definitely one of the 'top five' third-party services we've integrated for our development work. We're still on the grandfathered free plan and received no incentives for posting this, other than being a grateful customer.


So when EnvKey fails or is under attack, you lose access to all your infra (or open yourself up to MITM)?


This is of course a legitimate concern, but it's also one that is addressed in EnvKey's design. Client-side encryption means that even if we are attacked, no sensitive data will be exposed. And we have a failover to S3 so that you won't lose access if the service goes down.

Storing config and secrets obviously requires trust, but at the same time, many other services that developers use without a second thought actually require a lot more trust than EnvKey does.


This is the key to me. This tech would make life easier, but what happens when there is an outage or they get hacked or $reason?


this


How would you compare EnvKey to something like HashiCorp's Vault?


I've been meaning to write up a comparison with Vault for the website. Simplicity is definitely the main differentiator. EnvKey is designed to "just work" from a developer's perspective, and to actually save time and boost productivity instead of being an obstacle. There's no server setup or administration, and integration is just 1-2 lines of code and a single environment variable, vs. a good amount of work with Vault.

In general, EnvKey is usually a 5-15 minute setup and integration process that requires no ongoing maintenance, whereas, depending on a company's level of devops sophistication and resources, Vault is a days-to-weeks project to get working just right, and will usually require additional maintenance and integration work on an ongoing basis.

Another area that I think can get overlooked with Vault is development secrets. In my view, it's important to protect these just as well as production secrets, since prod-level secrets can easily slip through the cracks and end up in development environments for various reasons. Vault can be set up to manage these, but it's not really the focus, so you're left to your own devices in terms of integrating it into a dev-friendly workflow. EnvKey, on the other hand, makes distributing development config and secrets totally seamless.


I think it's a little weird and biased to imply that HashiCorp's Vault needs special setup to manage development secrets.

Vault stores secrets. That's all it does. (Well, it can also generate TLS certificates, handle AWS integration, and more.) Once you have a Vault instance, adding a new secret takes seconds, and having one instance for development and a second for production is trivial.

Or you might prefer a single instance with more restrictions, logging, and the like:

* secret/$application/development/db_user
* secret/$application/development/db_pass
* secret/$application/development/db_host

vs

* secret/$application/production/db_user
* secret/$application/production/db_pass
* secret/$application/production/db_host

But Vault itself doesn't care about dev vs. prod. That's more an infrastructure question about which hosts can talk to it, etc.


Fair enough. My point is just that getting it working smoothly with a development workflow is another task that likely won't be trivial.


It looks really interesting, and I would definitely consider it (we're in the process of re-thinking our secret management solution. It works, and it's secure, but a bit clunky). We're a small bootstrapped company with only 4 developers currently.

I have to say, though, that the pricing looks a bit intimidating. Not from the cost perspective, but just the complexity of it all. And you seem to penalize growth (there was a discussion[0] on HN that I think you triggered actually).

I really find it hard to estimate how many config requests or connected servers we might have... And the last thing I want is for things to stop working or trigger some unknown billing when we're bootstrapping new servers, or re-jigging our configs.

Per developer sounds totally reasonable to me. You're aiming to make secrets management simple. Can't your pricing mirror this?

[0] https://news.ycombinator.com/item?id=16477316


Thanks for the feedback. As you can see from that thread, EnvKey started out with per-developer pricing.

The main issue is that we have some customers who use EnvKey with hundreds of servers and only one or two users. This, of course, doesn't work out very well for us in terms of unit economics and pricing based on the value the service provides.

That said, the usage caps are meant to be generous and not get in the way. If you're using EnvKey in a 'normal' way and not spinning up thousands of parallel processes or something along those lines, you'll be way, way under the limits on config requests. For connected servers, it's really just a question of how many server instances you're running that use EnvKey, which is something I would think you need to plan for anyway when it comes to hosting costs.

I'm very open to adjusting these based on feedback if they are getting in the way / not matching up correctly with value. But as much as I'd prefer otherwise, I'm afraid there do need to be some sort of usage-based limits to make it work.


I'm in a similar boat to the OP. We're near the boundary of your pricing between startup and traction, but the non-linear jump (while more common these days) is an immediate turn-off.

There's a pretty big difference between $240 and $1200 a year when it comes to discretionary spending.


I'd argue that if you're worried about a difference of a thousand dollars over the course of a year, you're not a customer worth having.


I think the use case where the per-config-request pricing model could break is with serverless (like Lambda or Cloud Functions or ...). Secret management there is super important, and your solution looks great, except that what makes these platforms great for low-cost deployments is that they "scale to 0" (turn off your instances when there's no traffic), which means that each instance potentially lives just briefly. Since each start of each instance is a config request, that could in some cases become problematic.

That said, at your lowest tier you could have ~20 moments of "booting a new instance" per hour. That's probably plenty for most cases, although you may find users for whom it isn't enough.


Good point. What I'll say for now is that if anyone wants to use EnvKey with Lambda or Cloud Functions and is worried about this, feel free to reach out - dane@envkey.com. We'll make sure you're paying a reasonable price that reflects your usage level.


I don't get what the issue here is. Guess at which plan you'd need and go about your business. I'm sure they're reasonable about upgrading and/or paying for additional usage if you go over the limits. Even if they weren't, what's the risk here? That you'd end up owing a couple hundred extra dollars? If you're worried about that, you're probably not a customer they want. This sounds like standard developer cheapness.


> I'm sure they're reasonable about upgrading and/or paying for additional usage if you go over the limits.

I certainly hope so, but we've been bitten by surcharges before when we had a mini-DoS on our system...

Besides that, there's a big element of trust here. I'm going to trust these guys with my most important secrets. I expect a trustworthy, transparent and simple solution that doesn't surprise me. Pricing is an element of that.

> This sounds like standard developer cheapness.

I'm sorry, but this is just mean. I'm happy to pay, but I want to predict my costs. I understand usage-based pricing, but for a secret management system, I really don't see how this applies.

Sure, there might be outliers who spin up 1 billion micro-processes, each connecting to their system, but I think it's the wrong approach to base your pricing on those. At least not until you're the scale of Amazon. Or even Digital Ocean.

I think I'm exactly the customer they want, but my 2 cents advice is to simplify pricing. Charge more. Fine. Just make it simple and clear.


Thanks for laying out your concerns. Is your feeling that any sort of usage-based tiers that go beyond per-user pricing are too complex? Or is there some way to limit usage in a reasonable way that wouldn't rub you the wrong way?


Just to clarify, I truly believe it's in your interest to make pricing simpler. It helps build trust and conveys the same simplicity as the product itself. This is key (no pun intended) with this kind of service.

Specifically, I think per-developer pricing should correlate well with value. Sharing secrets increases in complexity when more people are involved.

One other thing I can think of, perhaps, is environments. Maybe bigger customers would have more environments that need to be kept separate? (Production, staging, test, and dev come to mind, but potentially a more granular split in larger orgs: a billing environment, marketing, etc.) Maybe you could offer totally separate accounts that are independent of each other but billed together. Keeping things separate also has security value.

Servers: it gets much more messy much quicker in my opinion. Docker-based setups, auto-scaling groups etc make this difficult. So this reduces pricing predictability for us, and I imagine others.

Config requests: I don't even have a clue what this means before I start. But seeing this deters me. You explain it in the FAQ section, but I still have to do some homework to figure out my rough usage before I even try your product... This is the worst pricing element, in my humble opinion.

In any case, I'm just one data point. If you have enough traffic, then you should A/B test this. Otherwise, talk to your customers (although then there's selection bias at play), or copy from someone else with a similar service?

At your early stage, I think building momentum and traction is more important than worrying about outliers. You have many more years to refine pricing and figure out how not to lose money and attract the right customers. But now you should probably get as many customers as possible to see that it's a viable business and build a reputation. Just my 2 cents.


Thanks for your thoughts. I've decided to take out the config request/server limits and wait until this is actually a problem to deal with it.

We'll probably need to put some sort of sanity-check rate limits in place eventually, but I really don't want people to be confused or have to worry about this stuff while using EnvKey.

Pricing is hard :-/ Thanks again!


As a huge fan, who has already introduced it to some other companies who have become customers, I can understand the change in pricing, however I find the abrupt removal of an “individual user level” a little frustrating. I have been ardently telling all who would listen they should try it, even on a tiny personal project, however for these solo developers, the price to continue using the product for their personal projects has now quadrupled! I’d sincerely recommend you reintroduce a ‘1 user’ tier, perhaps at 5 dollars.


Thanks, I think having an indie tier for a single developer is a good idea. Will get this implemented soon.


It is indeed. Keeping things simple, at least in the beginning, is the best approach in my opinion. You can always add things later as you grow! But now you should focus mostly on growing :)


It’s kind of odd that you need to provision the “root” key into each client. You still have the problem of this key leaking from setup images or scripts or emails or slack or git.

I would expect the local client to generate a cert and register it with the CNC server, then wait for it to get authorized server-side.


You raise a good point. EnvKey minimizes the number of secrets that you need to deal with, but you still do have to protect ENVKEYs (the access keys that EnvKey generates).

The idea here is to create a simple 'base layer' for access that can work with any host. Environment variables fit the bill nicely because you'll almost always have some way to set one, and access to these will generally be coupled to server-level access.

That said, I'm definitely interested in adding other options for integration.

For example, I'm currently working on a way to store encrypted configuration in S3 buckets you control instead of in EnvKey's cloud. With this approach, you could define bucket policies (for both development and production-level secrets) that restrict access by IP or by security group. That way, even if an ENVKEY leaked, it couldn't be used outside of a privileged context.
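
To make that concrete, here's a rough sketch of the kind of policy I have in mind (the bucket name and CIDR range are made-up placeholders), denying reads from outside an allowed IP range:

  # policy.json (hypothetical): only 203.0.113.0/24 can read the config objects
  {
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-envkey-config/*",
      "Condition": { "NotIpAddress": { "aws:SourceIp": ["203.0.113.0/24"] } }
    }]
  }

  # apply it to the bucket
  $ aws s3api put-bucket-policy --bucket my-envkey-config --policy file://policy.json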

There are definitely other possibilities too. I'll look into the CNC server approach. Thanks!


Now that I think of it, you don't even need a CNC server; you can make do with a public FTP site. The client will generate a cert and send it to the FTP server, an on-prem app will download it from the FTP, the admin will review the fingerprint and sign the cert with his own key, and then the on-prem app will encrypt all secrets with all signed certs and push them back to the FTP server. Substitute S3/etc. for FTP as needed.


If you're on AWS, I highly recommend taking a look at Chamber + Parameter Store for secret management. If you're not on AWS, EnvKey looks like a reasonable solution with ease of use. Just don't misplace the key!

https://github.com/segmentio/chamber/blob/master/README.md


This looks really awesome, 'danenania. Secret management has been on my mind lately, and this really fits a sweet spot for what I'm looking for. I know things like HashiCorp's Vault do basically the same thing, but Vault is complex enough that sometimes I barely understand exactly what it does. Good luck!


We've been using this service for quite a while as a small startup, and it changed the game for us as far as env vars go. WE LOVE IT!


Same here! We're really happy with it.


looks neat! would love to use something like this at work.

with regards to the S3 fail-over, how do you manage per-account access/authz?

source of fetch.go seems to indicate a single bucket is used for the fail-over: http://bit.ly/2p6ozyN

do you create per-account restricted policies and somehow have the client assume a particular IAM role or do you just have a world-readable bucket and rely on PGP for secrecy?

UPDATE: turns out that the first half of the ENVKEY (before the '-') is some sort of "env id". the S3 URL is then formed by appending the API version (v1 is valid, not sure if it's current) to the base bucket and prefix (see earlier bit.ly). for shits 'n giggles, let's try using the ENVKEY from the site's landing page ("p9WYzzHefy33gzgDdvPJ-EKdh4jgBsRBBNerK").

let's test it out with a simple, unauthenticated cURL:

  $ curl -s s3-eu-west-1.amazonaws.com/envkey-backup/envs/v1/p9WYzzHefy33gzgDdvPJ | wc -c
     15291

  $ curl s3-eu-west-1.amazonaws.com/envkey-backup/envs/v1/p9WYzzHefy33gzgDdvPJ
  ...

small sample of output for latter cmd: https://pastebin.com/igXQrk2z

so, it appears that the fail-over "feature" exposes your PGP-encrypted secrets to the world _without any authentication whatsoever_. PGP is pretty secure, and the space of potential IDs seems pretty large ((26+26+10)^20 ≈ 7.0e35 potential IDs), so that's probably fine...

can users opt-out of the fail-over feature?

EDIT: s/security-by-obscurity//


Thanks for the feedback and for laying out your investigation :)

I'd say it's pretty unfair to call a 20 char id 'security-by-obscurity', unless you want to call almost every username/password authentication mechanism the same. The id has vastly more entropy than the average password and is far beyond brute-forcible.

Along the same lines, there are no known attacks that can break 2048-bit PGP with a sufficiently strong passphrase.

So there are two layers of security that cannot be broken by any real-world attacks. I believe that is indeed sufficient for protecting customer data.

It's likely that we'll move away from the S3 failover eventually in favor of our own replication strategy. This wouldn't really have security implications, but it does make it simpler to have a single source of truth for logging, which is coming soon.


all fair points. and your product seems to be designed/documented/marketed with the notion of keeping the username (which is part of the ENVKEY) secret, so agreed that "security-by-obscurity" is unfair (coupled w/ massive ID space).

your product is very well designed, seems like a tremendous customer experience. best of luck; i hope you continue to grow


This whole thing was created because it's annoying for devs to set up Vault, but Vault is an Ops process, not a Dev process. Just use environment variables and let Ops take care of exporting them via a credential management system. Coding a specific credential management system into your app is a bad idea.


Not all teams/companies have an ops team. If you are starting a project or are at a small team/company, then dev probably _is_ ops as well. Saying "throw it over the wall to ops" is not a very useful attitude for probably half the developers on this forum.


Thanks, you raise a good point.

To be clear, EnvKey is completely based around environment variables, so you won't need any EnvKey-specific code in your app apart from a line or two to install/import the package.

In code, config is accessed in the same way as local environment variables. For example, with Python, it's just:

  $ pip install envkey

  # in main.py (the entrypoint of your app)
  import envkey

  # anywhere else in your code
  import os
  import stripe

  stripe.api_key = os.environ["STRIPE_SECRET_KEY"]

If you decide to switch to some other approach based on environment variables, all you need to do is remove the envkey package. Does that help address your concern at all?


    import envkey   # Fetches and installs environment

In Python at least, that feels a bit icky. Modules can run complex code at import time, but it's rarely done.

The import order between modules is often not specified or carefully maintained, and web projects can have many main entry points. For example, a Django app might start from "wsgi.py" if run in a web server, from "manage.py" when running command-line utility scripts or tests, or from any other script file at all if a few setup lines are added.

People prefer to put all imports at the top of the file, but may want to do something right before or after envkey runs. Imports are not supposed to fail or throw exceptions, unless there is a serious code bug.

It would be nicer to have

    import envkey   # No important side-effects
    envkey.setup()  # Can be placed at the appropriate place
or similar.


Thanks for the feedback. A one-liner to import is more consistent with EnvKey's style (this is how it works in all the libraries), but perhaps it makes sense to expose the loader as a separate package so that you can have more granular control when it's necessary.


Doesn't really make sense. Why are you importing this package if you're getting the creds via environment? Why wouldn't you just load the creds into the environment with an external tool, and then just run your app, and never have to modify your app in the first place?


Importing the package is what sets the config on the environment.

If you'd prefer not to modify the app at all, you could also use EnvKey's bash tool, envkey-source: https://github.com/envkey/envkey-source

This lets you set the config as OS-level environment variables, and therefore doesn't require adding a library to your app. With an ENVKEY environment variable set, it works like this:

  $ eval $(envkey-source)
  $ python
  
  >>> import os
  >>> os.environ["SOME_API_KEY"]


Using the bash tool is a great solution, and is way less technical debt than adding code to an app. I would have the tool execute apps directly rather than export the variables to bash, though both are useful.


Great - I'm glad that's a better fit for you :)

You can accomplish 'executing an app directly' with a sub-shell like this:

  $ (eval $(envkey-source) && run-app-command)

That way, the variables won't be exported in the parent shell.


What about reboots though? You need to reboot / relaunch some services to pick up latest env vars.


No matter what you're getting a value from, if you want it to be able to change without restarting, you have to add a custom routine to do so. Adding a custom routine to get an environment variable is the same as adding a custom routine to call an API method.


This is a great product! Our team does something similar with an in-house library and Google KMS.

I got a chance to talk about how it works at LISA last year: http://selfcommit.com


Haha, we just created this service ~3 months ago ourselves, with inheritance etc. :D


How is this any different to Azure Key Vault? What's the benefit?


I haven't personally used Azure Key Vault (or Azure in general), but as it looks comparable to AWS Parameter Store, here are a few potential benefits of EnvKey:

- EnvKey is host-agnostic, so if you ever want to migrate to a different host, use an external CI tool, or even just bring up a quick script that relies on some configuration, that's trivial with EnvKey, whereas it might not be so easy with the Azure-specific service.

- EnvKey offers a user-friendly desktop UI for managing configuration across apps, environments, and teams. It allows for some useful things like inheritance and imports/exports. It's also generally better not to manage secrets on the web if you can avoid it (mainly due to browser extensions).

- EnvKey has a much simpler access control system.

- EnvKey makes development configuration and secrets a first-class concern, whereas Azure/Parameter Store services tend to focus mainly on staging/production-level secrets.

- EnvKey uses client-side encryption (OpenPGP), whereas Azure Key Vault (and Parameter Store) use server-side keys for encryption.


Parameter Store is rather cumbersome and heavyweight for the kind of quick iteration you can do with EnvKey, and of course it's AWS-specific. I do use Parameter Store to "inject" the EnvKey into newly-booted EC2 instances, but then EnvKey takes over from there.
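
In case it's useful, here's a rough sketch of that injection step in an EC2 user-data script (the parameter name and app command are placeholders; it assumes the AWS CLI is installed on the instance):

  # pull the ENVKEY out of Parameter Store at boot (parameter name is a placeholder)
  export ENVKEY=$(aws ssm get-parameter \
    --name /myapp/production/ENVKEY \
    --with-decryption \
    --query 'Parameter.Value' \
    --output text)

  # from here, the in-app EnvKey library (or eval "$(envkey-source)") takes over
  ./start-app.sh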


I was able to abstract it away into a pretty simple Ruby class (for a Rails app) that writes an encrypted S3 file. That file is loaded by my containers into ENV (currently using ECS). Just a couple of method calls to update.


How is it cumbersome and heavyweight? If the app is already using AWS, it's trivial to start using the parameter store.


Any luck in bringing support to PHP yet? I know we had chatted a while ago about that being a possibility - was hoping that was still on the radar.


Yes, it's definitely still on the radar. I have a basic version working, but ran into some complexity with Apache and PHP's request model. It doesn't make sense to run EnvKey in every PHP process (since they are per-request), so I'm thinking it needs to cache based on the Apache pid. I'm still working out the best way to accomplish this. Sorry for the delay!


Ah, no worries.

I'm not sure if you're thinking it would be web server dependent, but NGINX is the bigger share of the market. (Ideally it wouldn't matter what the web server is though).


Good to know - as you can see, I don't have a lot of PHP experience.

If anyone knows of a good way to get a PHP process's parent server pid (regardless of what server it's running on), please let me know :)


PHP has an execution model that's /really/ bad for per-process requests. Even attempting to grab the parent pid is probably not sufficient as you can have execution contexts with no parent (ex: running php from cron).

On Linux you can get the parent pid through posix_getppid() (http://php.net/manual/en/function.posix-getppid.php), but this won't work on Windows and has the same limitations as above.

Have you thought about having a linux daemon/agent that runs in the background and keeps the ENV in sync?


I think a caching daemon would be a good option for PHP, but it does add a level of complexity.

Another option would be to add EnvKey support to confd (or similar project), and provide config and template files that write the env vars to Apache and Nginx configs, and reload when they change.

Though neither confd nor a daemon would work for PHP sites that use a PaaS or shared hosting, which I think is a large percentage of the market for PHP.

Using one of the existing PHP caching solutions (like opcache) might be an option.


> Have you thought about having a linux daemon/agent that runs in the background and keeps the ENV in sync?

Yes, though I think it's also important to give developers control of when their config reloads, since surprises here can be dangerous.


I think having a daemon running on the OS that only syncs with an explicit ‘service envkey reload’ would work well.

If running php with nginx (via php-fpm), it’s still common for worker processes to come and go rather frequently. Imo, per-request pricing just won’t work well with php.

I would love to use your product, btw. Congratulations on launch.

-php dev


The webpage says "Zero Knowledge". What's zero knowledge about envkey?


Configuration and secrets are encrypted client-side so that our servers can't access this data.


How does this compare to relying on Parameter Store if you're in AWS?


Linking to my comment downthread which addresses this: https://news.ycombinator.com/item?id=16569534#16569940



