Show HN: From dotenv to dotenvx – better config management (dotenvx.com)
354 points by scottmotte 29 days ago | 206 comments

I think it's good advice not to pass secrets through environment variables. Env vars leak a lot: think phpinfo(), Sentry, Java VM dumps, etc. Env vars also leak into sub-processes if you don't pay extra attention. Instead, read secrets from a vault or from the filesystem from _inside_ your process. See also [1] (or [2], which discusses [1]). Dotnet does this pretty well with user secrets [3].

[1] https://blog.diogomonica.com/2017/03/27/why-you-shouldnt-use... [2] https://security.stackexchange.com/questions/197784/is-it-un... [3] https://learn.microsoft.com/en-us/aspnet/core/security/app-s...

> Instead, read secrets from a vault or from the filesystem from _inside_ your process.

I’ve never liked making secrets available on the filesystem. Lots of security vulnerabilities have turned up over the years that let an attacker read an arbitrary file. If retrieving secrets is a completely different API from normal file IO (e.g. inject a Unix domain socket into each container, and the software running in that container sends a request to that socket to get secrets), that is much less likely to happen.

God this is such a prime example of how we just don't do security well enough industry wide, and then you end up with weird stupid stuff like encryption being an enterprise paid feature.

Secrets have to be somewhere. Environment variables are not a good place for them, but if you can't trust your filesystem to be secure, you're already screwed. There's nowhere else to go. The only remaining place is memory, and it's the same story.

If you can't trust memory isolation, you're screwed.

As a counterintuitive example from a former insider: virtually no one is storing secrets for financial software on an HSM. Almost no one does it, period.

> Secrets have to be somewhere. Environment variables are not a good place for them, but if you can't trust your filesystem to be secure, you're already screwed. There's nowhere else to go. The only remaining place is memory, and it's the same story.

There’s a whole class of security vulnerabilities that let you read from arbitrary files on the filesystem. So if you end up having one of those vulnerabilities, and your secret is in a file, then the vulnerability lets the attacker read the secret. And on Linux, if you have such a vulnerability, you can use it to read /proc/PID/environ and get the environment variables, hence getting secrets in environment variables too.
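To make that concrete, here is a minimal Python sketch (Linux only; `read_environ` is a hypothetical helper, not part of any library) showing that /proc/PID/environ parses like an ordinary file:

```python
def read_environ(pid="self"):
    """Parse /proc/<pid>/environ: NUL-separated KEY=VALUE pairs."""
    # Unlike /proc/<pid>/mem, this reads like an ordinary file,
    # from offset 0, with no seeking required.
    with open(f"/proc/{pid}/environ", "rb") as f:
        raw = f.read()
    return {
        k.decode(errors="replace"): v.decode(errors="replace")
        for k, v in (e.split(b"=", 1) for e in raw.split(b"\0") if b"=" in e)
    }
```

Any arbitrary-file-read primitive that can open `/proc/<pid>/environ` gets the same result.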

However, the same isn’t necessarily true for memory. /proc/PID/mem isn’t an ordinary file, and naive approaches to reading it fail. You normally read a file starting at position 0; reading /proc/PID/mem requires first seeking to a mapped address (which you can get from /proc/PID/maps); if you just open the file and start reading it from the start, you’ll be trying to read the unmapped zero page, and you’ll get an IO error. Many (I suspect the majority) of arbitrary-file read vulnerabilities only let you read from the start of the file and won’t let you seek past the initial unreadable portion, so they won’t let you read /proc/PID/mem.
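A rough Python illustration of the difference (Linux only; the helper names here are made up): a plain read from offset 0 fails, while seeking to an address taken from /proc/self/maps succeeds:

```python
import os

def first_readable_mapping():
    # /proc/self/maps lines look like: "start-end perms offset dev inode path"
    with open("/proc/self/maps") as maps:
        for line in maps:
            addr_range, perms = line.split()[:2]
            if perms.startswith("r"):
                return int(addr_range.split("-")[0], 16)
    raise RuntimeError("no readable mapping found")

def seek_then_read(length=16):
    # Reading /proc/self/mem only works after seeking to a mapped address.
    fd = os.open("/proc/self/mem", os.O_RDONLY)
    try:
        os.lseek(fd, first_readable_mapping(), os.SEEK_SET)
        return os.read(fd, length)
    finally:
        os.close(fd)

def naive_read(length=16):
    # Reading from offset 0 hits the unmapped zero page -> OSError (EIO).
    with open("/proc/self/mem", "rb") as f:
        return f.read(length)
```

A file-read vulnerability that can only stream a file from the start behaves like `naive_read` and gets an IO error.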

Additionally, there are hardening features to lock down access to /proc/PID/mem, such as kernel.yama.ptrace_scope or prctl(PR_SET_DUMPABLE). That kind of hardening can interfere with debugging, but one option is to leave it on most of the time and only temporarily disable it when you have an issue to diagnose.

Also, memfd_secret supports allocating extra-special memory for secret storage, which the kernel can’t read, so it shouldn’t be accessible via /proc/PID/mem

>There’s a whole class of security vulnerabilities that let you read from arbitrary files on the filesystem.

This is maybe putting the cart before the horse a little bit. The reason there's a class of vulnerabilities that allow arbitrary read is that we've, as an industry, decided to classify file access as a vulnerability. It's not that file access is somehow materially different from, or easier than, any other security issue; it's just that we set it as one of the goals of an attack.

If you decide that an attack is successful when it reads a file, then you'll obviously get a clustering of successful attacks that read files.

It isn’t just about preventing vulnerabilities, it is also about limiting the damage they can cause. Suppose you have a web app, with customer data in a remote relational database. An arbitrary file read vulnerability, in itself, might not actually help an attacker in stealing your customer data, since it is in a remote DB not the web app’s filesystem. But if that vulnerability enables them to exfiltrate database credentials, that gets them one step closer to actually stealing your customer data, which can be an enormously costly legal and PR headache. (By themselves, those credentials won’t be that useful, since hopefully your firewall will block direct public access to the DB - but a lot of successful attacks involve chaining multiple vulnerabilities/weaknesses - e.g. they compromise some employee laptop that lets them talk to the DB but they don’t have credentials, and now they have the credentials too.)

Whereas, if all they manage to steal using a file read vulnerability is the code (possibly even just the binaries if you are using a compiled language like Go or Java) of your web app - that’s not good either, but it is a lot smaller headache. You’d much rather be having to tell the CEO “attackers stole the binaries of our app” than “attackers stole all the PII of our customers”. Both are bad but the second is a lot worse. The first kind of attack you possibly won’t be obliged to disclose; the second you legally will be.

Did they say why?

It strikes me that those envs might be particularly prone to corporate inertia, e.g. "the current way passed security audit, don't change it or we need to requalify"

It's possibly also harder to rely on a HSM when your software is in a container? ( I'm guessing here tho )

It's a useless, unprovable generalisation from a supposedly omniscient "insider". I know of at least one finance organisation using HSM as you'd expect.

And I know non-finance orgs using HSM to protect encryption keys used to encrypt PII

Yeah, you don't have to trust me, there are plenty of software engineers working in finance who can tell you the same. Or they're using outdated ciphers, or they're storing information in plaintext or in logs, or they have no security playbooks.

It's irrelevant to me whether you believe it. It's happening today, with some of the top financial institutions and their subsidiaries, and getting those teams to do something about it involves the same bureaucratic nonsense as anywhere else.

I used to store secrets in the FS, and was told the best practice was env vars. Now it's not env vars. What is it then?

There isn't a right answer. It's just that people don't understand that one doesn't provide any meaningful benefit over the other (in the context of storing secrets), but the security "experts" are always eager to claim "X is insecure, do Y instead, it's best practice btw"

Unless I'm missing something, there are three scenarios where this comes up:

1. You are using a .env file to store secrets that will then be passed to the program through env vars. There's literally no difference in this case, you end up storing secrets in the FS anyway.

2. You are manually setting an env var with the secret when launching a program, e.g. SECRET=foo ./bar. The secret can still be easily obtained by inspecting /proc/PID/environ. It can't be read by other users, but neither can the files in your user's directory (.env/secrets.json/whatever)

3. A program obtains the secret via some other means (network, user input, etc). You can still access /proc/PID/mem and extract the secret from process memory.

So I'm assuming that what people really want is passing the secret to a program and having that secret not be readable by anything other than that program. The proper way to do this is using some OS-provided mechanism, like memfd_secret in Linux. The program can ask for the secret on startup via stdin, then store that secret in the special memory region designed for storing secrets.

The main security benefit of byzantine paranoid security best practices is that they massively hinder productivity. If you can't make a system, the system will have no vulnerabilities.

lmao so true

You don't even need to roll your own solution with memfd. Linux already has keyrings[1] as a kernel concept.

[1]: https://man7.org/linux/man-pages/man7/keyrings.7.html

I’d wager that–in the context of web apps–over time there have been many more (or more readily exploitable) arbitrary file read/directory traversal/file inclusion vulnerabilities than remote code execution ones, so the preference for having secrets in memory as env vars may stem from that. You’re also probably not reading from /proc/self/mem without code execution either.

Well, if there's an arbitrary file read, shouldn't the attacker be able to just read /proc/PID/environ anyway? It behaves like a regular file in that regard, unlike /proc/PID/mem, which requires seek operations to read data.

Also for 3., you just added another supply chain issue and another thing to maintain and keep up to date, which is probably worse

Well, I’d be the first to admit that we have a gap here, the solution that I personally would consider ideal doesn’t seem to actually exist, at least on the server-side.

If we are running under something like K8S or Docker, then I think there should be some component that runs on the host, that provides access to secrets over a Unix domain socket, and then we mount that socket into each container. (The reason I say a Unix domain socket, is so the component can use SCM_CREDENTIALS/SO_PEERCRED/etc to authenticate the containers). I’d also suggest not using HTTP, to reduce the potential impact of any SSRF vulnerabilities (although maybe that’s less of a risk given many HTTP clients don’t work with Unix domain sockets, or at least not without special config). (Can we pass memfd_secret using SCM_RIGHTS?)
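A sketch of the authentication step under those assumptions (Linux only; `peer_credentials` and the broker logic in the comments are hypothetical, not an existing API):

```python
import os
import socket
import struct

def peer_credentials(conn: socket.socket):
    """Return (pid, uid, gid) of the peer on a connected AF_UNIX socket.

    Uses Linux's SO_PEERCRED, which the kernel fills in and the peer
    cannot forge (struct ucred: three 32-bit ints).
    """
    fmt = "3i"
    creds = conn.getsockopt(
        socket.SOL_SOCKET, socket.SO_PEERCRED, struct.calcsize(fmt)
    )
    return struct.unpack(fmt, creds)

# A secrets broker would accept() on the mounted socket, then decide
# per-connection whether to hand out a secret, e.g.:
#   pid, uid, gid = peer_credentials(client)
#   if uid not in ALLOWED_UIDS: refuse   # ALLOWED_UIDS is hypothetical
```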

For desktop and native mobile, I think the best practice is to use the platform secret store (Keychain on macOS/iOS, Freedesktop Secret Service for desktop Linux, Android Keystore, Windows Credential Manager API, etc). But for server-side apps, those APIs generally aren’t available (Windows excepted). Server-side Linux often lacks desktop Linux components such as Freedesktop APIs (and even when they’re present, they aren’t the best fit for server-side use cases)

The problem with .env files is that you are doing both.

You have a .env file that is in the same directory as your code and you just copy it to env vars at some point. This does not even meet the security principles that dotenv is supposed to implement!

I think people are blindly following the advice "put secrets in env vars" without understanding that the point of it is to keep secrets outside files your app can read - because a vulnerability or misconfiguration that lets people read those files would leak the secrets.

What you can do is have environment vars set outside your code, preferably by another user. You do it in your init system or process supervisor. Someone mentioned passing them in from outside a docker container in another comment.

> people are blindly following the advice "put secrets in env vars" without understanding that the point of it is to keep secrets outside files your app can read - because a vulnerability or misconfiguration that lets people read those files would leak the secrets.

The problem with this is that, on Linux, the environment is a file, /proc/self/environ

And yes, as has been mentioned in some other comments, the process memory is also a file /proc/self/mem - but it is a special file that can only be read using special procedures, whereas /proc/self/environ behaves much more like a normal file, so a lot of vulnerabilities that enable reading /proc/self/environ wouldn’t enable reading /proc/self/mem

Technically one workaround on Linux is to not mount /proc (or at least not in your app’s container) - but doing that breaks a lot of things

Yes... surely the point of a .env file is as a shortcut to provide env vars for local development

.env files shouldn't be used in production; env vars should be injected directly

I agree that is what should happen. That is not what dotenv does though: https://www.npmjs.com/package/dotenv#-documentation

I think dotenv would be fine as long as it doesn't raise exceptions if no .env file is found, i.e. if it works just as a helper for local dev and as a no-op for production

I notice the dotenv docs are recommending dotenvx now in places: https://www.npmjs.com/package/dotenv#-manage-multiple-enviro...

I have seen many .env files in production. Some junior devs told me it was what they had been taught to do.

Not using env vars is security through obscurity. If someone has ssh access to your container, it doesn't matter whether the secrets are on a file or on memory. The attacker has as much access as the app itself.

On the other hand, using .env files can leak in different ways, like a developer mistakenly committing secrets to git or making the file available to the world wide web.

The filesystem is fine, but we really shouldn't be using .env files that get loaded into environment variables due to them leaking in a few different ways.

Pull them from a remote host over ssh? :D

Disagree here. Basically if you use docker (which for most of the stuff you mention, you should), environment variables are pretty much how you configure your docker containers, and a lot of server software packaged up as docker containers expects to be configured this way.

Building a lot of assumptions into your containers about where and how they are being deployed kind of defeats the point of using containers. You should inject configuration, including secrets, from the outside. The right time to access secret stores is just before you start the container as part of the deploy process or vm startup in cloud environments. And then you use environment variables to pass the information onto the container.

Of course that does make some assumptions about the environment where you run your containers not being compromised. But then if that assumption breaks you are in big trouble anyway.

Of course this tool is designed for developer machines and for that it seems useful. But I hope to never find this in a production environment.

> The right time to access secret stores is just before you start the container as part of the deploy process or vm startup in cloud environments.

So how do you rotate secrets without bouncing app servers..?!

You don’t - you bounce the servers because it’s simpler. If your secret is static for the lifetime of a deployment - just redeploy to get a new value.

I like secret stores but only when the value of something regularly changes in a way that redeploying becomes unacceptable.

Bouncing things is often unacceptably expensive - caches, consensus and the cost of data redistribution etc are all good reasons to have hot configuration for secrets.

inject those variables via mount?

When you launch the docker containers you can pass in process env vars or do it via file. Nowadays people do this via kubernetes config yamls, which passes env to docker. Or rather they used to. Most people now use Helm charts which pass in the env to k8 yaml which pass them to docker. But then they feel it's not secure enough... so a lot of people have the env split halfway between github actions secrets and amazon secrets. The yaml for your github action config sends the aws secret uri to the runner, which runs cdk which grabs the aws secret, and passes that to helm which makes k8 yamls, which passes the env to docker, which passes it to the process.

Then I killed myself and was reborn. Now I just use an env file.

Remember we are mainly talking about dev envs here. If you put the secret key in a file...where do you put the file? In a common location for all the dotenv instances? One per dotenv instance? What if people start putting it as a dotfile in the same project directory?

Secrets are nasty and there are tradeoffs in every direction.

Environment vars propagate from process to process _by design_ and generally last the entire lifetime of the process(es). They are observable from many OS tools unless you've hardened your config, and they will appear in core files etc. Secrets imply scope and lifetime - so env variables feel very at odds. Conversely, env variables are nearly perfect for config for the same reasons that they are concerning for secrets.

TL;DR: in low-stakes environments, the fact that secrets are a special type of config means you will see them being used with env vars, which are great for most configs but poor for secrets. And frankly, if you can stomach the risks, it is not that bad.

Storing secrets on the filesystem - you immediately need to answer where on the filesystem and how to restrict access (and whether your rules are being followed). Is your media encrypted at rest? Do you have SELinux configured? Are you sure the secrets are cleaned up after you no longer need them? Retrieving secrets or elevated perms via sockets / local IPC has very similar problems (but perhaps at this point you're compartmentalizing all the secrets into a centralized, local point).

A step beyond this are secrets that are locked behind cloud key management APIs or systems like spiffe/spire. At this point you still have to tackle workload identity, which is also a very nuanced problem.

With secrets, every solution has challenges, and the only clear answer is to have a threat model and design an architecture and appropriate mitigations that let you feel comfortable, while acknowledging the cost, user, developer, and operator experience balancing act.

I've started using Mise for some stuff at work. Haven't dug into it a lot yet, but it looks really promising.


It handles task running (wipe local test db, run linting scripts, etc), environment variables and 'virtual environments', as well as replacing stuff like asdf, nvm, pyenv and rbenv.

Still somewhat early days, tasks are experimental. But it looks very promising and the stuff I've tried so far (tasks) works really well.

I second mise, it's been a nice replacement for direnv, asdf and makefiles for my use case. Much faster, still compatible with the old configuration files when needed and all in one tool for the new projects. Awesome.

Yes, also definitely a big vote for Mise.

I’ve switched recently from asdf for managing language & tool versions and the ergonomics are much nicer (e.g. one command vs having to manually install plugins, more logical commands). It’s also noticeably faster.

Regarding the env vars features, a couple of relevant Mise issues around people trying to integrate env var secrets using SOPS, 1Password, etc.

- https://github.com/jdx/mise/issues/1617

- https://github.com/jdx/mise/issues/1359

Seconded. I changed from pyenv to mise because pyenv was slowing down my shell startup (probably the shims, which mise doesn't use by default), and I'm slowly using mise for more stuff. Right now, I'm using it to auto-turn on virtual environments and add project scripts to the PATH, and it works very well.

I haven't felt the need to use it as a task runner yet, but that's probably because I'm used to having a bunch of shell and Python scripts in a `scripts` folder.

Add another vote for mise. For me it replaced asdf, pyenv, poetry, and direnv. Biggest thing for me is it _just works_:tm:.

I use asdf at work and mise at home. I only use it for runtime version management and it’s great!

The only reason I use .env is because it’s dead simple and very obvious as to how it works to anyone.

If now someone has to read docs to figure out how to configure the app, I’d rather have them read docs for some other safer and more powerful configuration scheme.

I've dealt with problems from python-dotenv and very much prefer it as a command rather than as a library.

As an example, once I changed a .env file and unit tests started failing. Digging deeper into it, lots of code was checking for .env to load its configuration, and would break without it. I'd prefer this not to happen, as our tests were executing based on outside, non-version-controlled configuration.

After removing dotenv as a library and using it only as a command, we were able to separate configuration and logic, and not have .env files affecting our unit tests - we simply ran the application with dotenv command, and the unit tests without.

With leaking secrets being such a big concern, it seems wise to require that secrets be encrypted to use dotenvx. That is, it will only work with encrypted secrets. As others have commented, this doesn't eliminate the risk entirely, but I think having a tool that doesn't support unencrypted secrets at all, although a bit less convenient, is a win.

Not all env vars are secrets. About half is just regular config. If mandatory how would it know what's a secret?

I'm not sure what the "it" you're referring to is, but if something is not a secret, you could either encrypt it anyway or use an alternate mechanism to provide the value.

Encrypted with what key? Do you just mean obscured?

No, I mean encrypted. I'm not sure what you mean by "obscured", but if you just mean obfuscated in some easily recoverable way without a key, then no. If you read the original post, it describes the mechanism that dotenvx already has in place for encryption.

The utility has a means of encrypting them with public key cryptography so that the plaintext is never in your development directory. GP thinks this should be made mandatory.

point is: where and how do you get this key when in prod?

Part of my point is that if you care about security, you need to be thinking about these things. Forcing encryption then also forces you to think about how to provide the key. Of course, it's possible to do this in a very insecure way that defeats the purpose of encryption in the first place, but I think it moves things in the right direction.

The "securest" way is, you start your app and the app awaits the input of the key, it is only in the RAM.

But in reality, nobody will do that.

There are a lot of ways of doing this, sealed secrets is one, mozilla sops is another.

Seems pretty similar to sops[0], but without the encrypted-by-default feature that makes sops feel significantly safer for secret management.

Sops also integrates easily with AWS and other existing key management solutions, so that you can use your existing IAM controls on keys.

I mentioned in another comment, but I've been using it over five years at two jobs and have found it to be great.

[0]: https://github.com/getsops/sops

I too have been using sops for years, and agree -- dotenvx encryption seems very similar to sops.

I'd prefer an integration between dotenvx and sops where dotenvx handles the UX of public env and injection, while leveraging sops for secret management and retrieval. Additionally, being able to have multiple keys for different actors is important.

Having a single `.env.keys` file feels risky and error prone. dotenvx encourages adding your various env files, such as `.env.production`, to vcs, and you're one simple mistake away from committing your keyfile and having a bad day.

If sops is not to be integrated, dotenvx could take some inspiration where the main key is encrypted in the secrets file itself, and you can define multiple age key recipients, each of which can then decrypt the main key.

I've been using sops in production since at least 2017, plus it has excellent compatibility with containerized infra tools like helm and other infra tools like terraform (both technically via plugins, but helm-secrets and carlpett's terraform plugin have been around for ages and are widely used).

I don't think encryption is a good idea, and the reason is forming bad habits. Right now developers have a very strong and unambiguous habit: never put .env files in version control (except maybe for .example.env). However, with this, you'll get accustomed to committing .env in _some_ projects, so you'll easily slip and commit it in another project where the vars are not encrypted.

Encrypting secrets and committing them seems very convenient but I'm paranoid about these sorts of things. Can anyone tell me why this would be a bad idea?

One reason I can think of is that normally with secrets I actually don't keep any copies of them. I just set them in whatever secret manager my cloud environment uses and never touch them again unless I need to rotate them. Meaning there is no way to accidentally expose them other than by the secret vault being hacked or my environment being hacked.

With this approach if someone gets access to the encryption key all secrets are exposed.

The biggest issue with storing secrets in version control with the code is that past secrets are never relevant after they have been rotated. This makes rollbacks risky. Consider:

  1. Create secret v1
  2. Code v1
  3. Deploy
  4. Secret v2 (rotation)
  5. Code v2
  6. Deploy
  7. Oops, need to roll back to v1 (from step 2)
  8. Outage, because the secrets in step 2 are not the secrets from step 4

This seems trivially solved by never mingling secret rotations with general deployments?

Inserting deployments between steps 1 and 2 and between steps 4 and 5 still results in an outage when the deploy for code v2 needs to be rolled back to the older commit, not the commit before. Arguing that rollbacks means you only ever need to rollback to commit #n-1 is like arguing you should never retain more than one backup; there's always a chance that you need to rollback even further to fix things.

Using encrypted secrets provides a way better developer experience than using a vault.

Typically, developers can’t change production secrets in vaults and need to follow some other protocols.

Encrypted secrets mean you deploy everything along side the secrets.

The developer experience is great, but the biggest issues I have faced while using Kubeseal were

1. Developers HAVE the secret in order to encrypt it. This is not ideal, as they can then use these secrets in production or leak them

2. Changing the secret encryption key means re-encrypting everything.

3. People don’t understand the concept.

I disagree. Using vaults isn’t that bad. And I’d also like developers to never actually know the secrets.

It’s a learning curve, but I think it’s best to just bite the bullet and use a vault rather than trusting developers to know and manage secrets properly.

Another benefit is that debugging secret changes is a lot easier. We've had a couple of cases where someone changes the secret in a vault and that causes problems and no one can tell what changed between two deploys

I've used git-crypt and sealed-secrets and the problem is always backing up the master key. sealed-secrets rotates it every so often so you need to go find it and copy it to 1password or whatever you use as your root of trust. (We used a calendar invite for this.)

git-crypt is easy, the master key doesn't rotate, so don't leak it. (Secret encryption key rotation is kind of useless; it's nice that if you leak an old key newer secrets aren't leaked, but it depends on your underlying secret rotation policy as to whether or not that saves you any work. I have tended to do them in bulk in the past.)

On my last project we did disaster recovery exercises every 6 months, sometimes with the master key intentionally lost, and it wasn't that big of a deal. Restoring your infra-as-code involves creating the secret encryption service manually, though, which is kind of a pain, but not like days of downtime pain or anything. Of course, if the secrets encrypted your database or something like that, then losing the master key equals losing all your data. Hopefully your database backup has some solution for that problem.

Can a kind soul point me to some documentation on how to put .env in a vault correctly? Possibly open-source solutions?

If the vault is password protected, aren't you just adding one more indirection and nothing more? How is that helpful, since now I have to write the vault password in clear-text somewhere such that my application can read the env file from the vault?

A lot of modern cloud deployments read from a secret management system or vault at deployment time, and the secrets are made accessible to the application through various indirect methods so they cannot be accessed later on (i.e. if someone were to gain access to a running Kubernetes container).

At no point does the application have access to the vault itself, and access to read the vault is guarded by IAM role permissions.

Oh well, I really appreciate you taking the time to explain it. But honestly, I didn't understand a word. I recognize it is my lack of knowledge.

I hope someone can give me an ELI5.

Something else pulls from the vault and exposes it as a regular env var to your process. The .env file is a workaround to get this running locally! For open source, the simplest to explain is using a k8s secret mounted to a pod — pod identity allows access to the secret via rbac, which cannot be faked. Only that pod has access to that secret. But your process, running on the pod, sees it as an env var.
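A minimal sketch of that pattern, with hypothetical names throughout (in practice the Secret is created out-of-band, not committed alongside the app):

```yaml
# All names and values here are illustrative.
apiVersion: v1
kind: Secret
metadata:
  name: app-secrets
stringData:
  DB_PASSWORD: change-me   # created out-of-band, never committed to vcs
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: registry.example.com/app:latest
      envFrom:
        - secretRef:
            name: app-secrets   # surfaced to the process as env vars
```

RBAC controls which service accounts can read the Secret; the process just sees `DB_PASSWORD` in its environment.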

That doesn't cover what the GP was talking about with

> (i.e. if someone were to gain access to a running Kubernetes container)

right? Since those would still be secrets available in the env.

I get that if someone has access to read your env vars, it's a foregone conclusion already (about how compromised you are).

However, IIUC, part of the point of reading secrets in memory (like with a Secrets Manager) is to eliminate having to keep secrets around as env vars/secret files in the runtime?

If they can snoop on env vars of a running pod, they can snoop on the process. A k8s secret could be a file in the pod or an env var in the process, but neither is a persistent file distributed to developers

You were on the right path when you called a vault “one more layer of redirection.” A vault on its own won’t dramatically improve your security posture. Neither will a .env file.

Instead, they both beat the alternative (which is writing secrets and environmental config directly in the source code). And they’re both part of defending in depth.

Defense in depth is hard to explain to a five year old, so think of candy. You’re five and you’re obsessed with candy. If I don’t want you to eat pounds of it every single day, I’ll likely do different things to stop you.

1.) I’ll give you a reasonable amount of candy.

2.) I’ll explain that if you eat too much candy, you’ll face health consequences like tooth decay or childhood obesity.

3.) I’ll put the candy somewhere both out of sight and out of reach.

Software security is like that too. Instead of relying upon just one method, you’ll do a number of different things. On their own, few of them are really very useful. But when you combine them all together, you can end up with a reasonably secure system.

In the case of vaults, it’s just a slightly safer tradeoff with its own problems. We’ve already established that it’s bad to write secrets and environmental configuration info in source code. One way around that is to put secrets in a .env file but then distributing that file becomes the weak link. Maybe you Slack them around, or email them or maybe you write all the secrets on a whiteboard in your office? A vault has a lot of flaws, but it’s better than writing them on a whiteboard. In some threat models, it’s better than Slack or email.

It adds depth but it’s far from perfect.

During deployment, the pipeline connects to the Vault, dumps relevant secrets in a .env file that it pushes to production systems that the application then reads from.

A script fetches values from a database, then sticks them in a file. This script runs before your application starts, so the file will be there waiting.

Add your actual dotenv to .gitignore. Use bfg to make sure that no traces of your dotenv are in your commit history. Use the detect-secrets client-side commit hook to confirm this and to monitor for secrets leakage. Use sops to encrypt your dotenv into some other file that is tracked by git. Sops can integrate with secrets management solutions (Vault, AWS KMS, etc.). Done.

This is mildly complicated, but the alternative is storing config in a configuration server somewhere, which comes with its own can of worms.

I want an option to manage all env in a single file using a TOML-like format like this.

It makes it easier to update all env at once, compare, and share. It's not much help, but it helps me avoid a few annoyances.

On an unrelated note, I always find it a real headache to keep the naming convention of the environments throughout the project. It always ends up like a mixed bag:

  * Juggling production/prod, staging/stg, and develop/dev,
  * Inconsistent placement of env, e.g. prod-myproject or myproject-stg,
  * Skipping the env name sometimes, e.g. myproject-bucket for the dev S3 bucket but prod-myproject-bucket for prod (though it's okay to omit the env name for user-facing places like URLs),
  * Inconsistent resource sharing between envs, e.g. same S3 bucket for local and dev but different DB, or same Kubernetes cluster with different labels for dev/stg but different cluster without a label for prod.
These inconsistencies often result from quick decisions made without much thought or out of necessity, and everyone is too scared to fix them anyway. But it bothers me a lot and sometimes causes serious bugs in production.

Fix: format

You can do that today with sops if you'd like!

Env vars over-share and files depend on local permissions. We should have a capabilities-like way to send secrets between processes. E.g., decrypt and expose on a Unix socket with a sha filename that can only be read from once, and then gets torn down. Share the filename; the target can read it, and immediately afterward the secret is at-rest encrypted again. Encryption based on a config containing a whitelist of ssh public keys and what they can access, sort of like age.

Yes and now we have to manage the identities of processes to ensure they are entitled to given capabilities.

Also, any system as described needs a security audit and analysis to truly understand its strengths and weaknesses (or glaring compromises).

Alternatively: the weaknesses and mitigations of secrets via environment vars are well understood.

On my phone so I can't double-check, but can't you get this by adding "export" in front of every line in your env file and then sourcing it before running the command?

If you don't want it to stay after execution, I believe you can use a subshell:

    (source .env; my command)
I’m sure there is a fairly straightforward way to encrypt and decrypt a local file

If in bash, you can use the 'allexport' option and source the .env without having to add 'export' in front of every line:

    set -o allexport
    . .env
    set +o allexport

TIL thanks!

Being able to source the file is the main benefit of using a .env file, IMO. Otherwise, you can just use any format for config management.

Now take that a step further with Environment Modules and you can source them from any shell or scripting language: https://modules.readthedocs.io/ (without creating a subinterpreter)

dotenv has features that include conditional selection and ordered merging of env files, which are configurable by dotenv's runtime and buildtime APIs.

At least merging can happen by sourcing .env files in order (last to be sourced has priority)

But I do agree that at some point you want a tool to orchestrate these things and guide your usage so you don't have to reinvent the same lines of code all the time

Importing a set of libraries and dependencies to handle reading a plain text file poses more risks than just leaving the file unencrypted.

You don't need to encrypt your keys. With what keys are you going to do so? Will you encrypt those too?

if someone is in your server you are pwned anyways.

It's ok if you identify yourself as a cybersecurity dude and hold a cybersecurity role and you need to justify your livelihood.

But do it in a way where you don't bother people. It's ok if you bother devs, but then you go on and bother users with 4FA, 5 rule passwords, systems that can't answer subpoenas because you have encrypted your sense of self.

When you are improving security at the expense of every other variable, that's annoying, but when you keep "improving security" at the expense even of security, is the point where people will start ignoring and hiding shit from you

completely agree. convenience is terribly undervalued in security, judging by the reactions in this thread.

if the "secure" methods aren't being used because of 4FA and 5-rule passwords and 30-day expiries (don't get me started on those), then the system is insecure by default, because devs will find more convenient, and thereby less secure, ways.

it's like storing passwords: I can't tell you how many times I've seen people use the same passwords everywhere because the rules are too restrictive, or just write them down somewhere public because it's too much work to get into the password manager and properly add them.

I'd be willing to put big money down that a LARGE chunk of passwords for apps that require at least 1 number or symbol end in `!` or `1`.

Luckily, I do think passkeys are a step in the right direction, with good convenience and overall UX.

> if someone is in your server you are pwned anyways.

This is false and also a symptom of an all-or-nothing approach to cybersecurity, which isn't feasible in the real world.

Sorry, but it is largely all-or-nothing in this case: if someone has access to the user the app runs as, you are screwed. It doesn't matter whether you use env vars or files.

I'm assuming the parent intended to say "if someone gained access to your user you are pwned anyways", which is true, unless you actually go to the effort of storing the secrets securely using OS-provided mechanisms. Env vars are not that.

> which isn't feasible in the real world

Well of course it isn't, how would you justify those sweet cybersecurity experts' paychecks otherwise? Not saying cybersecurity isn't important, but there's way too much snake oil in the industry nowadays (always has been?).

I suppose they could have read-only access to the filesystem and read the API keys, like through an HTTP server

We’ve been pushing for committing encrypted secrets for many years now, and have written an open source spec and implementation in multiple languages: https://github.com/neosmart/securestore-rs

I got so excited, but it doesn't seem to support multiple keys and seems overly eager to encourage people to leave a valuable key lying around on disk.

So if a single dev machine is compromised, all of your prod secrets are exposed?

I wish this were closer to sops with support for gpg and or ssh keys. Because sops is a great idea locked in a questionable codebase.

Happy to discuss a proposal to add asymmetric key support to the project in the GitHub issue tracker. Although I'm not sure how the security changes with an asymmetric key, as either way the worst case scenario is the same?

Note that you don't have to leave the key "lying around" as you can secure it the same way you would an asymmetric key. And it certainly beats leaving the plaintext secrets themselves lying around in a .env file or similar.


I see you were saying "dev machine" exposes "prod secrets" but that's not the case. The protocol is designed so you would have secrets.json and secrets.prod.json, encrypted with different keys and (necessarily) managed separately but with the same tools and api. Dev machines being compromised compromises dev keys, not prod keys.

Read the last section in the README on GitHub for more on the dev/prod split.

Asymmetric keys mean you can have per-dev or per-team keys and allow one team to rotate them and re-sign them for all other consumers. I don't know how you'd do that with symmetric keys. This is an important feature of sops, imo.

It also means I can do things like seal them to a key that is stored in KeyVault and then allow the transparent retrieval of that key at runtime on Instances that have been given an identity with access.

This means that production secrets are sealed in place and only openable by effectively authenticated workloads.

And if you use sops-nix, this becomes a "setup once and never think about it ever again, ever" kind of operation.

There's also sops: https://github.com/getsops/sops

I've used it at two jobs now over about 5 years and have had zero issues.

Another commenter mentioned it, I'm looking at it now.

SecureStore was launched in 2017 (initial version was .NET only): https://neosmart.net/blog/securestore-a-net-secrets-manager/

I don’t get it. There’s a symmetric secrets.key that anyone could get hold of and use to overwrite secrets? No thanks.

And where do I keep the key? In a secret store?

You keep the key wherever you want to keep the key, just don't commit it and don't distribute it. Put it in on a YubiKey for your devs, upload it out-of-band securely to prod.

Whether it's a symmetric key or an asymmetric key, you have the same problem. Someone overriding your secrets is definitely not high on the list of concerns, and if they're committed to git then they can never be truly overwritten.

Thanks, but I would rather go with the imperfect setup that I understand than an allegedly perfect setup with dozens of third party dependencies that I don't understand.

Doubly the case now that .env files are natively supported by node.

dotenv has zero npm dependencies. dotenvx has 21, including a few I have never heard of. Is this really more secure?

This is quite a common remark when it comes to Javascript. I rarely see the same being made about Rust libs, which also rely heavily on external dependencies.

Justifiably! Supply-chain attacks have occurred via npm, and have been widely reported. A lack of oversight and lack of standard libraries are often cited as the cause.

I don't know if it's a problem for Rust (or other platforms like Python, .NET, or Java afaik).

As someone who primarily writes TypeScript to run in browsers and on node.js, this kind of threat requires an extra level of vigilance, and often nudges me toward writing my own things rather than importing them.

Recent versions of node support env files directly, so you don't even need one dependency anymore. https://nodejs.org/docs/latest-v20.x/api/cli.html#--env-file...

This is similar to how Rails handles secrets - https://edgeguides.rubyonrails.org/security.html#environment...

In Rails, the entire file is encrypted, unlike here where only the secrets are

Dotenvx used to encrypt by file. It’s a very recent design decision to encrypt by secret.

I really like how its implemented in Rails

How do reviewers of a PR know when/which environment variables changed?

Something I have done for secrets is use a syntax in environment variables to tell the process to go to a key vault for the secret.

So we can have


when testing locally, but


in production. Env vars are processed simply by running a regex with a callback that fetches secrets from vaults. This is quite flexible and has the advantage of being able to inject the secrets in the same place as other configuration, without actually having the secrets in environment variables or git etc (even encrypted)
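A minimal sketch of that regex-with-callback pass; the `vault://` syntax and the lookup function are stand-ins, since the commenter's actual syntax isn't shown:

```javascript
// Scan env values for a vault-reference syntax and resolve each one via a
// callback; everything else passes through untouched.
function resolveEnv(env, lookupSecret) {
  const resolved = {};
  for (const [key, value] of Object.entries(env)) {
    resolved[key] = value.replace(/vault:\/\/([\w\/.-]+)/g, (_, p) => lookupSecret(p));
  }
  return resolved;
}

// Example with a stubbed vault client in place of a real one:
const resolvedVars = resolveEnv(
  { DB_URL: "postgres://app:vault://prod/db-password@db:5432/app", LOG_LEVEL: "debug" },
  (p) => ({ "prod/db-password": "s3cr3t" }[p])
);
// resolvedVars.DB_URL -> "postgres://app:s3cr3t@db:5432/app"
// resolvedVars.LOG_LEVEL is unchanged
```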

Hah! I've done something similar. In my case, I could have something like this in my .env file:

SOME_CONFIG_OPTION = @AWS::some_config_option

And I've written a config library that knows that when a config value starts with `@AWS::` it needs to resolve the config option to an actual value by reaching out to AWS's Secrets Manager service and looking it up there. It receives the value and caches it locally, so that subsequent references to this configuration option don't require an additional call out to the cloud.

It works surprisingly well.
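A sketch of that prefix-plus-cache resolution, with a stubbed fetch standing in for a real Secrets Manager call (the prefix and names follow the comment; the stub is invented):

```javascript
const cache = new Map();

function resolveOption(value, fetchFromSecretsManager) {
  if (!value.startsWith("@AWS::")) return value; // plain config value, no lookup
  const name = value.slice("@AWS::".length);
  // one remote call per option; later references hit the cache
  if (!cache.has(name)) cache.set(name, fetchFromSecretsManager(name));
  return cache.get(name);
}

// Count calls to prove the caching works:
let calls = 0;
const fetchStub = (name) => { calls++; return `value-of-${name}`; };
resolveOption("@AWS::some_config_option", fetchStub);
resolveOption("@AWS::some_config_option", fetchStub);
// calls === 1: the second lookup came from the cache
```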

There's a typo in the article, if the author reads this: > But I think we have a solution to all three today - with dotenvx. In reverse problem order

The URL in `dotenvx` points to https://gitub.com/dotenvx/dotenvx (gitub without the h)

Hey this link is malicious. It redirects to location: http://ww99.gitub.com/dotenvx/dotenvx and then some site blocked by ublock

I’ve been using dotenv-cli for a long time now. I appreciate the encryption, but I will keep loading secrets from a vault instead.

I don't really understand why this is a new project. Seems it would have been pretty simple to add these in a backwards compatible way.

It would only break in cases where people's values specifically started with "encrypted:"

Even without backwards compatibility, why not just a breaking change in the same project? Don't get it either.

I've never used it (knowingly), but if I did and wanted to use this new version/project, even the CLI name change to append 'x' would be annoying (I'd probably alias/symlink it).

It was added in a backwards compatible way, but the author decided to make a breaking change with this release.

The previous (IMHO superior) version was generating a .env.vault and a .env.keys from a .env file. Leaving the .env plain text and .env.vault encrypted.

Maybe the simple idea is to start a company. Nothing wrong with that. It's definitely what I would do. First envx then "envx vault"

Whether good or bad, it's marketing

I always used foreman [0] which I found to be superior to dotenv in every way. Even superior to this new dotenvx

[0] https://github.com/ddollar/foreman

Not to be confused with the larger foreman[0] also written in ruby.

[0] https://theforeman.org

Yeah, this is a really weird post for me. Before dotenv existed I used Foreman. It worked basically exactly how "dotenvx cross-platform" works (as a global command line). I switched to node-foreman because I largely switched to working on node projects and didn't want devs to need to have ruby installed. Then, at some point I switched to dotenv, and I don't even remember why.

Your "a single gitignore append" command overwrites existing .gitignore files. It should be `>>`

A lot of people here are saying that a vault is preferable to this, especially for secrets.

Is there a good primer on using vaults? I know how to query and insert into Azure Key Vaults, but architecting around it is unclear to me.

Things that come up for me:

- As (Azure) key vaults don't support per-secret access rights, where do I store secrets between deployments?

- Should I store connection strings to cloud resources, or just ask the resource for the connection string at deployment time (for Azure, a cloud function pretty much needs a connection string for most basic things. They say they are moving away from this but ...)

- A security warning is sent if a key is accessed more than x times per hour. Does that mean I should pull the key from the vault at deployment? Cache it after the first call during runtime?

- Most of our 3rd party vendors gives us 1 and only 1 key. How do I manage that key between development, production and several developers? Right now we mostly forward the e-mail from the vendor with the key ...

I'm not comfortable that this tool is able to make HTTP requests, IMO it ought to be completely offline and not perform network operations. I did a search for requests and found some results pushing to and pulling from a 'hub'. But I couldn't figure out what the hub refers to.

I think for some reason people don't like such simple tools; it seems more reasonable to deploy a vault service running on Kubernetes with Jenkins. In some comments, they didn't even understand what the tool was for. Dear dude, you can travel between dimensions with this tool.

Secrets in env vars in production is not too secure either, ideally you’ll move to your app pulling secrets in-process from your infrastructure at boot-up or upon use. This also gives a nice advantage of not needing to rebuild the app or container or whatever to rotate a secret.

This just moves the problem to a different step. How are you going to manage access to said secrets, especially when your application lives off premises?

Many hosting environments give you this.

For example AWS gives you multiple ways of injecting secrets as env vars into your containers when they boot up (ECS + secrets manager, EKS, etc)

This is still "env vars": easy to read from /proc/*/environ to see the decrypted secrets from a different process. Versus an in-process-only secret fetch, where you'd need to scan the memory pages of the app, which is a bit harder - especially if you keep the credentials in memory in a scrambled format so a simple scan of process memory for "secret_prefix_" doesn't find them.

If an attacker can read other processes' envs you've pretty much lost, as they're either:

1. Inside your process, which means they can see the decrypted values.

2. Root, which means they can get into your process to see the decrypted values.

I'm not sure your average dev has a threat model that assumes in-memory scrambling, let alone leaked env vars. After all, we're talking about the standard way to do it being populating a file with the decrypted secrets and just leaving it there. All the security is already kernel security.

I'm honestly not sure who dotenvx is aimed at.

- No one security-conscious is going to be cool just making the ciphertext available publicly or even internally.

- Someone scrambling in-memory secrets isn't using dotenv to begin with, is using SecretsManager and the like, and probably doesn't want to change those to now go through the filesystem. You now get less auditing because all those secrets are bundled and you now only know "they accessed the decryption key."

- And someone using dotenv for secrets doesn't have a threat-model where this meaningfully improves security.

In addition, if I'm not mistaken, child processes inherit the parent's env vars, so if your application forks or uses subcommands, you may be exposing the whole environment trove to 3rd-party scripts, no root needed. Also, most vulnerabilities that enable execution of code will happily leak the env vars, no root access or "being inside the process" thingy needed (I know, code execution is technically "inside the process", but without requiring privileged levels)

I’m advocating people use something like SecretsManager, not this thing. In-memory only > env vars > secret files on disk.

I find env vars very precarious because harmless developer debug logging, actions like sshing into a container and typing `env` etc can easily expose them.

A file on disk can be read by an attacker via a directory path traversal bug

It's much less likely for in-process-only secrets to be exposed by common mistakes/bugs

I've always wondered this. It seems like a password to get more passwords is potentially less secure (as in practice people will reuse master keys, and they might actually increase the surface area, or make it a persistent threat after the keys themselves rotate)

In AWS and other clouds, your application can use an infrastructure provider API to create a secure session to access infrastructure APIs with per-application-instance credentials that are automatically rotated and can be used only by that instance. These APIs are how the cloud providers themselves provide environment variable injection features, but if your application consumes these APIs directly you can avoid having decrypted secrets hanging out in environment variables as a middleman between your app and the cloud runtime.

Typically the application instance sessions are automatically rotated very frequently, AWS’s sessions are limited to 6 hours for example.

Dotenvx has a cloud hub from which the keys can be pulled. I imagine an eventual feature will be for the keys to expire, and you’ll have to re-authenticate with the cloud to get new credentials, just as you would with AWS.

Doppler is very nice, although it gets expensive at the middle tier, which is needed to separate by role.

Yeah but it seems like the right step — deployment, not implementation. All my secrets are managed by GitLab AFAIR, but in general I feel way better integrating secrets into each service/container (I happen to use Dokku, so it’s as simple as something like `dokku add_secret APPNAME SECRET`) than having a central system.

I am far too clumsy to trust myself to push secrets in encrypted form, personally

Would be nice to be able to configure which .env file to read via an environment variable. Why? Imagine a package.json with this line:

`start: dotenvx run -f .env.local -f .env -- node index.js`

Instead of the -f flag, which now cannot be overridden, one could invoke it with

`DOTENV=.env.staging npm run start`

You can, you just need to define the private key.

For example


Would provide it with the information it needs to read .env.production

Or just have the environment variable used after the flags are processed

This seems to encourage committing encrypted secrets, which is a bad idea. Configuration and code should be in separate repositories. Secrets should be protected elsewhere.

The correct fix for "it's too easy to accidentally commit .env files with secrets" is to refuse to run (panic/throw) if there isn't a suitable .gitignore/.dockerignore, not a specialized cryptosystem for .env files. This just creates a different problem.

I simply use an envdir outside of the project and update all my run scripts to use “envdir $CONFIG_PATH <whatever>”. Simpler and safer.

I really wish 1Password would ship an environment manager for their op CLI.

I don't think this is the best approach. I am building https://phase.dev which lets you import secrets (.env), encrypt secrets (end-to-end encrypted with keys you control), sync them to other services/platforms (think AWS, GitHub, Kubernetes), and inject them into applications at runtime (e.g., phase run node index.js).

Source: https://github.com/phasehq/console

What do you mean it's not the best approach? That's a little light on the details.

Since node v20.6, there's built-in support for --env-file=.env on load... as for local(ish) encryption and pushing secrets into source control, I don't like this at all. I'm fine using a vault or secret distribution from either the environment host (k8s) or ci/cd deployment.

I do like to keep a .env.example that you can rename to .env and adjust as desired. I tend to have defaults for running a compose stack locally that come as close to "just works" as possible.

I doubt I'd ever want to use this in practice.

My beef with it is that it's written in Javascript whereas what it does is so simple that they should just code it up in Rust or Go and be done with it, particularly when it could be packed up as a tiny binary that doesn't require you to install Node, Python, the JVM or any other runtime or library.

Wait, people use the .env file for secrets?! I never even looked into this but just assumed this was a convenience for local dev to hold a local non-secret app config? In production you'd load the env from a secrets store like what kubernetes has or aws has for ec2 instances. You shouldn't ever have those secrets on your local machine... Because they are secret.

We implemented the exact same method for config encryption a year ago or so, using pub/private key auth and the same `encrypted:` prefixes for encrypted config values.

This is a great tradeoff: easy way to share configuration, easy way to edit non-encrypted config values, reasonable security for the private values.

Doesn't solve key rotation of course, but for small teams this is a great solution.

I've recently taken a simpler approach to handling secrets in .env files. Since I use autoenv and conda venvs for everything, I persist secrets outside all projects in appropriately-named files, so including them in a .env becomes similar to `source $HOME/.secrets/work__aws_access`. Also makes them easier to manage across projects.

So with this, I have to prefix every executable with dotenvx in order to utilize the env variables?

Can’t I somehow do this in the script itself so “ruby index.rb” is enough? I know I’m only saving a couple of characters in the command line but I’m asking out of curiosity.

I always thought highly of the approach used by https://www.npmjs.com/package/@strong-config/node

Does dotenvx support secrets managers?

Can someone help enlighten me? In terms of preventing leaks, since we still need to ensure that .env.keys doesn't leak, wouldn't the effort still be the same?

The ability to use an arbitrary filename for .env is quite nice though!

The idea is that the private key is stored outside of your configuration directory (e.g. in your OS's keychain), so your dotenv file can contain encrypted key values without leaking the value itself.

How would this work with application-specific Intellij Run/Debug configurations? Would this require switching to a "Shell Script" configuration, thereby losing debugging capability etc?

Based on my skim, this doesn't really solve anything? The private key to decrypt is still stored on the machine in a similar place? All this is doing is obfuscating?

So.. it just swapped ignoring `.env` for `.env.keys`?

> An attacker needs the DOTENV_PRIVATE_KEY

And the attackers will be after this file not the .env anymore.

It looks great nonetheless, especially the cross-language feature.

Personally, I like sops for encrypting my secrets.

Wait, it's also doing command interpolation? I missed that. Where is the code for that? I thought it uses dotenv-expand for interpolation, which can't do commands. dotenv-expand is already a bit meh, because of how it resolves the variables after parsing the dotenv file. Meaning you can't have a verbatim single quoted string and also you can get a stack overflow doing this:

Anyway, I hope they don't do command interpolation on top of that (like Ruby dotenv does), because then you can inject code via environment variables (like in the Ruby version).

I recently looked into various dotenv implementations just for fun. They're all different. No unified syntax at all. A lot don't do proper parsing either, but just use some regular expressions (like this one), which means they just skip over whatever doesn't match. I started to document all the quirks I could find and wrote my own dotenv dialect just for fun. Nobody use it! Anyway, here it is: https://github.com/panzi/punktum

Direct link to the quirks of the JavaScript dotenv implementation: https://github.com/panzi/punktum?tab=readme-ov-file#javascri...

I've also tried to write a parser compatible with the JavaScript dotenv (no x) in C++: https://github.com/panzi/cpp-dotenv
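For illustration, here is the kind of regex-based "parsing" being criticized above; a deliberate caricature, not any particular implementation, showing how unmatched lines silently vanish:

```javascript
function parseDotenv(src) {
  const out = {};
  for (const line of src.split("\n")) {
    const m = line.match(/^\s*([\w.]+)\s*=\s*(.*?)\s*$/);
    if (!m) continue; // unparseable lines just disappear, no error
    let [, key, value] = m;
    // naive quote stripping: no multi-line values, no escape handling
    value = value.replace(/^(['"])(.*)\1$/, "$2");
    out[key] = value;
  }
  return out;
}

const parsed = parseDotenv('FOO=bar\nQUOTED="a b"\n!!broken line\n');
// parsed -> { FOO: "bar", QUOTED: "a b" }; the broken line was dropped silently
```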

The whole idea of using environment variables for configuration information is good, but ultimately flawed, and we are way past the point where this should continue to be the status quo.

Environment variables are great for configuration because:

  - you can inherit them from a previous application or application(s)
  - you can override them in each environment you run your app in
  - you can pass them on to other applications
  - they are globals that can be loaded by libraries
  - they're not hardcoded in the code, so easier to change things without rebuilding, easier to reuse in different ways/environments/configurations
  - the OS has primitives for them
  - they're simple
Environment variables are bad for configuration:

  - because (by default) when set in application, they are passed on to all future applications/forks/execs
  - they are often dumped as part of troubleshooting and aren't considered confidential
  - they can often be viewed by external processes/users
  - there are restrictions on key names and values and size depending on the platform
  - typical "dotenv" solution doesn't necessarily handle things like multi-line strings, has no formal specification
  - no types, schemas
What we actually need that environment variables are being used for:

  - configuration information passed at execution time that can change per environment
  - loading or passing secret values
  - development environments
  - production environments
So what would be a good alternative?

  - an application library ("libconfig") that can load configuration of various types from various sources in various ways
  - support for configuration types: key-value, file/blob, integer/float
  - support for confidentiality (require specific function to unseal secret values; in programming languages the intent would be you can't just print a stringified version of the variable without an unseal function)
  - support for schema (application defines schema, throws exception if value does not match)
  - support allowing a configuration to be overloaded by different sources/hierarchies
  - support passing a configuration on to other applications
  - support tracing, verbose logging
  - truly cross-platform and cross-language with one specification, behavior for all
How would it work?

  - devs can create a .env file if they want
  - devs load 'libconfig' into app, use it to load their configuration values during development. library can have default sources, and even set env vars or an object internally, so no code needs to be written to use it
  - in production, same code causes libconfig to look at cloud-native and other sources for configuration
  - when debugging, secret confidentiality is maintained, tracing communicates sources of configuration, what was loaded, from where, etc
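A tiny sketch of what such a "libconfig" could look like; all names are invented, and only the schema and secret-sealing pieces are shown, not sources or tracing:

```javascript
// A sealed value must be explicitly unsealed; accidental logging stays safe.
class Secret {
  #value;
  constructor(value) { this.#value = value; }
  unseal() { return this.#value; }
  toString() { return "[sealed secret]"; }
}

function loadConfig(schema, ...sources) {
  // Later sources override earlier ones: defaults < file < environment.
  const merged = Object.assign({}, ...sources);
  const config = {};
  for (const [key, spec] of Object.entries(schema)) {
    if (!(key in merged)) throw new Error(`missing config key: ${key}`);
    let value = merged[key];
    if (spec.type === "integer") {
      if (!/^\d+$/.test(String(value))) throw new Error(`${key} must be an integer`);
      value = parseInt(value, 10); // typed, not stringly
    }
    config[key] = spec.secret ? new Secret(value) : value;
  }
  return config;
}

const config = loadConfig(
  { PORT: { type: "integer" }, API_KEY: { secret: true } },
  { PORT: "3000" },                   // defaults
  { PORT: "8080", API_KEY: "s3cr3t" } // e.g. from the environment
);
// config.PORT === 8080; `${config.API_KEY}` === "[sealed secret]"
// config.API_KEY.unseal() === "s3cr3t"
```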

Super early days but we're building something very similar to what you are describing over at https://dmno.dev

Granted our solution is more javascript/typescript focused - and the config schema will be defined using TypeScript and piggyback on npm for sharing plugins/extensions. But the config will be usable in any language (and generate types for multiple languages) with deeper integrations coming soon.

The pluggable nature of our architecture also means you can encrypt secrets in your repo if you want to, or sync with other backends. Shouldn't be too hard to keep everything away from env vars either if that's what some folks want.

Would love your input, and to hear what you think!

The feature list looks great, but it looks like it's trying to bite off too much at once.

  - Supporting a particular repo type is an engineering smell (over-complicated/over-opinionated/tightly-integrated); this should be repo-design-agnostic
  - The "dmno service" is also biting off too much, this should not be a concept inherent to the configuration library
  - Schemas should be optional
  - Data types are fine, but get complicated, especially when mapping between different data formats/containers/transports; it's probably better to start with only a couple types and grow them over time if needed
  - Inter-service dependency management is also too complex for this solution
  - Plugins are a good idea
  - Three different package types? Complexity...
  - The security features are great
I get that you're trying to sell a product, and so having a big kitchen sink makes it more attractive to buy, but it makes for more complicated solutions which then annoy users.

GOAL: To be able to commit all your envs to git.

This is the only goal, and this tool achieves it. In the simplest way. While keeping you as secure as you were before, manually setting envs on heroku, railway, aws, jenkins etc.

GitOps FTW

Can you do safer? Yes, yes you can, with a secrets management service (e.g. HashiCorp Vault). Is that way more complicated to set up in all envs? Oh yes, yes it is.

Huge fan of dotenv, excited to try this out!

It doesn’t have Typescript types!

Written in JS; I think I’ll pass.

dotenvx encryption goes a long way toward solving THE BIGGEST problem with dotenv: needing multiple tools to ensure that secrets are protected.

I wonder if dotenvx ensures that .env is in .gitignore and yells loudly if it is not.

I encrypt my dotenvs with gpg, but that's hella esoteric and everyone shouldn't be forced to do that.

and .dockerignore


Yet another problem already solved 25 years ago by the mighty Sun in the J2EE spec.

I _detest_ this kind of encryption. It's literally worse than useless. It makes life much harder during debugging, and it eventually leads to developers just storing the decryption keys locally.

For this kind of encryption to work, you need to supply the decryption key from some outside system (e.g. via env vars, AWS SSM, etc.). And if it can supply the key, then why not just use it for other important secrets directly?

It's also problematic from a secrets-management perspective, because a big part of the perceived value of encryption is being able to check secrets into git. But because the encryption is tied to long-term keys, you have to design your security processes around the assumption that those keys will eventually be exposed and need to be "revoked" (i.e. the secrets re-encrypted), and the "de-revocation" of those keys is hiding in your git history.

it might be easier to just store and checkout a single decryption key that only devops people know, vs storing hundreds of secrets.

Meanwhile, developers can move their .env file across systems without worrying that they left plaintext secrets somewhere.

also it allows adding new secrets without knowing decryption key - I think it is important for collaboration

Also, most importantly: plaintext decrypted secrets are never stored on disk, only kept in memory. I think that is also an improvement over regular dotenv.

> it might be easier to just store and checkout a single decryption key that only devops people know,

"Devops people know" means that the key must be some secret property. Or you need to use the key during the deployment artifact building pipeline, and then deploy the artifacts with clear-text secrets.

> vs storing hundreds of secrets.

Then serialize them to JSON or whatever.

> also it allows adding new secrets without knowing decryption key - I think it is important for collaboration

So basically, you want developers (who don't have access to prod) to add random properties that your peers can't see during the code review? Ok...

Sorry, there's just no way the encrypted secrets in git are a good idea for general-purpose software.

Encrypted secrets are strictly an improvement over the status quo of unencrypted secrets on disk; I don't understand why you make it seem like a bad idea.

Like, what is the alternative you propose? Storing plaintext secrets on disk and hoping that your runtime is secure, hardened enough, and free from vulnerabilities?

As if directory traversal, path injection, shell command injection, and other vulns that allow reading files from disk don't exist?

> encrypted secrets are strictly an improvement over status quo of unencrypted secrets on disk

No. They are NOT. They are strictly worse than unencrypted data.

Unencrypted data is at least honest. Simply encrypting it and putting the key next to the data itself creates a dangerous illusion of security.

> like what is the alternative you propose? storing plaintext secrets on disk and hope that your runtime is secure and hardened enough and free from vulnerabilities??

Put secrets into your environment, don't store them on the disk.

If your code runs on AWS, then use AWS SSM or AWS Secrets Manager. If it's on Heroku, put secrets into the env vars. K8s has a secret manager. And so on.

I use tools that read secrets out of vaults on demand using existing infrastructure for key management. For AWS there is aws-vault.

I don't get it. Dotenv is only good for local dev. Otherwise you should put your secrets in environment variables (the "env" in ".env"). That people put .env files in prod is a mistake itself, and the proposed fixes here seem to not really do much about that.

You can commit your secrets as an encrypted vault with this. Then decrypt it with a key where needed: locally, on CI, on prod, etc.

This is basically a simplified version of Hashicorp's Vault, GCP key vault etc. with some less granularity on user authentication.

It solves the issues around .env.example and is perfect for GitOps. You have all your secrets for all your envs ready, while you only need to set a single env var (the private encryption key) in your specific hosting environment.

You could even use separate keys per env, e.g. to give a developer access to staging only.

Mozilla's https://github.com/getsops/sops is another contender but with a more complicated (and perhaps more flexible) key management.

https://rotx.dev can also be used for a local password safe, and it supports environment variables injection into various scripts or workflows.

(author of rot)

Looks great; however, it is much easier to work with plain files than with keyrings stored with some third party.

The keyrings are in jsonnet files stored locally, not sure what you mean.

So you have to make a commit when a key is compromised or simply rotated? What the hell.

If something is compromised you have a problem, whether it's an encrypted key or something else.

If your env data is compromised, you have to set new values in all services and restart your app / container.

But the encrypted env file can be shared within your team or published to a server without any problem. In the past this was a problem, when plaintext passwords got published accidentally.

Secrets don’t belong in environment variables either. Place them in a vault and grant specific processes/identities permission to read and decrypt them.

Env vars are prone to leaking, and best practice keeps moving the goalposts. Devs love to dump envs to log files, child processes inherit them, and admins can very easily sniff them.

All solutions inevitably evolve by adding another layer of indirection / abstraction.

There are costs associated with adding that additional layer, namely maintaining it.

The easiest way to bring down your entire distributed infrastructure and cause a large-scale outage is for your vault to go down...

Dotenv is an okay hack, but the root problem is that the operating systems we run these applications in don't have a fully conceived environment variable system.

I think you have a fair point that dotenvx doesn't get the implementation right, but it does at least seem to recognize where the problem lies and is trying to fix it from that angle. You have to start somewhere. Almost never do we get solutions right the first time. It takes iteration and experimentation and perhaps this (and others like it) can pave the way towards a better solution.

That is the point: the OS / container system must provide a secure system for storing and accessing the encrypted keys.

In the best case, your app sends a request (via something like gRPC) to the OS key system, which adds the decrypted keys and executes the function. That way your app never has direct access to the decrypted keys. Like the fingerprint system in smartphones.

I had to scroll a long way down to find this comment. Can't believe people are using the .env file in production!

Just set those env vars in your IDE. Let your IDE or docker-compose or whatever read an .env file if you must. But don't do it directly from your application code, indeed you're one lazy dev away from putting an .env file on prod servers.

Using dotenv-like constructions is, in my eyes, an antipattern.

Please don't configure important things in the IDE. IDE configs are notoriously fickle and checking them into the repo usually just leads to a lot of problems in my experience, plus it will mean duplicated effort every time you need to do something from the CLI.

I've seen so many .env files in production that I don't even wince anymore. I just formalize the finding by email and carry on.

And it need not even be an IDE. Whatever environment you use for development, set environment variables in that.

> Just set those env vars in your IDE.

How is that different from a dotenv file, other than where the k/v pairs persist?

Keys in the env file will ultimately be loaded as environment variables, so it's just adding a convenience layer.

One that negates the entire point.

> Dotenv is only good for local dev.

It wouldn't surprise me if many VPS deployments use .env files.

Seems odd to announce new features without calling out to the fact that libraries with the same name in other languages have had those same features for years. (for example, the dotenv gem in Ruby)

Node.js already has built in support for .env


True, but the dotenv npm package has been around much longer than that support (11 years vs 9 months), and the .env support in Node doesn't yet support features like auto-inclusion of environment-specific files.

"Config Management" means something else to systems folks. Suggest adding "for node projects" or similar to the title to clarify.

It's not node specific like dotenv, it's a generic tool for launching processes with (optionally encrypted) environment variables.
