AWS Secrets Manager Agent (github.com/aws)
112 points by plurby 3 months ago | 66 comments



So the point of this is just to cache secrets, to avoid caching them in your app memory?

Seems like kind of a niche threat model; if your app is already compromised to the point where its secret cache can be read, it seems likely that the attacker could also pivot to just read from the agent's cache, or use the instance credentials to read from Secrets Manager itself.


If I looked at what this does and none of the surrounding discussion/documentation, I'd say this is to simplify using Secrets Manager properly, more so than any security purpose.

To use Secrets Manager "properly", in most cases you're gonna need to pull in the entire AWS SDK, maybe authenticate it, make your requests to Secrets Manager, cache values for some sort of lifetime before refreshing, etc.

To use it "less properly", you can just inject the values in environment variables, but then there's no way to pick up changes and rotating secrets becomes a _project_.

Or just spin this up and that's all handled. It's so simple you can even use it from your shell scripts.
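For a sense of what that looks like, here's a minimal sketch of a call to the agent, going by the README: it serves GET requests on localhost (default port 2773) and expects an SSRF-protection token header whose value is read from a local file (default path shown; both are configurable). The secret name here is made up.

    import urllib.request

    # Read the agent's SSRF-protection token (default path per the README)
    token = open("/var/run/awssmatoken").read().strip()

    # Ask the local agent for a secret; it returns a GetSecretValue-style
    # JSON response
    req = urllib.request.Request(
        "http://localhost:2773/secretsmanager/get?secretId=my-app/db-password",
        headers={"X-Aws-Parameters-Secrets-Token": token},
    )
    print(urllib.request.urlopen(req).read().decode())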


For anything we inject secrets into via env vars (which really is only supported by ECS, maybe EKS?), it is easy to add a Lambda to kick off a nightly ECS restart, along the lines of the sketch below. Easier if you are already using the AWS CDK for tooling.
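A minimal sketch of such a Lambda, assuming boto3 and illustrative cluster/service names:

    import boto3

    ecs = boto3.client("ecs")

    def handler(event, context):
        # Force a new deployment so tasks are replaced and re-read the
        # secrets injected as environment variables at startup.
        ecs.update_service(
            cluster="my-cluster",
            service="my-service",
            forceNewDeployment=True,
        )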

The purist in me thinks restarts are a hack, but the pragmatist has been around long enough to embrace the simplicity.

Adding another dependency/moving piece that AWS could drop support for, or that could just break, also steers me away from this.

For Lambda, processes should be getting swapped fast enough, and you normally load secrets during a cold start only. I could see some argument there around improving cold start performance, but that would need some testing.

So, maybe this is to save a few cents?


No, the point is to get sensitive data out of the env variables, which nowadays get stored in plaintext in a .env file or similar. This is a solution for storing and retrieving secrets using AWS credentials. Essentially an online password manager for your application.


But AWS Secrets Manager does that already, without the Agent. It seems like the main value-add of the Agent is that you don’t have to manage a cache in your application code but still get the performance/cost advantage of having one.


So you don't have to manage a cache, but you do have to manage a network-connected sidecar service? You can make the "N programming languages" argument for why this isn't just a library, but they already have the aws-sdk, with Secrets Manager clients in that SDK; what advantage would this hypothetically have over a caching mechanism native to the SDK and internal to the application process?


The Security section of the README actually recommends using the SDK when it is a viable option. Seems like this is meant to fill a small gap for niche scenarios where that isn't an option, for some reason.


Yeah, I think this line should really be at the utter-tippy-top of the README:

> The Secrets Manager Agent provides compatibility for legacy applications that access secrets through an existing agent or that need caching for languages not supported through other solutions.

This does not appear to be an interesting, state-of-the-art, purported-best-practices way to access Secrets Manager; it's just a shim for legacy apps. And that does make sense, as there are many languages which do not have SDKs available but do have an HTTP client library; though I question how much of the demand for something like this comes from how insanely complex authenticating with the AWS API is.


What about the credentials used to access AWS credentials? I think there's a good case for centralised credentials where they are shared across applications, though I would seriously question the need to share them across applications in the first place. But what you're achieving here, as far as I can tell, is just making secret retrieval more convoluted (for both devs and hypothetical attackers). Not to beat the dead horse, but obscurity != security.


When you deploy your code to AWS Lambda or EC2, the code can simply access the appropriate secret stores as dictated by the IAM policy. If you haven't bought into AWS as a whole, you're right that there's no good reason to use Secrets Manager.


If you're in AWS, you get credentials from the metadata service. If you're outside AWS, workloads assume roles using OIDC. If you still have access keys, generally speaking, you're doing it wrong.

https://aws.amazon.com/blogs/security/access-aws-using-a-goo...


Both the metadata service and assuming a role with a “web identity” still give you an access key along with a session token.
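For the curious, this is roughly what the SDKs do under the hood on EC2; a sketch of the documented IMDSv2 flow (the role name is discovered, and the credentials come back temporary, with an expiry):

    import json
    import urllib.request

    IMDS = "http://169.254.169.254"

    def imds(path, method="GET", headers=None):
        req = urllib.request.Request(IMDS + path, method=method,
                                     headers=headers or {})
        return urllib.request.urlopen(req).read().decode()

    # IMDSv2: fetch a session token first, then read the role's
    # temporary credentials
    token = imds("/latest/api/token", method="PUT",
                 headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"})
    auth = {"X-aws-ec2-metadata-token": token}
    role = imds("/latest/meta-data/iam/security-credentials/",
                headers=auth).strip()
    creds = json.loads(imds("/latest/meta-data/iam/security-credentials/"
                            + role, headers=auth))
    # Temporary access key ID and when it expires
    print(creds["AccessKeyId"], creds["Expiration"])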


Technically true, but in practice the role means you don't have to care about them. They're an implementation detail that's managed by AWS. Could be flying mice for all the app dev cares.


Sure, under the hood it is still access keys. Very short-lived access keys that, on the normal happy path, you're not directly handling. What I really mean by my comment above is that you're not configuring your workload with ACCESS_KEY=abc123 SECRET_ACCESS_KEY=xyz789.


They aren't configured, but they're not as temporary as one might hope (they don't rotate on every read, for example), and it's a pretty trivial set of exploits to leak them, especially in Kubernetes clusters with incorrectly configured worker nodes.

A much better solution would be for AWS to offer a domain socket or device inside VMs that will sign requests, such that the private material isn't even available to leak.


OIDC uses a client secret, for one.


Ah that's where we have the Credentials for AWS Credentials Service Agent

Just simply pass it a credential and it will provide you the necessary credentials to access the Credentials for AWS Credentials Service


You need to think bigger, as there is surely some limit n beyond which the nested process of retrieving credentialₙ is beyond the reach of attackers.


There are no credentials; you are supposed to use identity-based auth: your Lambda / EC2 / EKS pods etc. have an IAM role, so there is no secret in any form.


From AWS pricing:

> Per 10,000 API calls

> $0.05 per 10,000 API calls.

So imagine you have some number of cron jobs which require a bunch of secrets and these things fire every minute or 30 seconds or what have you. You could save as much as $0.25 a month!


Also, according to this blog, initializing the AWS SDK can add ~1 second per invocation: https://blog.aquia.us/blog/2023-01-01-secrets-manager-lambda...


I think the point is less for apps, and more for the infrastructure that deploys apps (think: a Kubernetes control-plane), when that infrastructure depends on secrets from AWS but does not itself live within AWS — i.e. the "hybrid cloud" use-case.

> or use the instance credentials to read from secrets manager itself

Usually apps don't actually have instance credentials like this, but rather the thing deploying the app does, and that thing then injects just the secrets the app actually needs into the app's sandbox.


I am working on a secrets manager for the paranoid, and part of the idea is to do this, yes. However, most of the idea is to get secrets off of your disks and out of your git repos. That's mainly what HashiCorp Vault and AWS Secrets Manager do for you. They turn authenticated roles into the ability to access your secrets so they don't go in a plaintext file.


Reading this I am confused about what exactly this is meant to solve as well.

Given that services like Lambda and ECS are already set up to pull from Secrets Manager natively and provide it as an environment variable.

What is the threat model that this is actually going to solve? At best it seems like security through obscurity: it removes the low-hanging fruit of looking at the ENV, but if your application has the rights to use this, then if someone gets into your container they can still get your secret.

What am I missing about the big advantage of this and why it was made?


The motivation is in the project’s readme, down at the bottom.

The tl;dr is that this is for legacy software where you can make HTTP calls to retrieve a secret, but for some reason cannot use the AWS SDK. If you can use the SDK, you should use that instead of this proxy.


Why are all the various "secrets vault" approaches so splintered and proprietary, anyway? Why is there a separate tool I have to install for:

• AWS secrets, GCP secrets, Azure secrets... each has its own API

• secrets in a HashiCorp Vault install

• secrets from whatever cloud password manager

• "ambient" secrets from env-vars, or the local .netrc, or the local macOS Keychain

• k8s Secrets resources (when you're a k8s CRD controller)

• secrets stored in SOPS files, in turn encrypted by keys held in any of the above

Why haven't we seen a generic "secrets client" library, with pluggable adapters for handling all of these cases through the same library API / CLI tooling?

Or better yet, why not a generic stub secrets client that speaks to an also-generic "caching middleware proxy" like this AWS one, where the proxy has the pluggable backend adapters + connection config for them?
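Something like this hypothetical shape (all names invented for illustration; the AWS backend assumes boto3):

    from abc import ABC, abstractmethod

    class SecretsBackend(ABC):
        @abstractmethod
        def get(self, key: str) -> str: ...

    class EnvBackend(SecretsBackend):
        # "Ambient" secrets, local to the client rather than the proxy
        def get(self, key: str) -> str:
            import os
            return os.environ[key]

    class AwsSecretsManagerBackend(SecretsBackend):
        def __init__(self):
            import boto3
            self._client = boto3.client("secretsmanager")

        def get(self, key: str) -> str:
            return self._client.get_secret_value(SecretId=key)["SecretString"]

    class SecretsClient:
        # Resolves scheme-prefixed keys, e.g. "aws:my-app/db" or
        # "env:DB_PASSWORD", against pluggable backends
        def __init__(self, backends: dict[str, SecretsBackend]):
            self._backends = backends

        def get(self, uri: str) -> str:
            scheme, _, key = uri.partition(":")
            return self._backends[scheme].get(key)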


The stub secrets client is just a key->value API, so the value of a proxy is pretty limited. It's not a hard enough problem that anyone is interested in having a separate product for it.


The point of the proxy is that it would talk to these fifteen different backends and convert them into a generic key-value API. And also, as with the AWS solution above, do TTL-based cache refresh of the secrets, cache invalidation when it loses connection to the backend, etc.

Also, the "stub" client wouldn't really be a stub, as all the "ambient environment" secrets adapters would necessarily be local to the client rather than to the proxy. The client library would be a bit like using dnsmasq(1) as a local "stub" DNS resolver — where it reads your /etc/hosts and so forth, but for most things is deferring to a configured upstream DNS server.


The extent of the “conversion” required would pretty much just be taking one form of JSON output and transforming it into a different JSON output, which is pretty easy to do in a few lines of Python or a single jq command. It would likely be more work and hassle to have to install and manage a secrets proxy, rather than just writing a few transformation lines, or better yet just using the SDK of the service you’re using.

Even this secrets manager proxy that the OP is about is explicitly to be used in legacy situations where you can’t use the AWS SDK, which is preferred because it does all of the stuff you mentioned for you.

However, this is Hacker News! If you think you see a problem that can be solved that other people don’t, why don’t you build it?


I mean, that's kind of what KeyWhiz does, no?


> Why haven't we seen a generic "secrets client" library, with pluggable adapters

"Spring" has that. You can define a property that can be populated from pluggable sources, like vaults, yaml, environment variables and others.

You just add an annotation @Value("${foo.bar}") to a field or constructor parameter, and it will be filled from the appropriate source automatically.


> secrets stored in SOPS files, in turn encrypted by keys held in any of the above

SOPS does already work this way, right? You don't have to use local GPG keys or whatever with SOPS; you can use keys from AWS KMS, or keys stored in HashiCorp Vault, or whatever.


At least in Kubernetes-land, external-secrets.io provides this.


external-secrets really is great!

Pointing this out here, because big evil companies generally don't get praise when it's due: GoDaddy built this!


I think 1Password can be used for AWS, at least.


Because a security vulnerability in the common library will have a much larger impact. It also increases the potential attack surface by adding more components. Companies value the secrets they keep and want to make sure they have 100% vertical control, where they can audit everything.

Also, in any project with a sane architecture, you're using 1 vault and maybe 1-2 ambient strategies to pass the data. You won't use all the vaults at the same time anyway.


> Also, in any project with a sane architecture, you're using 1 vault and maybe 1-2 ambient strategies to pass the data. You won't use all the vaults at the same time anyway.

You're assuming the secrets here are managed by infra+glue added by a DevOps team when deploying an app.

I'm talking about use-cases where the secret-handling is designed into e.g. a cluster-scale deployable virtual appliance, where you configure the app through its UI or deployment-time config files to access your "secrets provider" of choice. (Think "deployable PaaS.")


This seems like quite a lot of setup and hassle for what could be handled some other way with less fuss, like chamber[0] or Doppler[1]. Heck, even the classic .env seems like a better choice in every way.

What are the advantages of a configuration like this? It seems like the HTTP interface, non-encrypted cache, and separate-agent situation isn't something secure enough to satisfy most companies these days.

[0] https://github.com/segmentio/chamber

[1] https://www.doppler.com/


I think the audience for this is someone who is already using AWS Secrets Manager, but wants to reduce their API usage (perhaps due to cost).

Chamber uses SSM Parameter Store, which for many cases is similar, but some people might have a preference for Secrets Manager. For example, a team might like the automatic RDS password rotation for Secrets Manager and decide to put everything there for consistency.

For Doppler, well maybe someone doesn't want to pay for it, or they'd rather control access to their secrets via IAM instead of through a separate tool.


Yes, we use something similar for debugging lambdas locally. We use Dotnet, and this library:

https://github.com/Kralizek/AWSSecretsManagerConfigurationEx...

Normally the AWS SDK uses the current account context to get secrets, but if we run a Lambda as a local build, it uses this library to pull secrets from the actual dev AWS account.

This makes it easier to onboard new developers, reduces problems of figuring out what secrets to get for each lambda, etc.

Also if secrets are rotated in dev, local stacks get them automatically.

I am curious to see if this tool is remarkably different.


It's no joke that AWS Secrets Manager calls add up. At my medium-size US web company, for our data lake account last month, KMS was the second-highest line item after S3 service cost: S3 at 94% of total, KMS at 4% of total, with Tax and Kinesis the remaining sizable components.


Chamber can also use S3 + KMS as a backend, which reduces the API costs to ~0 and massively improves the scalability (since SSM has annoyingly low rate limits, or at least it did a few years ago when we last tried it).


The use-case seems to be intentionally narrow:

> The Secrets Manager Agent provides compatibility for legacy applications that access secrets through an existing agent or that need caching for languages not supported through other solutions.


I was going to say you can rotate secrets in Secrets Manager without redeploying all your services. But this caches the secrets, so you'll still get stale results for up to 5 minutes by default. Not sure what the point is then.


> even the classic .env seems like a better choice in every way

That's a pretty thorough misunderstanding of the value that secrets management services provide. We can start with the idea of never storing secrets in files.

I think most companies also understand the difference between plain HTTP localhost loopback and transmitting secrets in plaintext over the network. There are many services that rely on localhost loopbacks for handling all kinds of sensitive data.

Chamber is great but generally relies on transmitting secrets via environment variables to the enclosed process, and assumes they will remain valid for the lifetime of that process. Part of the point of this tool is to provide a secrets cache with a TTL, conceptually like the sketch below.
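A toy sketch of that idea, not the agent's actual code (the 300-second default matches the 5-minute TTL mentioned elsewhere in the thread):

    import time

    class TtlCache:
        # Serve a cached secret until the TTL lapses, then re-fetch.
        # fetch is any zero-arg callable, e.g. a GetSecretValue call.
        def __init__(self, fetch, ttl_seconds=300):
            self._fetch, self._ttl = fetch, ttl_seconds
            self._value, self._expires = None, 0.0

        def get(self):
            if time.monotonic() >= self._expires:
                self._value = self._fetch()
                self._expires = time.monotonic() + self._ttl
            return self._value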


This sounds an awful lot like an internal Amazon tool that predates AWS Secrets Manager. It was actually really nice to use; the advantage comes if you can always rely on the daemon being available and can just say "these machines have access to this secret." If you had to set up and configure the VM yourself, maybe pointless, but it's intended for situations where you're deploying 1000s of VMs across many teams and some centralized team is preparing the machine images you're using.


What I really want is a consul-template for AWS Secrets Manager. As I wrote this I googled and found a plugin:

https://github.com/chrissav/consul-template-plugin-secretsma...

I didn't realize consul-template supported plugins.


For senior developers who are ready to write code, integrating the appropriate AWS SDK library for your programming language and writing a few lines of code might seem straightforward, and may not take more than half a day. However, consider a large company with thousands of applications, like in my case, where this effort is multiplied a thousandfold. Moreover, these applications are developed in over 10 different languages, some of which may not even have an AWS SDK available. Therefore, an agent that simplifies these operations into a single HTTP call to a sidecar service truly adds value.

Another consideration is operations: imagine there are 10 different libraries maintained for this purpose, and a new feature comes along, say, you need all logs going to one place; making sure it is available in all languages would require a team with all those different programming skills. The Secrets Manager Agent, being language agnostic, only needs to be changed in one place, and someone else may have already done it or be ready to do it, as it is an open source project.

When it comes to cost savings, imagine a scenario where a junior developer improperly implements secret retrieval in a Lambda function, with retrieval occurring at every invocation and the function handling 100 transactions per second. That single oversight can cost over $1,000 a month, and it can easily go unnoticed for a year, since people rarely give a function further scrutiny as long as it appears to work.
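Back-of-envelope, using the $0.05 per 10,000 calls figure quoted elsewhere in the thread:

    calls_per_month = 100 * 60 * 60 * 24 * 30  # 100 TPS -> 259,200,000 calls/month
    cost = calls_per_month / 10_000 * 0.05     # $0.05 per 10,000 API calls
    print(f"${cost:,.2f}/month")               # -> $1,296.00 for one leaky function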


FYI, there is an AWS-provided Lambda layer similar in principle to this, which also includes access to Parameter Store.

https://aws.amazon.com/blogs/compute/using-the-aws-parameter...


How is this different from calling Secrets Manager directly? The only benefit I can think of is caching. So your secrets can be fetched a bit faster. But that is such a niche use-case, and you can easily cache it yourself if you need to.


Sometimes you just want a daemon to fetch secrets / config files containing secrets for whatever code you don't own.

For example, you spin up nginx and set up HTTP basic auth quickly, and don't bother writing your own script to periodically update the user list from SSM.


Apparently it has to do with pricing per API call.


AWS directly contacted us to warn us about pricing because we were pulling secrets so much across all our deployments. Caching is definitely important for that reason alone.


One particular use case I might try this for is (very) restrictive environments. One such case was at my previous job, where we had to develop services for the client but could only do it on a remote desktop with certain network and application restrictions. Instead of having conditions for the environment to load certain config, we can simply retrieve the secrets stored in AWS (e.g. RDS credentials) via the agent.


I'm going to say this as nicely as I can. Secrets Manager can fuck right off with their $.50/mo/secret pricing.

Moved all our secrets to S3 a long time ago and haven't looked back.


If you don't need the granularity, you can store all the credentials that will be used by a specific caller (or callers) in a single JSON object, and it will cost you only that one $0.50. You can easily fit a thousand; the maximum size is 64 KB.
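A sketch of that consolidated pattern, assuming boto3; the secret name and keys are illustrative:

    import json
    import boto3

    sm = boto3.client("secretsmanager")

    # One secret holding many credentials as a JSON object: one monthly
    # secret charge and one API call for many values.
    blob = json.loads(
        sm.get_secret_value(SecretId="my-app/all-creds")["SecretString"]
    )
    db_password = blob["db_password"]
    api_key = blob["third_party_api_key"]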


DynamoDB also makes for a nice fine-grained secrets manager with their new table resource policies: https://speedrun.nobackspacecrew.com/blog/2024/06/27/using-d...


I use Parameter Store instead


This is really cool, I've been running something similar to simplify rotating database credentials for legacy projects.


So a bit like HashiCorp Vault (in that it has a locally accessed secrets store) but backed by AWS Secrets Manager.


Who cares? People are only upvoting this because it’s written in Rust. The actual tool seems useless


This should come in handy with SOPS and git log.


I got to use Secrets Manager a while back and it was a breath of fresh air: all of those things you're seeking in Vault, without all of the problems of it being HashiCorp. No offense, HashiCorp. I'd rather blame AWS than a self-managed solution.


The auth alone makes it so much simpler. We initially were going to set up a self-hosted Vault and set up all the auth to integrate with our EC2s, but on a whim I spent a few hours setting it all up with AWS Secrets Manager, with implicit auth through an IAM role attached to the EC2s, and it was dead simple and done. Best part is, I don't have to care how AWS Secrets Manager is hosted, and my services don't care how to authenticate against it; it's all implicit through a simple API.
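The whole integration ends up being about this much code (a sketch assuming boto3 on an EC2 instance with a role attached; the secret name is made up):

    import boto3

    # No keys in code or env: boto3 picks up temporary credentials for the
    # instance's IAM role from the metadata service automatically.
    secret = boto3.client("secretsmanager").get_secret_value(
        SecretId="prod/my-service/db"
    )["SecretString"]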


Yep, delegating it all to IAM is another huge win.


This feels more like Azure Key Vault, which has been a superior product.



