I work with AWS a lot every day and lead a team responsible for building workloads on AWS for some customers with very high security requirements. This tool terrifies me.
The sheer amount of potential for misconfiguration of resources that this tool can exploit with no effort whatsoever is absolutely insane. I feel like every AWS environment I've ever seen is suddenly at risk of some angry employee compromising everything very very quickly.
I'm betting over at AWS they're almost as terrified by this as I am.
I can almost guarantee you that attackers focusing on AWS environments have all sorts of similar (if not worse) tools. The fact that this is public hopefully terrifies AWS into improving their security usability and making these kinds of exposures more difficult. What's important to remember is that there isn't actually any _vulnerability_ here (the tool still requires valid authentication to work); it just makes it 100x easier to automate.
Initially stuff like this is scary, but it leads to good things in the end: tighter security, opening customers' eyes, etc. Probably the better black hats already knew about these techniques and your organization wasn't really worth anything to them, so they skipped it. At least tools like these help us security neophytes have a bit of a fighting chance out there on the Wild Wild Web.
Currently working on a cloud security project, and the sheer amount of surface area that AWS exposes, combined with the number of asterisks I see in various types of policies, is terrifying. Enumeration is incredibly dangerous when there are so many poorly scoped service roles blindly trusting an entire AWS service, without realizing that this trust extends across accounts.
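To illustrate the kind of trust being described, here is a hypothetical sketch of a role trust policy (the service and account ID are placeholders, not from the article). Because the `Principal` is an entire AWS service with no source condition, any account that can drive that service can potentially assume the role:

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": { "Service": "events.amazonaws.com" },
    "Action": "sts:AssumeRole"
  }]
}
```

Adding a condition such as `"Condition": {"StringEquals": {"aws:SourceAccount": "111122223333"}}` is the usual confused-deputy mitigation, restricting the trust to requests the service makes on behalf of your own account.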
My first thought was "why is salesforce publishing essentially a hacking tool? why can't they bring it up privately, surely a large enough company will have some weight to their request?" but then I remembered AWS...
>At the time of this writing, AWS Access Analyzer does NOT support auditing 11 out of the 18 services that Endgame attacks. Given that Access Analyzer is intended to detect this exact kind of violation, we kindly suggest to the AWS Team that they support all resources that can be attacked using Endgame
Author here :) Endgame exploits/abuses features. If it was a bug, I'd work with AWS to solve the problem, but with abusing features - that would result in years of unsatisfied feature requests. This should push the issue along.
>...and it's not even a hacking tool!
It can be used to backdoor resources to rogue accounts, so I'd say it's a hacking tool and can/should be used on penetration tests. I'd certainly use it on a pentest :)
Salesforce also runs Heroku, which is one of the biggest AWS wrappers around. I'm really glad they're active in security auditing here, it's a real value add to customers of Heroku / Salesforce services to see evidence of their work to analyze security.
Not sure what the shock is with seeing security tools like this released. The vast majority of security tools are open source; how is this different from what we have been seeing for the past 30 years?
Not to mention companies such as Google, Netflix and Mozilla all release security tools just like this.
Of course this was going to happen. Who knows, maybe this way the author achieved what he wanted and those policy exploits will be revisited, at last.
It really seems that AWS cares more about the cadence of shiny new managed solutions than they do about maintaining and upgrading their existing solutions. I wouldn't characterize it as willful negligence, quite yet, but some processes are definitely broken.
Case in point, in the last week alone, I've discovered a Fargate EKS managed platform upgrade getting botched behind the scenes (unexpected containerd versions, etc), as well as a lack of support out of RDS Proxy for things like the latest stable default Postgres offering (12.5) in RDS. They released 12.0 to the preview channel in November of 2019 ... how long does it take exactly to get support for something like that?
All that is to say, I would not be expecting any improvements to AWS Access Analyzer anytime soon, despite this tool's debut.
Note that as far as I could tell, this is a tool to check which unexpected AWS modifications can be made using API keys that have already been exposed in the first place. It doesn't "hack" an account per se.
So for example if you've created some IAM API keys and embedded in an app for example, and you (incorrectly) believe the permissions only grant the app to fetch some static media files from an S3 bucket, the tool can discover incorrect configurations that would allow someone who extracted the key to change permissions of the bucket.
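As an illustration of that scenario (the policy here is hypothetical, not taken from the tool): the developer intends read-only access to one media bucket, but the key is actually attached to a policy like the one below, where `s3:*` quietly includes permission-changing actions such as `s3:PutBucketPolicy`:

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": "s3:*",
    "Resource": "*"
  }]
}
```

Anyone who extracts the embedded key can then rewrite the bucket policy to grant an outside account access, rather than merely calling `s3:GetObject` as intended.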
Yes, you'd have to leverage compromised credentials. That could be obtained via SSRF, RCE on a privileged box, leakage of user access keys, or other means. In the context of a penetration test, it's more of a post-exploitation tool.
> First, authenticate to AWS CLI using credentials to the victim's account.
... right. This is just a glorified "what can this IAM user do" tool. There is literally no actual pentesting done. Not much different than having the key to your neighbor's front door and seeing how many things inside their house are unlocked for you.
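For what it's worth, stock AWS tooling can already answer the "what can this IAM user do" question for specific actions via the IAM policy simulator; a sketch with placeholder ARN and actions:

```shell
# Ask IAM whether a principal is allowed specific actions
# (the account ID and user name are hypothetical).
aws iam simulate-principal-policy \
  --policy-source-arn arn:aws:iam::123456789012:user/app-user \
  --action-names s3:PutBucketPolicy ecr:SetRepositoryPolicy \
  --query 'EvaluationResults[].[EvalActionName,EvalDecision]' \
  --output table
```

This requires valid credentials with `iam:SimulatePrincipalPolicy` permission, so it only covers keys you already hold, which is exactly the point being made above.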
dry run should be the default, and for you to actually do damage, you should explicitly run with a flag like `--commit` or `--deploy-evil-payload "yes I am certain of this"`
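A minimal sketch of that idea in shell (the function name and messages are hypothetical, not from the tool): destructive behavior is opt-in, and the default path only reports what would happen.

```shell
# Dry run by default: nothing destructive happens unless the caller
# explicitly passes --commit.
expose_bucket() {
  if [ "$1" = "--commit" ]; then
    echo "APPLIED: bucket policy now shared with external account"
  else
    echo "DRY RUN: would share bucket policy (re-run with --commit to apply)"
  fi
}
```

Requiring a deliberately verbose flag for the destructive path makes it much harder to nuke an environment by accident.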
FWIW, the open-source (and CNCF incubator) project https://cloudcustodian.io can detect and remediate, in real time, these modifications to embedded IAM policies (across many resource types) that share beyond an organization's/account's boundaries. It's like Access Analyzer, except it's flexible enough to understand internal org distinctions (dev/prod separation) and allowed access to third parties.
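For instance, a sketch of a Cloud Custodian policy using its `cross-account` filter (the whitelisted account ID is a placeholder; check the c7n docs for the exact filter options on each resource type):

```yaml
policies:
  - name: s3-cross-account-check
    resource: aws.s3
    filters:
      - type: cross-account
        whitelist:
          - "123456789012"   # accounts allowed to be granted access
```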
Anybody have a mirror? It seems to have been taken down from GitHub.
Also, I guess it might have been a not-so-nice move from an almost direct competitor of AWS (Salesforce) to publish something like that. Salesforce owns Heroku.
Impressive tool, but the supporting documentation is what I appreciate most.
I think the prevention guide could be improved by providing an example service control policy that blocks known dangerous IAM actions like ecr:SetRepositoryPolicy for all but a specific security principal.
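Something along these lines, perhaps (a sketch, not a vetted SCP; the exempt role name is a placeholder, and the action list would need to cover everything Endgame abuses):

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "DenyResourcePolicyTamperingExceptSecurityRole",
    "Effect": "Deny",
    "Action": [
      "ecr:SetRepositoryPolicy",
      "s3:PutBucketPolicy",
      "kms:PutKeyPolicy"
    ],
    "Resource": "*",
    "Condition": {
      "StringNotLike": {
        "aws:PrincipalArn": "arn:aws:iam::*:role/security-admin"
      }
    }
  }]
}
```

Applied as an SCP at the organization root or OU level, the explicit `Deny` wins regardless of what IAM policies individual principals have.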
Can someone explain why you'd ever want to run this in the non-dryrun mode?
I understand that if you have these problems you've already effectively granted those permissions anyway but actually executing them before someone finds them lowers the bar quite a bit for other baddies to attack.
For me, my environments are in different AWS accounts and can be torn down and stood back up rather quickly, so it wouldn't be a big deal to let this destroy a dev environment in the name of science so that I could implement improvements.
So, this is essentially a script to mess up your AWS resource permissions by using a privileged account to an extent that a) might surprise folks who haven't thought too deeply on the matter, and b) will be challenging to uncover using AWS's own audit facilities, is that fair to say?
Does anyone have any ideas as to why this is being taken down? Hacking tools are released all the time. Why did this one make such a big ripple in the pond?
It is great that this is public because it will create some sense of urgency. Similar to how a bug on Aurora gets exposed, every such finding will directly or indirectly help users make good decisions and understand how to be careful.
https://github.com/brandongalbraith/endgame still has it as of this morning. The several mirrors I bookmarked last night, intending to ask about at work, have disappeared, so I don't know how long this one will be there.
Not to sound like a jerk, but why do you think this would get some "OMG" response from AWS? This is not "hacking"; it's a tool for detecting whether you misconfigured API access to be overly permissive. The tool's job is to find those misconfigurations and then "abuse" them. It's not as if AWS is unaware of user misconfigurations. The issue is that AWS doesn't provide tools to detect these very well; tools like CloudAware also exist because of things AWS doesn't provide. And it's not as if AWS is unaware that such tools can be built, considering they just crawl and attempt a series of already-existing AWS API calls.
The tool is great as a free tool and very helpful, but it's also not as if AWS doesn't already have people smart enough to make something just as good, if not better. It's just obviously not AWS's priority. They can simply leave the blame on the user for not properly managing IAM permissions.
And? That's not because AWS was like "OMG, so smart." AWS is already well aware of this issue but lays the blame on "Shared Responsibility," and they're likely annoyed that Salesforce, a partner of AWS, released this without communication.
Honestly, my guess is there was a lapse in Salesforce somewhere, where either legal or PR didn't check this because this likely goes against Salesforce and AWS NDA for their partnership. I worked as an AWS partner before, there are requirements that go into place before you can release stuff like this to the public. Plus, having worked with Salesforce as well, I assume they have a PR policy to not use the word "hacking" in tool names or description, especially in regards to partners. My company has similar rules for OSS stuff.
This was more of a bad PR / Legal issue. AWS is well aware that people misconfigure permissions...
And again... better tools and more popular tools already existed... This is not new
Except it's not "Pentesting tool to backdoor" anything.
It simply modifies access, given that you already have credentials allowing you to do that.
You can do the same with aws cli (oh horror /s).
I was thinking about putting up a new repo with the code in it, but I'd rather not risk the wrath of AWS since my job kinda depends on the service. Which probably says something about the state of FAANG companies, that I'm even concerned about it.
We use both AWS and Salesforce, and I'm surprised this tool was developed by SF after all the bells and whistles about the partnership between the two.
Nothing of a security threat, I guess. It uses your permissions to modify the current permissions of a different product. If you do have permissions to modify things, then this will work; if you have no permissions, it will fail.
So can it be used with bad intentions? Yes. But if I were a hacker, would I want to open all the available doors, or choose only one or two and keep the rest as-is?
# For each subscription visible to these credentials, delete every
# resource group asynchronously, with no confirmation prompt:
for sub in `az account list | jq -r '.[].id'`; do \
  for rg in `az group list --subscription $sub | jq -r '.[].name'`; do \
    az group delete --name ${rg} --subscription $sub --no-wait --yes; \
  done; \
done
>I did uncover a ridiculously destructive approach to abusing Azure Service Principals in CI/CD pipelines that deploy infrastructure in Azure (Confused Deputy problem):
> for sub in `az account list | jq -r '.[].id'`; do \
>   for rg in `az group list --subscription $sub | jq -r '.[].name'`; do \
>     az group delete --name ${rg} --subscription $sub --no-wait --yes; \
>   done; done;
The CI provider giving you an over-privileged SP to play with needs to fix that, sure. SPs start with zero role assignments, so it's particularly egregious that they gave it unnecessary permissions.
(Though, for the CI providers I'm familiar with, you the user would be the one creating the SP and providing it to the pipeline. So making it over-privileged would be your mistake.)
But it's not a Confused Deputy problem when you have a service principal with delete access to all resource groups in all subscriptions and tell it to delete those resource groups. Confused Deputy involves a higher-privileged server forgetting to downgrade its privileges on behalf of a low-privilege client. The SP is the client in this case; it was created with high privileges in the first place.
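For comparison, a service principal can be scoped down at creation time; a sketch with placeholder subscription and resource group IDs (see the `az ad sp create-for-rbac` documentation for details):

```shell
# Create a service principal whose Contributor role applies only to one
# resource group, not the whole subscription (IDs are placeholders).
az ad sp create-for-rbac \
  --name ci-deployer \
  --role Contributor \
  --scopes /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/my-app-rg
```

A pipeline given this SP could still delete everything inside `my-app-rg`, but the destructive loop quoted above would fail everywhere else.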