Show HN: Endgame – An AWS Pentesting tool to backdoor or expose AWS resources (github.com/salesforce)
368 points by kmcquade on Feb 16, 2021 | hide | past | favorite | 93 comments



I work with AWS a lot every day and lead a team responsible for building workloads on AWS for some customers with very high security requirements. This tool terrifies me.

The sheer amount of potential for misconfiguration of resources that this tool can exploit with no effort whatsoever is absolutely insane. I feel like every AWS environment I've ever seen is suddenly at risk of some angry employee compromising everything very very quickly.

I'm betting over at AWS they're almost as terrified by this as I am.


I can almost guarantee you that attackers focusing on AWS environments have all sorts of similar (if not worse) tools. The fact that this is public hopefully terrifies AWS into improving their security usability and making these kinds of exposures more difficult. What's important to remember is that there isn't actually any _vulnerability_ here (the tool still requires valid authentication to work); it just makes it 100x easier to automate.


The scary part is: who has built tools like this before, without anyone knowing they existed?

At least now we all have access to the same tool. Maybe this one won't have everything the "secret" tools have. But it's a good start!


Initially stuff like this is scary, but it leads to good things in the end: tighter security, opening customers' eyes, etc. Probably the better black hats already knew about these techniques and your organization wasn't worth anything to them, so they skipped it. At least tools like these give us security neophytes a bit of a fighting chance out there on the Wild Wild Web.


Currently working on a cloud security project, the sheer amount of surface area that AWS exposes, combined with the number of asterisks I see in various types of policies, is terrifying. Enumeration is incredibly dangerous when there are so many poorly scoped service roles blindly trusting an entire AWS service, not realizing this is trust across accounts.
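To make the cross-account trust pitfall concrete (the account ID and service here are illustrative, not from any real policy): a role trust policy that names a bare service principal can be reached through that service from *any* account; pinning it with an aws:SourceAccount condition restricts it to your own:

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {"Service": "cloudtrail.amazonaws.com"},
    "Action": "sts:AssumeRole",
    "Condition": {
      "StringEquals": {"aws:SourceAccount": "111122223333"}
    }
  }]
}
```

Without the Condition block, any account that can point that service at your role can have it assumed on their behalf.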


There are a lot of other tools in this space and people that specialize in AWS pentesting. Another popular tool is Pacu: https://rhinosecuritylabs.com/aws/pacu-open-source-aws-explo...


404 on this today so I guess they were terrified.


My first thought was "why is salesforce publishing essentially a hacking tool? why can't they bring it up privately, surely a large enough company will have some weight to their request?" but then I remembered AWS...

>At the time of this writing, AWS Access Analyzer does NOT support auditing 11 out of the 18 services that Endgame attacks. Given that Access Analyzer is intended to detect this exact kind of violation, we kindly suggest to the AWS Team that they support all resources that can be attacked using Endgame

...and it's not even a hacking tool!


Author here :) Endgame exploits/abuses features. If it was a bug, I'd work with AWS to solve the problem, but with abusing features - that would result in years of unsatisfied feature requests. This should push the issue along.

> ...and it's not even a hacking tool!

It can be used to backdoor resources to rogue accounts, so I'd say it's a hacking tool and can/should be used on penetration tests. I'd certainly use it on a pentest :)


404. Did they pull the repo or make it private?

https://github.com/salesforce/endgame


Here is one of many forks: https://github.com/agnivesh/endgame/



I'm impressed you were able to get your employer (Salesforce) to actually let you publish this under their organization. Kudos to that.


Salesforce also runs Heroku, which is one of the biggest AWS wrappers around. I'm really glad they're active in security auditing here; it's a real value-add for customers of Heroku/Salesforce services to see evidence of their work analyzing security.


Yes, surprised also, given past stories around Defcon.

I think it's great to have audit tools like this. It makes people realize how vulnerable their accounts are.

Does a similar tool exist for Salesforce and Heroku?


Not sure what the shock is with seeing security tools like this released. The vast majority of security tools are open source; how is this different from what we have been seeing for the past 30 years?

Not to mention companies such as Google, Netflix and Mozilla all release security tools just like this.


I guess they didn't.


That’s what I was expecting to happen, unfortunately.


Well, you know the saying about eggs and omelettes. I wish you luck with getting AWS to listen to you!


Thanks :)


Can you share the code somewhere else? It's been taken down from GitHub.



Bugs get patched. Features are protected, and sometimes simultaneously abused. Thank you!


So did you just put this out there or did you give AWS Security peeps a week or two notice?


This isn't exploiting a vulnerability. This requires authentication and uses AWS features. Why would they need to alert AWS?


you're an evil genius


404? Someone got an urgent call from AWS and was politely requested to remove it, since the two companies are supposed to be partners?


It looks that way.

Looks like some of it was archived though at https://web.archive.org/web/20210216153239/https://github.co....

Also still live at PyPI: https://pypi.org/project/endgame/


pypi tgz at archive.org: https://web.archive.org/web/20210216214208/https://files.pyt...

Also https://github.com/hirajanwin/endgame is still up as of 00:58 Wednesday, February 17, 2021 Coordinated Universal Time (UTC). Zip file download of that Git repo here: https://web.archive.org/web/20210217005905/https://codeload....


Of course this was going to happen. Who knows, maybe this way the author achieved what he wanted, and those policy exploits will be revisited at last.




Gone


Give


Now a 404.




that's gone too :)



It really seems that AWS cares more about the cadence of shiny new managed solutions than they do about maintaining and upgrading their existing solutions. I wouldn't characterize it as willful negligence, quite yet, but some processes are definitely broken.

Case in point, in the last week alone, I've discovered a Fargate EKS managed platform upgrade getting botched behind the scenes (unexpected containerd versions, etc), as well as a lack of support out of RDS Proxy for things like the latest stable default Postgres offering (12.5) in RDS. They released 12.0 to the preview channel in November of 2019 ... how long does it take exactly to get support for something like that?

All that is to say, I would not be expecting any improvements to AWS Access Analyzer anytime soon, despite this tool's debut.


Note that, as far as I could tell, this is a tool to check which unexpected AWS modifications can be made with API keys that you exposed in the first place. It doesn't "hack" an account per se.

So for example, if you've created some IAM API keys and embedded them in an app, and you (incorrectly) believe the permissions only let the app fetch some static media files from an S3 bucket, the tool can discover incorrect configurations that would allow someone who extracted the key to change permissions of the bucket.
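A minimal sketch of that check in plain Python (this is not Endgame's actual code; the policy document and the expected-action set are made up for the example): diff the actions a policy allows against what you believe the embedded key needs.

```python
# What you *think* the embedded key can do.
EXPECTED_ACTIONS = {"s3:GetObject"}

# Hypothetical policy attached to the leaked key.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow",
         "Action": ["s3:GetObject", "s3:PutBucketPolicy"],
         "Resource": "arn:aws:s3:::static-media/*"},
    ],
}

def excessive_actions(policy, expected):
    """Return the allowed actions that fall outside the expected set."""
    found = set()
    for stmt in policy["Statement"]:
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        if isinstance(actions, str):
            actions = [actions]
        found.update(a for a in actions if a not in expected)
    return found

print(sorted(excessive_actions(policy, EXPECTED_ACTIONS)))
# → ['s3:PutBucketPolicy']
```

Here s3:PutBucketPolicy is the dangerous surprise: it lets whoever holds the key rewrite the bucket's resource policy and expose or backdoor the bucket, exactly the move this tool automates.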


> the tool can discover incorrect configurations that would allow someone who extracted the key to change permissions of the bucket.

Nit: The tool can discover and abuse excessive permissions.


Yes, you'd have to leverage compromised credentials. That could be obtained via SSRF, RCE on a privileged box, leakage of user access keys, or other means. In the context of a penetration test, it's more of a post-exploitation tool.


> First, authenticate to AWS CLI using credentials to the victim's account.

... right. This is just a glorified "what can this IAM user do" tool. There is literally no actual pentesting done. Not much different than having the key to your neighbor's front door and seeing how many things inside their house are unlocked for you.


It would be nice if this was the other way round :'''''(

  # This will ruin your day
  endgame smash --service all --evil-principal "*"

  # This will show you how your day could have been ruined
  endgame smash --service all --evil-principal "*" --dry-run

Looks like it can be reversed with --undo, but it's brown-trousers time if you groggily run it at 08:30, coffee in hand.


Dry run should be the default, and to actually do damage, you should have to explicitly run with a flag like `--commit` or `--deploy-evil-payload "yes I am certain of this"`.
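The suggested CLI design can be sketched in a few lines (argparse stand-in for illustration; Endgame itself uses a different CLI framework, and the `--commit` flag name is hypothetical): dry run is the default, and the destructive path requires an explicit opt-in.

```python
import argparse

def build_parser():
    parser = argparse.ArgumentParser(prog="smash")
    parser.add_argument("--service", default="all")
    # Destructive behavior must be asked for explicitly; absent the flag,
    # the tool only reports what it *would* have done.
    parser.add_argument("--commit", action="store_true",
                        help="actually modify resource policies (default: dry run)")
    return parser

args = build_parser().parse_args(["--service", "all"])
print("dry run" if not args.commit else "COMMIT")
# → dry run
```

The point is that forgetting a flag should leave you in the safe mode, not the day-ruining one.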


Dry run as default is a good idea. I'll open a GitHub issue for that.

FWIW, if you run `endgame smash` with `--service all`, then it spits out a huge "WARNING" in ASCII art with an explanation and a confirmation prompt.

But I agree, we should have dry-run on by default.


Yes please, thanks!!


FWIW, the open-source (and CNCF incubator project) https://cloudcustodian.io can detect and remediate these modifications to embedded IAM policies (across many resource types) in real time when they share beyond an organization's/account's boundaries. It's like Access Analyzer, except it's flexible enough to understand internal org distinctions (dev/prod separation) and allowed access to third parties.
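A rough sketch of what such a Cloud Custodian policy can look like for S3 (the role ARN and account ID are placeholders, and the exact schema should be checked against the c7n docs): react to PutBucketPolicy calls via CloudTrail, flag statements granting access outside a whitelist of trusted accounts, and strip them:

```yaml
policies:
  - name: s3-strip-cross-account-statements
    resource: s3
    mode:
      type: cloudtrail                     # fire when the API call happens
      role: arn:aws:iam::111122223333:role/CustodianRole   # placeholder
      events:
        - source: s3.amazonaws.com
          event: PutBucketPolicy
          ids: "requestParameters.bucketName"
    filters:
      - type: cross-account
        whitelist: ["111122223333"]        # accounts you actually trust
    actions:
      - type: remove-statements
        statement_ids: matched
```

The cross-account filter plus remove-statements pattern is what makes this a remediation rather than just an alert.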


Anybody have a mirror? It seems to have been taken down from GitHub.

Also, I guess it might have been not so nice for an almost-direct competitor of AWS (Salesforce) to publish something like that. Salesforce owns Heroku.


I don't know that I would call them a direct competitor. Heroku uses AWS for a lot of its infrastructure; they are a pretty big AWS customer.



Gone


The repo is gone but the code is still on PyPI: https://pypi.org/project/endgame/


Also gone


Impressive tool, but the supporting documentation is what I appreciate most.

I think the prevention guide could be improved by providing an example service control policy that blocks known dangerous IAM actions like ecr:SetRepositoryPolicy for all but a specific security principal.
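Something like the following SCP sketch would do it (the action list is a sample, not exhaustive, and the SecurityAdmin role ARN is a hypothetical exemption): deny resource-policy mutation to everyone except a designated principal.

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "DenyResourcePolicyTampering",
    "Effect": "Deny",
    "Action": [
      "ecr:SetRepositoryPolicy",
      "ecr:DeleteRepositoryPolicy",
      "s3:PutBucketPolicy",
      "secretsmanager:PutResourcePolicy"
    ],
    "Resource": "*",
    "Condition": {
      "StringNotLike": {"aws:PrincipalArn": "arn:aws:iam::*:role/SecurityAdmin"}
    }
  }]
}
```

Because SCPs apply organization-wide, even a principal with AdministratorAccess in a member account would be blocked from these calls.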


Great feedback! I will update the docs accordingly.


Can someone explain why you'd ever want to run this in non-dry-run mode?

I understand that if you have these problems, you've already effectively granted those permissions anyway, but actually executing them before someone finds them lowers the bar quite a bit for other baddies to attack.


To test autoremediation and alerting. At least in the environment I'm evolving these days it makes sense.


Exposing resources to a specific "evil principal" via Endgame would be reasonable in some attack simulations/red team engagements


For me, my environments are in different AWS accounts and can be torn down and stood back up rather quickly, so it wouldn't be a big deal to let this destroy a dev environment in the name of science so that I could implement improvements.


Red team exercises come to mind.


The main repository seems to have been taken down, but it is still available at https://github.com/kmcquade/endgame and on PyPI.



Thanks!


404 also.


So, this is essentially a script to mess up your AWS resource permissions by using a privileged account to an extent that a) might surprise folks who haven't thought too deeply on the matter, and b) will be challenging to uncover using AWS's own audit facilities, is that fair to say?


@kmcquade you're awesome! We are users of https://github.com/salesforce/policy_sentry and definitely https://github.com/salesforce/cloudsplaining .

If I could give you guys money, I would. You should totally build a startup around it.


Aw, thank you. I really appreciate that.


Does anyone have any ideas as to why this is being taken down? Hacking tools are released all the time. Why did this one make such a big ripple in the pond?


Make sure you check AWS' pentesting policy [0].

[0]: https://aws.amazon.com/security/penetration-testing/


Given that they wrote a tool dedicated to pentesting AWS, I'm sure the author is very familiar with that.

Also the pentesting policy explicitly states that customers can pentest without approval.


It is great that it is public, because it will create some sense of urgency. Similar to exposing a bug on Aurora like the following, every such finding directly or indirectly helps users make good decisions and understand how to be careful.

https://news.ycombinator.com/item?id=26146440


https://github.com/brandongalbraith/endgame still has it as of this morning. The several forks I bookmarked last night, waiting to ask about it at work, have disappeared, so I don't know how long this one will be there.


lol you're about to get a giant offer from Amazon. Tell them you want 10x whatever they first offer and they'll say yes.


Not to sound like a jerk, but why do you think this would get some "OMG" response from AWS? This is not "hacking"; this is a tool that detects whether you misconfigured API access to be overly permissive. The tool's job is to find those misconfigurations and then "abuse" them. It's not like AWS is unaware of user misconfigurations; the issue is that AWS does not provide tools to detect these very well. Tools like CloudAware also exist because of things AWS doesn't provide. And it's not like AWS isn't aware that such tools can be built, considering they are just crawling and attempting a series of already-existing AWS calls.

The tool is great as a free tool and very helpful, but it's also not like AWS doesn't already have people smart enough to make something just as good, if not better. It's just obviously not AWS's priority: they can leave the blame on the user for not properly managing IAM permissions.


And yet, it now 404s on both the salesforce project and the owners own personal GitHub.


And? That's not because AWS was like "OMG so smart". AWS is already well aware of this issue but lays the blame on "shared responsibility", and they are likely annoyed that Salesforce, a partner of AWS, released this without communication.

Honestly, my guess is there was a lapse somewhere in Salesforce, where either Legal or PR didn't check this, because it likely goes against the Salesforce/AWS NDA for their partnership. I've worked as an AWS partner before; there are requirements that go into place before you can release stuff like this to the public. Plus, having worked with Salesforce as well, I assume they have a PR policy not to use the word "hacking" in tool names or descriptions, especially in regard to partners. My company has similar rules for OSS stuff.

This was more of a bad PR / Legal issue. AWS is well aware that people misconfigure permissions...

And again... better tools and more popular tools already existed... This is not new

https://rhinosecuritylabs.com/aws/pacu-open-source-aws-explo...



Cool. Another tool in the space: https://github.com/cloudquery/cloudquery , an open-source framework to ask questions about your cloud infrastructure with SQL.


Except it's not a "pentesting tool to backdoor" anything. It's simply modifying access, given that you already have credentials to do so. You can do the same with the AWS CLI (oh, the horror /s).


404 now


It's gone now. :( I should have cloned it, anyone have a clone?


Not a clone but you can download the code from here:

https://pypi.org/project/endgame/#files

I was thinking about putting up a new repo with the code in it, but I'd rather not risk the wrath of AWS since my job kind of depends on the service. Which probably says something about the state of FAANG companies that I'm even concerned about it.


We use both AWS and Salesforce, and I'm surprised this tool was developed by SF after all the fanfare about the partnership between the two.


Not much of a security threat, I guess. It uses your permissions to modify the current permissions of different resources. If you do have permissions to modify things, then this will work; if you have no permissions, it will fail.

So can it be used with bad intentions? Yes. But if I were a hacker, would I want to open all the available doors, or choose just one or two and keep the rest as is?


Do analogous tools exist for GCP and Azure?


Not sure. I did uncover a ridiculously destructive approach to abusing Azure Service Principals in CI/CD pipelines that deploy infrastructure in Azure (Confused Deputy problem): https://kmcquade.com/2020/11/nuking-all-azure-resource-group...

  for sub in `az account list | jq -r '.[].id'`; do
    for rg in `az group list --subscription $sub | jq -r '.[].name'`; do
      az group delete --name ${rg} --subscription $sub --no-wait --yes
    done
  done


>I did uncover a ridiculously destructive approach to abusing Azure Service Principals in CI/CD pipelines that deploy infrastructure in Azure (Confused Deputy problem):

> for sub in `az account list | jq -r '.[].id'`; do
>   for rg in `az group list --subscription $sub | jq -r '.[].name'`; do
>     az group delete --name ${rg} --subscription $sub --no-wait --yes
>   done
> done

The CI provider giving you an over-privileged SP to play with needs to fix that, sure. SPs start with zero role assignments, so it's particularly egregious that they gave it unnecessary permissions.

(Though, for the CI providers I'm familiar with, you the user would be the one creating the SP and providing it to the pipeline. So making it over-privileged would be your mistake.)

But it's not a Confused Deputy problem when you have a service principal with delete access to all resource groups in all subscriptions and tell it to delete those resource groups. Confused Deputy involves a higher-privileged server forgetting to downgrade its privileges on behalf of a low-privilege client. The SP is the client in this case; it was created with high privileges in the first place.
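The fix is scoping at creation time. A sketch (the subscription ID and resource group name are made up; the az command is shown for reference, not run here): build a scope string for a single resource group and grant the SP a role only at that scope, rather than at the subscription level.

```shell
# Hypothetical identifiers for illustration.
SUB="00000000-0000-0000-0000-000000000000"
RG="ci-deploy-rg"
SCOPE="/subscriptions/$SUB/resourceGroups/$RG"
echo "$SCOPE"

# Create the SP with a role assignment limited to that one resource group
# (requires the Azure CLI; not executed in this sketch):
#   az ad sp create-for-rbac --role Contributor --scopes "$SCOPE"
```

With the role assignment bound to one resource group, the nested delete loop above would fail with authorization errors everywhere else.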


Something tells me this is not AWS specific - how do GCP/Azure/Heroku stack up in comparison?


I for one would specifically be interested in someone's review of Azure Defender, since it claims to be able to handle AWS and GCP


Seems to have been taken down. A shame; I'd have enjoyed reading that code.




