CapOneMe – a vulnerable cloud environment to demonstrate the Capital One breach (github.com/avishayil)
146 points by avishayil on Dec 27, 2019 | 41 comments

To be fair to the folks at Capital One, the mitigation leverages a feature that wasn't released until well after they were compromised.
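(For context: the mitigation referred to here is presumably IMDSv2, announced in November 2019, months after the breach. The reason it helps: most SSRF bugs let an attacker control a GET URL, but not issue a PUT with custom headers, which IMDSv2 requires before it will vend credentials. A toy simulation of the difference, purely illustrative; the class, endpoints, and responses are made up:)

```python
import secrets

class FakeMetadataService:
    """Toy model of the EC2 instance metadata service: IMDSv1 answers
    any plain GET; IMDSv2 requires a session token first obtained via
    a PUT request carrying a TTL header."""

    def __init__(self, require_token: bool):
        self.require_token = require_token  # True ~ IMDSv2-only mode
        self.tokens = set()

    def put_token(self, ttl_header_present: bool):
        # IMDSv2: a token is only issued for a PUT with the TTL header
        if not ttl_header_present:
            return None
        token = secrets.token_hex(8)
        self.tokens.add(token)
        return token

    def get_credentials(self, token=None):
        # IMDSv1 mode: any plain GET succeeds (what the SSRF exploited)
        if not self.require_token:
            return {"AccessKeyId": "ASIA...", "SecretAccessKey": "..."}
        # IMDSv2 mode: a GET without a valid token is rejected
        if token in self.tokens:
            return {"AccessKeyId": "ASIA...", "SecretAccessKey": "..."}
        return None

# A URL-only SSRF can issue the GET but not the PUT-with-header handshake:
v1 = FakeMetadataService(require_token=False)
v2 = FakeMetadataService(require_token=True)
assert v1.get_credentials() is not None        # IMDSv1: creds leak
assert v2.get_credentials() is None            # IMDSv2: blocked
token = v2.put_token(ttl_header_present=True)  # a legit caller can do this
assert v2.get_credentials(token) is not None
```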


I would also suggest taking a look at the various articles around AWS IAM privilege escalation:


https://know.bishopfox.com/research/privilege-escalation-in-... (just shared on HN recently)

I have nothing against Capital One, but I must say that Netflix published this article in November 2018: https://medium.com/netflix-techblog/netflix-information-secu... I would expect folks to be prepared for this kind of vulnerability. Anyway, the repo is for educational purposes and its sarcastic name is, well, sarcastic. Cap One is a great technology company.

Interestingly, the author of that Netflix article had joined Capital One about a month before the incident.

CapOne also maintains Cloud Custodian, which a lot of people use to great effect to help prevent stuff like this.

Ultimately I think it just shows that securing cloud infrastructure is difficult to do consistently when you move quickly and broadly at scale. It also shows that the specific mechanism for authenticating EC2 instances had some design issues. These have been known about for a long time of course and it is kind of disappointing how long it took AWS to do something about it.

Cloud Custodian is maintained by the community; Capital One has not had any maintainers on staff for around a year, though they still use it and occasionally contribute PRs. The major contributors and maintainers over the last year have been the cloud providers. The community has been working with Capital One to move it into the CNCF in 2020.

Huge fan; I worked with you on one issue, and I'm glad to see you everywhere setting the record straight, Kapil!

I stand corrected by an authority on the subject. :)

Isn't Netflix's tech implementation exceptionally good? I've heard much praise about how much they've leveraged Python in their systems. I'd expect them to be much more on top of their security than financial companies, a fair few of which were still sending unencrypted emails as late as 2016 (at least out in India). I'm not sure if Capital One is better than the rest of the crowd in some way, though.

AWS IAM is a hot mess. Even setting up an S3 bucket with an access key for uploading is a 15-step process, minimum, with a lot of opportunity to fuck up.

Interesting. It takes me exactly 2 steps.

terraform plan && terraform apply

Terraform that shit bro.

Maybe through the GUI, it's a single step with the CLI.

Please elaborate. How is this achieved in one step with the CLI?

Well, you'll have a two-hundred-line IAM policy and a half-dozen API calls encoded into a script, but once you have that script it's just one step! All you do is run the script!
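(For a flavor of what one trimmed-down statement of that policy might look like: a least-privilege, upload-only grant scoped to a single prefix. The bucket name and path here are hypothetical placeholders, not from any real policy:)

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "UploadOnly",
      "Effect": "Allow",
      "Action": ["s3:PutObject"],
      "Resource": "arn:aws:s3:::somecoolname/uploads/*"
    }
  ]
}
```

The real two-hundred-line version grows from stacking statements like this for every service and resource the workload touches.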

And if something in the script goes wrong midway through, you've automated a big mess! I love it!

You really should. I've always thought AWS was just a bunch of hacked-together services, and it kinda shows. This is why you don't let the engineers talk to the customers... er, design for the customers.

With this.


(Disclaimer: I am the author)

Not one step exactly, but it is by far the easiest way to write least privilege IAM policies. Otherwise, it becomes impossible to ensure IAM policies are written securely and at scale. This way, all custom IAM policies are written with the exact same methodology.

Not OP but pretty sure they just mean this

`aws s3api create-bucket --bucket somecoolname --region us-west-2 --create-bucket-configuration LocationConstraint=us-west-2 --grant-write id=<canonical-user-id>`

And now a single user has access, such scalability!

Well technically this could also be a group.

I don’t know why all the hate for IAM permissions here.

They are complicated but also extremely powerful if set up correctly.

We manage all of our IAM policies and groups with terraform and it’s incredibly easy to understand imho

Hella hate. Personally, I found grappling with what they even were initially difficult, but then I finally dug in and watched the how-to propaganda... great job, whoever did that at Amazon. It's the one thing I don't hate about the company. [1]

It’s a ton easier for onboarding and giving contractors temporary access to resources.

*former worker at 3rd party merchant

[1] https://www.aws.training/LearningLibrary

If you think that effectively making a few JSON files, doing a crs on them, and deploying via CloudFormation is a difficult process, maybe you'd like to provision a server for storage and authorize access in a simpler way?

I think you just proved OP's point. For tons of simple use cases for S3 (nowhere near the complexity/flexibility of running a server), what you described is complex. You just hand-waved over what has become obvious to you after experience (the correct JSON schema, crs, and knowing to use CloudFormation and how to use it).

You could also partially prevent this using VPC endpoints for S3. You can set the bucket policy to only allow access if the connection comes via the endpoint (VPC), so the bucket is no longer internet-accessible. VPC endpoints also reduce traffic charges, since you pay for internal AWS bandwidth instead of internet bandwidth.
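(A hedged sketch of such a bucket policy; the bucket name and endpoint ID are placeholders. Caveat: a blanket deny like this also locks out console and admin access that doesn't traverse the endpoint, so in practice it needs exceptions:)

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyAccessOutsideVpce",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::somecoolname",
        "arn:aws:s3:::somecoolname/*"
      ],
      "Condition": {
        "StringNotEquals": {"aws:SourceVpce": "vpce-0123456789abcdef0"}
      }
    }
  ]
}
```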

I'm kind of surprised AWS doesn't just enable VPC endpoints for all services by default. Having to leave the network to talk to services seems like overkill for most things.

Completely agree. I also don’t see the need to egress to the public internet to access AWS services.

It’s also worth noting that whilst I’m a HUGE believer in VPC endpoints, it comes with a cost. Which then makes it a security vs cost trade off. As you’ll still pay bandwidth charges on the network interfaces (ENIs) for the VPCEs, along with the hourly price for said ENIs.

If Amazon changed this, I’d happily change all my networking to use VPCEs for all AWS services (where applicable). Unfortunately it seems they’re not going to do this, despite continuously adding new services to the list of VPCE-enabled services.

Some others in this space:

- flAWS (flaws.cloud / flaws2.cloud) by Scott Piper

- CloudGoat (https://rhinosecuritylabs.com/aws/cloudgoat-vulnerable-desig...) by Rhino Security Labs

Must say I was inspired by them

https://application.security for a better demo of the CapOne hack!

The LMGTFY link is, as usual, unnecessarily hostile.

It wasn't just hostile, it was completely bizarre. The whole point of the web is to link pages together with hyperlinks. Why do that and then put a big "F U" behind one of those links?

I LMGTFY'd it and came away knowing nothing about how the exploit actually worked and what the attack was. Pointless and ignorant of the author.

Jesus, maybe sentence me to death and that's it? Someone made a pull request and I approved it and replaced the LMGTFY link.

Agree, also noticed that one.

SSRF to metadata service to S3 access was the entry point. There's a lot of focus on the SSRF and metadata service components, but the S3/IAM component is possibly more intriguing. Did the role/account follow the principle of least privilege? If not, how did they miss it? This is the company that open sourced Cloud Custodian. They're capable of identifying risks and creating tooling to reduce lead time on finding those risks.

Possibly more intriguing: I'll bet Capital One deals with more compliance initiatives than 99% of the public sector. Another Heartland Payment Systems example? Did leadership have a false sense of safety after passing an audit?

That was a bush league SSRF that should have been caught by just about any static analysis tooling. That tells me something broke down early in the process.

That said, it's almost impossible to implement least privilege with EC2 instance roles unless you manage application identity in a separate control plane. Otherwise you have a single role that must satisfy the union of all access requirements for infrastructure automation, software deployment, logging, monitoring and discrete application runtime components. It's a mess and IMHO a terrible architecture.

The fact that the STS creds for that role are made available via an unauthenticated network service, and are then by default usable from any endpoint on the planet unless explicitly locked down, is insanity. Furthermore, locking said roles down requires discrete references to VPC endpoints and CIDR ranges that are unique to individual regions and have to be able to breathe with the environment. This makes your IAM lockdown policies gnarly and volatile, which is not a good recipe for availability.

It's no mean feat to really prevent this kind of thing in a fast-moving environment. Defense in depth is essential.
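(One application-layer piece of that defense in depth is refusing to proxy requests to link-local or internal addresses in the first place. A minimal sketch; the function name is mine, and a real guard must also resolve hostnames, chase redirects, and handle IPv6 and encoded address forms:)

```python
import ipaddress
from urllib.parse import urlparse

BLOCKED_NETS = [
    ipaddress.ip_network("169.254.0.0/16"),  # link-local, incl. 169.254.169.254
    ipaddress.ip_network("127.0.0.0/8"),     # loopback
    ipaddress.ip_network("10.0.0.0/8"),      # RFC 1918 internal ranges
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def is_blocked_target(url: str) -> bool:
    """Return True if the URL's host is an IP literal in a blocked range.
    Simplified: a production guard must also resolve hostnames (and guard
    against DNS rebinding), check every redirect hop, and normalize
    alternate IP encodings."""
    host = urlparse(url).hostname
    if host is None:
        return True  # unparseable -> reject
    try:
        addr = ipaddress.ip_address(host)
    except ValueError:
        return False  # a hostname, not an IP literal (would need DNS resolution)
    return any(addr in net for net in BLOCKED_NETS)

assert is_blocked_target("http://169.254.169.254/latest/meta-data/")
assert not is_blocked_target("https://example.com/page")
```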

Check out application.security for a great interactive demo of the Capital One breach.

Condition Context Keys [0] are pretty powerful. Even if an attacker obtains credentials vended by the EC2 instance metadata service, they wouldn't be able to use them from outside the expected VPC or IP range.

I don't know why this isn't 'enabled by default' for IAM roles associated with EC2 instances.

[0]: https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_p...
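(A hedged sketch of what such a policy could look like when attached to the instance role: deny everything when the request doesn't originate from the expected VPC. The VPC ID is a placeholder. Caveat: a condition like this can also deny legitimate calls that AWS services make on your behalf from outside your VPC, which is likely part of why it isn't a default:)

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyCredUseOutsideVpc",
      "Effect": "Deny",
      "Action": "*",
      "Resource": "*",
      "Condition": {
        "StringNotEquals": {"aws:SourceVpc": "vpc-0123456789abcdef0"}
      }
    }
  ]
}
```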

These should absolutely be the defaults; I can't think of a single reason why an EC2 instance's IAM role credentials should be used outside of that instance other than perhaps troubleshooting.

To me, IAM is one of the most difficult AWS services to design securely. Every service has 20-50 sub-permissions, so developers simply default to using wildcards instead of carefully tuning each policy. There are 4+ different systems that combine to create each policy: a trust relationship, the permission body, optional conditions, plus managed policies and group policies. IAM roles can now have permission "boundaries" that further complicate their access rights. Roles can be assumed by services (EC2, Lambda, etc.), and yet the credentials obtained as part of that role assumption can be used outside of those services (as demonstrated in the Capital One incident). And some services (e.g. S3, ECR) have additional permission policies at the resource level that interact with IAM policies in a two-way trust of sorts.

The entire thing is woefully complicated and it shows when AWS is in the news nearly every week because someone misconfigured yet another service.
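(The core evaluation logic is actually simple in isolation: an explicit deny anywhere wins, then any allow, otherwise implicit deny; the complexity comes from how many policy sources feed into it. A simplified sketch that ignores conditions, boundaries, and resource policies:)

```python
def evaluate(policies, action, resource):
    """Simplified IAM evaluation: explicit Deny beats Allow; default is
    implicit Deny. Ignores conditions, permission boundaries, resource
    policies, and session policies."""
    def matches(pattern, value):
        # IAM-style wildcard: '*' matches anything, 's3:*' matches a prefix
        if pattern == "*":
            return True
        if pattern.endswith("*"):
            return value.startswith(pattern[:-1])
        return pattern == value

    def as_list(x):
        return x if isinstance(x, list) else [x]

    decision = "ImplicitDeny"
    for policy in policies:
        for stmt in policy["Statement"]:
            if any(matches(a, action) for a in as_list(stmt["Action"])) and \
               any(matches(r, resource) for r in as_list(stmt["Resource"])):
                if stmt["Effect"] == "Deny":
                    return "Deny"  # explicit deny always wins
                decision = "Allow"
    return decision

allow_s3 = {"Statement": [{"Effect": "Allow", "Action": "s3:*", "Resource": "*"}]}
deny_bucket = {"Statement": [{"Effect": "Deny", "Action": "s3:GetObject",
                              "Resource": "arn:aws:s3:::secret/*"}]}
assert evaluate([allow_s3], "s3:GetObject", "arn:aws:s3:::public/x") == "Allow"
assert evaluate([allow_s3, deny_bucket], "s3:GetObject", "arn:aws:s3:::secret/x") == "Deny"
assert evaluate([], "s3:GetObject", "anything") == "ImplicitDeny"
```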

CapOneMe is a vulnerable cloud environment that is meant to mock the Capital One breach, for educational purposes.

All these finance companies keep trying to brand themselves as "tech companies" when they're really just technologically literate banks; even Y Combinator is like this.

Are there any technologically illiterate banks? Between banking and insurance where did technologists operate at scale pre-internet (WWW)? Even with the rise of the internet banks were quick to adopt online features. Even my small credit union in Idaho had online banking in the 1990s.

If Facebook, Google and Amazon are technology companies banks definitely are.

> If Facebook, Google and Amazon are technology companies banks definitely are.

Only if the banks actually write the software. A credit union that pays someone else for an online banking solution is as much of a tech company as the motel that pays for an online reservation system. At that point the term is useless.
