BreakingFormation: AWS CloudFormation Vulnerability (orca.security)
90 points by gregmac 14 days ago | 28 comments



> Our research team believes, given the data found on the host (including credentials and data involving internal endpoints), that an attacker could abuse this vulnerability to bypass tenant boundaries, giving them privileged access to any resource in AWS.

This is bullshit, and their own report indicates the opposite. Hugely irresponsible of Orca to include this kind of unfounded speculation in their report. But this is also what AWS gets for having an "if there's no customer impact, there's no disclosure" security policy; it leaves the door open for this kind of shit.


Seems like the AWS Glue exploit [1] discovered by the same team is the more critical of the two. The CTO of Orca confirmed that they were able to access an admin role in an AWS service account, and from there assume roles in customer accounts with service roles that trust the Glue service [2].

1: https://orca.security/resources/blog/aws-glue-vulnerability/

2: https://twitter.com/yoavalon/status/1481691075672694793
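
For anyone wondering why a compromise on the Glue service side translates into cross-account access: a typical Glue service role's trust policy lets the Glue service principal assume it, so anything that can act as that principal can assume the role in the customer's account. Rough sketch of what customers set up (the role name here is made up):

    # Hypothetical example of the trust policy customers attach to a Glue service role.
    # Any caller able to act as the Glue service principal can sts:AssumeRole into it.
    import json
    import boto3

    glue_trust_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"Service": "glue.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }],
    }

    boto3.client("iam").create_role(
        RoleName="MyGlueServiceRole",  # made-up name
        AssumeRolePolicyDocument=json.dumps(glue_trust_policy),
    )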


What’s the actual exploit? Both of the articles are completely barren.

https://twitter.com/colmmacc/status/1481670721449385984

Senior AWS figures appear to be denying parts of this. It's not a blanket denial, but I think some of the statements may be an overreach.


FWIW: I trust Colm MacCárthaigh probably more than anybody else working in this field.


Wholeheartedly endorse. I just wish AWS would every once in a while get ahead of the messaging on stuff like this. They knew it was coming (there's a principal engineer quote in the blog post), but right now there isn't any official statement past "tweets."


It's just speculation as to how this went down, but having been in this position before, I can say it's usually not an easy thing to handle. If there's a dispute about impact, a responsible researcher will usually say "I still plan on publishing this information on X date". This gives the company time to both convince the researcher the impact isn't as severe as they think and prepare a public response for that date.

An irresponsible researcher will either say "I'm gonna publish because I think it's high impact" without giving a date (and then often publish with no notice on a Friday afternoon), or won't provide notice at all.

It's often impossible to know which type of actor you're dealing with until it's too late. I've even had people claim they won't publish until X date and then publish early. You can't just provide public notice or you'll both piss off the researcher and run the risk of accidentally giving the impression a low impact bug is actually high impact.


Wait, so they managed to make AWS trigger a request on their own bucket using AWS internal credentials, and they extrapolate that this means they now have access to $everything?


Even worse, they got AccessDenied when making that request and then extrapolated that they have access to $everything.


That’s ridiculous; it’s entirely possible that these are one-time credentials for a single purpose and/or severely limited in scope.

Not saying it’s certain that they’re not, but if you make a claim like that, you had better have some evidence to back it up.



That definitely sounds plausible. AWS services undergo mandatory security reviews and threat modelling that usually cover these scenarios exhaustively. A lot of work and complexity goes into scoping down credentials for defense-in-depth protection, to protect against exactly these kinds of issues.
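
To make "scoping down" concrete, the shape of thing I mean looks roughly like the sketch below: even if such a credential leaked, it could only read one specific object. All names here are made up, not anything from the report.

    # Sketch of a narrowly scoped permission policy attached to a service role.
    import json
    import boto3

    narrow_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::rendering-input-bucket/template-to-render.json",
        }],
    }

    boto3.client("iam").put_role_policy(
        RoleName="template-rendering-role",   # made-up name
        PolicyName="read-one-template-only",
        PolicyDocument=json.dumps(narrow_policy),
    )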


They keep calling it a zero-day although it is not. I don't know which is worse: that they don't know what the term means, or that they're trying to make it look more important than it really is.


Why do you say it's not a zero-day? It was unknown to AWS when they reported it.

That's not what zero-day means. Zero-day means that every affected system is vulnerable the day the vulnerability is publicly disclosed. That was not the case here, as the vulnerability was addressed nearly four months before today's announcement.

Fair enough. I went off the Wikipedia definition ("vulnerability unknown to those who should be interested in its mitigation"), which doesn't say it has to be unknown to the general public. We had to treat it as a zero-day when it was reported, because we had to assume there might be other parties who knew about it. (I work for CloudFormation)

I wouldn't trust Orca's word. They're pretty shady and have implemented some questionable tactics in the past to get customers/vendors.


Can you provide some examples? Not doubting you, but some references would be useful.


The entire writing style just seemed off-putting the moment I read it. A blend of marketing and security report?

They're turning this into a PR event for themselves, and unfortunately others [1] are spreading it and hyping it up into something it really isn't. Plus, as another person put it: security researchers don't blur out sensitive details, they black them out.

[1] https://www.linkedin.com/posts/brainboard-co_aws-chaosdb-clo...


Why did they blur the information when there are pretty advanced de-blurring algorithms out there? If you want to redact something, you need to remove the underlying pixels entirely rather than just applying a blur. Perhaps it doesn't matter since the issue is fully resolved, and I would guess AWS rotated everyone's credentials anyway?
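
To illustrate the difference: a blur's output is still derived from the original pixels, so information can sometimes be recovered, while painting a solid box over the region destroys it. A minimal Pillow sketch (the file name and box coordinates are made up):

    # Shows both approaches on the same made-up region for comparison.
    from PIL import Image, ImageDraw, ImageFilter

    img = Image.open("report.png")
    box = (100, 200, 400, 230)  # region containing the secret

    # Blurring: output pixels are still a function of the secret pixels.
    region = img.crop(box)
    img.paste(region.filter(ImageFilter.GaussianBlur(radius=8)), box)

    # Redacting: overwrite the region entirely; nothing left to recover.
    ImageDraw.Draw(img).rectangle(box, fill="black")

    img.save("report_redacted.png")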


OK, you don’t need to name single-vendor SaaS server-side vulnerabilities.


Looks like that ship has sailed.


> The AWS security team coded a fix in less than 25 hours, and it reached all AWS regions within 6 days.

That is really fast. It reminds me of the Google DHCP takeover vuln that Google sat on for 9 months (https://github.com/irsl/gcp-dhcp-takeover-code-exec).


https://twitter.com/0xdabbad00/status/1481693260087275532?s=...

Seems like, of the two vulnerabilities discovered, the one in AWS Glue is actually the more severe: it allowed cross-tenant data access to some services (Glue & S3), affecting all present and past Glue users.

Also, AWS released bulletins confirming this.

- AWS Glue - https://aws.amazon.com/security/security-bulletins/AWS-2022-...

- CloudFormation - https://aws.amazon.com/security/security-bulletins/AWS-2022-...


I'm not understanding how "BreakingFormation" provides access to other accounts' data.

They tried it on their own account and were denied. AWS uses these crypto tokens, so it's request -> IAM signed -> CloudFormation -> S3.

Is there maybe a gap where, if someone is using S3 a lot, the forward session token thing can be worked around (i.e., the CloudFormation service WOULD still have a valid token for another account's S3 bucket)?

The easy thing here would be to set up two accounts and actually test it. Curious why they didn't do that.
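
If you did have a recovered set of credentials in hand, the verification step is pretty simple: point them at a bucket in a second test account you control and see whether the read goes through. All names and values in this sketch are placeholders:

    # An AccessDenied here would support the "scoped down" interpretation.
    import boto3
    from botocore.exceptions import ClientError

    s3 = boto3.client(
        "s3",
        aws_access_key_id="ASIA...",      # recovered key id (placeholder)
        aws_secret_access_key="...",      # recovered secret (placeholder)
        aws_session_token="...",          # recovered session token (placeholder)
    )

    try:
        s3.get_object(Bucket="second-test-account-bucket", Key="canary.txt")
        print("cross-account read succeeded")
    except ClientError as e:
        print("blocked:", e.response["Error"]["Code"])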


They don't provide access to other account data. These credentials are scoped down to a specific purpose. Colm mentions this here: https://twitter.com/colmmacc/status/1481682859324760070

I do wonder if there could be a gap where the credentials are scoped down but the service still has broader access because other users have made recent requests, i.e., if the check is just "did a user request something from S3?" (most do).

Or is the scope narrowed down to a request for X object by Y customer, which is then signed (token attached by IAM) and valid for a little while? That would reduce the blast radius a lot.
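
If it's the latter, the mechanism I'd guess at (purely speculating about the internals) looks like an inline session policy plus a short expiry, so the resulting credentials are only good for that one object for a few minutes. The role ARN, bucket, and key below are all hypothetical:

    # Guess at what "scoped to X object by Y customer, valid for a little bit"
    # could look like: temporary credentials constrained by a session policy.
    import json
    import boto3

    session_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::customer-bucket/stack-template.yaml",
        }],
    }

    creds = boto3.client("sts").assume_role(
        RoleArn="arn:aws:iam::123456789012:role/cfn-template-reader",  # hypothetical
        RoleSessionName="render-request",
        Policy=json.dumps(session_policy),  # intersected with the role's own policy
        DurationSeconds=900,                # minimum allowed; expires quickly
    )["Credentials"]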

Kind of bummed they hyped this one, because the Glue one is more interesting to me and, I thought, a more credible route.



