Hacker News
[flagged] AWS Communism: How we cut our Load Balancing cost by more than 96% (setops.co)
49 points by tobi_tobsen on Oct 19, 2021 | 18 comments



Nothing is free. Resource sharing comes with the noisy, problematic neighbour issue. An ALB costs 16.43 USD per month. You need to decide if it's worth the risk; the blast radius would be larger in case of incidents. I've seen ALBs time out because the ALB itself did not scale fast enough, and in a shared environment with 100 different applications this might be amplified.


> the ALB itself did not scale fast enough

On the other hand, sharing an ALB across 100 apps means that a single app's fluctuations are less significant. If your apps have completely independent traffic patterns and equal amounts of traffic, a 10x surge for one service is only about a 10% surge for the ALB. That can likely be absorbed by the current utilization buffer, so ALB scaling isn't even required (though you hope it will still scale up quickly to refill the buffer).

Of course, in real life nothing is this perfect: traffic patterns are correlated between services, and one service is usually the vast majority of your traffic. But it can still be a nice buffer.
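A quick sketch of the dilution math (illustrative numbers, assuming 100 apps with equal, independent baseline traffic):

```python
# Illustrative: how a single app's surge dilutes across a shared ALB.
# All numbers are made up for the sake of the example.
apps = 100
baseline_per_app = 1_000                      # requests/s per app (hypothetical)
total_baseline = apps * baseline_per_app      # 100,000 req/s at the ALB

surge_factor = 10                             # one app suddenly does 10x traffic
surged_total = total_baseline + (surge_factor - 1) * baseline_per_app

alb_surge = surged_total / total_baseline
print(f"ALB-level load: {alb_surge:.0%} of baseline")  # 109% of baseline, i.e. ~+9%
```

So a 10x event for one tenant shows up at the shared ALB as roughly a 9–10% bump, which is why the existing headroom often covers it.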


This is a very good point I hadn't thought of before, thank you! I've never "performance tested" an ALB.


Claim 96% savings, but nothing to show for it.

No numbers, no examples, no use cases.

Very basic article to promote their service/solution.


> Claim 96% savings, but nothing to show for it.

This.

I suppose the 96% number might just be a fancy way of saying they reuse the same ALB to power 25 applications, but cost estimates in AWS aren't straightforward; they're outright cryptic.

Also, "96% cost reduction" can sound like an impressive achievement in cost reduction. However, sharing a single ALB between dozens of independent applications is a bad idea all-around and very hard to justify in any way.


Is it just me, or is about 65% of the AWS customer experience dealing with cost and implementing clever workarounds to reduce spending? Wouldn't it make more sense to just shop around and use a different cloud provider? Vultr? RamNode? Alicloud?


I don't think it's that simple: AWS offers a ton of higher-level services — if you just need a Linux VM, yes, you have many options — but otherwise you're getting into tradeoffs like how many hours of human time it's worth spending to build the equivalent of a managed service. This can make it look like you're spending more, because you get an itemized bill from AWS every month, but you rarely get your staff time broken down like that, and you especially don't get an itemized report of what they could have been working on instead.

That's not to say the right answer is AWS: simply that you really need to balance your budget against your time and capacity. For example, ALBs are "just" load balancing, but if you try to build your own system with automatic failover and scaling, plus the various tools for logging, security, etc., it'll cost a lot more than $16/month. Now, if you're running 1,000 ALBs, maybe that's no longer true — but I would bet you have other tasks for your engineers with comparable or greater returns, so it's still a business decision.


Sounds like AWS is incredibly simple to use if you have your developers trying to minimize costs instead of actually getting it to work. Which is a lot better than other products out there that may be cheaper but end up not working as advertised after you've spent half a year 'integrating'.


"Incredibly simple to use" compared to what?

How hard is it to get the same functionality out of a set of rented servers? Or a fully hosted layer-7 IaaS? Or a fast-starting VPS park?

The alleged benefit of AWS is that you just focus on development and let Amazon handle all the ops noise. Well, minimizing costs is a lot of ops noise.


They could re-implement the solution in AWS to use serverless.

Only pay per request: API Gateway/Lambda/DynamoDB.

Then there are no fixed costs and no need to share these services with unrelated services.

Works great for dev/staging environments (will probably cost $1 per month each at low usage), and even prod could go a long way before becoming "expensive".

(We're doing that a lot at my company; the CDN caches 85% of requests coming to our serverless stack, with most responses having only 1–2 min cache times.)
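A back-of-the-envelope sketch of the pay-per-request model. The per-million prices below are placeholder assumptions for illustration, not current AWS list prices; check the pricing pages for real numbers:

```python
# Rough serverless cost sketch. All prices are ASSUMED placeholders
# (USD per million requests); real AWS pricing varies by region,
# tier, and usage (and Lambda also bills GB-seconds, omitted here).
PRICE_PER_MILLION = {
    "api_gateway": 1.00,   # hypothetical HTTP API request price
    "lambda":      0.20,   # hypothetical invocation price
    "dynamodb":    1.25,   # hypothetical on-demand write price
}

def monthly_cost(requests_per_month: int) -> float:
    millions = requests_per_month / 1_000_000
    return sum(price * millions for price in PRICE_PER_MILLION.values())

# A quiet staging environment, ~400k requests/month:
print(f"${monthly_cost(400_000):.2f}/month")  # under a dollar
```

With zero traffic the bill is zero, which is exactly the "no fixed costs" property the comment is pointing at.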


I don't think "what people write about" is the same as the majority of the experience.

You work on cost-cutting after everything else is working. It's usually a sign you like the service well enough: if you didn't, you wouldn't be cost-cutting, you'd be trying to get it out of your stack.

If you like everything else about it, shopping around instead of doing cost optimization carries much larger risks.


A less fraught name could help.


I'm not normally a fan of sharing ALBs between services, as a lot of the metrics [1] are only recorded for the load balancer as a whole, not for each individual target group (an application attached to the load balancer).

I can see the advantages of cost savings, but it's definitely a tradeoff.

[1]: https://docs.aws.amazon.com/elasticloadbalancing/latest/appl...


Hasn't been an issue for us, but I agree. It's painful to pay them $20+ just for the sake of fine-grained metrics, though.


There's a good post to be written about this, but I don't think this is it: it makes a big claim about costs but has almost no technical information, and the conclusion basically treats the post as a lead generator for their sales team.

The thing which would be more interesting would be talking about the cost savings (~$16/month) relative to things like risks: mostly noisy-neighbor and the administrative concerns of coordinating changes to settings or the security considerations of someone being able to compromise multiple sites rather than just their own. There's no wrong answer there but it's an engineering decision which will lead to different results depending on your environment, budget, and projects.

Speaking of engineering, it sounds like they're using some Go code to avoid hitting the limits on certificates and target groups. That's certainly effective, but I do wonder how many organizations have enough applications/certificates appropriate for shared infrastructure to actually hit those limits. If your sharing stays within that range, the potential cost savings are lower as well.


The additional cost of not sharing ALBs is a small price to pay to avoid CloudFormation, which this solution relies on.


An AWS Application Load Balancer (ALB) can host up to 100 applications with 25 different TLS certificates. However, if you want to share an ALB, you need to watch how many apps you assign to it. If you want to use it across Terraform projects, you need to expose its ID. At best, this is additional work; more often, it's too much work. Thus, it's more economical for most cloud engineers to create dedicated resources and let the client pay the bill.

Our approach is AWS-native and allows for maximally efficient sharing – without complicating things for the user. When you can share a single ALB between 25 and 100 apps, that's where the large cost saving comes in.
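The arithmetic behind the headline number is straightforward. A sketch, assuming a flat per-ALB price and the per-ALB limits quoted in this thread (100 apps, 25 certificates); `albs_needed` is a hypothetical helper, not part of any real API:

```python
import math

# Assumed inputs from the thread: per-ALB base price (USD/month)
# and per-ALB limits of 100 apps and 25 TLS certificates.
ALB_PRICE = 16.43
MAX_APPS, MAX_CERTS = 100, 25

def albs_needed(apps: int, certs: int) -> int:
    # Whichever limit binds first decides how many ALBs you must run.
    return max(math.ceil(apps / MAX_APPS), math.ceil(certs / MAX_CERTS), 1)

apps = certs = 25                       # 25 apps, one certificate each
dedicated = apps * ALB_PRICE            # one ALB per app
shared = albs_needed(apps, certs) * ALB_PRICE  # everything on one ALB
savings = 1 - shared / dedicated
print(f"{savings:.0%}")                 # 96%
```

Note that with one certificate per app, the 25-certificate limit binds before the 100-app limit does, so 25 apps per ALB is also the point of maximum savings under these assumptions.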


One of the main reasons I started sharing an ALB is that I had Terraform build a whole web stack's infrastructure for every pull request, test the app against that stack, then destroy it all on PR close... and that was hitting the per-account limit on the number of ALBs. Sharing an ALB allowed us to scale that CI process without hitting the limit [as quickly].



