Of course they can be capped, you just turn off the services. If you're asking them to automate that for you, then the counterpoint would be people accidentally setting a budget that wipes out their resources and complaining about that.
Easier for both sides to just ask AWS for a refund if there's a reasonable case.
Mistakes will always be an issue. How you recover is more important.
Would you rather make a mistake leading to a big bill with the possibility of a refund or set your max budget and have your resources permanently deleted?
There would be no need to delete existing resources. Just prevent me from creating new ones until action is taken. For small projects in particular, I'd much rather have service taken offline and an email notification than even a $1000 bill. And $1000 is small in the scale of what you could end up with on AWS.
1) If you're AT the budget amount then everything must be deleted to avoid going over.
2) If it's a soft budget then it's no different than the alarms you already have.
3) If you want to stop it before it hits the budget, then you're asking for a forecasted model with a non-deterministic point in time where things will be shutdown.
This just leads to neverending complexity and AWS doesn't want this liability. That's why they provide billing alarms and APIs so you can control what you spend.
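For reference, the alarm half of that is a few lines of boto3. This is a minimal sketch: the threshold and SNS topic ARN are placeholders, billing metrics only exist in us-east-1, and "Receive Billing Alerts" has to be enabled on the account first.

```python
# Minimal sketch of a billing alarm, assuming an existing SNS topic.
# The threshold and ARN are placeholders; billing metrics live only in us-east-1.
import boto3

BUDGET_USD = 100  # alert threshold, not a hard cap
TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:billing-alerts"  # hypothetical

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")
cloudwatch.put_metric_alarm(
    AlarmName="estimated-charges-over-budget",
    Namespace="AWS/Billing",
    MetricName="EstimatedCharges",
    Dimensions=[{"Name": "Currency", "Value": "USD"}],
    Statistic="Maximum",
    Period=21600,  # billing data only updates every few hours
    EvaluationPeriods=1,
    Threshold=BUDGET_USD,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[TOPIC_ARN],  # notify only; nothing gets shut down by itself
)
```

On its own this only notifies; wiring the topic to anything that actually stops resources is your responsibility, which is exactly the point.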
> 2) If it's a soft budget then it's no different than the alarms you already have.
Not if I'm busy, or away from work, or asleep. There is a massive difference between getting an alarm (which is probably delayed, because AWS is so bad at reporting spend promptly) and having low-priority servers cut immediately.
Even without a priority system, shutting down all active servers would be a huge improvement over just a warning in many situations.
That's not a soft budget then, so which option is it? 1 or 3?
You want it to selectively turn off only EC2? Does it matter which instance and in which order? What if you're not running EC2 and it's other services? Is there a global priority list of all AWS services? Is it ranked by what's costing you the most? Do you want to maintain your own priority of services?
And what if the budget was a mistake and now you lost customers because your service went down? Do you still blame AWS for that? Or would you rather have the extra bill?
It's really not that complicated. "Stop paying for everything except for persistent storage" is sufficient for the majority of use-cases where a soft cap would be appropriate. When you need to do anything fancier, you can just continue to use alarms as you do now. A tool does not have to solve every problem that might ever exist to be useful.
It's really not that complicated... to watch your own spend. And yet everyone here keeps running into issues, and that's just with your own projects. I'm sure you can at least appreciate the complexities involved at the scale of AWS, where even the minority use-cases matter.
"Everything except for persistent storage" is nowhere near useful enough to work and can cause catastrophic losses. Wipe local disks? What about bandwidth? Shutdown Cloudfront and Lambda? What about queues and SNS topics? What about costs that are inseparable from storage like Kinesis, Redshift, and RDS? Delete all those too? And as I said before, what happens if you set a budget and AWS takes your service down which affects your customers?
It's easy to say it's simple in an HN comment. It's entirely different when you need to implement it at massive scale and that's before even talking about legal and accounting issues. There's a reason why AWS doesn't offer it.
Just shut down everything, but don't delete existing data written to disks. That can cover a wide array of budget problems. If you set a budget like that you really do not want to go over it and any potential loss from customers is not as huge as going over that budget. At least have that option.
For example, I sometimes fiddle with Google APIs. I don't even have customers, so I don't really care if things stop working, but I have accidentally spent 100 euros or more. I had alerts, but they arrived way too late.
I make a loop mistake in my code and suddenly I owe 100 euros...
> "Just shut down everything, but don't delete existing data written to disks."
I literally just explained why this doesn't work with AWS services. You will have data loss.
And it creates a whole new class of mistakes. If people mistakenly overspend then they'll mistakenly delete their resources too. All these complaints that AWS should cover their billing will then be multiplied by complaints that AWS should recover their infrastructure. No cloud vendor wants that liability.
It's not an unreasonable use case to just nuke everything if your spend exceeds some level. (I'm just playing around and want to set some minimal budget.) But, yes, implement that and you will see a post on here at some point about how my startup had a usage spike/we made a simple mistake and AWS wiped out everything so we had to close up shop.
ADDED: A lot of people seem to think it's a simple matter of a spending limit. Which implies that a cloud provider can easily decide:
1.) How badly you care about not exceeding a spending threshold at all
2.) How much you care about persistent storage and services directly related to persistent storage
3.) What is reasonable from a user's perspective to simply shutdown on short notice
Don't let the perfect be the enemy of the good. In so many use cases, shutting off everything except storage would do a good job. And the cloud provider doesn't have to decide anything. It's a simple matter of setting a spending limit with specified semantics. A magic "do what I want" spending limit is not necessary.
> "shutting off everything except storage would do a good job"
Except it wouldn't. This is the 3rd time in this thread explaining that. Edge cases matter, especially when they lead to new mistakes like setting a budget and deleting data, or shutting off service when customers need it most.
If it's not a hard budget but a complex set of rules to disable services... then you already have that today. Use the alarms and APIs to turn off what you don't need.
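To be concrete about "use the alarms and APIs": the automation side can be as small as a Lambda subscribed to that billing-alarm SNS topic. This is just a sketch that stops every running EC2 instance in one region; deciding what actually counts as "what you don't need" is the contextual part you'd have to write yourself.

```python
# Sketch of a Lambda handler subscribed to the billing-alarm SNS topic.
# It stops (not terminates) every running EC2 instance in the current region,
# so EBS volumes and their data survive. Anything beyond EC2 needs its own logic.
import boto3

def handler(event, context):
    ec2 = boto3.client("ec2")
    instance_ids = []
    paginator = ec2.get_paginator("describe_instances")
    for page in paginator.paginate(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    ):
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                instance_ids.append(instance["InstanceId"])
    if instance_ids:
        ec2.stop_instances(InstanceIds=instance_ids)
    return {"stopped": instance_ids}
```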
Edge cases are the difference between a good job and a perfect job. It makes no sense to use edge cases to argue it qualifies as neither.
> If it's not a hard budget but a complex set of rules to disable services... then you already have that today. Use the alarms and APIs to turn off what you don't need.
I have been describing a simple set of rules, not a complex one.
It used to be extremely difficult to get accurate usage data across all their services. Has that been fixed? If not, then the alarms aren't good enough. If the alarms can drive that automation right now, in a non-buggy way, then that should be the answer to people: "hey, the alarms do more than alarm, use them to trigger shutdowns", not "it can't be done, sorry". If the alarms aren't good enough for that automation, then the argument stands.
And using the APIs means that each company that wants safety is duplicating effort in an almost untested way, a recipe for so many bugs it makes the problem worse. No, this needs to be a feature of AWS itself.
And this happens every time you go over budget? So it's a constant monthly emergency credit? Or extended free tier? Is there a dollar cap on that? What happens if you go over that?
> Of course they can be capped, you just turn off the services.
That's not a hard cap, since turning off services isn't instant and costs continue to accrue. But, yes, there are ways to mitigate the risk of uncapped costs, and they can be automated.
See the sibling comment thread. It's just not that simple. It creates a lot of liability, could lead to permanent data loss, and doesn't really prevent any mistakes either (just swaps them for mistakes in budget caps).
AWS would rather lose some billings than deal with the fallout of losing data or critical service for customers (and in turn their customers).
It depends on the use case. For example, I would like to have developer accounts with a fixed budget that developers can use to experiment with AWS services, but there isn't a great way to enforce that budget in AWS. In this case I don't really care about data loss, since it's all ephemeral testing infrastructure.
In theory I could build something using budget alarms, APIs, and IAM permissions to make sure everything gets shut down if a developer exceeds their budget, but if I made a mistake it could end up being very expensive. It's not that I don't trust developers at my company to use such an account responsibly, but it is very easy to accidentally spend a lot of money on AWS, especially if you aren't an expert in it.
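For what it's worth, the budget-plus-notification half is easy enough (the account ID, amounts, and address below are made up); it's everything after the email, actually shutting the developer's resources down, that would be homegrown and barely tested glue.

```python
# Hypothetical per-developer budget with an email notification at 80% of spend.
# Account ID, budget name, amount, and address are all made up.
import boto3

budgets = boto3.client("budgets")
budgets.create_budget(
    AccountId="123456789012",
    Budget={
        "BudgetName": "dev-sandbox-monthly",
        "BudgetLimit": {"Amount": "50", "Unit": "USD"},
        "TimeUnit": "MONTHLY",
        "BudgetType": "COST",
    },
    NotificationsWithSubscribers=[{
        "Notification": {
            "NotificationType": "ACTUAL",
            "ComparisonOperator": "GREATER_THAN",
            "Threshold": 80,  # percent of the budget
            "ThresholdType": "PERCENTAGE",
        },
        # This only sends an email; any actual shutdown is separate glue code.
        "Subscribers": [{"SubscriptionType": "EMAIL", "Address": "dev@example.com"}],
    }],
)
```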
So now we have another potential mistake - you setup a "delete everything/hard budget" for a production account instead of a developer account. What then?
It's impossible for AWS to know how to handle hard caps because there are too many ways to alter what's running and it's too contextual to your business at that moment. That's why they give you tools and calculators and pricing tables so that it's your responsibility (or a potential startup opportunity).
Money is easy to deal with. Alarms work. Bills can be negotiated. But you can't get back lost data, lost service, or lost customers.
There should be a cap so you have a check. If a system doesn't let you set thresholds or assertions, don't use it. If your cloud provider doesn't offer a capped budget you can play within, and alerts when you're about to run out, don't use it.