I Followed the Official AWS Amplify Guide and Was Charged $1,100 (elliott-king.github.io)
461 points by thunderbong 26 days ago | 276 comments



"Billing alerts" are a joke, give us hard spend limits. Then offer a way to set those limits during onboarding.

Building a business on blank cheques and accidental spends is shady. It's also a large barrier to adoption. The more times devs see reports like, "I tried [random 20-minute tutorial] and woke up to a bill for my life's savings and luckily support waived the fee this one time but next time they're coming for my house", the less they'll want to explore your offerings.


It's not just AWS. I think there are only two types of cloud providers: The ones like AWS and DigitalOcean that shift the risk to the customer and the ones that offer shady "unlimited" and "unmetered" plans.

Neither is what I want. I wish there was a provider with clear and documented limits to allow proper capacity planning while at the same time shifting all the availability risk to the customer but taking on the financial risk. I'd be willing to pay a higher fixed price for that, as long as it is not excessive.


> It's not just AWS. I think there are only two types of cloud providers: The ones like AWS and DigitalOcean that shift the risk to the customer and the ones that offer shady "unlimited" and "unmetered" plans.

Actually there is a third category, those who care. I will grant you it is a rare category but it is there.

One example name: Exoscale[1]

Swiss cloud provider, they offer:

    (a) hard spend limits via account pre-pay balances (or you can also have post-pay if you want the "usual" cloud  "surprises included" payment model).
    (b) good customer service that doesn't hide behind "community forums"
Sure, they don't offer the full bells-and-whistles range of services of the big names, but what they do do, they do well.

No I am not an Exoscale shill, and no I don't work for Exoscale. I just know some of their happy customers. :)

[1]https://www.exoscale.com/


I guess you'd want Hetzner. You get a generous amount per server and then a reasonable price for each additional TB.


It seems that automatic billing is something that cloud providers invented. For example, home Internet providers or mobile providers usually use prepaid plans, where they simply stop the service once you run out of money (but you can connect your card account if you trust them). So you cannot get charged an arbitrary amount for home Internet, nor for mobile unless you travel.


Landline phone companies always had usage-based billing, where you could run up huge bills by, say, making an international phone call for an hour.


Yep. In 1995 my parents bought me my first PC after years of begging. We lived in a very rural area with no local dial-up provider. I eventually was able to connect and was having a blast on message boards until my parents got a $350 phone bill, which is $750 in 2024 dollars. Turns out I had been using a long-distance number.


Is that true? All of my Internet and mobile telephone service is post-paid with automatic billing. I know one can get prepaid plans from some providers, but how are you arriving at "usually"?


It means that in your country people trust companies more than in mine. We use a pre-paid system, and optionally you can connect a card account for automatic billing (but the company cannot charge the card if there is no money left).


> So you cannot get charged arbitrary amount for home Internet

Comcast (mostly) disagrees, you have a 1.2 TB data cap and "After that, blocks of 50 GB will automatically be added to your account for an additional fee of $10 each plus tax." They do have a limit of $100 on these charges per month at least.


Bare metal server with unmetered bandwidth.


Indeed. Shameless plug of a toy I built that lets you see the price difference:

https://baremetalsavings.com


If anything this makes me feel better, since my workload doesn't require very beefy machines and the amount I'd be saving is basically irrelevant compared to my labor costs.


Yes, bare metal is not a panacea. Some use cases require zero personnel changes when going bare metal (some even reduce labor), and some are very much the opposite.


Love the tool and UI you built. I homelab, and while it's not always on 24/7, it's way more affordable to run on my own bare metal than to pay a cloud provider. I also get super fast local speeds.


> I homelab

Didn't know there was a verb for it! I "homelab" too and so far am very happy. With a (free) CDN in front of it, it can handle spikes in traffic (which are rare anyway), and everything is simple and mostly free (since the machines are already there).


I rent a moderately sized Hetzner box running FreeBSD and just spin up a jail (ZFS helps here) or, if necessary, a bhyve VM per 'thing.'

I'd fire a box up at home instead but at ~£35/mo I can never quite find the motivation compared to spending the time hacking on one of my actual projects instead.

(I do suspect if I ever -did- find the motivation I'd wonder why I hadn't done so sooner; so it goes)


This is really nice, thank you!


Vultr has hard limits by default.

Hetzner also has them for CPU use, and last time I checked your traffic would drop to a slower bandwidth if you pass some limit. I think they still charge you, though.


I don't think DigitalOcean shifts risk to the customer. They have "pre-paid" cloud VPS plans of 5/10 USD per month with hard-limits right?


"Excess data transfer is billed at $0.01 per GiB"


Hard spend limits are an anti-feature for enterprise customers, who are the core customer of AWS. Almost no level of accidental spend is worth creating downtime or data loss in a critical application.

Even having the option of a hard spend limit would be hazardous, because accounting teams might push the use of such tools, and thereby risk data loss incidents when problems happen.

Hard spend limits might make sense for indie / SME focused cloud vendors though.


> Hard spend limits are an anti-feature for enterprise customers

Yada yada yada, that's the same old excuse the cloud providers trot out.

Now, forgive me for my clearly Nobel Prize winning levels of intellect when I point out the following...

Number one: You would not have to turn on the hard spend limit if such functionality were to be provided.

Number two: You could enable customers to set up hard limits IN CONJUNCTION WITH alerts and soft limits, i.e. hitting the hard limit would be the last resort. A bit like trains hitting the buffers at a station ... that is preferable to killing people at the end of the platform. The same with hard spend limits, hitting the limit is better than waking up in the morning to a $1m cloud bill.


There's no way to implement a hard limit without getting in the middle of your system in ways that (a) alter the system design, (b) cannot be corrected for, and (c) are not for the better.


Of course there is. If someone hits their spending limit, asynchronously shut off the services (using the same API call that your customers can use, so no need to alter the system).

Then apply the hard limit in the billing code. If it took a minute or two to shut off all the instances, maybe the customer's bill should have been $1.001M instead of $1M, but cap the bill to $1M anyway. Given their profit margins of x,000% I think they can afford the lost pennies.
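A rough sketch of that idea, assuming the "limit reached" signal already exists somewhere (the region and the trigger are placeholders, and this only uses the public boto3 EC2 calls any customer already has):

    # Sketch only: stop every running EC2 instance once a spend limit trips.
    # Assumes boto3 credentials are configured; the overspend detection itself
    # is left to whatever billing signal you trust.
    import boto3

    def stop_everything(region):
        ec2 = boto3.client("ec2", region_name=region)
        resp = ec2.describe_instances(
            Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
        )
        ids = [i["InstanceId"]
               for r in resp["Reservations"]
               for i in r["Instances"]]
        if ids:
            ec2.stop_instances(InstanceIds=ids)

    if __name__ == "__main__":
        stop_everything("us-east-1")  # called by the (hypothetical) limit check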


> There's no way to implement a hard limit without ...

I don't believe that for a minute.

You know why ?

Let's turn it on its head. What happens if the credit card on your AWS account expires or is frozen ?

You think AWS are going to continue letting you rack up money with no card to charge against ?

I betcha they'll freeze your AWS assets the nanosecond they can't get a charge against the card.

The mechanism for customer-defined hard limits is basically no different.


From experience they will let you spend tens of thousands of dollars a month without a valid payment method.

Like many times more than you've ever even spent with them.

I mean it's like 6 months of that before you even get your first non-standard form email.


In my experience every AWS service I've worked with has an API to destroy the resource, eg destroy RDS instance without backup, terminate EC2, etc. I want it to irrecoverably destroy everything in my account when it hits the limit, no questions asked.


> Hard spend limits are an anti-feature for enterprise customers, who are the core customer of AWS. Almost no level of accidental spend is worth creating downtime or data loss in a critical application.

Great, so they don’t have to use the feature?

That excuse was a great excuse when AWS was an MVP for someone. 20+ years later… there is no excuse.


> Almost no level of accidental spend is worth creating downtime or data loss in a critical application.

That feels like a bit of a red herring — if that were their ethos, then you'd _have_ to choose a burstable/autoscaling config on every service. If I can configure a service to fall over rather than scale at a hard limit, that points to them understanding their different use cases (prod vs dev) and customer types (start-up vs enterprise).

Additionally, anytime I've worked for an enterprise customer, they've had a master service agreement set-up with AWS professional services rather than entering credit card info, so they could use that as a simple way to offer choices.


A service doesn't just "fall over" at a limit. There has to be other machinery to limit it.

Given that all your usage and traffic other than that at the limit request should not be gated or limited, why would you want someone else injecting additional complexity and bottleneck risk inline?

Determine your own graceful envelope, implement accordingly.


The OP was saying that having any spending limit would be antithetical to enterprise usage because it could bring critical services to a halt.

My point was, if that's a reason to have unbounded spending, why allow me to spin up a service that can get CPU or RAM bound?

> Determine your own graceful envelope, implement accordingly.

Which most people do have, but we then also want an "ungraceful" backstop — trains have both a brake and a dead man's switch.


> Even having the option of a hard spend limit would be hazardous, because accounting teams might push the use of such tools, and thereby risk data loss incidents when problems happen.

"Hazardous" feels like the wrong word here - if your customer decides to enact a spend limit it should not be up to you to decide whether that's good for them or not.


So have a "Starter" account with spend limits then. I don't understand how an individual is supposed to learn this stack and actually sleep at night without waking up panicking that something has been left running.


By checking what resources you are spinning up and double checking everything has been removed after you're done. I have used AWS for small projects many times and have never been hit with a surprise bill. The platform is built for actual customers, not your hello world app.


Being aware of everything you're running on AWS is trivial when it's all on the EC2 dashboard, and you create everything by hand or using Terraform.

Many of these new AWS-provided stacks, however, seem to create stuff all over your account.

The moral of the story? Don't ever use AWS tools like the one the OP describes, ones which create a bunch of different resources for you automatically.


> The moral of the story? Don't ever use AWS tools like the one the OP describes, ones which create a bunch of different resources for you automatically.

i use them to create a small test stack to look at it for a day or two.

then go through, delete all of the resources, put what i need into terraform etc.

has worked well for me in the past.

but yeah, i would never blindly use aws tools to magically put something into production.


Is every problem you see so insurmountable?

In the olden days, if we spotted a customer ringing up a colossal bill, we would tell them. These huge Amazon bills accrue fast, but still over multiple days. They can trivially use rolling-projection windows to know when an account is having a massive spike.

They could use this foresight to call the customer, ensure they're informed, give them the choice about how to continue. This isn't atomic rocket surgery.

"Oh but profit" isn't an argument. They are thousands of dollars up before a problem occurs. The only money they lose is money gained through customer accident. Much of it forgiven to customers who cannot afford it. It's not happily spent. They can do better business.


Cost Anomaly alerts do exactly this.

You can't even set up CloudWatch alerts on your actual spend; the only metric available to you is "EstimatedCharges".
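For reference, this is roughly what an EstimatedCharges alarm looks like via boto3; the threshold and SNS topic ARN are placeholders, the metric only exists in us-east-1, billing alerts have to be enabled first, and it's still only an alert, not a cap:

    # Sketch: CloudWatch alarm on the EstimatedCharges billing metric.
    import boto3

    cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")
    cloudwatch.put_metric_alarm(
        AlarmName="estimated-charges-over-100-usd",
        Namespace="AWS/Billing",
        MetricName="EstimatedCharges",
        Dimensions=[{"Name": "Currency", "Value": "USD"}],
        Statistic="Maximum",
        Period=21600,  # billing data only refreshes a few times per day
        EvaluationPeriods=1,
        Threshold=100.0,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=["arn:aws:sns:us-east-1:123456789012:billing-alerts"],  # placeholder
    )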


Which is great if you understand them and set them up. The post here is about someone following an Amazon tutorial and, without expecting it, ending up $$$ in debt.

Again, Amazon isn't stepping up to protect users from themselves. They could do a lot more.


They refunded him the money, without a lot of hassle. He toddled into a rough edge, bumped his head, Amazon made it better and apparently gave him a lollipop as well and sent him on his way. I think Amazon should get some credit here.


AWS provides forecasted spend and other alerts. People just don't use them and then complain.


Cool, have an "I know what I'm doing, opt me out!" button.


Make "unlimited spend" an opt-in. That way users who have explicitly chosen this and agreed to the terms can't then complain to support to try and get the bill waived.


> Almost no level of accidental spend is worth creating downtime or data loss in a critical application

But not all applications are critical, and the company deploying those applications should be able to differentiate between what's critical and what's not. If they're unable to, that's their fault. If there's no option to set hard limits, that's AWS' fault.


This guy is a straight shooter with upper management written all over him.

(Office Space)


So make it an option. That’s not hard.


Well, then enterprise won't use it.

OTOH for example the default quota of 55PB (yes, go check it) for your daily limit of data extraction from GBQ is funny, until you make a costly mistake, or some forked process turns zombie.

It is a predatory practice that I can't set up MONEY limits for cloud services.


> Even having the option of a hard spend limit would be hazardous, because accounting teams might push the use of such tools, and thereby risk data loss incidents when problems happen.

This is an interesting point and something I can totally imagine happening! I guess if you have fixed spending limits in a large enough organisation, you lose some of the benefit of cloud infrastructure. Convincing a (metaphorically) remote Finance department to increase your fixed spending limit is probably a tougher task than ordering a load of new hardware!


> Even having the option of a hard spend limit would be hazardous, because accounting teams might push the use of such tools

Tell me you're Shadow IT without telling me you're Shadow IT.

I know legitimizing shadow IT is still the value proposition of AWS to a lot of organizations. But it sucks if that's the reason the rest of us can't get an optional feature.


But the presence of a feature does not mean it has to be used?


The accounting teams of many orgs would want this feature to be enabled, but the tech teams wouldn't. Asking AWS not to add this feature means the tech teams win the debate before it starts.


If a company chooses to be run by accounting rather than by tech, that's fine; they should get the good and the bad outcomes that come from that choice.


That seems like a very hypothetical problem you are solving there..


I feel pain just reading this because I know where that guy is coming from; at many orgs as soon as some stupid thing is available it gets mandated by people who have no idea how it works or what impacts it will have, all the time. If it was just as simple as not enabling the stupid option surely we could just not ena...

Excuse me for a minute, I have to go reset my password that expires every 10 days and can not match any previous password and enter my enterprise mandated sms 2fa because authenticator scary -- woops my SharePoint session expired after 120 seconds of inactivity let me just redo that -- oh what's that my corporate vpn disconnected because I don't have the mandatory patches applied, let me just restart -- Woah would you look at that my network interface doesn't work anymore after that update -- yes yes I'm sorry I know my activity light on MS Teams turned yellow for 5 minutes I'm working on it, just gotta do these other 12 steps so I can reset my password to -- oh look it's time to fill out the monthly company values survey, what's that, it's due by end of day?


It's been mentioned several times in HN comments that the AWS billing code is a giant pile of spaghetti and there is generally a lot of fear around making big changes to it.

That's been one of the more interesting inside baseball facts I've learned here.


Well done AWS for finding a way to make your technical debt cost other people money.


[flagged]


... only governments, yet this is Amazon


If it wasn't clear, my point is: Amazon is so big and dominant at this point that it's got government-like powers.


It wasn't clear, thanks


Completely agree. Having worked previously for ... (humans?) ... I can authoritatively theorize it would be fixed in a jiffy if it weren't making them bucketloads of money.

To make my case: just ponder the opposite: "What would an honest version of AWS do?". They would address the concerns publicly, document their progress towards fixing the issue, and even try to determine who was overcharged due to their faulty code, and offer them some compensation.

"We're too big to fix our own code" is, sadly, taken from the MS playbook (IIRC, something like that was made public after the breach of MS manager mailboxes after the whole Azure breach fiasco that was discovered by, IIRC, the DOJ that paid to have access to logs).


And yet somehow every time they launch a new product they have no problem adding it to their billing code.


I mean, adding some shotgun changes to a messy codebase is always significantly easier than refactoring the whole.


If there was a lawsuit, change in regulation, request from an enterprise customer, or doing so MADE THEM MORE MONEY it would be fixed in a week.


> It's also a large barrier to adoption.

e.g. for me. I never dared to get my feet wet with AWS, despite interest. Better safe with a cheap, flat-rate VPS than sorry.


I've also shot myself in the foot with various APIs; e.g. I racked up a $3k bill in a month with Google APIs for my side project, just because I wasn't checking the usage. I didn't think I was using that much. This was more my fault, I guess, but luckily support waived that fee for me after some back and forth. Also, I'm not from the US, so this bill is larger for me compared to US folks. But honestly, Google APIs are also pretty expensive. For hosting my side projects I use a single dedicated DO droplet for $320 a month, where I have 40+ Docker containers.


That’s still very expensive. At that point you’d be well served by going dedicated hardware.


The droplet I have is 32 GB RAM, 160 GB disk, 16 vCPUs (dedicated CPU, Regular Intel).

What would you suggest instead?

I see Hetzner and maybe OVH could cost significantly less.

Actually, it looks like with Hetzner I would get better specs at 115 EUR per month: 16 vCPUs, 64 GB RAM, 360 GB SSD.

That's crazy, I didn't realize there's such a huge difference.

I think your comment is actually impressively valuable to me...

I'm going to bring it all over during the weekend. It's really exciting to find out about much cheaper prices, because I've had disk space constantly running out on my DO droplet. I do all my Docker builds, files, databases and everything there as well. And for side projects I usually value caching and development speed over trying to optimize everything to use minimal resources and storage.

Also, I spend way too much on Supabase, I think $300+/month and increasing; I will bring that over and self-host with a large SSD. I think at this point I prefer self-hosting Postgres to Supabase anyway. I just tried it for a while, but the costs started going up.

Thank you so much.

Edit: there are some limits on Hetzner initially, so it might take a few months before I can actually migrate.


Also be aware it's not all roses with Hetzner.

There are stories online of folks getting their accounts deactivated or not even getting approved.

https://x.com/theo/status/1827480999305118135

https://x.com/heyandras/status/1856747621962191126


I was able to buy an 8 vCPU instance right now - do you know how frequent this problem may be?


Honestly can’t say how frequent it happens but I’ve read a few accounts that say you geographic location plays a big role in whether you are treated nicely or not.


If you don't actually need colocation for your side project, looks like you could buy an i7-11700k, 64 GB of DDR4, a motherboard, and a 512 GB NVMe drive for ~$430. That's looking at new parts on amazon. You might be able to do better with e.g. ebay.


You mean I would host at home? I like the idea, but I think I should maybe get a separate internet package for that purpose, so as not to mess with what I do at home.


I have a personal account that I'm meticulously careful about (but still terrified of).

I also have an account with L̵i̵n̵u̵x̵A̵c̵a̵d̵e̵m̵y̵ A̵C̵l̵o̵u̵d̵G̵u̵r̵u̵ PluralSight: and while the courses are very variable (and mostly exam cramming focused) it has their Cloud Playground as a super nice feature.

I get four hours to play around with setting things up and then it will all get automatically torn down for me. There's no cloud bill, just my subscription (I think that's about $400pa at the moment - can't check right now as annoyingly their stuff is blocked by our corporate network!) It has a few limitations, but none that have been problems for me with exploring stuff so far.


I miss my ACG account. I feel like the sandbox was more cost-effective than the Wild West of my company's sandbox AWS account.


Same. I will never touch AWS because the risk of accidentally bankrupting our business due to some small configuration mistake is just way too large.


I use a visa gift card for AWS, I only risk what I have on the card.


In that case you are risking getting perma banned if anything goes south and you end up not paying anyways.


I'd prefer a permaban over permadebt to Amazon!


Regulate it.

Asking Amazon to do something makes little sense. Create laws that force Amazon, and all the rest, to respect their users money. By default, corporations will do what makes them money, not what is ethical or good for the economy.


We missed the boat on that in the US, better luck in 2028


Not going to happen in the US for the next few years, at least.

Jeff Bezos killed the endorsement because he wanted Trump to win. Trump will return the favor.


You trust the government far too much. Regulated industries always end up costing the consumer more. See: medical care and education for examples.


One of the most American things you'll ever hear. No, regulation doesn't cost more. In Germany education is essentially free (yes, I know, we pay it from taxes — it costs German taxpayers less than it costs American students). Healthcare is kind of the same: we pay for insurance and then only occasionally have co-payments for extras, but costs again are much lower than in the USA.


Funny how no one these days says, “in the uk healthcare is free”. I wonder how much longer till “x in Germany” falls in that bracket.


I trust the government in a democracy more than I trust a corporation, yes.

Do I absolutely trust each government in every democracy to make the right decisions for any problem? Of course not!

But I still trust them way more than corporations or the "invisible hand of the market"


Because they're regulated in their favor, due to lobbying.

The problem is the electorate, and the lack of actual regulation.


People were saying the same exact thing about the Pure Food and Drug Act, and babies were actively dying from drinking milk with formaldehyde in it so that it lasted longer for sale.


what?

Governments have a systematic pressure, at least in sane countries, to be at least partially responsible towards customers - their citizens and voters.

Corporations do not, especially in businesses with high barriers to entry, and where they can vendor-lock you.


I can see the value in a _choice_ between billing limits and billing alerts, for those customers who don’t want their resources ever to be forcibly shut down, but you’re right in saying that choice should be front-and-centre during account creation.


I've been saying this for years, and I have deleted all my personal credit cards from cloud providers that can scale/bankrupt me in a minute.


It's already ridiculous that the contracts are always one-sided. It is not a consumer service after all.


The main question to me is: how the hell could two OpenSearch domains cost $1k+ a month in the first place!?

AWS prices are ridiculous. I pay OVH $18/mo for a 4-core, 32 GB RAM, 1 TB SSD dedicated server. The cheapest on AWS would be r6g.xlarge, which costs $145/mo. Almost 10x.

Yes, AWS hardware is usually better, but they give me 4 "vCPUs" while OVH gives me 4 "real" CPU cores. There's a LOT of difference. Even if my processor is worse than AWS's, I still prefer 4 real CPUs to virtual ones, which are overbooked by AWS and rarely give me 100% of their power.

OVH gives me 300 Mbit, while r6g.xlarge gives "up to" 10 Gbit. But still, 10x? 300 Mbit gives me ~37 MB/s, and I use a CDN for large stuff: HTML, images, JS, anyway...

There are certainly cases where AWS is the go-to option, but I think it's a small minority where it actually makes sense.



Dear Customer,

You have reached your Configured Maximum Monthly Spend Limit.

As per your settings we have removed all objects from S3, all RDS databases, all Route53 domains, all EBS volumes, all Elastic IPs, all EC2 instances and all snapshots.

Please update your spend limit before you recreate the above.

Yours, AWS


A compromise solution to this could be to block creation of new resources if their monthly cost would exceed the monthly limit, unless the customer increases the limit.

It wouldn’t solve the problem for usage-based billing, but it would have solved the problem here.


All sorts of problems there. It means that you can't spin up a stack for an hour if the system calculates that leaving it online for a whole month would breach your limit. If the original author had a $100/month limit he wouldn't have been able to spin up the stack even once.

Also you have variable costs (like S3 traffic) that could put you over your limit halfway through the month. Then how does AWS stop you from breaching your limit?

On a more practical level, I don't think AWS keeps track of bills on a minute-by-minute basis.


> If the original author had a $100/month limit he wouldn't have been able to spin up the stack even once.

Sounds perfect. Much better than having to negotiate your way out of a $1000 bill you don't expect to see.


> It means that you can't spin up a stack for an hour if the system calculates that leaving it online for a whole month would breach your limit.

Sort of related, another wishlist feature I have is a way to start an EC2 instance with a deadline up front, and have the machine automatically suspended or terminated if it exceeds the deadline. I have some programs that start an EC2 instance, do some work, and shut it down (e.g. AMI building), and I would sleep a tiny bit better at night if AWS could deadline the instance as a backstop in case my script unexpectedly died before it could.
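The closest backstop I know of is to make the instance kill itself; a sketch of that (the AMI, instance type and 120-minute deadline are placeholders, and it only works if the OS stays healthy enough to run the shutdown):

    # Sketch: EC2 instance that terminates itself after a hard deadline.
    # Powering off triggers termination because of the shutdown behaviour.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")
    ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # placeholder AMI
        InstanceType="t3.micro",
        MinCount=1,
        MaxCount=1,
        InstanceInitiatedShutdownBehavior="terminate",
        UserData="#!/bin/bash\nshutdown -h +120\n",  # deadline: 120 minutes
    )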

> Also you have variable costs (like s3 traffic)

Yeah, that's what I mean by it wouldn't solve the problem of usage-based billing. There they could just cut you off, and I think that's the bargain that people who want hard caps are asking for (there is always a spend cap at which I'd assume something had gone horribly wrong and would rather not keep spending), but I agree that the lack of real-time billing data is probably what stops them there.


They can shut it down where it makes sense and keep racking up charges for storage. It’s generally the compute that costs the most.


For an AWS account dedicated purely to experimentation that would be fine, though.

Having one AWS account where you actually run stuff, and one that follows the rule of "if it can't be paved and recreated from github, don't put it there" is exactly how a lot of people do it anyway.



Nowhere (serious) do billing limits work like this.

There's always a dunning period and multiple alerts


Then you go beyond that. Now what?



Spend limits are such an obvious and necessary feature that the only reason they don't have them is shady business practices.


I don't think it's particularly obvious or necessary. AWS makes its money on big enterprise customers who probably don't want or need this feature. Hobbyists learning AWS in their spare time are a rounding error on AWS revenue.

I would bet that the reason they don't implement it is not that they're being "shady" but that they don't care about hobbyists and personal projects, and hard spending limits would be a huge, complicated feature to implement. And even if they did put in the huge effort to do it, individuals would still manage not to use it, and the steady trickle of viral "I accidentally spent X thousand bucks on AWS" stories would continue as usual.


Not really. Do you think that this is trivial at AWS scale? What do you do when people hit their hard spend limits, start shutting down their EC2 instances and deleting their data? I can see the argument that just because its "hard" doesn't mean they shouldn't do it, but it's disingenuous to say they're shady because they don't.


At AWS engineering scale they can absolutely figure it out if they have the slightest interest in doing so. I've heard all the excuses — they all suck.

Businesses with lawyers and stuff can afford to negotiate with AWS etc. when things go wrong. Individuals who want to upskill on AWS to improve their job prospects have to roll the dice on AWS maybe bankrupting them. AWS actively encourages developers to put themselves in this position.

I don't know if AWS should be regulated into providing spending controls. But if they don't choose to provide spending controls of their own accord, I'll continue to call them out for being grossly irresponsible, because they are.


People have kept bringing up this argument since the very beginning, when people first asked for this feature. It used to be the most upvoted request on the AWS forums, with AWS officially acknowledging (back in 2007, IIRC), "We know it's important for you and are working on it". But they made a decision not to implement it.

The details don't matter, really. For those who decide to set up a hard cap and agree to its terms, there could be a grace period or not. In the end, all instances would be shut down and all data lost, just like with traditional services: when you haven't paid your bill, you are no longer entitled to them, pure and simple.

They haven't implemented it and never will, because Amazon is a company that is obsessed with optimization. There is negative motivation to implement anything related to that.


That, and AWS just doesn’t really care for people with a spending limit as their customers, which is entirely reasonable.

Just forgiving all the ridiculous bills is a much better (and cheaper) strategy.


They've had two decades to figure it out. For EC2, they could shut down the instance but keep storage and public IPs. It shouldn't be too hard to estimate when the instance has to be stopped to end up with charges below the hard limit.
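The estimate itself is one division; a sketch with made-up numbers:

    # Sketch: hours of runtime left before a hard cap would be breached.
    def hours_until_stop(cap_usd, spent_usd, hourly_burn_usd):
        if hourly_burn_usd <= 0:
            return float("inf")
        return max(0.0, (cap_usd - spent_usd) / hourly_burn_usd)

    # e.g. $100 cap, $62 already spent, $0.17/h instance + $0.01/h EBS
    print(hours_until_stop(100, 62, 0.17 + 0.01))  # ~211 hours of headroom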


Now multiply the logic required by multiple cost factors across hundreds of services.


Imagine how impossible it is to actually build those services, with all their logic, if just this added logic is impossible :)


I didn't say it was impossible or even intractable. It's simply not as easy as everyone's "why don't they just..."


When it's their money Azure manage to have a spending limit...

> The spending limit in Azure prevents spending over your credit amount. [1]

Once it's your money miraculously the spending limit is no longer available...

> The spending limit isn’t available for subscriptions with commitment plans or with pay-as-you-go pricing. [1]

[1]: https://learn.microsoft.com/en-us/azure/cost-management-bill...


> Do you think that this is trivial at AWS scale?

What a ridiculous point. AWS achieves non-trivial things at scale all the time, and brag about it too.

So many smart engineers with high salaries and they can't figure out a solution like "shut down instances so costs don't continue to grow, but keep the data so nothing critical is lost, at least for a limited time"?

Disingenuous is what you are writing - oh no, it's a hard problem, they can't be expected to even try to solve it.


> What a ridiculous point. AWS achieves non-trivial things at scale all the time, and brag about it too.

Many companies achieve non-trivial things at scale. Pretty much every good engineer I speak to will list out all the incredibly challenging thing they did. And follow it up with "however, this component in Billing is 100x more difficult than that!"

I've worked in Billing and I'd say a huge number of issues come from the business logic. When you add a feature after-the-fact, you'll find a lot of technical and business blockers that prevent you doing the most obvious path. I strongly suspect AWS realised they passed this point of no return some time ago and now the effort to implement it vastly outweighs any return they'd ever hope to see.

And, let's be honest, there will be no possible implementation of this that will satisfy even a significant minority of the people demanding this feature. Everyone thinks they're saying the same thing, but the second you dig into the detail and the use-case, everyone will expect something slightly (but critically) different.


> "however, this component in Billing is 100x more difficult than that!"

Simply claiming this does not make it true. Anyway, the original claim was simply that it is not trivial. This is what is known as moving the goalposts, look it up.

> let's be honest, there will be no possible implementation of this

Prefixing some assertion with "let's be honest" does not prove it or even support it in any way. If you don't have any actual supporting arguments, there's nothing to discuss, to be honest.


> Simply claiming this does not make it true.

The people "claiming" this actually worked on it. I read a post from HN just yesterday talking about the complexities of billing. Look it up.

> If you don't have any actual supporting arguments

You can read other responses in this post. Look it up.


> Disingenuous is what you are writing - oh no, it's a hard problem, they can't be expected to even try to solve it.

I find it funny that people bring this pseudo-argument up whenever this issue is discussed. Customers: "We want A, it's crucial for us". People on the Internet: "Do you have any idea how difficult A is to implement? How would it work?" And the discussion diverges into technical details, obscuring the main point: AWS is bent on never implementing this feature, even though in the past (that is, more than a decade ago) they promised they would.


You shut down their instances and keep the data for a week and delete it if they don't pay promptly. It's not very profitable though.


That’s what Hetzner does.


> What do you do when people hit their hard spend limits, start shutting down their EC2 instances and deleting their data?

Yes, why not? I don't see the problem here? If you didn't want that, you could set a higher spending limit.

If they want a little more user-friendly approach they could give you X hours grace.

> You've been above your spending limit for 4 hrs (200%), in 4 hrs your services will go into suspended state. Increase your spending limit to resume.


yes it's trivial for them, they are crazy rich and it's their core competence


set a policy for what happens when the spend limit is reached? not rocket science


> the less they'll want to explore your offerings.

Honestly? All the better. There are obviously use cases where AWS is the right tool for the job but it's extremely rare. It's coasting on hype and somehow attaining "no one was ever fired for buying IBM" status.


Except people do get fired for choosing AWS and drastically increasing costs or making big mistakes that cost several times their annual salary.

It’s just not the kind of thing you’re going to see in a blog post.


Could you set a limit on the credit card associated with the cloud service? Or would it still create costs after the limit has run out, which they would collect in other ways?


No, setting a limit doesn't work. They will still try to charge you. A friend had a charge for $0.35 running for a year after his credit card expired. They closed his account later but would definitely come after you if the amount was significant.


The latter: you've used the service so, even if your card rejected the payment, you'd still have a debt with your provider.


> give us hard spend limits. Then offer a way to set those limits during onboarding

GCP requires this when you set up a new project. GCP deserves as much credit as AWS does scorn.


I don't think it is a meaningful barrier to adoption at AWS's scale anymore.

AWS's growth doesn't come from courting small random devs working on side projects.


And yet most people swarm around AWS and similar providers like there is no tomorrow despite this barrier to adoption. People ARE irrational...


Totally agreed.


I know it’s minor in comparison, but I will never use AWS again after running up a $100 bill trying to get an app deployed to ECS. There was an error (on my side) preventing the service from starting up, but CloudWatch only had logs about 20% of the time, so I had to redeploy five times just to get some logs, make changes, redeploy five more times, etc. They charged me for every single failed deploy.

After about two days of struggling and a $100 bill, I said fuck it, deleted my account and deployed to DigitalOcean’s app platform instead, where it also failed to deploy (the error was with my app), but I had logs, every time. I fixed it and had it running in under ten minutes; the total bill was a few cents.

I swore that day that I would never again use AWS for anything when given a choice, and would never recommend it.


I've only used Azure, and it looks like ECS is equivalent to Azure Container Apps. I found their consumption model to be very cheap for dev/test. Not sure what it is like for larger workloads.

Charging per deployment sounds crazy though.


> Charging per deployment sounds crazy though.

I think technically I was just being charged for the container host machine, but while each individual deploy only lasted a minute or so, I was being charged the minimum each time. And each new deploy started a new host machine. Something like that anyway, it was a few years ago, so I don't remember the specifics.

So I can understand why, but it doesn't change that if their logging hadn't been so flaky, I should have been able to fix the issue in minutes with minimal cost, like I did on Digital Ocean. Besides, the $100 they charged me doesn't include the much more expensive two days I wasted on it.


Yes, I believe that is the equivalent of ACA. I use ECS in prod and it's incredibly cheap and efficient. Like all things, it requires a little legwork to make sure it's the best fit. They just charge for the underlying machines, not the deployment itself.


I gave up on AWS when I realised you can’t deploy a container straight to EC2 like you can on GCP. For bigger things, yeah, the support’s better; for anything small to mid, GCP all day. Primitives that actually make sense for how we use containers and such these days. And BigQuery.


I'm not trying to defend AWS, they are kinda shit, but deploying containers straight to EC2s is an extremely supported feature and it is very clear how to do it in the ECS admin panel https://docs.aws.amazon.com/AmazonECS/latest/developerguide/...


For containers, you don't want EC2, you want ECS, possibly even Fargate depending on your use case. They're different compute primitives based on your needs.

There isn't a boxed product like Bigquery, but the pieces are all there - DynamoDB, Athena, Quicksight...


For AWS the solution for container deployments (without dealing with VMs) is Fargate, which imo works reasonably well.


I believe I was actually trying to use that. It’s been a few years so my memory is hazy, but isn’t Fargate just a special case of ECS where they handle the host machines for you?

In any case, the problem wasn’t so much ECS or Fargate, beyond the complexity of their UI and config, but rather that CloudWatch was flaky. The problem that prevented the deployment was on my end, some issue preventing the health check from succeeding or something like that, so the container never came up healthy when deployed (it worked locally). The issue is that AWS didn’t help me figure out what the problem was and CloudWatch didn’t show any logs about 80% of the time. I literally clicked deploy, waited for the deploy to fail, refreshed CloudWatch, saw no logs, clicked deploy, and repeated until I got logs. It took about five attempts to see logs, every single time I made a change (it wasn’t clear the error was on my end, so it was quite a frustrating process).

On digital ocean, the logs were shown correctly every single time and I was able to determine the problem was on my end after a few attempts, add the required extra logging to track it down, fix it, and get a working deployment in under ten minutes.


This seems like a glaring bug in the scripts run by that `npx` command. The author is correct, the scripts should 100%:

- Choose the lowest cost resource (it's a tutorial!)

- Cleanup resources when the `delete` subscript is run

I don't think it's fair to expect developers to do paranoid sweeps of their entire AWS account looking for rogue resources after running something like this.

If a startup had this behavior would you shrug and say "this happens, you just have to be paranoid"? Why is AWS held to a different standard by some?


> do paranoid sweeps of their entire AWS account looking for rogue resources

That's the thing that annoys me the most about AWS. There's no easy way to find out all the resources I'm currently paying for (or if there's a way, I couldn't find it).

Without an easy to understand overview, it feels like I don't have full control of my own account.


You can set up daily or hourly cost and usage reports on the account. I built a finops function based on it, feeding the data into a Postgres db. Make sure to select incremental updates; if not, you’ll end up paying for TBs of S3 storage.


I'd like to immortalise this comment because it's exactly the kind of thing that annoys me when people say "cloud is easier" or that it requires fewer skills/people/resources.

It clearly does, it's just different skills/time/energy requirements compared to colocation.


"daily or hourly" isn't enough though.

There are some AWS resources, like for example Route53 hosted zones, which bill only once at the end of the month, so a daily or hourly bill won't tell you anything about leaked resources there.

There's at least one resource that only bills once a year, so yet again you won't catch those even with monthly usage reports.


TB of S3 storage is surprisingly inexpensive though. Especially compared to everything else AWS.


$283/TB-year doesn't strike me as inexpensive. And that price does not include any data transfer.


The people who write these tutes have (I imagine) an unlimited budget with some special account, which maybe leads to this situation.


Also when you do these lessons with an in-person AWS trainer, they give you a specific unlimited training account that gets destroyed when the course ends.


And are experienced developers, who don't make costly mistakes, because they just follow their tried & tested routines.


Yes, I was charged $400 once for services I had running for three months without any idea it was happening.


We were billed about $5000 per month by Google even though we had asked what the billing change would mean for us and they said we would be inside the free limits. Turns out we weren't.


The billing department knows exactly what you were running, but they only tell you once a month...


They tell you in hourly increments for almost everything?


Even hourly isn't good enough. If I shut down some service, I want to know right now that nothing billable is left over. I don't want to have to wait an hour and come back and check only to find I forgot to clean up some IP address.

Why can we not have a "billable items" dashboard which simply shows, globally, a list of all items in your account which are billable, and how much they will cost if left running for 1 more hour/month?


There is, it's called Cost Explorer.


> There's no easy way to find out all the resources I'm currently paying for (or if there's a way, I couldn't find it)

Cost Explorer, in the management account if you’ve got Organization set up.


this is one of the things i love about azure, easily being able to see everything.

closest i found in aws was something like tag manager?


What about the billing dashboard? You can break it down by service and say CPU or memory, or tags if you use them. That has always given me good enough insight into where my client's money is being spent. I'm not sure it's totally realtime, but certainly daily.

BTW I'm a supporter of spending caps, not saying this should be the only way.


Every so often, I'd get a random bill from AWS totaling a few cents. No idea where it comes from, and it's not worth the non-trivial effort to find out. Just another reason I avoid AWS unless necessary.


Same here. And I'm worried that one month that bill will suddenly be $20k because whatever was costing a few cents suddenly gets hit by some DDoS attack.

Or that my card will expire and AWS will send that $0.03 bill to collections and slap court fees on and send a bailiff.

Their whole setup seems intended to cause expensive mistakes.


My first line of research when I have to use something new is: Can I get a fixed bill every month? What happens if I use more than that; can I limit surprises? If not, I will find something else. We are also very careful about building on "free" Google services after the Maps pricing surprise a few years ago. That cost us a lot of money in the end.


Is there even a simple way of listing all the existing resources in an AWS account? I’ve always had to check service by service, region by region. It’s tedious and error-prone.


Cost and usage reports will show you what is being paid for. Then there are resources that won't show up on that, so I have used aws:config to pull down other resource lists, and finally you can cross-reference both reports to more or less find everything.


Cost and Usage reports tells you what services you are paying for and which regions they are in, but doesn't give you a list of resources themselves.


I thought the tag editor was where one could get a comprehensive inventory of account resources? (Unable to check as I don't currently have easy access to the AWS console)
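The tag editor is (as far as I know) backed by the Resource Groups Tagging API, so something like this per region gets you roughly the same inventory (a sketch; as the reply below notes, it still misses some resource types, and you'd loop over regions yourself):

    # Sketch: list resource ARNs known to the Resource Groups Tagging API
    # for one region. Loop over regions for a fuller picture.
    import boto3

    def list_resources(region):
        client = boto3.client("resourcegroupstaggingapi", region_name=region)
        for page in client.get_paginator("get_resources").paginate():
            for res in page["ResourceTagMappingList"]:
                yield res["ResourceARN"], res.get("Tags", [])

    for arn, tags in list_resources("us-east-1"):
        print(arn, tags)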


Yea it’s ok for that but won’t list everything. Example: ec2 snapshots won’t show up in the aws:config report but you will be charged for it, so Cost and Usage reports will show you what you will be charged for.


> This seems like a glaring bug

Amazon earns an easy $1000, it is not a bug but a feature. Even if they think that it is a bug it is going to be pretty low compared to anything else that hits THEIR bottom line.


> I don't think it's fair to expect developers to do paranoid sweeps ...

Agree, it isn't fair. I think it's sensible, though. When creating anything on AWS I always behave as if AWS is a hostile financial institution gone rogue.


Sooooo, if a script asks you to run it as root, you just trust it and don't check what it does first?


AWS docs are terrible. Half the time their code examples are flat out wrong.


I've been putting off digging into AWS for years now, and it's because of stories like these. There really should be a standardized training course that requires no credit card info and lets people experiment for free.

Instead they have some pencil pushers calculating that they can milk thousands here and there from "user mistakes" that can't be easily disputed, if at all. I'm sure I'm not the only person who's been deterred from their environment due to the rational fear of waking up to massive charges.


It is very unusual for AWS not to issue refunds in situations like this, so I don't think it's a function of them finding an edge to milk thousands from user mistakes. More likely they've found that issuing refunds is less onerous than it would be to provide accurate and cheap tutorials.

Perhaps that does not excuse the behaviour but AWS reversed a $600 charge I incurred using AWS Textract where the charges were completely legitimate and I was working for a billion dollar enterprise.


I hear people say that all the time, but it's not my experience.

I once ran up a bill of $60 accidentally, didn't get a refund. I've had three friends with bills, one got a refund.

It might depend on who you know, if you look like someone who is likely to spend more money in future, how stupid your mistake was, I don't know.


I accidentally pushed an AWS key to a public repo, and by the next day, had like $50k in charges from crypto miners. AWS reversed the charges, with the only condition that we enable some basic security guardrails that I should have had in place to begin with.


What were the guardrails?


Automated secrets scanning is one of them. You can do it as a pre-commit hook. GitHub and Gitlab can scan it too.

https://docs.github.com/en/code-security/secret-scanning/int...

https://docs.gitlab.com/ee/user/application_security/secret_...
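Even a bare-bones local hook catches the most embarrassing case; a sketch (the dedicated scanners linked above are far more thorough, and this regex only matches AWS access key IDs):

    #!/usr/bin/env python3
    # Sketch of a .git/hooks/pre-commit script: refuse to commit if the
    # staged diff contains something shaped like an AWS access key ID.
    import re, subprocess, sys

    staged = subprocess.run(
        ["git", "diff", "--cached", "--unified=0"],
        capture_output=True, text=True, check=True,
    ).stdout
    if re.search(r"AKIA[0-9A-Z]{16}", staged):
        sys.exit("Refusing to commit: staged changes appear to contain an AWS access key ID.")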


> It is very unusual for AWS not to issue refunds in situations like this

...when asked to. But what percentage of mistakes like this end up just being "eaten" by the end-user, not realizing that they can ask for a refund? What percentage don't even get noticed?


I encountered a similar situation twice and AWS did not issue a refund either time. I'm avoiding AWS like the plague now. Not going to rely on the goodwill of the support person handling my ticket today.


Care to share any details?


The first time, I used S3 Glacier in the wrong way and downloading a few gigabytes resulted in a multi-hundred-dollar bill. I don't remember all the details, but it was absolutely non-obvious behaviour. I think it has been corrected since then and today it wouldn't work like that.

The second time, I configured a virtual machine with some fancy disk. It was supposed to work as a CI build server, so I chose the fastest disk. Apparently this fastest disk was billed by IOPS or something like that, so it ate a few thousand dollars in a month. I couldn't even imagine a disk could cost that much.

Basically these pricing nuances contradicted everything I ever encountered on multiple hosters I worked with and it felt like malicious traps designed specifically for people to fall into.


Dang, they got you a few times. Those are all things I could've been bitten by.

They used to be better about refunding accidental or misunderstood charges. I had a couple winners a long time ago like a $600 bill for a giant EC2 instance I meant to stop. They refunded it quickly, no questions. The last time I needed to refund some accidental charges though, there was a lot more stalling and forms.

You know what's insane? RDS (database) instances can be stopped, but automatically restart themselves after 7 days. Didn't read the fine print and thought you could spin up a giant DB for as-needed usage? There's a thousand bucks a month.


This is basically a whole section in the grifter's guide to business. Placing small hurdles in front of refunds (asking for one, filling out a form, cashing physical cheques, etc.) means you never have to give back 100% of the money you have taken from people.

https://en.wikipedia.org/wiki/Embarrassing_cheque


I very recently had a runaway SageMaker issue that wasn't refunded. It wasn't much, only $50, but they said no.


> I've been putting off digging into AWS for years now

In my opinion people end up in these billing situations because they don't actually "dig in" to AWS. They make their pricing easily accessible, and while it's not always easy to understand, it is relatively easy to test as most costs scale nearly linearly.

> the rational fear of waking up to massive charges.

Stay away from the "wrapper" services. AWS Amplify, or Cloudformation, or any of their Stack type offerings. Use the core services directly yourself. All services have an API. Getting an API key tied to an IAM user is as simple as clicking a button.

Everything else is manageable with reasonable caching and ensuring that your cost model is matched to your revenue model so the services that auto scale cost a nearly fixed percentage of your revenue regardless of current demand. We take seasonal loads without even noticing most years.

Bandwidth is the only real nightmare on AWS, but they offer automatic long term discounts through the console, and slightly better contract discounts through a sales rep. Avoid EC2 for this reason and because internal bandwidth is more expensive from EC2 and favor direct use of Lambda + S3 + CloudFront.

After about 3 months it became pretty easy to predict what combination of services would be the most cost effective to use in the implementation of new user facing functionality.


Pretty ironic that you're actually listing more reasons why I would not use AWS at all. The things you mention ("stay away from", "ensure that you", "reasonable caching", "bandwidth is the only real nightmare") are all huge red flags.


I thought the point of deploying to the cloud using higher level services was so that I could worry about my app and stop worrying about the minutia of managing load balancers or database servers.

Instead of interesting technical challenges I now get to worry about the minutia of Amazon's billing system. Neat! Where do I sign?


As with all things, you are trading away old problems for new ones. The question becomes: are the new problems easier for you to solve than the old ones?

There are parts of AWS that feel like magic and parts that cause me to bang my head against the wall, overall I like it more than it annoys me so I use AWS but it’s not a silver bullet and not all workloads make sense on AWS.


A Cloud Guru / Pluralsight has something they call "Cloud Playground" or "sandboxes".

It might provide that, but I’ve never tried it myself, so I could be wrong.


>Instead they have some pencil pushers calculating that they can milk thousands here and there from "user mistakes" that can't be easily disputed

User mistakes of this type must be a drop in the bucket for AWS and in my experience they seem more keen to avoid such issues that can cost more in damaged reputation.

AWS is not cheap, and in some cases it's incredibly expensive (egress fees), but tricking their customers into accidentally spending a couple of hundred extra is not part of their playbook.


Par for the course for AWS. I tried following their quickstart Sagemaker guide to run Llama 2 a few months back. And it certainly spins up quick, but next day I realize it's running me $400/day.

I was able to get the charges reversed, but definitely learned not to trust their guides.


Bug: Guide charged the user $400/day

Status: Won't fix (working as intended)

Notes: Got the author promoted to SDE III for great impact and revenue boost


Feature request: Write more guides that charge the user $400/day for a toy example.


Sagemaker is one of the biggest bummers in terms of product and is a clear case of enshittification.

When the product was starting (2017/2018) the whole setup was quite straightforward: notebook instances, inference endpoints, REST APIs for serving. Some EFS on top, and it was clear that the service centered around S3. And of course, a fixed price without any surprises.

Was a kind Digital Ocean vibe the whole experience, and a Data Scientist with a rudimentary knowledge and curiosity around infrastructure could setup something affordable, predictable, and simple.

Today we have Wrangler, Feature Store, and RStudio, the console for the notebooks has an awful UX, and several services are under the hood moving data around (and billing for that).


unrelated rant, but I'm still salty about it.

needed to send "raw" http requests instead of using their bloated sdk for reasons, and requests failed with "content-type: application/json" header, but succeeded with "content-type: application/x-amz-json-1.0". get out of here with that nonsense.


I feel this way about pretty much every aspect of AWS I have touched in my career. Overly bloated, overly complex or weird home brew implementation for no clear gain.


In some cases (cough javascript sdk) you can literally feel the fear of being fired from the people who were writing these services.


Didn't you know, Amazon owns JSON? They acquired it this week, please update all your Content-Type headers within 12 months otherwise you will be in violation of their IP holdings.


If they use a non-standard version of JSON (for example, one supporting comments, or one with rules about duplicate keys, or any other rule that's not part of the underspecified JSON spec) they should use a custom content type. Something can be valid JSON but invalid AmazJSON and this is exactly how you would distinguish between the two.


that's honestly a leak of internal details lol. (leaky abstractions)

because internally most apps use the Coral framework, which is kind of old; it uses this JSON format because it has a well-defined shape for inputs, outputs, and errors.


Every official AWS guide is designed to make you use as many AWS services as possible, which increases the risk of spend. You have to be extremely critical of anything they recommend (GUI defaults, CLI tools, guides, recommended architectures etc).

There's a reason there are very well paid positions in companies to guide colleagues on how to use AWS cost-effectively and with lower risk.


Exactly. And for small scale deployments or tests, the most expensive parts are almost always the ancillary things or the newfangled services they recommend in lieu of something simpler.


> I’ll admit that I myself am only using OpenSearch because it supports geo_point bounding-box queries, a subject that I don’t have a full understanding of. Perhaps there is a way to do these with a simpler product, and OpenSearch is overkill.

How about postgres with postgis? https://postgis.net/docs/using_postgis_query.html
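
For example, a bounding-box query in PostGIS looks roughly like the sketch below (the `places` table, its `geom` column, and the coordinates are invented for illustration; the article's actual schema may differ), here run from Node with the pg client:

    // Sketch: bounding-box search with Postgres/PostGIS instead of OpenSearch.
    // Assumes a hypothetical table places(id, name, geom geometry(Point, 4326)).
    import { Client } from "pg";

    const db = new Client({ connectionString: process.env.DATABASE_URL });
    await db.connect();

    // ST_MakeEnvelope(xmin, ymin, xmax, ymax, srid) builds the box; the &&
    // operator is an index-backed bounding-box intersection test.
    const { rows } = await db.query(
      `SELECT id, name
         FROM places
        WHERE geom && ST_MakeEnvelope($1, $2, $3, $4, 4326)`,
      [-74.05, 40.68, -73.90, 40.88]  // roughly Manhattan, as an example box
    );
    console.log(rows);
    await db.end();

With a GiST index on the geometry column this stays fast, and there is no per-hour search cluster sitting around to forget about.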


> Even if you are not using Amplify/OpenSearch, I recommend getting familiar with AWS budgets.

> It’s so difficult to be paranoid about every single technology you use.

I do not know how others feel, but with this kind of frictionless setup, plus the unintuitive UX/UI of those services, people are not concerned about handing over a credit card, and billing that bundles services together (e.g. AWS Batch + Lambda + EC2) is part of the business model.

I do not know how to articulate it, but it's more or less like those modern amusement parks where you pay to enter the facility and then pay again for every attraction, and even to go to the toilet.


I feel so too. The cloud billing model shifts responsibility to users under the guise of "flexibility" and "customization". Imagine a car rental company that charges you by the millisecond for every component - engine camshaft revolutions, tire rotations, windshield wiper activations, seat heating time and so on - but has the ability to set up alerts for each component so that customers "control" their usage and budget. It's just a user-hostile, risky-by-default cost model.


The alternative is charging for the rental by the day, but then a huge proportion of your customers (who probably own a BMW) constantly complain that they're being overcharged and want a discount because they don't need the blinkers, and refuse to pay for them.

I joke, but this persona is very real, and it leads you to this nickel and dime billing model.


Being cheap (in this way) is so often incredibly expensive.


I'm on the AWS Amplify team and wanted to give folks an update. First off, definitely empathize with the pain that Elliot went through. The referenced blog post is part of our advanced extensibility documentation, which covers how customers can use AWS CDK to add features that are not directly supported by the Amplify tooling, such as integrating with OpenSearch. Our initial OpenSearch extensibility documentation did not include the removalPolicy config, which led to the issues Elliot experienced. To mitigate this, we updated our documentation to include `removalPolicy: RemovalPolicy.DESTROY` for all stateful extensibility resources, ensuring they are cleaned up when the stack is deleted. Additionally, we will be updating the default behavior for `npx ampx sandbox` and `npx ampx pipeline-deploy` to apply this removal policy.
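
For reference, a minimal sketch of what that extensibility code looks like with the removal policy applied; the stack and construct names, engine version, and instance sizing below are illustrative placeholders rather than the exact values from the documentation:

    // Sketch of Amplify Gen 2 CDK extensibility with an explicit removal policy.
    // Stack/construct names, engine version and sizing are illustrative only.
    import { RemovalPolicy } from "aws-cdk-lib";
    import * as opensearch from "aws-cdk-lib/aws-opensearchservice";
    import { defineBackend } from "@aws-amplify/backend";

    const backend = defineBackend({});
    const searchStack = backend.createStack("SearchStack");

    new opensearch.Domain(searchStack, "SearchDomain", {
      version: opensearch.EngineVersion.OPENSEARCH_2_5,
      // A small node type keeps a tutorial workload far cheaper than r5.large.search.
      capacity: { dataNodes: 1, dataNodeInstanceType: "t3.small.search" },
      // Without this, a stateful resource like the domain can be retained (and billed)
      // after the rest of the stack is deleted.
      removalPolicy: RemovalPolicy.DESTROY,
    });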


CDK changing default removalPolicies was obviously problematic years ago:

https://github.com/aws/aws-cdk/issues/12563#issuecomment-771...

Another implicit surprise under the hood of CDK


It is a sad experience, but I disagree with the author here.

> It’s so difficult to be paranoid about every single technology you use.

I would be paranoid about anything related to AWS; I don't want to risk bankruptcy (or a near-bankruptcy experience) on small mistakes or the goodwill of AWS support.


I see no conflict between "I would be paranoid" and "It's so difficult to be paranoid".


I think you're agreeing with them?


It's a bad user experience if you have to keep checking for hidden extra costs.


And here I'm concerned about the 30€/mo I pay for running my (quite fat) home server... at least the metal can't randomly expand and consume 10kW all of a sudden.


One thing I find people don't talk enough about is how bad AWS documentation is, across the services. They contain outdated information about services, inconsistencies for the same feature, and lack explanation and meaningful examples to help you understand how things work and work together, even for common workflows. Example: try to make Sagemaker use EMR for Spark analysis, as a beginner who has little idea what IAM is and how to grant permissions for different services. It will be fun.

I am just amazed that people are able to navigate the services and configure them properly.


> I am just amazed that people are able to navigate the services and configure them properly.

Most people are forced to use AWS because the CTO at their company was pushing a "wE mUsT shIp tO cLoUd" initiative. It's not out of choice, but out of survival.


Why are people still doing the major cloud thing?

I just don't get it.

The story of "it's easier" is fake.

The story of "you won't need highly paid technical experts to maintain things" is fake.

The story of "it's cheaper" is fake.

The story of "you can't run your own computers it's too complex for ordinary companies to work out" is fake.

It's all fake, and people are still diving headlong into the clouds, falling through and hitting the earth hard.

There's enough discussion in the community about the risks and hazards of major clouds - you only have yourself to blame when that huge bill hits because you did something that would not have cost an extra cent on self-hosted systems or virtual servers.

Go learn Linux. Go buy virtual servers from IONOS where they charge zero for traffic.


I agree with your overall sentiment, but there are a few areas that the public clouds excel at despite this: geoscale and startups.


ever since I learnt self-hosting, it feels so liberating


At our company we host our web app stack and services in a cloud provider, but we don't use managed services other than Kubernetes. Everything on top is open source services and apps we host ourselves. It's a big overhead to set up and maintain, but I feel that once the learning cost is absorbed we have a high degree of flexibility, and also resiliency against random resources getting added behind the scenes and costing us money. The critical part is to stay on top of config & updates, since a lot of the apps won't update themselves or even report that an update is needed, and ending up with a vulnerable dependency may be orders of magnitude worse than a 1-5K incidental expense.


> ever since I learnt self-hosting, it feels so liberating

+1

Last year I went to self-hosting and I felt the same. I paid less than USD 2000 for a small laptop that I use as a server plus a home NAS, and at my current utilization it paid for itself within 3 months, on top of the ownership and flexibility.


I'm old and was self hosting 20 years ago, but we're comparing apples to oranges a bit here!

Using AWS for smaller personal projects will always be more expensive and probably less fun.

On the other hand I recently had to run an ML model over hundreds of thousands of media files. I used AWS to launch 100s of GPUs using spot instances and complete the job in a few hours, then just turned it all off and moved on. It cost a few hundred dollars total.

In my mind it's at this kind of scale AWS really makes sense.


On the other hand, for some serverless services, it makes sense to use AWS instead of self-hosting.

I've deployed multiple Lambdas over many years and I have yet to pay anything for them, given how _generous_ their free tier is.

Nowadays I must be at around ~100 Lambda executions per day and my billing for Lambda is still $0/month.

To achieve something similar with self-hosting, I would need a server running 24/7 just so my code can run when needed.
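
For a sense of scale, a sketch of such a function is below; the handler itself is trivial, and at ~100 invocations a day it sits far below Lambda's always-free tier (1M requests and 400,000 GB-seconds of compute per month):

    // Sketch: a minimal Node.js Lambda handler. At roughly 100 invocations per day
    // this stays far inside the permanent free tier, hence the $0/month bill.
    export const handler = async (event: unknown) => {
      console.log("received event", JSON.stringify(event));
      return { statusCode: 200, body: "ok" };
    };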

So, almost as with everything else in tech (and life in general), the idea is to not see AWS or self-hosting as the best tools for everything. Sometimes AWS is better, sometimes self-hosting is.

Having the freedom to pick the best one in each situation is quite nice!


I've only ever self-hosted. Granted, I'm only running a website with a few visitors, but I'm just using a cheap raspi, and not spending loads every month on AWS or whatever else people use these days. Plus, it's right in my cupboard, and I have full control over it.


I feel like a dinosaur now


_Learnt_ self-hosting? You mean people these days start from the cloud as a default?

…ah, yeah, right. Damn I’m old.


Yes, that's why all these services are popping up and earning tons of cash… look at Next.js for example.

They lead young devs into their framework and make them believe that the only way to serve their sites is through them, and to pay their extortionate prices…

People are not educated to self host. Everything is run in a “droplet” and just a click away.


For our businesses' self-hosting we avoid all the bells-and-whistles extras on platforms like AWS, as the billing adds up very, very quickly. Besides, there are a number of providers without all the bells and whistles of AWS (which we don't use anyway) that are quite frankly 25% of the cost of AWS.


If the original author reads this comment, I have a question.

One of the problems highlighted was that the documented teardown procedure did not properly delete the OpenSearch domain. Would AWS Nuke (https://github.com/ekristen/aws-nuke) correctly destroy everything that the tutorial sets up?


I set up a private account on Azure to host a small, static website using 100% FREE components. However, it wasn't possible to do this without registering a credit card. Even when I tried to add a different card from Revolut, which is a prepaid card, I received a message stating that such cards are not accepted and that it has to be an actual CREDIT card.


Warning: As an aside, this setup creates mid-price r5.large.search OpenSearch instances by default. Nowhere in the boilerplate code or guide is this mentioned. That will run you $134 per month at minimum.

A good rule when working with any sort of cloud service: Everything that can be charged for, will be.

There are plenty of stories of people getting charged massively, and one may wonder whether this has any negative effects on them getting new customers. Unfortunately it's usually not the ones working with it who are the ones making the decision to use AWS or other cloud services, and the ones who are have their minds fully clouded by the propaganda --- I mean marketing.


AWS has good base building blocks (ALB, EC2, Fargate, RDS, IAM etc). But it takes knowledge to put the pieces together. Thus AWS tries to create services/tools that orchestrate the base blocks (Amplify, Beanstalk) for you, which in my experience always becomes a mess where you don't actually understand what you are running in your cloud setup.

I'd recommend either learning the basic building blocks (these skills also transfer well to other clouds and self-hosting) or using a higher-level service provider than AWS (Vercel etc.) - they do it better than AWS.


> It’s so difficult to be paranoid about every single technology you use. When using new technologies that promise to speed up the developer flow, I already expect them to be more expensive than bare metal, but I think this is beyond the pale.

It always has been. They are always pushing boundaries and checking what they can get away with. The response "we’ve processed a billing adjustment for the unexpected charges as a one time courtesy", even though it looks like a bug and hasn't been fixed since, is already telling.


Cloud computing is ridiculously expensive. The other day I wanted to increase MongoDB Atlas IOPS by 1000 on a 3-node cluster and it costs $3000 a year. How does that make any sense?


EBS pricing (and by extension anything that effectively bundles EBS) is absolutely insane.


It makes sense in a world where they want to monetise every little thing you do.


They are decent about giving refunds if you ask, but also.... Got what you deserved there for even trying Amplify. That shit pile needs to die.


The recurring problem with the cloud is that it no longer stops my bad code with an OOM; instead, the sky is the limit for both memory and cost. I heard a funny story about people who bought badly written software that constantly went through the roof on resources. They got angry with my friend and told him the software wasn't bad, the hosting was (the software was bought with hardware from the manufacturer), then migrated to the cloud, only to meet the hard iron hammer of karma (which in this case took the form of a high bill).

Also, in my opinion, billing is the new perf test, only after the fact and obscure: it is super easy to miss some key points during development and then wake up with the costs falling into the responsibility sink (https://news.ycombinator.com/item?id=41891694)


Some student / beginner trying to learn and then getting smoked by a footgun is a weekly occurrence on the big cloud subreddits.

It's better business to have people beg for mercy and then magnanimously waive the fee than to have any discussion about actual hard limits (which would be used by big corps too, not just students).

Yes, it can be done technically - Azure already has a not-loudly-advertised account type that is hard-capped. And no, billing alerts aren't a solution. Hell, you could even do opt-in "yes I understand my data will be deleted" hard caps.

This is a fixable problem - they just don't want to because a fix would be bad for earnings.


She said "take me somewhere expensive"

--> https://files.rombouts.email/IMG_0092.jpeg


I think anyone doing anything with AWS Amplify will get burned sooner or later. It seems purpose-designed to be easy to set up, and absolutely nothing else.


I remember my first usage of S3 close to its initial launch. It was the first time I came across the very concept of pay-as-you-go. I remember being anxious about unexpected charges and the forum was full of people begging for a cost cap feature.

That was 18(!) years ago. It's still nowhere to be found.

There are like 17 ways to do cost analysis, some of them paid, but none of them address the actual problem of capping a bill. It's pure malice.


i opened an aws account, but then was too anxious to move on and build something with it


Maybe official AWS guides should include a tutorial to set billing alerts and limits as a prerequisite to many of their guides.
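
As a rough idea of what that prerequisite step could look like, here is a sketch of a small CDK stack that creates a monthly cost budget with an email alert at 80% of the limit; the amount, threshold, and address are placeholders, and note that this still only alerts, it does not cap spending:

    // Sketch: a $20/month AWS Budgets cost budget with an 80% email alert.
    // This only notifies; it is not a hard spend limit.
    import { App, Stack } from "aws-cdk-lib";
    import * as budgets from "aws-cdk-lib/aws-budgets";

    const app = new App();
    const stack = new Stack(app, "BillingGuardrails");

    new budgets.CfnBudget(stack, "MonthlyBudget", {
      budget: {
        budgetName: "tutorial-guardrail",
        budgetType: "COST",
        timeUnit: "MONTHLY",
        budgetLimit: { amount: 20, unit: "USD" },
      },
      notificationsWithSubscribers: [{
        notification: {
          notificationType: "ACTUAL",
          comparisonOperator: "GREATER_THAN",
          threshold: 80, // percent of the budgeted amount
        },
        subscribers: [{ subscriptionType: "EMAIL", address: "you@example.com" }],
      }],
    });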


Maybe AWS could develop the feature that everybody asks for - the OOM killer (where OOM stands for 'out of money').


Yeah, but then they can't win "customer service" points by not making you pay outright extortionate rates, thus further inducing lock-in.

It's very clever - either people pay the overages, or contact you and you can look good by giving them company scrip to spend on other of your services.


It's Official AWS Guide on Being Charged?


Looks like the guide is working as intended.


Done something similar. The root issue is that hosted search is much more expensive on AWS than you might expect.


While learning the AWS Python Boto3 library I once accidentally subscribed to the $3,000-a-month AWS Shield Advanced DDOS protection service, and THAT was stressful. But I did get a full waiver.


It is unfortunate that most of these services don't have a pre-pay tier.

On the one hand I get that if your business depends on such a service you don't want it to suddenly go down. But on the other hand there is almost never a hard mechanism to limit your risk. Or if there is, it is opt-in. The conspiracist in me says this is working exactly as planned for AWS as they have no financial incentive to limit customer risk.


As someone new to cloud services, I'm curious whether there are better experiences with the billing of GCP, Azure or Oracle Cloud. Also, is the multi-cloud approach doable?


All this pay-as-you-go BS is such a scam (note: I'm not blaming victims here.) There are just so many ways to screw you. Like a program gets a bug and starts sending out a metric crapload of traffic. Or you get DDoSed and charged for the bandwidth. Or your disk fills up from log file spam, it gets 'backed up', you get charged for it. Or a program is caught in a loop and racks up CPU usage.

I think it's all hiding the fact that people don't want to take the time to design (and maintain) scalable infrastructure, and instead rely on fake abstractions that pretend to be infinite, always-available, magic, or whatever. I'm sure there is some open source software that helps here.


Oh god, what timing. I had actually written a blog post on why I didn't want to host a hobby project with a credit card, because I didn't want to risk even a 0.01% chance of getting an insane bill, and here AWS's official guide is causing exactly that.


After handling so many of these cases I decided to build a solution that helps with cost monitoring and optimization. It's a single click integration with AWS and Azure. We're currently working on a solution for these specific cases as well, would love to hear some feedback. CloudExpat - www.cloudexpat.com


While I don't like Amazon one bit, I appreciate that you don't go into the Evil Big Tech trope. Things at that scale are indeed complicated and hard to coordinate.

That said: fuck, that's expensive and poorly explained! Not doing anything cloud without hard limits!


I find that it's advantageous for politicians and companies to be consistently villainous, because then their detractors get condemned for 'vitriol'.

Genius really.


If you don't know how to use a knife you will get cut!


I suggest avoiding backend as a service like the plague


i think this might fit on https://serverlesshorrors.com/


AWS Amplify and OpenSearch*, misleading title.


This is the reason I use AWS wrappers (render.com/fly.io) for small projects. It may be more expensive but you can't pop the free tier/selected machine.


See also Amazon Web Services dark patterns: https://lapcatsoftware.com/articles/2024/6/7.html


obligatory: "it's not a bug, it's a feature"

billing surprises are among the top 5 reasons I keep a homelab for experimenting. if your project can't be deployed to non-cloud infra, I likely won't be using it in the future.


"I followed the official casino guide to play at the casino and lost all my money"


I remember checking with and calling an Amazon rep on the phone - he assured me I could use one of the "heavier" graphics card instances and would only be charged per-minute usage at the rates shown.

I ran it for 1 minute, expecting to pay the $5 or whatever it was per minute, and was charged around $100 for it to "boot up". Cancelled it. Never trusted Amazon billing again = (

Bezos keeps waxing lyrical in all his interviews about how he "tests" his company's services by calling them on the phone to make sure the SLAs or whatever are accurate. But they aren't. TBH I was kind of confused how proud he was that it took 10 minutes to get through to someone on the phone instead of 1 minute or something, and how they noticed it and had to "rearrange" things. Like wtf, I would have fired ALL of the executives below me if such egregious false advertising existed. It can't be that bloody hard, as one of the richest people on the planet, to just pay some dude $5 an hour to make sure services are billed as expected and run as expected.

I am sorry to complain, I know they have all done great jobs, but it makes me wonder whether I would be "out of touch" if I were ever in a C-suite role. From what I see around me, I definitely would be. But maybe those margins don't matter?

I honestly am confused, after decades in IT, why management is never held responsible. If I ran a company, management would be the FIRST to be fired if there were any issues. I swear I read a comment on HN once from a manager asking why they should be held responsible if there is a fuck-up lower down the chain, and I was like wtf, the whole point of being a manager is to be RESPONSIBLE. Management isn't a luxury where you "earn the big bucks" because you're better than everyone else and thus should be protected.

The easiest way to diagnose this as a CEO is to see how often management has been let go at different tiers, and if there haven't been any, well, there must be a form of corruption / nepotism occurring.

Been burnt as a "small" entrepreneur by all of the greats: Google (shutting down my instance when it went viral because I decided to upgrade the hosting, which for some unknown reason meant it had to be shut down WITHOUT warning for 24 hours, possibly to transfer it or something, god knows), AWS, etc.

I know it might seem like a small gripe, but as a millionaire now I remember how I was treated by these companies.

Maybe I should just be grateful I could use them at all.

I think I'm just saying it's crazy how the "low" B2B customer is treated when it would be so cheap to just make sure these colossal fuck-ups don't happen.


Let's say 25% of people don't argue with customer service for a chargeback.

Best-run company in the world /s


It's a tin foil hat pet conspiracy of mine, but I strongly believe that an important part of the business model (or revenue share) comes from people making mistakes or forgetting resources; more or less like gym subscriptions, where the gym owners are more than happy to sell [maximum_amount_of_people + 40%] memberships, knowing that absenteeism will provide the revenue offset that sustains part of the business.


There's certainly wastage in large organisations: over-provisioning, or keeping that huge S3 bucket full of data from some long-forgotten project.

I suspect a lot of the huge AWS customers just eat this because it's so hard to mitigate.

If a business has hundreds of AWS accounts it becomes very hard to track, and if each account can only shave a few hundred dollars a month off its individual bill then there's very little impetus for the individual teams to actually do the work, despite that possibly adding up to substantial savings for the overall business.


My team was migrating from one cloud provider to another, and I was tasked with going through the soon-to-be-deleted accounts in the old provider and deleting resources that we were absolutely sure were serving no purpose whatsoever. You'd wonder why this wasn't something done on the regular.

We ended up saving at least $4,000 a month. And this was mostly sandbox environments that people had forgotten about.


I can well believe it; I know that in my own team's account I could save hundreds a month and multiple thousands a year.

I keep raising this, but it's never prioritized, as it means taking my time away from developing our products.

Essentially my manager would then lose dev time to reduce a bill that, due to institutional accounting practices, they never actually see!


I believe you are right, but no tin foil hat theory is needed, just sound pricing strategy. E.g. factor in underutilization, but also refunds. For a flat rate, a price point at 70% of the total value of the included volume is still a better bottom line if you set the included volume high enough that average usage is 50%, yet it feels more generous and less risky from the customer's point of view. OTOH, very granular unit pricing lets everyone underestimate total costs and complicates comparison with competing offers, "bread and butter" product prices mask the more uncommon ones, etc.

In IT underutilization is high, also on your own infra; that's something cloud vendors (and hardware and software vendors) can rely on. Paying for it at least incentivizes you to improve utilization. Autoscaling is better; sometimes the cloud vendor does it for you, sometimes you have to do it yourself. In IT laziness is rampant, and time is often seen as more valuable than cash out. Also something the vendors can rely on.

I've seen a howto for Azure for a scalable LLM API gateway. It cost me almost 2 hours to get an estimate of the minimum costs - with default values it would have cost almost 10k per month; I could size it down to less than 1k. A simple load-balanced reverse proxy on VMs would cost less than 400. Be especially cautious when everyone is in a hurry; the pre-made solutions may try to cash in on that.

Overall: pricing and pricing models are part of the product properties you buy, and looking for and dealing with psychological pricing strategies and tactics is part of doing business.



