AWS free tier data transfer expansion (amazon.com)
264 points by beheh 11 days ago | 172 comments





It was only a matter of time after their egregious egress fees were exposed. Two Cloudflare posts that helped make this happen:

https://blog.cloudflare.com/aws-egregious-egress/

https://www.cloudflare.com/press-releases/2021/cloudflare-an...


I've seen Cloudflare Enterprise accounts for two clients: the current one is paying the equivalent of $0.035/GB, and the other was substantially higher, though I've forgotten the numbers. Equivalent pricing from another vendor for the same service with enterprise support was $0.0021/GB. Their free bandwidth promises are grossly overstated, and possibly even a marketing fabrication, given previous reports here and elsewhere of high-bandwidth users being cajoled into upgrades.

For an honest bandwidth offer, I'd much sooner consider Fly.io than Cloudflare. At least with Fly, the true pricing is transparent.

edit: speaking of transparency, https://imgur.com/HnlWFUe


We save 7TB per mo egress because of Cloudflare's free cache (through Workers) and pay nothing for it.

Granted, Cloudflare the CDN has Enterprise plans for higher TB bandwidth (esp. video), but Cloudflare the cloud platform has a more than generous free tier, batteries included. AWS' value-based pricing has them extract fees for things as trivial as builds and deploys, and their bills are nothing but a nightmare to parse or estimate. This is in stark contrast to the simple and straightforward pricing with Cloudflare, which we pretty much prefer as a small dev shop. So much so that we choose to pay Cloudflare money to host our services even though we've got 5-digit AWS$ credits.


The question is whether what you're receiving is genuinely free, or a part of some squeezable marketing budget. In my experience it always makes sense to consider the latter. At some point that $595/mo. you're saving will appear on a lead sheet, whether it happens today or (similar to e.g. Google Apps) after 5 years. Also like Google, they're a public company nowadays and will eventually succumb like every company before them to the realities of reporting growth.

I'd always prefer paying for certainty to designing a solution built on a lottery.


> The question is whether what you're receiving is genuinely free, or a part of some squeezable marketing budget.

Using any "free" service is generally not free as you scale. That's the freemium model we live in today.

> Also like Google, they're a public company nowadays and will eventually succumb like every company before them to the realities of reporting growth.

This is an unfortunate assumption with nothing to go on at this point. There is no more certainty with AWS, as implied in your statement, than with any other cloud provider. Not all organizations have an end goal of being the scale of AWS. And not all organizations put profit over product with respect to an outdated perspective that said organizations need to grow 40% YoY for all of eternity to be successful. It's now, more than ever, very clear that AWS profit margins on data transfer are egregious and they spin the backpedal as "Oh - look at us dropping prices, for you, our esteemed customer!". This is the real marketing sleight of hand here, not the other way around.


I really would love if folks talking about volatile AWS prices would actually give examples.

I have been screwed, personally, by enough "free" and "unlimited" offerings to never believe them.

On AWS, all the price changes I've had have been to reduce my costs. This is over a pretty long period.

So inform us of the uncertainty with AWS.

Google, sure, they could cancel or 3x your bill (hi Maps API customers etc). AWS does not have that history.

Cloudflare has secret pricing - that's the really annoying thing. Seriously, put a porn site up online with cloudflare and see how far "free" gets you.


> I really would love if folks talking about volatile AWS prices would actually give examples.

There was never any mention of "volatile" pricing. Egregious? Yes. Volatile? No. There's a significant difference of meaning with those words.

Here's a perfect example [0] by Corey Quinn.

> So inform us of the uncertainty with AWS.

I didn't mention "uncertainty with AWS". I mentioned that there is no more certainty with AWS than with any other major cloud provider with respect to your statement about public companies who "eventually succumb like every company before them to the realities of reporting growth". And then for some reason you pivoted to your own, personal, AWS bill from there. I'm not exactly following the logic.

> Cloudflare has secret pricing - that's the really annoying thing.

At this point I'm not sure if your comment is even serious. First of all, please elaborate on "secret pricing". Sounds like serious charges we should all be aware of. Maybe it's with the article from 2019 on The Register about domain pricing? That's not exactly in the context of this thread, but please enlighten the masses.

> Seriously, put a porn site up online with cloudflare and see how far "free" gets you.

I'd charge you with the same ask on AWS. You seem to imply the "free" tier, on AWS, will provide proper capabilities to host an adult content site. I have strong doubts about this. The logic of this argument is ill-conceived at best. Or is your logic just that you can't do this on Cloudflare and that's the root of your argument on why AWS is better? Again, I'm not exactly following your train of thought.

[0] https://www.lastweekinaws.com/blog/the-compelling-economics-...


Let's be crystal clear here. If I host a high-data-use video site on AWS, I can calculate what my costs will be. That provides me some certainty with respect to a business plan. Even better, AWS has a history that is much better than others' in terms of pricing stability. This doesn't mean the best price.

Can you say the same about Cloudflare? No. Can you say the same about Oracle? No - they have a miserable history of screwing customers.

AWS is offering clear pricing, Cloudflare is not. It's really that simple.

This makes me realize that folks just don't understand the value AWS is providing, which is perhaps why they can charge such insane prices.

People with actual money to spend don't want "free" because they don't believe it's actually free.

In terms of Cloudflare, they have something like a negative 60% operating margin. The idea of building a business on a company with a negative 60%+ operating margin is insane: either they will go bust or have to raise prices.

AWS by contrast makes money. Because of this, they can shave a point or two off margin to give (another) price reduction.

1TB per month of CloudFront, 2M CloudFront Functions, etc. They are under almost NO financial pressure to raise rates.

Cloudflare is under pressure or will be. With VC money perhaps they will get a longer runway.

The "free" offerings are an old story by now.


It is crystal clear that this comment demonstrates the misunderstanding of how bandwidth is priced in the real world, and the assumption that just because Amazon's price model has been reliable, it must be a viable cost because people pay it. It's unfortunate that the reality you're basing this comment on is so far off base that you've convinced yourself AWS egress fees are somehow "stable". It's also very interesting that you're stating Cloudflare is under some unforeseen "pressure" because they're not turning a profit today. I would gather you're assuming Cloudflare is being soaked by infrastructure costs, which is completely incorrect, as they build out their pivot towards targeting the enterprise market by investing in their field (sales) and continually reinvesting in R&D.

It's also fantastic to read the misconceptions about the "value" AWS is providing. In some cases they do provide a much greater value-to-cost ratio, but if you've blindly convinced yourself of that for AWS as a whole - boy, do I feel for you the day you realize the economic advantage they manipulate to monopolize the cloud market is not for the greater good of their customers.

Clear pricing you say? I'll use Corey Quinn (The Duckbill Group [0]) as an example again - his entire livelihood and business runs on the fact that AWS pricing is not even remotely clear. It's laughable that anyone would publicly make that statement at this point in time, knowing what we know. Sure, if you're running a static site on S3 for a few users a month, I'm sure you've got it covered. For those of us dealing with large-scale enterprise, everything stated here is, at best, bending the truth and, at worst, flat-out ignorance.

[0] https://www.duckbillgroup.com/


> his entire livelihood and business runs on the fact that AWS pricing is not even remotely clear

Corey's business exists because prevailing engineering culture encourages pretty much the entire industry to treat optimization as an afterthought, not because engineers can't understand AWS pricing or interpret a few bar charts in Cost Explorer. And in the face of a deadline, if it's not on the agile board, everyone knows it doesn't exist.

Many of Corey's technical posts are around finding sweet technical substitutes for niche use cases, but as I'm sure he'll tell you, 80% of what he does is easily discovered a few clicks away from the AWS home page.

> as they build out their pivot towards targeting the enterprise market

Cloudflare have a solid sales pipeline, but they're a sitting duck if any of the big clouds ever decide to replicate the business model like-for-like. One of the reasons this may not have happened yet is because Cloudflare's whole presentation is consumer oriented, starting with domain configuration management that is hell to version correctly when 20 people have access to the account. Outside some sweet Javascript cold start hacks they basically have no moat, and there are far more situations that could send the company into desperate measures than otherwise.


> Corey's business exists because prevailing engineering culture encourages pretty much the entire industry to treat optimization as an afterthought, not because engineers can't understand AWS pricing or interpret a few bar charts in Cost Explorer. And in the face of a deadline, if it's not on the agile board, everyone knows it doesn't exist.

Nobody is actively encouraging the entire industry to consider optimization as an afterthought. This makes zero sense. Why would an organization pay a business to reduce their AWS costs if it wasn't worth the cost to realize the savings? Cost optimization is an easy task, as you've stated - so the cost/value proposition of a business like The Duckbill Group must not be worth it according to your statement. Yet they exist and do, seemingly, well. Maybe... Just maybe, cost optimization in AWS is not easy, not straightforward, and designed to be painful enough that smart engineers are incentivized to leverage Amazon's dark patterns of hiding costs at time of deployment.

You even state...

> 80% of what he does is easily discovered a few clicks away from the AWS home page.

So why is cost optimization a constant point of conversation with AWS if it's so easy? Why do outfits like Digital Ocean advertise on the notion of clear billing as a positive differentiator compared to AWS?

> Cloudflare have a solid sales pipeline, but they're a sitting duck if any of the big clouds ever decide to replicate the business model like-for-like.

Let's take a stroll back in time. Do you think that Amazon and AWS have always posted a profit? Go look: they've posted many quarterly losses to get where they're at. That's how it works as you build a business like that. To your point - AWS does compete directly with Cloudflare in certain products, yet here we are: Cloudflare and AWS both continue to grow (negative operating income / net income is not directly correlated with company growth, BTW). A mistake you've made is around brand and reputation. Nobody thinks of AWS as a security company. Customers continue to buy Palo Alto Networks, Fortinet, Zscaler and, yes, Cloudflare - even though AWS offers some overlapping portfolio. Why? AWS isn't viewed as a security portfolio. Cloudflare has brand reputation in security and content distribution. And it's a pivot that easily works with both their brand and reputation.

> Outside some sweet Javascript cold start hacks they basically have no moat, and there are far more situations that could send the company into desperate measures than otherwise.

This just screams of the competitive argument low-road. I don't have ties to Amazon or AWS. I'm not an employee. When I read statements like this it's affirmation that there's some agenda. I laughed out loud reading that as the closing argument, thanks for that.


> So why is cost optimization a constant point of conversation with AWS if it's so easy?

Limited IO bandwidth in middle and upper management, alongside difficult schedules (we covered that one already). Take two steps above engineer on an org chart and detail becomes invisible; the vast majority of tasks begin to resemble a teenager at a mall with their dad's credit card. Meaningful technical validation phases are almost unheard of in many organizations, and largely antithetical to agile.

> Why do outfits like Digital Ocean advertise on the notion of clear billing as a positive differentiator compared to AWS?

Because they market to folk who never take the time to model comparative costs. In any project I considered them for (3 I think, <$15k/year each), Digital Ocean was significantly more expensive than AWS. I follow my spreadsheets, the industry follows marketing.

> Nobody thinks of AWS as a security company

They're the only vendor I deal with who are on first name terms with the NSA and sell in tremendous quantities to the US government. CloudFlare on the other hand, to this day, default to MITMing SSL connections for new accounts and downgrading them to cleartext en route to the back end. It seems our perceptions differ wildly.


> Because they market to folk who never take the time to model comparative costs. In any project I considered them for (3 I think, <$15k/year each), Digital Ocean was significantly more expensive than AWS. I follow my spreadsheets, the industry follows marketing.

You make valid points, but what you're missing is the same accusations you level at Cloudflare were once leveled at AWS (which is why AWS had virtually no credible competition from Y!, MSFT, GOOG for 7 years!).

Also, Cloudflare's moat isn't 0ms cold-starts, but their persistence in commoditising bandwidth. Think Amazon Prime free 2-day shipping and how that worked out...


> Limited IO bandwidth in middle and upper management, alongside difficult schedules (we covered that one already). Take two steps above engineer on an org chart and detail becomes invisible; the vast majority of tasks begin to resemble a teenager at a mall with their dad's credit card. Meaningful technical validation phases are almost unheard of in many organizations, and largely antithetical to agile.

I'm sorry, but what are you trying to say? "Antithetical to agile" - is there a point beyond some nonsensical, made-up scenario? Every major organization with a cloud budget cares about optimization today at some level. This hasn't changed in over 20 years, because those budget dollars used to be directed at data center costs. Now they're more fluid and can be more impactful when people make mistakes or aren't making sure to optimize up front.

> Because they market to folk who never take the time to model comparative costs. In any project I considered them for (3 I think, <$15k/year each), Digital Ocean was significantly more expensive than AWS. I follow my spreadsheets, the industry follows marketing.

Then share some examples from said "spreadsheets", because for straight instance pricing DO beats AWS pricing in almost every way. This is one of a handful of bread-and-butter services AWS offers (lift-and-shift compute). DO also most often does better with respect to performance (CPU/compute) when compared directly [0][1][2][3]. This calculator [4] at DO showcases cost comparisons across all major cloud vendors, and I've validated the comparison with AWS in the last month - the prices check out.

> They're the only vendor I deal with who are on first name terms with the NSA and sell in tremendous quantities to the US government.

This is a rather naive comment. I happen to work in the security industry and every cloud vendor has direct ties to 3 letter agencies - that's not at all unique to AWS. Sorry to burst your bubble, but also every notable security player in the industry has similar relationships. It's not unique. It's also more advantageous for the three letter agencies in many of those relationships. Also, AWS, along with everyone else - has relationships in security information sharing for verticals. So, yes, AWS is part of FS-ISAC, as one example. Just. Like. Everyone. Else.

[0] https://www.vpsbenchmarks.com/compare/docean_vs_ec2

[1] https://www.bunnyshell.com/blog/aws-google-cloud-azure-digit...

[2] https://www.digitalocean.com/resources/cloud-performance-rep...

[3] https://www.upguard.com/blog/digitalocean-vs-aws

[4] https://www.digitalocean.com/pricing/calculator/


> what are you trying to say

I'm saying this is my bread and butter, and I've seen the same pattern in every company I've crossed the doors of. It's no made-up scenario: when a team of contractors has spent 12 months building out some service, not a single person will have given a damn about costs, so cost optimization is purchased separately. The contractors aren't wrong, nor is the business wrong, nor is this pattern unique to cloud spend.

> I happen to work in the security industry

Then surely you will understand how an entire subindustry can exist to mop up after engineers who could otherwise 'simply' avoid most mistakes if they just spent more time Googling.


> Then surely you will understand how an entire subindustry can exist to mop up after engineers who could otherwise 'simply' avoid most mistakes if they just spent more time Googling.

Yes, I'm sure an entire $50 billion industry comes down to, like all your arguments, just lazy people, and the simple fix is "Googling".


"It is crystal clear that this comment validates the misunderstanding of how bandwidth is priced in the real world and how"

If you don't think AWS bandwidth pricing is something in the real world I don't know what to say :) AWS is now broken out in Amazon financials - worth a look to see what they are raking in and the margins they are getting in the real world :)

"Clear pricing you say? I'll use Corey Quinn (The Duckbill Group [0]) as an example again"

His entire business depends on the fact that AWS pricing is clear and public. For a service like Cloudflare (i.e., call for pricing / dealing with a salesperson trying to figure out how much they can squeeze you for upfront or on renewal), this type of service is much harder.

In short, if your bill is too high, you can talk to someone like Corey and they can probably help you bring it down.


> If you don't think AWS bandwidth pricing is something in the real world I don't know what to say :) AWS is now broken out in Amazon financials - worth a look to see what they are raking in and the margins they are getting in the real world :)

This comment confirms that you have a misunderstanding of how organizations can and do buy bandwidth via high-cap Internet offerings in the real world, given how you've misunderstood my argument completely. My point is that organizations like AWS, Cloudflare, GCP, and many, many organizations around the world still buy Internet connectivity this way (directly). My unchanged statement, all along, has been regarding the egregious margins AWS makes on bandwidth, which they charge for in a very asymmetric and oversubscribed manner. It's hard to have a conversation about these things when the basics of how businesses operate aren't understood by those making comments, unfortunately.

> His entire business depends on the fact that AWS pricing is clear and public.

I'm not sure how many enterprise agreements you've helped derive or review, but you can get agreed pricing from any cloud vendor - Cloudflare included. If you're spending any amount monthly beyond what appears to be a personal account this is not your issue.


Google Apps is likely a poor example of your point. All the people who were using the free tier were never forced to paid plans.

I'm still using free tier Google Apps in multiple places, even though they haven't offered new free accounts for about ten years now. They even still let you create new users for free in these legacy GApps.

Interestingly, I would likely migrate off of GApps to a different paid service if Google changed their minds; however, I don't think they have a strong incentive to apply pressure here as long as Gmail.com accounts are free.


> We save 7TB per mo egress because of Cloudflare's free cache (through Workers) and pay nothing for it.

That's huge. Could you explain a bit more about how you achieve that? I've been exploring Cloudflare's cache; AFAIK, for anything not cached by default you need a special page rule, e.g. for .html. I tried a wildcard domain.com/* and it doesn't seem to make any difference. Are you using workers to cache specific file types?


> Are you using workers to cache specific file types?

Yes: using Workers as an API gateway and the Cache API as a CDN.

[0] https://developers.cloudflare.com/workers/learning/how-the-c...
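
For anyone curious, here's a minimal sketch of that pattern. It assumes the zone routes the relevant paths to the Worker and that the @cloudflare/workers-types typings are available; it's illustrative only, not the poster's exact setup:

    // Serve from Cloudflare's edge cache when possible; on a miss, fetch from the
    // origin once and store a copy so later requests never touch the origin.
    addEventListener("fetch", (event: FetchEvent) => {
      event.respondWith(handleRequest(event));
    });

    async function handleRequest(event: FetchEvent): Promise<Response> {
      const cache = caches.default; // Cloudflare's cache for this data center
      const cached = await cache.match(event.request);
      if (cached) {
        return cached; // cache hit: no origin egress at all
      }

      // Cache miss: this origin fetch is where (paid) egress happens, once.
      const response = await fetch(event.request);
      if (response.ok) {
        // Store a copy asynchronously so the client response isn't delayed.
        event.waitUntil(cache.put(event.request, response.clone()));
      }
      return response;
    }

In practice you'd also restrict this to GET requests and respect Cache-Control, but that's roughly the shape of a Workers + Cache API setup.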


Thanks, what type of files do you serve through that?

Executables, raw binary files (like Apache Parquet), and (rarely) JSON.

What Cloudflare is missing is a simple cost calculator. I don't want to contact sales for it.

Interesting. Can you be more specific? Is it possible those prices are conflating different services, e.g. CDN vs network tunnels vs video streaming vs workers vs Pages vs object storage egress?

Even within just the CDN category, Cloudflare does a lot with its basic CDN offering (bot blocking, transformations, dynamic caching, etc.) that other vendors may charge as separate options.

That's not to say I don't believe you, I just wanted to make sure it's apples to apples.

Cloudflare's pricing model only really makes sense, IMO, if you're either a small/med business (which makes it an incredible deal) or if you're working cloud-native using their edge functions (workers, pages, etc.). If you're primarily using them to shield and proxy a LAMP monolith or similar for a large number of users, yeah, I can see how that would get expensive really quickly. There are other vendors who specialize only in that, being a dumb CDN. Cloudflare's value is that they enable completely new architectures based on their network topology... you can't easily do something like that on Fastly, for example.


$0.035/GB sounds about an order of magnitude too high once you get to a large scale, were these clients doing small amounts of bandwidth?

On the other hand, $0.0021/GB is far on the cheaper end of the spectrum, who is it that offers something that low?

Agreed all around that the pricing is frustratingly opaque.


We're talking about AWS though, and with AWS, if you do < 10TB, it's $0.085 per GB from CloudFront. So Cloudflare is still much cheaper in that case.

Fly.io looks interesting, though; never heard of them.


I can confirm these $s

AWS free tier is horrible. You have to enter a credit card to sign up, and then there is no way to prevent it from being charged when you go over the limit for some reason. If you get on the front page of HN, for example, you might be majorly screwed.

It's a test account, I just want it to shut down when the limit is reached.


I think AWS drastically needs to create some type of "sandbox account" flag that severely locks down the services you can use and the amount you can scale up, exactly for reasons like you said.

However, I also think a big problem is that many people on the internet and especially people who try to sell AWS tutorials or learning courses push AWS as some toy that every developer should sign up for on a whim without understanding what they are doing. An AWS account is an industrial-grade tool, it's not a toy, and it should be treated as such. It's like renting a backhoe when you don't even know how to use a shovel yet, and then being surprised when you completely screw up your yard.

Sites like acloudguru that offer ephemeral sandbox AWS accounts are becoming more popular, and people new to AWS should really be steered towards those.


It has been a while since I messed with the AWS panel, but IIRC you can set budget alerts so you’re notified once a threshold is crossed. It’s not a perfect solution, but if you expect to spend $0 then an alert on $0.01 is pretty trivial to set up and goes a long way to prevent end of the month surprises.
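
For reference, a rough sketch of creating such an alert programmatically with the AWS SDK for JavaScript v3 (@aws-sdk/client-budgets). The account id and email are placeholders, and the exact field values are worth double-checking against the CreateBudget docs:

    import { BudgetsClient, CreateBudgetCommand } from "@aws-sdk/client-budgets";

    const client = new BudgetsClient({ region: "us-east-1" });

    async function createZeroSpendAlert(): Promise<void> {
      await client.send(
        new CreateBudgetCommand({
          AccountId: "123456789012", // placeholder account id
          Budget: {
            BudgetName: "free-tier-guard",
            BudgetType: "COST",
            TimeUnit: "MONTHLY",
            BudgetLimit: { Amount: "1", Unit: "USD" }, // nominal $1/month budget
          },
          NotificationsWithSubscribers: [
            {
              // Email as soon as actual spend exceeds one cent.
              Notification: {
                NotificationType: "ACTUAL",
                ComparisonOperator: "GREATER_THAN",
                Threshold: 0.01,
                ThresholdType: "ABSOLUTE_VALUE",
              },
              Subscribers: [{ SubscriptionType: "EMAIL", Address: "me@example.com" }],
            },
          ],
        })
      );
    }

    createZeroSpendAlert().catch(console.error);

As the replies below point out, this only notifies; it doesn't stop anything by itself.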

Yes, you can set up a budget alert. I have it set up for one of my work accounts. Problem with that is that it's an email, and there's usually some delay in it going out. It's not a great solution for accidental traffic spikes (surprise HN post, or someone finding and abusing a public S3 object you have hosted, etc).

So you get an email saying your $10/mo site is now $1000 for this month, and climbing.

In general I wouldn't recommend using AWS and expecting the free tier for anything that's going to be public facing or autoscaling.


> It's not a perfect solution

It's not a solution at all, given that the alerting process can lag behind the logging process by several hours or more. If you've hit a traffic spike, it could have rolled over your site and gone in that time, leaving you with a big bill.

Alerts are not a viable solution to traffic spikes unless they're real-time and absolutely bulletproof. AWS's alerting is neither.


An alert is not a cap. They should add a cap to just shut down/delete resources if the cap is hit.


Note that these are not instantaneous. You can still incur charges before the budget action kicks in and terminates your resources.

Ok so how would this work in any other way?

Should _every_ S3 action have a `if will_incur_charges() and should_not_incur_charges(): raise Exception()` statement in its critical path? No, of course not. Everyone gets slower for nobody's benefit. It has to be delayed.

But then you run into an issue: what if you end up costing AWS $100 before the budget action kicks in? Should you not pay that? Why not?


It’s a business decision on AWS’s part. If they want to reduce cognitive load or objection of some customers, they’ll be more willing to “eat it”.

My strong guess is if you had a free account, set up a budget cap, went over it, and they decided to charge you*, a quick email to support would get it waived.

I’m very much a fan of AWS, in part because while they have the chance to uphold the legal terms, my experience is that they’re pretty customer friendly.

* Early on, I had many bills under $1/mo that they just comped without me having to do anything.


> Should _every_ S3 action have a `if will_incur_charges() and should_not_incur_charges(): raise Exception()` statement in its critical path?

Budget actions work by applying a Deny All via IAM, which is essentially exactly that.

The problem is not the shutting down, it's the detection. AWS billing has a resolution measured in hours, which has limited usefulness on a platform where you can rack up thousands of dollars in charges in just a few minutes.


> Should _every_ S3 action have a `if will_incur_charges() and should_not_incur_charges(): raise Exception()` statement in its critical path?

We're talking about one account-wide flag `has_exceeded_billing_limits`. Changes are infrequent, and can be pushed into caches. Small overruns while the flag pushes are trivially eaten by AWS.
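
A sketch of that idea (purely hypothetical, this is not an existing AWS mechanism): each service checks a locally cached copy of the flag before doing billable work, accepting a bounded staleness window:

    // Hypothetical account-wide "has exceeded billing limits" flag, cached
    // locally so checking it adds no meaningful latency to the request path.
    type AccountId = string;

    interface FlagEntry {
      exceeded: boolean;
      fetchedAt: number;
    }

    const flagCache = new Map<AccountId, FlagEntry>();
    const TTL_MS = 5 * 60 * 1000; // tolerate up to five minutes of staleness

    async function hasExceededBillingLimits(account: AccountId): Promise<boolean> {
      const entry = flagCache.get(account);
      if (entry && Date.now() - entry.fetchedAt < TTL_MS) {
        return entry.exceeded;
      }
      const exceeded = await fetchFlagFromBillingSystem(account);
      flagCache.set(account, { exceeded, fetchedAt: Date.now() });
      return exceeded;
    }

    // Placeholder for a cheap lookup against the billing system's flag store.
    async function fetchFlagFromBillingSystem(_account: AccountId): Promise<boolean> {
      return false;
    }

    // Usage inside a billable operation:
    //   if (await hasExceededBillingLimits(accountId)) { /* reject the request */ }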


You claim that a bunch of GPU crypto miners won't exploit a chance to incur billing overruns if Amazon waives charges?

The Amazon deal is simple: very clear pricing, pay for what you use.

Cloudflare - can you link to the page where they show the cost of bandwidth? Still waiting.


True. But you can do it on forecasted costs.

And if you get in a car accident and don't check your email for a few days? It's extremely hard to understand how a company with their resources is able to send a notification but not able to shut it off if that's what the customer wants.

Well, they're two different problems. For AWS to give you an alert, a service X needs to send billing data to the billing system.

For the billing system to then "turn off" X it needs a number of things.

1. It needs the ability to reach back out to that service. It probably has no idea what it is; all the billing system is likely to receive is something like:

    {service_name: "X", action: "Put"}
i.e. pretty opaque data with just enough structure to know "This costs X cents and happened Y times".

So now your billing system needs to be able to resolve "X" back to some AWS resource that it can talk to. Both the resolution of X as well as the "billing can now talk to every single AWS service" are pretty heavy lifts.

2. It needs to know what "off" is. "Off" for S3 could mean a lot of things.

a) Delete the bucket and all data inside of it

b) Keep the bucket but delete all data inside of it

c) Keep the bucket and the data but disable API access

etc etc. Do you disable PUT? GET? Both? What if what's blowing up your billing is GET?

And this really doesn't get easier for other systems. Do you back up a database before killing it? That incurs charges too.

I don't see AWS somehow solving this in a "one size fits all" way because there isn't one.


Admittedly, all of these are real issues. The thing is that for a company with a $1.816 trillion (1,816 billion!) market cap as of today, all of these issues are easily solvable. But it's not a matter of "we can't solve it" or even "we don't have the resources to do it; there is some higher-priority problem to tackle first".

It's not an engineering problem at all at its heart.

It's a marketing/business problem that someone somewhere is thinking that Amazon can provide a free service to X users, knowing that Y (X, Y positive, Y << X) users will go over their "free tier" usage and pay for all X's costs, maybe even more, making the free tier a profitable business on its own.


> It's not an engineering problem at all at its heart.

My point is that it is definitely an engineering problem as well as a product problem.

a) It's going to be super technically difficult to build (especially in a way where it's responsive at a granular basis to handle huge blow-up bursts)

b) It's not even clear what you're supposed to be building

None of what people are proposing is well defined or easy to build.


The types of questions you are mentioning basically come down to different people needing different configuration options, and they come up every time you design any complex technical feature that is going to be used by millions of people. These are mostly product and UX questions, and NOT engineering ones. The only engineering problem I can think of is the latency in getting real-time billing information.

The way you usually solve this is by having sane defaults, and giving users different mechanisms for configuration based on how complex their configuration needs are. This can take a tiered approach.

As an example, simpler and straightforward things (such as disabling egress traffic from S3 if the bill exceeds X) can be in the UI itself. Then, for customers who need more control, there can be an option to configure via JSON or YAML, similar to CloudFormation. For anyone who needs even more, an option to call a customer-defined Lambda function would give them the ability to look at any metrics and take appropriate action.


> NOT engineering ones

Engineering problems:

1. How do you actually have a billing system reach out to every other system? It has to resolve the resource, network with it, have IAM permissions, a network route, etc.

2. How do you handle consistency?

3. How do you make it responsive?

4. How do you add this to every billable entity?

I mean, it's just a shitload of work, and all of that just to get to a terrible idea.

> The way you usually solve this is by having sane defaults, and giving users different mechanisms for configuration based on how complex their configuration needs are. This can take a tiered approach.

The sane default is you pay for what you use, and you can listen to billing events and build all of the logic you're talking about if you want to.


It's not hard to understand at all: they willingly screw over customers. Just look at all the posters here falling all over themselves to excuse this blatantly shitty, customer-hostile behavior.

The solution is to not use AWS. It is not a sacrifice to not engage with shady companies.

Paying for what you use, the peak of shady business practices!

That's really disingenuous.

The complaint here is that Amazon offers a free tier supposedly for learning the platform, but it is a giant footgun that shoots a ton of people in the foot.

People are reasonably asking for hard limits to protect them from this highly foreseeable situation wherein a complicated cloud offering can go on a spending runaway.

It is literally as easy as following a beginner tutorial and selecting the database instance the tutorial uses and leaving it running. That could be a several hundred dollar mistake.


"Kill my service if I hit a billing limit" is a scary footgun as well, and one that could impact larger customers.

I don't think it's unreasonable to say that if you're using AWS you're taking on some responsibility to make sure you're not blowing up your bill. AFAIK you are automatically enrolled in emails that will tell you when you're about to exit a free tier limit, so it's not like they won't warn you.


Make it a clearly selectable option, explaining the downsides of both. Be vocal about it being selected if you want to.

But the thought that not immediately reacting to an e-mail can cost me a month's salary or more is terrifying. Maybe it'll get waived. Maybe it'll be waived in the form of credit that I can only spend on the product (i.e. for my purposes, not waived). Maybe I'll be stuck with it. The "maybe" is the danger here.

Let's say I put some obscure 50 MB dataset into a public S3 bucket, and pay my <1 cent per month to host it and like a dollar for each 200 downloads. Rarely does a month exceed a dollar or two, all is good.

Then someone builds a poorly made colab that downloads the entire dataset each time it is run, and the colab hits the front page of HN while I'm traveling, and makes it to social media the next day because it shows something funny. And people don't run it just once, they play with the parameters, running it multiple times.

By the time I'm back and see the e-mail, a 10000 USD bill may be waiting for me.

Too obscure? How about this one:

The operator of a semi-popular website has decided to hate me. Each page load now contains an <img src=[my image]?t=[timestamp] width=1 height=1> in the header, pointing to the biggest image I'm hosting for my small static website.

Edit: Even better example

I've accidentally left an API key in a git repo that I pushed online. My carefully set up billing alert was deleted, then my account spun up as many of the most expensive GPU instances as quota would allow and started mining Monero. In this scenario, I think a $10000 bill would be "getting off easy".

(Just to be clear, these are hypothetical scenarios. If anyone knows how various cloud providers would react to those in practice, or if you know that there are countermeasures that would reliably stop them, please do tell!)


Nobody is suggesting to make that the default uniformly across all AWS accounts.

It would be excellent for anybody intending to use the free tier for its stated purpose (getting to know the AWS platform) who would like to make the _choice_ to shut off their services if they are going to exceed the free tier quotas.

That way you are free to experiment, and if you blow up something while learning, you're not then leaning on the mercy of AWS support to refund you.


The free tier is not just for getting to know the AWS platform. I worked with customers hosting the entirety of their early applications on free tier services.

Playing around turns into production.

When your service goes down and you lose $XXXXXX revenue suddenly it's AWS's problem. AWS has taken the approach that keeps the lights on, assuming its customers understand their unit economics.


Right there at the top of https://aws.amazon.com/free/ is this:

  AWS Free Tier
  Gain free, hands-on experience with the AWS platform, products, and services
Just because you've worked with customers hosting in the free tier doesn't mean that all free tier users would or should choose to prioritize uptime over cost.

> "Kill my service if I hit a billing limit" is a scary footgun as well, and one that could impact larger customers.

Then make it opt-in but a highly visible one during account creation so that people who just want to test can enable it.


I think I recently set an alert on AWS that emails me if I surpass a limit that I set.

Have you checked out AWS console recently?


That's been a feature since AWS started. That isn't at all what I am talking about above.

Paying for what your users use, with no way to say no to the traffic, is exactly why billion dollar companies like Cloudflare exist - to protect people from "paying for what you use".

So yes, it is shady not to protect customers from that IMO.


Um. They'd much rather deal with a waiver request from someone for $100 than deal with a bad credit card in accounting spinning down a major account.

BTW - I've never heard of the latter happening at AWS ever, and I have at other hosting providers.


I don't think AWS's billing system is robust enough for that. In pure AWS fashion, you have to create a billing alarm, which pushes an event to SNS, which triggers a Lambda to shut down your stuff, but it is possible. The catch is that billing metrics are estimates and alarms might be delayed.
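
A rough sketch of the Lambda end of that pipeline, using the AWS SDK for JavaScript v3. The "env=sandbox" tag filter is an assumption, and what "shut down your stuff" means will differ per account:

    import {
      EC2Client,
      DescribeInstancesCommand,
      StopInstancesCommand,
    } from "@aws-sdk/client-ec2";

    const ec2 = new EC2Client({});

    // Invoked via the SNS topic that the billing alarm publishes to.
    export const handler = async (): Promise<void> => {
      // Find running instances tagged as disposable (hypothetical "env=sandbox" tag).
      const described = await ec2.send(
        new DescribeInstancesCommand({
          Filters: [
            { Name: "tag:env", Values: ["sandbox"] },
            { Name: "instance-state-name", Values: ["running"] },
          ],
        })
      );

      const instanceIds = (described.Reservations ?? []).flatMap((r) =>
        (r.Instances ?? []).map((i) => i.InstanceId).filter((id): id is string => !!id)
      );

      if (instanceIds.length > 0) {
        await ec2.send(new StopInstancesCommand({ InstanceIds: instanceIds }));
      }
    };

And since the alarm itself can lag billing by hours, this is damage control rather than a hard cap.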

Yup, seems like they have a lot of "eventually consistent" batch processing going on. Certain things like CloudTrail have 4-6 hour lags in billing (unless it's improved in the last year).

It'd be a huge engineering effort to make something instantaneous--I think the closest thing they have to such a system is whatever they use for rate limiting or IAM.

I'm guessing there's a pretty high overhead to trying to do realtime instead of batching

AWS oopsies suck but I think their billing system is pretty robust compared to lots of usage based billing systems (like, say, utilities)


I don't think you'll find much sympathy for the free thing not being good enough.

It's pretty reasonable for them to ask for a CC -- making it too easy to get free compute/bandwidth is opening the door wide for abuse.

But yeah, everyone wishes they'd have a sane way to halt services if over budget.


Yup, this is one of several reasons I just pay the $5/mo for a similarly specced VPS with another provider.

Off topic, but Oracle Cloud's free tier never charges unless you upgrade your account. You can have 5 VMs, 200 GB block storage, etc.

Only 4, not 5, because each instance requires a boot volume of at least 50GB, and the free tier gives you 200GB of volume. I tried to set up a fifth instance but received obscure errors, until I realized that there was not enough space for the boot volume. There is also no way to create a smaller boot volume or decrease the size of an already existing volume. Have I overlooked something?

Yeah, but they also probably desperately need customers. I can imagine that they have lost a lot to AWS, Azure, and Google.

Also the AWS free tier expires after 12 months (unlike, say, GCP).

AWS has both 12-month free tiers for some services and pretty generous always-free offerings, like the data transfer announcement today. The 1TB/mo of CloudFront traffic is now always free, as is 100GB/mo of data transfer from AWS regions to the Internet.

https://aws.amazon.com/free/


It went from 1GB to 100GB; meanwhile, Oracle is offering 10TB on their free tier.

https://www.oracle.com/cloud/free/


Yeah, but then you have to use Oracle

I've used Oracle Cloud, and it's actually pretty nice and well-designed at least from my point of view. It gives the AWS Dashboard a run for their m̵o̵n̵e̵y̵ egress fees.

But I still don't run anything important on it or push the limits of the free tier. Oracle doesn't have a good reputation. Also, "Oracle Unbreakable Linux" is literally just RHEL rebranded, but it's not a community project and they don't like to acknowledge it, so it feels particularly shameless, especially since they are selling "support" for it.


Bear in mind that premier support is included for free for cloud customers. So for example ksplice and dtrace are included. Ksplice is fantastic and it's really worth it.

I also have an ARM machine hosted, good if you want to compile stuff, but the networking has been super flaky and unreliable. I would definitely not call their service nice when quite often you can't log in because of network issues. Their separate identity (legacy stuff) / cloud service dashboards were also confusing.

People have reported having their Oracle servers shut down without warning for the horrendous crime of... running IRC servers.

That's not an uncommon clause in terms of service; IRC servers tend to be DDoS magnets. Other common no-nos are game servers and streaming hosts.

Hosting is one of those things where you really ought to spend 10 minutes reading to find out what you are and are not allowed to do. It's basic due diligence when you're renting someone else's hardware and bandwidth.


Let's not poo poo actual competition. If Oracle _wants_ to compete with AWS that's a pretty good thing for everyone.

Yet signing any contract with Oracle is risky, and I would rather pay more than risk dealing with Oracle itself.

But yeah, more competition is always good.


Yes, it's definitely good if it causes AWS, GCE, Azure, etc to increase allowances to compete, even if you (like me) plan to never touch that Oracle service with a ten foot pole attached to someone else's computer.

You're not the only one. Almost everyone that did significant deals with Oracle at some point has this opinion.

Plus they hired away a ton of AWS engineers to build their competitor right down the street from AWS headquarters. I wouldn't dismiss it out of hand.

People aren't dismissing the product, they're dismissing doing business with Oracle regardless of the product.

I want to compete in the Olympics, but whether that actually happens is a different question.

Can someone educate me on why Oracle is bad? I don’t have much experience with dealing with them as a company. Only used their database before.

They also have really questionable business ethics. At one startup, I caught them grabbing an unauthorized copy of our customer's source code when we were on-site at the Emerald City to tune it. It wasn't a rogue engineer, either; it was an order at least from their manager and probably higher. At another startup, we pitched a shared-cache idea to take advantage of our hardware. They declined to work with us, then included exactly that feature as a marquee element of their next major release. Then I worked at Red Hat, where anger over their re-badging of RHEL as OEL (among other deeds) ran deep. Over and over again, they've abused partners and customers and even employees. They were the most evil company in the industry before Facebook and others even existed.

See also: Bryan Cantrill's "lawnmower" talk about what happened to Solaris after the Oracle acquisition. https://youtu.be/-zRN7XLCRhc


https://ariadne.space/2021/07/14/oracle-cloud-sucks/ this person had their account terminated for the crime of running an IRC server.

They also have a track record of leaving dials in their software that are very easy to turn on but then mean you need to be on a higher license tier; they will ding you in an audit and expect penalty fees for it (e.g. various flags in Oracle DB, the Extension Pack [1] in VirtualBox, Java features [2] like Java Flight Recorder).

[1]: https://old.reddit.com/r/sysadmin/comments/d1ttzp/oracle_is_...

[2]: https://upperedge.com/oracle/top-3-reasons-oracle-java-users...


They tend to have aggressive sales, a high degree of lock-in, and high license costs.

You basically get stuck paying a lot for Oracle software, and that doesn't directly equate to the value the software provides.


Plus, Oracle's egress, last time I checked, was 10x cheaper.

They have a terrible sign-up experience; I'm far from the only one who hasn't been able to sign up there because of some cryptic error. When I tried to google it, there were reports of them requiring an email address with your full name in it. But changing to that didn't help, and using different credit/debit cards didn't help either. The prices are quite nice as far as I can tell from the outside, though.

My day-to-day browser is a hardened Firefox (plenty of about:config tweaks, resistFingerprinting, uBlock Origin, temporary containers, etc.). For signing up to Oracle Cloud I used a tmpfs-based Chromium browser (i.e. completely fresh, factory defaults) to avoid issues resulting from my hardened browser. Oracle Cloud refused to accept my credit card with a bogus error message saying something about fraud and/or incomplete/incorrect personal details. I searched for the error message, and someone on a random website mentioned having issues with Chrome and suggested trying Firefox. I found that quite unusual, as it's usually the other way around, but I tried my day-to-day hardened Firefox and it instantly worked; the account was activated within minutes. Go figure.

Same here. My sign-up experience was a nightmare and it took 3 weeks (to create an effing account).

I was at a point where I just said "screw this, I don't care about the free trial, just give me an account where I'm supposed to pay for everything". For some reason you have to go through the free trial in order to get a regular account.

I didn't even try out the platform once I got my account. They managed to drain all my energy in the sign-up process.

I'd rather go bankrupt from AWS/GCP's nefarious egress fees than have to deal with Oracle again. Serves me right for giving them a shot.

I second the recommendation to stay as far away from Oracle as you can, even if their OCI pricing seems incredible.


They've invested a lot into the signup experience since 2019 but offering a generous Free Tier like this means they also attract a LOT of people trying to exploit it for whatever the badness of the day happens to be.

Providers always have a false positive rate on signup experiences and last time I looked Oracle had provided a way to contact them and say "You got it wrong" (and they are reasonably good at fixing those situations).


This pretty much sums up my experience. After signing up and successfully getting an account, they then cancelled the account with no explanation. I tried reaching out to them through several channels and had to create several more accounts just to try and get support. In the end, the only answer I ever received was "The error you are getting is intended." Nothing more was ever said or given, and I gave up on it ever working.

Is there a list of cloud providers that have a free tier somewhere?


Well if that isn't a direct answer to https://blog.cloudflare.com/aws-egregious-egress/ ...

I use 500GB - 1TB per month on CloudFront, costing about $50-100 per month, and I was going to move this over to Cloudflare to take advantage of their savings. However, this AWS change will basically wipe out my entire CloudFront bill. I should send Cloudflare a Christmas card to say thanks.

It leaves you pretty close to the edge of paying 8.5+ cents/GB when you go over though.

AWS giving you 95GB egress for free is not at all a direct answer to the insane prices

Just because it isn't as good doesn't mean it's not a direct answer. This is basically the lowest hanging fruit response AWS can give against Cloudflare to convince people not to migrate over.

Maybe the idea is that if you're doing 1+ TB of CloudFront traffic, you're already deeply locked into AWS anyway and less willing to make the jump.


1GB of egress costs about 9 cents.

The cost savings is equivalent to ~$9/mo in standard US regions. Nobody is going to migrate clouds over a $9/mo saving.

If we're talking about CloudFront that comes out to $85. That's actually a pretty good savings, but it's distinct from egress. CloudFront's pricing isn't locking customers in due to egress pricing because it's a CDN, not a data storage service.


A partial direct answer that's still less free egress than almost every other free tier and/or sub-$5 VPS. And as far as I can tell, the egregious pricing is still there once you're past the free limit.

:thinking face

Lol, thanks on behalf of everyone enjoying the AWS free tier, jgc.

My thought as well. I was just about to spin up a new service on cloudflare and now I’ll stick with AWS.

So which cloud provider offers a HARD spend limit? I just want to fund my account e.g. $20/month and never, ever, spend a cent over that. Even if my account gets hacked for bitcoin mining or whatever, I don't want to spend a cent over that.

With AWS, you can do it via a trigger on a spend notification and a script, but the whole thing is a giant kludge. It should be a default feature - for all cloud providers.

I'm literally using them less because this isn't a feature they offer. Even the free tier of AWS is too risky without hard limits.


I think the closest you can get is using a VPS like DigitalOcean where you pay $X for a server and there's no autoscaling to worry about. But even with those, if you go over the bandwidth limit (although with DO the bandwidth limit is a lot higher) you would be charged more.

The unfortunate reality is that hobby developers that just want to pay $20/month aren't the target audience for GCP etc. They don't really care if you're using them less for your personal hobby projects. They target large enterprises, and those large enterprises would have very little use for something like "cap my spend at $20".

Even as an AWS employee, I sometimes use non-AWS hosting providers for my own projects. Even outside of the billing situation, AWS is often too complicated for my use cases. It's just not targeted at me and my hobby development projects.

disclaimer: am AWS employee but the above is my own opinion and not official position of the company, etc etc.


> and those large enterprises would have very little use for something like "cap my spend at $20".

For the enterprise as a whole, probably not. But it would still be useful to be able to create sandbox accounts for experimentation with a hard limit on spend. Or give developers their own cloud accounts to run development and testing infrastructure that they can control, without having to worry about them accidentally spending way too much.


+1 to this; also an AWS employee, also use other VPS providers for personal projects (Hetzner and Vultr). Don't need the full breadth of AWS services to tinker with Caddy and Tailscale, but more than that I simply follow the practice of "don't shit where you eat".

Backblaze B2 has separate hard spending limits for storage, bandwidth, and requests, which is an elegant solution to the "but what happens to your data if you hit the limit?" objections you often see in these threads.

Digital Ocean could be interesting. I'm used to AWS and Azure, but DO is doing great too.

Not a full "cloud provider" per se, but you might want to check out NearlyFreeSpeech.net[0]. It's fully pay as you go--"If your balance runs out, your web site hosting suspends automatically."

(I should note that I haven't used them myself, I'm just impressed by their website/business model.)

[0]: https://www.nearlyfreespeech.net/services/pricing


Maybe not the kind of "cloud provider" you're thinking about but tarsnap works on a prepaid basis. You top up your account and it debits that account daily. You get warnings when you go in the red. Super simple.

> So which cloud provider offers a HARD spend limit?

It still baffles me that this isn't the default. Then again, egress/ingress costs also baffle me: why not just offer capped speed and unlimited bandwidth?

I doubt that I'll ever willingly use a platform that I pay for myself which can just decide to charge me bunches of money because of my site/app getting DDoS'ed (assuming not all components are fully protected at the edge or something) or suddenly gaining popularity.

In my eyes, such scalability should be opt-in (or at least opt-out), and by default my apps should just break under the load, much like any self-hosted software would. That may even be preferable in some circumstances (e.g. the API manages backpressure as best it can; once request queues fill up, it just returns "503 Service Unavailable" with something like "Retry-After", and any clients/front ends are supposed to show a message to try again later, or automatically retry after a while).
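
As a tiny illustration of that kind of backpressure (a sketch only; the concurrency cap and Retry-After value are arbitrary):

    import http from "node:http";

    const MAX_IN_FLIGHT = 100; // arbitrary capacity limit for this one box
    let inFlight = 0;

    const server = http.createServer(async (req, res) => {
      if (inFlight >= MAX_IN_FLIGHT) {
        // Shed load instead of scaling (and paying) automatically.
        res.writeHead(503, { "Retry-After": "30" });
        res.end("Service temporarily unavailable, please retry later.");
        return;
      }
      inFlight++;
      try {
        const body = await handle(req); // placeholder for the real work
        res.writeHead(200, { "Content-Type": "text/plain" });
        res.end(body);
      } catch {
        res.writeHead(500);
        res.end();
      } finally {
        inFlight--;
      }
    });

    // Placeholder request handler.
    async function handle(_req: http.IncomingMessage): Promise<string> {
      return "ok";
    }

    server.listen(8080);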

To that end, here's a list of providers that give you such fixed resources (in this case VPSes) and that I personally have used in the past:

• Time4VPS: https://www.time4vps.com/?affid=5294 (currently hosts most of my sites, hence affiliate link)

• Contabo: https://contabo.com/en/

• Hetzner: https://www.hetzner.com/cloud

• Scaleway: https://www.scaleway.com/en/elements/

• Vultr: https://www.vultr.com/products/cloud-compute/

• DigitalOcean: https://www.digitalocean.com/products/droplets/

Of course, some of those also offer managed services, but in my eyes it's far safer to just run MySQL/MariaDB/PostgreSQL/MongoDB/Redis and any other piece of software in Docker (or OCI compatible) containers, as opposed to using something proprietary, even with protocol compatibility. That way, it's also possible to migrate between providers easily, should the need arise.


Would you be comfortable with losing all of your data and going down when the spend limit is hit? That's fundamentally why this is not as simple as you think

> Would you be comfortable with losing all of your data and going down when the spend limit is hit?

This doesn't feel to me like a generous enough statement about all of the finer points.

Would I be okay with my servers no longer responding to web requests (or doing so with reduced port speeds with some of the above hosts, e.g. Time4VPS) in case of spending limits being unexpectedly hit? Yes, definitely, since my projects going dark for a bit or slowing down would be preferable to ending up unable to pay my rent and having to rely on public outcry about the large bill, and the vendor's mood that day, to let that one slide.

Would I be okay with my servers running out of space and no longer writing new data to any database, merely responding with the appropriate errors instead? Yes, definitely, because that probably signals that lots of space has suddenly started getting used for no good reason, so fast that I didn't even get to plan the appropriate scaling once Zabbix (or whatever monitoring tool has an alert set up for this) warned me about 80% of the disk being full. This would probably also be yet another sanity check.

Would I be okay with my servers suddenly ceasing to exist and being wiped, for any reason short of excessive abuse complaints or repeated ToS violations? I most certainly wouldn't want this to happen, yet in my experience it has never actually happened. Some of the vendors listed allow you to prepay for the resources you're about to use (e.g. Time4VPS and Contabo), even offering discounts for longer-term reservations much like AWS does, but without the unexpected charges that AWS would bring. In contrast, some of the other vendors listed allow you to pay based on hourly usage (Hetzner, Scaleway, Vultr, DigitalOcean, at least IIRC), but they also mostly avoid unpredictable pricing spikes because of set limits (ingress/egress charges may still apply, however - a worrying trend in the industry that muddies the waters): if I pay for a $5 VPS every month, that's what I can expect to pay most months regardless of usage (assuming 100% uptime).

With these factors in mind, my servers being wiped would probably have to happen due to either me misusing those services, or me failing to pay the bills even with the more predictable pricing structures, which is actually very much like a person's electricity being turned off because they didn't pay for it - as unpleasant as that'd be, no surprises there. Personally, I don't subscribe to the belief that managed services with unpredictable billing are the only way to do software nowadays, something that Richard Stallman coined as SaaSS (Service as a Software Substitute) and about which you can read more here: https://www.gnu.org/philosophy/who-does-that-server-really-s...

Ultimately, as long as VPS uptime with predefined resources is the unit of computation you're paying for, everything else should be fairly simple from there onwards.


As long as you are paying by the minute for server time and by the byte-hour for storage, running out of money means you don't get any more minutes or any more byte-hours. No more server time means your servers just vanish into the ether. It doesn't mean they start returning error pages about being unable to write to the database.

They don't reach into your servers and meddle with your nginx configuration. They don't shrink your disk volumes to the amount that's already in use. That wouldn't be possible.


> As long as you are paying by the minute for server time and by the byte-hour for storage, running out of money means you don't get any more minutes or any more byte-hours.

Exactly, that's why I said that being able to prepay for a month ahead (or a similar period) is a good thing, as is anything else that makes billing predictable (such as no auto scaling).

> No more server time means your servers just vanish into the ether. It doesn't mean they start returning error pages of being unable to write to the database.

Come to think of it, that's probably the case for most people out there who just use one account/card for all of their resources.

What I described is indeed possible with a tiered approach: allotting most of your funds to keeping the DBs with all of your data alive, and whatever is left to the more dynamic components - VPSes that act as load balancers or host your APIs.

Of course, there are plenty of ways to achieve something like that: multiple accounts with different virtual cards (which may or may not be allowed), using different service providers for different parts of the system (e.g. one with better storage plans, when latency isn't an issue because the data centers are in the same country), using multiple providers for redundancy (hard to do for most DBs, easier for container clusters), or even using functionality provided by the platforms themselves (which may or may not exist when it comes to billing, though DigitalOcean had a pretty lovely way of grouping resources into projects).

Come to think of it, that's probably a space that could use a lot of improvements.

> They don't reach into your servers and meddle with your nginx configuration. They don't shrink your disk volumes to the amount that's already in use. That wouldn't be possible.

This isn't even necessary. If platforms allowed me to say, "Here's a bunch of API nodes that I'll spend up to $X on for the following month, and here's another DB node that I'll prepay in full with $Y for the following month," then none of the other multi-cloud deployment strategies or tiering would even need to be considered.

If we want to consider situations with auto-scaling or other types of dynamic billing, then we should also be able to say something along the lines of: "For those API nodes with my preconfigured init script, I want at least Z instances available with the given funds, and any remainder can be used for auto-scaling up to W total nodes. If the allotted funds run out, everything returns to the prepaid/reserved minimum of Z instances for the rest of the billing period."
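
Purely as a sketch of the shape I'd want - nothing like this exists on any provider I know of, and every field name below is made up:

    # Hypothetical "budget-aware fleet" spec, not any real provider's API
    fleet_spec = {
        "api_nodes": {
            "init_script": "cloud-init.yaml",  # placeholder reference
            "reserved_min": 3,          # Z: always kept alive, prepaid
            "autoscale_max": 10,        # W: hard ceiling
            "monthly_budget_usd": 200,  # once spent, scale back to reserved_min
        },
        "db_node": {
            "prepaid_usd": 40,             # paid in full for the month
            "on_exhaustion": "read_only",  # degrade, never delete data
        },
    }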

AWS Lambda can bill usage at sub-second resolution, and yet neither AWS nor the other cloud platforms provide good resource prioritization or fine-grained billing limits - at best you get alerts. I'm not sure why that is, but until things change we'll just have to live with workarounds. Technically, some of this is already possible with something like Reserved Instances on AWS (https://aws.amazon.com/ec2/pricing/reserved-instances/), but that's still not granular enough IMO: you'd just have RIs and "everything else", as opposed to resource groups with spending limits.
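
To be fair, the alerts that do exist are at least easy to script. A minimal boto3 sketch of the classic estimated-charges alarm, where the threshold and SNS topic ARN are placeholders (billing metrics only live in us-east-1):

    import boto3

    # Billing metrics are only published in us-east-1
    cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

    # Notify when month-to-date estimated charges exceed $50.
    # Note: this only alerts, it doesn't stop or limit anything.
    cloudwatch.put_metric_alarm(
        AlarmName="monthly-spend-over-50-usd",
        Namespace="AWS/Billing",
        MetricName="EstimatedCharges",
        Dimensions=[{"Name": "Currency", "Value": "USD"}],
        Statistic="Maximum",
        Period=21600,               # billing data only updates every few hours
        EvaluationPeriods=1,
        Threshold=50.0,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=["arn:aws:sns:us-east-1:123456789012:billing-alerts"],  # placeholder topic
    )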

Furthermore, if my Time4VPS server runs out of bandwidth, I'm not expected to pay more; the port speed just drops until the end of the prepaid billing period. That sort of simplicity is one of the best current options, with minimal hassle. And if I ever can't afford to renew 10 API servers, then I'll just get 5, or whatever amount I can afford.

More platforms should work like that, perhaps at day/hour resolution, and with APIs that let us decide how often we want to renew services and for what periods - much like the clever scripts you can find online for turning off AWS (or other) instances when they're not in use. Currently, any fine-grained controls on those platforms have to be implemented programmatically, and even then it's not like you could easily cap ingress/egress costs that way.
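
The scripts I mean are usually just a handful of boto3 calls, something along these lines (the "Environment: dev" tag is my own made-up convention):

    import boto3

    ec2 = boto3.client("ec2", region_name="eu-central-1")

    # Find running instances tagged as dev/non-production
    reservations = ec2.describe_instances(
        Filters=[
            {"Name": "tag:Environment", "Values": ["dev"]},
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )["Reservations"]

    instance_ids = [
        inst["InstanceId"] for r in reservations for inst in r["Instances"]
    ]

    if instance_ids:
        # Stop (not terminate) them outside working hours; run this from cron
        ec2.stop_instances(InstanceIds=instance_ids)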


I think your line of thinking is rooted in the assumption that these cloud providers withhold what you describe as a ploy to make more money. Instead, it's that they choose to invest in other kinds of functionality; while the concepts you describe are certainly possible to build, they're demanded by a narrow subset of the market, which makes them difficult to prioritize.

That's perfectly fair, then.

I'll simply use services that fall more in line with what I want, and I urge others to do the same - to better suit their needs and avoid the possibility of some very unpleasant financial circumstances down the road, at least as far as personal projects are concerned. Hence the list of lovely providers in the post above.

They'll simply continue with their current approaches to billing, missing out on the 0.01% (read: a humorous made-up figure, so as not to create the impression that I believe huge platforms should work the way I expect; my ego isn't that big) or so of additional income they'd get by catering to all the people like me in the world. That's the beauty of a (mostly) free market: supply and demand.


It's definitely bigger than that, as losing out on people like you means losing out on your future potential successes! So it's definitely important to keep this top of mind for those of us in the biz - it's one way to find competitive advantage in the growth funnel.

Would you be comfortable with losing all of your data and going down when the spend limit is hit? That's fundamentally why this is not as simple as you think

Edit: On second thought this is probably a bad idea, but I'm leaving my original comment below for posterity.

Probably easiest to generate a virtual card number with a spend limit. Off the top of my head, I know Capital One offers this; Apple will also generate virtual card numbers, though I don't know if you can set a spend limit on them.


This just stops them from easily charging you. It doesn't change your liability.

Yeah good point, I was thinking they'd cut your service off but on second thought that's probably not the case.

Woah woah woah we're bashing AWS here :)

IMO a hard spend limit is a pretty complicated thing to implement, and billing has historically been very batch-based.


Hetzner

As an AWS user I'm a bit disappointed with this. I was hoping their answer would be to cut or eliminate some prices across the board rather than just increasing the free tier.

I find most apps are pretty binary. Either they are high bandwidth (like video and backups) or not. If you use 1TB there is a good chance you will use 2TB.

This will certainly make it less likely for low-bandwidth users to get a surprise bill, but if you're doing video, ~$85 per TB gets expensive fast. You'd better make sure your business model can turn a profit at those prices. Even then, if the average user pays you $20 a month and only streams 100 GB of video, you make money - but you'd make more money outside of AWS (rough math below).

It's almost as if AWS wants to actively discourage those use cases.
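
Rough math, assuming CloudFront's ~$0.085/GB list price and that hypothetical $20/month subscriber:

    # Back-of-the-envelope only: list prices, no volume discounts
    egress_per_gb = 0.085       # roughly the "$85 per TB" above
    revenue_per_user = 20.00    # hypothetical monthly subscription
    gb_streamed = 100

    egress_cost = gb_streamed * egress_per_gb                              # $8.50
    print(f"egress eats {egress_cost / revenue_per_user:.0%} of revenue")  # ~42%

Still profitable on paper, but that's a big slice of revenue that a cheaper egress provider simply wouldn't take.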


With re:Invent coming up next week, we may not have heard the last of their response on lowering egress fees.

Or they want to avoid making pricing more complex than it already is: they do that by significantly marking up egress and using it to cross-subsidize other things (like data ingress and internal data transfer), and they've determined that the benefit of that simpler pricing to their other users outweighs the detriment to users with egress-heavy use cases.

I've never operated at Amazon's scale, so I don't know the wholesale costs involved, but at least one estimate puts AWS's margins on bandwidth at 99% [1]. And I'm guessing ingress is a fixed cost where they're running nowhere near capacity, which is why it's free - that, and they want your data on AWS so they can charge you for storage and for downloading it later.

I realize businesses need to make a profit, but I don't think AWS is cross-subsidizing. It seems more like there are no loss leaders; they make at least a 60% margin on all products (for users not big enough to negotiate rates). They could lower their outbound prices significantly; they just choose not to until the market forces them to.

Which is smart business. But I think, given Cloudflare's pressure here, the market is calling for a more aggressive adjustment than just upping the free tier.

[1] https://www.cnbc.com/2021/09/05/how-amazon-web-services-make...


An underestimated chunk of AWS's costs is people. They employ >50k people and likely have a salary bill of several billion dollars - a significant portion of revenue.

50,000 × $250k = $12.5B

Also, they do cross-subsidize, because many AWS services are either hardly used (CodePipeline) or free (CloudFormation) and the cost to run those services is non-negligible.


Very good point re: people. Running AWS is expensive, I'm sure, but to be clear, I'm not advocating for bandwidth to be free - just that $0.08 per GB is not competitive.

Also, since CloudFormation is a feature that facilitates creating more resources that you do pay for, I'm not sure it's a great example. That's more like saying the other services subsidize the web console. CloudFormation is not a product in and of itself; it's a shared feature that spans product lines.

CodePipeline is an interesting one: if it's true that it's not widely used and costs more money than it makes, that's a no-brainer - shut it down. But there has got to be more to the story of why they haven't.

But in any case, with CodePipeline there is a clear value chain that ends at ECR/EC2/Lambda/etc. My guess -- and it is just a guess -- would be that someone feels the pipeline produces more revenue for EC2 (or similar) and that covers the cost. Or, simply, they have a path for it to become profitable.


You're overthinking it, and CodePipeline is just one example. As I'm writing this they have close to 300 services!!! Many of them, maybe hundreds of them, just straight up lose money.

The Corey Quinn number I keep quoting is that >60% of AWS revenue for large (>$100MM) accounts is EC2. That jibes with what I hear from people in the industry.

That means that the other 250+ services are splitting less than 40% of AWS revenue....


Wonder if CloudFlare’s post had anything to do with this: https://blog.cloudflare.com/aws-egregious-egress/

1 TB/month of CDN usage for free? Ignoring Cloudflare, I am not aware of any other CDN that offers anything similar.

Although their pricing [1] after the first 1TB is still very expensive.

[1] https://aws.amazon.com/cloudfront/pricing/?nc=sn&loc=3


Thanks Cloudflare

The egregious AWS egress fees have made AWS practically useless for all but enterprise customers.

Who would pay $100 for a backup test or recovery with S3?


Literally any tech company paying US engineering salaries? Try to calculate what a Bay Area engineer makes hourly sometime, then go ahead and flip that AWS cross-region data replication checkbox (and maybe make your S3 bucket immutable too - storage is cheap), then tell your boss that you just saved the company a whole bunch of money compared to your team cobbling together some janky bespoke backup solution that nobody will remember how (or if) it works 12 months from now.

Wonder if Backblaze will follow suit. They're part of the Bandwidth Alliance with Cloudflare, and that's great, but if you're using B2 for personal use and want to get data out of a private bucket, you're still paying $0.01/GB egress after the first gigabyte.

Get ‘em Cloudflare. :muscle:

I guess every little bit helps, but 100 GB/mo * $0.09/GB = $9/mo of savings. That's nice for a hobby project, but it seems utterly negligible for any actual business.

The CloudFront savings add up to about $100/mo. Still pretty negligible.

Good job Cloudflare

~~The "100GB is actually only 1GB per region" thing is some sneaky stuff :/~~ Good move on CloudFront though.

I think you misread; it says [emphasis added]:

>100 GB of data per month (up from 1 GB per region)


Ohh. *facepalm* Yeah that's way better.

> Data Transfer from AWS Regions to the Internet

This only applies to the free tier, which you age out of after a year. Who even cares? Or am I misreading this?

They can do a lot better than this.

edit: It appears that the regional transfer doesn't age out either, even though it isn't explicitly stated in this post (whereas they did state so for CloudFront).


> Data Transfer from Amazon CloudFront is now free for up to 1 TB of data per month (up from 50 GB), and is no longer limited to the first 12 months after signup.

Right, that's why I quoted the other section.

My bad, I thought you were quoting the article title. I see it's been cleared up in other replies now.

Not everything in the free tier expires after 12 months. Some remain always free.

Even now you should be getting 1 GB of free outbound traffic on your older-than-12-months AWS account.


I found it very unclear, because the next section explicitly states that, in the case of CloudFront traffic, it won't age out. But they don't make that statement about the regional traffic.

I'm guessing that's because the regional traffic never aged out before.

You're misreading it. Only some parts of the free tier "age out". Other parts of the free tier are free forever (it's really stupidly confusing). The things announced in this announcement are free forever.

Thanks, that wasn't clear to me. I'll edit my post.

It's stupidly confusing because the regional transfer of 1 GB free per month isn't technically part of the "Free Tier" as advertised on this page [0]; it's just part of the normal egress pricing model of the individual services [1] [2]. So really this announcement post is misleading/confusing, because the regional transfer increase is a change to the normal pricing model, not to the "Free Tier"... but that's just semantics. AWS really needs to fix the "Free Tier" to make it less confusing.

0: https://aws.amazon.com/free/

1: https://aws.amazon.com/ec2/pricing/on-demand/

2: https://aws.amazon.com/s3/pricing/


Thanks, really appreciate the detail. Edited my post.

Interesting timing given https://www.lastweekinaws.com/blog/the-aws-managed-nat-gatew... just came out. Does this also apply to NAT Gateways?

This is nice for backup programs like HashBackup (I'm the author). For block-based incremental backups, this new free egress allowance lets backup tools do maintenance on the backup (packing vacant space out of older backup files) without egress charges.

>Data Transfer from AWS Regions to the Internet is now free for up to 100 GB of data per month (up from 1 GB per region).

Is it 100 GB per region or 100 GB across regions? Because it sounds like the latter.


I read it as total. But that's still a good sized increase. Especially if most of your traffic comes out of 1 or 2 regions.

Sure, all the caveats from muh capitalism and muh free market still apply ...

But,

This is THE textbook example of how competition pushes innovation forward and drives prices down.

Thank you, Cloudflare!


Nothing on Lightsail CDN and Object Store egress?

Looks like R2 was a punch in the gut for Amazon.

R2?

Cloudflare's answer to S3 (free egress, cheap storage).

> CloudFront Function

Is this similar to Cloudflare Workers?


Thank you Cloudflare!


