The big players have separate charges for bandwidth, disk, and other hidden extras. They are way more expensive than Digital Ocean / OVH, which are all-inclusive. Worse, the cost is unpredictable, which makes them a no-go for a side project: I can't risk accidentally getting a $1,000 bill.
As a real-world example, I run a personal blog. If it were running on S3, my personal finances would have been obliterated when it got featured on HN and served 1+ TB of traffic.
I've had things hit the HN front page a few times while just hosting on EC2 and never saw a noticeable increase in charges. Then again, I wasn't hosting very large files.
Can HN really deliver enough traffic to a static site to cost a significant amount? I've had mildly popular posts on HN for my Netlify blog (John Carmack tweeted about it!) and not had to pay for bandwidth.
The concern for me is a lack of hard limit on spending on GCP, Azure, and AWS. If I screw up and allocate a bunch of resources unintentionally, I'm left holding the bill. That's a terrible setup for PaaS because all programming involves mistakes eventually, especially for new users learning the system.
Granted, there are likely limits on accounts, but those are there to protect the services from fraud, not to protect the user from overspending. The limits aren't well defined, and they're not something you can rely on, because MS might consider $10k/month a small account while it's a ton of money for me.
Azure customers have been asking for hard limits on spending for 8 years [1], with radio silence for the last 5.
There's a difference in goals, I guess. If I spend more than expected, I WANT things to break. Microsoft, Google, and Amazon want me to spend unlimited amounts of money, even if I don't have it. At least AWS can be set up with a prepaid credit card, so if I screw up, they have to call me to collect their money, and I can negotiate.
- A hobbyist doesn't want to overpay: shut everything down
- A business absolutely doesn't care about spend: if a marketing push produces a traffic spike, they just want the site to stay up, even if it blows the average budget
Very large businesses might not care about spend, but pretty much everyone else does.
Almost everyone will be unhappy if they're stuck with a six-figure bill for non-converting visits because their site went viral. Everyone will be unhappy if they're stuck with a six-figure bill because their site was used in a DDoS reflection attack, or got pwned and used in a DDoS attack directly.
Everything I run on nickel-and-dime-to-death cloud services such as AWS won't even respond to unauthenticated requests (nginx return 444, or reachable only via WireGuard), precisely to mitigate this risk. Doing anything else is just financially irresponsible.
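For the curious, this is roughly the nginx setup I mean; a minimal sketch, where the WireGuard tunnel address and the upstream port are placeholders:

    # Catch-all: any request that isn't for the allowed host gets nginx's
    # non-standard 444, which drops the connection without sending a response.
    server {
        listen 80 default_server;
        return 444;
    }

    # The real service, bound only to the WireGuard tunnel address
    # (10.0.0.1 and the upstream on 8080 are placeholders).
    server {
        listen 10.0.0.1:80;
        server_name app.internal;
        location / {
            proxy_pass http://127.0.0.1:8080;
        }
    }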
I've even considered coding a kill switch that shuts down AWS instances if they exceed billing limits, but the fact that AWS charges a fee just to check your spend via the API makes this awkward, and speaks volumes about Amazon's motivations.
Amazon's refusal to offer spending caps on AWS benefits Amazon and only Amazon.
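For what it's worth, the kill switch itself is only a few lines of boto3. A rough sketch, where the $50 threshold and the region are placeholders; note the irony that each Cost Explorer call is itself billed at $0.01:

    import datetime
    import boto3

    LIMIT_USD = 50.0  # placeholder threshold

    ce = boto3.client("ce", region_name="us-east-1")
    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Month-to-date spend. Note: every GetCostAndUsage call costs $0.01.
    # (Sketch assumes we're past the 1st of the month.)
    today = datetime.date.today()
    resp = ce.get_cost_and_usage(
        TimePeriod={"Start": today.replace(day=1).isoformat(),
                    "End": today.isoformat()},
        Granularity="MONTHLY",
        Metrics=["UnblendedCost"],
    )
    spend = float(resp["ResultsByTime"][0]["Total"]["UnblendedCost"]["Amount"])

    if spend > LIMIT_USD:
        # Stop all running instances in the region. Stopped instances still
        # accrue EBS charges, so this caps compute spend, not everything.
        ids = [i["InstanceId"]
               for r in ec2.describe_instances(
                   Filters=[{"Name": "instance-state-name",
                             "Values": ["running"]}])["Reservations"]
               for i in r["Instances"]]
        if ids:
            ec2.stop_instances(InstanceIds=ids)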
>"Business absolutely doesn't care about spend, if they get some kind of marketing result traffic spike they just want the site to stay up even if it blows the average budget"
While this statement can be true in some cases, I vividly remember the bosses of a largish (budget-wise) company running around like headless chickens, yelling to kill every running instance of a service, just because they were hit by way more "success" than they'd planned for.
Hard spend limits are not an easy problem with cloud. There are too many things that incur costs. Every time this comes up, I ask the same question: what do you expect to happen when the quota is hit?
Shut down your servers? Wipe your SSDs and storage buckets? Remove your DNS records? Should it be permanent? If not, then they're just subsidizing the costs. If it's a soft limit, then it's just a warning, and if you just want a warning, billing alarms already exist in every cloud.
Also, for most customers, the data and service are far more important than the cost. Bills can be negotiated or forgiven afterwards. Lost data and customers can't.
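(For reference, a billing alarm on AWS is a single API call. In this sketch the threshold and SNS topic are placeholders, and billing metrics only appear in us-east-1 after billing alerts are enabled in the account:)

    import boto3

    cw = boto3.client("cloudwatch", region_name="us-east-1")
    cw.put_metric_alarm(
        AlarmName="monthly-spend-over-50-usd",
        Namespace="AWS/Billing",
        MetricName="EstimatedCharges",
        Dimensions=[{"Name": "Currency", "Value": "USD"}],
        Statistic="Maximum",
        Period=21600,  # 6 hours; billing data only updates a few times a day
        EvaluationPeriods=1,
        Threshold=50.0,  # placeholder dollar amount
        ComparisonOperator="GreaterThanThreshold",
        # Placeholder topic: it notifies you, it doesn't stop anything.
        AlarmActions=["arn:aws:sns:us-east-1:123456789012:billing-alerts"],
    )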
>Shut down your servers? Wipe your SSDs and storage buckets? Remove your DNS records? Should it be permanent? If not, then they're just subsidizing the costs. If it's a soft limit, then it's just a warning, and if you just want a warning, billing alarms already exist in every cloud.
You know, when I hit the storage limit of my SSD, it doesn't wipe my data. It just stops storing more data. When I rent a server for a fixed price and my service comes under a DDoS attack, it will simply stop working for the duration of the attack. If there is a usage-based service like Lambda that charges per execution, then Lambda can simply stop running my jobs.
You can neatly separate time-based and usage-based charges and set a limit for each separately. It doesn't even need to be a monetary limit; it could be a resource-based limit. Every service would be limited to 0 GB storage, 0 GB RAM, 0 nodes, 0 queries, 0 API calls by default, and you set each limit to whatever you want. AWS or Google Cloud could then calculate the maximum possible bill for the limits you have chosen. People can then set their limits so that a surprise bill won't be significantly above their usual bill.
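To make that concrete, here's a back-of-envelope sketch of the "maximum possible bill" calculation; the resource names and unit prices are invented for illustration, not taken from any real price sheet:

    # Hypothetical unit prices and user-chosen hard limits.
    PRICES = {
        "storage_gb_month": 0.023,
        "egress_gb": 0.09,
        "function_invocations": 0.0000002,
    }
    LIMITS = {
        "storage_gb_month": 50,          # cap storage at 50 GB
        "egress_gb": 100,                # cap egress at 100 GB/month
        "function_invocations": 1_000_000,
    }

    def max_monthly_bill(prices, limits):
        # Worst case: every capped resource is used right up to its cap.
        return sum(prices[k] * limits[k] for k in limits)

    print(f"worst-case bill: ${max_monthly_bill(PRICES, LIMITS):.2f}")
    # -> worst-case bill: $10.35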
Your comment is lazy and not very creative. You're just throwing your hands up and pretending there is no other way even though cloud providers have created this situation for their own benefit.
The vast majority of overages are due to user error. Those errors would just shift to include quota mistakes, which can cause data or service loss. Usage limits might be softer than monetary limits (which are bounded by the time dimension), but they can still cause problems, since they don't discriminate between good and bad traffic.
Before you go around calling people lazy, I suggest you put more thought into why creating more options for people who are already overwhelmed by options is generally not productive, can cause unintended consequences, and exposes the provider to liability. With some more thought, you'll also realize that AWS is optimized for businesses, and, as stated, losing customers or data is much worse than paying a higher bill, which can always be negotiated after the fact.
I want all services to be rate limited. What I don't want is for some runaway process (whatever the cause) to bankrupt me before I can respond to any alerts (i.e., within hours).
In other words, I don't necessarily need to set a hard spending limit, but I want to set a hard spending growth limit (allowing for short bursts), either directly in monetary terms or indirectly through rate limits on individual services.
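Conceptually, that growth limit is just a token bucket over dollars instead of requests; a minimal sketch, with placeholder rates:

    import time

    class SpendBucket:
        """Token bucket over dollars: tolerates bursts, caps sustained spend."""

        def __init__(self, dollars_per_hour, burst_dollars):
            self.rate = dollars_per_hour / 3600.0  # refill, in $/second
            self.capacity = burst_dollars
            self.tokens = burst_dollars
            self.last = time.monotonic()

        def allow(self, cost):
            now = time.monotonic()
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if cost <= self.tokens:
                self.tokens -= cost
                return True
            return False  # over the growth limit: shed load, don't bill it

    # Sustained spend capped at $1/hour, bursts up to $5 tolerated.
    bucket = SpendBucket(dollars_per_hour=1.0, burst_dollars=5.0)
    if not bucket.allow(cost=0.0001):  # estimated cost of one request
        pass  # return 429 / drop the request instead of paying for it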
> Shut down your servers? Wipe your SSDs and storage buckets? Remove your DNS records? Should it be permanent?
I'd be absolutely fine with that in a sub-account or resource group as long as I had to enable it.
A while back I wanted to try out an Azure Resource Manager template as part of learning something. Since I was _learning_ it, I wasn't 100% positive what it was going to do, but I knew that it should cost about $1 to deploy it.
With a hard limit on spending I would have set it to $10, run the thing and been ok with the account being wiped if I hit $10. Even $100 I could tolerate. Unlimited $$ was too risky for me, so I chickened out.
The worst part is I can't even delete my CC because it's tied to an expired trial that I can't update billing for.
> Also, for most customers, the data and service are far more important than the cost.
I run a small side business, and these unlimited cloud plans are just a no-go. A medium or large company could totally absorb a five-figure bill, but it would be a death sentence for my side project. Also, considering the variable bandwidth costs of AWS, Azure, or Cloudflare, a competitor could simply rent an OVH server and inflict insane costs on my business while spending only 1/10 of the money.
Right now, I'm using Heroku (with a limited number of dynos and a single PgSQL database) together with BunnyCDN (which allows me to pay for prepaid usage). If I ever get DDoSed, my app will most probably be inaccessible, or at least significantly slower, while I receive an email alert, at which point I can decide for myself whether to allocate more resources.
No. I once had a site hit #1 on HN. It was hosted on a DreamHost shared VPS running WordPress. It barely broke a sweat. I have no idea what these guys are doing who are having their sites bulldozed by HN traffic, but it's worryingly common for something that should never happen.
This has always confused me. What is going on when someone's site is taken down by HN traffic? (Maybe the fact it's on HN when this occurs is just coincidence: maybe the real traffic loads are always from reddit or twitter or something in these cases?)
(My experience with high-ranking HN posts: initially with DreamHost, later with the cheapest AWS EC2 instance; never a noticeable impact with either.)
Among the articles you see on the front page, there is a two-orders-of-magnitude difference in visits between the more popular and the less popular.
HN/reddit/twitter/android can all send a similar amount of traffic. That's one order of magnitude right there: how many places is the article featured at the same time?
Then there's an order of magnitude within each place: how much interest and readership can the article gather? Highly variable. The first comment alone can make or break an article.
This sounds off. Both reddit and twitter have the potential for vastly more traffic than HN.
I also haven't had the number one spot on HN (except maybe briefly), but I was at #2 and #3 for long stretches, and even an order of magnitude more traffic wouldn't have been a problem.
Two orders probably would have been, but I have a hard time imagining a 100x traffic difference between the #1 spot and the #2 spot. Then again, if it was a very slow day here vs. a very busy day, maybe (though in my case it wasn't a very slow day).
I assume you're targeting r/programming and similar subs, which are similar to HN in aggregate. You're right that Reddit and Twitter have a way bigger audience in total, but only a fraction of all Reddit users is relevant. I assume we're talking about a tech blog, not articles on the election or Brexit?
It's not about rank. It's about the specifics of the article, mainly the title and the content. It simply attracts more or less readership.
I've had #1 multiple times. I've had articles that stayed on the front page for multiple days.
Wouldn't be surprised if I'm in the top 1% of personal bloggers on HN, or something like that. I'd have shelled out thousands of dollars to AWS over the years if I were using anything AWS; more likely, I'd either be broke or the blog would have crumbled under the traffic each time, never going viral.
I don't usually do this, but I decided to check your post history. I don't know if anyone else posted your blog posts to HN, but assuming it's just you, I counted five posts (excluding the flagged ones) that would have made it to the front page of HN for any meaningful amount of time. Based on this, I would say that you are unlikely to be HN's top blogger.
And I don’t know how you’d set up your blog with AWS but I don’t see how it could be expensive to host static content there.
Wrong assumption, a fair bunch of the posts came from other people :p
I honestly wonder what the average distribution looks like for HN contributors. I imagine it's not much for personal blogs. Not trying to compare myself to the New York Times or the Cloudflare blog, obviously.
Heh. I checked by domain instead and got 10 submissions with double-digit or higher vote counts. I still think pg and jacquesm have you beat by quite a bit, but yes, you have 2x the front-page posts I initially spotted.
That's simply not true. I hit #1 a few times with content hosted on S3 and ended up paying maybe an extra $2 those months. I'd be worried if I hosted any large files along with it, but just a blog post? Barely noticeable.
You can lose money accidentally in many ways. I agree you have to watch out, but I still disagree with the number of people dismissing S3 as a quick route to bankruptcy if you hit HN #1.
I'm currently on AWS for my site and in the process of researching alternatives. I share your concern about something going wrong and being stuck with a huge bill. Someone pointed out that 1 TB of outgoing traffic from Amazon EC2 would cost about $90. I'm fortunate enough that it won't obliterate me, but I won't be happy if it happens. I'd rather my blog get hugged to death; going viral isn't worth $90 to me.
But I don't think DO really solves this problem either. They mention spending caps in some of their marketing materials, but the fine print says that overage billing is $0.01/GB. That's a whole lot better than Amazon's $0.09/GB, but it's not a cap.
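The arithmetic for a hypothetical 1 TB spike, using the two rates above:

    SPIKE_GB = 1000  # ~1 TB of unexpected egress beyond the free allotment
    aws_rate, do_rate = 0.09, 0.01  # $/GB
    print(f"AWS: ${SPIKE_GB * aws_rate:.0f}")  # AWS: $90
    print(f"DO:  ${SPIKE_GB * do_rate:.0f}")   # DO:  $10

So an order of magnitude cheaper in the bad case, but still unbounded.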
DO can say they have "predictable pricing" because in the vast majority of cases the free allotment that comes with your droplet is enough, so you never see a bandwidth charge: you pay the cost of your droplet and you're done. So yes, it's more predictable, in the sense that Amazon would charge you $5.23 one month and $4.87 another, while DO charges you $5 every month.
But I'm not worried about the 99% case, I'm worried about the extreme scenario where I somehow go viral or get DOSed. And both options leave me exposed.
That's not to say DO isn't a better deal for the hobbyist than AWS. The equivalent of DO's $5 droplet will run you much more on AWS, especially if you actually use the bandwidth you're allotted. And the big three do a lot of nickel-and-diming, which is a nuisance compared to the simpler pricing model of the smaller providers.
You should be able to get the cost down significantly by caching on Cloudflare. My company managed to serve 99.9%+ of static-page requests from Cloudflare's cache, which let us handle large amounts of traffic with a small backend.
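A sketch of the origin-side nginx config that makes this work; note that Cloudflare doesn't cache HTML by default, so honoring these headers for pages needs a cache rule / "Cache Everything" page rule on their side, and the upstream port is a placeholder:

    location / {
        # Browsers revalidate after 5 minutes; Cloudflare's edge may keep
        # a copy for a day (s-maxage applies to shared caches only).
        add_header Cache-Control "public, max-age=300, s-maxage=86400";
        proxy_pass http://127.0.0.1:8080;
    }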