Lightsail instances can be used to "proxy" data from other AWS resources (e.g. EC2 instances or S3 buckets). Each Lightsail instance has a certain amount of data transfer included in its price ($3.5 instance has 1TB, $5 instance has 2TB, $10 instance has 3TB, $20 instance has 4TB, $40 instance has 5TB). The best value (dollar per transferred data) is the $10 instance, which gives you 3TB of traffic.
Using the data provided by the post:
3TB worth of traffic from an EC2 instance would cost $276.48 (us-east-1).
3TB worth of traffic from an S3 bucket would cost $69.
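For what it's worth, a quick back-solve of the per-GB rates those totals imply (the binary-vs-decimal TB split is my guess, not something stated in the post):

```python
# Back-solving the per-GB rates implied by the totals above.
ec2_total, ec2_gb = 276.48, 3 * 1024   # $276.48 for 3 TB, binary GB assumed
s3_total, s3_gb = 69.00, 3 * 1000      # $69 for 3 TB, decimal GB assumed

print(round(ec2_total / ec2_gb, 3))  # 0.09  -> the standard $0.09/GB egress tier
print(round(s3_total / s3_gb, 3))    # 0.023 -> matches S3 standard's $0.023/GB-month rate
```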
Note: one downside of using Lightsail instances is that both ingress and egress traffic counts as "traffic".
> 51.3. You may not use Amazon Lightsail in a manner intended to avoid incurring data fees from other Services (e.g., proxying network traffic from Services to the public internet or other destinations or excessive data processing through load balancing or content delivery network (CDN) Services as described in the technical documentation), and if you do, we may throttle or suspend your data services or suspend your account.
As someone who has dealt with users who use a system in an unintended way, you don't go looking for those people and you don't build something to enforce a policy like this. When you're running services for lots of customers, you often don't know a lot of what's going on in the system and how people are using it. Then something seems weird or something is causing a problem and you want to deal with it - and you want the language out there so that you can deal with it.
In Amazon's case, their bandwidth pricing isn't really defendable. It's just crap. However, sometimes you're trying to offer something reasonable, but need to make sure that a customer doesn't end up abusing something. For example, Chia is a cryptocurrency that will basically wear through SSDs (it's a proof-of-space system). There aren't explicit limits on how frequently you can write to a disk from most hosting providers, but Chia goes beyond what normal usage would do to a disk. Chia farmers would rather burn someone else's SSD that they're renting than their own. But no one at most hosting providers was probably looking at how frequently people were writing before noticing "hey, why are the disks failing faster than we'd expect?"
They probably haven't taken action on it because they probably haven't noticed it being a problem. But if you're a whale of a customer and suddenly your data transfer charges drop off a cliff, someone might end up looking into that and seeing what's going on.
> You may not use Amazon Lightsail in a manner intended to avoid incurring data fees from other Services
This requires proving the user's intent, which is not obvious except in the most blatant of cases (i.e. using Lightsail as a bent-pipe by writing the exact bytes you're reading). If it is a "CSV to Parquet translation layer", how would AWS possibly prove it's anything other than what it claims to be? You'd be paying a few more cents for compute, but that's the price of plausible deniability.
Companies are permitted to deny service to anyone at any time for any (non-protected) reason. They typically don't have to justify service terminations to a court of law. Who would they be required to prove user intent to, and why?
I don't think you parsed their message correctly. It's not about litigation.
Re-posting a bit of the service terms for easy reference:
> 51.3. You may not use Amazon Lightsail in a manner intended to avoid incurring data fees from other Services [...]
As you point out, they may terminate your service without any justification in a court of law. So how do they go about terminating the offenders? Well, one trivial way (from a technology and/or policy perspective): terminate everyone's service! If you blindly terminate everyone's service, that will certainly prevent anyone abusing LightSail.
But that's, uh, not good for business. So they probably want to terminate the service of only those people actually abusing it. But how do you do that?
You'd have to look at each account's usage and do something to determine if that traffic is or isn't a means of avoiding data fees from other services. In other words, you'd have to determine the intent of that traffic. Or, put yet another way: "this requires proving the user's intent".
If doing so was as trivial as detecting any traffic between LightSail and the other services, they'd just prevent such connections in the first place. So how can AWS tell if some traffic between services is legitimate or not? The unspoken premise of the person you're replying to is that it probably isn't feasible for AWS to catch any and all people abusing LightSail in this way, with the conclusion being that you can (in practice) probably get away with it unnoticed.
We disagree on the definition of "prove". I would not object to the claim if it had used "determine" or "detect" instead of "prove".
That said, detection is easy. Look for users who spin up a Lightsail instance and use close to 100% of its bandwidth quota before spinning it down. Sort by number of such instances, and tell all users above some cutoff that in your sole discretion you believe they have violated your TOS, and are terminating their service. Doing so is completely legally defensible.
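A minimal sketch of what that kind of heuristic could look like, assuming a hypothetical export of per-instance usage records (the field names and thresholds here are made up for illustration, not anything AWS exposes):

```python
from collections import Counter

# Hypothetical per-instance usage records; field names are invented for this sketch.
instances = [
    {"account": "A", "quota_gb": 3072, "transferred_gb": 3050, "lifetime_days": 2},
    {"account": "A", "quota_gb": 3072, "transferred_gb": 3071, "lifetime_days": 1},
    {"account": "B", "quota_gb": 2048, "transferred_gb": 40,   "lifetime_days": 180},
]

def looks_like_bandwidth_burning(inst, quota_ratio=0.95, max_days=7):
    """Instance used ~all of its transfer quota, then was torn down quickly."""
    return (inst["transferred_gb"] / inst["quota_gb"] >= quota_ratio
            and inst["lifetime_days"] <= max_days)

suspicious = Counter(i["account"] for i in instances if looks_like_bandwidth_burning(i))

CUTOFF = 2  # accounts with at least this many flagged instances get reviewed
for account, count in suspicious.items():
    if count >= CUTOFF:
        print(f"account {account}: {count} flagged instances")
```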
> We disagree on the definition of "prove". I would not object to the claim if it had used "determine" or "detect" instead of "prove".
I do find that a bit odd, though. If I consult the Merriam Webster dictionary, I see precisely one entry under "prove" that says anything related to law and/or courts:
> to establish the existence, truth, or validity of (as by evidence or logic)
> "prove a theorem"
> "the charges were never proved in court"
Even there, the only mention of court is in the example sentence, rather than the definition itself -- naturally, we want our court system to be based in reasoning rather than whim.
Additionally, the meaning of "prove" given by this definition is exactly what the study of formal logic sets out to codify, and given that this is hacker news (where many are interested/involved in computer science and/or formal logic itself), it seems counterproductive to ascribe some legal meaning to the word "prove" here, as it would (to my mind, at least) be quite unlikely for others to do so.
GP here - feel free to replace "prove" with "determine", because that's what I meant. My point was that it is really hard for Amazon to detect data exfiltration when it's disguised as some other run-of-the-mill service. Amazon can cancel anyone's service at any time, but they can't afford to piss off legitimate customers with capricious, undeserved bans due to false positives. Regardless of where AWS draws the line to separate abuse from legit usage, it will always be possible to skirt underneath it. The crux of my argument is that AWS will tolerate false negatives over false positives.
I always assumed that your free quota is proportional to the time you pay for. Even the price is not the advertised fixed $3.50: you pay less in months with 30 days than in months with 31 days.
I have not checked my cost and usage reports every time I have some experimental instance for a shorter time, so I am not sure. Just from the general knowledge that AWS is permanently counting every fraction of a peanut. But as the submission shows, exceptions to the rule can exist.
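If that assumption holds (an hourly meter derived from a 744-hour month, which is my reading rather than anything from AWS's docs), the difference is small but real:

```python
hourly = 3.50 / 744            # assumed hourly rate behind the "$3.50/month" plan
print(round(hourly * 720, 2))  # 30-day month -> ~$3.39
print(round(hourly * 744, 2))  # 31-day month -> $3.50
```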
True. And no one has said that they must prove anything to anyone.
Amazon wants to make money, so they probably don't want to terminate the service of people who are acting in good faith. But that's just another way of saying that they probably want to determine with some certainty that someone is not acting in good faith before terminating their service.
So it's not that Amazon needs to prove anything to anyone. But they do want to prove something to themselves.
In this case they are actually losing money not gaining by allowing this kind of abuse, both because the bandwidth usage costs money and also because of potential lost billing from other services which now is not billed.
The Lightsail-style billing model works the same way shared vs leased lines work: if everyone fully used their max allocation, it wouldn't be possible to offer the service at that price point. They can offer 2TB or 4TB for that price because the usage modelling of the target users supported it.
No company wants a customer to bypass their usage and pricing ToS even if they are not actively enforcing it, it is lost revenue and/or bringing in customers who you don't really want.
> In this case they are actually losing money not gaining by allowing this kind of abuse, both because the bandwidth usage costs money and also because of potential lost billing from other services which now is not billed.
Your statement here is absolutely correct (we are in agreement); it is also absolutely orthogonal to what I (and others) have said.
Let me use an analogy.
Marijuana is illegal in most states of the US (and, federally, it is still a controlled substance). And yet a (relatively) recent survey[1] showed that around 7% of respondents grew marijuana at home.
How is this possible? Shouldn't that be 0%? It's almost like the DEA is slacking off or something.
... or maybe it's because they can't practicably round up each and every one of these people: the DEA isn't omniscient, and given the 4th amendment they can't ransack every home within the US to catch these people. If you don't do something that gives them sufficient evidence to acquire a search warrant, there's nothing they can do about you growing pot in your domicile.
Back to Amazon. Could you, at a high level, describe a process by which they could, for a given account, determine if that account's use of LightSail is legitimate, or is instead intended to avoid incurring data fees from other services? And you must satisfy some additional, absolutely crucial qualifications: this process must not negatively impact abiding users (because they would abandon AWS, resulting in financial harm to AWS), the cost to AWS of executing this process must not be prohibitive (in terms of compute, human resources, etc), and the process must be applied across all accounts within a reasonable time frame (if it takes 1 year for AWS to comb through 1% of accounts, that means you have a mere 1/100 odds of having your service terminated for abusing LightSail for an entire year).
Something being prohibited doesn't imply that it is practicably, fully enforceable.
If your workload is just compute, or stateless with commodity or standardized API interfaces, you could maybe do a move.
Even for those it is fraught with problems and takes a lot of time, time you are not spending developing features and adding product value.
If you are using S3 or any sort of proprietary stack on AWS, the cost to migrate (retrieval + b/w, or rewriting your app) is just prohibitive.
All cloud providers know this and plan for it in their models, which is why they give out a generous free tier, or give a ton of money in startup programs and other hooks to get you to start.
——
AWS does not just have an all-or-nothing suspension policy.
From personal experience I know they do suspend your access to a single service, even at a single-region level - our account still has SES blocked in one region because early on we handled bounces poorly. This was at least 8 years ago, and they even sent a few warnings first, so they have a pretty robust framework to handle abuse. Back then I couldn't get it unblocked with tickets and escalations, and I am sure we were spending 15/20k a year then, so not super small either.
These days we spend more like 250k a year on AWS and I still don't get a proper account manager. I could perhaps get it unblocked now if I really wanted, but it's just not worth the hassle of jumping through the hoops - one of the reasons why Azure is our primary cloud partner and where we spend most of our money, despite subpar tech compared to AWS (GCP is 10x worse on this).
I cannot comment on the specific controls that are put on Lightsail - I've never used the service - but they definitely do have a framework to suspend every service they offer.
Just given the generous free tier, there is a huge industry of using stolen credit cards to run scams or send spam on AWS, which they constantly have to fight against.
Btw. Hetzner charges 1.19 €/TB or so if you exceed the 20 TB you get even with the 4.5 €/month VM. So obviously, AWS is charging disproportionately compared to what it actually costs to deliver that bandwidth/traffic.
These charges probably do prevent a good deal of piracy, but they also prevent some startups from offering competitive prices or implementing a simpler system.
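To put a rough number on "disproportionately" (assuming AWS's standard $0.09/GB internet egress tier and ignoring the EUR/USD difference):

```python
hetzner_overage_per_tb = 1.19     # EUR, beyond the included 20 TB
aws_egress_per_tb = 0.09 * 1000   # USD, standard internet egress tier

print(round(aws_egress_per_tb / hetzner_overage_per_tb))  # ~76x
```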
Cost and price are at best correlated. Cost is at best a floor for pricing rarely the signal for it.
Bandwidth is very cheap for AWS and the other major cloud vendors - even cheaper than what the likes of OVH and Hetzner buy it at - because they also own a good chunk of the deep-sea cables in addition to DCs.
B/W specifically is charged high as a competitive moat. Every time I consider migrating out of, say, S3, the migration costs always come out prohibitively high because of b/w pricing, so I end up staying put.
That is exactly what AWS wants, and so b/w is priced costliest at AWS (even more so than Azure or GCP) because their market share is far ahead and thus they have the most to lose from customers leaving. Hetzner and OVH have less of a PaaS service moat to worry about, unlike AWS, Azure and GCP, so they instead compete by setting rock-bottom prices.
The more market share AWS gains, the less incentive they have to drop b/w pricing - the only pressure is to increase it.
You can download 1TB of data for free from AWS each month, as CloudFront has a free tier [1] with 1TB monthly egress included. Point it at S3 or whatever HTTP server you want and voila.
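As a rough illustration, a boto3 sketch of putting a distribution in front of a bucket (the bucket name is a placeholder, the CachePolicyId is the AWS-managed "CachingOptimized" policy, and a real setup would also want origin access control plus a matching bucket policy):

```python
import time
import boto3

cloudfront = boto3.client("cloudfront")

resp = cloudfront.create_distribution(DistributionConfig={
    "CallerReference": str(time.time()),  # any unique string
    "Comment": "free-tier egress in front of an S3 bucket",
    "Enabled": True,
    "Origins": {"Quantity": 1, "Items": [{
        "Id": "s3-origin",
        "DomainName": "example-bucket.s3.amazonaws.com",  # placeholder bucket
        "S3OriginConfig": {"OriginAccessIdentity": ""},
    }]},
    "DefaultCacheBehavior": {
        "TargetOriginId": "s3-origin",
        "ViewerProtocolPolicy": "redirect-to-https",
        # AWS-managed "CachingOptimized" cache policy
        "CachePolicyId": "658327ea-f89d-4fab-a63d-7e88639e58f6",
    },
})
print(resp["Distribution"]["DomainName"])  # the *.cloudfront.net hostname to download from
```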
The Cloudflare rant was posted in July 2021; the new CloudFront free tier arrived in Nov 2021. I consider changing pricing like this in 4 months pretty fast for AWS.
GCP patched a similar loophole [1] in 2023 presumably because some of their customers were abusing it. I'd expect AWS to do the same if this becomes widespread enough.
Unlikely. The "loophole" GCP patched was that you can use GCS to transfer data between regions on the same continent for free. This is already non-free on AWS. What OP mentioned is that transferring data between availability zones *in the same region* also costs $0.02 per GB and can be worked around.
This doesn't feel like a loophole though, it feels like they have optimized S3 and intend your EC2 instances to use S3 as storage. But maybe not as transfer window, that is, they expect you to put and leave your data on there.
There are tons of these tricks you can use to cut costs and get resources for free. It's smart, but not reliable. It's the same type of hacking that leads to crypto mining on github actions via OSS repos.
Treat this as an interesting hacking exercise, but do not deploy a solution like this to production (or at least get your account manager's blessing first), lest you risk waking up to a terminated AWS account.
I have used this and other techniques for years and never gotten shut down. Passing through S3 is also generally more efficient for distributing data to multiple sources than running some sync process.
S3 storage costs are charged per GB-month, so 1 TB * $.023 per GB / 730 hrs per month… should be about 3 cents if the data was left in the bucket for an hour.[1]
However, it sounds like it was deleted almost right away. In that case the charge might be 0.03 / 60 if the data was around for a minute. Normally I would expect AWS to round this up to $0.01.
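In code, with the per-GB-month rate and hour count assumed above (and 1 TB taken as 1024 GB):

```python
gb = 1024                  # 1 TB staged through the bucket
rate_per_gb_month = 0.023  # S3 standard, us-east-1
hours_per_month = 730

per_hour = gb * rate_per_gb_month / hours_per_month
print(round(per_hour, 4))       # ~0.0323 -> about 3 cents for one hour
print(round(per_hour / 60, 6))  # ~0.000538 -> well under a cent for one minute
```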
The TimedByteStorage value from the cost and usage report would be the ultimate determinant here.
I can't see a reason to do this intentionally within a single account, but use cases spanning multiple accounts should be aware that the AZ named us-east-1a in Account 1 is not necessarily the same physical AZ as the one named us-east-1a in Account 2 (the stable identifier is the zone ID, e.g. use1-az1).
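A quick way to see your account's mapping (assuming boto3 and credentials are set up; zone names are per-account aliases, zone IDs are the stable physical identifiers):

```python
import boto3

# Zone *names* (us-east-1a, ...) are shuffled per account; zone *IDs* (use1-az1, ...)
# identify the physical AZ, so compare those across accounts.
ec2 = boto3.client("ec2", region_name="us-east-1")
for az in ec2.describe_availability_zones()["AvailabilityZones"]:
    print(az["ZoneName"], "->", az["ZoneId"])
```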
But in the context of data transfer costs, this would actually increase the costs, because there's a small surcharge for Intelligent Tiering - and the only relevant storage class for sidestepping data transfer costs is standard storage (because it's the only one with free download), so Intelligent Tiering won't provide value.
This feels like the tech equivalent of tax avoidance.
If too many people do this - AWS will "just close the loophole".
There's not one AWS - there are probably dozens if not hundreds of AWS - each with their own KPIs. One wants to reduce your spend - but not tell you how to really reduce your spend.
If you make something complex enough (AWS) - it will be impossible for customers to optimize in any one factor - as everything is complected together.
This isn't a loophole. This is by design. AWS wants you to use specific services in specific ways, so they make it really cheap to do so. Using an endpoint for S3 is one of the ways they want you to use the S3 service.
Another example is using CloudFront. AWS wants you to use CloudFront, so they make CloudFront cheaper than other types of data egress.
An alternative to sophisticated cloud cost minimization systems is…….. don't use the cloud. Host it yourself. Or use Cloudflare which has 0 cents per gigabyte egress fees. Or just rent cloud servers from one of the many much cheaper VPS hosting services and don't use all the expensive and complex cloud services all designed to lock you in and drain cash from you at 9 or 12 or 17 cents per gigabyte.
Seriously, if you’re at the point that you’re doing sophisticated analysis of cloud costs, consider dropping the cloud.
If you're at the point you're doing sophisticated cloud cost analysis you are doing the cloud right, because that is completely impossible anywhere else.
I swear the people who say go on-premise have no idea how much the salary of someone who will not treat their datacenter like a home lab costs. Even Apple iCloud is in AWS and GCP because of how economical it is; if you think you have to go back on-prem, either you suck at the cloud or you just don't give a shit about reliability (start pricing up DDoS protection costs at anything higher than 10G and tell me the cloud is more expensive).
We spend 100k+ on AWS bandwidth and it's still cheaper than our DIA circuits because we don't have to pay network engineers to manage 3 different AZs.
Apparently we're doing the impossible for over 12 years now. Who knew?
Some people act like it's some kind of black magic. It's not. We've some customers in our DC and some on AWS for various reasons. AWS isn't less problematic. AWS is about 10x more expensive. Both on prem and cloud require people familiar with them and cloud-engineers are in no way cheaper.
Only meaningful problem is that on-prem requires some up front cost&time. That can be mitigated by leasing and other means, but indeed can be an issue for small businesses.
> Both on prem and cloud require people familiar with them and cloud-engineers are in no way cheaper.
I think the real story is a bit sordid: office politics. On-prem and cloud are different skillsets. Companies that have been around for a while can end up with both on-prem and cloud experts who end up competing with each other, often on separate teams. Throw in some slick consultants from Amazon who are able to bend the ear of the VP and you've got a real problem. From what I've seen it doesn't end well for the on-prem team!
I concur. Many VPs' ears bend so easily you've got to wonder: do they also get invited to some private dinner by those so-called slick consultants, who pay the bill in a rush and leave after forgetting some thick envelope on the table?
It’s a story as old as time itself. IBM has been doing it at least as far back as the 60’s. Fancy consultants who know the tech and also know how to sell and make themselves seem way smarter than the VP’s reports. Do one slick presentation and the VP is asking his team “why didn’t you guys come up with this stuff?”
Next thing you know these multi-million-dollar contracts are signed and the existing teams are just shaking their heads. The smart ones have already put out their resumes and started interviewing elsewhere.
If you have a large enough on-prem infrastructure where you are using automation and not manually configuring everything, the skillsets will have a lot of overlap.
Hilariously ironic, with a sufficiently large cloud footprint, things like BGP (and more/other internetworking protocols) and OpenZFS become required skillsets. I have firsthand experience of this. :)
Yeah, there was an amusement there. I've definitely had to understand BGP to configure cloud VPC setups.
"They just have to be an AWS expert".
Right. They just have to be experts in: EC2, S3, Aurora, DynamoDB, RDS, Lambda, VPC, LightSail, Athena, EMR, RedShift, MQ, SQS, SNS, ECR, ECS, EKS, ElastiCache, CloudWatch, CloudTrail, IAM, Cognito, and a few more. No big deal.
AWS pricing and optimization is just capacity planning, which doesn’t go away if you run on prem - it just looks different, with longer time horizons & financial implications.
“Will my data center run out of floor space & I need to expand?” (years+)
“Will I have enough cooling & power to support the new racks we need?” (6 months+)
“When do I need to get the server order out to ensure we meet our capacity needs?” (6+ weeks)
Every one of those are capital expenditures, so line them up with the annual budget cycle - be sure to keep enough spare capacity to be responsive for last minute asks.
Don’t think my intent is to romanticize the cloud, either. It’s not better, nor worse, just a different way to manage things.
Of course if your company is sufficiently small, do whatever you know and can do quickly - customer acquisition will be more important than debating the cost of either infra in aws or a colo’d server or two in some racks somewhere. But the complexity doesn’t go away if you go to the cloud, OR if you are all on prem. TINSTAAFL.
- small business with at least some reliability expectation, and little to no IT expertise
- huge workload requirement volatility
- having someone else to blame
- solution is already working in cloud, with teams being very comfortable there, and perceiving on-prem as “enemy” (analogy: forcing devs to rewrite stuff from haskell to java)
- that extra cost is small budget line for you
On the other hand, it does not make sense to go cloud, if you are sufficiently big and already have on prem solution and expertise in house.
(Extreme case: google, does not use aws for its main load; this upper threshold I wager is couple of orders of magnitude smaller)
Cloud doesn't make sense for a small business. A VPS would. If you are spending less than 100,000 you probably don't need it for your 10,000 million or less daily visitors.
You probably aren't paying a "cloud engineer" just to fiddle with cloud config full time with <$100k/yr spend. Why would you suddenly need a full time sysop to do the same capability level of on-prem? If you do 10 hours of "cloud engineering" a month to support an application, the same capability level of on-prem work is probably in the same ballpark. 5-30 hours. Yes, it can be lower than cloud. No, it's NOT suddenly 160 hours every month. Yes, it does mean you need someone who can wear the extra hat, cloud engineering skills are literally no different in this regard.
Internet sites ran on basement and closet servers for years, actual server rooms for larger stuff. The jokes were that tripping over power cords and backhoes digging up internet lines were the causes of most outages. It's never been easier to run on-prem than today. It costs a fraction of even budget VPS providers like Hetzner.
For certain classes of applications, it's awesome. It's not everyone's cup of tea I'll grant. But if you are inclined to play with hardware or have someone in the org who does, and an extra 2-4 hours of downtime a year isn't that big of a deal (depending on general utility and network available at your chosen site). You can save tons of money.
Exactly this. I've set up several VPSes for clients. They go for years without any maintenance because they don't want to pay for it. (They only bother when something is absolutely critical... like the OS is EOL and no more security upgrades.)
> Extreme case: google, does not use aws for its main load; this upper threshold I wager is couple of orders of magnitude smaller
Not a good example in my opinion, as Google is also a major provider of cloud services. AWS isn't the only game in town.
I think Disney+ is a more interesting case. A quick google search turns up some articles saying their video streaming is powered by external CDNs. As I understand it, Netflix take the opposite approach and deliver all video data from their own Open Connect CDN, although they use AWS for other workloads (presumably including things like authentication, their recommendation engine, etc).
Is that really a fair comparison though? AWS is a very weird argument to make because you could say that AWS is kind of “on premise” for Amazons purposes. Internally Amazon.com does not pay retail pricing or have the same level of support as third party end users. A better example would be looking at Jet.com/Walmart and asking if it runs on AWS.
Nobody who is big is paying retail prices, that's why saying "on premise is cheaper" is total copium.
As soon as you start factoring in discounts (i.e. bandwidth is nearly free at some point), the math of being on premise completely falls apart to the point you are paying more for licensing and support for the hardware than you are the entire lifecycle of your infrastructure in the cloud. It's just that bad to do it yourself.
Sorry, but who pays for licensing and support of the hardware? I've never done that. You buy a Dell server or whatever, you pay for the 5 or 7 years warranty up front, you put in in a rack and you never literally touch it again until it's EOL'ed. If something breaks Dell touches it, not you. That typically costs around $500/year, albeit paid upfront.
But you usually don't do that. Instead you rent a dedicated metal from someone. The cost is on the order of $1000/year including some storage, no more to pay unless you exceed 100TB/year. A similarly configured AWS EC2 instance is $13,000/year, plus bandwidth, plus whatever other services you get sucked into. And you will get sucked into it because if you ask AWS about any problem (like say, monitoring why your bills are so high), the answer is invariably "use this paid service of ours".
You're kidding yourself if you think using AWS is cheaper than the alternatives. Those discounts you speak of are from an absurdly high starting point. I'm sure there are lots of reasons to use AWS or a similar cloud service, but unless you only need a lot of grunt for at most a few months price isn't one of them.
I ran the numbers the other day. For just compute with my particular load, my numbers say AWS costs 73928x more for lambda over my on-prem. Like for like is 1250x more. This is presuming the savings levels for being a big spender that I've heard about.
That's a lot of room to work with for some inconvenience.
This is a quirk of the business that is Amazon and AWS, as they started by selling excess compute and expertise; given how Amazon was built API-first internally, it was almost natural.
This matches the public (i.e., non-Amazon) speculation I was hearing around the launch of S3 and later EC2. But not what I was hearing internally when I worked at Amazon. I was there when S3, the first AWS service, and EC2 were launched. I was working on what I believe to be the first Amazon (non-AWS) application that used S3 for storage. Getting that approved was not easy - all the same skepticism existed internally as externally (cost, availability, durability, security, etc.).
The story I was hearing internally was that it was too costly to scale infrastructure the way Amazon had been doing it, it was fragile, and the expertise wasn't keeping up with growth. So, set the bar a lot higher, and build infra that is big enough and flexible enough to be everybody's infra, and then Amazon's applications (e.g., retail) could run on AWS' excess capacity. Literally the opposite of what external folks were guessing. I believe they were completely physically separate data centers - even the physical location of AWS data centers were on a need-to-know basis internally (the internal lore was "under a mountain in Virginia" - this was years before Regions and Availability Zones). And any bugs in AWS could be worked out with outside usage before moving Amazon's applications onto it.
Also, Amazon needed the elasticity of AWS because of the nature of their retail business. At the time that the initial AWS services were being developed, a massive chunk of Amazon's traffic came during the holiday season. IIRC, something like half of the year's traffic and revenue, possibly more, came in November/December each year. That meant a lot of capacity was sitting idle most of the year. Selling that excess capacity would mean shutting AWS down every holiday season.
For a time, there was an internal mailing list that wasn't yet locked down that contained reports on S3 bandwidth usage. The growth rate was shockingly high. I would guess that within a year or two of release, S3 was using a few (at least) orders of magnitude more bandwidth than everything else at Amazon combined.
In broad strokes, the main point I was making still stands though: AWS was deliberately made to back the demanding scale of Amazon; it was a bet on the future and the Amazon model as much as it was a product/service, and that did mean they built up expertise and hardware and sold that as a product nonetheless.
This still isn't the norm for most businesses, even big ones.
In many organizations the biggest appeal of services like AWS or GCP isn't simply cost, it's that the provider is approved and therefore all of its services are approved, and I no longer have to justify spinning up more compute or leveraging one of their more bespoke services like SQS (or whatever the equivalent is). It's all just there, ready to be used.
It may not be true for you and where you work but this is a very real thing in a lot of organizations, where development teams want a quicker turn around on booting up services they need as the product evolves and having some control over how they act with each other. It (sometimes to negative results) opens up more architectural possibilities to solve problems
I'm not saying using AWS is universally bad. Depends on your needs.
What I'm saying is some people are trying to portray rolling your own k8s or on-prem as equivalent to rolling your own crypto - better left to the chosen ones with years of training in a secret monastery. This is BS :-)
These types of debates always seem to go this way - one person saying one option works way better for them so the other option must be crazy, followed by another person saying the opposite. I'm not an infra guy, but my guess is the reality is both are choosing the right option for themselves, because there is no one objectively "best" option. Just tradeoffs.
I also think if a company is unhappy with their current set up and thinks that switching from cloud to on-prem (or vis versa) is the answer, they're probably delusional because "the fault, dear Brutus, is not in our stars, but in ourselves."
Completely not my point. I don't say that using cloud is crazy (we're using it sometimes). I'm saying that the opinion that on-prem is much harder than AWS is IMO wrong.
Requires some planning ahead of time - yes. Harder - not really.
My small business once spent 50k per month on AWS. We brought that back to 800 dollars for a similar setup at Hetzner. I find this a significant number.
I have Hetzner VMs going on 500+ DAYS without a restart.
I've been a Hetzner customer for about 8 years now. AWS has been down 10x more than Hetzner has.
And even when Hetzner has issues, they are localized to a single DC at worst; most often a single host is down.
When AWS has issues, the whole internet has issues. Or their central region goes out, which can also affect their other regions.
For the small/medium-business infra I manage here in Bulgaria, the same thing would cost 5-10 times as much on AWS just for the compute; throw in 1TB of bandwidth and it makes no financial sense.
On Hetzner I pay 40 euros for 2 VMs, dedicated IPs, daily backups, 100GB of SSD external storage, and a firewall.
I have had more than 10 servers on Hetzner (some dedicated, some VPS) for 5+ years, and the same experience: once, one of the dedicated servers had some hardware issues, and an hour later the drives were moved to another box and it was running again. Other than that time, I had downtime only because of my own fault.
Pretty sure over these years AWS has experienced a lot more issues overall.
"Yea this small VPS provider with 1% of the features is just as good as AWS to us" yea that's because you aren't using features as basic as AWS Nitro Enclaves and you are years behind even basic cloud security.
Hetzner is for running homelabs and basic compute, not businesses. That's why EU companies constantly ignore EU rulings on US-EU privacy shield because there's just no alternative to American cloud providers yet.
Of course you can run your business on it, you will just suffer because there is practically 0 automation to it. I guess if you are a small business but we are very obviously not talking about mom and pop shops who need a web and email server.
You're arguing semantics; you are on massive copium if you think that automation isn't useful to anyone. Go ahead and set up cryptographic attestation (or just try to interface with a TPM ffs) for your apps on Hetzner to decrypt customer data and see how impossible it is.
It's useful to a lot of people; not everyone has the same requirements. I'm running SecureBoot and LUKS encryption on my laptop. If there's a TPM2 interface in whatever you're running on, you can use the same thing. Add an immutable Linux distro and Kubernetes and you'll cover a great many use cases. Not everyone has the same requirements: $BIGCLOUD makes things easier, but you also pay top dollar for it. AWS funds Amazon.
Everyone needs it, you've just lied to yourself enough that you can talk yourself out of setting it up for your own apps. It's easy and free if you're in the Cloud, and nearly impossible on-premise to do yourself (i.e. why most VPS providers don't do it).
AWS and GCP are giving companies like Apple huge discounts so someone could say something like, "Even Apple iCloud is in AWS and GCP because of how economical it is"
There is too much nuance to say one is better than the other. In some cases using a IaaS is more economical, in other cases it's not.
For Apple, the same is also true[0] to say "Even Apple is running their own datacenters because of how economical it is"
You know everyone gets those discounts right? Like that's why the cloud is so much more economical than a datacenter, once you are at scale AWS will give you MASSIVE (I'm talking 30-60% discounts) on compute and other compute-adjacent resources, and I've seen 99% discounts on bandwidth with multi year agreements too.
I think that's where the disconnect is, a lot of people don't actually realize how cheap cloud compute is because they're only seeing the price for like... 20-30 servers and some basic S3 or load balancer usage. There are entire departments at Amazon that run those numbers on a daily basis to make sure AWS is always competitive with building your own datacenter.
Incompetence. Take my friend’s company for instance. They were frustrated paying $60K/mo to Amazon so their brilliant sysadmin bought $600K of servers and moved them into a cheap colo.
Over Christmas, everything died, and the brilliant sysadmin was on holiday. Nobody could get things going again for many days and so their entire SaaS business was failing. They lost a lot of business and trust as a result.
The sysadmin is now gone and they are back on AWS.
No key person risk management -> no risk register -> no management. Your friend's company will fail regardless of poor sysadmin decision making or not. They need to hire competent management ASAP.
This is basically the logic of people who say the cloud is too expensive, you have to ignore so many things to make being on premise logical. Basically you are lying to yourself if you think you can run a datacenter cheaper and better than Amazon or Microsoft can, because if you can you are just making huge sacrifices somewhere (usually time, which is why reddit sysadmins complain about how much work they have while defending being on-premise because they couldn't possibly be wrong).
>Basically you are lying to yourself if you think you can run a datacenter cheaper and better than Amazon or Microsoft
What magical things do they have that every single reasonably sized enterprise doesn't have? It should be extremely easy for a small enterprise to beat any of the main clouds* - they make a crap ton of profit from you.
*making the assumption that your needs are reasonably static and you're not MASSIVELY bursting your infrastructure up and down
The "magical" thing they have is thousands and thousands of people thinking about how to improve the performance/efficiency/availability of their datacenters.
And yes, they pay the costs of those people and take a good profit margin, and yes there are in some ways diminishing returns to go from 3 nines to 6. But most enterprises can't match that depth of concentrated expertise, certainly not most small enterprises.
Just to confirm: when I say small(ish) enterprise, I'm referring to a company with around 500+ people and an IT dept of 20+ people.
Seriously??!? You think you need thousands of people to improve your efficiency and performance? I would strongly suggest employing a few good infrastructure engineers/architects who know what they're talking about - there really is no secret sauce!! Just lots of kool-aid on cloud.
The whole cloud thing only looks wonderful and magical if you're inexperienced.
Re: the whole uptime nines thing - it's fairly useless in the real world, since the architecture of the application is really the king here!! Christ on a stick, I got an application running 100% for 3+ years on NT 4 because of good design (the clue is active - active - active).
Also to add... availability zones are a very, very poor man's DR.
With cloud and SaaS services you are paying to reduce person risk profile.
You're forming a larger dependency on a team lead for a custom system that is now a liability, as new people coming to the organization don't want to adopt an abandoned, poorly understood project.
This company is reasonably well run. After going back to AWS, they doubled their revenue and things are going well. They are not incompetent. They did earnestly try to cut their costs and just didn’t see the iceberg.
> [ Unmentioned - Single Point of Failure Service dependent on a single admin ]
If you are fully accounting for vacation, training, sleep etc then you need a minimum of 5 admins for mission critical services. Now, you can engineer around this to reduce your staffing requirement but I wouldn't recommend going under 2 ever because accidents happen.
This business seemed one below that, without the engineering, and I would point to the mgmt, not the brilliant admin as the problem.
> The sysadmin is now gone and they are back on AWS.
This story has nothing to do with AWS or on-prem.
It's a story about incompetent management allowing a single human point of failure. If they don't change that, they'll have the same problem wherever they go.
Non-scalable incompetence or basically pretending that the datacenter will never go down. Any high schooler with an iPhone can set up and maintain a datacenter full of servers.
But if you want something reliable that I can spend 30 seconds writing some terraform for, it will take an entire infra team to set up and maintain it, not to mention an entire procurement process and now having to integrate a new supply chain just for a basic multi-az setup (probably without things like backups and still without basic features the cloud gives you automatically).
Managing on-prem hardware may not necessarily be hard, but it can be extremely time consuming. To me, the nice thing about dropping a bunch of hardware in a colo is you get to take a lot of shortcuts and take risks that you cannot buy from the public cloud providers.
I worked for a company and would do it again that did the colo route, and it gave immense cost savings compared to public cloud, taking on risks that you can't do elsewhere. Before they started investing in having folks take care of the infra as a raw startup, it was just some servers and some desktop unmanaged switches. But that gave the company breathing room to survive as the business model probably didn't work without it. But also had a reputation for unreliable service.
I've also built the five-nines infra at telcos, and yes you can do it with average engineers, but it's going to be time consuming, slow, and expensive in costs and labor. To allow 26 seconds of unplanned outage a month, you're going to be testing every firmware update for every piece of equipment on an ongoing basis, and practicing every operation and change as best as possible. And you need enough scale that most outages only impact a subset of your customer base, otherwise you're going to blow that outage budget fast.
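For reference, the simple arithmetic behind those downtime budgets (assuming a 30-day month):

```python
seconds_per_month = 30 * 24 * 3600

for nines in range(2, 7):
    availability = 1 - 10 ** -nines
    budget = seconds_per_month * (1 - availability)
    print(f"{availability:.5%} -> {budget:,.1f} s/month of unplanned outage allowed")
# five nines (99.999%) -> ~25.9 s/month, i.e. the ~26 seconds mentioned above
```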
Managing on-prem is definitely harder, because in the cloud you benefit from economies of scale on all the management problems that you'd otherwise have to pay for yourself, and if you don't have scale then you will be significantly overpaying to get the same type of quality, reliability, or responsiveness.
Most people are not paid to manage infra, they are paid to talk to customers, ship features, fix bugs, and other "core business" items; just like most businesses don't build roads, they pay taxes and utilize them because the cost of doing it themselves for their preferred traffic patterns would be much more than they could justify (for now.)
If you don't have scale, you don't need most of the features. Fire up PC, load application. Setup egress port open to internet. Setup application backup on cron job. Done until scale problems arise.
Correct, my point absolutely doesn't apply to someone who is just doing their thing, even maybe 2 orders of magnitude more stuff than their thing.
But when your local IT goon says it's going to be 8 months to procure the next set of hard drives for your next order of magnitude, it's a real problem and you have real money to invest in solving it, just not owning-a-data-center money.
"this is what we're paid for" Nope, it's what YOU'RE paid for.
I am paid to relax on my holidays because I know my team and I don't have to drive to a colo to swap out a failing line card since I realized time is worth money and people quit jobs that take up too much of their time. I can A/B test (something on-prem guys NEVER get the luxury to do) so outages just don't happen at all (fingers crossed).
I have rarely met someone happy with their on-prem DC deployments, but after I moved to the AWS world it's just crazy how backwards it is to be anywhere but the cloud.
Anyone serious wouldn't "drive to the colo to swap out a failing line card" - they keep excess capacity and spares in the colo, and have the on-site personnel from the facility replace it.
Honestly just sounds like the environment you describe has greater organizational issues not related to on prem vs cloud.
Compare something like rocketry or chemical engineering with running an on-prem DC. I don't see what the complaining is about. It's still a luxury compared to what other professions have to deal with.
On-prem is massively harder if you can’t cut corners on security or reliability. Just things like testing & upgrading firmware, doing real DR testing (I know multiple places which spent lots of time and money doing annual failover tests, but went down hard every time they had a true failure due to something they’d missed), handling things like boot signing or secure logging, etc. all take up multiple FTEs worth of time, or are a checkbox from a platform which handles that for you.
> Seriously, if you’re at the point that you’re doing sophisticated analysis of cloud costs, consider dropping the cloud.
Which would mean that you lose part of the reason to use the cloud in the first place... A lot of orgs move to cloud-based hosting because it enables them to go way further in FinOps / cost control (amongst many other things).
This can make a lot of sense depending on your infra: if you have some fluctuation in your needs (storage, compute, etc.), a cloud-based solution can be a great fit.
At the end of the day, it is just a tool. I worked in places where I SSH'd into the prod bare-metal server to update our software, manage the firewall, check the storage, ... and all that manually. And I worked in places where we were using a cloud provider for most of our hosting needs. I also handled the transition from one to the other.
All I can say is: It's a tool. "The cloud" is no better or worse than a bare-metal server or a VPS. It really depends on your use-case. You should just do your due diligence and evaluate the reason why one would fit you more than the other, and reevaluate it from time to time depending on the changes in your environment.
> A lot of orgs move to cloud-based hosting because it enables them to go way further in FinOps / cost control
I think a lot of orgs move to cloud simply because it's popular and gartner told them so.
But taking a step away from that, it's really about self-service. When the alternative is logging a ticket for someone to manually misconfigure a VM and then fail to send you the login credentials, then your delivery is slow.
When you're chasing revenue, going slow means you're leaving money on the table. When you're a big bureaucratic org, it means your middle managers can't claim to have delivered a whole bunch of shit. Nobody likes being held up, but that's what infrastructure teams historically do.
> I think a lot of orgs move to cloud simply because it's popular and gartner told them so.
Nah, I think it's mostly about the second part of your comment. Everyone hates waiting for months to get a VM or a database or a firewall rule because the infrastructure/DBA teams are stuck ten years in the past and take pride in their artisanal infrastructure building.
So moving to the cloud eliminates a useless layer of time wasting.
If your on-prem team can't spin up a VM same day, then firing them is probably higher ROI than "going to cloud". Further, a lot of the shops "going to cloud" because their infra team is slow then hide the cloud behind that same infra team.
A prior 200+ dev shop went from automated on-prem VM builds happening within hours from when you raise a ticket, to cloud where there was a slack channel to nag&beg for an EC2 which could take a day to a week. This was not a temporary state of affairs either, it was allowed to run like this for 2 years+.
Oh and, worth mentioning, CTO there LOVED him some Gartner.
Despite years of friendly-sounding devops philosophy, there are times when devs and ops are fundamentally going to be in conflict. It's sort of a proxy war between devs, who understandably dislike red tape, and management, who loves it, with devops caught in the middle and on the hook both for rapid delivery of infrastructure and for some semblance of governance.
An org with actual governance in place really can't deliver infra rapidly, regardless of whether the underlying stuff is cloud or on-prem, because whatever form governance takes in practice, it tends to be distributed, i.e. everyone wants to be consulted on everything but they also want their own responsibility/accountability to be diluted. Bureaucracy 101.
Devs only see ops taking too long to deliver, but ops is generally frozen waiting on infosec, management approving new costs, data stewards approving new copies across ends, architects who haven't yet considered/approved whatever Outlandish new toys the junior devs have requested, etc etc.
Depends on exactly what you're building but with a competent ops team cloud vs on prem shouldn't change that much. Setting aside the org level externalities mentioned above, developer preference for stuff like certain AWS apis or complex services is the next major issue for declouding. From the ops perspective cloud vs on prem is largely gonna be the same toolkit anyway (helm, terraform, ansible, whatever)
Whilst often true in practice, this doesn't have to be true.
The reality is, a lot of these orgs have likely already discovered devops, pipelines, deployment strategies, observability, and compliance as code.
There's basically little in compliance that can't be automated with patterns and platforms, but in most of these organizations a delivery team's interface with the org is their non-technical delivery manager, who folds like a beach chair when they're told no by the random infosec bod who's afraid of automation.
I've cracked this nut a few times though. It requires you be stubborn, talk back, and have the gravitas and understanding to be taken seriously. i.e. yelling that's dumb doesn't work, but asking them for a list of what they'd check, and presenting an automated solution to their group, where they can't just yell no, might.
I think it helps when people actually take a step back and understand where the money that pays their salary comes from. Often times people are so ensconced in their tech bureaucracy they think they are the tail that wags the dog. Sometimes the people that are the most hops from the money are the least aware of this dynamic. Bureaucracies create an internal logic of their own.
If I am writing some internal software for a firm that makes money selling widgets, and I decide that what we really need is a 3 year rewrite of my app for reasons, I am probably not helping in the sale or the production of widgets. If another team is provisioning hardware for me to write the software on, and it now takes 2 weeks to provision virtual hardware that could take seconds, then they are also not helping in the sale or the production of widgets.
These are the kind of orgs that someone may one day walk into, blast 30% of the staff, and find no impact on widget production, and obvious 30% savings on widget costs...
> If another team is provisioning hardware for me to write the software on, and it now takes 2 weeks to provision virtual hardware that could take seconds, then they are also not helping in the sale or the production of widgets.
Well in this example, the ops team slowing down pointless dev work by not delivering the platform that work is going to happen on quickly are effectively engaged in costs savings for the org. The org is not paying for the platform, which helps them because the project might be canceled anyway, and plus the slow movement of the org may give them time to organize and declare their real priorities. Also due to the slow down, the dev and the ops team are potentially more available to fix bugs or whatnot in actual widget-production. It's easy to think that "big ships take a while to turn" is some kind of major bug or at least an inefficiency, but there are also reasons orgs evolve in that direction and times when it's adaptive.
> Often times people are so ensconced in their tech bureaucracy they think they are the tail that wags the dog.
Part of my point is that, in general, departments develop internal momentum and resist all interface/integration with other departments until or unless that situation is forced. Structurally, at a lot of orgs of a certain size, that integration point is ops/devops/cloud/platform teams (whatever you call them). Most people probably can't imagine being held responsible for lateness on work that they are also powerless to approve, but for these kind of teams the situation is almost routine. In that sense, simply because they are an integration point, it's almost their job to absorb blame for/from all other departments. If you're lucky management that has a clue can see this happening, introduce better processes and clarify responsibilities.
Summarizing all that complexity and trying to reduce it to some specific technical decision like cloud vs on-prem is usually missing the point. Slow infra delivery could be technical incompetence or technology choices, but in my experience it's much more likely a problem with governance / general org maturity, so the right fix needs to come from leadership with some strong stable vision of how interdepartmental cooperation & collaboration is supposed to happen.
I've never seen an IT team that couldn't spin up a VM in minutes. I have seen a bunch of teams that weren't allowed to because of ludicrous "change control" practices. Fire the managers that create this state of affairs, not the devops folks, regardless of whether you "go cloud" or not.
I've met multiple customers where time to get a VM was in the weeks to months. (To be fair, I'm at a vendor that proposed IaC tooling and general workflows and practices to move away from old school ClickOps ticket-based provisioning, so of course we'd get those types of orgs).
And more often than not, it had nothing to do with managers, but with individual contributors resisting change because they were set in their ways and were potentially afraid for their jobs. Same applies for firewall changes btw.
I think a lot of HN crowd hangs out at FAANG/FAANG adjacent or at least young/lean shops, and has no idea how insane it is out there.
I was at a shop that provisions AWS resources via written email requests & clickops, treated fairly similar to a datacenter procurement. Teams don't have access to the AWS console, cannot spin up/down, stop, delete, etc resources.
A year later I found out that all the stuff they provisioned wasn't set up as reserved instances. We weren't even asked. So we paid hourly rates for stuff running 24/7/365.
This was apparently the norm in the org. You have to know reserved instances exist, and ask for them... you may eventually be granted the discount later. I only realized what they had done when they quoted me rates and I was cross-checking ec2instances.info
I can guarantee you less than 20% of my org (its not a tech shop) is aware this difference exists, let alone that ec2instances.info exists for cross reference.
No big deal, just paying 2x for no reason on already overpriced resources!
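To make the size of that gap concrete with placeholder numbers (these rates are illustrative, not actual AWS prices - check ec2instances.info or the pricing pages for real ones):

```python
hours_per_year = 8760

on_demand_rate = 0.20   # $/hour, placeholder for some mid-size instance
reserved_rate = 0.12    # $/hour effective, placeholder 1-year reservation

on_demand_annual = on_demand_rate * hours_per_year
reserved_annual = reserved_rate * hours_per_year
print(on_demand_annual, reserved_annual, round(on_demand_annual / reserved_annual, 2))
# e.g. 1752.0 vs 1051.2 -> ~1.67x, before even looking at 3-year terms or Savings Plans
```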
I went from that type of world (cell carrier) to a FAANG type company and it was shocking. The baseline trust that engineers were given by default was refreshing and actually a bit scary.
I’m not sure my former coworkers would have done well in an environment with so few constraints. Many of them had grown accustomed to (and been rewarded and praised for) only taking actions that would survive bureaucratic process, or fly underneath it.
The problem is the strong players are less likely to stick around, so you often do end up with folks who can't do the work in minutes - though, the work is usually slightly more than clicking the "give me the vm" button.
> If your on-prem team can't spin up a VM same day, then firing them is probably higher ROI than "going to cloud".
I haven’t seen this be due to one set of incompetents since the turn of the century. What I have seen is this caused by politics, change management politics, and shortsighted budgetary practices (better to spend thousands of dollars per day on developers going idle or building bizarre things than spend tens on infrastructure!).
In such cases, the only times where firing someone would help would be if they were the C-level people who created and sustained that inefficient system.
They probably should be fired, but it's actually complicated because the orgs tend to be staffed with departments that believe this is the way things should be done, and best case the replacement needs to compromise with them, worst case they are like-minded and you just get more of the same.
If you're running an internal cloud, you can likely absorb that.
I think it comes down to a couple of things:
- Small orgs don't have the resources to run internal clouds, nor should they be doing so. This limits the pipeline of available candidates.
- Large orgs promote the wrong people to management, and they make decisions based on their mental model of the world that was developed 20 years ago. They're filled with people who don't understand the difference between cloud and virtualization.
- Large consultancies make more money by throwing raw numbers at the problem rather than smart automation. i.e. it's easier for IBM to bill T&M and a whole project wrapper to patch the server than automate it.
- Finance & HR teams want you to bend to their ways of working rather than the opposite.
As for the rest, many of them are simply in ops because they're less skilled software developers, or they're now being asked to assure security, and that scares them, so they try to lock everything down.
> Project is a dud? just nuke the cloud project and no more charges for it.
How is that a negative? Not every project is going to be successful. That's just a basic fact of life. That you don't have to deal with the sunk cost fallacy and just pull the plug is a good thing.
> Project is poorly architected and running like a dog? throw more resources at it.
Another positive...?! You can continue to serve your clients and maintain a revenue stream while you work on a better architecture, instead of failing completely. And once you need fewer resources, you can easily scale down.
> waiting for months to get a VM or a database or a firewall rule because the infrastructure/DBA teams are stuck..
You still have to go through your devops (or equivalent) team to make any network configuration/permission changes. Whether that change is implemented by a local firewall rule or some AWS configuration change is not very important.
It's not like you're going to have developers changing AWS access permissions directly. Maybe in a few employee startup, but in any regulated & audited company, you must have separation of duties and audited change control process.
That time wasting comes back in one form or another. For instance, at my workplace they enable *all* the Azure Application Gateway rules, even those that Microsoft says not to enable - causing even simple OpenID redirects from Microsoft Azure AD (Microsoft login) to the application to get captured in AAG and fail.
> I think a lot of orgs move to cloud simply because it's popular
This can be rational and not just following the leader. In particular, many devs might think that working at an org that does on-prem is bad for their career, and they might be right. So from an org's POV, you can't hire good engineers if you're perceived as a dinosaur. That alone might be enough to send you towards the cloud even if the price by itself makes no sense.
I've experienced the opposite too: orgs looking down on cloud-only devs.
The idea being that devs who lean on the cloud excessively do so to mask their lack of fundamentals, which will cause costly fuck-ups no matter what technology they use, cloud or on-prem.
Maybe directionally similar idea to hiring ex-Googlers? Some orgs also don't like those. Specific mindset, specific toolbox.
It is absolutely true that some devs have the AWS product set as the tech toolkit they know best.
Whatever their fundamental skills are, the most important way they add value is by optimizing things like lambda startup time or EC2 CPU utilization. Does this allow them to mask deep problems with fundamentals? I guess it could, but that sounds a bit gatekeep-y to me.
Sort of, but IDK - if you have specific needs this might be a somewhat reasonable heuristic for hiring.
Devs who came up building software more or less from scratch really do have a different skillset than ones who stick to working in service-rich environments because there's a significant difference between glueing services together vs building out those same services. For example something like using a paginated API is quite a bit easier than designing/implementing one.
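As a rough illustration of the consumer side of that example - the endpoint and field names below are hypothetical, not any particular service's API:

```python
# Minimal sketch of *consuming* a paginated API: follow an opaque
# "next token" until the server stops returning one. The URL and field
# names are hypothetical.
import requests

def fetch_all(url: str) -> list[dict]:
    items, token = [], None
    while True:
        params = {"next_token": token} if token else {}
        resp = requests.get(url, params=params, timeout=10)
        resp.raise_for_status()
        page = resp.json()
        items.extend(page["items"])
        token = page.get("next_token")
        if not token:          # no more pages
            return items

# Designing the *server* side means choosing a stable sort order, encoding
# a cursor that survives inserts/deletes, bounding page size, and so on -
# a noticeably bigger job than the loop above.
```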
A developer who is skilled and methodical about reading and understanding service-level documentation may not actually be able to step through debugging in a REPL, and vice versa. (Not to say that either kind of person cannot learn the other person's tricks, but as far as the differences in what they already know go, those can be pretty significant.)
Assuming someone only has one of these skillsets, the most valuable one totally depends on the situation. On the one hand it's pretty cool that service-familiarity tends to be language-agnostic, but it's less cool when your S3-API expert barely understands the basics of tooling in the new language.
A paginated API is a great example. For me, I learned C from K&R, producing a.out files that would segfault and leave a core file in $HOME. If I wanted a list structure, I had to build it out of resizable arrays of pointers, etc.
I ended up years later at AWS, and while I was there I built internet-facing paginated APIs over resources which had a variety of backing stores, each of which had some behavior I had to reason about.
So I don’t doubt the difference between API builder and API user, I’ve been both. I think it’s less about what you are doing and more about how you do it (with curiosity about how things work, vs. as an incurious gluer).
That said, looking at the code inside MySQL is highly instructive for the curious; AWS doesn’t provide that warts-and-all visibility into their implementations, which cuts off the learning journey through the stack.
Mantras are good for orgs that are not mature enough to do actual analysis.
A lead developer recently left where I work, and while higher pay was likely the biggest factor in the decision to move, I suspect the real reason he left is that the higher-ups simply don't listen when he says things like "you can't just lift full virtual machines onto Azure and refuse any rewrite/redesign while complaining about high Azure spend."
AWS is infamous in financial services for this though.
First they give you a ton of credits, assign you internal resources to help.
Then they encourage you to simply "lift and shift" your workloads onto EC2/EBS/EFS/etc. It's 100% compatible with your current system, you can rollback, etc.
This takes two years, and then you notice your AWS bill is 10x your old infra.
Then they say - of course, that's because you need to rewrite it all to serverless/microservices/etc., the AWS bespoke-branded alphabet soup of services. Now you are fully entrapped and cannot roll back to your own infra, let alone move to another cloud provider, without yet another rewrite.
A lot of big financial firms are 5+ years into this. Several have rolled back for certain use cases due to cost, especially anything with a lot of data transfer because yeah.. performant storage in the cloud & egress are expensive, duh.
You can still use standard stuff like Kubernetes, even if you go microservices. I don't think it's that bad.
I'd say Cloud lets you do a few things, but the way I think of it ultimately is it lets you spend opex instead of capex. If that means though that your opex will end up higher than your capex, then it would be silly to go with it.
The other thing is in theory your reliability should be higher, but, again, that will depend on your individual situation, and how much reliability matters to you.
You CAN, but of course that's not what AWS steers you to.
Once your org has gotten to that step, it's been so steered by AWS staff that it's hard to imagine it suddenly finding sense and building with open-standard stuff. Very few AWS shops I have encountered avoid the siren call of various AWS-only or AWS-specific services, which they then become heavily ingrained in.
Generally I do think it's mostly about transforming CapEx to OpEx, with the rest of the stuff being noise.
I was one of those AWS people that worked with Financial Services customers.
We (at least my team) were always pushing for a minimum of modernization when architecting migration projects - even a simple move to containers, managed by whatever orchestrator they want to run. It helps enormously on costs at least by just taking off so many overhead and overprovisioned VMs from the roster of migration candidates.
More often than not, the customer will refuse that and opt for lift + shift. It's either too hard, they don't have the resources or time, etc.
Yep, I did a (tiny by your scale) lift-and-shift+ - basically took a thing that ran as a regular process with a database attached, containerised it (and pen tested it, but that's another story), and plugged it into a cloud-managed database. It worked great.
Well, then they can flip a coin for which mantra to follow. If you pick "cloud bad" you'll get stories also about companies that refused to go to cloud when it makes sense to.
Oh how I wish it had been so. The cloud has been a hard sell all along. Also, 20 years ago S3 and EC2 didn’t exist, so maybe it’s been a little less time than that.
I have had the same experience. And all this, even though Amazon has granted us really generous and free annual plans and professional advice (all inclusive for a non-profit GmbH).
>Seriously, if you’re at the point that you’re doing sophisticated analysis of cloud costs, consider dropping the cloud.
The blog post's solution is relatively simple to put in place; if you're already locked in to AWS, dropping it will cost quite a lot, and this might be a great middle ground in some cases.
If you're actually using the features of cloud - i.e. managed services - then this involves building out an IT department with skills that many companies just don't have. And the cost of that will easily negate or exceed any savings, in many cases.
That's a big reason that the choice of cloud became such a no brainer for companies.
I don’t think it’s that ridiculous (the next obvious goalpost being producing your own silicon), but since microservices means your DB is purely for your service, it stands to reason that you should also then know how to configure, backup, maintain, and tune the DB.
This is of course a wildly unrealistic ask, which is why I think the idea of merging jobs to save money is stupid. Let people who like frontend do frontend. Let people who like DBs do DBs. Let people who tolerate YAML do DevOps.
I don't think "host yourself" in this instance would've helped. I think AWS in this instance is operating at a loss. Author found a loophole in AWS pricing and that's why it's so cheap. Doing it on their own would've been more expensive.
Now as to why the AWS pricing the way it is... we may only guess, but likely to promote one service over the other.
Those VPS hosting services are a solution for a startup, but not for running your internet banking, airline company or public SaaS product. Plus their infinite egress bandwidth and data transfer claims are only true as long as you don't... use infinite egress bandwidth and data transfer.
While they might not charge you directly as a line item, you still get charged: Linode in the above list is what I use. I get a fixed cap of bandwidth each month. Anything beyond that is charged. So, you don't get charged IF you stay below the initial cap.
If anything you could say that AWS is the closest to actually having unlimited bandwidth (or at least, half-parity there) since they don't charge you for incoming data, where other VPS charge you for data both ways.
Really which has more or less expensive bandwidth comes down to the shape of your data usage.
The tipping point for self-hosting would be the point at which paying for full-time, 24/7 on-call sysadmins (salary + benefits) is less than your cloud hosting bill.
That's just not gonna happen for a lot of services.
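A back-of-the-envelope version of that tipping point, with figures that are assumptions rather than anyone's real payroll or bill:

```python
# Rough break-even: when the cloud bill exceeds the fully-loaded cost of the
# ops staff you'd need to run things yourself. All figures are assumptions.
sysadmin_fully_loaded = 180_000   # salary + benefits + overhead, per year
headcount_for_oncall = 3          # minimum for a sane 24/7 rotation

self_host_people_cost = sysadmin_fully_loaded * headcount_for_oncall
monthly_cloud_bill = 30_000

print(f"self-hosting people cost : ${self_host_people_cost:,}/yr")
print(f"cloud bill               : ${monthly_cloud_bill * 12:,}/yr")
# Under these assumptions, below ~$45k/month of cloud spend the on-call
# payroll alone dominates.
```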
If you're a heavy bandwidth user it's worth looking at Leaseweb, PhoenixNAP, Hetzner, OVH, and others who have ridiculously cheaper bandwidth pricing.
I remember a bizarre situation where the AWS sales guys wouldn't budge on bandwidth pricing even though the company wouldn't be viable at the standard prices.
I hadn't really thought about it much, but googling it looks like there's a discount programme for a committed spend of around $1M/year. For a small company, that's a lot of money, and it was an unusually large amount of bandwidth for the size of company. I suppose it makes sense now I know they're interested in companies spending that sort of money.
Another trick is to use ECR. You can transfer 5TB out to the internet each month for free. The container image must be public, but you can encrypt the contents. Useful when storing media archives in Glacier.
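A sketch of the encryption half of that trick, using a symmetric key so "public image" doesn't mean "readable archive". The build/push to an ECR Public repository is a separate docker step not shown here, and the 5TB/month figure is the claim above - verify current ECR Public limits before relying on it.

```python
# Encrypt an archive locally before baking it into a *public* container
# image. The key must live outside the image, or the exercise is pointless.
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # store this somewhere safe, NOT in the image
fernet = Fernet(key)

with open("media-archive.tar", "rb") as f:
    ciphertext = fernet.encrypt(f.read())   # fine for a sketch; streams for huge files

with open("media-archive.tar.enc", "wb") as f:
    f.write(ciphertext)

# Later: fernet.decrypt(ciphertext) recovers the original archive.
```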
I don't understand how AWS can keep ripping people off with these absurd data transfer fees, when there is Cloudflare R2 just right over there offering a 100 times better deal.
Data has "gravity" -- as in, it holds you down to where your data is, and you have to spend money to move it just like you have to spend money to escape gravity.
When all my VMs and containers are hosted in AWS, and S3 has rock solid support no matter what language, framework, setup I use, it becomes really tough to ask the team to use another vendor for object storage. If something goes wrong with R2 (data loss, slow transfer, etc.) I will get blamed (or at least asked for help). If S3 loses data or performs slowly in some case, people will just figure we're somehow using it wrong. And they will figure out how to make it better. Nobody gets blamed. And to be honest, data transfer fees is negligible if your business is creating any sort of value. You don't need to optimise it.
we just built a new feature for our pretty bandwidth heavy SaaS on R2. Works pretty damn good with indeed massive savings. We just use the AWS-SDK (Node.js) and use the R2 endpoint.
I trust cloudflare far less than AWS. Once my data is in AWS all applications in the same region as the data can use the data without paying anything in transfer costs.
Also, the prices he quotes are label prices, if you are a customer and you pre purchase your bandwidth under an agreement, it gets _significantly_ less expensive.
R2 is still pretty new. I don't know how well it works in practice in terms of performance and availability. And of course durability, which is difficult if not impossible to judge. S3 has a much longer history and track record, so it has the advantage here. And if all your stuff is inside AWS already there are advantages to keeping the data closer. Depending on how the data is used, egress might also not always be such a major cost.
But yes, the moment you actually produce significant amounts of egress traffic it gets absurdly expensive. And I would expect competitors like R2 to gain ground if they can provide reasonably competitive reliability and performance.
> it’s almost as if S3 doesn’t charge you anything for transient storage? This is very unlike AWS, and I’m not sure how to explain this. I suspected that maybe the S3 free tier was hiding away costs, but - again, shockingly - my S3 storage free tier was totally unaffected by the experiment, none of it was consumed (as opposed to the requests free tier, which was 100% consumed).
It’s also possible their billing system can’t detect transient storage usage. Request billing would work differently from how billed storage is tracked. It depends on how billing is implemented, but that would be my guess. It may change in the future.
Maybe some sampling mechanism comes along and takes a snapshot once per hour.
Suppose you store the data there for 6 minutes. Then there's a 90% probability that the sampler misses it entirely and you pay $0. But there's a 10% probability that the sampler does catch it. Then you pay for a whole hour even though you used only a fraction of it.
Over many events, it averages out close to actual usage[1]. In 9 out of 10 cases, you pay 0X actual usage. In 1 out of 10 cases, you pay 10X actual usage. (But you can't complain because you did agree to 1-hour increments.)
---
[1] Assuming no correlation between your timing and the sampler's timing. If you can evade the sampler by guessing when it runs and carefully timing your access, then you can save a few pennies at the risk of a ban.
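A quick simulation of that intuition - the hourly sampler is hypothetical, since nobody outside AWS knows how storage metering is actually implemented:

```python
# Hypothetical hourly sampler vs. a 6-minute object lifetime: ~10% of the
# time you're billed a full hour, ~90% of the time nothing. On average it
# converges to the 6 minutes you actually used.
import random

TRIALS = 100_000
LIFETIME_MIN = 6          # object lives 6 minutes
SAMPLE_PERIOD_MIN = 60    # sampler looks once per hour, at a random offset

billed_minutes = 0
for _ in range(TRIALS):
    sample_offset = random.uniform(0, SAMPLE_PERIOD_MIN)
    if sample_offset < LIFETIME_MIN:          # sampler caught the object
        billed_minutes += SAMPLE_PERIOD_MIN   # billed a whole hour
    # else: billed nothing

print(f"avg billed : {billed_minutes / TRIALS:.2f} min")
print(f"actual use : {LIFETIME_MIN} min")
```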
This is clever. And as I understand it, one of the tricks WarpStream (https://www.warpstream.com) use to reduce the costs of operating a Kafka cluster.
This may be arguably nitpicking, but the following statement from TFA isn’t exactly the case:
> Moreover, uploading to S3 - in any storage class - is also free!
Depending upon how much data you’re transferring in terms of storage class, number of API calls your software makes to do so, and the capacity used, you may incur charges. This is very easy to inadvertently do when uploading large volumes of archival data directly to the S3 Glacier tiers. You absolutely will pay if you end up using millions of API calls to upload tens of millions of objects comprising tens of terabytes or more.
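To put rough numbers on that (request prices are approximate us-east-1 list prices; check the current S3 pricing page), the per-request charge dominates once the object count reaches the tens of millions:

```python
# Rough request-cost estimate for uploading many small objects directly to
# a Glacier tier. Prices are approximate list prices - verify against the
# current S3 pricing page.
objects = 50_000_000                   # tens of millions of objects
put_price_deep_archive = 0.05 / 1000   # ~$0.05 per 1,000 PUT requests
put_price_standard = 0.005 / 1000      # ~$0.005 per 1,000 PUT requests

print(f"PUTs to Deep Archive : ${objects * put_price_deep_archive:,.0f}")
print(f"PUTs to Standard     : ${objects * put_price_standard:,.0f}")
# Batching objects into larger archives before upload cuts the request
# count - and the bill - dramatically.
```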
Thanks for the feedback! I don't think it's nitpicking, you're right that it's misleadingly phrased - in fact, the only S3 costs I observed weren't storage at all, but rather the API calls.
I’ve been deploying 3xAZs in 3xRegions for a while now (years). The backing store being regional s3 buckets (keeping data in the local compliance region) and DDB with replication (opaque indexing and global data) and Lambda or Sfn for compute. So far data transfer hasn’t risen to the level of even tertiary concern (at scale). Perhaps because I don’t have video, docker or “AI” in play?
Sorry, that was indeed nonspecific, you’re right. The add-on features for VPCs are commingled with the concept for me since they almost always go hand in hand. Internet gateways, transit gateways, EIPs, service endpoints, etc., and their fixed costs. Yuck.
All that stuff definitely adds up. I'm familiar with some low-traffic projects with high "security" requirements that have so much overhead due to those sorts of add-ons. All the overhead winds up costing more than the actual compute + bandwidth running the site.
I've not seen any evidence that multi-AZ is more resilient. There's no history of an entire AZ going down that doesn't affect the entire region, at least that I can find on the internet within 15 minutes of googling.
In case you ever decide to return to AWS, its Cost Explorer is far from perfect but it can show you where your expenses are coming from, especially if your costs are pennies. In the last re:invent they even released daily granularity when grouping by resources (https://aws.amazon.com/blogs/aws-cloud-financial-management/...).
Offtopic but related: Has anyone noticed transient AWS routing issues as of late?
I’ve noticed on three or four occasions in the last three months that I got served a completely different SSL certificate than the one for the domain I was visiting - a certificate for a domain that often could not even be reached publicly, probably pointing to some organization's internal OTA environment. On every occasion both the URL I wanted to visit and the DNS of the site I actually ended up on were located in AWS. Then, less than a minute later, the issue was resolved.
I first thought it must be on my side, my DNS server malfunctioning or something, but the served sites could not be accessed publicly anyway, and I had the issue on two separate networks with two separate clients and separate DNS servers. I’ve had it with polar.co's internal environment, bank of ireland (iirc), multiple times with download.mozilla.org, and a few other occasions.
I contacted AWS on Twitter about it, but just got some generic pointless response that I should file an incident - but I’m just some random user, I’m not an AWS client. Somehow I could not get that across to the AWS support on the other side of Twitter.
I'm not sure how this could be removed - the fundamentals behind it are basic building blocks of S3.
Maybe raising the cost of transient storage? e.g. If you have to pay for a minimum of a day's storage - but even if that was the case this would still be cost-effective, and at any rate it seems very unnatural for AWS to charge on such granularity.
+ I would guess that S3 is orders of magnitude more profitable for AWS than cross-AZ charges, so I'm not sure they'd consider it a loss-leader.
It would be fairly easy to change the pricing policy. GCP did something similar for cross-region: https://cloud.google.com/storage/pricing-announce#network. That one is pretty severe because it seems to affect all reads. However, I can imagine an alternate implementation where the source AZ is tracked when data is written and egress fees are charged when the data is read (as if the data were always stored in the source AZ). It could even be done in a more nuanced way, such as only charging the first time data is read in another AZ: once you've read it, it's free, as if it were now cached in that new AZ forever. Another option would just be raising the minimum storage duration so that it costs all or most of what the data transfer would.
It would definitely piss a lot of people off as it is adding to their bill, but it could likely be done in a way that makes exploiting this for just data transfer not worth it without adding huge costs to most "real" use cases.
Yeah, I see what you mean - that'd indeed render this method ineffective. Like you said I'm sure this would bother a lot of customers, but it's not a completely unrealistic overhaul of S3 pricing.
That being said, that'd be sort of "mean" of AWS to do - the data is already replicated across AZs whether you pay for it or not because of how S3 works.
It's absolutely amazing that so many devs don't realise this. They seem to think that bandwidth should cost a few cents a month, when in reality it is virtually free. Perhaps the 7c/GB charge was reasonable when AWS came out 15 years ago, but networking has got orders of magnitude cheaper and faster in the intervening time period.
What's more, now that 1 gigabit+ home connections are available, it should be obvious to anyone doing the math that it can't cost that much - otherwise a 200GB CoD install would be costing the ISP $20.
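The back-of-the-envelope version of that comparison, using AWS's roughly $0.09/GB first-tier egress price as the yardstick:

```python
# If consumer downloads were really priced like cloud egress, a single big
# game install would be a visible line item for the ISP.
aws_egress_per_gb = 0.09   # approximate first-tier AWS egress price
install_size_gb = 200      # e.g. a large game download

print(f"one 200 GB install at cloud egress rates: "
      f"${install_size_gb * aws_egress_per_gb:.2f}")
# ~$18 - yet ISPs sell flat-rate gigabit connections profitably.
```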
I feel like an entire generation of devs have been weirdly brainwashed by cloud to believe that a ton of things need to be very complex and expensive.
Of course it’s also a zero interest rate phenomenon. We are exiting a >10 year era when the name of the game was simply to grow and anything in your way could be dealt with by just throwing money at it. Nobody cared about cost as long as growth numbers went up.
I hear you, and that is an egregious margin. Just wondering if part of their bandwidth pricing calculation is driven by a goal of constraining their infrastructure costs (or other considerations beyond profit). I'm actually wondering this exactly because it is so egregious.
There is of course a thing wherein if something is free people mindlessly use it. If all AWS customers did this with bandwidth, I wonder how it would impact total usage and AWS's subsequent infrastructure considerations.
I'm no fan of their pricing and I'm sure there's an unhealthy dose of greed in there. Your phrasing just prompted me to consider what other factors might also be involved. And, if part of the rationale is actually to influence customer behavior with disincentives, then by definition there would have to be some pain involved.
Yep. Big cloud bandwidth is a 200X markup from list price. It's ludicrous.
It serves two purposes for them. One is obviously a nice profit center. The other is that free ingress but expensive egress causes data to flow in but not out, creating a center of gravity and a form of lock in.
They're not paying for bandwidth, but their connections are not asymmetric, so they need to balance egress and ingress or they will incur fees or dropped traffic.
The pricing is there to maintain this balance. Since they're obviously egress heavy, it makes sense for them to charge for egress, and make ingress free.
People think AWS is using costs to "tax" you; what they're really doing is using them to control the shape and size of their traffic.
If this is true then how do so many other companies not charge this way? VPS companies that charge radically less and bare metal / colocation hosts that charge flat rates are all profitable and their networks work fine.
Add to that the fact that people often explicitly choose these smaller providers because they have cheap bandwidth, meaning they're going to be a magnet for high bandwidth users like DIY CDNs, streaming, game servers, TURN servers, video conferencing relays, etc.
I find it hard to believe that AWS or GCP are getting core Internet bandwidth on worse terms than much smaller companies like Vultr, Hivelocity, Datapacket, or OVH.
The other companies have significantly different SLAs and drop packets far more readily. They also charge for bandwidth; in my experience, you get your 1TB/mo with your $5 VPS, sure, but once you go over, you're facing per-GB charges that are very close to AWS's default egress price.
They're not a magnet for these services for the reasons I just described as you reach your per VPS limit very quickly, and to get more "cheap bandwidth" you have to be prepared to run 100s of VMs per provider, and have to consider provisioning VMs you don't need just to get access to another $5 TB of transfer, or you're just going to end up paying the per GB fee anyways.
The terms aren't worse, but the service and their guarantees are different. Again, if you ask AWS for a bandwidth deal, they'll cut you one within a few minutes that will more than halve the price of your transfer if you pay up front. Which is AWSs way of saying, "if you make your usage predictable, we can make it way cheaper."
Why? Because they have fixed _capacity_ on their links. The costs manage that _capacity_.
Digital Ocean and Vultr are a fraction of AWS pricing. Vultr is $0.01/gb. Bare metal providers are often cheaper still, selling by size of pipe rather than bytes transferred.
In my experience GCP and AWS are pretty unwilling to budge on bandwidth pricing unless you are very large and making a long commitment. If you are not spending six figures a month forget about it.
You may be right about SLA but I run large volume services out of bare metal providers and do not experience meaningful packet loss or down time in practice. Bandwidth costs are easily hundreds of times less than AWS or GCP.
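To make the pipe-vs-bytes comparison concrete, here's a rough conversion with made-up but plausible numbers (the port price and utilization are assumptions, not any particular provider's offer):

```python
# Convert a flat-rate pipe into an effective per-GB price and compare it
# with AWS's headline egress rate. Port price and utilization are assumed.
SECONDS_PER_MONTH = 30 * 24 * 3600

port_gbps = 1
port_monthly_cost = 100.0   # assumed unmetered 1 Gb/s port
utilization = 0.5           # assume the pipe runs half full on average

gb_moved = port_gbps / 8 * SECONDS_PER_MONTH * utilization   # ~162,000 GB
effective_per_gb = port_monthly_cost / gb_moved

aws_per_gb = 0.09           # approximate first-tier AWS egress price
print(f"effective pipe price : ${effective_per_gb:.4f}/GB")
print(f"AWS headline egress  : ${aws_per_gb:.2f}/GB")
print(f"ratio                : {aws_per_gb / effective_per_gb:.0f}x")
```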
They'll sell you the pipe without any guarantee and some include provisions that allow them to instantaneously downgrade your pipe if they decide your servers are a traffic management problem.
I can't speak to GCP, AWS is pretty generous, and they even suggest you contact them for a deal once you're in the low 5 figure range, and that's across all services in a region. If you move enough data the discount is significant and approaches overage pricing at VPS providers.
I'm sure. If I were running things that were more bandwidth heavy as opposed to integration heavy, we would have gone that route as well, and we would have gone through the extra trouble of getting some provider diversity and redundancy built in.
For smaller cases, they can avoid all that overhead and just trade it for bandwidth costs. And if your costs do get high, it's much easier to build an external caching network than it is to build a bunch of externally dependent infrastructure with bare metal providers.
In any case, I don't think it's that AWS is taxing its users unfairly; I think the costs are a solid reflection of where their engineering effort and variable costs are concentrated. It seems like maintaining symmetry in bandwidth is one of those.
As a customer I can use petabytes one month and then zero bytes the next month. They have link agreements with multi year terms and possible "balance payments" required if symmetry is not maintained. This type of bandwidth isn't as cheap to provide under these terms.
Hahahah. I don't even know where to start. If you think a 200x markup from list price on IP transit is a fair reflection of costs from AWS then good luck to you.
Attempting to have a conversation by laughing in someones face as a starter is incredibly obnoxious. If you thought so little of what I had to say why reply at all? What are you hoping to gain by doing this?
I'm not sure how you get a 200x multiplier from per-GB prices when the list prices are not per GB of transfer but per unit of capacity. Or are you taking a 1Gb/s price average of $1000/mo, assuming 100% egress activity on that pipe, then multiplying a full month of this 100% usage by the AWS price and dividing the two? I get around 230x there, but this is not a practical comparison, and these types of links are quite different from a standard colo, so you're in for quite a bit of overhead.
Plus, if you actually used this much bandwidth on AWS - 2.5PB - you could get nearly a 10x break in pricing, bringing the multiple down to 23x. If you didn't try to pre-purchase, the multiple would be something like 80x because of their built-in automatic tiering.
In terms of CloudFront I'm getting a global caching layer. In terms of EC2 I get the VPC. I'm getting quite a bit more than just the bandwidth. In terms of luck, we feel we don't need it because we've actually sat down and calculated the costs (even all the above) for running the type of product we're running, and it's /far cheaper/ to do it entirely inside AWS.
This is all assuming you actually wanted to discuss this on merit.
Ok “loss” is a relative word here… a loss compared to what they could have got from you.
Somehow AWS has to rip you off, so if there is a non-rip-off gateway to the rip-off - i.e. if you can use the non-rip-off path to avoid the rip-off - they will close the "loophole".
For those suggesting VPSs instead of cloud based solutions, how do you deal with high availability? Even for a small business you may need it to stay up at all times. With a VPS this is harder to accomplish.
Do you setup the same infrastructure in two or more VPS instances and then load balance? (say, [1]). Feels a bit of an ... artisanal solution, compared to using something like AWS ECS.
Someone in the thread said that if you're 'at the point that you’re doing sophisticated analysis of cloud costs, consider dropping the cloud.'
We've built https://nodeshift.com/ with the idea that cloud is affordable by default without any additional optimization, you focus on your app with no concerns on costs or anything else.
Cost analysis has helped me build great infrastructure on AWS. The costs are communicating to you what is and is not efficient for AWS to do on your behalf, by analyzing the costs and working to reduce them, you also incidentally increase efficiency and in some cases, such as this one, workload durability.
Cost analysis should of course form the foundation of everything you build, regardless of whether it's SaaS tooling or infrastructure. But surely it's easier to do a cost assessment and optimization exercise on something that is fundamentally more affordable than AWS and doesn't carry such high margins? That's why we have built a platform that creates all the value at a low cost.
I reduced them to 0 by not using AWS. This simple trick lets you install and configure dedicated servers that work just fine. Most of your auto scaling needs can be solved using a CDN. But by the time you reach such needs you'd have hired competent engineers to properly architect things - it will be cheaper than using amazon anyway.
I've been looking for a place to store files for backup. Already keeping a local copy on NAS, but I want another one to be remote. Would you guys recommend S3? Wouldn't be using any other services.
I use S3 with the DEEP_ARCHIVE storage class for disaster recovery. Costs go up if you have many thousands of files so careful there. Hopefully will never need to access the objects and it's the cheapest I could find.
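If you go that route, the storage class is just an extra argument on the upload. A minimal boto3 sketch - bucket and file names are placeholders:

```python
# Upload a backup archive straight into the Deep Archive storage class.
# Restores take hours and incur retrieval fees, so this is for data you
# hope never to touch.
import boto3

s3 = boto3.client("s3")
s3.upload_file(
    Filename="backup-2024-01.tar.gz",
    Bucket="my-offsite-backups",
    Key="nas/backup-2024-01.tar.gz",
    ExtraArgs={"StorageClass": "DEEP_ARCHIVE"},
)
```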
There's going to a be a huge market for consultants to unwind people's cloud stacks and go back to simpler on-prem/colo (or Heroku-like) deployments in the coming years.