Some time ago, I evaluated the pricing for someone who was thinking of moving a web server from a traditional private server to AWS. It seemed that the monthly cost would be lower, until I looked at the bandwidth cost. The monthly price of that traditional private server, which included a fixed bandwidth limit, ended up being lower than what it would cost just for the bandwidth on AWS.
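To make the gap concrete, here's roughly the back-of-envelope I did. All numbers are illustrative assumptions (the per-GB rate is in the ballpark of AWS's published internet egress tier at the time, the server price and traffic are made up for the example):

    # Rough comparison: fixed-price dedicated server vs. AWS egress alone.
    # All prices here are illustrative assumptions, not current quotes.
    server_monthly = 40.0       # dedicated server, bandwidth included in the price
    traffic_tb     = 10         # what the web server actually pushes per month

    aws_egress_per_gb = 0.09    # rough first-tier internet egress rate
    aws_egress_cost = traffic_tb * 1024 * aws_egress_per_gb

    print(f"AWS egress alone: ~${aws_egress_cost:,.0f}/mo "
          f"vs. the whole server: ~{server_monthly:.0f}/mo")
    # 10 TB at ~$0.09/GB is roughly $920/mo before any compute is billed.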
In some ways AWS locks in its customers with its egress pricing. It can be so expensive to move away because of the egress that it's not economically viable. I don't think that's a mistake. I have a feeling in a number of years this could be reviewed by governments.
Oh yes, egress pricing is horrible. That's why AWS/S3 looks nice to back up to, but that first restore costs big time.
One killer no one seems to notice is the bandwidth between availability zones. When you have a proper best-practice cross-AZ deployment it can be rather expensive. But still cheaper than switches and humans and cages and data centre contracts.
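To put a number on it (the per-GB rate is the commonly quoted inter-AZ charge, billed in each direction; the traffic volume is invented):

    # Cross-AZ traffic is billed on both the sending and the receiving side.
    # Rate and volume below are assumptions for illustration only.
    per_gb_each_way = 0.01          # charged once leaving AZ-a and once entering AZ-b
    replication_gb_per_day = 2_000  # e.g. database replication plus service chatter

    monthly = replication_gb_per_day * 30 * per_gb_each_way * 2
    print(f"Cross-AZ transfer: ~${monthly:,.0f}/mo")  # ~$1,200/mo in this example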
There are so many variables. I was looking into moving my stuff into the cloud to throw out some physical hard drives. I haven't committed to anything yet, but I was surprised to see Dropbox is somehow cheaper than AWS. Isn't Dropbox just an entire extra company slapping a layer on top of AWS?
Cloud price comparisons are often apples to oranges.
Due in part to an extremely large number of hidden factors.
I'm not just talking about direct cost, but even 1vCPU != 1vCPU.
When discussing S3/GCS storage, for instance, you would need to understand the difference between each of the storage classes and how they're charged for access (rough numbers sketched at the end of this comment).
This would look much more like a series of a million graphs than a table.
On the whole: I'm extremely unimpressed by this article. Maybe one day when I'm bored enough I'll take a shot at a reasonable cost comparison of some actual infrastructure.
Maybe even throw in some TCO calculation for administrative duties, such as understanding your billing, ease of debugging and setup. If I really hate myself that day.
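On the storage class point: the same bucket can be cheap or expensive depending on how much you read back. Very rough, assumed ballpark list prices below, just to show the shape of it:

    # Same 50 TB in three S3-style storage classes, varying read volume per month.
    # Prices are rough ballpark figures for illustration only; per-request and
    # minimum-duration fees are ignored to keep it simple.
    classes = {
        # name: (storage $/GB-month, retrieval $/GB read back)
        "STANDARD":    (0.023,  0.000),
        "STANDARD_IA": (0.0125, 0.010),
        "GLACIER":     (0.0036, 0.030),
    }
    stored_gb = 50 * 1024

    for read_gb in (0, 5_000, 50_000):
        row = {name: stored_gb * s + read_gb * r for name, (s, r) in classes.items()}
        print(read_gb, {name: round(cost) for name, cost in row.items()})
    # Once reads dominate, the "cheap" classes stop being cheap.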
> I'm not just talking about direct cost, but even 1vCPU != 1vCPU.
And the vCPUs change at providers too. It used to be that AWS Lightsail was consistently the poorest-performing vCPU in our tests, but yesterday their 4 vCPU instances were beating other cloud providers, despite ostensibly having a slower clock speed according to /proc/cpuinfo.
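Not claiming this is what the parent ran, but even a crude single-core check like this (my own throwaway sketch) will show identically labelled vCPUs behaving differently across providers:

    import hashlib, time

    # Crude single-core check: hash a fixed 1 MB buffer repeatedly and time it.
    # Run the same script on each provider's instance and compare the numbers.
    # It says nothing about memory bandwidth, IO, or CPU steal under real load.
    buf = b"x" * 1_000_000
    rounds = 2_000
    start = time.perf_counter()
    for _ in range(rounds):
        hashlib.sha256(buf).digest()
    elapsed = time.perf_counter() - start
    print(f"~{rounds / elapsed:,.0f} MB/s of SHA-256 on one core")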
Even minuscule differences in the hypervisor make a big difference, as do things like oversubscription, which used to be common in AWS, meaning limited or varying memory bandwidth.
Hell, Brendan Gregg, father of observability, noticed a 10x performance difference between machines of the same type inside AWS, and that's before talking about different providers.
GCP is also quite transparent with their CPU capabilities. But I would really hesitate to say 1x2.2GHz vCPU in GCP is equal to 1x2.2GHz vCPU in AWS
I am familiar with Brendan Gregg's work, but do you have a reference for the point you mentioned of a 10x performance difference for machines of the same type?
Thanks! Looking at page 9 of the presentation, it seems the comment is talking about performance differences at the same price, not of the same type.
Looking at the StrangeLoop presentation, it would be a 5x difference for the same type.
I am skeptical. Need to dig deeper here.
Edit: OK. I was already familiar with the Brendan Gregg presentation you shared. I have just carefully reviewed what you kindly shared, particularly the Reddit discussion, and I think this came out of a misunderstanding.
Brendan Gregg does not mention it in his presentation, and I would be shocked if he omitted mention of such a possibility. Many users in the Reddit discussion tried to investigate such scenarios and could not observe it, other than the +/- variation expected with instances like T2/T3.
So I think somebody heard about variance (possible with T2/T3) -> then variance in performance for the same price (but possibly different types) -> then variance across workloads and over the lifetime of a deployment. And that might explain it.
But none of the resources points to an actual factual statement. I am not saying it's not possible, only that personally I have not seen it, and I have not seen anything other than tweets and rumors. (Those could be true, but not with the evidence seen so far.)
These comparisons are fairly meaningless even for IaaS-only deployments. Instance costs aren't necessarily the biggest cost or the best cost-optimisation path in the cloud.
For example, our S3 costs are higher than our EC2 costs, and we made a productive saving by implementing lifecycle rules and migration to S3 IA (rough sketch at the end of this comment). The biggest EC2 cost reduction we made was through careful analysis and moving some stuff to Lambda, not by changing provider or optimising instance classes.
Whatever happens though, you don't know what the costs are going to be until you get the first bill, regardless of what you estimate.
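For reference, the lifecycle piece is basically one rule on the bucket. The bucket name, prefix and 30-day cutoff below are placeholders, not what we actually run:

    import boto3

    # Minimal lifecycle rule: move objects to STANDARD_IA after 30 days.
    # Bucket name, prefix and day count are placeholders for illustration.
    s3 = boto3.client("s3")
    s3.put_bucket_lifecycle_configuration(
        Bucket="example-bucket",
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "move-cold-data-to-ia",
                    "Filter": {"Prefix": "logs/"},
                    "Status": "Enabled",
                    "Transitions": [{"Days": 30, "StorageClass": "STANDARD_IA"}],
                }
            ]
        },
    )

Keep in mind IA has a per-GB retrieval fee and a 30-day minimum storage charge, so it only pays off for data you rarely read back.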
I fully agree with this: to generalize, it's useless to look at just prices; you really need to take performance and other factors into consideration too.
For example, we rely heavily on storage (with deployments in the 100TB - 1PB range), where we have a lot of churn in data and as such require a lot of throughput. AWS's GP3 EBS volumes offer 1GB/sec throughput each at a really attractive price; there simply isn't any comparable offering at Azure at that price point (only the ultra premium SSD variant, which is more like IO2 EBS).
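In case it helps anyone sizing this: gp3 lets you provision throughput separately from capacity, so a request looks roughly like the below. The AZ, size, IOPS and throughput values are just example figures:

    import boto3

    # gp3 lets you dial IOPS and throughput independently of volume size.
    # All values below are example figures, not a recommendation.
    ec2 = boto3.client("ec2")
    vol = ec2.create_volume(
        AvailabilityZone="eu-west-1a",
        Size=2000,           # GiB
        VolumeType="gp3",
        Iops=16000,          # gp3 currently tops out around 16k IOPS
        Throughput=1000,     # MiB/s, i.e. the ~1 GB/s mentioned above
    )
    print(vol["VolumeId"])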
Does anyone know whether there are any real in-depth studies and comparisons between performance of cloud providers, on the level of, say, the STAC benchmarks?
I call bullshit on this article. Looks like a sales page for their stuff with enough data to get them ranked high with SEO.
Low effort, not insightful.
The first issue I took with this is the mix of processor types. I don't think it's ever fair to compare Graviton with x86 instances, mainly because it's so app-dependent.
What's needed is a suite of open source applications/processes ported to each cloud provider to directly compare on cost ... the kicker is to exhibit best practices.
- cloud is expensive with large resource needs, so bare metal is better
- cloud is hard to use for few deployment needs, so bare metal is better
So, the best case to use cloud is when you require lots of deployments but relatively small resource needs for each. For example, testing a piece of software on many platforms.