Cloud Pricing Comparison: AWS vs. Azure vs. Google Cloud Platform in 2022 (cast.ai)
67 points by entirelylin on May 6, 2022 | 32 comments



I see no mention of bandwidth pricing.

Some time ago, I evaluated the pricing for someone who was thinking of moving a web server from a traditional private server to AWS. It seemed that the monthly cost would be lower, until I looked at the bandwidth cost. The monthly price of that traditional private server, which included a fixed bandwidth limit, ended up being lower than what it would cost just for the bandwidth on AWS.
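
To put rough numbers on it (a back-of-the-envelope sketch; the per-GB rate and traffic volume below are made-up illustrative figures, not the actual quote I was comparing):

    # Hypothetical numbers for illustration only.
    vps_monthly = 40.0           # flat-rate private server, bandwidth included
    aws_instance_monthly = 25.0  # comparable instance, on-demand

    egress_gb = 2000             # monthly outbound traffic to the internet
    egress_rate = 0.09           # roughly $/GB for the first tiers of AWS internet egress

    aws_total = aws_instance_monthly + egress_gb * egress_rate
    print(f"VPS: ${vps_monthly:.2f}, AWS: ${aws_total:.2f}")
    # VPS: $40.00, AWS: $205.00 -- the egress alone dwarfs the instance cost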


In some ways AWS locks in its customers with its egress pricing. Moving away can be so expensive because of egress charges that it's economically not viable. I don't think that's a mistake. I have a feeling that in a number of years this could be reviewed by governments.


Oh yes, egress pricing is horrible. That's why AWS/S3 looks nice to back up to, but that first restore costs big time.

One killer no one seems to notice is the bandwidth between availability zones. When you have a proper best-practice cross-AZ deployment, it can be rather expensive. But still cheaper than switches and humans and cages and data centre contracts.
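
As a rough sketch of how that adds up (assuming the commonly quoted ~$0.01/GB charged on each side of an inter-AZ transfer, i.e. ~$0.02/GB effective; check current regional pricing):

    # Illustrative inter-AZ transfer cost estimate.
    cross_az_gb_per_day = 500   # e.g. replication plus service-to-service chatter
    effective_rate = 0.02       # $/GB, both directions combined (assumed)

    monthly_cost = cross_az_gb_per_day * 30 * effective_rate
    print(f"~${monthly_cost:.0f}/month just for traffic between AZs")  # ~$300/month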


There are so many variables. I was looking into moving my stuff into the cloud to throw out some physical hard drives. I haven't committed to anything yet, but I was surprised to see Dropbox is somehow cheaper than AWS. Isn't Dropbox just an entire extra company that makes a layer slapped on top of AWS?


That used to be the case, but they set up their own server farms a while ago. It was a big project, but it's what allows them to undercut AWS now.

https://www.computerworld.com/article/3044261/dropbox-quits-...


I'm really surprised you found a VPS that was cheaper before AWS bandwidth costs kicked in; usually that's only possible with serverless.


Lift and shift is never a good thing unless you have plans to decrease your spend in areas like egress.


Cloud price comparisons are often apples to oranges.

Due in part to an extremely large number of hidden factors.

I'm not just talking about direct cost, but even 1vCPU != 1vCPU.

When discussing S3/GCS storage, for instance, you would need to understand the difference between each of the storage classes and how they're charged for access.
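
A quick sketch of why the storage class matters (the rates below are ballpark illustrative numbers in the spirit of S3 Standard vs Standard-IA, not authoritative pricing):

    def monthly_cost(gb_stored, gb_retrieved, storage_rate, retrieval_rate):
        return gb_stored * storage_rate + gb_retrieved * retrieval_rate

    standard = monthly_cost(10_000, 5_000, storage_rate=0.023, retrieval_rate=0.0)
    ia = monthly_cost(10_000, 5_000, storage_rate=0.0125, retrieval_rate=0.01)
    print(standard, ia)  # 230.0 vs 175.0 -- IA wins here, but a heavier
                         # access pattern flips the result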

This would look much more like a series of a million graphs than a table.

On the whole, I'm extremely unimpressed by this article. Maybe one day when I'm bored enough I'll take a shot at a reasonable cost comparison of some actual infrastructure.

Maybe even throw in some TCO calculation for administrative duties, such as understanding your billing and ease of debugging and setup, if I really hate myself that day.


> I'm not just talking about direct cost, but even 1vCPU != 1vCPU.

And the vCPUs change at providers too. It used to be that AWS Lightsail was consistently the poorest-performing vCPU in our tests, but yesterday their 4-vCPU instances were beating other cloud providers, despite ostensibly having a slower clock speed according to /proc/cpuinfo.


Generally for EC2 AWS is transparent about which CPU model and clock speed they are using.

For other services, not so much. Like, is 1 CPU / Memory in Fargate the same as in App Runner and Lambda?


Even minuscule differences in the hypervisor make a big difference, as do things like oversubscription, which used to be common in AWS, meaning limited or varying memory bandwidth.

Hell, Brendan Gregg, father of observability, noticed a 10x performance difference between machines of the same type inside AWS, and that's before talking about different providers.

GCP is also quite transparent about its CPU capabilities. But I would really hesitate to say 1x 2.2GHz vCPU in GCP is equal to 1x 2.2GHz vCPU in AWS.


I am familiar with Brendan Gregg's work, but do you have a reference for the point you mentioned about a 10x performance difference for machines of the same type?


It was actually 5x, my bad, but here are the references:

Original info being mentioned as a rumour (2016): https://www.youtube.com/watch?v=pYbgcDfM2Ts&t=1631s

Slides for confirmation by Gregg (2017): https://www.brendangregg.com/Slides/AWSreInvent2017_performa...

Talk (2017): https://www.youtube.com/watch?v=89fYOo1V2pA

More discussion: https://www.reddit.com/r/aws/comments/547xbx/netflix_found_5...

Original tweet has been deleted, but I have a screenshot here slide 9: https://sh.drk.sc/~dijit/devfest2019-msv.pdf


Thanks! Looking at page 9 of the presentation, it seems the comment is talking about performance differences at the same price, not for the same type.

Looking at the StrangeLoop presentation, though, it would be a 5x difference for the same type.

I am skeptical. Need to dig deeper here.

Edit: OK. I was already familiar with the Brendan Gregg presentation you shared. I have just carefully reviewed what you kindly shared, particularly the Reddit discussion, and I think this came out of a misunderstanding.

Brendan Gregg does not mention it in his presentation, and I would be shocked if he had omitted mention of such a possibility. Many users in the Reddit discussion tried to investigate such a scenario and could not observe it, other than the +/- variation expected with instances like T2/T3.

So I think somebody heard about variance (possible with T2/T3) -> then also variance in performance for the same price (but possibly different types) -> then variance with workloads and over the lifetime of a deployment. And that might explain it.

But none of the resources points to an actual factual statement. I am not saying it's not possible, only that personally I have not seen it, and I have not seen anything other than tweets and rumors. (It could be true, but not with the evidence seen so far.)


These comparisons are fairly meaningless even for IaaS-only deployments. Instance costs aren't necessarily the biggest cost or the best cost optimisation path in the cloud.

For example, our S3 costs are higher than our EC2 costs, and we made a productive saving by implementing lifecycle rules and migration to S3 IA. The biggest EC2 cost reduction we made was through careful analysis and moving some stuff to Lambda, not by changing provider or optimising instance classes.
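
For anyone curious, the kind of lifecycle rule involved looks roughly like this (a minimal sketch, not our actual configuration; the bucket name is hypothetical):

    import boto3

    s3 = boto3.client("s3")
    s3.put_bucket_lifecycle_configuration(
        Bucket="example-bucket",  # hypothetical bucket
        LifecycleConfiguration={
            "Rules": [{
                "ID": "move-cold-data-to-ia",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply to all objects
                # transition objects to Standard-IA after 30 days
                "Transitions": [{"Days": 30, "StorageClass": "STANDARD_IA"}],
            }]
        },
    )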

Whatever happens though, you don't know what the costs are going to be until you get the first bill, regardless of what you estimate.


I fully agree with this: to generalize, it’s useless to look at just prices, you really need to take performance and other parts into consideration too.

For example, we rely heavily on storage (with deployments in the 100TB - 1PB range), where we have a lot of churn in data and as such require a lot of throughput. AWS's gp3 EBS volumes offer 1 GB/s throughput each at a really attractive price; there simply isn't any comparable offering from Azure at that price point (only the ultra premium SSD variant, which is more like io2 EBS).
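
For reference, gp3 throughput is provisioned explicitly on top of the 125 MB/s baseline; here's a sketch with boto3 (values are illustrative, check current gp3 limits and pricing):

    import boto3

    ec2 = boto3.client("ec2")
    ec2.create_volume(
        AvailabilityZone="us-east-1a",  # hypothetical AZ
        Size=1000,                      # GiB
        VolumeType="gp3",
        Iops=16000,                     # gp3 currently tops out at 16,000 IOPS
        Throughput=1000,                # MiB/s, gp3 currently tops out at 1,000
    )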

Does anyone know whether there are any real in-depth studies and comparisons between performance of cloud providers, on the level of, say, the STAC benchmarks?


Azure Premium P80 disks offer 900 MB/s provisioned throughput plus higher bursts. They're 32 TiB in size though, so not a fit for use cases where you just want small but fast disks. https://docs.microsoft.com/en-us/azure/virtual-machines/disk...


No comparison of bandwidth? Often the most expensive element...


After switching to another tab, the tab title started changing to and from "Message from CAST AI".

This annoyed me, so I closed the article. How's that for engagement?


I didn't make it that far, as trying to dismiss the cookie overlay triggered the chat bot instead ;(



This doesn't take into account GCP's sustained use discount, which makes the instances cheaper than AWS's.
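
As a rough illustration of how the sustained use mechanism works (the tiers below are the classic N1-style ones; other machine families use different percentages, so treat this as a sketch rather than a price sheet):

    # Each successive quarter of the month is billed at a lower fraction of base.
    tiers = [1.00, 0.80, 0.60, 0.40]  # fraction of base price per quarter-month
    base_hourly = 0.10                # hypothetical on-demand $/hour
    hours_per_quarter = 730 / 4

    full_month = sum(base_hourly * t * hours_per_quarter for t in tiers)
    print(full_month)  # ~51.1 vs ~73.0 undiscounted, i.e. roughly 30% off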


Standard marketing page for improving SEO.

Short answer: it depends


I'm a bit confused: if you look at the on-demand price for compute optimized, it states that Azure is $0.0846.

But if you do a 1-year commitment it's $0.10...

I think they got the data mixed up here.


Maybe the price is guaranteed not to increase past $0.10? But over the long term, computing costs should fall; otherwise something is seriously wrong.


I call bullshit on this article. Looks like a sales page for their stuff with enough data to get them ranked high with SEO. Low effort, not insightful.


If anyone is interested in a GPU-specific comparison across these three and other providers, I created this page for that: https://cloud-gpus.com

Happy to get any feedback on this!


The first issue I took with this is just the mix of processor types. I don't think it's ever fair to compare Graviton with x86 instances, mainly because it's so app dependent.


What's needed is a suite of open-source applications/processes ported to each cloud provider to directly compare on cost ... the kicker is to exhibit best practices.


.. or go bare metal?


.. we've reached the point of needing AI to optimize the price; maybe bare metal is not a bad idea :)


Indeed.

IMO:

- cloud is expensive for large resource needs, so bare metal is better

- cloud is hard to use for just a few deployments, so bare metal is better

So the best case for using the cloud is when you require lots of deployments but relatively small resources for each. For example, testing a piece of software on many platforms.



