[flagged] Cloud pricing comparison: AWS vs. Azure vs. Google Cloud in 2021 (cast.ai)
31 points by BlackPlot 3 months ago | 23 comments



> Compute is what makes your cloud bill so high.

This is flat-out untrue for 90% of non-AI workloads, on both the startup and enterprise side, and it makes me question the entire article.

With compute, you generally have a lot of wiggle room and levers you can pull to optimize your pricing. Even with storage, you can do things like bulk loading, caching, etc. to reduce transaction counts.

But with traffic (not just egress, but also inter-VPC/inter-region traffic)? You're just kinda stuck! Or at least until dramatically better compression comes along that doesn't spike things on the compute/memory side.

In general, compared to other hosting providers, these three (AWS, Azure, GCP) have insanely expensive traffic costs -- an order of magnitude higher than their smaller competition! To make matters worse, egress negotiations are taken off the table before you even get started when discussing enterprise discounts.

Be careful when getting started with Activate and similar startup credits... that money disappears insanely fast.


Their pricing for GCP is way off. If you're looking for an instance with 32 GiB of RAM, you don't have to buy one with 40 CPUs. There are other options, such as:

> n2-highmem-4 4 32GB $0.262028

That turns out to be the cheapest option, and it's not even the preemptible rate, which is $0.06356.
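
Quick back-of-the-envelope on those two rates (plain Python, nothing assumed beyond the numbers quoted above):

    on_demand   = 0.262028  # n2-highmem-4, on-demand, $/hr
    preemptible = 0.06356   # n2-highmem-4, preemptible, $/hr
    print(f"preemptible discount: {1 - preemptible / on_demand:.0%}")  # ~76%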


In the Cloud pricing based on On-Demand rates table, the AWS and Azure memory-optimized machines have 32 GB of RAM, but the Google Cloud machine has 961(!) GB of RAM, with a giant price tag to match.

I'm only partway through reading the blog, but this and other errors I've spotted so far make me suspect the accuracy of the blog's analysis.


I am shocked to learn that I can get a 32GB RAM VM from AWS for cheaper than a 961GB RAM VM from GCP.

It looks like the author just went to https://cloud.google.com/compute/vm-instance-pricing#memory-... under the assumption that "memory-optimized" was what they needed, not realizing that "memory-optimized" is for truly ridiculous workloads with terabytes of RAM.

For "compute optimized" they're comparing ARM Graviton2 ("price performance optimized") AWS instances (I don't know where; I can't find any that cost >$0.2/hr) in to 3.8GHz Intel Cascade Lake ("highest performance per core") GCP instances... in LA.


> n2-highmem-4 4 32GB $0.262028

For Google Cloud, the N2 machine family is actually quite expensive, and I tend to avoid them if possible. Rather, I use the E2 series for smaller machines, and the N2D series if I need a machine bigger than E2's size limits.

For us-west2, an e2-highmem-4 instance has 4 vCPUs and 32 GB of RAM, but costs only $0.217148 per hour.
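
For what it's worth, the gap between the two hourly rates quoted in this subthread (again, just arithmetic):

    n2_highmem_4 = 0.262028  # $/hr, on-demand
    e2_highmem_4 = 0.217148  # $/hr, on-demand, us-west2
    print(f"e2 is {1 - e2_highmem_4 / n2_highmem_4:.0%} cheaper per hour")  # ~17%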


Awesome catch!


Or just use a dynamic instance size. There's no need to pay for a preconfigured instance in GCP.

and this is from the so-called cloud experts


Do. Not. Use. GCP. This product will be sunset in a few years. Google does not care about this product. Their UI is totally broken, their offerings are years behind, any issue you have is unsearchable (since no one uses it). Their 22 year old "representatives" only try to lock you in and ask for "network diagrams" - no they do not care about your issues or read your "network diagram". They do not care about you as a customer. Disclaimer: I went from AWS million+ spend to GCP million+ spend. Do not use GCP. Bail leave flee - save yourself.

edit: lol. downvotes. you suck google.


The only data point I can give is that moving from AWS to GCP reduced our bill by two-thirds. That isn't a 1:1 comparison though, as we did a re-platforming at the same time.

Moving from AWS ECS to GCP Kubernetes was probably the biggest money saver; it reduced the amount of compute needed enormously. Being able to safely use preemptible VMs in Kubernetes also leads to huge savings.

I am sure that if we used Kubernetes in AWS it would have had a similar cost reduction.


I'm a big fan of preemptible instances on GCP. They're much easier to set up than AWS spot instances: all you need to do is set `preemptible = true` and bam, you've saved 80% on compute costs. The consistent pricing is also very convenient.

At my old job we ran our CI infra on preemptible instances. It cost basically nothing to run, and because the instances were cycled frequently due to scaling, we hardly ever had jobs fail because of an instance being terminated mid-run (not that it was a problem when they did; they'd just automatically get re-run).
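
For anyone who hasn't used this: the parent's `preemptible = true` looks like Terraform, but the opt-in is the same flag whatever tooling you use. A rough sketch with the google-cloud-compute Python client (field names follow the Compute Engine API; the project, zone, machine type, and image here are illustrative, not from the thread):

    from google.cloud import compute_v1

    def create_preemptible_vm(project: str, zone: str, name: str) -> None:
        # Bare-minimum VM definition: one boot disk and the default network.
        boot_disk = compute_v1.AttachedDisk(
            boot=True,
            auto_delete=True,
            initialize_params=compute_v1.AttachedDiskInitializeParams(
                source_image="projects/debian-cloud/global/images/family/debian-11",
            ),
        )
        instance = compute_v1.Instance(
            name=name,
            machine_type=f"zones/{zone}/machineTypes/e2-standard-4",
            disks=[boot_disk],
            network_interfaces=[compute_v1.NetworkInterface(network="global/networks/default")],
            # This scheduling block is the entire preemptible opt-in.
            scheduling=compute_v1.Scheduling(
                preemptible=True,
                automatic_restart=False,       # preemptible VMs can't auto-restart
                on_host_maintenance="TERMINATE",
            ),
        )
        compute_v1.InstancesClient().insert(
            project=project, zone=zone, instance_resource=instance
        )

In an autoscaled setup like the parent's CI fleet, the same scheduling block would live on the instance template rather than on individual VMs.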


Just pointing out that you don't need to use Kubernetes to avail of spot/preemptible savings. Our regions are configured to have a Load Balancer serve several Instance Groups, one of which runs preemptibles.


The article is incorrect about memory: Google Cloud allows you to create custom configurations, e.g. 1 CPU and 100 GB of memory. That alone lets us cut costs by 10%, simply because the predefined instances don't match our memory and CPU footprint.
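
For anyone wondering what that looks like in practice: a custom shape is just a different machine-type string. A hedged sketch (format as I understand it from GCP's custom machine type naming, with memory in MB; a 1-vCPU / 100 GB shape needs the extended-memory "-ext" suffix and is subject to the machine family's limits):

    vcpus, memory_gb = 1, 100
    machine_type = f"zones/us-west2-a/machineTypes/custom-{vcpus}-{memory_gb * 1024}-ext"
    # -> "zones/us-west2-a/machineTypes/custom-1-102400-ext"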


A snippet from the blog post:

> This chart shows CPU operation in AWS (t2.2xlarge with 8 virtual cores) at different times after several idle CPU periods. Would you expect such unpredictable CPU behavior within a single cloud provider?

> [image showing extremely bursty CPU performance]

In AWS EC2, the T-series are burstable instance types that have a low baseline performance and use a credit system to offer bursts above that baseline. In exchange for this credit limit, T-series instances are less expensive than instances with consistent CPU performance.

They are marketed for use cases where applications often sit idle, and they work well in that case. Pointing out erratic performance in a benchmark as a problem (a problem with all of EC2, at that!) is incorrect -- the machine is working exactly as it's advertised to work.

See here for more info: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/burstabl...

Edit: I see later on in the article they talk about burstable instances and their performance impact (so why make such a novice mistake!?). However, Google Cloud's E2 instance series is price-competitive with AWS's burstable T-series and doesn't have the complexities and pitfalls associated with bursting. That wasn't mentioned, despite this article literally being a comparison of what the platforms have to offer in terms of compute.


The title for this feels really broad.

Cloud providers offer so many products with so many nuances that I'd posit it's practically impossible to do a satisfying pricing comparison that's even halfway comprehensive. That's certainly not the author's fault, but the title makes the piece sound far more comprehensive than it actually is, and as a result it comes across as disingenuous.


FYI to the blog author, the image showing the machine CPU and RAM configuration incorrectly uses Azure's names for the Google Cloud VMs.


> Region: US West – Los Angeles (US West in Azure, US West 2 in Google Cloud)

Also, while I'm not familiar with AWS or Azure's pricing, Google Cloud's us-west2 region is relatively expensive for a US region, with compute costing about 20% more than us-west1.


> Let’s take a look at the biggest cost driver: compute

> Compute is what makes your cloud bill so high. This isn’t to say that other services don’t contribute to it at all. Storage can get quite expensive and moving data around might result in high egress costs.

I don't know what to say here. Compute is probably costly for cast.ai because they use a lot of it, but for many applications storage and egress cost as much as, if not more than, compute. It all comes down to the application.

Compute is also the easiest to optimize in terms of cost, and egress the hardest.


At my last job, memory ended up being the bigger cost factor.

We were running t3.xlarge (4 CPU, 16 GB) instances with the CPU hardly above idle, but constantly running out of memory.


We should probably compare the customer service between them as well. The AWS support, TAM, and engineering teams provide an almost white-glove experience for troubleshooting, feature requests, etc. (Might be biased since we spend big.)

It’s interesting to compare the retailer vs. technology DNA on this front.


While working for a previous employer (which was in the cloud computing sector, like Cast), our relationship with AWS was the best of the three, but still very lacking given our position in the market.

A support ticket for a GCP API bug was ignored/answered by complete robots.

Azure promised to work with us when they launched spot instances, but the end result was disappointing and the demo was almost laughable.

Our cloud bill was not that high, so we didn't get special treatment, but you should still have better communication with companies who help people use your products.


The tables in the article won’t resize for mobile browsers. Or at least not in Safari on iOS.


Anyone else found relatively reasonable instance pricing on cloud, but then calculated that the bandwidth would bankrupt the company?


Is GCP even profitable? How long until it joins the 5 previous chat apps in the Google Graveyard?

https://killedbygoogle.com/



