This is flat-out not true for 90% of non-AI workloads, on both the startup and the enterprise side, and it makes me question the entire article.
With compute, you generally have a lot of wiggle room and levers you can pull to optimize your pricing. Even with storage, you can do things like bulk loading, caching, etc. to reduce transaction counts.
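To make the storage point concrete, here's a minimal sketch of the batching idea, assuming a `put_object`-style call that bills per request (the class name, hook, and batch size are all hypothetical, purely for illustration):

```python
# Minimal sketch of batching writes to cut per-request storage charges.
# `put_object` is a stand-in for any per-request-billed API call; the
# batch size is illustrative, not a tuned number.
class BatchingWriter:
    def __init__(self, put_object, batch_size=100):
        self.put_object = put_object  # one billed request per call
        self.batch_size = batch_size
        self.buffer = []

    def write(self, record: bytes):
        self.buffer.append(record)
        if len(self.buffer) >= self.batch_size:
            self.flush()

    def flush(self):
        if self.buffer:
            # One billed request for N records instead of N requests.
            self.put_object(b"\n".join(self.buffer))
            self.buffer = []
```

A hundred records become one billed write instead of a hundred; caching does the same thing on the read side.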
But with traffic (not just egress, but also inter-VPC/inter-region traffic)? You're just kinda stuck! Or at least until dramatically better compression comes along that doesn't spike things on the compute/memory side.
In general, compared to other hosting providers, these three (AWS, Azure, GCP) have insanely expensive traffic costs -- an order of magnitude higher than their smaller competition! To make matters worse, egress is taken off the table before you even get started when negotiating enterprise discounts.
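As a rough back-of-envelope illustration of that gap (the rates below are assumptions for the sake of the math, not quoted list prices -- big-three internet egress is commonly somewhere around $0.08-0.12/GB at list, while smaller hosts are often around $0.01/GB or bundle traffic in):

```python
# Illustrative egress rates, assumed for this example -- check current
# price sheets before relying on them.
egress_gb = 50_000          # ~50 TB/month out to the internet
big_three_rate = 0.09       # $/GB, assumed big-three list price
small_host_rate = 0.01      # $/GB, assumed smaller-provider price

print(f"big three:  ${egress_gb * big_three_rate:>8,.0f}/mo")   # $4,500
print(f"small host: ${egress_gb * small_host_rate:>8,.0f}/mo")  # $500
```

Same bytes, roughly an order of magnitude apart, and unlike compute there's no cheaper instance family to switch to.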
Be careful when getting started with Activate and similar startup credit programs... that money disappears insanely fast.
> n2-highmem-4 4 32GB $0.262028
The article presents this as the cheapest price, and that's not even the preemptible rate, which is $0.06356/hr.
I'm only partway through reading the blog, but this and other errors I've spotted so far make me suspect the accuracy of the blog's analysis.
It looks like the author just went to https://cloud.google.com/compute/vm-instance-pricing#memory-... under the assumption that "memory-optimized" was what they needed, not realizing that "memory-optimized" is for truly ridiculous workloads with terabytes of RAM.
For "compute optimized" they're comparing ARM Graviton2 ("price performance optimized") AWS instances (I don't know where; I can't find any that cost >$0.2/hr) in to 3.8GHz Intel Cascade Lake ("highest performance per core") GCP instances... in LA.
For Google Cloud, the N2 machine family is actually quite expensive, and I tend to avoid them if possible. Rather, I use the E2 series for smaller machines, and the N2D series if I need a machine bigger than E2's size limits.
For us-west2, an e2-highmem-4 instance has 4 vCPUs and 32 GB of RAM, but costs only $0.217148 per hour.
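Using just the two quoted us-west2 hourly rates, the gap per instance per month works out like this:

```python
# Monthly cost at the two quoted us-west2 hourly rates (730 hrs/month).
hours = 730
n2 = 0.262028 * hours  # n2-highmem-4, the article's pick
e2 = 0.217148 * hours  # e2-highmem-4, same 4 vCPU / 32 GB shape
print(f"n2-highmem-4: ${n2:.2f}/mo")               # ~$191.28
print(f"e2-highmem-4: ${e2:.2f}/mo")               # ~$158.52
print(f"savings: {1 - e2 / n2:.0%} per instance")  # ~17%
```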
and this is from the so-called cloud experts
edit: lol. downvotes. you suck google.
Moving from AWS ECS to GCP Kubernetes was probably the biggest money saver. It reduced the amount of compute needed by a huge amount. Being able to safely use preemptible VMs in Kubernetes also leads to huge savings.
I am sure that if we used Kubernetes in AWS it would have had a similar cost reduction.
At my old job we ran our CI infra on preemptible instances. It cost basically nothing to run, and because the instances were cycled frequently due to scaling, we hardly ever had jobs fail because of an instance being terminated mid-run (not that it was a problem when they did; they'd just automatically get re-run).
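For anyone curious how the "safe to preempt" part works on GCE: the metadata server exposes a preemption flag an agent can watch, so a job can be handed back to the queue before the VM disappears. A minimal sketch (the endpoint and header are GCE's documented ones; `requeue_current_job` is a hypothetical hook into your CI system):

```python
# Sketch of watching GCE's preemption notice so a CI agent can requeue
# its job instead of dying mid-run. The metadata endpoint and header are
# GCE's documented ones; `requeue_current_job` is a hypothetical hook.
import time
import requests

PREEMPTED_URL = ("http://metadata.google.internal/"
                 "computeMetadata/v1/instance/preempted")

def watch_for_preemption(requeue_current_job, poll_seconds=5):
    while True:
        resp = requests.get(PREEMPTED_URL,
                            headers={"Metadata-Flavor": "Google"})
        if resp.text.strip() == "TRUE":
            requeue_current_job()  # hand the job back to the queue
            return
        time.sleep(poll_seconds)
```

GCE gives roughly 30 seconds of notice before termination, which is usually plenty of time to requeue a CI job.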
> This chart shows CPU operation in AWS (t2.2xlarge with 8 virtual cores) at different times after several idle CPU periods. Would you expect such unpredictable CPU behavior within a single cloud provider?
> [image showing extremely bursty CPU performance]
In AWS EC2, the T-series are burstable instance types that have a low baseline performance and use a credit system to offer bursts above that baseline. In exchange for this credit limit, T-series instances are less expensive than instances with consistent CPU performance.
They are marketed for use cases where applications often sit idle, and they work well in that case. Pointing out erratic performance in a benchmark as a problem (a problem with all of EC2, at that!) is incorrect -- the machine is working exactly as it's advertised to work.
See here for more info: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/burstabl...
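For intuition, here's a toy model of those credit mechanics, assuming a 40% baseline purely for illustration (real baselines vary by instance size, and this ignores the accrual cap and launch credits):

```python
# Toy model of T-series CPU credits: 1 credit = 1 vCPU-minute at 100%.
# The 40% baseline is illustrative, not t2.2xlarge's real figure.
def simulate(hourly_demand, vcpus=8, baseline=0.40, credits=0.0):
    for demand in hourly_demand:
        burst = max(0.0, demand - baseline)              # load above baseline
        affordable = min(burst, credits / (vcpus * 60))  # what the bank covers
        delivered = min(demand, baseline + affordable)
        credits += (baseline - delivered) * vcpus * 60   # accrue or drain
        print(f"demand {demand:.0%} -> delivered {delivered:.0%}, "
              f"balance {credits:6.1f} credits")

# Idle hours bank credits; sustained 90% load drains the bank, after
# which delivered performance falls back toward the baseline:
simulate([0.05, 0.05, 0.90, 0.90, 0.90, 0.90])
```

That sawtooth -- full speed while the bank holds, then a drop toward baseline -- is exactly the benchmark behavior in the chart. Not a bug, just the credit bank emptying.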
Edit: I see later on in the article they talk about burstable instances and their performance impact (so why make such a novice mistake!?). However, Google Cloud's E2 instance series is price-competitive with AWS's burstable T-series, and doesn't have the complexities and pitfalls associated with bursting. That wasn't mentioned, despite this article literally being a comparison of what the platforms have to offer in terms of compute.
Cloud providers offer so many products with so many nuances that I'd posit it's practically impossible to do a satisfying pricing comparison that's even halfway comprehensive. That's not the fault of the author, certainly, but the title promises something far more comprehensive than the article delivers, and as a result it comes across as disingenuous.
Also, while I'm not familiar with AWS or Azure's pricing, Google Cloud's us-west2 region is relatively expensive for a US region, with compute costing about 20% more than us-west1.
I don't know what to say here. Compute is probably the big cost for cast.ai because they use a lot of it, but for many applications storage and egress cost as much as, if not more than, compute. It's all down to the application.
Compute is also the easiest to optimize in terms of cost, and egress the hardest.
We were running t3.xlarge (4 CPU, 16 GB) instances with the CPU hardly above idle, but constantly running out of memory.
It's interesting to compare the retailer vs. technology DNA on this front.
A support ticket for a GCP API bug was either ignored or answered by complete robots.
Azure promised to work with us when they launched spot instances, but the end result was disappointing and the demo was almost laughable.
Our cloud bill was not that high, so we didn't get special treatment, but you should have better communication with the companies that help people use your products.