Google's approach to pricing is, "do it as efficiently and quickly as possible, and we'll make sure that's the cheapest option".
AWS's approach is more, "help us do capacity planning and we'll let you get a price break for it."
Google applies bulk discounts after the fact; AWS makes you ask for them ahead of time.
Thanks jedberg. You've definitely captured how we feel about it.
I will say that while our sustained use discount (SUD) works as you describe, our committed use discounts (CUD) are more similar to the AWS Flexible RIs. For some customers, even if it's obviously cheaper to use on-demand and just get the SUD benefit, they prefer the predictable "let me be sure I pay X" model.
As a note, even though CUDs and Flexible RIs are similar, I prefer ours (obviously). It's just a pool of cores and RAM in a region that you commit to. Applying that in arrears is easy enough that no ETL is necessary, and as you'd imagine, we want to automate this too. We also don't play games with upfront payment or not, but most enterprises don't care.
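For readers unfamiliar with how the SUD side of this works: GCP's sustained use discount historically billed each successive quarter of a month's usage at a lower multiplier of the base rate. A minimal sketch, using the published tier multipliers (100%, 80%, 60%, 40%) purely as an illustration, not a billing tool:

```python
# Sketch of GCP's sustained use discount (SUD) tiers as historically
# documented: each successive quarter of the month's usage is billed at
# a lower fraction of the base rate. Illustrative only.

TIERS = [(0.25, 1.00), (0.25, 0.80), (0.25, 0.60), (0.25, 0.40)]

def sud_multiplier(usage_fraction):
    """Effective price multiplier for running a VM for `usage_fraction`
    (0.0 to 1.0) of the month."""
    billed = 0.0
    remaining = usage_fraction
    for width, rate in TIERS:
        chunk = min(remaining, width)
        billed += chunk * rate
        remaining -= chunk
        if remaining <= 0:
            break
    return billed / usage_fraction if usage_fraction else 0.0

# A VM running the full month pays 70% of the on-demand rate (30% off):
print(round(sud_multiplier(1.0), 2))  # 0.7
# Half a month earns a smaller effective discount:
print(round(sud_multiplier(0.5), 2))  # 0.9
```

The point of the incremental tiers is that the discount requires no commitment at all, which is exactly the "do it efficiently and we'll make it cheap" model described above.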
That said, GCP's committed use discounts still seem to be a better deal than convertible RIs.
Looking at a run-of-the-mill 64 GB RAM standard instance (hourly rates):
AWS 1yr all upfront: $0.458
GCP committed use: $0.47878
AWS 1yr no upfront: $0.491
GCP sustained use: $0.532
AWS 1yr no upfront convertible: $0.564
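To make the spread concrete, it helps to annualize those hourly rates. A quick sketch (assuming the figures above are per-hour and the instance shapes are roughly comparable across clouds, which is never exact):

```python
# Annualize the per-hour prices from the comparison above
# (8,760 hours in a non-leap year).

HOURS_PER_YEAR = 24 * 365

prices = {
    "AWS 1yr all upfront":            0.458,
    "GCP committed use":              0.47878,
    "AWS 1yr no upfront":             0.491,
    "GCP sustained use":              0.532,
    "AWS 1yr no upfront convertible": 0.564,
}

annual = {name: hourly * HOURS_PER_YEAR for name, hourly in prices.items()}

for name in sorted(annual, key=annual.get):
    print(f"{name:32s} ${annual[name]:8,.0f}/yr")
```

The gap between the cheapest and most expensive option works out to roughly $900/year per instance, which is why this kind of comparison matters at fleet scale.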
Every system has some degree of predictability or volatility, and volatility entails cost that will effectively be passed on to the buyer one way or another.
Google can be smart about it and try to do some prediction, but ultimately the customer will in many cases know much more about what their usage patterns will be. Sharing that information reduces volatility, and therefore cost ... which gets passed on to the buyer.
If you say "I want to rent 10 cars for a month" vs. "I want the ability to rent anywhere from 2 to 20 cars for a month, not sure how many" - what is the intrinsic cost going to be for the provider? Even with demand smoothing over a large client base ... the increased volatility is cost.
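A back-of-the-envelope version of the car-rental analogy: the provider has to provision for the peak of what you *might* use, so the flexible commitment carries idle-capacity cost that the firm one doesn't. The $30/day cost figure is made up for illustration:

```python
# The provider must provision capacity for the customer's peak,
# whether or not it ends up rented. Cost figure is hypothetical.

COST_PER_CAR_DAY = 30  # made-up provider cost per car per day

def provider_cost(peak_cars, days=30):
    # Capacity has to exist for the whole period regardless of usage.
    return peak_cars * COST_PER_CAR_DAY * days

firm     = provider_cost(peak_cars=10)  # "exactly 10 cars"
flexible = provider_cost(peak_cars=20)  # "anywhere from 2 to 20 cars"

print(firm, flexible)  # 9000 18000
```

Pooling demand across many customers softens this, as the comment notes, but it doesn't eliminate it, and the residual cost shows up in the flexible price.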
I don't think this is related to gambling.
In many ways this is really similar to financial planning and I think the language of this article will make way more sense to financial types than technical types. I might even go so far as saying this is really a problem for financial ops, and not devops.
Either way, there is way more $$$ consulting for AWS than for GCP because of the expertise required to optimize costs.
For example: when AWS encounters non-catastrophic issues with their hypervisor, you are on the hook for moving the instances away (meaning stop-start, or termination and relaunch for instance store). Depending on the instance type, this can cause service disruption.
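The "on the hook" part surfaces as scheduled events in the EC2 API. A minimal sketch of spotting affected instances; in practice the data would come from EC2's DescribeInstanceStatus call (e.g. boto3's `ec2.describe_instance_status()`), and a canned response in the same shape stands in here so the example is self-contained:

```python
# Find instances with pending maintenance events that require action.
# sample_response mimics the shape of EC2's DescribeInstanceStatus
# output; the instance IDs and event text are made up.

sample_response = {
    "InstanceStatuses": [
        {
            "InstanceId": "i-0abc123",
            "Events": [
                {"Code": "instance-stop",
                 "Description": "The instance is running on degraded hardware"},
            ],
        },
        {"InstanceId": "i-0def456", "Events": []},
    ]
}

def instances_needing_action(response):
    """Return IDs of instances that have scheduled events pending."""
    return [status["InstanceId"]
            for status in response["InstanceStatuses"]
            if status.get("Events")]

print(instances_needing_action(sample_response))  # ['i-0abc123']
```

You end up writing (or buying) tooling like this to stop-start or relaunch the flagged instances yourself, which is exactly the burden the live-migration model removes.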
GCP will transparently migrate the VM for you while it is running. You never see it, and your customers don't either.
Same for networking: if you use their "premium" network, you can have anycast IPs to the closest POP, which will route traffic on Google's Network, not the open internet. AWS does not have anything close to this, the closest is multi-region VPC peering, without the fancy routing.
AWS offers more features though, which could be important if you require them.
A number of global outages in various network-related APIs or features on GCP in the past year says otherwise. I don't think there has been any global or even multi-region outage at AWS in many years; the biggest recent one, the S3 outage in a single region (us-east-1), impacted the internet more than any of the (~3) global outages at GCP, and more than some other recent ones at Azure.
> For example: when AWS encounters non-catastrophic issues with their hypervisor, you are on the hook for moving the instances away (meaning stop-start, or termination and relaunch for instance store).
Apparently this is only for older instance types.
> GCP will transparently migrate the VM while it is running for you. You never see it, you customers don't either.
If you never see it from GCP, how do you know AWS doesn't do it for more recent instance types?
> Same for networking: if you use their "premium" network, you can have anycast IPs to the closest POP, which will route traffic on Google's Network, not the open internet.
There may be good reasons for this ...
This is just hot-migration between hosts. It "stuns" a VM for a fraction of a second - very high performance databases and videogame servers will sometimes notice, while everything else just sees a spot of lag and keeps going.
AWS are an odd cloud vendor who don't support many common cloud features:
- No hot migrate between hosts.
- No hot-add RAM or CPU.
- No memory-state snapshot, only disk snapshot.
- No arbitrary CPU or RAM quantities, only "t-shirt" sizes - can't build servers in "nonsensical" configurations like 12 CPUs and 1GB RAM, or 1 CPU and 128 GB RAM.
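On the last point: GCP's custom machine types are priced as per-vCPU plus per-GB components (within the platform's CPU-to-memory ratio limits), which is what makes odd shapes expressible at all. A sketch with made-up per-unit prices, not real list prices:

```python
# Illustration of component-priced custom machine types: cost is
# vCPUs * per-vCPU rate + RAM * per-GB rate. Both rates below are
# hypothetical placeholders, not actual GCP pricing.

PER_VCPU_HOUR = 0.033   # hypothetical
PER_GB_HOUR   = 0.0045  # hypothetical

def custom_instance_hourly(vcpus, ram_gb):
    return vcpus * PER_VCPU_HOUR + ram_gb * PER_GB_HOUR

# Shapes with no AWS "t-shirt size" equivalent:
print(round(custom_instance_hourly(12, 1), 4))   # CPU-heavy: 0.4005
print(round(custom_instance_hourly(1, 128), 4))  # RAM-heavy: 0.609
```

With fixed instance shapes you instead pay for whichever t-shirt size first satisfies your larger requirement, and the other dimension comes along whether you need it or not.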
This is on top of having arbitrary separators in their data centers, so they send you strange messages about having to delete and rebuild your servers in the same data center. AWS may think it's cute to sell different "areas" in their data center, but the way to get redundancy for servers in AWS us-east-1 is to have servers in Azure US West 2 or GCP us-central1.
Like early iOS in the smartphone space, AWS are dominant in the VM space through marketing, not features.
From re:invent presentations, we know that even availability zones might be made up of multiple datacenters. In 2014, James Hamilton's presentation said that one of the AZs in us-east-1 had 6 separate datacenters.
I don't think it's really accurate to say AWS is selling us different 'areas of a datacenter' when we know that AZs are not only not sharing a datacenter, but might be multiple datacenters themselves.
When a server class is "sold out" in a region, you can't start your server - but there's no indication of this anywhere until you try to start it. Other cloud providers auto-rebalance VMs to make space. Using AWS is sometimes more like using physical servers than VMs - perhaps even more so with paravirtualization.
AWS has been very open over the years about what their terminology means. When they say datacenter, they mean it in the traditional sense of the word. When they say an AZ is made up of at least one but sometimes multiple datacenters, they mean that an AZ has one or more physical datacenters. They're not slicing up a server room and calling the slices multiple datacenters.
We also know that AZs are physically separated from each other.
So an AWS region has at least as many physical datacenters as it has AZs, and potentially quite a few more.
James Hamilton has talked pretty extensively about this stuff at re:invent, and as an AWS customer, his talks have been some of the most interesting to me.
Other people calling a datacenter a region doesn't suddenly reduce an AWS region down to a datacenter. A datacenter is a word with a pretty specific definition, and an AWS region does not fit that definition.
My apologies if I came across as combative.