
Despite being a strong advocate for AWS, this is where I will say Google completely outshines Amazon.

Google's approach to pricing is, "do it as efficiently and quickly as possible, and we'll make sure that's the cheapest option".

AWS's approach is more, "help us do capacity planning and we'll give you a price break for it."

Google applies bulk discounts after the fact, AWS makes you ask for them ahead of time.
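
To make "after the fact" concrete, here's a rough sketch of how the sustained use discount gets computed from a month's usage, as I understand it (tier multipliers are the ones published around now; the base rate in the example is made up):

  # Rough sketch of GCP's sustained use discount, applied in arrears.
  # Tier multipliers as published at the time; check current docs before
  # relying on them.
  def sustained_use_price(base_hourly_rate, hours_used, hours_in_month=730):
      tiers = [
          (0.25, 1.0),  # first 25% of the month at 100% of the base rate
          (0.25, 0.8),  # next 25% at 80%
          (0.25, 0.6),  # next 25% at 60%
          (0.25, 0.4),  # final 25% at 40%
      ]
      remaining, total = hours_used, 0.0
      for fraction, multiplier in tiers:
          tier_hours = min(remaining, fraction * hours_in_month)
          total += tier_hours * base_hourly_rate * multiplier
          remaining -= tier_hours
      return total

  # A full month lands at 70% of the on-demand price, with no reservation
  # and no capacity planning up front.
  print(sustained_use_price(0.76, 730))  # hypothetical $0.76/hr base rate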




Disclosure: I work on Google Cloud.

Thanks jedberg. You've definitely captured how we feel about it.

I will say that while our sustained use discount (SUD) works as you describe, our committed use discounts (CUD) are more similar to the AWS Flexible RIs. For some customers, even if it's obviously cheaper to use on-demand and just get the SUD benefit, they prefer the predictable "let me be sure I pay X" model.

As a note, even though CUDs and Flexible RIs are similar, I prefer ours (obviously). It's just a pile of cores and RAM in a region that you sign up for. That's easy enough to apply in arrears that no ETL is necessary, and as you'd imagine, we want to automate this too. We also don't play games with upfront versus no-upfront payment, though most enterprises don't care.
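
A minimal sketch of my reading of the "pile of cores and RAM, reconciled in arrears" model (vCPUs only for brevity; the rates are invented, and this is not the actual billing logic):

  # Hypothetical reconciliation of a committed use discount against actual
  # usage. Rates are made up for illustration.
  def cud_bill(used_vcpu_hours, committed_vcpus, hours_in_month=730,
               on_demand_rate=0.0332, cud_rate=0.0209):  # made-up $/vCPU-hour
      committed_hours = committed_vcpus * hours_in_month
      # You pay for the commitment whether or not you use it...
      bill = committed_hours * cud_rate
      # ...and anything beyond it at the regular (on-demand/SUD) rate.
      overage = max(0.0, used_vcpu_hours - committed_hours)
      return bill + overage * on_demand_rate

  print(cud_bill(used_vcpu_hours=50_000, committed_vcpus=64))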


To be fair, AWS gives you bigger discounts for RIs than GCP's sustained use. That makes sense; someone's gotta pay for the risk.

That said, GCP's committed use discounts still seem to be a better deal than convertible RIs.

Looking at the run-of-the-mill 64 GB RAM standard instance (hourly, USD):

  AWS 1yr all upfront:            $0.458
  GCP committed use:              $0.47878
  AWS 1yr no upfront:             $0.491
  GCP sustained use:              $0.532
  AWS 1yr no upfront convertible: $0.564
edit: confused committed vs. sustained use for a second there
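
For scale, annualizing those hourly rates for a single instance running 24x7 (pure arithmetic on the numbers above; real prices drift, so treat it as illustrative):

  # Annual cost of one always-on instance at the hourly rates quoted above.
  hourly = {
      "AWS 1yr all upfront":            0.458,
      "GCP committed use":              0.47878,
      "AWS 1yr no upfront":             0.491,
      "GCP sustained use":              0.532,
      "AWS 1yr no upfront convertible": 0.564,
  }

  HOURS_PER_YEAR = 8760
  for name, rate in hourly.items():
      print(f"{name:32s} ${rate * HOURS_PER_YEAR:8.2f}/year")

  spread = (max(hourly.values()) - min(hourly.values())) * HOURS_PER_YEAR
  print(f"spread between cheapest and priciest: ${spread:.2f}/year per instance")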


The customer takes all the risk. Reserving instances is a bad deal; it should be treated as gambling.


I don't think this is like gambling.

Any system has some amount of predictability or volatility, and volatility entails a cost that is effectively passed on to the buyer one way or another.

Google can be smart about it and try to make predictions, but ultimately the customer will in many cases know much more about what their usage patterns will be. Sharing that information reduces volatility, and therefore cost ... which gets passed back to the buyer.

If you say "I want to rent 10 cars for a month" vs. "I want the ability to rent anywhere from 2 to 20 cars for a month, not sure exactly how many" - which is intrinsically more expensive for the provider to serve? Even with demand smoothing over a large client base, the increased volatility is a cost.
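
A toy simulation of the provider's side (entirely made-up numbers, nothing to do with real car-rental or cloud pricing):

  # Toy model: a provider serving 1,000 customers must provision for a high
  # percentile of aggregate demand, not the average. The gap is the cost of
  # flexibility, even after smoothing over many customers.
  import random

  random.seed(0)
  CUSTOMERS = 1000
  ROUNDS = 2000

  totals = []
  for _ in range(ROUNDS):
      # each customer needs somewhere between 2 and 20 cars this month
      totals.append(sum(random.randint(2, 20) for _ in range(CUSTOMERS)))
  totals.sort()

  average = sum(totals) / len(totals)
  p999 = totals[int(0.999 * len(totals))]  # provision to rarely run out

  print(f"average aggregate demand: {average:.0f} cars")
  print(f"capacity actually needed: {p999} cars")
  print(f"idle capacity carried:    {p999 - average:.0f} cars, paid for one way or another")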

I don't think this is related to gambling.

In many ways this is really similar to financial planning, and I think the language of this article will make far more sense to financial types than to technical types. I might even go so far as to say this is really a problem for financial ops, not devops.


It's gambling: the customer loses money when they bet on the wrong instances.


By that definition, all of business is gambling.


What is your definition of gambling?


It's not a bad deal; companies purchase physical servers all the time, and those are roughly three-year investments. Plus you can prorate your reserved instances up to larger ones for only the difference in cost.


I wonder if this contributes to Amazon's ability to do better capacity planning. I've had issues with available quota from Google in the past, and haven't had the same experience with Amazon.


I have to think AWS will get there eventually. But maybe not.

Either way, there is way more $$$ in consulting for AWS than for GCP because of the expertise required to optimize costs.


I agree that there are more dollars to be made on consulting; however, there is little money to be saved by the client. It's all about promising cost savings while pocketing the difference and then some.


Using GCP still feels like comparing an early Android to a modern iOS. I guess they have to sell it cheap when the features and quality aren't there.


Whatever GCP has is rock-solid and often superior to AWS.

For example: when AWS encounters non-catastrophic issues with their hypervisor, you are on the hook for moving the instances away (meaning stop-start, or termination and relaunch for instance store). Depending on the instance type, this can cause service disruption.

GCP will transparently migrate the VM while it is running for you. You never see it, and your customers don't either.

Same for networking: if you use their "premium" network, you can have anycast IPs to the closest POP, which will route traffic on Google's Network, not the open internet. AWS does not have anything close to this; the closest is multi-region VPC peering, without the fancy routing.

AWS offers more features though, which could be important if you require them.


> Whatever GCP has is rock-solid and often superior to AWS.

A number of global outages in various network-related APIs or features on GCP in the past year say otherwise. I don't think there has been any global or even multi-region outage at AWS in many, many years; the biggest recent one, the S3 outage in a single region (us-east-1), impacted the internet more than any of the (~3) global outages at GCP, and more than some recent ones at Azure.

> For example: when AWS encounters non-catastrophic issues with their hypervisor, you are on the hook for moving the instances away (meaning stop-start, or termination and relaunch for instance store).

Apparently this is only for older instance types.

> GCP will transparently migrate the VM while it is running for you. You never see it, and your customers don't either.

If you never see it from GCP, how do you know AWS doesn't do it for more recent instance types?

> Same for networking: if you use their "premium" network, you can have anycast IPs to the closest POP, which will route traffic on Google's Network, not the open internet.

There may be good reasons for this ...


> GCP will transparently migrate the VM while it is running for you. You never see it, and your customers don't either.

This is just hot migration between hosts. It "stuns" a VM for a fraction of a second - very high-performance databases and video game servers will sometimes notice, while everything else just sees a spot of lag and keeps going.

AWS are an odd cloud vendor who don't support many common cloud features:

- No hot migrate between hosts.

- No hot-add RAM or CPU.

- No memory-state snapshot, only disk snapshot.

- No arbitrary CPU or RAM quantities, only "t-shirt" sizes - can't build servers in "nonsensical" configurations like 12 CPUs and 1GB RAM, or 1 CPU and 128 GB RAM.

This is on top of having arbitrary separators in their data centers, so they send you strange messages about having to delete and rebuild your servers in the same data center. AWS may think it's cute to sell different "areas" in their data center, but the way to have redundancy for servers in AWS us-east-1 is to have servers in Azure US West 2 or GCP us-central1.

Like early iOS in the smartphone space, AWS are dominant in the VM space through marketing, not features.


>AWS may think it's cute to sell different "areas" in their data center

https://aws.amazon.com/about-aws/global-infrastructure/

From re:invent presentations, we know that even availability zones might be made up of multiple datacenters. In 2014, James Hamilton's presentation said that one of the AZs in us-east-1 had 6 separate datacenters.

I don't think it's really accurate to say AWS is selling us different 'areas of a datacenter' when we know that AZs are not only not sharing a datacenter, but might be multiple datacenters themselves.


From a certain point of view, it's Amazon using different terminology - what others call a data center, they call an AZ...or even a subset of an AZ, and we'll never know exactly - nor the capacity of each[0]. And what they call a data center, others call a region.

[0] When a server class is "sold out" in a region, you can't start your server - but there's no indication of this anywhere until you try. Other cloud providers auto-rebalance VMs to make space; using AWS is sometimes more like using physical servers than VMs - maybe more so with paravirtualization.


I'm really confused as to what you're trying to say.

AWS has been very open over the years about what their terminology means. When they say datacenter, they mean it in the traditional sense of the word. When they say an AZ is made up of at least one but sometimes multiple datacenters, they mean that an AZ can have multiple physical datacenters. They're not slicing up a server room and calling the slices multiple datacenters.

We also know that AZs are physically separated from each other.

So an AWS region has at least as many physical datacenters as it has AZs, and potentially quite a few more.

James Hamilton has talked pretty extensively about this stuff at re:invent, and as an AWS customer, his talks have been some of the most interesting to me.

Other people calling a datacenter a region doesn't suddenly reduce an AWS region down to a datacenter. A datacenter is a word with a pretty specific definition, and an AWS region does not fit that definition.


The confusion is entirely mine. Your clarity is appreciated.


You're welcome!

My apologies if I came across as combative.


The AWS equivalent of anycast / closest-POP routing is to route all traffic through CloudFront. That way users enter AWS's fiber within 50ms (sometimes 5ms) and have SSL terminated there as well. It only works for HTTP(S) traffic, though, not general networking.


[citation needed]



