> Over the past couple of years, I’ve never struggled to grasp or understand any of the services GCE offers. The predefined machine types are also very clear: shared-core, standard, high-memory, and high-CPU. I know them all by heart now, including their memory configurations and, to some extent, their pricing. I clearly understand the difference in IOPS between a standard disk and an SSD disk, and how the chosen size affects their performance. There is no information overload; disk pricing and other details are kept separate. It’s simple and easy to understand.
> Now compare this with EC2 VMs: it’s overwhelming, with current-generation, previous-generation, and other VM families. Disk information and network configurations are all lumped together with VM configurations, and there are paragraphs upon paragraphs of different pricing configurations. For me, it was painful just trying to work out which VM type suited my needs. My first encounter with SSD configurations and maximum provisioned IOPS for AWS RDS was a painful one. Instead of spending time working on my project, I found myself spending valuable time trying to select which IaaS offerings best fit my needs, such as trying to figure out whether low-to-moderate or high network connectivity was right for me. No wonder I still hear many say they find cloud offerings confusing! I think that’s no longer the case with GCP.
Very structured, and lots of meaningful analogies. I didn't know that AWS had so many storage services! And you're right, the AWS console feels like a marketing dashboard; so far I've only ever clicked on three of the icons and never looked at the other thirty or so.
I think it's mostly historical baggage. As soon as GCE gets as old as AWS, the offering variations it has to carry for historical contexts and so on will be as confusing as AWS is today.
Actually, we've made some explicit choices along the way to avoid this cruft.
For example, we haven't introduced different "generations" of machine types; instead we've stuck with "n1-standard-1" even across different architectures (we then document which underlying processor architectures are available on a zone-by-zone basis at https://cloud.google.com/compute/docs/zones#available for people who care).
Similarly, instead of introducing "Local SSD instances", we let you attach Local SSD partitions to any VM arbitrarily. And Preemptible VMs are just a boolean option on any VM.
So you don't see a machine-type matrix explosion on GCE, and that's on purpose.
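To make the "no matrix explosion" point concrete, here's a hedged sketch of how these choices compose as independent flags on a single `gcloud` invocation, rather than as separate instance families. The instance name and zone are made-up placeholders, and exact flag spellings may vary across gcloud CLI versions:

```shell
# Sketch: one machine type, with Local SSD and preemptibility as
# orthogonal options on the same command (placeholders throughout;
# assumes the gcloud CLI is installed and authenticated).
gcloud compute instances create my-worker \
  --zone=us-central1-b \
  --machine-type=n1-standard-1 \
  --local-ssd=interface=SCSI \
  --preemptible
```

Compare this with picking among dedicated "storage-optimized" or "spot-only" instance families: here the base machine type stays the same and each capability is just another flag.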
There's a real trade-off, and I didn't mean to dismiss it (apologies if it came across that way). For some workloads, mostly vectorization-friendly ones, a 2.3 GHz Haswell runs absolute circles around a 2.5 GHz Sandy Bridge. So if you care about newer architectures, either the API needs to let you choose a processor type per zone (painful) or you take it on as the provider (each zone has a single processor type). We also elect to maintain single-threaded performance across a variety of benchmarks, which is a double-edged sword.
So AWS made a different choice: make the customer decide explicitly. We went with "let the customer decide if they want to". Most people internally at Google don't bother, and I'd say we've been proven correct in the marketplace as well; if you need to care, we're transparent (otherwise, who cares!).
Same with AWS; I've been using it since 2008 or so. At that time, many of the services, or pieces of functionality within certain services, couldn't be accessed via the web interface at all, only via the APIs.
I see a constant flux of features, but the whole Google Cloud ecosystem is well thought out and coherent, as if ease of use were the primary goal. App Engine, after 8 years, is still as easy to use as the day it was released.
It's not just accumulation. IMHO, the features of Google Cloud are better thought out.
Two examples of things that rock:
- Being able to pop up an SSH shell right from your browser
- Google Cloud Shell: a free Linux shell in the sky with a bunch of dev tools pre-installed (including Docker)
Any feedback is welcome.