On AWS, going multi-region involves setting up VPN and NAT instances. Not rocket science, but wasted brain cycles.
Generally, with GCP, setting up clusters that span three regions should provide ample high availability, and most users don't need to deal with the multi-cloud headaches. KISS. You can even get pretty good latency between regions if you set up in North Carolina, South Carolina, and Iowa. Soon West Coast clusters will be possible between Oregon and Los Angeles (region coming soon).
Of course anything can be set up using a custom VPN, but this is a lot more work and will never be as easy, reliable, automated, or cost effective.
That being said, AWS is rolling out automatic VPC peering, running on their own private backbone between regions, so there should be functional parity soon, although with different price and performance compared to GCP.
They're overshadowed now by the scale, efficiency, and managed services of the major clouds, but they can still be useful if you're running on their dedicated machines. Last I checked, Keen.IO runs on SoftLayer.
One success story is not enough compared to thousands elsewhere.
Regions that are difficult to interconnect make multi-region somewhat harder to do, and it can easily end up with "meh, AZs are good enough".
Just as an FYI: you don't have to use a NAT instance; there are also NAT gateways, which I find easier to manage: http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-na...
We had issues as soon as we started launching instances (after connecting VNets), and Azure support's response was to ask for the instance IDs so they could manually add them to the routing between VNets.
Also, BGP routing was impossible to do beyond their tutorial-level setup.
To a lesser extent, it's also nice registering domains within AWS and setting them to auto-renew. Since Google Domains already exists, it would be neat to have this feature right inside Google Cloud.
* Idle Load Balancers
* Underutilization of EBS volumes
* Unassociated Elastic IP addresses
* Idle RDS instances
* R53 latency-based resource record sets
GCE bills are aggregated across instances. To get a more detailed breakdown, you can apply labels to your instances, and the bills will have label information attached in BigQuery (BQ).
Alternatively, you can leverage GCE usage exports, which have per-instance, per-day, per-item usage data for GCE.
Disclosure: I work for Google Cloud but not on GCE.
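Since the usage exports are just CSV files, slicing them yourself is straightforward. Here's a minimal sketch of aggregating usage hours per instance; note the column names and sample rows below are assumptions for illustration, not the exact export schema:

```python
import csv
import io
from collections import defaultdict

# Hypothetical sample of a GCE usage export. Real exports have more
# columns, and the actual header names may differ from these.
sample = """\
report_date,resource_id,measurement_id,usage_value
2017-06-01,instance-a,compute-engine/VmimageN1Standard_1,86400
2017-06-01,instance-b,compute-engine/VmimageN1Standard_1,43200
2017-06-02,instance-a,compute-engine/VmimageN1Standard_1,86400
"""

# Sum usage seconds per instance across all days in the export.
totals = defaultdict(float)
for row in csv.DictReader(io.StringIO(sample)):
    totals[row["resource_id"]] += float(row["usage_value"])

for instance, seconds in sorted(totals.items()):
    print(f"{instance}: {seconds / 3600:.1f} hours")
# instance-a: 48.0 hours
# instance-b: 12.0 hours
```

The same groupby-style aggregation works per day or per measurement ID if you key on those columns instead.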
- They have Role-Based Support plans, which offer flat prices per subscribed user: a much better model.
- Live migration for VMs means host maintenance and failures are a minor issue, even if all your apps are running on the same machine. It's pretty much magical, and when combined with persistent disks, it effectively gives you a very reliable "machine" in the cloud.
Not at all. Major mistake here.
When you buy dedicated instances on AWS, you reserve an entire server for yourself. All the VMs you buy subsequently will go to that same physical machine.
In effect, your VMs are on the same motherboard and will all die together if the hardware experiences a failure. It's the exact opposite of what you wanted to do!
Dedicated Instances: https://aws.amazon.com/ec2/purchasing-options/dedicated-inst...
Dedicated Hosts: https://aws.amazon.com/ec2/dedicated-hosts/
> You can use Dedicated Hosts and Dedicated instances to launch Amazon EC2 instances on physical servers that are dedicated for your use. Dedicated Instances are Amazon EC2 instances that run in a VPC on hardware that's dedicated to a single customer. You can also use Dedicated Hosts to launch Amazon EC2 instances on physical servers that are dedicated for your use.
> Dedicated instances may share hardware with other instances from the same AWS account that are not Dedicated instances.
> An important difference between a Dedicated Host and a Dedicated instance is that a Dedicated Host gives you additional visibility and control over how instances are placed on a physical server, and you can consistently deploy your instances to the same physical server over time.
It looks like you can launch DIs on your DHs, or on any arbitrary host; but once you have a DI on an arbitrary host, only your VMs will run there, so it's a de facto affinity policy. And any instance you launch on your DH is automatically a DI.
Is there a benefit to running DIs without having a DH? It sounds like a DI gives you 90% of a DH. What the DH adds is a few hardware details (which might be essential for licensing), and, as GP suggested, it would let you choose affinity (or anti-affinity) between instances manually.
As a result, Dedicated Hosts enable you to use your existing server-bound software licenses like Windows Server and address corporate compliance and regulatory requirements.
This is the first I'm hearing about DHs, and it sounds like that might be what we need, instead of the DIs we've been telling other teams about.
You can buy up to two of each type/location and schedule your VMs to run on different physical hosts?
The run of iperf refuted your refutation.
For how long is the question. Historically, it’s been considered common knowledge (might just be an urban legend) that AWS, even if you pay for more traffic, at some point just throttles you, the same way that they do with IO.
Though there would still be other things like the lower on-demand rates, custom shapes, networking that scales with shape (rather than being coarsely grouped), being able to attach SSDs / GPUs semi-arbitrarily, and so on. For those that care, not having to pay up front for the best price is also a huge deal. You see the same thing in GCS vs S3 as well: Glacier and S3-IA have a few rounding-up gotchas that catch many people out.
All that said, I hope we all get to per-minute billing.
Disclosure: I work on Google Cloud (but haven't talked to the Metamarkets folks)
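To make the granularity point concrete, here's a toy comparison of hourly-rounded versus per-minute billing. The $0.10/hour rate is an assumption for illustration, not a real price from either provider:

```python
# Toy model: cost of one VM run under two billing granularities.
# RATE_PER_HOUR is an illustrative assumption, not a real price.
RATE_PER_HOUR = 0.10

def cost_hourly_rounding(minutes_used: int) -> float:
    """Bill in full-hour increments, rounding any partial hour up."""
    hours_billed = -(-minutes_used // 60)  # ceiling division
    return hours_billed * RATE_PER_HOUR

def cost_per_minute(minutes_used: int) -> float:
    """Bill for exactly the minutes used."""
    return minutes_used / 60 * RATE_PER_HOUR

# A 65-minute run: 2 full hours billed vs ~1.08 hours billed.
print(cost_hourly_rounding(65))  # 0.2
print(cost_per_minute(65))       # ~0.1083
```

For short-lived workloads (CI runners, batch jobs, autoscaled fleets), that rounding difference compounds across every instance launch.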
Maybe the author means multiple regions? Multi-AZ is so easy. Everything works. Multi-region is much harder.
It seems to focus more on raw infrastructure (EC2 vs GCE) than on each company's PaaS offerings. Obviously AWS is the front runner there, but I would be super curious to see a comparison of RDS vs. Cloud Spanner, for instance.
(pun unintentional, but then realized, and left in there)