There are times you need more flexibility. I'm serving 1.2MM requests per second from 3 GCP regions, managing instances and GKE clusters with terraform, and I cannot see how I could possibly set that up in a resilient fashion with DigitalOcean.
I think DO is perfect for apps at a certain scale. You mostly care about UI niceties when you spin up a couple of servers, but when you operate hundreds of machines you need automation.
GCP has its quirks, e.g.
* 130k connections/core limit due to conntrack,
* lower networking throughput compared to AWS (16Gbps on GCP vs 25Gbps on AWS),
* no support for enhanced networking (haven't tested recent Andromeda 2.1 yet, though)
* no way to attach more than 8 local SSDs (arguably a good thing)
So do AWS and DO, and you have to pick what's best for your project. One thing I like here in general is the competition, which makes all of these services better.
EDIT: Fix conntrack typo
Isn't the 130K limit only for core count < 8?
* 130,000 per instance for instances with shared-core machine types
* 130,000 per CPU for instances with 1 to 8 CPUs
* 130,000 × 8 (1,040,000) per instance for instances with more than 8 CPUs
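The tiers above reduce to a simple formula. Here's a quick sketch (`gcp_conntrack_limit` is my own made-up helper name, just encoding the numbers quoted above):

```python
def gcp_conntrack_limit(vcpus: int, shared_core: bool = False) -> int:
    """Per-instance connection-tracking limit implied by the quota tiers:
    130k flat for shared-core types, 130k per vCPU otherwise, capped at 8 vCPUs."""
    PER_UNIT = 130_000
    if shared_core:
        return PER_UNIT               # flat per-instance limit
    return PER_UNIT * min(vcpus, 8)   # scales with vCPUs, capped at 8

# An n1-standard-16 still tops out at the 8-vCPU cap:
print(gcp_conntrack_limit(16))  # 1040000
```

So past 8 vCPUs, adding cores buys you no extra tracked connections on a single instance; you scale out instead.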
I don't disbelieve you, I'm just wondering what type of site that is since English Wikipedia is several orders of magnitude lower than that.
Ended up serving most of the traffic from n1-standard-16 or lower.
I think these days GCP and AWS are more or less on par. One thing I learned the hard way: invest time into calculating your expected cloud spend. Your use case, the partners you integrate with, and your audience all significantly impact cloud pricing.
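A back-of-envelope spend model is enough to catch surprises early. A minimal sketch (all names and rates here are made-up placeholders, not real GCP or AWS prices; egress is usually the line item that catches people out):

```python
def monthly_estimate(instances: int, instance_rate_hr: float,
                     egress_tb: float, egress_rate_tb: float) -> float:
    """Rough monthly spend: compute + egress.
    instance_rate_hr and egress_rate_tb are placeholder prices."""
    HOURS_PER_MONTH = 730  # ~365.25 * 24 / 12
    compute = instances * instance_rate_hr * HOURS_PER_MONTH
    egress = egress_tb * egress_rate_tb
    return compute + egress

# e.g. 10 instances at a hypothetical $0.50/hr, 5 TB egress at $80/TB:
print(monthly_estimate(10, 0.50, 5, 80.0))  # 4050.0
```

Plug in your own instance mix and traffic profile; the point is that audience location and partner integrations show up directly in the egress term.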