
Goodbye Google Cloud, Hello Digital Ocean - mluggy
https://lugassy.net/goodbye-google-cloud-hello-digital-ocean-a4ca1c8b7ac8
======
gabemart
I'm a bit surprised by the replies talking about serving millions of requests
a second, or needing ultra-consistent packet latency. Perhaps most of the HN
demographic works for companies that operate at that scale.

But the OP is talking about running dozens of servers, not hundreds or
thousands. My own modest apps have held up to the traffic from being #1 on
reddit and not fallen over, never been null routed or nerfed by DO, and served
hundreds of millions of requests and hundreds of TBs of data. I have nothing
but good things to say about DO.

I have no experience running apps at the "millions of requests a second"
scale, and I'm willing to accept DO might be a bad choice at that level. But
what percentage of apps will ever reach that scale?

~~~
user5994461
>>> I'm a bit surprised by the replies talking about serving millions of
requests a second, or needing ultra-consistent packet latency. Perhaps most of
the HN demographic works for companies that operate at that scale.

There is a high concentration of serious professionals on HN running
world-class operations. There is also a large number of startup and one-person
operations who would hate to write a $100 check.

The thing is, you can only be on one side at a time. The other side then has
nothing valuable to say to you; all their advice is misplaced.

So, to answer your question: it's pointless to ask what percentage of apps
will reach a large scale. You're either at large scale already or you are not.
Obviously, the article is only intended for small-scale operations (actually,
the entire article is just a generic ad; DigitalOcean has a nice affiliate
program).

------
doh
I think this is called maturity. GCP finally knows who its target customer is,
and they're building features for them. DO has a really nice and simple UI, but
try to spin up 5,000 servers and you will see what a hot mess it is.

GCP shines at very large scale. For us, what makes the difference is the
spending predictability and the massive discounts that come from running
preemptible servers or simply committing to certain usage.

GCP is not as service-rich as AWS, but most of the offered services are
capable of withstanding incredible loads. In our case, running on tens of
thousands of mostly preemptible servers[0], we couldn't have pulled this off
anywhere else (trust me, we've tried).

It's great to know which service works for your loads and make the best of it.

[0]
[https://news.ycombinator.com/item?id=13726224](https://news.ycombinator.com/item?id=13726224)
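
To make the preemptible part concrete, here's a sketch of the request (machine
type and name are illustrative, not our actual setup); such instances are
billed at a steep discount but can be reclaimed by Google at any time, so the
workload has to tolerate that:

    gcloud compute instances create pre-worker-1 \
        --preemptible \
        --machine-type n1-standard-4 \
        --zone us-central1-a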

~~~
mark_l_watson
+1, you said what I was going to say. GCP probably doesn’t want the business of
small web sites and portals. I think that under Diane Greene they have a clear
vision of who their target (large/huge) customer base should be and what they
need to do to attract those customers.

For small projects, especially when I am paying the bills, it is difficult to
beat OVH and DigitalOcean on costs.

That said, GCP is still my favorite provider, largely because using GCP makes
me a little nostalgic for the time I worked as a contractor at Google - really
enjoyed their infrastructure.

~~~
doh
I hear you. I've missed most of the tools and infrastructure every day since I
left. I was ecstatic when they made PubSub available to everyone. I wish they
would expose more of the internal services, especially Colossus, Dremel, Borg, ...

~~~
dekhn
dremel = bigquery, borg = GKE.

~~~
doh
Yes, those use the technologies under the hood, but they are not the same
thing. I would like to have direct access to them so I could build other
things on top of them.

~~~
dekhn
Not really. BigQuery is, for all intents and purposes, an instance of Dremel
running for cloud users. And GKE is effectively a clone of Borg rather than an
instance of it, which is great. It sounds like what you want is a job at
Google...

~~~
doh
Oh really? Then please add some vectors to BigQuery and run a query on them.

~~~
dekhn
I'm sorry, I don't know what you mean by vector. Do you mean a repeated proto
field (which models a vector)?

~~~
doh
Anything from a matrix to a tensor. For instance, say you want to store all
SIFT points and run comparisons.

------
FBISurveillance
More of a rant.

There are times you need more flexibility. I'm serving 1.2MM requests per
second from 3 GCP regions, managing instances and GKE clusters with Terraform,
and I cannot see how I could possibly set that up in a resilient fashion with
DigitalOcean.
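
Terraform aside, even the one-off CLI version of standing up a cluster is a
single command (name, zone and size below are illustrative, not our actual
setup); Terraform just keeps hundreds of such definitions in version control
and applies them repeatably:

    gcloud container clusters create edge-cluster \
        --zone us-central1-a \
        --num-nodes 3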

I think DO is perfect for apps at a certain scale. You mostly care about UI
niceties when you're spinning up a couple of servers; when you operate
hundreds of machines, you need automation.

GCP has its quirks, e.g.:

  * 130k connections/core limit due to conntrack
  * lower networking throughput compared to AWS (16Gbps on GCP vs 25Gbps on AWS)
  * no support for enhanced networking (haven't tested the recent Andromeda 2.1 yet, though)
  * no way to attach more than 8 local SSDs (arguably a good thing)
  * etc.

So does AWS, and so does DO; you have to pick what's best for your project.
One thing I like here in general is the competition, which makes all of these
services better.

EDIT: Fix conntrack typo

~~~
throwawayReply
What do you do that serves 1.2MM requests/second?

I don't disbelieve you, I'm just wondering what type of site that is, since
English Wikipedia is several orders of magnitude lower than that.

~~~
FBISurveillance
Ads, one of the most shitshow industries in the world.

~~~
dhimes
So Google built a platform that is excellent for running ad networks. It seems
so obvious now.

~~~
user5994461
Ads are a super-intensive workload with super low margins. It'd be more
correct to say that Google built a very efficient platform that can run
anything. Well, anything easier than ads, so almost anything.

------
nickjj
Let's not forget how great DO's support is too.

Even if you're only operating a single $5/month droplet (their lowest instance
type), you'll still get top-grade email support included with your plan.

Getting a response in a few hours is almost guaranteed, and often they'll even
help you debug problems that go beyond the scope of their infrastructure.

Compare that to something like AWS, which charges you an extra $29/month
minimum (or 3% of your AWS bill) to speak to an entry-level support person
through email.

------
tzury
“Goodbye cloud, hello nice UI for VPS hosting”.

If Google’s own premium speedy network, anycast IP, global load balancing and
security features bring no benefit to your web operations, then your math
might be right.

If you need your LB to handle 10s of millions of HTTP/S requests per minute,
then what other choices do you have other than Google?

F/W default “Everything within a project should be allowed”? Sorry, but no way.

I'm not familiar with other DO capabilities, but a cloud platform is not just
a VPS available through a REST API; it goes far beyond that.
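
For what it's worth, that default is a single command to remove on GCP (the
rule name below is what the default network ships with, if memory serves):

    gcloud compute firewall-rules delete default-allow-internal

You can then add back narrower per-service rules.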

~~~
kuschku
> If you need your LB to handle 10s of millions of HTTP/S requests per minute,
> then what other choices do you have other than Google?

The same options every business that needed them has used for decades? Renting
dedicated servers en masse, colocating, or running your own datacenters? And
doing this in multiple countries?

What I'd suggest for a start is renting dedicated servers or colocating in a
few places (EU Central, US East, US West), and running on that.

Dedicated servers, even with having to maintain them, are always cheaper than
GCP. Even more so for network traffic.

I started out with dedicated servers at a few European hosters, where I could
get servers and traffic for 40€ a month that would have cost me upwards of
14'000€ a month with GCP, AWS or Azure (180TB+ traffic per month; I don't have
the exact numbers at hand, but I used each service's cost calculator for
this). Sure, if I want high-quality networking, I'd have to pay another 360€
per month extra with these hosters, but that's still cheap enough that I can
basically run a 180TB/month serving service on a hobbyist's budget.
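
To sanity-check that figure: at GCP's list egress price of roughly $0.08/GB,
180TB is about 180,000 GB × $0.08 ≈ $14,400 per month in bandwidth alone, so
the 14'000€ ballpark is plausible before any negotiated discount.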

~~~
tzury
Dedicated servers == constant cost. In many cases, you only reach peak load
for several hours per week or day. With GCP you scale up super fast, and when
done, you tear down and save money.
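
As a rough sketch of what that looks like in practice (group name, zone and
thresholds are illustrative), a managed instance group handles the scaling
automatically:

    gcloud compute instance-groups managed set-autoscaling web-group \
        --zone us-central1-a \
        --min-num-replicas 2 \
        --max-num-replicas 20 \
        --target-cpu-utilization 0.6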

Also, from my experience, when you reach a certain level of operations, you
talk to the Google sales team and get a significant discount on bandwidth.

~~~
sokoloff
We're well past that point (with AWS) and we get a meaningful discount, but
it's in no way competitive on a per-TB or per-Gbps basis with a dedicated
colo.

It's also higher quality of service IMO. When AWS gets their daily DDoS
traffic, they manage it and we never notice nor care. With smaller colo
providers, they are much cheaper, but if your neighbor is getting DDoS'd, you
notice it. I assume Azure and GCP are similar.

~~~
kuschku
With regards to DDoS: that's something where OVH has pleasantly surprised me
in the past. I've been a target once or twice, but they automatically handled
all the upstream issues, and thanks to their hardware firewalls, I could
filter the traffic before it hit my server.

On Twitter, the OVH CEO claimed that they've previously had servers targeted
with a 1.1Tbps DDoS and still managed to keep the targeted user's site working.

------
bauerd
>What has gone wrong with GCP? It was trying to be AWS. Too complex and
feature-rich instead of simple and feature-complete.

Yeah no. GCP is a direct competitor to AWS, so of course Google needs (at
least partly) to match Amazon's product lineup. The argument of this post
boils down to "GCP should be like DO", i.e. fewer services and a slicker UI,
but Google won't win over AWS customers with that.

~~~
laumars
Indeed. Further to that, there are already a lot of DO-like hosting platforms
out there but fewer AWS-like solutions targeting enterprise. So why would
Google target a crowded market when they have the resources to target
enterprise?

Also, to offer my own anecdotal experience using DO: I find it highly
inflexible when I need to do anything more granular. Sure, it's easy for
deploying a standard build, but anything that falls slightly outside of DO's
remit quickly becomes more trouble than it's worth. Whereas AWS (and similar)
might intimidate newcomers with its scale and complexity, that flexibility can
also be a godsend.

------
dna_polymerase
Good luck with DO; the last time I checked, they still shut down your
instances when you hit a sustained 300Mbps of incoming traffic [0]. I could
never work with a service that weak on the policy side. Also remember the
disaster with Surge [1].

[0]: [https://www.digitalocean.com/community/questions/extra-bandwidth-what-will-happen](https://www.digitalocean.com/community/questions/extra-bandwidth-what-will-happen)

[1]: [https://motherboard.vice.com/en_us/article/qkj35w/nra-complaint-takes-down-38000-websites](https://motherboard.vice.com/en_us/article/qkj35w/nra-complaint-takes-down-38000-websites)

------
sitepodmatt
AWS, Azure and GCP instances are completely different from Linode, DO, Vultr,
etc. With the first three you get dependable performance characteristics with
minimal variation (excluding t* EC2), which is exactly what you need for
reliable horizontal scaling. Linode, Vultr, DigitalOcean, etc. are great for
the price point, but if I'm CPU-bound and about to scale an identical service
to N boxes, they're a poor choice.

As someone pointed out, you have Lightsail if you want Linode/DO from AWS.
3, 2, 1... let's have the cliché "AWS/GCP/Azure is ridiculously expensive"
argument based on some undefined VPS memory/CPU specs (hey, virtual cores!),
and let's not forget the usual HE.net / Cogent / Level3 quote of 10Gig for
$2000 to work out "real" bandwidth pricing from there... :rolleyes:

~~~
mluggy
I didn't get your last remark re bandwidth. If 1TB alone costs $90 at both GCP
and AWS, how is a 3TB + 2-CPU instance for $20 a poor choice?
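
To spell out that math: at the $0.09/GB list price, 3TB is roughly
3,000 GB × $0.09 ≈ $270 in egress alone at GCP/AWS, versus $20 for the whole
droplet, bandwidth included.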

~~~
sitepodmatt
You're getting 2 (or X) virtual cores, which means absolutely nothing and can
vary from minute to minute and hour to hour. Linode was even giving 8 virtual
cores with the cheapest plan 2/3 years ago. Virtual cores on Linode/DO/Vultr
are shared/contended, basically undefined in terms of lower-bound performance:
you are at the mercy of other tenants, and there are a lot more tenants in the
building too (far more tenants than physical cores). So I maintain that if
you're CPU-bound and scaling horizontally, these are poor choices.

The cost of provisioning high-performance networking is very real.
Linode/DO/Vultr don't expect 98% of their customers to use anything close to
1TB, and this fits nicely with the idea that they are a poor choice for
horizontal scaling, hence why AWS Lightsail isn't a threat to the main cloud
offerings. If you're just getting started and nowhere near the limits of a
single node, then fine; I even use Linode for a few clients myself:
"Bandwidth: 39GB Used, 13961GB Remaining, 14000GB Quota".
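
A quick way to see that contention from inside any Linux guest is the steal
column in vmstat, which shows the share of time the hypervisor gave your
virtual CPU to someone else:

    vmstat 1 5    # the last column, 'st' (steal); non-zero means noisy neighbours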

~~~
Elect2
Did you know that DigitalOcean provides "High CPU" droplets?

~~~
sitepodmatt
Interesting, I missed this announcement. This is definitely a step in the
right direction for competing for CPU-bound workloads. "Achieve up to 4x more
CPU performance on High CPU vs Standard Droplets." I guess we can take this to
mean that standard droplets' virtual CPUs are contended at 4:1; I would have
thought it would be higher.

~~~
thekonqueror
You should also check out Vultr's dedicated-core plans [1]. I have ~50
instances running on Vultr, and CPU performance / UnixBench scores are
significantly better than on DO's and Lightsail's similarly sized instances.

[1]
[https://www.vultr.com/pricing/dedicatedcloud/](https://www.vultr.com/pricing/dedicatedcloud/)
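
If you want to reproduce that comparison yourself, the usual route is the
byte-unixbench suite (repo location from memory; verify before running):

    git clone https://github.com/kdlucas/byte-unixbench
    cd byte-unixbench/UnixBench && ./Run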

------
euph0ria
We were also impressed with DO's GUI and ease of use. We set up a few
monitoring servers on their network to measure uptime, latency, packet loss
and similar metrics, since we deal with audio streaming over UDP. Suffice it
to say, it was a mess: so many micro-timeouts and tons of packet loss in small
bursts.

For prototypes and small projects dealing mainly in TCP, and if uptime is not
really important, DO is great. For us, we stick to AWS/GCP for the bulk of our
load, including dev/testing etc., as we want the environment to be as similar
to prod as possible.
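
For anyone who wants to reproduce this kind of measurement, a minimal sketch
with iperf3 between two droplets works (the address and rate below are
illustrative); UDP mode reports jitter and lost/total datagrams:

    iperf3 -s                              # run on the receiving droplet
    iperf3 -c 10.132.0.2 -u -b 10M -t 60   # run on the sender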

------
dividuum
> Private network is available, unmetered.

It's not private. It's internal and shared with all other customers at the
same data center location. I wouldn't be surprised if this misleading
marketing resulted in lots of accidentally exposed services.
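
A quick sanity check on any droplet is to list the TCP sockets listening on
all interfaces, since those are the ones reachable by other tenants on that
shared network:

    ss -tln | awk '$4 ~ /0\.0\.0\.0|\[::\]/'   # listeners bound to every interface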

------
radarsat1
Huh. I have zero experience with using cloud services, but I'm a bit surprised
by the complexity of _both_ of those UIs. I always assumed it was way simpler
than that. Like, I thought it was as simple as using a command-line tool:
"docker run <myprogram>", but "cloud run ...". Are there any cloud services
like that, where you just specify your environment, the CPU features you need,
etc. in a single console command?

~~~
Permagate
You can spin up a server with just one AWS CLI command (and most likely with
Google's equivalent, but I have no experience with it). As you said, you just
specify the server spec in the command options. Though the AWS CLI is very
verbose with its commands, imo.
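
For example, something like this (the AMI ID is a placeholder; substitute a
real image for your region):

    aws ec2 run-instances \
        --image-id ami-0abcdef1234567890 \
        --instance-type t2.micro \
        --count 1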

For anything more complex, such as weaving several servers under one VPC with
a load balancer and routing configuration, I'd recommend checking out
Terraform. It also supports multiple cloud providers.

~~~
kthejoker2
There's also an Azure CLI. You can spin up a Docker container with a single
command:

    az container create --name mycontainer --image microsoft/aci-helloworld \
        --resource-group myResourceGroup --ip-address public --ports 80

------
halayli
So he basically uses the EC2-equivalent service and is complaining about the
existence of the other services.

~~~
mluggy
Yes, except DO bandwidth is $0.006/GB instead of $0.09/GB. Am I paying extra
for the mere existence of other services?

~~~
laumars
The high bandwidth price is because with cloud services like AWS and Google
Cloud you'd be serving most traffic from either object storage (like S3) or
their CDN, both of which are priced in the same range as the DO bandwidth
figure you quoted. This not only reduces your compute bandwidth costs but also
drastically improves your application performance under load (and as an added
bonus, you also reduce your compute CPU costs, since they're not serving up
static content).

I've found that even for some relatively simple infrastructures (2x web
servers, 1x database server + some extra caching services), AWS actually
worked out cheaper than Digital Ocean. The catch is that you have to approach
hosting on platforms like AWS slightly differently than you'd approach hosting
on the likes of Digital Ocean. But once you start scaling your application,
you'll soon find that you'll want to adopt the aforementioned topology anyway,
regardless of arguments about AWS/Google/etc vs Digital Ocean.
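
The "slightly different approach" mostly amounts to pushing static content off
the web servers, which is a one-line script once a bucket exists (bucket name
illustrative):

    aws s3 sync ./public s3://example-static-assets --acl public-read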

~~~
mluggy
S3 is $0.023/GB (still 4x higher), plus $0.004 per 10K requests. CloudFront is
$0.085/GB, plus $0.0075-$0.01 per 10K requests. GCP is $0.08-0.12/GB (same for
all services), plus $5 per 1M requests. And who says most of the bandwidth
goes towards static assets? 99% of our requests are dynamic (thus the NodeJS
frontends).

~~~
laumars
_> And who says most of the bandwidth goes towards static assets?_

It obviously depends on the application, but generally you'd expect that to be
the case for web sites (ie not services that act only as an API backend).

 _> 99% of our requests are dynamic (thus the NodeJS frontends)_

I don't know your business, so you might be transferring more dynamic data
than static, but your argument here conflates request counts with bandwidth.
E.g. most of our requests are dynamic as well, but JSON APIs generally return
smaller chunks of data than an image does. So in real bandwidth terms on our
application, more data is transferred for static content than for dynamic.
None of the static content touches our web servers though; in fact a fair
amount of the dynamic content doesn't either, since a lot of it isn't
user-specific and so is served from caching services.

If you don't mind me asking, how much bandwidth do you transfer each month?
Even taking the above into account, our bandwidth costs (which are nothing
compared to when we self-hosted) get dwarfed by our computing costs (VMs and
DB).

By the way, how are you finding node.js for your workload? I find it a weird
choice if 99% of your requests are dynamic, based on bottlenecks I've
experienced when deploying test systems on it. But I ask about your
experiences because I've not used node.js for anything at scale (ie nothing
beyond a few thousand requests a second).

~~~
kuschku
I’m not the one you responded to, but I wanted to share my data:

I’ve had months where I’ve transferred above 180TB of data, at an overall cost
below 40€ for that month, with dedicated servers at a European hoster.

------
ridruejo
If you want a (free) DigitalOcean-like interface on top of GCP, check out our
Bitnami launchpad: [https://google.bitnami.com](https://google.bitnami.com)

------
Elect2
The only thing that attracts me to VPS providers is the pricing of bandwidth.

