
Managed Kubernetes Price Comparison - spalas
https://devopsdirective.com/posts/2020/03/managed-kubernetes-comparison/
======
neop1x
Egress costs at those major clouds are ridiculous. Once you start having some
traffic it can easily make up 50% of all costs or more. That is not
justifiable! Meanwhile hardware costs are going down. We need more k8s
providers with more reasonable pricing. Unfortunately both Digital Ocean and
Oracle Cloud don't have proper network load balancer implementations, which is
a must for elastic regional clusters and for forwarding TCP in a way that
preserves the client IP and lets you add nodes without downtime or TCP resets.
OVH cloud doesn't implement the LoadBalancer service type at all. So the
choice in 2020 is really just Google, Amazon, and Azure with their Rolls-Royce
pricing. The cost difference between them is negligible. And then there are
confidential free credits for startups. So sad...

~~~
vasco
All of AWS's VPC features are free. You get Security Groups, Subnets, Route
Tables, all kinds of shenanigans, in a very stable way, with basically no
incidents that I can remember.

Except those are not free; they're just charged for separately under "egress
transfer costs". At the end of the day, the engineers on the networking teams
at AWS still have to get paid, and that service probably also needs to run at
a profit. Seen in this light, the costs make more sense to me than the usual
simple view of "but bandwidth transfer costs are cheaper everywhere else!!!"

~~~
jugg1es
NAT gateways (a required component if you want real security) cost about
$30/month each, and you need at least two for HA.

~~~
scarface74
Well, I’ve got bad news for you, if you think NAT Gateways are required for
“real security”, just wait until you enable IPv6 on AWS.....

~~~
kawsper
Could you elaborate on this?

~~~
scarface74
When you enable IPv6, there is no NATing; all of the IP addresses are public.
You can use an egress-only Internet Gateway.

[https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat.htm...](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat.html)

But technically, your EC2 instance never has a public IP address. You can see
for yourself by assigning a public IP address to an ENI attached to your
instance and running ifconfig.

An Internet Gateway is a one-to-one NAT.
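As a sketch, wiring that up with the AWS CLI looks roughly like this (the
resource IDs are placeholders; the gateway permits outbound-only IPv6
connections, so instances can reach the internet without being reachable from
it):

```shell
# Create an egress-only internet gateway for the VPC (IPv6, outbound only)
aws ec2 create-egress-only-internet-gateway --vpc-id vpc-0example

# Send all outbound IPv6 traffic from a subnet's route table through it
aws ec2 create-route \
    --route-table-id rtb-0example \
    --destination-ipv6-cidr-block ::/0 \
    --egress-only-internet-gateway-id eigw-0example
```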

~~~
jugg1es
Interesting, I did not know this is how IPv6 worked. Need to investigate this
more.

------
david-s
It doesn't seem to include Digital Ocean in the comparison.

~~~
mjfisher
Digital Ocean is still significantly cheaper (unsurprisingly). They don't
charge for the control plane, so you just pay the normal prices for the
droplets and resources you use. It's well integrated, allowing Kubernetes to
provision load balancers and volumes, and the Terraform provider for it works
well.

My (admittedly small) cluster of 3x 4GB droplets, an external load balancer,
and enough volumes for logs, databases, and filesystems costs about 70
USD/month. It's been absolutely rock solid too. I have very few minor gripes
and a lot of positive things to say about it.
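For illustration, a minimal cluster definition using the DigitalOcean
Terraform provider might look like this (the name, region, version, and
droplet size are placeholder values, not from the comment above):

```hcl
# Managed k8s cluster; DigitalOcean runs the control plane for free,
# so you only pay for the droplets in the node pool (plus LBs/volumes).
resource "digitalocean_kubernetes_cluster" "example" {
  name    = "example-cluster"   # hypothetical name
  region  = "fra1"
  version = "1.16.6-do.0"       # pick a version the API currently offers

  node_pool {
    name       = "default"
    size       = "s-2vcpu-4gb"  # roughly the 4GB droplets mentioned above
    node_count = 3
  }
}
```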

~~~
gingerlime
Isn't it more limited though, e.g. with auto-scaling not available for nodes,
but only for pods?

~~~
photonios
DigitalOcean now has node auto-scaling as well [1]. It was released very
recently and was not available in the first general release.

[1] [https://www.digitalocean.com/docs/kubernetes/how-to/autoscal...](https://www.digitalocean.com/docs/kubernetes/how-to/autoscale/)

------
mcdoker18
Only Azure doesn’t charge for the k8s control plane; that is the most
surprising thing for me.

~~~
petilon
We don't know how long that's going to last. Google didn't charge for the
control plane either, until recently. So Azure couldn't charge, in order to be
competitive. Now that Google has started charging, Azure may start too.

~~~
fernandotakai
Azure also doesn't have an SLA, so that's why they don't charge.

Google started charging after they added an SLA.

~~~
dodobirdlord
Makes sense, the point of an SLA is that you agree to pay back the customer's
money if you don't meet it. What does it mean to have an SLA for a free
product?

------
almostdigital
Scaleway gives you a k8s cluster starting from 40 EUR per month

~~~
neop1x
Thanks for this! I've looked at Scaleway and so far I like it a lot, this
looks like the end of my months of suffering. It seems to fit my needs
perfectly. Finally a smaller cloud provider doing it right!

------
0x1221
I'm not familiar with the space so my question might not be that relevant -
where does OpenShift fit in all of this (I still struggle to differentiate it
from Kubernetes) and is there any merit to IBM trying to sell it so hard?

~~~
p_l
OpenShift _wraps around_ Kubernetes, with some of their own special offerings
on top of it. Generally, plain K8s is a building block - Red Hat made
OpenShift with a bunch of opinionated choices geared towards enterprise
deployments. Some of those later migrated into Kubernetes itself (OpenShift's
_Route_ inspired K8s' _Ingress_ ), and some things OpenShift cribs from the
K8s ecosystem (Istio becoming part of OpenShift by default in OpenShift 4).

Generally OpenShift heavily targets enterprises as an "all-in-one" package.
Some of that works, some doesn't, but honestly it's often more a case of the
IT dept that manages the install ;)

Except installing OpenShift. That's horrific. Someone should repent for the
install process, seriously.

~~~
Conan_Kudo
Even with OpenShift 4? I thought it was pretty nice and straightforward, to be
honest...

~~~
p_l
I have yet to touch OpenShift 4 - every environment using OpenShift that I
worked with professionally (except for some testing runs) was air-gapped from
the internet to some extent, which is not supported on OpenShift 4, and which
the customers deploying OpenShift treated as a crucial requirement.

~~~
smarterclayton
Airgapped is now available in 4.3 (although it has some rough edges that will
be addressed in 4.4).

~~~
p_l
Oh, that's good news. Just this Friday, when I first looked into the OpenShift
install, it looked like it wasn't even in the plans, so I might have hit older
docs than I intended.

That makes it more likely that $DAYJOB upgrades to OpenShift 4.x, but then, we
would rather get rid of our (intra-group) provider and their OpenShift
environments...

------
madjam002
Too bad AKS is just terrible.

Slow provisioning time, slow PVCs, slow LoadBalancer provisioning, slow node
pool management, plus non-production ready node pool implementation.

~~~
aliswe
Agreed, though not unusable.

Some more: rolling upgrades of k8s (said not to affect the uptime of the
cluster) not actually being rolling; allowing upgrades when the service
principal is expired, thus preventing the nodes from being added to the LB;
certain AKS versions not being upgradable, requiring you to recreate the
cluster from scratch...

------
showerst
Does anyone have experience with OVH's managed k8s offering? I've had good
experiences with them in the past on pricing/quality.

~~~
freedomben
I tried it out briefly and it seemed to work well. I never went to prod with
it though. I also didn't try out the LoadBalancer, so I can't say how easy
that would be to use. I've heard that costs can jump unexpectedly, so read the
docs before you get too deep into it [1].

I now have an OpenShift cluster that I do testing with, but if I didn't, I'd
probably use OVH k8s in dev because it does seem by far the cheapest.

[1] [https://docs.ovh.com/gb/en/kubernetes/using-lb/](https://docs.ovh.com/gb/en/kubernetes/using-lb/)

------
empath75
As someone who manages a production cluster, I spend about 1% of my time
worrying about the control plane. It’s trivial to get it running and keep it
running now. It’s all the stuff you build on top of k8s that’s the hard part.
I don’t see much value add to eks, personally.

~~~
spalas
Do you use something like kops for setting up and maintaining your cluster?

Most of my direct experience with Kubernetes has been on GKE, but I have been
meaning to work through [https://github.com/kelseyhightower/kubernetes-the-hard-way](https://github.com/kelseyhightower/kubernetes-the-hard-way)
to gain more appreciation for what is going on behind the scenes.

------
based2
[https://aws.amazon.com/en/blogs/aws/amazon-eks-on-aws-fargat...](https://aws.amazon.com/en/blogs/aws/amazon-eks-on-aws-fargate-now-generally-available/)

------
oroup
At the low end it’s worth considering Fargate distinct from EKS. You don’t
need to provision a whole cluster (generally 3 machines minimum) and can just
run as little as a single Pod.

~~~
petilon
I tried Fargate and found it to be crappy. It is very hard to use. It is
proprietary, so your app will not be portable, and your knowledge and
experience will not be portable either. If you use Kubernetes, there are tons
of tutorials, your app becomes portable across clouds, and your knowledge is
portable from cloud to cloud too. GKE only costs around $60 per month for a
single-machine "cluster".

~~~
thoraway1010
I use Fargate and am pretty happy with it. I don't need big scale-out - it
supports $1M/year revenue, so not huge, but I LOVE the simplicity.

I just keep the CLI commands in my Dockerfiles as comments, so once I get
things sorted locally using Docker, I update the task with some copy/paste. I
only update occasionally when I need to make some changes (locally I do a lot
more).

The one thing: I'd love to get my Docker image sizes down - they seem way too
big for what they do, but it's just easier to start with full-fat images. I
tried Alpine images and couldn't get stuff to install / compile etc.

~~~
grey
You should look into multistage Docker builds; that lets you still use a
full-fat image for your build but leave all the build tools out of your final
image.

I liked jpetazzo's post on the subject, but there are plenty to choose from:
[https://www.ardanlabs.com/blog/2020/02/docker-images-part1-r...](https://www.ardanlabs.com/blog/2020/02/docker-images-part1-reducing-image-size.html)
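A minimal multistage sketch (assuming a hypothetical Go app; the base images
and paths are illustrative, not from the post):

```dockerfile
# Build stage: full-fat image with the whole toolchain
FROM golang:1.14 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app .

# Final stage: ship only the compiled binary, none of the build tools
FROM scratch
COPY --from=build /app /app
ENTRYPOINT ["/app"]
```

Only the final stage ends up in the image you push, which is typically what
shrinks it by an order of magnitude.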

~~~
thoraway1010
Someone else suggested the same thing actually. It's easy to get lazy when it
"just works" and the internet is 1 gig at home and at the office - you can see
how the bloat just builds up.

------
spectramax
Just curious - what's wrong with buying a large instance (24 cores) and
running it for < 10,000 users? Kubernetes feels like insane complexity that
doesn't need to be taken on and managed. You're gonna spend more time managing
Kubernetes than writing _actual_ software. Also, it feels like if something
goes wrong in prod with your cluster, you're gonna need external help to get
back on your feet.

If you're not going to build the next Facebook, why would you need so much
complexity?

~~~
the_other_b

      If you're not going to build the next Facebook, why would you need so much complexity?
    

You don't. I think this is a point people have recently been trying to make.
Kubernetes makes sense at a certain scale, but for smaller startups it maybe
shouldn't be the go-to.

~~~
DelightOne
So if Kubernetes is too complex, then Terraform is a no-no too?

I don't find them complex at all. You just tell the tool the specific state
you want to be in, and it applies the necessary changes: server templates,
provisioning, orchestration, etc.
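For example, a declarative Kubernetes manifest (the names and image below are
hypothetical): you state the desired end state and the control loop makes
reality match it, rather than you scripting the steps:

```yaml
# Desired state: three replicas of this pod, always.
# Kubernetes replaces failed pods on its own; no imperative steps needed.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                 # hypothetical name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.17   # example image
```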

~~~
the_other_b
I don't think there's a comparison there (or I'm just unsure of the point
you're making with that statement). I agree, they aren't conceptually complex,
but Kubernetes is a large scheduler that _definitely_ benefits from having a
dedicated team managing it.

That being said, I always recommend using a tool like Terraform to back your
infrastructure and the like.

~~~
DelightOne
Maybe I didn't do enough with Kubernetes to need a dedicated team, hmm.

The point I wanted to make is that my opinion is a bit different: being able
to declare how state should be, instead of doing it imperatively or with
configuration management, is just something I enjoy, and I think it does not
cost much more in comparison.

That is why I wondered: why not use it as a small startup?

~~~
the_other_b
You definitely still could if you feel the maintenance is manageable. This was
just my experience :) I chose to go with something like Cloud Run.

------
spicyramen
It would be nice to include GPUs in v2 once there is more stable support for
the operators.

~~~
spalas
Yeah -- adding in GPUs, and doing a deeper dive on how using some of the low-
cost VM types (w/ small but burstable CPU, etc...) impacts both cost &
performance, are things I hope to take a look at in the future!

