
Kubernetes Is a Surprisingly Affordable Platform for Personal Projects - cdoxsey
http://www.doxsey.net/blog/kubernetes--the-surprisingly-affordable-platform-for-personal-projects
======
sklivvz1971
The list of things mentioned in the article to do and learn for a simple,
personal project with k8s is absolutely staggering, in my opinion.

Having used it, there's a sizeable amount of further work needed which the
article doesn't mention (e.g. learning how to use the pretty confusing google
interface, finding the right logs and using their tools). So the overhead is
really huge.

Furthermore, the whole system is slow. Want to run a SQL query against your
postgres? You need to use a google cloud command that changes the firewall and
SSHes you into the machine... and this takes a couple of minutes, just long
enough to make me desist unless I _really_ need to run that query. Abysmal.

Finally, and this is a pet peeve of mine against many advocacy blog posts:
they just show you the happy path! Sure, _in the best of cases_ you just edit
a file. In a more realistic case, you'll be stuck with a remote management
system which is incredibly rich but also has a really steep learning curve.
Your setup is not performant? Good luck. Need to tweak or fine-tune? Again,
best of luck.

We've tried to adopt k8s 3-4 times at work and every single time productivity
dropped significantly without having significant benefits over normal
provisioning of machines. {Edit: this does not mean k8s is bad, but rather
that we are probably not the right use case for it!}

...which in turn is usually significantly slower than building your own home
server (but that's another story!)

~~~
reacharavindh
This. Time and again. The number of people who adopt complicated stuff like
Kubernetes for what is essentially a couple of web servers and a database is
too high. They're Google wannabes who think at Google's scale but forget that
it is utterly unnecessary in their case.

I know a bio scientist who spent two months working on containers and Docker
and what not, for what are essentially independent shell scripts that are best
run as batch jobs. I spoke with him at length, and he realized at the end of
the day that what he really needed was a better understanding of standard *nix
processes, not containers...

~~~
codegladiator
I had an excellent time working with kubernetes and I am practically a
one-person company. Kubernetes frees my mind from so many things that now I
hate to work without it. A couple of those things:

\- automated ssl

\- centralized logging

\- super easy scaling up and down (going from 1 instance to 2 instances is a
headache manually)

\- centralized auth (have a service which doesn't have built-in auth?)

\- super easy recovery (containers recreated/volumes attached
automatically/apps just work, unless you have a db-type application which
shouldn't simply be restarted, which is rare)

\- Smooth CI/CD (gogs/jenkins/private docker image repository)
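
To make the first and third items concrete, here is a minimal sketch. It
assumes cert-manager is installed with a ClusterIssuer named "letsencrypt" and
that a Deployment/Service called my-app already exists; all names and the
domain are illustrative, not from my actual setup:

    
    
        kubectl apply -f - <<'EOF'
        # cert-manager sees the annotation below and provisions a
        # Let's Encrypt certificate into the named secret
        apiVersion: extensions/v1beta1
        kind: Ingress
        metadata:
          name: my-app
          annotations:
            kubernetes.io/ingress.class: nginx
            certmanager.k8s.io/cluster-issuer: letsencrypt
        spec:
          tls:
          - hosts: [www.example.com]
            secretName: my-app-tls
          rules:
          - host: www.example.com
            http:
              paths:
              - backend:
                  serviceName: my-app
                  servicePort: 80
        EOF
        # and "going from 1 instance to 2" is a one-liner:
        kubectl scale deployment my-app --replicas=2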

As for the "execute query" example, why is it such a headache? I just
`kubectl exec -it <pod> -- sh` and I am in.

> I know a bio scientist who spent

Super obscure example. Kubernetes is definitely not for non-tech people. And I
didn't pick up k8s overnight. I spent a year doing playful POCs before
deploying it in a production environment.

If the only thing that's stopping you from using k8s is the learning curve, I
suggest you go ahead and learn it. It's a huge boon.

~~~
reacharavindh
Thanks for sharing your experience.

I question myself whether it is necessary to hide my operational needs behind
a behemoth of complexity like Kubernetes. The list of conveniences you
mentioned sounds like magic you get from Kubernetes. What if there is a
problem with any of them?

Missing logs?

Inappropriate scaling?

Auth failures? or worse, failure of Auth system?

Easy recovery? what if there were failures to checkpoint/backup/snapshot
containers?

CI/CD is good regardless of whether you use Kubernetes or not.

EDIT: The question is, if you have any of these problems, why is it better to
get your head into how Kubernetes deals with those operations & tools rather
than dealing with well-defined unix tools that specialise in doing these jobs?
syslog instead of however Kubernetes gathers the logs, reading the FreeIPA
docs instead of the Kubernetes auth system logs?

My point is that to deal with all of the conveniences you mentioned, you need
to know their details anyway. Why rely on the Kubernetes abstraction if there
is no such need? (I'm not trying to be snarky. I'm genuinely curious why you
think it is a good idea. If you convince me otherwise, perhaps I would start
adopting Kubernetes as well.)

I run my cluster (I'm a sysadmin) with:

a couple of OpenBSD servers that run redundant DNS and DHCP.

a CentOS 7 box that runs FreeIPA as central auth.

an OpenBSD server that acts as a public-facing SSH server.

about 20 nodes, all provisioned using kickstart files and then configured
using Ansible. They run databases, web servers, batch jobs, Git, etc.

a single server that runs the ELK stack for log analysis.

a single server that runs Prometheus for quick aggregated monitoring.

Do you think I should switch over to Kubernetes for any benefits?

~~~
ownagefool
Cool. You have something bespoke. It works for you, but it's going to include
a lot of toil when someone replaces you.

Now personally, I'd rather kubectl get pods --all-namespaces, figure out where
the log collector is, what's wrong with it, and fix it. With your setup, I'm
probably still going to be reading your docs and trying to figure out where
these things live in the time it would have taken me to fix it on a kube
cluster.
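
Concretely, that whole triage flow is only a few commands (the namespace and
pod name here are made up for illustration):

    
    
        kubectl get pods --all-namespaces              # find the log collector
        kubectl -n logging describe pod fluentd-x7k2p  # events: why is it unhealthy?
        kubectl -n logging logs fluentd-x7k2p          # the collector's own output
        kubectl -n logging delete pod fluentd-x7k2p    # let its DaemonSet recreate it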

~~~
reacharavindh
I'm not sure I understand. Sorry, my exposure to Kubernetes is only a few days
and is limited to an overview of all of its components and a workshop by
Google.

> It works for you, but it's going to include a lot of toil when someone
> replaces you.

I was thinking that Ansible + FreeIPA (a RedHat enterprise product) + an
Elastic logging setup (ELK) + Prometheus would be easier for my successor to
deal with than figuring out my bespoke setup of Kubernetes (which keeps adding
new features every so often). Even if I did not create proper docs (I do my
best to have good docs for whatever I do), my successor would be better off
relying on RedHat's specific documentation rather than guessing what I did to
a Kubernetes version from 6 months ago...

If something breaks in FreeIPA or Unbound (DNS) or Ansible, it is much easier
to ask targeted questions on StackOverflow or look up the appropriate manuals.
They don't change as often as Kubernetes does. Don't you think?

Alternatively, if something breaks on Kubernetes, you'd have to start digging
into the Kubernetes implementation of whatever feature it is, and hope that
the main product hasn't moved on to the next release.

Is it not the case? Is Kubernetes standard enough that its manuals are RedHat
quality, and is there always a direct way of figuring out what is wrong or
what the configuration options are?

Here I was thinking that my successor would hate me if I built a bespoke
Kubernetes cluster rather than standard enterprise components such as the ones
I listed above.

~~~
ownagefool
> I'm not sure I understand. Sorry, my exposure to Kubernetes is only a few
> days and is limited to an overview of all of its components and a workshop
> by Google.

No worries. The fact you asked the question is a positive, even if we end up
agreeing to disagree.

> I was thinking that Ansible + FreeIPA (a RedHat enterprise product) + an
> Elastic logging setup (ELK) + Prometheus would be easier for my successor
> to deal with than figuring out my bespoke setup of Kubernetes (which keeps
> adding new features every so often). Even if I did not create proper docs
> (I do my best to have good docs for whatever I do), my successor would be
> better off relying on RedHat's specific documentation rather than guessing
> what I did to a Kubernetes version from 6 months ago...

So both FreeIPA and ELK would be things we would install onto a kube cluster,
which is rather what I was commenting about. When either piece of software has
issues on a kubernetes cluster, I can trivially use kubernetes to find, exec
into and repair them. I know how they run (they're kubernetes pods) and I can
see the spec of how they run based on the kubernetes manifest and Dockerfile.
I know where to look for these things, because everything runs the same way in
kubernetes. If you've used an upstream chart, as in the case of prometheus,
even better.

For things that aren't trivial we still need to learn how the software works.
All kubernetes solves is figuring out how you've hosted these things, which
can be done either well or poorly, and documented either well or poorly; with
kube, it's largely just an API object you can look at.

> Is it not the case? Is Kubernetes standard enough that its manuals are
> RedHat quality, and is there always a direct way of figuring out what is
> wrong or what the configuration options are?

Redhat sells kubernetes. They call it OpenShift. The docs are well written, in
my opinion.

Bigger picture: if you're running a kubernetes cluster, you run the
kubernetes cluster. You should be an expert in this, much the same way you
need to be an expert in chef and puppet. This isn't the useful part of the
stack; the useful part is running apps, and that is where kubernetes makes
things easier. Assembling a bespoke kubernetes yourself is a different thing.
Use a managed service if you're a small org, and a popular/standard solution
if you're building it yourself.

~~~
reacharavindh
Thanks for the patient response.

Reading through your response already showed that at least some of my
understanding of Kubernetes was wrong and that I need to look into it further.
I was assuming that Kubernetes would encompass the auth provider, logging
provider and such. Honestly, it drew a parallel to systemd in my mind, trying
to be this "I do everything" mess. The one-day workshop I attended at Google
gave me that impression, as it involved setting up an api server, ingress
controller, logging container (something to do with StackDriver that Google
had internally), and more, just to run a hello-world application. That formed
my opinion that it had more moving parts than necessary.

If there is a minimal abstraction of Kubernetes that just orchestrates the
operation of my standard components (FreeIPA, nginx, Postgres, Git, batch
compute nodes), then it is different from what I took it to be.

> if you're running a kubernetes cluster, you run the kubernetes cluster. You
> should be an expert in this, much the same way you need to be an expert in
> chef and puppet.

I think that is the key. At the end of the day, it becomes a value
proposition. If I run all my components manually, I need to babysit them in
operation. Kubernetes could take care of some of the babysitting, but the rest
of the time I need to be a Kubernetes expert to babysit Kubernetes itself. I
need to decide whether the convenience of running everything as containers
from yaml files is worth the complexity of becoming an expert at Kubernetes,
and the added moving parts (api server, etcd, etc.).

I will play with Kubernetes in my spare time on spare machines to make such a
call myself. Thanks for sharing your experience.

~~~
ownagefool
> That formed my opinion that it had more moving parts than necessary.

Ingress controllers and log shippers are pluggable. I'd say most stacks are
going to want both, so it makes sense for an MSP to provide these, but you can
roll your own. If you roll your own, you install the upstream components the
same way you run anything else on the cluster. Basically, you're dogfooding
everything once you get past the basic distributed-scheduler components.

> I think that is the key. At the end of the day, it becomes a value
> proposition. If I run all my components manually, I need to babysit them in
> operation. Kubernetes could take care of some of the babysitting, but the
> rest of the time I need to be a Kubernetes expert to babysit Kubernetes
> itself.

So it depends what you do. Most of us have an end goal of delivering some
sort of product, and thus we don't really need to run the clusters ourselves,
much the same way we don't run the underlying AWS components.

Personally, I run them and find them quite reliable, so I get babysitting at
a low cost, but I also know how to debug a cluster and how to use one as a
developer. I don't think running the clusters is for everyone, but if running
apps is what you do for a living, a solution like this helps at all levels of
the stack. Once you have the problem of running all sorts of apps for various
dev teams, a platform will really shine, and once you have the knowledge, you
probably won't go back to gluing solutions together.

------
quaunaut
I'm beginning to think that what Kubernetes needs, by far, is a better
development environment story.

1\. Something that makes it easy to run the application(s) you're working on
locally...

2\. ...but have it interact with services in a local kubernetes cluster
easily.

3\. Something that makes your entire development environment effectively
immutable, read-from-files the same way as the immutable infrastructure we're
talking so much about.

4\. Something that gives you a good start to running in production, so the
first day of bringing your service up isn't a mess of trying to find/remember
what environment variables/config options to set.

5\. Something that's one tool, instead of a mixture of `minikube`, `kubectl`,
`docker`, `draft`, and `telepresence`.

6\. Something that's opinionated, and follows "convention over configuration",
with conventions that make for the best future designs.

Basically, we need something with a Rails-like impact but in the Kubernetes
ecosystem.

~~~
sgk284
Docker ships with swarm, and swarm + a compose file does everything you just
described.

Swarm is such a pleasant and easy alternative that comes out of the box with
Docker, does 90% of what Kubernetes does, and does 100% of what anyone with
less than 50 servers needs. The learning curve is also an order of magnitude
easier.
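
For a sense of what that looks like, the whole flow is roughly the following
(stack and service names are placeholders, and the compose file is assumed to
use the version 3 format):

    
    
        docker swarm init                                # one-node swarm on this host
        docker stack deploy -c docker-compose.yml myapp  # create services from the compose file
        docker service scale myapp_web=3                 # scale one service
        docker service logs myapp_web                    # aggregated logs for that service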

~~~
nevi-me
Docker started supporting local k8s as a swarm alternative, I think from last
month's release.

~~~
spicytunacone
I'm reading that it's only for Docker EE or Docker Desktop (Mac/Windows).

[https://forums.docker.com/t/is-there-a-built-in-kubernetes-in-docker-ce-for-linux/54374/2](https://forums.docker.com/t/is-there-a-built-in-kubernetes-in-docker-ce-for-linux/54374/2)

[https://blog.docker.com/2018/07/kubernetes-is-now-available-in-docker-desktop-stable-channel/](https://blog.docker.com/2018/07/kubernetes-is-now-available-in-docker-desktop-stable-channel/)

I just installed Docker via my package manager, so I guess I never really
learned about the organization's full set of offerings and didn't know which
product I had (CE for Linux). Is there a more up-to-date/better practice for
using Docker CE with k8s than this guide dated March[0] using Minikube[1]?

[0]: [https://blog.sourcerer.io/a-kubernetes-quick-start-for-people-who-know-just-enough-about-docker-to-get-by-71c5933b4633](https://blog.sourcerer.io/a-kubernetes-quick-start-for-people-who-know-just-enough-about-docker-to-get-by-71c5933b4633)

[1]: [https://github.com/kubernetes/minikube](https://github.com/kubernetes/minikube)

------
nickjj
I don't think you're being fair with your comparisons to DO when it comes to
price. The $5/month f1-micro Google servers are SO much worse.

Those "always free" f1 micros have 600mb of RAM, do not have SSDs and
according to some tests[0] you only get 20% of 1 vCPU. There's also some
pretty severe write limitations.

That's a really big difference vs DO's 1GB of RAM, SSD and 100% of 1 vCPU.
The performance difference alone with SSD vs no SSD is massive, especially on
a database, and while most web apps aren't CPU bound, 20% of 1 vCPU is bound
to run into problems beyond the simple hello world set up in your example.

Also, what happens if your app doesn't even fit in that 600MB? On DO you
could get a 600MB app + 100MB OS + 150MB DB + whatever else running on that
$5/month server, since it has 1GB of RAM.

In a real world application I wouldn't be surprised if that cluster performs
way worse than a single DO server in the $5/month range. In some cases your
app wouldn't even be able to run.

[0]: [https://www.opsdash.com/blog/google-cloud-f1-micro.html](https://www.opsdash.com/blog/google-cloud-f1-micro.html)

~~~
minimaxir
This is part of the reason why I use a similar setup to OP's, but with a
single n1-standard-1 instance (1 vCPU/3.75 GB RAM) instead ($7.30/mo w/
preemptible). The 3-node requirement of GKE is apparently not a hard
requirement.

Of course, that's mostly for running Jobs; a single node + preemptible with a
production app is not a great idea.
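
For reference, a single-node preemptible cluster along those lines can be
created with something like this (cluster name and zone are placeholders):

    
    
        gcloud container clusters create jobs-cluster \
            --zone us-central1-a \
            --machine-type n1-standard-1 \
            --num-nodes 1 \
            --preemptible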

------
pstadler
Great article!

I'm running a full-featured three-node cluster for less than $10/month on
Hetzner Cloud. This kind of setup requires a little more effort, but it can
be fully automated, and you learn a couple of things about Kubernetes along
the way. I published and maintain a guide[1], including provisioning using
Terraform and a bunch of example manifests to get you started.

[1] [https://github.com/hobby-kube/guide](https://github.com/hobby-kube/guide)

~~~
raesene9
One thing you might want to look at, based on a quick scan of your guide, is
that at the moment it looks like you're running your etcd cluster with no
authentication.

Any attacker who was able to get on to the pod network (e.g. if there's a
vulnerability in one of the apps running on the cluster) could hit the etcd
endpoint and dump the contents of the database, which generally includes
sensitive information like k8s secrets.

~~~
pstadler
Indeed, see [https://github.com/hobby-kube/guide/issues/6](https://github.com/hobby-kube/guide/issues/6)

Edit: I think this was you.

~~~
raesene9
Ah yeah, sorry, I'd forgotten I mentioned it before.

------
module0000
>> You don't have to learn systemd; you don't have to know what runlevels are
or whether it was groupadd or addgroup; you don't have to format a disk, or
learn how to use ps, or, God help you, vim

That's a quote from the article. The thought that someone with that mindset is
responsible for anything more than a network-connected waffle iron is
_terrifying_. This article advocates for willful ignorance of Unix/Linux,
because you don't need any of those things if you know k8s.

That said, k8s is nice for some things. I admin a large deployment at work,
and it's relatively painless for developers to be productive with. One of the
reasons for that stability isn't k8s itself though - it's the legion of
seasoned Unix/Linux pros working alongside me. The reason "it just works" is
because under the covers, there are a whole lot of us doing all the boring
stuff to hundreds of physical hosts.

~~~
Spivak
I'm sympathetic to the author's argument: UNIX administration is a full-time
job on top of whatever system you're using to do deployments. You leverage
your ops team at work; is ops-as-a-service really all that different?

If one of my devs wants to deploy a rails app I don't expect them to suddenly
be a sysadmin or network engineer or infosec expert. I expect their
application to be well written but the rest of the stack is up to ops.

------
superkuh
This article, and a lot of others, seem to be using an overloaded meaning of
"personal".

All the arguments stem from the assumption that personal projects have the
same uptime, scaling, and deployment requirements as some large commercial
site. Some personal projects might, but I'd argue most of those are really
commercial in the sense that you're developing them for a portfolio or the
like. They're not personal.

It makes sense to use business tools like kubernetes when you're making a
demo business site. That doesn't make it a good platform for personal
projects.

The best platform for those is hosting from home on a single computer, for
free, with total control. ie, personal.

~~~
daurnimator
I disagree. Even for personal projects, I want to be able to access them
from my phone while out and about. Something running on my home computer
(which is turned off when I'm not there!) doesn't help. Never mind that my
home internet isn't fast or reliable.

If you include running my own blog/mail/XMPP/matrix servers in "personal
projects", then I would love to have perfect uptime.

Additionally, being able to reuse high quality helm charts here just makes
things simple instead of having one set of instructions for "personal" use and
one set of instructions for everyone else.
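
For example, reusing an upstream chart is a one-liner with Helm 2 (the chart
and release name here are just an illustration):

    
    
        helm install stable/wordpress --name my-blog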

~~~
wafflesindeed
Why would you want to be able to access your personal projects from your
phone? What context requires this over waiting and solving the problem at
home?

What about work life balance? (yes, it's a personal project, but still)

~~~
jamessb
A personal project might be something that is useful in your day-to-day life
(e.g., a notes app, todo list, or calendar), or something that you want to be
able to show to friends/family when you see them in person (e.g. a photo
album, or something related to a mutual hobby).

------
rbanffy
It really depends on your personal project.

Kubernetes is a complicated, complex beast. Even on managed K8s on a cloud
provider, you still need to learn a lot of stuff that has absolutely nothing
to do with what you intend to accomplish (unless your personal project _is_
learning to use K8s).

Most of my personal projects get deployed on Google's App Engine. It's easy,
simple and unbelievably cheap.

There was a fantastic presentation at the last IO about riding as much as you
can on free tiers:

[https://www.youtube.com/watch?v=N2OG1w6bGFo](https://www.youtube.com/watch?v=N2OG1w6bGFo)

------
erikb
Sure, if you put in enough effort you can make everything usable and cheap.
But why put the effort into that instead of investing it in building a system
that naturally fulfills such requirements?

Kubernetes was never intended as a system for hobbyists, but as a way to
bring more users onto Google Cloud. At that, it is incredibly successful. But
in exchange, it does a lot of other stuff really badly.

It is much better to start with a single-node system in mind that expects to
get more nodes at some point, and to develop from there. That would bring in
a lot of hobbyists and really carve out how to solve the developer story.
Then it would grow automatically with the size of the popular projects
building on it.

That said, of course it's nice if people invest energy into finding other use
cases for k8s as well. But it shouldn't be the community's main effort.

------
07d046
It looks like it isn't surprisingly affordable on Amazon. If I'm reading this
right, EKS is 20c an hour per cluster, or about $150/month, and that's before
EC2 costs for the actual machines in the cluster.

[https://aws.amazon.com/eks/pricing/](https://aws.amazon.com/eks/pricing/)

~~~
throwaway2016a
You can run Kubernetes on AWS without EKS, and if you use t2.micro instances
it would be around $28 per month with the load balancing technique used here
(DNS-based rather than an actual LB), or $21 a month if you account for the
free tier (and this article uses Google's free tier).
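
To make that concrete, the usual pre-EKS route is kops; a sketch (the cluster
name, state bucket and zone are placeholders):

    
    
        kops create cluster \
            --name k8s.example.com \
            --state s3://my-kops-state \
            --zones us-east-1a \
            --master-size t2.micro \
            --node-size t2.micro \
            --node-count 2 \
            --yes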

If you further use spot instances (the equivalent here to "preemptible") you
can save a ton more.

The real savings (of $7 per month) is that on GKE you won't need to run your
own control plane. Google does that for you for free.

Edit: another issue here (and I have no idea if this is true for GCE like it
is for AWS): t2 instances will throttle you if you use too much CPU, which
can be an issue when running a lot of containers.

~~~
tannhaeuser
Sure, but the context is personal projects or portfolio/small-business sites.
A VPS at a discount provider such as Hetzner or OVH costs around 3-4 USD/EUR
per month. A redundant VPS (two VPSes and Ceph) costs 10 EUR at OVH. Not to
speak of the complexity of setting up and running k8s. You'll also need a
second k8s development/testing environment for staging. So, no, I don't think
k8s makes sense economically.

------
maxhallinan
I tried out Kubernetes (managed with kops) for a personal project. I had a
single websocket endpoint and wanted to terminate SSL at the load balancer. I
found this to be extremely complicated, and the troubleshooting resulted in a
~$100 AWS bill.

------
tln
The k8s system containers are quite resource hungry. The author is clearly
very good at k8s (I enjoyed the article), cos running it cheap ain't easy.
------
heipei
I'm surprised that in all of these posts about k8s on HN, the Nomad project
(by Hashicorp) rarely comes up. I've found it an absolute joy (yeah, I said
it) to use and a breeze to set up. Unless you're setting up a container
orchestration system where you want to customise every single aspect while
you're setting it up, my guess is you're better off with Nomad at first. At
the very least you'll learn what you need from a container scheduler and can
make a more informed decision if you re-evaluate your options down the line.
Nomad also integrates perfectly with Consul and Vault, so you're never left
wondering "is this how I'm supposed to do it?", which is something that
happens if you're just starting out.

URL: [https://www.nomadproject.io/](https://www.nomadproject.io/)

~~~
shaklee3
Check out the hashicorp blog over the last three weeks. Consul integrates
tightly with kubernetes now. Vault already did.

~~~
josegonzalez
Consul only integrated "tightly" with kubernetes in the past few weeks. The
vault integration is pretty barebones, and anything done there has been
spearheaded by the Kubernetes community.

------
erulabs
I had a project about this size on GKE, in an attempt to get a feel for
Google's hosted Kubernetes product. I went on vacation for two weeks while a
container was printing an error (unbeknownst to me). Google wanted to charge
me ~700 dollars for StackDriver logging volume (a feature I was not aware was
connected to my Kube pod stdout).

Google Cloud refused to refund or provide any guidance on how to entirely
disable / purge StackDriver, so I'm back on AWS and won't be recommending
anyone move to GKE any time soon (it's extremely easy for your bill to
suddenly grow by more than 100x...).

With my hand-built Kube on AWS I have 15GB assigned for my ELK logging
cluster, and it costs me nothing because it falls under the free tier.
Looking forward to trying out Digital Ocean's product next!

~~~
trumpeta
To be fair, if you have CloudWatch and send logs there, it can get pretty
expensive too.

------
aksels
In my experience it's quite expensive, at least on GKE, as Kubernetes takes a
lot of CPU & memory by itself. I run a pool of one n1-highcpu-2 node (2
vCPUs, 1.8 GB memory) and it costs me about 72 euros per month.

kube-system is taking about 730 mCPU of the node's 2000 mCPU.

The issue is that kube-system pods are deployed on each node of the pool
(tell me if I misunderstood/misconfigured something). If you have a pool of 3
nodes which have 1 vCPU each, with kube-system taking approximately 700 mCPU
on each, you have only 300 * 3 = 900 mCPU allocatable.

If you have any tips on how I could reduce the costs of my personal projects,
I'm listening!

~~~
puzzle
I just saw a change the other day which makes system pods fit better on nodes
with one vCPU. It reduced their resource requests by the same factor (i.e.
keeping the same proportions). It was meant for GCE and I am not sure if/when
it will make it to GKE, but it looks like Google is looking at the tiny-node
scenario. Also, Tim Hockin led a discussion about something similar at last
year's Kubecon.

~~~
puzzle
Here it is. It's in 1.12, actually, not on HEAD as I recalled:

[https://github.com/kubernetes/kubernetes/pull/67504](https://github.com/kubernetes/kubernetes/pull/67504)

Still not sure if these are the configuration files that GKE uses behind the
scenes, but the author is a Google employee.

------
gizzlon
Or you can push your webapp to App Engine standard and it's $0 per month =)
App Engine requires you to learn some new stuff, but so does k8s.

Not the right choice for all apps, obviously, but something to compare this
setup to.

------
GordonS
I need to host a web API and a Postgres database in a high-availability
configuration. I only need a single instance each of the web API and the
database, but uptime is crucial.

I'd like to use containers, partially because it gives an easily reproducible
environment for running locally.

For such a small setup, what's the recommended way of handling this, such that
I get automatic failover and can easily do zero-downtime updates?

I did look a bit into k8s, but quickly came to the conclusion that it's far
too complex for this.

Oh, I'm also constrained to using the Azure cloud, if it matters.

------
nathan_f77
No-one has mentioned Convox yet: [https://convox.com](https://convox.com)

Kubernetes is great, but Convox is way easier to set up. It's much closer to
having Heroku in your own AWS account. It also leverages a lot of AWS services
(ECS, RDS, etc.) instead of re-inventing the wheel. The only downside is that
it only supports AWS.

I'd recommend Convox for personal projects, but it can be a bit expensive
because you have to run a minimum of 3 EC2 instances.

~~~
bdcravens
In the past I've had similar success with Cloud 66.

------
cdelsolar
I set up a single-node cluster for a personal project on DO and it took me
some time, but it's up and running and has been fine for over a year. I used
a decently beefy instance ($20/month on DO) because the Kubernetes processes
are actually too resource-intensive for the $5 instance. It runs several
processes (the main Python app and a helper Go-based API), automatic SSL
renewal, a daily cleanup CronJob, etc.

I really like being able to push a completely new container whenever I push to
master - this is done with CircleCI, which builds the image, pushes to Docker
Hub, and just does the kubectl apply. I couldn't think of any other way to do
it that wouldn't result in some downtime if I had to change or upgrade
packages.
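
The CI steps amount to roughly this (image and deployment names are
placeholders; `kubectl set image` is one way to trigger the rolling update,
applying the updated manifest is another):

    
    
        docker build -t myuser/myapp:$CIRCLE_SHA1 .   # CIRCLE_SHA1 is set by CircleCI
        docker push myuser/myapp:$CIRCLE_SHA1
        kubectl set image deployment/myapp myapp=myuser/myapp:$CIRCLE_SHA1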

I wrote about it here a bit:
[https://hackernoon.com/lessons-learned-from-moving-my-side-project-to-kubernetes-c28161a16c69](https://hackernoon.com/lessons-learned-from-moving-my-side-project-to-kubernetes-c28161a16c69)

~~~
Rotareti
_> I couldn't think of any other way to do it that wouldn't result in some
downtime if I had to change or upgrade packages._

[https://github.com/helm/helm](https://github.com/helm/helm) ?

You push the new version of your app as a helm chart to your helm repo and
then run:

    
    
        helm upgrade my-app my-repo/my-app   # helm 2: release name, then the chart to upgrade to

~~~
cdelsolar
No - I meant outside of Kubernetes, in the past (sorry, the wording in my
post was a bit confusing). My app used to just be something I would ssh into
and then do a git pull, but whenever I had to upgrade packages it would be
tricky. Then I tried Docker with docker-compose, and I had similar issues
with environment variable changes. Kubernetes is great; all I do is kubectl
apply -f new_config.yml and everything just works without downtime. I only
very recently learned about Helm, when setting up automatic SSL renewal, but
it looks cool too.

In general I really don't think k8s is the monster people make it out to be. I
did all this on the side over the course of a few weeks, a few hours here and
there. I know I'm only scratching the surface of what is possible, but
everything just works fine, and my .yaml files are pretty self-explanatory.

------
ratiolat
I'm trying to set up a deployment pipeline, quite from scratch. That also
includes picking source repository software. Self-hosted is a must for every
component, so I chose GitLab and then dived into it. Oh boy, was I in for a
surprise. For CI/CD one needs docker, kubernetes and whatnot:
[https://docs.gitlab.com/ee/topics/autodevops/](https://docs.gitlab.com/ee/topics/autodevops/)
But the alternatives don't look better either.

So, fellow HN'ers, my requirements ATM are quite simple: submit code via git,
run automated tests, and if all is good, put the files onto the live system.
Pull requests are a must, and everything must be self-hosted because cloud is
just another person's computer. The code is written in Python. What would you
recommend? Edit: grammar.

~~~
jl-gitlab
Hey there, I'm the product manager for CI/CD at GitLab. AutoDevOps is one of
our more advanced features to get up and running in a k8s environment quickly,
but is not required if you just want basic CI.

Our documentation on getting started with CI/CD is here:
[https://docs.gitlab.com/ee/ci/](https://docs.gitlab.com/ee/ci/)

~~~
ratiolat
Thank you! I will look into that. OTOH, this is how I came to the conclusion
that I must go the AutoDevOps way: this is my first installation of GitLab
ever. So naturally, after somehow successfully installing it on the 2nd try,
everything seems to work. So - let's explore configuration options. For
example, we should disable registration since, well, we don't want to be a
public GitLab host. After a while, after the Pages section, we come to a
section named "Continuous Integration and Deployment". And there's a checkbox
named "Default to Auto DevOps pipeline for all projects". And there does not
seem to be any other kind of "Continuous Integration and Deployment"
available, so one naturally comes to the conclusion that the only way seems
to be Auto DevOps. My bad.

~~~
dgruesso
Hi ratiolat, another GitLab PM here. Thanks for the feedback; as jl-gitlab
mentioned, we'll work to make that clearer. You can think of Auto DevOps as a
CI template that has a job for every devops stage built in. If you want to
experiment with kubernetes first, you can easily add a kubernetes cluster
from GitLab and experiment with CI with or without Auto DevOps. More info
here:
[https://docs.gitlab.com/ee/user/project/clusters/](https://docs.gitlab.com/ee/user/project/clusters/)

------
pawurb
Dokku is a viable infrastructure solution for side projects, even if you
don't have much devops experience:
[https://pawelurbanek.com/rails-heroku-dokku-migration](https://pawelurbanek.com/rails-heroku-dokku-migration)

------
snaky
> Kubernetes, right now, supports some workloads really really well, like
> stateless workloads that are CPU intensive that can be easily scaled
> horizontally, with easy to understand patterns that we currently put behind
> load balancers.

> Other workloads, Kubernetes just doesn’t support well and likely will not
> support well in the near future because it’s really hard to do. It’s not
> magic, as some of the hype surrounding it would have you believe.

[https://gravitational.com/blog/running-postgresql-on-kubernetes/](https://gravitational.com/blog/running-postgresql-on-kubernetes/)

------
chuckdries
I prefer simple tools.

\- Heroku is cheap for personal projects.

\- A single Digital Ocean droplet where you manage everything yourself is
cheap for personal projects.

\- If you're working in node, Glitch.com is literally free node hosting.

------
Tehchops
I feel like Kubernetes solves a lot of problems.

I don't feel like any of them have to do with personal hosting/blogs.

If you really want to go simple/cheap for that, why not eschew compute
entirely and go serverless/static?

~~~
nkassis
I tend to agree with this. Keeping things as simple as possible to start is
ideal.

Also, out of the box docker offers a lot of functionality for single-machine
and development environments that might be good enough. I've seen kubernetes
make more sense when you need to grow to multiple machines and are trying to
optimize resource utilization.

------
deckarep
When picking preemptible machines for this demo project, is the intention
simply to save money on a pet project?

I guess my main question is: would anybody actually use preemptible instances
for k8s, knowing that if a worker node gets reaped, k8s will magically route
only to healthy nodes while the preempted instance's replacement is standing
up?

~~~
tmaier
Sure!

First of all, GKE would create new nodes when one of them gets killed.

Second, building "cloud-native" applications means building stateless
applications, e.g. using 12-factor. So it does not really matter if your
application gets killed, restarted or horizontally scaled.

For databases, I always recommend using a PaaS offering to have an easy and
worry-free life.

~~~
donmcronald
What if all 3 nodes get killed at the same time? Can you use the always free
instance + 2 preemptible instances?

~~~
DuskStar
You can set GKE up to use multiple groups of nodes, yes. In this case you'd
want to configure one group with a single non-preemptible micro node, and a
second with two preemptible micro nodes. It's not something I'd really worry
about for a hobby project though.
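
A sketch of that layout with gcloud (cluster and pool names are placeholders):

    
    
        gcloud container node-pools create stable \
            --cluster my-cluster --machine-type f1-micro --num-nodes 1
        gcloud container node-pools create burst \
            --cluster my-cluster --machine-type f1-micro --num-nodes 2 --preemptible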

------
leandot
For personal projects I am using multi-node Hetzner instances managed with a
Terraform script heavily inspired by
[https://github.com/hobby-kube/guide](https://github.com/hobby-kube/guide)
\+ RKE for the K8S setup, and it works really well.

~~~
hobofan
For personal projects, I am using a single dedicated server at Hetzner with
Docker & HAProxy. It's beefy enough to handle as many personal projects as I
have time to dedicate to them.

I've tried to replace my Docker setup with Kubernetes multiple times, but it
was always too much of a hassle. Things that were easy with a "bare" Docker
setup became hard, and single-node clusters always felt like second-class
citizens.

~~~
ngrilly
I do almost the same (Docker and nginx as a reverse proxy on Google Compute
Engine or Digital Ocean virtual machines).

But how do you manage deployments of new versions without downtime?

I use Docker Compose, but I had to complement it with some scripts to manage
the deployment (start the new containers, wait for the new containers to be
ready, switch traffic to the new containers, shut down the old containers),
and it's a bit messy and non-standard. I'm thinking of using Docker in swarm
mode, which offers the concept of services with automated deployments.
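
For comparison, the hand-rolled version of that dance looks something like
this (container names, ports and the health endpoint are made up, and the
proxy switch is elided because it depends on the setup):

    
    
        docker run -d --name app-green -p 8081:8080 myapp:new
        until curl -fs http://localhost:8081/health; do sleep 1; done  # wait until ready
        # ...repoint the reverse proxy upstream from app-blue to app-green here...
        docker stop app-blue && docker rm app-blue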

~~~
leandot
I would not recommend Docker Swarm; it's way too simplistic, and my feeling
is that it lags so far behind k8s that it might get discontinued. I've done
Ansible, Docker Swarm, and custom scripts, and nothing is as clean as using
standard K8S components. Plus, it gives you a good skill for your resume.

~~~
GordonS
Is Docker Swarm really too simple for personal projects?

~~~
leandot
Depends on what you call a personal project; it might even be overkill for
some. My problem with it is that it does not seem to be as actively developed
or to be gaining as much traction as the alternatives, and support for it
might get removed in the future. That is only an assumption though; I've used
it before, and it is much easier to start with than k8s.

For really simple cases, what worked well was to use docker-machine to
provision a remote host. When developing I'd use the local docker; for
production I'd switch the active docker to the remote host via docker-machine
and just use docker-compose to bring a container up and down.
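
That flow, roughly (the host name and driver are placeholders, and the
driver's credential flags are omitted):

    
    
        docker-machine create --driver digitalocean prod  # provision the remote host
        eval "$(docker-machine env prod)"                 # point the local docker CLI at it
        docker-compose up -d                              # containers now run on the remote host
        eval "$(docker-machine env -u)"                   # back to the local docker daemon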

------
bcoughlan
I know a mid-sized company that has a team of 3 people dedicated to managing
their Kubernetes clusters. For a small company that can't afford that
overhead, it's an extra layer of risk and complexity.

Amazon EKS launched last month in the EU west region, which I was excited
about. But each cluster is $150 per month (20c per hour) before you even add
any servers. To have dev, qa and production clusters we're looking at $450
per month before the cost of the EC2 instances to actually run the
application. Our EC2 costs for backend services are about $800 per month, so
it instantly negates the selling point that we can reduce costs by using
resources more efficiently.

Our current deployment setup is groups of Docker services deployed to a set of
EC2 instances via Docker Compose with a load balancer in front of each server
group. Having isolated server groups means that outages are isolated to
particular areas. I don't know how long we'd be down if there was an issue
with the Kubernetes cluster. If we introduced Kubernetes it would be left in
the hands of a couple of developers to manage, which is the opposite of the
DevOps culture we strive for.

We stay cloud-agnostic, so it's easy for a developer to spin up a stack of
dependent services with Docker Compose (Nginx, Redis, RabbitMQ etc.) when
working on changes to a service. For a developer to run services locally with
Kubernetes, they need to set up Minikube. Many corporate environments are
tightly integrated with Windows, and our developers run Linux in a VM. I
experimented with getting Minikube to run in a VM but found it hard to get
going with the vm-driver=none option, and it's something I definitely didn't
want to add to our onboarding process.

Kubernetes has a serious hype train at the moment, reminiscent of the MongoDB
days. My inner conspiracy theorist says that this is because Google has a
vested interest in preventing vendor lock-in on cloud providers. It solves a
Google-sized problem and introduces some complexity with the promise of
removing more complexity than it adds. For small to mid-sized companies,
however, it adds far more complexity than it removes.

~~~
StavrosK
To be fair, you don't need three clusters; you only need one, and you can
namespace things.

~~~
bcoughlan
Maybe I'm oldschool but I'd be quite wary about mixing anything in production,
qa and dev environments.

~~~
matwood
That's not old school, it's pragmatic. For example, if all your environments
are namespaces in the same cluster, how do you test cluster changes before
running them on prod?

~~~
geerlingguy
Also, how do you test K8s cluster upgrades, cluster instance type changes,
etc.? These things aren't just magic and could definitely impact production
systems, so you have to have at least one non-prod cluster running.

------
sheepz
Looking to try out Kubernetes; will definitely give this a try.

Although the article starts with kind of a strawman (the "Kubernetes is
robust" section): you could do most of those things with Docker (and
compose), or with a non-containerized setup altogether.

------
bwm
Why not just outsource hosting to Heroku or something similar? Unless you're
building something with extremely exotic requirements, it seems like you're
just wasting time trying to build out your own infra.

------
dekhn
I really wanted k8s to be a nice replacement for everything I could enjoy
about borg.

After using it for a while, I found it far more cumbersome to work with, and
the architecture to be confusing.

I'm quite comfortable with complicated environments; typically all I really
want is a small cluster running my long-running infrastructure, with
expansion to spot-instance worker nodes for burst capacity. Just setting that
up in k8s, keeping it up to date, and monitoring the various components that
fail because you configured just-not-enough RAM for them became tiresome.

------
nicolaslem
Having worked with Mesos, Nomad and an in-house container scheduler, I would
not pick any of them for personal projects. I've not tried Kubernetes yet but
I don't think it would make me change my mind.

I recently went through the infrastructure that runs my personal projects
(Django, PostgreSQL, Redis, a few static sites...). I decided to go with plain
Docker + Ansible.

These tools give me enough automation to easily scale up or change hosting
provider if I need to, without introducing too much magic or complexity.

~~~
ngrilly
I do the same (Docker + Ansible). I also use Docker Compose.

How do you deploy new versions of your apps without downtime (start the new
containers, wait for them to be ready, switch traffic to the new containers,
shut down the old containers)? I'm using custom Ansible scripts coupled with
docker-compose, but it's a bit messy and non-standard.

~~~
nicolaslem
To update an app container to a new version I just run Ansible again:

    
    
        - docker_container:
            name: bar              # the module requires a container name; "bar" is illustrative
            image: foo/bar:latest
            state: started
            restart_policy: always
            pull: yes
    

This pulls the new version and restarts the container. The downtime is
probably a second at most. It's small enough that it is not worth the trouble
of a rolling release or whatnot. It's a personal project after all; no one
will complain about getting a 502 once a month :)

I do not use docker-compose, as I never felt the need to, and I have read on
multiple occasions that it's meant for development only.

~~~
leandot
Many services do not tolerate any downtime. Imagine you are processing money
and you kill the process in the middle; it can be quite a pain to recover.

~~~
nicolaslem
The article and my comments are about personal side projects, not about
running a business. For me this means cutting down on complexity: 80% of the
features for 20% of the work.

Moreover, stopping a container does not mean killing it; it sends a SIGTERM
that the application can catch to terminate gracefully.

I would also argue that applications processing money must be able to
tolerate unexpected failures, because Kubernetes or not, servers go down,
processes get killed, and disks get corrupted.

~~~
ngrilly
You may have side projects which are for a small business or a non-profit
and yet require being able to sustain multiple requests per second on a
single machine. In such a setup, zero-downtime deployments are necessary.

Some (most) applications are able to terminate gracefully, but that doesn't
solve the issue. You still need some mechanism to start the new version, wait
for it to be ready, switch new traffic to the new version, and then
gracefully shut down the old version.

------
humbleMouse
I'm going to say it again, just like in every kubernetes thread.

People should check out OpenShift. It's built by RedHat and is an abstraction
layer above k8s. I love it.

------
parsadotsh
While I don't have experience with large production systems, for hobby
projects I found Nomad to be much easier to start with. Even the k8s distros
that were supposedly "minimal" took a huge amount of time to set up in a
reproducible manner. Plus, Nomad's fantastic integration with Consul makes it
more than just a scheduler and a more complete alternative to Kubernetes.

------
djhworld
I looked at Kubernetes for personal stuff but the overhead was too much for
something I'd rather just set and forget.

My current solution is to just run docker on some Raspberry Pis and have
scripts to replace the container when I do an update, with volumes regularly
backed up to S3.

Works fine, although you don't get the clustering capability (I might try
docker swarm one day but for now it's fine)

------
topher200
This seems awesome. I love the automated way to update Cloudflare DNS
addresses via a trigger from the Kubernetes informer API.

~~~
donmcronald
Yeah, that's pretty cool. I don't know much about K8S, but it looks like that
might be a good strategy to build your own API gateway, if
`crystal-www-example.default.svc.cluster.local:8080` is a portable URI. I've
never used them, but I've always thought the API gateway products seem like a
way you could get locked into a specific platform.

------
stephanlindauer
We, at rebuy, built this little thing to run kubernetes on just one node:
[https://github.com/rebuy-de/terraform-aws-coreos-k8s-single-node](https://github.com/rebuy-de/terraform-aws-coreos-k8s-single-node)
My colleague and I mainly use it to run our private hobby projects.

~~~
StavrosK
I freaking love Dokku for this setup. All the ease of use with none of the
headache.

------
betoh5
For those who want to try this on GKE: don't forget to open port 80 in your
node firewall.

[https://kubernetes.io/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/](https://kubernetes.io/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/)
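
Something along these lines, with a placeholder target tag (check your node's
actual network tags first):

    
    
        gcloud compute firewall-rules create allow-http \
            --allow tcp:80 \
            --target-tags my-gke-node-tag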

------
lostmsu
Why do you even need Kubernetes when you can get free hosting from Azure
(AppServices?) or Google (Cloud Engine?)?

~~~
thrower123
Azure App Services are hard to beat. One-click deployments from Visual
Studio, or it takes about five minutes to set up continuous deployment from
git if you use a free private VSTS repo. If you want to use the tier that
allows a real domain instead of the azurewebsites domain, it costs about
$10/month.

Kubernetes seems like a lot of unnecessary complexity for small workloads.
Kind of plinking tin cans with a nuclear bazooka. What is the break-even point
where all of this stuff starts to really be worth bothering with?

~~~
sp527
> What is the break-even point where all of this stuff starts to really be
> worth bothering with?

Series B

------
sngz
"Out of college I spent 5 years working in the Windows ecosystem. I can tell
you my first job at a startup using linux was not an easy transition."

I'm more interested in how he managed to make that transition. I've been
struggling with that part myself when competing with other candidates who
have years of linux experience.

------
jhabdas
That guy's contention is written directly above a website with a copyright
dated 2017, and he personally admitted he couldn't remember how to deploy his
own site. Is it possible he couldn't do it because it was too complex? My
contention is that the answer to that question is yes.

~~~
cdoxsey
If you're curious: I used Google App Engine, and Google recently changed the
build tools for Go on App Engine. I have a lot of Go source code examples on
my blog which aren't intended to be compiled by App Engine, but with the
recent changes I couldn't get the site to deploy. After a couple of hours I
decided I was fed up and tried k8s.

It's a bit silly, but sadly that's often how software goes, at least in my
experience.

(Also, strictly speaking, I couldn't do a static site because there are some
other things I run on here that are more than just html.)

I fixed the copyright date :)

------
ngrilly
GKE documentation says 25% of the first 4 GB of memory are reserved for GKE:

[https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-architecture#memory_cpu](https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-architecture#memory_cpu)

Is that what you observe?

------
ksajadi
In my experience, having a reliable, configurable and flexible enough
pipeline to generate the needed configuration files for Kubernetes makes a
lot of difference to adoption. Tools like Helm or Cloud 66 Skycap are some
examples, but I'm sure there are more.

------
paulcarroty
Well, let's talk about the bad side:

1) Google Cloud - hello, privacy and support issues.

2) Traditional enterprise-style overcomplicated configs and architecture -
the dirty trick of "we want your sunk time as a justification to stay here".

3) Forget about quick deployment and scaling.

------
vlucas
The issue is not cost. The issue is conceptual overhead and sheer complexity.

------
Can_K
I have not yet hosted anything in a container/cloud, so maybe this is a
stupid question: how do you keep your system up to date?

What am I required to do in order to receive new operating system and
software versions?

~~~
minimaxir
A hosted provider like GKE will update the OS of the master/nodes for you
(although the OS itself doesn't do much; Google uses a container-optimized OS
that only handles containers and has a much smaller footprint).

------
edpichler
My personal conclusion from using Kubernetes for personal projects: I think
it's very good when you have lots of deployments, iterate fast, and have a
CI/CD environment. For this, K8s is excellent.

------
IshKebab
This is so over-complicated for a personal project.

------
rb808
This is great. I always thought k8s was perfect for personal projects but too
expensive; I'm going to have to double down now.

------
geku
At [https://www.kubebox.com](https://www.kubebox.com) we are working on
affordable Kubernetes clusters for developers and personal usage. It's not as
cheap as the mentioned solution, but it also comes with more memory and CPU
power: around $36 for 8GB, and maybe less for just a 4GB node (not yet
decided). The master is fully managed and hosted by us. What do you think?

------
xaduha
What kind of personal projects need a cluster?

I'm pretty sure docker-compose + swarm can satisfy 99 percent of your needs.

------
alexnewman
I've never met someone who didn't love k8s, unless they already knew how to
run machines.

------
TekMol
What does it give me over a bare VM?

~~~
movedx
Are you serious right now?

~~~
TekMol
Yes

~~~
movedx
(can't edit my original comment at this point)

Didn't think so.

------
nqzero
great post - thank you for convincing me not to spend another moment
considering using k8s ! /s

------
mschaef
> You don't have to learn systemd; you don't have to know what runlevels are
> or whether it was groupadd or addgroup; you don't have to format a disk, or
> learn how to use ps, or, God help you, vim. All this stuff is important and
> useful and none of it is going away. I have a great deal of respect for
> sysadmins who can code-golf their way around a unix environment. But
> wouldn't it be great if developers could productively provision
> infrastructure without having to know all of this?

No.

At least for small applications, Unix already has functional, established, and
documented mechanisms for the vast majority of how they need to be managed.
(Windows does too.) At the end of the day, I think people would be far better
off learning and using those, until there is a specific documented need for
some feature unique to some more powerful platform. Otherwise, you're
continually paying a complexity cost for features you might never use.

Going back to the author's original list of 7 questions, I run a few personal
apps in AWS, and here's how I answered:

> How do you deploy your application? Just rsync it to the server?

I scp a JAR file to a staging directory and run an install script. The install
script is pretty much standard across the handful of apps I run.

> What about dependencies? If you use python or ruby you're going to have to
> install them on the server. Do you intend to just run the commands manually?

The JAR file is an uberjar, so the dependencies are included.

> How are you going to run the application? Will you simply start the binary
> in the background and nohup it? That's probably not great, so if you go the
> service route, do you need to learn systemd?

I learned systemd. It gives me easy service management and a standardized way
to ensure the process comes up at server startup. The service script is pretty
much standard across the handful of apps I run.
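
The kind of standard unit file described here is short; a hedged sketch, with
made-up paths and names:

    
    
        cat > /etc/systemd/system/myapp.service <<'EOF'
        [Unit]
        Description=My app
        After=network.target
        
        [Service]
        ExecStart=/usr/bin/java -jar /opt/myapp/myapp.jar
        Restart=on-failure
        
        [Install]
        WantedBy=multi-user.target
        EOF
        systemctl daemon-reload && systemctl enable --now myapp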

> How will you handle running multiple applications with different domain
> names or http paths? (you'll probably need to setup haproxy or nginx)

I installed nginx. Took about 30 minutes.

> Suppose you update your application. How do you rollout the change? Stop the
> service, deploy the code, restart the service?

Pretty much.

> How do you avoid downtime?

I don't. Many small apps can tolerate small downtime windows, including the
ones I run, so I don't need to worry about continual operation. If I did have
to worry about that, I'd have additional things to worry about in my
application design.

> What if you screw up the deployment? Any way to rollback? (Symlink a
> folder...? this simple script isn't sounding so simple anymore)

Select the previous jar file, which I will have retained. Schema migrations
complicate rollback, so I keep them to a minimum, test a bunch locally, and
make sure I have backups of the database taken immediately before a
deployment. (So that I can roll back if absolutely necessary.)

> Does your application use other services like redis? How do you configure
> those services?

Persistence is handled through an embedded SQL database and a library I've
written to manage schema migrations. For nginx and git, I have source code
snapshots of the configuration, but that's only needed if I rebuild the
server. (Once in over five years.) Generally, for small apps, I try to keep
the dependency footprint as small as possible. (Which is pretty much the
entire gist of this comment, now that I think about it.)

------
danieldk
I use NixOS, which allows you to define machines declaratively. To the points
that he raises:

 _How do you deploy your application? Just rsync it to the server?_

Define an overlay for your application. For a typical application, this is
just a few lines of code with _stdenv_.

 _What about dependencies? If you use python or ruby you're going to have to
install them on the server. Do you intend to just run the commands manually?_

If you have an overlay for your applications, you can specify dependencies in
the derivation of your application. Even though these are not web apps (I
primarily work on local applications), I have derivations for some of my
applications, so that I can install them on every machine with little effort.
Two examples:

[https://github.com/danieldk/nix-home/blob/master/overlays/30-alpinocorpus.nix](https://github.com/danieldk/nix-home/blob/master/overlays/30-alpinocorpus.nix)

[https://github.com/danieldk/nix-home/blob/master/overlays/20-finalfrontier.nix](https://github.com/danieldk/nix-home/blob/master/overlays/20-finalfrontier.nix)

If I want to develop one of the applications, say _alpinocorpus_, I can just
drop into a shell with all the dependencies (cmake, boost, Berkeley DB XML,
etc.) available:

    
    
        nix-shell '<nixpkgs>' -A alpinocorpus

When you exit the shell, you are again in your usual environment. Another
option is to create a default.nix file in a project directory and use direnv
to automatically load the environment when you switch to a directory.

 _How are you going to run the application? Will you simply start the binary
in the background and nohup it?_

You can define (systemd) services in a NixOS configuration.

 _How will you handle running multiple applications with different domain
names or http paths? (you'll probably need to setup haproxy or nginx)_

NixOS has support for defining nginx virtual hosts. You can also enable Let's
Encrypt and Nix will set up systemd timers to automatically renew the
certificates. E.g. here is a configuration of one of my machines with a bunch
of virtual hosts with Let's Encrypt:

[https://github.com/danieldk/nixos-config/blob/master/machines/castle.nix](https://github.com/danieldk/nixos-config/blob/master/machines/castle.nix)

 _Suppose you update your application. How do you rollout the change? Stop the
service, deploy the code, restart the service? How do you avoid downtime?_

    
    
        nixos-rebuild switch --upgrade

Or with nixops you can deploy and update machines remotely.

 _What if you screw up the deployment? Any way to rollback? (Symlink a
folder...? this simple script isn't sounding so simple anymore)_

Yes, every change creates a new generation. You can always switch into or boot
into an older generation as long as it isn't garbage collected. And this works
for the whole system, including kernel updates, etc.

 _Does your application use other services like redis? How do you configure
those services?_

Declaratively ;). Adding _services.redis.enable = true;_ to your NixOS
configuration does the trick.

As a bonus, with nixops you can deploy a machine configuration on real
hardware, a VPS, VirtualBox, KVM, etc.

------
fridgamarator
+1 for using Crystal

------
rconti
"Kubernetes is free"

FTFY

------
nova22033
How are there only 2-3 references to minikube in the comments?

------
Draiken
People keep saying k8s is so complex and the learning curve so steep, but I
honestly struggle to see it. The only hard part about k8s is setting up the
cluster itself. Using it is so easy that it boggles my mind how someone can
say it's hard.

It's a simple API with an amazing CLI that immediately solves dozens of issues
you would have sooner or later.

Take for example a very simple rails application. I'll keep it simple with 3
routes: manual, automated and with k8s (assuming you're not building the
cluster itself).

Manually setting up would require you to:

\- Create a VPS

\- SSH into the VPS

\- Install at least Ruby, NGINX (or some webserver), the DB and all Rails
dependencies

\- If you want easy deployments you'd probably use something like Capistrano
and have to set it up

\- If you want the system to be resilient you will have to add some sort of
process monitoring and automatic startup

This is very easy to get wrong. One missing step and you're manually SSHing
into the machine, trying to figure out what went wrong.

An automated setup would essentially do everything you did in the manual
setup, but you'd have to write it all into something reproducible like
Ansible, Chef, Puppet or similar. Every single one of these tools has its
gotchas and a learning curve, and that's on top of all the base knowledge of
setting it all up by hand.

Finally with k8s (assuming we're using Google's GKE offering):

\- Dockerize the app (which is extremely simple with tons of tutorials)

\- Download the gcloud CLI (alternatively it can all be done through the
website if that's your thing)

\- Create the cluster

\- Either create some manual yaml files or just use kubectl to create a
deployment for rails, nginx and the database (again, there are millions of
examples of this, even in the official docs; see the sketch after this list)

\- Create an ingress and point my DNS to the ingress' IP
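
A hedged sketch of those last steps with 2018-era kubectl (image and names
are placeholders; `kubectl run` still created a Deployment at this point):

    
    
        gcloud container clusters create my-cluster --num-nodes 3
        kubectl run rails --image=myuser/rails-app:latest --port 3000
        kubectl expose deployment rails --type=NodePort --port 80 --target-port 3000
        kubectl apply -f ingress.yaml  # an Ingress routing your domain to the rails service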

All of these methods have learning curves and are in many ways complex.
However, I don't think Kubernetes is a lot different from the others. It's
just a different way of doing the same thing.

* Instead of manually finding out how to setup load balancing and reverse proxying on NGINX, you learn how to use the k8s services.

* Instead of setting up Capistrano (or any other deployment tool) you learn how a k8s Deployment and rollouts work

* Instead of figuring out how to setup L7 load balancing you learn to use k8s Ingress

On top of all that, with GKE (or any other managed k8s solution, I presume)
you automatically get logging, metrics, scaling, rolling updates and much,
much more.

People are resistant to change, and it's definitely not a "one size fits all"
situation, but as soon as you need anything beyond an amateur website, k8s
provides the best ROI. I've had to do this in all of these scenarios, and I
can confidently say Kubernetes changed the game.

I got a 10-year-old Rails application with tons of dependencies from an old
dedicated-server setup onto Kubernetes in a few days (on Kubernetes 1.1 at
the time, which was a lot harder), and I'm not some kind of genius. If I can
do it, any developer can.

