
Kubernetes for personal projects? No thanks - carlosrdrz
https://carlosrdrz.es/kubernetes-for-small-projects/
======
solatic
Oh man, the original article went way over the author's head. The point of the
original article was that even though Kubernetes is _primarily_ useful for
tackling the challenges involved with running many workloads at enterprise
scale, it _can_ also be used to run small hobbyist workloads at a price point
acceptable for hobbyist projects.

Does that mean that Kubernetes should now be used for _all_ hobbyist projects?
No. If I'm thinking of playing around with a Raspberry Pi or other SBC, do I
need to install Kubernetes on the SBC first? If I'm thinking of playing around
with IoT or serverless, should I dump AWS- or GCE-proprietary tools because
nobody will ever run anything that can't run on Kubernetes ever again? If
I'm going to play around with React or React Native, should I write up a
backend just so I can have something that I can run in a Kubernetes cluster,
because all hobbyist projects must run Kubernetes now, because it's cheap
enough for hobbyist projects? If I'm going to play around with machine
learning at home and buy a machine with a heavy GPU, should I figure out how
to get Kubernetes to schedule my machine learning workload correctly, instead
of just running it directly on that machine, because uhhh maybe someday I'll
have three such machines with powerful GPUs plus other home servers for all my
other hobbyist projects?

No, no, no, no, no. Clearly.

But maybe I envision my side project turning into a full-time startup some day.
Maybe I see all the news about Kubernetes and think it would be cool to be
more familiar with it. Nah, probably too expensive. Oh wait, I can get
something running for $5? Hey, that's pretty neat!

Different people will use different solutions for different project
requirements.

~~~
admax88q
> But maybe I envision my side project turning into a full-time startup some
> day.

The state of the art for cluster management will probably be something
completely different by then. Better to build a good product now, and if you
really want to turn it into a startup, productionize it then.

> Maybe I see all the news about Kubernetes and think it would be cool to be
> more familiar with it.

If learning Kubernetes _is_ your side project, then perfect, go do that.
Otherwise it's just a distraction, taking more time away from actually
building your side project and putting it into building infrastructure around
your side project.

If what you really wanted to build is infrastructure, then great, you're doing
swell, but if you were really trying to build some other fun side app,
Kubernetes is just a time/money sink in almost all cases IMO.

~~~
carlisle_
> taking more time away from actually building your side project and putting
> it into building infrastructure around your side project.

I generally dislike this way of thinking. Infrastructure is a core component
of whatever it is you're building, not an afterthought. Maybe you can defer
things until a little bit later, but if you can build with infrastructure in
mind you'll be saving yourself so many headaches down the road.

You don't need to build with the entire future of your project's
infrastructure in mind, but deploying your project shouldn't be "ok now what?"
when you're ready, like it was a big surprise.

~~~
theptip
> Infrastructure is a core component of whatever it is you're building

That's true in some sense -- but you can get surprisingly far using a PaaS
like Heroku to abstract that infrastructure away.

I'm a big fan of Kubernetes, and use it in production at my company, but I
would not recommend using k8s in a prototype/early-stage startup unless you're
already very familiar with the tool. The complexity overhead of k8s is non-
trivial, and switching from Heroku to something like k8s doesn't involve
undoing much work, since setting up Heroku is trivial.

~~~
hosh
I am a big fan of K8S too; not only do I use it in production, but I was also
the one who set it up for my team. I agree that, unless you are already
familiar with it, it is not always useful at the prototyping stage.

There is something to be said about having the infrastructure in mind though.
That's why I'm inclined to use something like Elixir/Phoenix for web-based
projects. Some (not all) of the ideas that K8S brings to the table are already
built into the Erlang/OTP platform.

As for Heroku, there was a recent announcement that I think shifts things
quite a bit: [https://blog.heroku.com/buildpacks-go-cloud-
native](https://blog.heroku.com/buildpacks-go-cloud-native) ... having
standardized container images that run buildpacks.

The ecosystem and tooling are not quite there yet, but I can see this as
significantly reducing the investment to put into Dockerizing your app for
K8S.

At that point, for the hobbyist, it might be:

Prototype -> Heroku -> K8S with an Operator that can run the buildpack

K8S is really a toolset for building your own PAAS. If there were a self-
driving PAAS (using Operators) targeting small hobbyists that would run cloud
native buildpacks, the barrier to entry for a hobbyist using K8S would be much
lower.

~~~
theptip
> K8S is really a toolset for building your own PAAS.

I don't agree with this; that's one of the things you can do with it for sure,
but multi-tenant isolation is actually one of k8s' weak points -- for example
by default you can access services in any namespace, and you need something
quite specialized like Calico and/or Istio to actually isolate your workloads.
Plus you're still running workloads in containers on the same node, so inter-
workload protection is nowhere near as good as if you're using VMs.
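To illustrate the kind of lockdown being described, a default-deny ingress NetworkPolicy is the usual starting point. Note this only takes effect if the cluster's network plugin (e.g. Calico) actually enforces NetworkPolicy; the namespace name is a placeholder:

```yaml
# Deny all inbound traffic to every pod in the "tenant-a" namespace
# unless another NetworkPolicy explicitly allows it.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: tenant-a
spec:
  podSelector: {}    # empty selector = all pods in this namespace
  policyTypes:
    - Ingress        # no ingress rules listed, so all ingress is denied
```

Without an enforcing network plugin this object is accepted by the API server but silently does nothing, which is exactly the weak point being described.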

I see the big value add of k8s as making infrastructure programmable in a
clear declarative way, instead of the imperative scripting that Chef/Puppet
use. This makes it much easier to do true devops, where the developers have
more control over the infrastructure layer, and also helps to commoditize the
infrastructure layer to make the ops team's job easier, if you ever have a
need to run your own on-prem large-scale cluster.

------
jypepin
I totally agree with this article. I'm giving the point of view of a pure
developer who knows nothing about DevOps or managing servers. I kind of know
what NGINX is and barely know how to configure something like systemd.

I recently set up a DigitalOcean droplet and hosted my blog there to actually
understand how it works. It was great because I learned a ton and feel in
control. Pretty simple setup: a single droplet, Rails with Postgres,
Capistrano to automate deploys, and a very simple NGINX config. It took me
multiple days to set everything up, compared to the 5 minutes Heroku would
have required, and it's not as nice as what Heroku offers.

Still, I'd wait as long as I can before moving off something as simple as
Heroku for _anything_. I understand it gets expensive quickly, but I really
want to see the cost difference between Heroku and the time spent by the
engineering team managing all the complexities of devops, automated deploys,
scaling, and I'm not even mentioning all the data/logging/monitoring things
that Heroku allows you to add with 1 click.

~~~
Draiken
> I understand it gets expensive quickly, but I really want to see the cost
> difference between Heroku and the time spent by the engineering team
> managing all the complexities of devops, automated deploys, scaling, and
> I'm not even mentioning all the data/logging/monitoring things that Heroku
> allows you to add with 1 click.

Well, if you use a k8s cluster on GKE for example, you will have literally all
those things by default. Not even a click needed.

IMO running your own Kubernetes cluster for a company is insanity unless you
have a very good reason to do so.

~~~
whydoineedthis
Unless you already know how to do it. I can set up a production-ready cluster
in under a day and, as long as things are containerized, run them lickety-
split. Hardly insane once you know how it works.

~~~
dsnuh
An on-prem "production ready" k8s cluster in under a day? What kind of
businesses are you setting this up for? Are these all greenfield projects? I'm
sorry, but I find it hard to believe that.

We run on-prem k8s, on bare metal and VMs. Integration with existing storage
(we use NetApp Solidfire and NFS), load balancers, firewalls, backup strategy,
DR, etc. takes weeks, if not months of work. But we may disagree on what
"production ready" means.

~~~
whydoineedthis
Who said anything about on-prem? On-prem anything takes weeks and has nothing
to do with Kubernetes. AWS: production ready. I could probably do Google too,
but it would take a couple weeks to write the automation first. Also, if your
on-prem system is already well organized, the Kubernetes portion can be done
in 1-2 days. If it takes you weeks to attach storage, load balancers, and
configure your firewalls, none of that has anything to do with a production-
ready _Kubernetes_ cluster.

~~~
dsnuh
My apologies if I misread what you said; it looked like your reply was to
"anyone running their own cluster is insane" in the previous comment. And as I
said, to me that means "on their own bare metal servers".

My point is that getting a Kubernetes cluster (of any kind really, but
specifically on-prem) _integrated_ and doing useful things with existing
legacy workflows inside your company, on existing infrastructure, is a larger
task than it's often made out to be.

------
nimbius
I work as an engine mechanic full time, and I'm learning programming as a
hobby. Kubernetes to me is like the shade-tree mechanic vs. the professional.

Professional mechanics use high grade tools that can cost thousands of dollars
each. We have laser alignment rigs, plasma cutters, computer controlled
balancing and timing hardware, and high performance benchmarking hardware that
can cost as much as the car you're working on. We have a "Kubernetes" like
setup because we service hundreds of cars a month.

The shade-tree mechanic wrenching on her fox-body Mustang, on the other hand?
Her hand-me-down toolbox and a good set of sockets will get her by for about
90% of what she wants to do on that car. She doesn't need to perform a
manifold remap, so she doesn't need a gas fluid analyzer any better than her
own two ears.

I should also clarify that these two models are NOT mutually exclusive. If I
take home an old Chevy from work, I can absolutely work on it with my own set
of tools. And if the shade-tree mechanic wants to turn her Mustang into a
professional race car, she can send it to a professional "Kubernetes"-type
shop that will scale that car up proper.

~~~
myWindoonn
This is an interesting perspective because I view k8s as the "shade-tree"
version of a robust cloud platform. It's cheap, quick, dirty, and probably can
take off a few of your fingers if not done carefully, but the payoff is in
being able to spin up lots of resources very quickly.

What do the pros use, then? I hear of things like DC/OS and OpenStack, and I
know Google's got "Borg", which is like professional k8s.

~~~
munchbunny
I work for one of those tech giants that I think falls into your "pro"
category in a role that you could call dev ops. I used to work in a much
smaller startup. The comparison of a tech giant's stack to large, growth stage
startup tech stacks is like comparing trains to trucks. It's a qualitatively
different problem because the difference between a few thousand and a few
hundred thousand servers is Murphy's law.

In other words, I think there are two answers to your question of "what do the
pros use?" The first answer is "Kubernetes, because that's the right tool for
the job." The second answer is "My product division has an internal team the
size of a growth stage startup, and it's specifically dedicated to solving
server scaling problems, and that's just my product division."

Another analogy would be the question "how would an F1 team solve this
problem?" One answer is "you don't need an F1 team for that", and the other is
"first, hire an F1 team, then have them build all of the custom tooling the F1
car needs."

------
vbsteven
I've been thinking about setting up a small Kubernetes cluster for hosting
some smaller client projects (read: websites, shopping carts, APIs, admin
panels).

My current setup uses a couple of Hetzner dedicated machines, and services are
deployed with Ansible playbooks. The playbooks install and configure nginx,
install the right version of ruby/java/php/postgres, and configure and start
systemd services. These playbooks end up copied and slightly modified for each
project, and sometimes they interfere with one another in subtle ways
(different Ruby versions, conflicting ports, etc.).

With my future Kubernetes setup I would just package each project into its own
self-contained container image, write a kubernetes deployment/pod spec, update
the ingress and I'm done.
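The per-project spec being described can indeed be short. As a hedged sketch (project name, registry, image, and port are all hypothetical):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: client-shop        # hypothetical project name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: client-shop
  template:
    metadata:
      labels:
        app: client-shop
    spec:
      containers:
        - name: web
          # The image bundles its own ruby/java/php version, so
          # projects can no longer conflict over runtime versions.
          image: registry.example.com/client-shop:1.0.0
          ports:
            - containerPort: 3000
```

Plus a Service and an Ingress rule per project. Since each container image pins its own runtime and ports are only exposed inside the cluster, the subtle conflicts between playbooks go away.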

~~~
hardwaresofton
If you can find the time to get up to speed on Kubernetes, I would say do it.

I actually have a weirdly similar setup to yours (I run on Hetzner and used,
and still use, Ansible), and I've written about it, most recently when I
switched my single-node cluster to Ubuntu 18.04 [0]. In the past I've also run
single-node Kubernetes clusters on CoreOS Container Linux, Arch, and back to
CoreOS Container Linux, in that order, from versions 1.7~1.11.

[0]: [https://vadosware.io/post/hetzner-fresh-ubuntu-install-to-
si...](https://vadosware.io/post/hetzner-fresh-ubuntu-install-to-single-node-
kubernetes-cluster-with-ansible/)

~~~
vbsteven
Thanks, I'll check it out.

I have quite a bit of experience working with Kubernetes clusters for my
larger clients, usually clients that are big enough to have their own AWS
account.

The thing I am still on the fence about is whether I should go for a DIY
Kubernetes setup on one or more Hetzner dedicated machines (cheap, more work,
less scalable) or if I should just shell out for AWS and run an easily
scalable cluster with Kops (which is what I use for some clients) and take
advantage of all the AWS goodies like load balancing and RDS.

~~~
hardwaresofton
Well I think that's more of a cost question -- AWS can get expensive pretty
quickly. Three t2.micros (one coordinator, 2 nodes) are absolutely pitiful in
terms of processing power but that's already ~$30/month when you could get a
way beefier machine on Hetzner whether dedicated or cloud (also Scaleway[0]).

I'm a fan of Hetzner because I think their cheap dedicated machines are worth
the operational costs for me, and the upkeep issues I'll face are good for me
because that knowledge has value. Also, I want to note that if you actually
start subscribing to the immutable infrastructure movement that's going on
right now, once you look past all the buzzwords it's a fantastic way to run
servers stress-free: as long as your data is backed up/properly managed, just
shoot servers left and right, spend a lot of time to get them into the right
state ONCE, and never worry about it again. You can even use Terraform to
provision Hetzner. Again, this kind of thinking and the related tooling is
catchy/popular right now because it's useful at larger scales, but it can also
free you of a lot of worry at lower scale. For example, I have a post on using
Container Linux to go from a brand-new machine to a single-node k8s cluster
with _one_ file [1].

To be honest though, setting up a Hetzner dedicated machine is _very very_
easy -- they've got great utilities. You could even go with Hetzner Cloud and
things will be more managed.

I would say go with AWS if you want to experiment with AWS technology as well
-- and want to use their value added services. If you run kubernetes on a
dedicated machine on hetzner you're definitely not going to get the rest of
that, of course.

BTW, kubeadm is better/less complex than kops -- it's almost impossible to
fuck it up, but there are subtle things in kops due to the AWS integration
that make things ever so slightly more difficult.

[0]: [https://www.scaleway.com/pricing/](https://www.scaleway.com/pricing/)

[1]: [https://vadosware.io/post/k8s-container-linux-ignition-
with-...](https://vadosware.io/post/k8s-container-linux-ignition-with-rkt-and-
kube-router)

~~~
talkingquickly
I'd be really interested in how you're approaching persistence. I've also
found self-managed clusters provisioned with kubeadm fairly hassle-free until
persistence is involved. Not so much setting it up (e.g. Rook is fairly easy
to get going with now), but the ongoing maintenance, dealing with
transitioning workloads between nodes, etc.

~~~
hardwaresofton
tl;dr: Rook is the way to go, with automatic backups set up. Using Rook means
your cluster's storage is Ceph-managed; you basically have a mini EBS. Ceph
does replication across machines for you in the background, and all you have
to do is write out snapshots of the volume contents from time to time, just
in case you get spectacularly unlucky and X nodes fail all at once, in just
the right order to make you lose data. Things get better/easier with CSI
(Container Storage Interface) adoption and support: snapshotting is right in
the standard, and restore is as well. Barring catastrophic failures you can
just lean super hard on Ceph (and probably one more cluster-external place
for colder backups).

I'd love to share! In the past I've handled persistence in two ways:

\- hostPath setting on pods[0]

\- Rook[1] (operator-provisioned Ceph[2] clusters; I free up one drive on my
dedicated server and give it to Rook to manage, usually /dev/sdb)

While Rook worked well for me for a long time, it didn't quite work for me in
two situations:

\- Setting up a new server _without_ visiting Hetzner's rescue mode (which is
where you would be able to disassemble RAID properly)

\- Using rkt as my container runtime. The Rook controller/operator does a
_lot_ of things which require a _bunch_ of privileges, which rkt doesn't give
you by default and I was too lazy to work it out. I use and am happy with
containerd[3] (and will be in the future) as my runtime however, so I just
switched off rkt.

 _Right now_, I actually use hostPath volumes, which isn't the greatest (for
example, you can't really limit them properly). I had to switch from Rook due
to my distaste for needing to go into Hetzner rescue mode to disassemble the
pre-configured RAID (there's currently no way to ensure they _don't_ RAID the
two drives you normally get after the automated operating system setup).
Normally, RAID1 on the two drives they give you is a great thing, but in this
case I don't really care much about the main server contents, since I try to
treat my servers as cattle (if the main HDD somehow goes down it should be
OK), and I know that as long as Ceph is running on the second drive I should
have reliability as long as I have more machines, which is the only way to
really improve reliability anyway.

Supposedly, you _can_ actually just "shrink" the RAID array to one drive and
then remove the second drive from the array -- then I could format that drive
and give it to Rook. With Rook though (from the last time I set up the
cluster and went through the RAID disassembly shenanigans), things are really
awesome: you can store PVC specs right next to the resources that need them,
which is much better/safer than just giving the deployment/daemonset/pod a
hostPath.

These days, there are also local volumes[4], which are similar to hostPath
but offer a benefit in that your pod will know where to go, because the node
affinity is written right into the volume. Your pod won't ever try to run on
a node where the PVC it's expecting isn't present. The downside is that local
volumes have to be pre-provisioned, which is basically a non-starter for me.
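A local PersistentVolume with that baked-in node affinity looks roughly like this (node name, path, size, and storage class are examples):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-local-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/disks/vol1          # must be pre-provisioned on the node
  nodeAffinity:                    # pins the volume, and thus any pod
    required:                      # bound to it, to this one node
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - worker-1         # hypothetical node name
```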

I haven't found a Kubernetes operator/add-on that can dynamically
provision/move/replicate local volumes, and I actually wanted to write a
simple PoC one of these weekends -- I think it can be done naively by
maintaining a folder full of virtual disk images and creating/mounting them
locally when someone asks for a volume. If you pick your virtual disk
filesystem wisely, you get a lot of snapshotting, replication, and other
things for free/near-free.

One thing Kubernetes has coming that excites me is the CSI (Container Storage
Interface)[5], which is in beta now and standardizes all of this even more.
Features like snapshotting are right in the RPC interface[6], which means
that once people standardize on it, you'll get consistent behavior across
compliant storage drivers.

What I could and should probably do is just use a Hetzner storage box[7].

[0]:
[https://kubernetes.io/docs/concepts/storage/volumes/#hostpat...](https://kubernetes.io/docs/concepts/storage/volumes/#hostpath)

[1]:
[https://rook.github.io/docs/rook/v0.8/](https://rook.github.io/docs/rook/v0.8/)

[2]:
[http://docs.ceph.com/docs/master/start/intro/](http://docs.ceph.com/docs/master/start/intro/)

[3]: [https://github.com/containerd/](https://github.com/containerd/)

[4]:
[https://kubernetes.io/docs/concepts/storage/volumes/#local](https://kubernetes.io/docs/concepts/storage/volumes/#local)

[5]: [https://kubernetes.io/blog/2018/04/10/container-storage-
inte...](https://kubernetes.io/blog/2018/04/10/container-storage-interface-
beta/)

[6]: [https://github.com/container-storage-
interface/spec/blob/mas...](https://github.com/container-storage-
interface/spec/blob/master/spec.md#rpc-interface)

[7]: [https://www.hetzner.com/storage-
box?country=us](https://www.hetzner.com/storage-box?country=us)

~~~
talkingquickly
This is an amazing response, thank you!

~~~
hardwaresofton
Absolutely no problem -- hope you found some of it useful!

------
whydoineedthis
So basically, ignore 1/2 of the reasonable problems that are solved in the
first article and then look, no need to learn anything!!!

As someone who can set up and run a Kubernetes cluster in my sleep, I can tell
you that it is a superb production-ready platform that solves many real-world
problems.

That in mind, Kubernetes has constraints too: running networked Elixir
containers is possible, but not ideal from Elixir's perspective; dealing with
big data takes extra consideration; etc.

All said, if you have an interest in DevOps/Ops/SysAdmin type technologies,
learning Kubernetes is a fine way to spend your time. Once you have a few
patterns under your belt, you are going to run way faster at moving your stack
to production for real users to start using, and that has value.

I think the initial author (not this article, the other one) was just pointing
out that you can indeed run Kubernetes pretty cheaply, and that is useful
information and a good introduction. This article is clickbait designed to
mooch off the other's success.

~~~
emodendroket
> So basically, ignore 1/2 of the reasonable problems that are solved in the
> first article and then look, no need to learn anything!!!

I think the point is... do you actually have those problems? A lot of people
jump immediately to worrying about having thousands of requests per second
when it doesn't make any sense.

~~~
whydoineedthis
Sharing code and getting it to run on other collaborators' workstations? Yes,
that's a very real developer problem.

Deploying without downtime? Yep, it's nice to have, because your favorite
customer will have been testing that site in the exact 2 minutes of downtime
during which you deploy... believe me, Murphy's law rules here.

Staging and production environments that are the same, so I don't have
surprises from local development to production release? Yep, another real
problem that will slow the momentum of development.

I suppose if you are developing a personal project of garbage that no one will
ever see, then these problems don't exist. But if you are actually developing
a product, these problems exist.

------
pstadler
> Do you want to do all of this because you think is fun? Or because you want
> to learn the technology? or just because? Please, be my guest! [...]

Kubernetes is likely here to stay. If you're interested in running a cluster
to understand what the hype is all about and to learn something new, you
should do it. Also, ignore everybody telling you that this platform wasn't
meant for that.

Complexity is a weak argument. Once your cluster is running you just write a
couple of manifests to deploy a project, versus: ssh into machine; useradd;
mkdir; git clone; add virtual host to nginx; remember how certbot works; apt
install whatever; systemctl enable whatever; pm2 start whatever.yml; auto-
start project on reboot; configure logrotate; etc. Can this be automated?
Sure, but I'd rather automate my cluster provisioning.
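As a sketch of what replaces the nginx-vhost-plus-certbot steps in that list: a single Ingress manifest, assuming an ingress controller and cert-manager are already installed in the cluster (hostname, issuer, and service names are placeholders):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myproject
  annotations:
    # cert-manager obtains and renews the TLS certificate automatically
    cert-manager.io/cluster-issuer: letsencrypt
spec:
  tls:
    - hosts:
        - myproject.example.com
      secretName: myproject-tls
  rules:
    - host: myproject.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myproject    # Service in front of your pods
                port:
                  number: 80
```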

~~~
collyw
> just write a couple of manifests to deploy a project, versus...

Whenever anyone says " _just_ do something" these days, it usually means that
it hasn't been thought through properly. Is that only my personal experience?

~~~
scruple
No, I have the same reaction, too. Especially when it's nestled into a
paragraph that, essentially, starts off by highlighting that something is an
example of essential complexity.

------
wheresvic1
Totally agree with the author. For my side projects in Node.js, I use the
following:

\- pm2 for uptime (pm2 itself is set up as a systemd service; it's really
simple to do, and pm2 can install itself as a systemd service)

\- I create and tag a release using git

\- on the production server, I have a little script that fetches the latest
tag, wipes and does a fresh npm install and pm2 restart.

\- nginx virtual host with SSL from Let's Encrypt (setting this stuff up was a
breeze given the amount of integration and documentation available online)

Ridiculously simple and I only pay for a single micro instance which I can use
for multiple things including running my own email server and a git repo!

The only semi-problem I have is that a release is not automagically deployed;
I would have to write a git hook to run my deployment script. But in a way
I'm happy to do manual deployments as well, to keep an eye on how things went
:)

~~~
Draiken
Honestly, you could do that in almost the same time on Kubernetes.

I understand why people might not want to invest the time in learning a new
technology, but that's not a reason to say it's a bad fit. If you know how to
use Kubernetes, writing these bash scripts and a few YAML files will take
basically the same time, and the end result will be vastly superior on
Kubernetes.

~~~
Aeolun
Good luck setting up an email server on Kube. Or a git repository, for that
matter.

~~~
Draiken
I would say: good luck setting up an email server _anywhere_

Really not sure what Kubernetes has to do with this argument though.

------
majewsky
I looked into using Kubernetes for my personal servers, but I abandoned the
idea when I saw that the minimal Kubernetes setup uses more compute resources
than all the services [1] it's supposed to manage combined (e.g. 0.5 GiB RAM
vs. 0.25 GiB, which is substantial on a 1/1 VM). And that's before you
consider that a single-server setup is not The Right Way (TM) in k8s land.

[1] Gitea (Github clone), Murmur (voice-chat server), Matrix Synapse (instant
messaging), Prosody (XMPP), nginx (for those services and for various static
websites)

~~~
zeeZ
IMHO Kubernetes only makes sense if you can, or want to, run multiples of
things that are either stateless or clustered in some way, or another copy for
a different purpose.

Run two instances of something if you want to survive a single crash or a node
update. Run another copy of your application stack if you want to try out a
different version or config.

Without looking at the docs, most of the things in your list are single-
instance stateful applications, so unless you plan to run another copy of them
for a different purpose, K8S is overkill.

------
AYBABTME
Kubernetes is an operating system. Using it to run your software is as
overkill today as running your own Linux server was overkill before that.
Maybe you just need to run on Heroku and don't need the complexity of writing
systemd service files?

In the end, the steps you take to deploy with rsync and run your systemd
service are (conceptually) the same ones you'd take to run on K8S, but
translated into some YAML and a docker push. In one case you need to learn a
new paradigm; in the other you deal with something you already know. Not
having to learn something new is an argument, but it doesn't mean your bare-
Linux approach is simpler than the K8S approach. You just know it better.

------
caymanjim
Kubernetes is just one of many development ecosystem tools that solve real
problems and, once you know them, make your life easier. The arguments in this
article can apply to any development tool or practice.

Why separate your code into multiple files? Why write tests? Why use a code
linter? Why use virtual environments? Why write a Makefile?

If you're working on a small personal project, or you're a newer developer
learning the ropes, or the project is temporary, not important, doesn't need
to scale, etc. then it's simply a matter of personal choice. It doesn't make
sense to get bogged down learning a lot of tools and processes if there's no
compelling business need and you're just trying to get the job done.

If you already know how to use these tools, though, they usually make your
life a whole lot easier. I learn how to use complex systems in my career,
where they're necessary and helpful. I apply these same tools and practices on
my personal projects now, because once you know how to use something like
Kubernetes, there's little cost to it and many of the benefits still apply.

~~~
whydoineedthis
> tools that solve real problems and, once you know them, make your life
> easier.

Yep, I think you nailed it here.

------
comboy
In my opinion, the best technology for a personal project is one you don't
know yet. Personal projects are a great opportunity to play around with
stuff, which really is the only way of learning something new.

Unless the personal project is something that you really care about, a
potential startup or something like that; then obviously you choose something
that you are already proficient in, because then it's about getting stuff
done and moving forward.

So while it may make sense to discuss what technology is good or bad for some
kind of companies, I think we won't arrive at any ultimate conclusion like "X
is good/bad for personal projects".

------
giobox
I agree that Kubernetes for personal projects is likely going to be totally
overkill for many, but I disagree that containers themselves are overkill,
which this author also suggests. These are arguably two separate issues
entirely, and lumping them together is extremely misleading. I happily run all
my (very small) side projects in containers without Kubernetes and it's really
pretty simple to do so.

As soon as this author mentioned he was happy using Ansible, systemd, etc.
instead (all reasonable tools for what they are), he lost me: this is
collectively much more work for me as the sole developer than a simple Docker
container setup, for virtually all web app projects in my experience. If you
understand these relatively complex tools, you can likely learn Docker well
enough in an hour or two, and the payoff in time savings will make it time
well spent.

In my experience, "Dockerizing" a web app is much, much less time-consuming
than scripting it in Ansible (or Chef, Puppet, <name your automation tool>),
and of course much less error-prone too. I've yet to meet an Ansible setup
that didn't become brittle or require maintenance eventually. If you are
using straightforward technologies (Ruby, Java, Node, whatever), your
Dockerfile is often just a handful of lines at most. You can even configure
it as a "service" without having to bother with systemd service definitions
and the like at all.
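For instance, for a typical Node web app, such a Dockerfile might look like the following sketch (image tag, port, and entry point are illustrative):

```dockerfile
# Pin the runtime version per project; the image is the version manager
FROM node:10-alpine
WORKDIR /app
# Copy manifests first so the dependency layer is cached between builds
COPY package*.json ./
RUN npm ci --only=production
COPY . .
EXPOSE 3000
# Assumes server.js is the app's entry point
CMD ["node", "server.js"]
```

Restart-on-failure then comes from `docker run --restart=unless-stopped` rather than a systemd unit.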

~~~
ngrilly
Out of curiosity, how do you deploy new versions of your app without downtime
(start new containers, wait for them to be ready, switch traffic to them,
shut down the old containers)?

~~~
giobox
For me personally zero downtime upgrades are a little beyond the scope of
"personal project" and veering into something more production quality.

If I really had to for one of these, I'd probably just do something at the
loadbalancer to start routing users to the new container stack then shutdown
the old ones, much as you might have in the pre-container days. I can just
wait the old fashioned way (by sitting in my chair for a minute) for them to
start.
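As a sketch, that load-balancer step can be as small as editing an nginx
upstream and reloading (the ports here are hypothetical):

```nginx
# nginx upstream pointing at the container stacks (hypothetical ports)
upstream app {
    server 127.0.0.1:8081;    # new container stack
    # server 127.0.0.1:8080;  # old stack, commented out once drained
}
```

After editing, `nginx -s reload` picks up the change without dropping
established connections.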

~~~
ngrilly
Ok, this is basically what I do with an Ansible script, but I see it as a bit
messy and non-standard, which is why I'm attracted to Docker swarm mode and
Kubernetes (and maybe Nomad).

~~~
giobox
Fair, but I think it's arguable there isn't really a "standard" way for
container orchestration, and Docker Swarm to me is starting to feel like a
dead horse, despite protestations to the contrary from Docker. The
requirements of different software always make each project's needs for zero
downtime upgrades different enough, especially if you are dealing with legacy
software.

Pick the one that works best for you and your projects goals (within
reason...).

~~~
ngrilly
Right, there isn't really a "standard". There are so many existing solutions:
Kubernetes, Docker swarm mode, Nomad, Spinnaker, etc. What I meant by
"standard" is something used by more than one team :-)

Agreed about Docker swarm mode feeling a bit abandoned.

Do you have any recommendation for a solution I might have missed?

Do you have any recommendation for a solution that I would have missed?

------
codegladiator
Frankly the article is filled with FUD and the author justifies everything
with "i think/what if/my way is fine for me".

You don't need to run a new cluster for every project. You can deploy multiple
projects in a single cluster. I was running close to 5 different projects in a
single cluster, backed by about 3-6 machines (machines added/removed on
demand).

Kubernetes is basically like your own Heroku. You can create namespaces for
your projects. No scripts. You can deduce everything (how a service is
deployed, what the config is, what the architecture is) from the config files
(YAML).
> Is a single Nginx virtual host more complex or expensive than deploying a
> Nginx daemon set and virtual host in Kubernetes? I don't think so.

Yes it is. I wonder if the author has actually tried setting this up
themselves. I do realise I had similar opinions before I had worked with
Kubernetes, but after working with it, I cannot recommend it enough.

> When you do a change in your Kubernetes cluster in 6 months, will you
> remember all the information you have today?

Yes, why does the author think otherwise? Or, if this is a real argument, why
does the author think their Ansible setup would be at the top of their head? I
had one instance where I had to bring a project back up on prod (it was a
collection of 4 services + 2 databases, not including the LB) after 6-8 months
of keeping it "down". Guess what: I just scaled the instances from 0 to 3, and
SSL is back, all services are back, everything is up and running.

This is not to say you won't have issues; I had plenty during the time I
started trying it out. There is a learning curve, so please do try out the
ecosystem multiple times before thinking of using it in production.

~~~
carlosrdrz
> Frankly the article is filled with FUD and the author justifies everything
> with "i think/what if/my way is fine for me".

It is just my opinion after all. I'm just trying to share my thoughts :)

> Yes it is. I wonder if the author has actually tried setting this
> themselves.

I've used K8s for months in production, maintaining a few clusters at my
previous job.

------
reilly3000
Warning to those who think Fargate is green pastures: it has its own learning
curve. Also, it costs about ~1.8-2.5X the price of standard EC2 for the
convenience. Don't waste your money on it for long-running containers that
will rarely need to scale.

~~~
carlosrdrz
Thanks for sharing your thoughts! I wrote about Fargate because I like the
idea of having a service that manages both masters AND workers and where you
only need to care about the API, but didn't really try it yet. That was my
impression as well, though. Even the use cases mentioned in the pricing site
are just containers running for a few hours a day, and not long-running
services like servers.

~~~
tilolebo
The smallest fargate container in us-east-1 will cost you USD 13 per month if
you never shut it down.

Avoid using a load balancer as they are quite pricey (although it will allow
you to create and use auto-managed SSL certificates for free.)

Of course you will also pay for egress traffic.

The nicest part of Fargate is that:

* you can define your whole cluster using a docker-compose like format.

* you can manage your cluster using the ECS CLI. No extra tool needed.
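As a rough sketch of that workflow (the service name and image are
hypothetical, and the ECS CLI only supports a subset of the Compose format):

```yaml
# docker-compose.yml consumed by the ECS CLI (hypothetical service/image)
version: '3'
services:
  web:
    image: myorg/myapp:latest
    ports:
      - "80:3000"
```

Something like `ecs-cli compose service up` then deploys it as a long-running
service; for Fargate specifically you also supply an `ecs-params.yml` with the
task CPU/memory size and networking config.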

------
markbnj
This whole line of debate is really getting tiresome. Kubernetes has proven
its value in production use cases across a wide variety of application
domains. That doesn't mean everybody should be using it, any more than the
proven value of containers means that everyone should be deploying in them.
I've been working with k8s for three years and run multiple production
clusters, but if I had some little thing I might very well toss it up on a
paas like app engine, or just install it on a free micro instance as the OP
suggests. Or... maybe I would create a cluster and run it there. Point is
kubernetes is an alternative that I can take advantage of where it makes sense
because I've gotten some experience with it. It might make sense for you, it
might not, but it's not essential that all developers immediately come to
agreement on whether or not all software projects should migrate to k8s by
tomorrow.

~~~
bauerd
I wholeheartedly agree with this. It's one of many HN debates that boil down
to whether X is the right tool. Thing is, you can't judge about the usefulness
of a tool without factoring in its users. If you and your team have experience
with Kubernetes and its ecosystem, you'll have no problems reaping its
advantages even for small deployments. If that's not the case, then by all
means, pick something else.

------
fcgravalos
I can't agree more with the author ;).

My day-to-day work is 100% dedicated to automating the Kubernetes cluster
lifecycle: maintaining clusters, monitoring them, and creating tools around
them. Kubernetes is a production-grade container orchestrator; it solves
really difficult problems, but it brings some challenges too. All of its
components run distributed across the cluster, including network plugins,
firewall policies, the control plane, etc. So be prepared to understand all of
it.

Don't get me wrong, I love Kubernetes and if you want to have some fun go for
it, but don't use the HA features as an excuse to do it.

But overall, saying "NO" to rsync or Ansible for deploying your small project
just because it's not fancy enough sounds to me like "Are you really going to
the supermarket by car, when there are helicopters out there?"

Great article!

------
isugimpy
For random toy projects, spinning up a whole Kubernetes cluster is absolutely
overkill (unless part of the project is learning Kubernetes). The thing is as
you get further along, for some applications, it becomes harder and harder to
move to a container-based design as you have to unwind all the weird
dependency mappings. I've got an app I've been involved with containerizing
for a client at work, and they're dead set on sticking with an Ubuntu 14.04
base container, because they legitimately don't know if it'll even function on
a more modern base, and don't feel they can spare the development cycles to
figure it out. Thing is, it started as a toy application, deployed to a server
by manually SSHing in and doing a git pull from the repo (not even rsync!) and
restarting all the services, and that's still how it's deployed in production
today.

Containers (and thus Kubernetes) aren't the magical solution to every problem
in the world. But they help, and the earlier you can get to an automated,
consistent build/deploy process with anything that'll actually serve real
customers, the better off you are. Personally, I'd rather design with
containers in mind from day one, because it's what I'm comfortable with.
There's nothing wrong with deploying code as a serverless-style payload, or
even running on a standalone VM, but you need to start planning for how
something should work in the real world as early as you can reasonably.

~~~
yebyen
FWIW, cedar-14 stack is Ubuntu 14.04 and that's been the base of Deis Workflow
(now Hephy Workflow) for years. We've been meaning to upgrade to Heroku-16
stack (and eventually Heroku-18) but our resources are limited too, and we've
had to fight other dragons like getting a website together, and figuring out
the build system. (Deis Workflow was EOL'ed last year, and Team Hephy is the
fork/continuation of development, which we can do because Deis developers were
all gracious enough to keep everything OSS.)

So, back to the point, I'm sure you couldn't deploy your app on Heroku if
that's your requirement (because cedar-14 is deprecated, and not available for
new deployments anymore) but if you seriously wanted to try containerizing it
onto Kubernetes, and if you don't have other obstacles to 12-factor design
that you're also not prepared to tackle, then Hephy Workflow v2.19.4 might
actually work for you.

[https://teamhephy.com](https://teamhephy.com) and
[https://docs.teamhephy.com](https://docs.teamhephy.com)

I'm sure this probably won't work for you, for reasons you may not have
explained, but ... maybe you'd like to look?

I'm not doing a great job selling it, the one redeeming quality I've mentioned
is that it runs an outdated stack that you need ;)

------
stuaxo
The point about complexity is exactly right.

Every new thing that you add, adds complexity. If that thing interfaces with
another, then there is complexity at the interfaces of both.

Modern tools that atomise everything reduce density (and thus complexity), but
people aren't paying attention to the number of abstractions they are adding
and their cost.

------
pawurb
Dokku works quite well for personal projects and is easy to get started with
even without much dev ops exp [https://pawelurbanek.com/rails-heroku-dokku-
migration](https://pawelurbanek.com/rails-heroku-dokku-migration)

------
cs02rm0
I don't even want to use it for commercial projects.

It needs a certain scale before the overheads are worth it.

~~~
cs02rm0
Didn't expect this comment to come back from being voted down so much! Guess
it splits opinion.

~~~
vesak
People are crazy (literally) about Kubernetes.

------
tomc1985
Why is there always this foregone conclusion that everyone is going to do
their hobby projects on AWS or GCP or paid cloud platforms? You can get fast
baremetal servers off the gray market super cheap, pay once and you're set.

~~~
swebs
Also something like Linode, Digital Ocean or even NearlyFreeSpeech are all
happy compromises. That way you can have a static IP and not have to worry
about power, cooling, or networking.

------
throw2016
Containers can be easy to use once you drop devops. Containers are simple, and
a much more flexible alternative to VMs.

Unfortunately the devops community always wanted to promote themselves as the
only option for containers and even though they were based on the LXC project
they did not explain the technical decisions and tradeoffs made as they did
not want users to think there are valid alternatives. And this is the source
of fundamental confusion among users about containers.

Why are you using single process containers? This is a huge technical decision
that is barely discussed, a non standard OS environment adds significant
technical debt at the lowest point of your stack. Why are you using layers to
build containers? Why not just use them as runtime? What are the tradeoffs of
not using layers? What about storage? Can all users just wish away state and
storage? Why are these important technical issues about devops and alternative
approaches not discussed? Unless you can answer these questions you cannot
make an informed choice.

There is a culture of obfuscation in devops. You are not merely using an Nginx
reverse proxy or Haproxy but a 'controller', using networking or mounting a
filesystem is now a 'driver'. So most users end up trying Kubernetes or Docker
and get stuck in the layers of complexity when they could benefit from
something straightforward like LXC.

------
doppel
I feel that Kubernetes has a lot of "upfront" cost that needs to be tackled -
containerization, manifests for all the pods you want to set up, potentially
setting up the right persistent storage if needed, user access, logging, etc.
And this is still if you use a "hosted" solution with Amazon/Google/Microsoft,
if you set it up yourself there is a ton more complexity.

Using something you are familiar with, even if it's just a 10-line bash
script, a simple virtual private server, and an nginx config added there, is
usually faster than having to orchestrate everything. If you want to invest
the time in setting up Kubernetes for _all_ your personal projects, it would
probably make sense.
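That familiar nginx config on a VPS is often no more than a reverse-proxy
server block (the domain and port here are hypothetical):

```nginx
# Minimal reverse proxy for an app listening locally (hypothetical values)
server {
    listen 80;
    server_name myproject.example.com;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```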

Basically, is it worth it? [https://xkcd.com/1205/](https://xkcd.com/1205/)

------
marenkay
You could also have said that it is a very complex system where one still just
runs Bash scripts to solve exactly the same problems as on bare metal, VMs,
etc.

~~~
icebraining
That's why I never use computers, everything can be done with pencil and
paper, it just takes a bit longer.

~~~
marenkay
Can't hear you, still waiting for your reply letter! :-D

------
mcs_
I started using Kubernetes with GitLab months ago on the free account. Today
I'm using it for a side project. I haven't used kubectl since then. The
project is very simple but is still split into 3 different repos (all
connected to the same cluster). Even though the documentation is not clear
about how to connect different repos to the same cluster, after a couple of
hours of clicking some buttons in the GitLab interface, Auto DevOps was
enabled in all 3 projects.

In the office we do not work with Docker, containers, cloud, etc.; we run
legacy ASP.NET 2.0 on-premise without any kind of automation (just a couple of
us coordinating the releases and copy-pasting onto the customer's Windows
Server 2008).

Kubernetes for personal projects? In my case, after 10 years of on-premise
deployments, VM Ware, SQL Clusters, web.config, IIS, ARR and the rest of the
things related, YES!

I absolutely want 3 hosts for less than $100, a GitLab account for $4, a free
account on Cloudflare, code, and deploy.

~~~
dsumenkovic
Hello, Community Advocate from GitLab here.

We are glad to hear that you like using GitLab!

Regarding the documentation, have you checked out the following doc?
[https://docs.gitlab.com/ee/user/project/clusters/index.html#...](https://docs.gitlab.com/ee/user/project/clusters/index.html#installing-
applications)

------
tracker1
I would say to the author that for most small/personal projects, Dokku is
probably about as extreme as you want/need. Use Docker for (Windows|Mac|Linux)
locally as needed, and use dokku for your deploy target. A $40 DO/Linode vm
will go a long way for small scale. I've even setup round-robin load balanced
deployments, where the app just deploys to two hosts with the exact same
config. Works great on smaller things.

Of course, if you're in a workplace on a project likely to see more than a few
hundred simultaneous users in a given application, definitely look at what K8s
offers.

Edit: as to deploys, get CI/CD working from your source code repository.
GitLab, Azure DevOps (formerly VSTS), CircleCI, Travis and so many others are
free to very affordable for this. Take the couple hours to get this working,
and when you want to update, just update your source repo in the correct
branch.

------
thrower123
Having to worry about redundancy and scaling out on a one-man personal project
is a very enviable problem to have. Personally, I'm just going to stick with
some kind of paas that gives me a managed IIS or Apache type webserver that I
don't have to frig with, and focus my energies on actually building the
project.

------
tiangolo
I don't get why Docker Swarm mode is so underrated. Docker Compose is
excellent for local development, and Docker Swarm mode uses the same file and
is almost the same thing. Some minor additions in extra docker-compose files
and it's ready for production in a cluster. For the same $5 bucks in a full
Linux VPS with Linode or DigitalOcean (or even the free tier in Google Cloud)
you can get a full cluster, starting with a single node, including automatic
HTTPS certificates, etc. Here's my quick guide, with a Linux from scratch to a
prod server in about 20 min. Including full-stack app:
[https://github.com/tiangolo/full-stack/blob/master/docker-
sw...](https://github.com/tiangolo/full-stack/blob/master/docker-swarm-
cluster-deploy.md)
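Those "minor additions" are mostly a `deploy:` section in the same Compose
file (the service name and image here are hypothetical):

```yaml
# docker-compose.yml extras that Swarm mode reads (hypothetical service/image)
version: '3.4'
services:
  web:
    image: myorg/myapp:latest
    deploy:
      replicas: 2
      update_config:
        order: start-first   # start new containers before stopping old ones
```

`docker swarm init` on the VPS, then `docker stack deploy -c
docker-compose.yml myapp`; plain `docker-compose up` simply ignores the
`deploy:` keys for local development.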

------
djhworld
What are people classing as personal projects here? I have a bunch of
Raspberry Pis running Docker that I throw some things onto, and some custom
scripts to pull a new image and restart the container when I want to do an
update.

But they're tiny, tiny things that are very personal (i.e. they have 1 user -
me)

If you're getting to the point where you need to scale things using a
Kubernetes cluster or whatever, it seems to me like that thing has graduated
from "personal project" to an actual product that needs the features of
Kubernetes like resilience and so on.

I mean, I'd love the idea of having a Kubernetes cluster to throw some things
onto, but I really don't have the patience to set it all up right now; it
seems like way too much cost and effort.

------
Annatar
"I think the point I'm trying to make is: do you actually need all of this?"

Yep! For anything that goes beyond the initial viability test, I make an OS
package. SmartOS has SMF, so integrating automatic startup/shutdown is as easy
as delivering a single SMF manifest and running svccfg import in the package
postinstall. For the configuration, I just make another package which delivers
it, edits it dynamically and automatically in postinstall if required, and
calls svcadm refresh svc://...

It's easy. It's fast. The OS knows about all my files. I can easily remove it
or upgrade it. It's clean. When I'm done, I make another ZFS image for
imgadm(1M) to consume and Bob's my uncle.

------
hosh
I have deployed by hand, with Capistrano, with Chef, with Heroku, with
systemd, with Docker, with AWS EC2, and with Kubernetes.

Like everything, there are tradeoffs. If there were a fairly easy way to do a
one-node Kubernetes setup (say, Minikube), I would probably just go that
route. One doesn't have to use the full feature set of Kubernetes to get one
or two things that are advantageous.

As it is, I setup Minikube for the dev machines for the team I am on. I might
consider Kubernetes for my personal side project if I knew Minikube would do
well for machines under 1 GB of memory (it doesn't really).

The pre-emptible VMs that cost less than $5 are interesting, and I might do
something like that.

------
strzibny
Good response. I agree with the intent of the article. For one I have a
project where I just use Bash to set everything up (no containers what so
ever). It's simple and convenient. I have it set up with Let's Encrypt,
SELinux and git push deploy. The whole script is maybe like 100 lines of Bash
+ 2 configs (nginx and SELinux policy module).
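The "git push deploy" part of such a setup is typically just a `post-receive`
hook in a bare repo on the server; a sketch (the paths and service name are
hypothetical):

```shell
#!/bin/sh
# hooks/post-receive in a bare repo, e.g. /srv/git/myapp.git (paths hypothetical)
# Check the pushed branch out into the web root...
GIT_WORK_TREE=/var/www/myapp git checkout -f master
# ...then restart the app however the rest of the script does it, e.g.:
# systemctl restart myapp
```

Developers then add the server as a remote and deploy with `git push
production master`.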

For anybody who is interested in understanding these basic building blocks, I
decided to write [https://vpsformakers.com/](https://vpsformakers.com/).

------
kujaomega
I read the article. I'm sorry, but my impression of it is the following: why
use Kubernetes in your personal projects when you have got no idea about it?
One thing I have learned after working with Docker for two years is that once
you know it, you will put every service inside a container. It only takes 5
minutes to do and the benefits are huge. Kubernetes might be overkill, but
containerizing the apps is another matter.

------
beat
Back when I was trying to do a startup full-time, I avoided Kubernetes because
of the steep learning curve. Now that I'm back to being a regular employee,
I've learned Kubernetes out of necessity, and it's great. So if I go back to
working on the startup, I'll do it on Kubernetes, because it will save me time
and ops grief. But it will save time and grief _because_ I already know it
now.

------
jitl
I’m itching to replace my home server’s FreeBSD with Linux and Kubernetes. I
use it (& build dev tools for others) at work plenty so for me, the learning
curve is in the past. I’m not sure if I would recommend this journey for
others, but I also wouldn’t recommend FreeBSD to anyone, either. In both
cases, you know what you’re getting in to - something complex, opinionated,
powerful, and industrial strength.

~~~
tannhaeuser
You'll know this already but I'd say when you're coming from FreeBSD you'll be
disappointed with Docker in particular, because it serves no purpose in the
(usually) well-organized BSD world where the good stuff is built from source,
and developed to POSIX guidelines most of the time anyway. Docker is just a
workaround for the perceived mess of shared libraries in the Linux world of
multiple O/S vendors (by not using shared libs in the first place which could
be solved by statically linking everything), and FreeBSD's jails are IMHO
superior as a sandbox technology anyway.

~~~
jitl
I found the jails ecosystem as old and creaky as FreeBSD itself; the
technology is good, and everything makes sense, but after using Docker on Mac
and Linux for a while, I've started to prefer the lighter-weight and more
user-friendly abstractions.

If there was a Jailfile equivalent for FreeBSD and a command-line tool with
the same interfaces as docker, namely `docker run --rm -it ...`, I might be
staying on FreeBSD.

------
chilicuil
I agree with the point the post is trying to make, but not with the
justifications. I think it's wrong to compare Kubernetes vs rsync or Ansible;
Kubernetes solves a different problem, container cluster management. A more
appropriate comparison would have been with simpler solutions in the same
domain, such as Docker swarm or Nomad.

~~~
Docker_Docker
Or with smth even simpler, like condo
([https://github.com/prepor/condo](https://github.com/prepor/condo))

------
z3
Kubernetes has more advantages than pure horizontal scaling capacity. There
are services, secrets, networking, etc., which are useful for small projects
in the same way as big ones. I agree that it can be overkill, but I would not
throw away the whole of Kubernetes based only on assumptions about the size of
the projects.

------
gant
I am sort of a kubernetes person at work, and I just wasted 3 hours trying to
get a cluster up with Rancher. Everything worked fine, except somehow the
network started isolating namespaces and the nginx ingress couldn't reach my
service.

So I'm calling it quits for now. Just running the cluster requires a small ops
team.

------
dropmann
For me the overhead also played a big role: just to get a bare Kubernetes
cluster you already need 3 nodes + 1-2 load balancers.

In case you are using GKE, you actually need two ingresses to support IPv6 +
IPv4.

This adds up to something like 10 times the cost of a single droplet. For
personal projects this seems kind of wasteful to me.

------
lazyant
He makes some valid points, but I think if you want to write a blog entry
about why "B" is not good for a case and you'd rather use "A" (which you
already know), then you should at least try technology "B"; it doesn't seem
the author has even tried k8s.

------
wwarner
I absolutely love docker for development. It saves so much set up time, and of
course is handy later.

------
hardwaresofton
If you can devote some time and learn the underpinnings, Kubernetes is great
for personal projects. I personally use it for 1 business website, 1 blog, two
3-tier applications (the backends are SQLite though) and one 3-tier client
project. Where Kubernetes shines is that it handles most things in a
principled and self-consistent manner -- once you've made your way up the
learning curve, you can think _in terms_ of Kubernetes without many hiccups.

I'd argue that a lot of the complexity people find in Kubernetes is
_essential_ when you consider what it takes to run an application in any kind
of robust manner. Take the simplest example -- reverse proxying to an instance
of an application, a process (containerized or not) that's bound to a local
port on the machine. If you want to edit your nginx config manually to add new
upstreams when you deploy another process, then reload nginx, be my guest. If
you find and setup tooling that helps you do this by integrating with nginx
directly or your app runtime that's even better. Kubernetes solves this
problem once and for all consistently for a large amount of cases, regardless
of whether you use haproxy, nginx, traefik, or whatever else for your "Ingress
Controller". In Kubernetes, you push the state you want your world to be in to
the control plane, and it makes it so or tells you why not.
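For instance, the nginx-edit-and-reload dance becomes one declarative object,
whatever Ingress Controller sits behind it (the host and service names here
are hypothetical, using the `extensions/v1beta1` API):

```yaml
# Ingress routing a hostname to a Service; the controller (nginx, traefik,
# haproxy, ...) watches these objects and reconfigures itself (names hypothetical)
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myapp
spec:
  rules:
  - host: myapp.example.com
    http:
      paths:
      - backend:
          serviceName: myapp-web
          servicePort: 80
```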

Of course, the cases where Kubernetes might not make sense are many:

\- Still learning/into doing very manual server management (i.e. systemd,
process management, user management) -- ansible is the better pick here

\- Not using containerization (you really kinda should be at this point, if
you read past the hype train there's valuable tech/concepts below)

\- Not interested in packaged solutions for the issues that kubernetes solves
in a principled way that you could solve relatively quickly/well adhoc.

\- Launching/planning on launching a relatively small amount of services

\- Are running on a relatively small machine (I have a slightly beefy
dedicated server, so I'm interested in efficiently running lots of things).

A lower-risk/simpler solution for personal projects might be something like
Dokku[0], or Flynn[1]. In the containerized route, there's Docker Swarm[2]
+/\- Compose[3].

Here's an example -- I lightly/lazily run
[https://techjobs.tokyo](https://techjobs.tokyo) (which is deployed on my
single-node k8s cluster), and this past weekend I put up
[https://techjobs.osaka](https://techjobs.osaka). The application itself was
generically written so all I had to do for the most part was swap out files
(for the front page) and environment variables -- this meant that deploying a
completely separate 3-tier application (to be fair the backend is SQLite),
only consisted of messing with YAML files. This is possible in other setups,
but the number of files and things with inconsistent/different/incoherent APIs
you need to navigate is large -- systemd, nginx, certbot, docker (instances of
the backend/frontend). Kubernetes simplified deploying this additional almost
identical application in a robust manner massively for me. After making the
resources, bits of kubernetes got around to making sure things could run
right, scale if necessary, retrieve TLS certificates, etc -- all of this is
possible to set up manually on a server but I'm also in a weird spot where
it's something I probably won't do very often (making a whole new region for
an existing webapp), so maybe it wouldn't be a good idea to write a super
generic ansible script (assuming I was automating the deployment but not with
kubernetes).

Of course, Kubernetes is not without its warts -- I have more than once found
myself in a corner off the beaten path, thoroughly confused about what was
happening, and sometimes it took days to fix. But that's mostly because of my
penchant for using relatively new/young/burgeoning technology (for example
kube-router recently instead of canal for routing), and the lack of business
value in my projects (if my blog goes down for a day, I don't really mind).

[0]: [http://dokku.viewdocs.io/dokku](http://dokku.viewdocs.io/dokku)

[1]: [https://github.com/flynn/flynn/](https://github.com/flynn/flynn/)

[2]:
[https://docs.docker.com/engine/swarm/](https://docs.docker.com/engine/swarm/)

[3]: [https://docs.docker.com/compose/](https://docs.docker.com/compose/)

------
billylindeman
This seems like a grumpy and lame rationalization of not wanting to learn
something new

------
jeremychone
Yes, there is a learning curve, but once you have your system in place (for
local Kubernetes development as well as deployment), then even for websites it
is a breeze. But yes, you still need to be organized.

------
barbecue_sauce
What about using Kubernetes to manage several personal projects
simultaneously? Different namespaces, but with a single point of
administration.

------
michaelmior
> Of course, they won't work for _any_ project.

I assume "any" should be "every"?

~~~
emodendroket
Any as in just any.

~~~
michaelmior
Sure. It seems much more ambiguous written as it is though.

------
mychael
Thank you for writing this. We've reached peak Kubernetes fanboyism.

------
adamc
Minor word nit: "alright" is not a word. Or shouldn't be. It's not "all
right".

------
amthna
kubernetes for some, miniature american flags for others.

------
Annatar
"Oh man, the original article went way over the author's head."

No, the author of the Kubernetes article completely, so utterly missed the
point that it's not even funny: none of those Kubernetes complications are
necessary if one runs SmartOS and optionally, as a bonus, Triton.

Since doing something the harder and more complicated way for the same effect
is irrational, which presumably the author of the Kubernetes article isn't,
I'm compelled to presume that he just didn't know about SmartOS and Triton, or
that the person is just interested in boosting their resume rather than
researching what the most robust and simplest technology is. If resume
boosting with Kubernetes is their goal then their course of action makes
sense, but the company where they work won't get the highest reliability and
lowest complexity that they could get. So good for them, suboptimal for their
(potential) employer. And that's also a concern; moreover, it's a highly
professional one. I'm looking through the employer's eyes on this, but then
again, I really like sleeping through entire nights without an incident. A
simple and robust architecture is a big part of that. Resume boosting isn't.

~~~
sctb
We detached this subthread from
[https://news.ycombinator.com/item?id=18138716](https://news.ycombinator.com/item?id=18138716).

~~~
Annatar
The reason being...?

~~~
sctb
It relates primarily to the article and not the parent comment.

