
Maybe You Don't Need Kubernetes - ra7
https://matthias-endler.de/2019/maybe-you-dont-need-kubernetes/
======
gouggoug
The argument that Kubernetes adds complexity is, in my opinion, bogus.
Kubernetes is a "define once, forget about it" type of infrastructure. You
define the state you want your infrastructure to be in, and Kubernetes takes
care of maintaining that state. Tools like Ansible and Puppet, as great as
they are, do not guarantee your infrastructure will end up in the state you
defined, and you can easily end up with broken services. The only complexity
in Kubernetes is the fact that it _forces_ you to _think_ and carefully
_design_ your infra in a way people aren't used to, yet. More upfront,
careful thinking isn't complexity. It can only benefit you in the long run.

There is, however, a learning curve to Kubernetes, but it isn't that sharp.
It does require you to sit down and read the docs for 8 hours, but that's a
small price to pay.

A few months back I wrote a blog post[1] that, by walking through the
different infrastructures my company experimented with over the years,
surfaces many reasons one would want to use [a managed] Kubernetes. (For a
shorter read, you can probably start reading at [2].)

[1]:
[https://boxunix.com/post/bare_metal_to_kube](https://boxunix.com/post/bare_metal_to_kube)

[2]:
[https://boxunix.com/post/bare_metal_to_kube/#_hardware_infra...](https://boxunix.com/post/bare_metal_to_kube/#_hardware_infrastructure)

~~~
ken
It's pretty common for new technologies to advertise themselves as "adopt, and
forget about it", but in my experience it's unheard of that any actually
deliver on this promise.

Any technology you adopt today is a technology you're going to have to
troubleshoot tomorrow. (I don't think the 15,000 Kubernetes questions on
StackOverflow are all from initial setup.) I can't remember the last
[application / service / file format / website / language / anything related
to computer software] that was so simple and reliable that I _wasn't_
searching the internet for answers (and banging my head against the wall
because of it) the very next month. It was probably something on my C=64.

As Kernighan said back in the 1970s, "Everyone knows that debugging is twice
as hard as writing a program in the first place. So if you're as clever as you
can be when you write it, how will you ever debug it?" I've never used
Kubernetes, but I've read some articles about it and watched some videos, and
despite the nonstop bragging about its simplicity (red flag #1), I'm not sure
I can figure out how to deploy with it. I'm fairly certain I wouldn't have any
hope of fixing it when it breaks next month.

Hearing testimonials only from people who say "it doesn't break!" is red flag
#2. No technology works perfectly for everyone, so I want to hear from the
people who had to troubleshoot it, not the people who think it's all sunshine
and rainbows. And those people are not kind, and make it sound like the cost
is way more than just "8 hours reading the docs" -- in fact, the docs are
often called out as part of the problem.

~~~
westurner
> _As Kernighan said back in the 1970s, "Everyone knows that debugging is
> twice as hard as writing a program in the first place. So if you're as
> clever as you can be when you write it, how will you ever debug it?"_

What a great quote. Thanks

~~~
ignoramous
Some more quotes taken from [http://typicalprogrammer.com/what-does-code-
readability-mean](http://typicalprogrammer.com/what-does-code-readability-
mean) that you might find interesting.

Any fool can write code that a computer can understand. Good programmers write
code that humans can understand. – Martin Fowler

Just because people tell you it can’t be done, that doesn’t necessarily mean
that it can’t be done. It just means that they can’t do it. – Anders Hejlsberg

The true test of intelligence is not how much we know how to do, but how to
behave when we don’t know what to do. – John Holt

Controlling complexity is the essence of computer programming. – Brian W.
Kernighan

The most important property of a program is whether it accomplishes the
intention of its user. – C.A.R. Hoare

No one in the brief history of computing has ever written a piece of perfect
software. It’s unlikely that you’ll be the first. – Andy Hunt

------
gerbilly
I currently use Ansible to provision and configure about 50 instances with
various home grown services deployed on them, and I'm quite happy with that.

I don't use Docker, or Kubernetes. What am I missing here? This is an honest
question.

I don't particularly see the point of docker in production. I guess it can
help resolve clashes between dependencies, but I'm doing ok with Ansible.

One guy in another team just tells his application developers to throw
everything into a docker container and then deploys whatever they give him. I
guess that prevents dependency clashes, sure, but to me that seems like it's
just inviting different problems though. (Containers with a lot of
disorganized junk in them).

And we don't need to grow the cluster for now, so I don't see the point of
k8s. We already have log aggregation, service restarting, and service
monitoring/metrics.

To me, Kubernetes feels like a brand new alternative OS with crappy
documentation.

On the other hand I've been managing UNIX machines for 20+ years, and that I
do know how to fix (mostly, since they keep changing linux so much these days,
it's almost as hard to keep up with as javascript).

If somebody could tell me what I'm missing by using this approach, I'd be
grateful.

~~~
ngrilly
What do you use for log aggregation and monitoring/metrics? And how do you
deploy new versions with zero downtime? And how do you manage kernel/OS
upgrades?

~~~
gerbilly
> log aggregation

Graylog, with applications using various GELF libraries to send logs to it.

> metrics

graphite/grafana

> And how do you deploy new versions with zero downtime?

I don't deploy with _zero_ downtime, but within say ~5min. This is acceptable
to us. We use jenkins with a lot of tests to ensure components are in good
shape, then we mostly manually deploy them, but with a script. Sometimes we
deploy directly from jenkins.

> Kernel os upgrades

All our stuff is internally hosted, so we handle those infrequently. This
stuff isn't exposed to the open internet, so we upgrade when we get around to
it, or when we encounter a bug.

~~~
ngrilly
Thanks for following up! I'm not surprised because I noticed on HN and in
other online discussions that almost all the people that advocate against
Kubernetes and PaaS (Heroku, App Engine, Clever Cloud, Scalingo, etc.) are
willing to tolerate downtime during deploys and during OS upgrades.

~~~
gerbilly
> are willing to tolerate downtime during deploys and during OS upgrades.

Yeah, that's not a big deal for us. We don't have customers, and this
environment is for internal use at the company.

You could say it's a 'semi production' environment.

~~~
ngrilly
I like the "semi-production" concept :-)

------
ekidd
Over the years, I've deployed applications using various combinations of
custom RPMs, Chef, Heroku, Docker, ECS and Kubernetes.

If you can, you should probably deploy to Heroku. (Or a similar service.) It's
far cheaper than spending time on devops. Just run "git push" and you're
running.

When I've deployed on ECS (or other "simple" orchestrators), I've found that I
ultimately wound up re-inventing lots of Kubernetes features.

So if you can't use Heroku, but you _do_ know basic Docker, then it's worth
considering somebody's fully-managed Kubernetes. Google's is nice. Amazon's
is considerably more work. I hear Microsoft's is still a bit sketchy. And I'd
love to take a look at Digital Ocean's. But do _not_ attempt to host your own
Kubernetes if you can possibly avoid it.

If you do try Kubernetes, then read a book like _Kubernetes: Up & Running_
first. Kubernetes is not self-explanatory, but it's pretty straightforward if
you're willing to spend a few days reading.

Finally, don't overcomplicate it. Just use the basic stuff for as long as you
can before trying to layer all sorts of other tools over it.

~~~
arp242
Isn't Heroku really expensive though? I looked at it a few weeks ago and the
cheapest plan was $25/month (there's also a $7 "Hobby" plan, but that seems
just for, well, hobby stuff?)

I instead got a Linode VPS at $5/month, which gives me more than the $25
Heroku plan? Setting up a VPS is not very hard either – although this may
depend a bit on your environment, my app compiles to a static binary – and a
lot more flexible.

My devops stack thus far consists of scp and tmux.

~~~
mslate
Suggesting that $25/month is expensive illustrates how insanely cost-sensitive
this community is.

Don’t come to HN for a representative take of how people in US businesses
evaluate vendors and their pricing.

Do you know how much it cost my employer for me to spend an hour reading about
Kubernetes?

~~~
andrewingram
Not everyone is thinking in terms of an employer, or looking to build a
profitable business. One of the main sticking points for my side projects is
related to deployment environments.

Heroku is painless enough for unprofitable hobby projects, but far too
expensive. Using AWS, GCP or Azure directly is relatively affordable, but
requires a lot of extra work.

I'm aware of various tools that are supposed to make working with the various
cloud platforms much easier, but every time I see one of these used in
practice (eg at work), people seem to spend an enormous amount of time getting
things working properly. It still feels like we're missing a sweet spot for
hobbyists who don't want to invest their spare time learning about and
wrangling with devops, just to get a simple project up and running.

~~~
bww
Has anyone ever recommended Kubernetes to anybody as a production environment
for an "unprofitable hobby project"? In the context of the article we’re
discussing, the suggestion that $30 is an unreasonable operations overhead for
a team of four engineers is thoroughly preposterous.

------
TomBombadildoze
This starts with a lot of "you don't need Kubernetes" and then concludes with
a pretty compelling argument in favor of using Kubernetes.

From the "The Nomad ecosystem of loosely coupled components" section:

> It integrates very well with other - completely optional - products like
> Consul (a key-value store) or Vault (for secrets handling).

> At trivago, we tag all services, which expose metrics, with trv-metrics.
> This way, Prometheus finds the services via Consul and periodically scrapes
> the /metrics endpoint for new data.

> The same can be done for logs by integrating Loki for example.

> Trigger a Jenkins job using a webhook and Consul watches to redeploy your
> Nomad job on service config changes.

 _shudder_

> Use Ceph to add a distributed file system to Nomad.

> Use fabio for load balancing.

And the icing on the cake:

> All of this allowed us to grow our infrastructure organically without too
> much up-front commitment.

So if I understand correctly, the author (and his team) preferred to do all
the work of integrating/testing/debugging those components, rather than using
a tool that provides every single one of those features, and more, out of
the box.

Kubernetes isn't a trivial lift but it's a damn sight easier than trying to
roll a cheap imitation yourself.
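(For reference, the Consul-based metrics discovery quoted above boils down to a short Prometheus scrape config. A sketch only: the Consul address is assumed, and `trv-metrics` is the tag named in the article.)

```yaml
# Sketch of Prometheus + Consul service discovery, per the quoted setup.
# The Consul server address is an assumption; "trv-metrics" is the article's tag.
scrape_configs:
  - job_name: "consul-services"
    metrics_path: /metrics           # scraped periodically for new data
    consul_sd_configs:
      - server: "localhost:8500"     # local Consul agent (assumed)
    relabel_configs:
      # Keep only targets carrying the trv-metrics tag; Prometheus joins
      # Consul tags with commas, surrounded by leading/trailing separators.
      - source_labels: [__meta_consul_tags]
        regex: ".*,trv-metrics,.*"
        action: keep
```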

~~~
omn1
The point is that Nomad allows you to add more components as they are
needed, while Kubernetes is an all-or-nothing solution. Also, we used tools
like Jenkins and Ceph before. They have been tested and we trust them, so
there's no point in replacing them. Quite the contrary: a lot of internal
processes depend on them, so it would have been painful to migrate away from
them.

Nomad is not a cheap imitation of Kubernetes, it is a simple orchestrator
which favors composability over an all-in-one approach.

~~~
TomBombadildoze
> Nomad is not a cheap imitation of Kubernetes

 _Nomad_ isn't a cheap imitation of Kubernetes but all these components taped
together _are_. Kubernetes is no less composable than a system built on Nomad,
it just includes more functionality from day one.

If you're running containers with an orchestrator like Nomad, at some point,
you'll need DNS. So you do it yourself and then you're managing Nomad and DNS.
Then (we'll assume, because you're using a container orchestrator to manage
multiple services) you'll need service discovery, so you write some jobs and
event handlers to glue together Nomad and your DNS solution. And then, because
you're a team of responsible people, you'll want to store secrets securely, so
you graft in Vault. And _then_ , you realize zero-downtime config changes
would be great, so you slap Consul in there and write some sidecars or library
code to handle config updates. Then metrics. Then logs. Then rolling
deployments. And it continues, indefinitely, as you add features.

If you started doing this five years ago, fine. If you start doing this
_today_ , you're out of your mind. You're just doing work for the sake of "but
it's composable". There's a reason why teams still build applications on
frameworks like Rails and Django even though they don't need half the features
--it's more important to them to get something functional up and running than
it is to satisfy delicate sensibilities about only using what's needed.
Kubernetes is the analogue in the world of systems and operations.

~~~
insanejudge
your description of "grafting in" consul and vault _is_ exactly what it feels
like to do that with kubernetes, and it happens rather often.

Using industry standard components like consul and vault is not really a
second thought for k8s in production, leaving you with duplicated hunks of
infrastructure to step around, where the idea of tacking on kube-dns and
kubernetes ~secrets~ onto something else is rather laughable. This, again, is
the point which was being made -- you're forced to bear the brunt of
kubernetes' NIH.

Given the contrived situations here, inventing some wacky custom mousetrap to
bind Nomad to DNS rather than using the "slapped in"
([https://www.nomadproject.io/docs/configuration/consul.html](https://www.nomadproject.io/docs/configuration/consul.html))
Consul integration for DNS, and writing config update/logs/rolling deployment
code rather than using the core Nomad scheduler features to do that, I'll
assume that you don't actually know, and this is FUD?

------
chucky_z
I've started heavily using Nomad recently and it's a real joy. Adding things
in is just so easy because it doesn't do much and (mostly) everything is very
well defined. I cannot recommend it highly enough.

For example, I set up a log pipeline recently spanning multiple data centers
with full mTLS. It wasn't that hard, and thanks to Vault all the certs are
refreshed at regular intervals all over the place. Pretty great!

~~~
SteveNuts
+1 for Nomad, I'm extremely impressed with all of the Hashicorp suite.

Single binary deployments and upgrades are helpful.

~~~
mirceal
even Terraform?

~~~
chucky_z
I manage everything soup to nuts with Terraform. If you know something vastly
better please let me know, especially considering the changes to 0.12. It's a
very well integrated system.

~~~
mirceal
what kind of infrastructure are you managing? if you’re in the aws cloud,
CloudFormation is the gold standard for “infrastructure as code”.

~~~
jugg1es
says you

~~~
mirceal
Terraform does not automatically rollback in the face of errors. Instead, your
Terraform state file has been partially updated with any resources that
successfully completed.

do you know who can rollback and leaves your infra in a consistent state? can
you guess?

~~~
jugg1es
Yea, it is surprising that you can't roll back when you first start with
Terraform, but as you gain more experience with it, you realize that not
rolling back means you can resume instead. And if you need to roll back, you
can do so by just running destroy. It's actually a feature, not a bug.

~~~
mirceal
no it’s not. i want to leave my infrastructure in a consistent state. i am in
state A and want to move to state B. I want it to work. I don’t want a half-
assed attempt to make it work.

what does terraform bring to the table? I have to use HCL to describe my
infrastructure in terms that are NOT cloud agnostic (therefore introducing
another layer) and in the face of adversity it throws its hands in the air and
now you’ve got to figure out what went wrong, manually, by yourself. This is
what I call True Devops (TM).

I have seen Terraform crap out and it cannot recover. It cannot move forward,
it cannot rollback, it cannot destroy. It’s stuck. At that point you start
praying that someone really understands the underlying cloud + knows the
shenanigans terraform plays to fix it now and also make terraform happy moving
forward.

we’re talking basic stuff here.

i don’t want to go into more advanced issues like: losing network
connectivity, terraform process crashing (think oom conditions) or being
killed or non-responsive cloud apis.

not to mention that destroying infrastructure you’ve created almost never
works (unless it’s trivial infrastructure).

based on what I’ve seen up until now I would not use terraform in a production
environment.

~~~
jugg1es
If I had experienced what you just described, I would probably have the same
opinion - but after the initial learning curve, I haven't really had any of
the problems you've listed. The only times I've had to go manually modify
cloud resources to fix something was always because I was doing it wrong in
the first place.

On the other hand, CloudFormation is not perfect either. The rollback does not
work 100% of the time and I've had it roll back a set of templates that took
45 minutes to deploy because there was some inconsequential timeout that could
have been ignored. I've also had pre-built templates developed by AWS outright
fail, which is strange considering AWS themselves built it.

Use what works best for you and your team.

~~~
mirceal
so I ran into the issues I’m mentioning while test driving it.

i have never experienced unpredictable behavior from CloudFormation, but it’s
possible YMMV.

------
jugg1es
So many comments here are touting the benefits of this, that or the other, but
aren't mentioning Nomad at all. Nomad is definitely worth a try even for large
deployments. Yea, it doesn't autoscale without 3rd party tools, but you don't
always need that.

If you just need to keep your jobs running, load balance to those apps, use
externalized config, and have a simple UI to view system status, then
Nomad/Consul/Fabio really works great.

------
doublerebel
I've been running SmartOS containers for 6+ years and it's the best of all
worlds. Easy service management, works with regular binaries or Dockerized
deploys, SDN always built-in, service discovery built-in (CNS), even Windows
VMs available. Best kept secret in the cloud even though the entire stack is
open-source.

On the orchestration side I use Puppet Pipelines (formerly Distelli), which
works with standard VMs, SmartOS containers, Docker, and Linux images, from
packaging through testing and deployment. And they just lowered pricing (!??!)

There's no reason to be locked into Kubernetes unless you're sure you need it.
SmartOS scales in seconds. As CTO I'm always checking out the next platform,
but all the newer solutions look much more complicated.

EDIT: Heroku has weird errors when you push CPU or RAM, also the CPUs aren't
so fast, Linode can't be trusted, Digital Ocean is OK but is still very manual
roll-your-own, AWS is a behemoth and medium fast, Google is fine but
specialized and hard to migrate away from, OpenStack isn't for small orgs...
I've run production services on them all. It's easy to try a new service
with a Pipelines-style orchestration because it doesn't care which service
it's talking to. And it makes it obvious when platform-specific instructions
or concessions are required.

~~~
winrid
Thanks for your insight.

Why can't Linode be trusted?

~~~
bifrost
They've proven many times that they're not. Read through slashdot and google
for "linode bitcoin stolen"

------
Daishiman
For most small deployments you are far better off using Ansible playbooks or
similar solutions. For relatively small and medium-sized deployments,
declarative orchestration management seems like a black box with an obscenely
high learning curve, which isn't justified if your deployment and scaling
needs fall within the vast majority of use cases.

~~~
hosh
If someone on a small team brings K8S expertise to the table, it is worth
considering. Otherwise, yeah, if you're at the stage where your team is
focusing on building the product, then K8S is going to be a distraction.

~~~
zerkten
> If someone on a small team brings K8S expertise to the table, it is worth
> considering.

Perhaps you have some unstated assumptions here like a small team managing a
relatively large infrastructure for their team size? Or, a small team in a
much larger infrastructure department.

If it were me, I'd put lots of disclaimers around a single person bringing
any kind of specialist expertise to the average small team. I think you need
to lean on that person to level the whole team up. If there is any chance of
that individual leaving, you risk leaving the team high and dry.

~~~
richardknop
You can use managed Kubernetes. And that person with expertise would mostly
influence design decisions in order to write software that is easily deployed
to Kubernetes and orchestrated in containers. They wouldn't be a devops
person taking care of the k8s cluster; obviously that makes no sense for a
small team.

------
AaronM
Some fair points are made here, and I think running Kubernetes the hard way,
without a managed service, does introduce a lot of complexity. However, it's
not that bad when using a managed service such as EKS or GKE.

~~~
drdaeman
Is there any significant difference between e.g. `kubeadm` and `gcloud
container clusters`?

I believe when the cluster malfunctions, you're still on your own figuring
that out. I had a stuck node on GKE just recently, which broke our CI. The
GCE machine was there, but the node wasn't visible with kubectl, and a quick
SSH onto the VM didn't reveal any immediately obvious issues. Auto-repair was
enabled but hadn't worked -- and TBH I have no idea how I should've diagnosed
why it didn't (if there's even a way).

Thankfully, this was an issue with the node, not the master, and the CI nodes
are all ephemeral, so I quickly dismissed the idea of debugging what went
wrong and just reinitialized the pool. Could've done the same with bare
metal.

~~~
AaronM
I don't know. I haven't had any random failures using EKS. Anything crashing
was of my own doing.

------
jordanbeiber
I did an eval about 4 years ago of schedulers and service discovery software.

Lots of cool stuff around (DC/OS, Triton, Hashi, k8s, Flynn, etc.)!

After 2 days with k8s and not so much as a bootstrapped cluster, we turned to
HashiCorp.

Within 10 min I had a container running, registered in service disc (consul)
on my laptop.

We ended up using the HashiCorp stack in production with great success. Sure,
some custom services were needed (service management, scaling and such).

Running primarily on-prem, the complete simplicity and unopinionated approach
is an edge.

It allowed us to implement MACVLAN at first, then the Ubuntu Fan network, and
to integrate totally with existing infrastructure.

Now having spent the last 8 months implementing k8s I’m torn.

I've built from scratch, used kubeadm, kops and now EKS. From 1.13, kubeadm
honestly works really well and is my preferred way of deploying k8s.

Still it’s a beast... running large deployments with many teams... there’s
just so much... stuff.

One GH issue leading down a rabbit hole of other GH and gists with conflicting
or not working configurations. I’ve had co-workers bail on the project after
mere hours of digging through code and GH discussions and SIG docs.

Nomad/Consul and its concise docs are a breeze in comparison.

Torn. Cause I see the point of k8s, just not sure about the project. :)

------
bifrost
I'm very much in the camp of "Kubernetes is a problem factory".

Personally, I've found very few problems that it solves and more problems that
it creates. That said, its probably a great time to be a Kube consultant!

~~~
mirceal
if your team is only a handful of devs who work on the thing that pays the
bills + do k8s, yes. bad time. the only way i could justify running this is if
you’re using a managed solution like gke or eks.

otoh, if you’re in the cloud, you already have the primitives to build your
stuff w/o the k8s headache (vms, autoscaling, managed svcs). but that’s just
my opinion. the koolaid is strong.

~~~
bifrost
Honestly, if you have a $GENERIC_SYSTEM that you deploy your code to, you
don't need the wasteful overhead.

If you need to (vs want to) rebuild your OS/Environment regularly, you're
doing it wrong. IE: PaaS is for you. That said, I've found replicating OS &
Environment extremely challenging on certain operating systems so I understand
the appeal. I also don't build products on those operating systems and I've
found myself much happier :)

~~~
mirceal
yeah. it depends a lot and k8s has its place in some scenarios, but in most
cases you don’t need it and its use is purely driven by koolaid

------
jjeaff
The thing that all the "you don't need Kubernetes" articles seem to ignore or
skim over is the huge network effect of a popular setup like k8s.

Not only is it much easier to find services (like helm) and articles for k8s,
but there are so many ready to go configurations. For most major services,
there is already a ready to go helm chart or at least some well vetted yml
config on GitHub to start from.

And most any issue I have come across, I have been able to search and find
solutions.

~~~
insanejudge
The thing the 'there's a ready to go helm chart' folks always seem to skim
over is that outside of an absolute startup position, where you can curlbash
to hello world heaven, you will have integration to do, often huge amounts of
customization (read: complete rebuild) to take a toy helm app configuration
to a production configuration. And I've never seen even so much as a nascent
k8s (or nomad, mesos, etc.) buildout leveraged in a preexisting environment
without quite a bit of pipeline, acl/naming-convention and ux glue.

The proliferation of operators, CRDs, things like k3s and the growing
ecosystem of vendor distributions makes 'stock k8s' an increasingly nebulous
target.

~~~
jacques_chester
I think CRDs are mostly going to be a quagmire. It's Wordpress plugins all
over again, if Wordpress plugins could also affect the OS kernel.

------
bribroder
One argument that I think has been propelling Kubernetes: projects using
orchestrated containers benefit from a consistent application environment,
orchestrated deploys can be distributed much more easily when the entire
stack configuration can be packaged, and the teams deploying these projects
get more out-of-the-box functionality like autoscaling. The zero to
jupyterhub project is a great example:
[https://zero-to-jupyterhub.readthedocs.io/en/latest/](https://zero-to-jupyterhub.readthedocs.io/en/latest/)

------
freedomben
The author mentions coupling with Kubernetes due to ConfigMaps and such. That
is certainly a risk, but if you use environment variables for config (which
IMHO you should), then k8s makes it trivial to just map your ConfigMap to
environment variables.

I've got a few apps running in k8s and none of them know anything about k8s.
If avoiding coupling with k8s is what you want (and I agree it's the way to
go) then it really isn't that bad. The hard part is not knowing what you don't
know.

~~~
snupples
I think the real question then is why are people running apps that need to be
k8s-aware at all? K8s already provides a number of primitives to solve many
problems without reinventing the wheel and going "cloud-native" within the
application. There should be a very strong reason for any need to access the
api directly.

------
ericsoderstrom
> Kubernetes is an Ancient Greek word meaning "More containers than
> customers." [1]

[1]
[https://twitter.com/srbaker/status/1002286820078571532?lang=...](https://twitter.com/srbaker/status/1002286820078571532?lang=en)

------
humbleMouse
If people would spend 1/10th the time they spend complaining about Kubernetes
actually learning Kubernetes, maybe more people would know Kubernetes and
realize it's not even that difficult to administrate.

------
ashton314
Rancher [1] looks really nice for a guided use of Kubernetes. I saw a demo on
how to use this with Digital Ocean's Kubernetes clusters; the presenter went
from having a blog in a container to having the blog on a K8s cluster behind a
domain name in 30-ish minutes. Anybody using Rancher?

[1]: [https://rancher.com](https://rancher.com)

~~~
dmlittle
Do you have any info regarding Rancher pricing? There is no info on their
website other than reaching out to them...

~~~
V99
(Rancher employee) All the software is free to use and open source (Apache 2).
Customers run the same software that's on GitHub/DockerHub.

We sell "enterprise support", which means something like "starting at mid-5
figures", and occasionally professional services engagements around/outside
the product.

------
fred909
+1 Nomad is great. You can run services that aren't dockerized easily.

------
reilly3000
It's usually when these kinds of articles come out that their subjects are
finally worth paying attention to.

Immutable infrastructure is the right thing to do. Maybe K8s isn't the best
or most convenient way to achieve that, but it gets you almost all the way
there to fully declarative ops. I really think there is a lot of room to
improve the ergonomics. That said, there is a business/project opportunity. I
want to be able to build a values.yml file in a gui, on my phone, with bounded
sliders and smart recommendations for values, along with contextual help and
community templates. We have chart repos, but the default values suck for
production, and learning what the values are and should be set to for your use
case leads to a painful traversal of abstractions. Maybe there is already a
project or vendor out there I'm missing?

------
inopinatus
All any of us really need to do is put files on a server and start a process.
The question is how much scaffolding you want to build and manage to achieve
it. The answer should be “as little as possible”.

------
nerdbaggy
At work we have about 50 containers, which total RAM usage less than 32GB and
about 10 CPU cores.

Docker swarm works great for us

~~~
marmaduke
I was scrolling down the comments for Swarm. Do you run multimachine? How does
storage work? (I’m looking for an container thingy for bare metal)

~~~
nerdbaggy
Yup multimachine. It works fantastic. We have a netapp san
[https://hub.docker.com/plugins/trident](https://hub.docker.com/plugins/trident)
But we don’t really use it except for databases

------
namelosw
1. Always think about Heroku first, seriously. Because your need is to deploy
an application.

2. If you thought about 1 but Heroku is really not for you, go Kubernetes
directly. It's not that hard at all.

3. Choose any other stuff only when you know what you're doing.

Kubernetes is not simple or good. The API is mostly cumbersome. But it's the
first thing that makes sense in this field. Before Kubernetes there were just
toys or rocket science.

------
mancerayder
I'm attempting to 'transition' -- as a DevOps / infra guy professionally
rather than pure dev -- does anyone have any recommended resources for
reading non-dry, fairly example-filled docs to get started? The Kubernetes
O'Reilly book seems to be very broad and overly high-level, judging from the
reviews (and I remember best if I read "why" and "how").

Thanks in advance

~~~
dankohn1
This is particularly well-liked for a more in-depth review of Kubernetes:
[https://github.com/kelseyhightower/kubernetes-the-hard-way](https://github.com/kelseyhightower/kubernetes-the-hard-way)

------
rbjorklin
I’ve come to much the same conclusions as the author in this blog post and
have put together a small Nomad demo in case someone is interested:
[https://github.com/rbjorklin/nomad-demo](https://github.com/rbjorklin/nomad-demo)

------
kissgyorgy
> It takes a fair amount of time and energy to stay up-to-date with the best
> practices and latest tooling. Kubectl, minikube, kubeadm, helm, tiller,
> kops, oc - the list goes on and on.

When you start with a technology, you need a solid basis. You need to fully
grasp the given technology in its most basic form, and get stuck on problems
for days, so you can fully grasp which problem each of these tools solves. If
you know "naked" Kubernetes inside and out, then you can add other things to
your toolset. Until then, you don't need Helm, Tiller, kops, Knative or any of
those distractions.
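For what it's worth, "naked" Kubernetes here means something like the following: a plain Deployment manifest applied directly with kubectl, no Helm or templating (the names and image are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: hello
        image: example/hello:1.0   # placeholder image
        ports:
        - containerPort: 8080
```

Applied with `kubectl apply -f deployment.yaml`. Once plain manifests like this (plus Services and ConfigMaps) feel boring, the higher-level tools start to make sense.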

~~~
tarikjn
Isn't Knative exactly supposed to solve this problem by giving a portable
Heroku-like dev experience?

~~~
kissgyorgy
Doesn't matter. I talked about the initial learning about Kubernetes. It's
just too much to learn everything at once.

~~~
jacques_chester
Restating: the point is to not have to "learn everything". If your platform
makes you do hours of homework, then it's not a platform, it's a pallet of
bricks.

Heroku got this right with `git push heroku master`. IMO Cloud Foundry got it
even more right with `cf push`.

Knative is likely to get to that point, probably with the same means as Heroku
and Cloud Foundry: Buildpacks.

I'm wildly biased of course, having worked on Cloud Foundry, Buildpacks and
Knative.

------
he0001
I’ve been using Kubernetes for a while (1.5 years), and I can’t say I’m very
impressed. For anything simple it works fairly well, but as soon as you get
outside the “normal” workflow it gets downright hard. Certain areas are opaque
AF, and if things don’t work it’s almost impossible to find out why, which
means it’s impossible to fix something fast when there’s something burning. I
think there’s an irony to k8s: it’s supposed to fix the “hard problems” of
clusters, but it really only fixes the easy part of them, and obscures the
rest.

------
AnthonyWnC
The overall trend of IT infrastructure goes from traditional DC -> IaaS -> PaaS
-> FaaS. k8s sits at PaaS, and the logical next step is FaaS/serverless.

In fact, today you can surely do pure FaaS without needing k8s at all, by
leveraging cloud-provider managed services for any long-lived type of workload.
Of course, the tradeoffs will be the usual suspects: flexibility, control, cost,
etc. There are still cases where you would want a cluster manager like k8s (at
least for now).

------
guiriduro
Any reason you didn't consider Docker swarm?

~~~
omn1
I like swarm. Seems like Docker is moving away from it however. At least,
that's what the commit numbers say [1]. The fact that Docker integrated
Kubernetes into their desktop solutions is also telling.

At this point, I would rather have a closer look at either Nomad or k3s for
smaller clusters.

[1]:
[https://github.com/docker/swarm/graphs/contributors](https://github.com/docker/swarm/graphs/contributors)

~~~
sandGorgon
Wrong repo.
[https://stackoverflow.com/questions/38474424/the-relation-between-docker-swarm-and-docker-swarmkit](https://stackoverflow.com/questions/38474424/the-relation-between-docker-swarm-and-docker-swarmkit)

Swarmkit is what is being used in production and is fairly popular.

[https://www.digitalocean.com/currents/june-2018/](https://www.digitalocean.com/currents/june-2018/)

_"While @kubernetesio was most popular overall, the smallest companies (1-5
employees) use @Docker Swarm more often (41 percent use Swarm vs. 31 percent
that use Kubernetes)."_

------
marmaduke
I’m curious how this compares to Pacemaker and Corosync for core infra duties
like an active/passive failover for a SAN, or a virtual IP. It doesn’t seem
like they’re in the same category in terms of typical usage, but both can
handle the notion of a cluster.

------
hjacobs
Triggered by Matthias' blog post, I wrote down my thoughts on the topic here:
[https://srcco.de/posts/why-kubernetes.html](https://srcco.de/posts/why-kubernetes.html)

------
kissgyorgy
I'm pretty sure you don't need Kubernetes.

------
xaduha
Docker Compose is great, Docker Swarm isn't.

------
sbhn
Yes i do, [https://homeidea3d.com](https://homeidea3d.com) runs on kubernetes

------
eric_khun
anyone got recommendation on how to properly set CPU limits?

I've set limits about 2x higher than the max CPU usage, but I'm still seeing
some throttling, which causes performance degradation. I've ended up setting
deployments without limits to avoid throttling, but that's definitely not nice
if one pod goes crazy for some reason...

~~~
oso2k
Not sure what you mean by throttling. I tell my customers to do 4 things:

    1) Set a Request at about 50% of max CPU usage
    2) Set a Limit at about 90% of max CPU usage
    3) Configure a HorizontalPodAutoscaler that kicks in at 70%
    4) Scale out, in advance, to 2 or 3 pod replicas
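A minimal sketch of those four steps, assuming a container whose observed max usage is around one CPU (all names and numbers below are placeholders to adapt):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 2                      # step 4: start with 2-3 replicas
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: example/myapp:1.0   # placeholder image
        resources:
          requests:
            cpu: "500m"            # step 1: ~50% of observed max
          limits:
            cpu: "900m"            # step 2: ~90% of observed max
---
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: myapp
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  minReplicas: 2
  maxReplicas: 6
  targetCPUUtilizationPercentage: 70   # step 3: scale out at 70%
```

Note that the HPA scales on utilization relative to the *request*, so keeping the request at ~50% of max gives the autoscaler headroom to react before the limit is hit.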

------
echohack5
Have you looked at [https://habitat.sh](https://habitat.sh) ?

~~~
omn1
I actually did and I love the idea!

In my opinion, that's gonna be the next wave of container management
solutions. If you think about it, you shouldn't care about container
orchestration at all; you should care about services.

Habitat ships the orchestrator together with every service. That sounds
strange and redundant at first, but if you think about it, it has some nice
advantages:

* Services are fully self-contained and can run anywhere (in dev, stage, prod) without any changes.

* By tagging services, one can create a service mesh quite easily without having to think about network topologies. Services find each other automatically (I think via a Raft/gossip protocol).

I haven't looked at it too closely yet, but it's on my list. Fair disclaimer:
I know the folks behind it from the Rust community.

------
ilaksh
I guess I am a little behind the times, but no one uses the orchestration
stuff that comes with Docker?

~~~
andbberger
I use it a fair bit for development: it provides a seemingly simple way to keep
things organized, do 'infra-as-code', and have your ducks in a row with
reproducible stack deployments, without the overwhelming overhead of k8s.

It's an absolute dumpster fire though. A serious convoluted mess. They try to
indelicately cram something with incredible intrinsic complexity into a happy
little (incredibly leaky) abstraction.

docker compose is really what I want, simple service management on one
machine. Except, it can't do really basic shit like restart services (this
isn't all that obvious at first). So soon you'll find that you need to use
docker swarm on the one node, which is similarly chock full of inexplicable
and unforeseeable shortcomings.

docker swarm/compose is an interim step on your journey to a better solution.
In my judgement it's fine while you're developing your core apps, but once you
need to get serious and go into production, it's time to find a big boy tool.

~~~
ngrilly
I did the same thing. Docker Compose was what I wanted for a single-machine
deployment, but I soon discovered it was unable to manage a rolling deploy
(start new containers before stopping the old ones). So I switched to Docker
Swarm on a single node, but I have to agree there are a lot of unexpected
shortcomings. I'd like it to be more polished, and I'd like Docker Compose and
Docker swarm mode to converge/merge at some point.
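For reference, in swarm mode the start-new-before-stopping-old behaviour can be declared in the compose file's `deploy` section (compose file format 3.4 or later; the service and image names are placeholders):

```yaml
version: "3.7"
services:
  web:
    image: example/web:latest     # placeholder image
    deploy:
      replicas: 2
      update_config:
        parallelism: 1
        order: start-first        # start the new task before stopping the old one
      rollback_config:
        order: start-first
```

Deployed with `docker stack deploy -c docker-compose.yml mystack`; the `deploy` section is ignored by plain `docker-compose up`, which is part of the compose/swarm divergence mentioned above.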

I guess the "big boy tool" you adopted after that is k8s?

~~~
andbberger
No I never got the point of needing the big boy tool. But it would have been
k8s.

------
jpomykala
I only use Docker, because I'm too lazy to learn Kubernetes or Ansible. :)

------
auslander
Should we use containers at all is the _first_ question. Why do people skip
over this one? :)

------
squid3
Maybe all you need is NodeChef. Saves you time and money.
[https://www.nodechef.com/](https://www.nodechef.com/)

------
nukeop
Pretty much just an advertisement for Nomad.

~~~
csomar
I'm not sure why you are getting downvoted, but I'm a bit suspicious. Either he
really likes Nomad so much that he went ahead and put their logo on a scooter,
or a disclaimer is lacking.

~~~
omn1
I put the logo on a scooter. Not affiliated in any way with HashiCorp.

------
sdan
I spent a few days setting up a Kubernetes cluster with some Docker knowledge.
Other than creating the config file, I didn't see too much of a problem. It
definitely has a learning curve and takes a lot of time to set up (even with
GKE).

From reading this article, I'm liking Nomad.

------
erulabs
It's a bit strange to say the best part of Nomad is its ecosystem, then to go
on to say it has 10x fewer contributors than Kubernetes.

Additionally, one of the benefits is... a lack of features?

> The entire point of Nomad is that it does less: it doesn’t include fine-
> grained rights management or advanced network policies, and that’s by
> design.

PodSecurityPolicies and advanced Network Policies are not a default or
required part of Kubernetes, either...

~~~
omn1
10x fewer contributors than Kubernetes is still over 200 contributors (actually
more like 300 at this point). That's one of the bigger open-source projects in
my book. This number pales in comparison to Kubernetes, but show me a project
that doesn't.

That doesn't mean that the ecosystem is not vibrant, however. The HashiCorp
stack is just more modular than Kubernetes. Consul and Vault are standalone
products for example.

With less functionality, it's harder to build a system which depends on
features that only exist in a particular orchestrator. This makes it easier to
replace Nomad with something better in the future. I'm not sure if I'd want to
work on migrating Kubernetes clusters in five years. :)

~~~
shaklee3
There's almost nothing proprietary about Kubernetes. As the next post said, it
uses DNS. Which part are you referring to that locks you in?

