
Do I Need Kubernetes? - mocko
https://mbird.biz/writing/do-i-need-kubernetes.html
======
gravypod
> Run your applications in Docker from day 1 (with docker-compose it’s as
> valuable for dev as it is for production) and think carefully before letting
> your applications store local state.

I think this is the key takeaway for many startups. Get it so you:

    
    
        1. Have a single-command way to bring up your entire backend
        2. That command can run on your dev machine
        3. You document all of the settings for your containers and deps 
    
    

Once you have that in a docker-compose.yml file, migrating to something like
kube when you need health checks, autoscaling, etc. is easy.

The only thing you must be willing to do is bash people over the head to make
everything you run in your company run in this environment you've created. No
special deployments. Establish one method with one tool and go from there.

At every company I've worked at, I've brought a single-command `docker-compose
up -d` workflow, and people were amazed by the productivity gains. No more
remembering ports (container_name.local.company.com), no more chasing down
configuration files and examples, no more talking to shared DBs, etc.
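
As a rough illustration, a compose file along these lines is what I mean
(service names, images and ports here are just placeholders, not a
recommended setup):

        version: "3.8"
        services:
          api:
            build: .                       # your application image
            ports:
              - "8080:8080"
            environment:
              DATABASE_URL: postgres://app:app@db:5432/app
            depends_on:
              - db
          db:
            image: postgres:12
            environment:
              POSTGRES_USER: app
              POSTGRES_PASSWORD: app
              POSTGRES_DB: app
            volumes:
              - db-data:/var/lib/postgresql/data
        volumes:
          db-data:

With something like that checked in, `docker-compose up -d` brings up the
whole backend, and the file doubles as documentation of every setting and
dependency.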

~~~
thom
Can I ask people who _don't_ use Kubernetes and maybe have architectures built
on proprietary cloud services: how do you manage this?

~~~
tootie
Ask every enterprise that existed prior to 2018? Even for small and mid-size
enterprises, manual deploys aren't that huge of a deal. Automation with some
bash scripts can get you pretty far, especially if you're only managing a
monolith or two. I've worked on some platforms that absolutely defied
automation due to their proprietary nature. It just meant that we'd need a few
people to spend 1-2 hours every two weeks to run a deployment. That cost,
added up over even a year or two, is probably less than paying for a redesign
and replatform.

I've also done deploys using managed services like ECS/Fargate or Heroku,
where we just build an artifact and push it to the host with a script. A stack
that's a load balancer, a stateless app server, then a DB and/or file store
can be provisioned manually once and then not really worried about for a long
time.

~~~
thom
To be clear, I was more asking how people achieve the sort of fast
bootstrapped dev environments the parent comment describes using k8s,
especially where you’re not necessarily talking about simple topologies like a
monolith plus a database.

~~~
peterwwillis
Option 1: Everyone develops on the CI/CD platform.

Ops gives devs "disposable environments" to do their testing in. Basically,
GitHub + Jenkins + Terraform + AWS. I've used this to stand up real
infrastructure every time a PR is opened, and pipelines run against the infra.
Code and infra match each other because they both exist in the same
branch/repo (monorepo; you can of course do multi-repo, just takes more
coordination). It's all destroyed as soon as the PR is closed, but you can
also keep it up to do dev work against. You can also keep one copy of the
latest master branch up at all times (the "dev" or "cert" or "test"
infrastructure) as a shared test environment. Downside: you have to have
access to the network (which is in AWS, so that's not so difficult). Upside:
the dev environment always mimics production, devs don't need to do anything
to stand it up.
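
A rough sketch of what that flow can look like (using GitHub Actions rather
than Jenkins just to keep it short; workspace names and steps are
illustrative, not our exact setup):

        name: pr-environment
        on:
          pull_request:
            types: [opened, synchronize, reopened, closed]
        jobs:
          ephemeral-env:
            runs-on: ubuntu-latest
            steps:
              - uses: actions/checkout@v2
              - uses: hashicorp/setup-terraform@v1
              - name: Create or update the PR's infrastructure
                if: github.event.action != 'closed'
                run: |
                  terraform init
                  terraform workspace select pr-${{ github.event.number }} || terraform workspace new pr-${{ github.event.number }}
                  terraform apply -auto-approve
              - name: Destroy the infrastructure when the PR closes
                if: github.event.action == 'closed'
                run: |
                  terraform init
                  terraform workspace select pr-${{ github.event.number }}
                  terraform destroy -auto-approve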

Option 2: A bunch of mocks, and scripts to stand up stuff in Docker, using
docker-compose or something else. This becomes a bit of a problem to manage
with lots of devs, though, and docker-compose is not a good model for
production deployments.

~~~
thom
Thank you for describing option 1, which is really what I was picturing but
seemed ambitious enough that maybe nobody tried it.

------
tombh
I've always seen the story like this:

It all started with Ruby. Ruby's syntactic sugar inspired the "syntactic
sugar" of tooling, primarily Bundler and RSpec. Tooling, for what felt like
the first time, became a first-class citizen. Ruby's tooling made Heroku
possible: i.e., reproducible builds across dev, testing, staging, and
production environments. Heroku's success was based on the primitives of the
Twelve-Factor App[1]. The 12 factors (and therefore Heroku) were fundamentally
designed around the already-old lightweight virtualisation technology of LXC.
The success of Heroku paved the way for Docker. The success of Docker created
the world in which Kubernetes makes sense.

To be blunt: if you don't understand the relevance of Kubernetes, or whether
it's relevant to you, you don't understand the benefits of the 12 factors in
their broadest sense. The 12 factors are much, much more than just "How To
Deploy On Heroku".

Copypasting the 12 factors:

    
    
        I. Codebase
        One codebase tracked in revision control, many deploys    
        II. Dependencies    
        Explicitly declare and isolate dependencies    
        III. Config    
        Store config in the environment    
        IV. Backing services     
        Treat backing services as attached resources    
        V. Build, release, run    
        Strictly separate build and run stages    
        VI. Processes    
        Execute the app as one or more stateless processes    
        VII. Port binding    
        Export services via port binding    
        VIII. Concurrency    
        Scale out via the process model    
        IX. Disposability    
        Maximize robustness with fast startup and graceful shutdown    
        X. Dev/prod parity    
        Keep development, staging, and production as similar as possible    
        XI. Logs    
        Treat logs as event streams    
        XII. Admin processes    
        Run admin/management tasks as one-off processes
    

1. [https://12factor.net/](https://12factor.net/)
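
As a tiny illustration of factor III (and X): the same image is configured
purely through its environment, so a dev compose file and a production deploy
differ only in the values injected (names below are placeholders):

        services:
          web:
            image: example/web       # same artifact in every environment
            environment:
              DATABASE_URL: postgres://app:app@db:5432/app
              LOG_LEVEL: debug       # production injects different values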

~~~
LockAndLol
Reading this, I doubt it can be taken as gospel.

> Store config in the environment

How do you pass hierarchical config? In JSON, YAML, TOML, etc., it's easy to
group related settings, but how do you do that with env vars?
LOGGING__HANDLER__FORMAT, LOGGING__HANDLER__ARGS__ARG1,
LOGGING__HANDLER__ARGS__ARG2? If so, that looks positively awful. If the
solution is passing a YAML or JSON string in the env variable, that sounds
even worse than having a config file that the app reads.

>> it’s easy to mistakenly check in a config file to the repo

Add it to the ignore file. Someone would really have to force-add it to get it
into version control. Plus, even if it's stored in env vars, you're going to
commit the values somewhere, be it your Ansible secrets, SaltStack YAML, Chef
or Puppet repo, whatever.

To be fair, that's the only point I contend with. The rest is very reasonable.

~~~
mdaniel
> How do you pass hierarchical config? In JSON, YAML, TOML, etc., it's easy to
> group related settings, but how do you do that with env vars?

Spring Boot approaches that problem via `SPRING_APPLICATION_JSON` (and likely
`SPRING_APPLICATION_YAML` but I haven't personally tried it)

    
    
        containers:
        - name: web
          image: whatever
          env:
          - name: SPRING_APPLICATION_JSON
            # language=json
            value: |
              {"spring": {"logging": {"level": "debug"}}}

~~~
LockAndLol
Goodness gracious. That's dreadful and exactly what I feared the effect of
that rule would be were it followed strictly. It looks like a hacky workaround
to an unnecessary problem.

------
overgard
I think one of the underrated parts of using something like Kubernetes early
(or even with simpler orchestrators like Swarm or Rancher) is that it
encourages (and sometimes enforces) architecture best practices from the
start. I.e., you won't be storing state locally, you'll be able to handle
servers being randomly killed, you'll already have horizontal scaling, etc. In
my experience, the hard part of migrating a legacy app to containers is when
it breaks those constraints, especially around local state and special
servers. It's easier to do these things sooner rather than later, and the
constraints Kubernetes places on you aren't that hard to live with if you
design for them from the start.
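
A minimal sketch of what those constraints look like in practice (names and
paths here are hypothetical): a Deployment with multiple replicas, health
probes, and no local volumes, so any pod can be killed and rescheduled at any
time.

        apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: web
        spec:
          replicas: 3                    # horizontal scaling from day one
          selector:
            matchLabels:
              app: web
          template:
            metadata:
              labels:
                app: web
            spec:
              containers:
              - name: web
                image: example/web:1.0.0
                ports:
                - containerPort: 8080
                readinessProbe:          # traffic only reaches healthy pods
                  httpGet:
                    path: /healthz
                    port: 8080
                livenessProbe:           # restart the container if it wedges
                  httpGet:
                    path: /healthz
                    port: 8080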

A good reason _not_ to use Kubernetes, though, is if you know your app is
probably never going to scale, or if it's the kind of thing that can scale
very well on a single machine, or if it's not primarily based on HTTP
communication. (I wouldn't write a real-time game server on Kubernetes, for
instance, because I doubt it'd really help with a primarily UDP workload that
is likely going to be attached to one server.)

~~~
throwaway894345
I’m not avidly opposed to k8s by any means, but you can get these same
properties from any of a variety of easier-to-use schedulers such as Fargate,
Heroku, or even EC2 autoscaling groups. Of course, there are probably
Kubernetes distributions that lower the threshold of using Kubernetes (and if
there aren’t, there really should be) by providing solutions for logging,
monitoring, certificate management, Functions (a la AWS Lambda), load
balancing / ingress, state management (databases as a service), etc
preconfigured out of the box (similar to what you get with AWS or Heroku).

~~~
threeseed
The problem with your approach is that you're firmly locked in to the vendor.

And in the case of Fargate, Heroku etc you're paying significantly more than
if you had made use of Spot instances or shopped around for a cheaper vendor.

~~~
throwaway894345
Vendor lock-in concerns are overblown. Unless there’s a real chance you’ll
need to pick up and move, don’t worry about it. Your savings by not
building/operating everything yourself will dwarf other costs (unless your
business has huge scale _and_ you have a world class internal cloud capability
which you probably don’t and if you do, you can probably just negotiate a
better deal from a cloud provider a la Netflix and Amazon). To that end, you
only “save money” by managing everything yourself if you write off the cost of
engineering time and talent, which is to say you lose money by doing it
yourself because you don’t have the scale or talent to compete with Amazon
even with their markup (certainly not when you account for opportunity cost).

~~~
threeseed
Nobody is saying you have to build everything yourself.

But you can just use cloud providers for their hardware and not needlessly tie
yourself to their software. For example you can use a managed K8s solution
like EKS but then have all of your monitoring, logging, databases etc all be
self-hosted.

And it's not just about cost but also about being able to take advantage of
other cloud providers' unique strengths or being truly resilient to outages.

~~~
throwaway894345
The same arguments apply to hardware and software. Cloud providers’ core
competency is cloud infrastructure and services and they have the scale to
economize their offerings. Your business very likely can’t compete with them,
so _to the extent that you’re owning things that cloud providers could sell
you, you’re throwing away money_ and that figure very likely dwarfs the _risk
adjusted_ cost of maybe having to migrate to another provider one day. (Of
course, there are services that are overpriced here and there, but the general
principle holds).

------
elliebike
Honestly nobody _needs_ k8s, but nobody really needs anything. If you know it
and you know it suits your needs, then sure! Go for it! Have fun.

If you don't know it, then learn it first. Then you can properly evaluate
whether it suits your requirements or not.

There's huge value in choosing something that your team already knows and is
familiar with.

I'd totally advocate for learning something new, but solving a business
problem probably isn't the best time for this :)

I think the article gave a great overview though, and is perhaps a good way to
decide what to learn next.

------
madushan1000
I was a huge proponent of not hosting monolithic applications on Kubernetes.
That was until the company I worked for acquired another company of the same
size and I had to learn their hand-rolled Puppet 2/shell-script-based
infrastructure management/deployment logic all over again (we had our own
hand-rolled Puppet 3/shell-script-based infrastructure logic too, so that's
two stacks with their own quirks).

Now I'm completely in favor of hosting anything you can cram into Kubernetes
in Kubernetes. Even though Kubernetes is more complex than most other infra
tools, most of the time there is only one way to do things (ConfigMaps for
config, PV allocations for storage, etc.), so if you understand Kubernetes,
it's easier to get the larger picture of the infrastructure even if you know
nothing about the application stack.
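
For example, config and storage almost always end up looking something like
this, regardless of the application stack (names and values are placeholders):

        apiVersion: v1
        kind: ConfigMap
        metadata:
          name: app-config
        data:
          LOG_LEVEL: info
        ---
        apiVersion: v1
        kind: PersistentVolumeClaim
        metadata:
          name: app-data
        spec:
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 10Gi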

------
martythemaniak
Remember Greenspun's 10th rule?

"Any sufficiently complicated C or Fortran program contains an ad hoc,
informally-specified, bug-ridden, slow implementation of half of Common Lisp."

Well, there's a Kubernetes version, which is that any sufficiently advanced
deployment system will ultimately be a half-assed version of Kubernetes.

~~~
betaby
Including Kubernetes itself!

~~~
ezekiel68
To be fair, there was a ring of truth to this

...back in 2015 ;)

------
enos_feedler
I think what isn't fully appreciated about k8s yet, but what we will look
back on, is how it creates an open, standard platform for deploying apps. It
is one thing to port your own apps to run within a k8s cluster; it is another
to have and operate a k8s cluster that you can use to deploy services built
by others. I hope we see more of this soon.

------
ramraj07
Not a single mention of Elastic Beanstalk or App Engine? The best middle
ground for small teams who just want one reliable website with minimal scaling
(and who can't just choose a non-AWS service).

~~~
rudolph9
Do you have any examples of non-proprietary solutions?

~~~
harpratap
Knative + buildpacks is one. CloudFoundry is open source too (although CF now
runs on Kubernetes I think)

~~~
ramanujank
Cloud foundry offers their old(er) architecture as well as 2 ways to get the
“cf push” experience over Kubernetes. Check out KubeCF and cf-for-k8s. Both of
these are open source projects that you can deploy to a Kubernetes
infrastructure of choice.

[1] [https://github.com/cloudfoundry-incubator/kubecf](https://github.com/cloudfoundry-incubator/kubecf)

[2] [https://github.com/cloudfoundry/cf-for-k8s](https://github.com/cloudfoundry/cf-for-k8s)

------
mrweasel
I only have one tiny nitpick: bursty traffic means that your cluster needs to
be able to deal with the peaks. If you're running an on-prem Kubernetes
cluster, then there are no savings, unless you can use the capacity for
something else during non-peak periods.

The scaling, and the potential savings, is a cloud feature, not a feature of
Kubernetes.

~~~
tstrimple
It also enables you to achieve app density. Many of the companies I've been
working with lately have large batch processes on a nightly/weekly/monthly
basis. For some reason, each job was previously set up on on-prem hardware
dedicated per workload. Using Kubernetes and scheduling jobs to manage
capacity has enabled us to reduce the number of on-prem servers substantially
as part of the migration plan for the cloud.
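
A nightly job ends up as little more than a CronJob that requests the
capacity it needs and releases it when it finishes (a sketch only; names and
the schedule are made up, and older clusters use batch/v1beta1 instead of
batch/v1):

        apiVersion: batch/v1
        kind: CronJob
        metadata:
          name: nightly-report
        spec:
          schedule: "0 2 * * *"          # every night at 02:00
          jobTemplate:
            spec:
              template:
                spec:
                  restartPolicy: OnFailure
                  containers:
                  - name: report
                    image: example/batch-report:1.0.0
                    resources:
                      requests:          # scheduler packs jobs onto shared nodes
                        cpu: "2"
                        memory: 4Gi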

~~~
halbritt
Precisely.

In my experience hosting services in VMs on-prem I was able to achieve roughly
30% efficiency across 3k instances and hundreds of nodes. In my experience
hosting microservices on k8s, I was able to achieve 80% efficiency across
hundreds of nodes.

Both were the result of a great deal of work to optimize efficiency. In this
case, I use the word "efficiency" to refer to a blend of CPU and memory
utilization.

------
ChicagoDave
My assessment is that if you have a large, complex microservice environment
with several applications, Docker may be a good route to take as an interim
step to cloud native.

If you have a relatively simple deployment structure (one big application with
10-20 services), maybe you don't need to add the docker skill to what you're
already doing.

If you're in the process of rebuilding your applications and you've decided
cloud native is the way to go, then containers are pointless.

I'd argue if you don't have the skills and you don't want to pay someone to do
this work full-time, you might put your developers in a position to fail.

Based on my 30+ years of building software, I see containers as a dead end.
They may help you out in the short term and prove to your CTO that you're
using "well-known" technology, but in the end, cloud native is going to
replace everything. And before you say "but we can't be in the cloud," you
should know that cloud-native development like Lambdas can be done on-prem as
well.

I'm positive this post will get negative votes. That's fine. I like tilting at
windmills.

~~~
aliswe
Containers are a good tool for scaling resources?

~~~
ChicagoDave
It depends on the app. If it's an existing app, a container is fine. If you're
building a new app and you don't care about the infrastructure, Serverless is
better.

My problem is that eventually all apps will be rewritten...and it's probable
they will be in some kind of Serverless manner.

Containers still have some separation of concerns issues as well as
maintaining state or sharing data.

My default would be Serverless, with a plan B for Containers and a plan C for
VMs.

------
k__
This sounds like people comparing K8s with what came before, and sure, that
all holds true.

You get all of this with serverless (be it with managed services or FaaS),
with reduced (albeit not zero) complexity compared to K8s.

~~~
threeseed
And with serverless you get significant lock-in, are overpaying for resources,
have less ability to debug when things go wrong, have basically zero
flexibility and it's very difficult to have a local setup mirror your
production one.

It's fine for certain use cases but you get the benefits of both worlds by
just using a managed K8s service like AWS EKS.

~~~
k__
I'll dive into K8s this week, after a few years of serverless. I'm intrigued
to see what it has to offer that serverless lacks.

------
irontinkerer
> To make a cluster useful for the average workload a menagerie of add-ons
> will be required. Some of them almost everyone uses, others are somewhat
> niche.

This is the concern I have with k8s. All this complexity introduces
operational and security concerns, while adding more work to do before you can
just deploy business value (compared to launching on standard auto-scaling
cloud instances)

~~~
gravypod
If you are using a managed Kubernetes cluster from a cloud provider, you
mostly don't need to worry about these sorts of things. If you're not, and are
deploying to bare metal, the main things you need to worry about are load
balancers, storage & monitoring. If you're large enough that you can
effectively run kube on bare metal, you probably have enterprise solutions for
load balancing [0], storage [1] & monitoring your applications that you've
already validated as being secure/stable.

If you want to go all out you can also grab an operator to manage rolling out
databases for you (postgres [2], mongo, etc).

A lot of the complexity people bump into with kube really comes from poorly
planned-out tools like Istio that have way too many features, a very overly
complex mode of operation (out of the box it breaks CronJobs!!!), and very
sub-standard community documentation. If you avoid Istio, or anything that
injects sidecars and init containers, you'll find the experience enjoyable.

[0] -
[https://clouddocs.f5.com/containers/v2/kubernetes/](https://clouddocs.f5.com/containers/v2/kubernetes/)

[1] - [https://www.netapp.com/us/kubernetes-storage.aspx](https://www.netapp.com/us/kubernetes-storage.aspx)

[2] - [https://github.com/CrunchyData/postgres-operator](https://github.com/CrunchyData/postgres-operator)

------
peterwwillis
Here's a quick reference:

1) Are you on AWS? Then you don't need Kubernetes. Use Fargate.

2) Are you on Google Cloud? Then you don't need Kubernetes. Use Cloud Run.

3) Are you on Azure? Then you don't need Kubernetes. Use Azure Container
Instances.

4) Are you on a PaaS like Heroku? Then you don't need Kubernetes.

5) Are you on a random VPC provider / bare metal machines? You could probably
still do without Kubernetes using Docker Swarm (apparently it's not dead!),
Nomad, Mesos DC/OS, or a standard Linux box and systemd (or some other process
or cluster scheduler).

6) Do you need to solve the bin-packing problem? Do you need to self-host a
service mesh of microservices in multiple colocated regions? Do you need a
fully automated redundant fault-tolerant network of disposable nodes to
constantly reschedule different versions of applications with stringent RBACs,
scheduled tasks, dynamic resource allocation, and do you have about a million
dollars to spend on building and maintaining it all? Then you need Kubernetes.

~~~
halbritt
> do you have about a million dollars to spend on building and maintaining it
> all? Then you need Kubernetes.

I think you're overstating the investment necessary to overcome the initial
complication of Kubernetes and also understating the benefit of being on a
platform with a massive and thriving community behind it.

As an example, in a prior role, there was a set of data engineers who would
receive data in the form of MS SQL Server backups, from which they needed to
query and transform data on an exploratory rather than production basis.
Certainly one could use an "undifferentiated" service from a cloud provider,
but it was also a roughly 5-minute process for me to use the rather
high-quality Helm chart and Docker image commonly available to stand up a new
service for each engineer that had the need.

The process of creating the automation necessary to deploy the Helm chart and
restore the backups took approximately one hour and could be repeated ad
nauseam in the aforementioned 5-minute time period.

There are many, many other examples of this. Want a data-science platform,
complete out of the box with no vendor lock-in? How about data8.org. The list
goes on.

~~~
peterwwillis
What happens when your car's A/C stops working? Most people think, I'll just
get a little can of R134a, fill up the system, and it'll be good as new.
Somebody said they did that once and it worked just fine, so it should work
for you too, right? I mean, it says so right on the can, and there's YouTube
videos of it and everything.

The trouble is, A/C is a complex system. There are moving pieces with
specialized oils that oxidize and break down over time. There's a sealed
system of pressurized gas. There's a pump, clutch, coils, fans, filter,
thermostat, drain, belt, and electronics. Any of those parts could fail in a
number of ways. Just to inspect it you need a custom gauge set, a tank of
R134a, and a vacuum pump.

 _Etcd_ is about as complex as an A/C system. That is one of a dozen
components of a Kubernetes system, before we get into custom integrations,
which you will need about another dozen of.

The million dollars is to pay for everything needed to set up and maintain all
of that, create the custom integrations that do not come turn-key from the
community, create the custom integrations the community doesn't even have,
integrate it with your development and deployment systems, business
requirements, application-specific needs, and so on.

A million is an average. You can get away with less, just like you can get
away with pumping a pre-pressurized A/C system with extra coolant: if you're
lucky it won't break. When it does, I hope you have either a lot of time, or a
lot of money to pay a consultant.

~~~
shaklee3
I would strongly disagree. Etcd, for what it does, is extremely simple. What
is so complex about it? The configuration?

~~~
peterwwillis
It's a distributed decentralized database using self-signed certs. Just by
itself it requires maintenance: upgrading the software, upgrading the host it
runs on, rotating keys, networking, access control, key space maintenance,
backup, etc. Here are the docs you need to know to run it:
[https://etcd.io/docs/v3.4.0/op-guide/](https://etcd.io/docs/v3.4.0/op-guide/)
And there's another dozen docs not written there that the admin just sort of
finds out over time.

But it's part of other systems too, making the overall thing a system of
systems. Interactions between systems of systems are complex and cause
unexpected behavior. At some point you will run into an error in K8s that you
can't resolve that will require you to debug Etcd. And "Bob" help you if the
database gets corrupt or overwritten, or incompatible versions of software
screw up what's in the database, etc. (My original analogy was inaccurate...
Etcd is more like the engine than just A/C, because if it stops working,
everything stops working)

Do you know what happens if an Etcd certificate's SAN field does not include
domain names but only IP addresses? The client requests HelloInfo with an
empty ServerName so it doesn't trigger a TLS reload on handshake, making it
more difficult to replace expired certs. That is a single random quirk in a
single component of this software which underpins all of Kubernetes. I cannot
sit here and explain every single reason why Etcd is complex; it must suffice
to say that the software just _is_ complex, and that this reality means that
while it may sometimes be simple for some people to operate, it will
definitely not always be simple to operate, and there will come a time that
the true cost will emerge.

Now, most people don't _need_ to pay for that high cost of complexity. They
can use a SaaS/PaaS product like AWS ECS/Fargate or others, where somebody at
some other company is dealing with the cost of complexity for you. All you
have to do at that point is run some API calls and everything just works. Not
only is it easier, it's immensely cheaper, less time-consuming, and more
reliable.

...But you might not even need ECS! There's a lot of work just to get a simple
PoC up on Fargate with an NLB, ACM cert, RDS instance, security groups, VPCs,
cluster, service, task, etc. Compare that to just spinning up a micro instance
and running MariaDB and a Python app on it; the latter you can have done in 20
minutes. If you can avoid complexity and still meet your SLOs, do that.

------
devn0ll
To me the question is more: is there an extremely valid reason _not_ to use
K8s?

As a freelancer I visit quite a number of enterprise companies, think: Banks,
Insurance, Airports, and they are all making the switch or are full-on
invested in living in K8s by default. If it does not run in or was not made
with K8s in mind: It will not be used/bought.

Another thing I'm noticing with smaller companies: if you start fresh, you
choose k8s. Which means all the other stuff is already slowly dying by virtue
of not being chosen.

Developers want/expect it, sysadmins see the benefits from day one, and
companies see the potential gains of using less cloud resources and a platform
that could potentially run in multiple clouds for the first time ever.

K8s, openebs, prometheus/grafana, loki, kustomize, github actions. This is
truly where "it's at" at the moment.

------
spost
I work for a startup whose product is small clusters (half a dozen servers, if
relatively beefy ones) that will be run on-prem by customers, at least
sometimes in a low-to-no-touch capacity. Most of our application components
are micro-ish services that run on all hosts in the cluster for either extra
capacity or fault tolerance.

We currently run everything on mesos/marathon, but are looking to switch away
from it. K8s is kinda the “default” option, and is potentially appealing to
some potential acquirers and investors.

But I never really see k8s being talked about in that context of “physical
hardware that’s on prem, but not on MY prem.” Is there a reason for that? If
we go with k8s is it going to bite us? Does anyone have experience with
something like that they could share?

~~~
escardin
I did an on-prem k8s deployment at my last place. It is definitely challenging
compared to EKS and GKE, but the difficulty is not in base k8s.

Following the kubeadm getting started guide on the kubernetes.io site can get
you an 'HA', 'production ready' cluster going in a couple of hours. Most of it
is pretty mechanical and only needs a couple of key decisions, mainly your
networking plugin. Generally the most popular ones have instructions as part
of the getting started guide, making the process straightforward.

Where it quickly becomes difficult is after this step. You have a cluster
ready to serve workloads, but it has no storage, no ingress/external load
balancer.

Storage can be as simple as NFS volumes (you don't even need a provider for
this, but you should use one anyway). Rook/Ceph will work, but now you've just
taken on two complex technologies instead of one.
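
For reference, a hand-written NFS PersistentVolume is about this much YAML
(the server address and export path are placeholders):

        apiVersion: v1
        kind: PersistentVolume
        metadata:
          name: nfs-share
        spec:
          capacity:
            storage: 100Gi
          accessModes:
            - ReadWriteMany
          nfs:
            server: 10.0.0.5         # your NFS server
            path: /exports/k8s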

Without an external load balancer of some sort, you will have trouble getting
traffic into your cluster, and it likely won't be actually HA. You can use
MetalLB for this, or appliances. If you're just starting out though, you can
totally get away with setting up CNAME aliases in DNS to your nodes in a
round-robin fashion. It won't be HA, but it will work, and it's simple and
straightforward.
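
With MetalLB, the layer-2 setup is roughly this (the older ConfigMap-style
config is shown; newer releases use CRDs, and the address range is a
placeholder):

        apiVersion: v1
        kind: ConfigMap
        metadata:
          namespace: metallb-system
          name: config
        data:
          config: |
            address-pools:
            - name: default
              protocol: layer2
              addresses:
              - 192.168.1.240-192.168.1.250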

Ingress is pretty easy to set up for the most part, usually just applying an
available manifest with a tweak or two. If you go the CNAME route, you will
need an ingress set up so you can serve http/https on standard ports without
too many issues.
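
A typical ingress rule is short (this sketch assumes the NGINX ingress
controller and a recent cluster; the host and service names are placeholders):

        apiVersion: networking.k8s.io/v1
        kind: Ingress
        metadata:
          name: web
        spec:
          ingressClassName: nginx
          rules:
          - host: app.example.com
            http:
              paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: web
                    port:
                      number: 8080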

If you do all these things, then you have a real deal cluster. Things like
ingresses are recommended even if you're running in the cloud, so you may find
that you're not all that far off from what you might find there.

Overall, the biggest trouble is all the choices you need to make. If you're
starting out, maybe read up on two or three of the most popular choices for
each step, and then just pick one. Anything that exists entirely within the
cluster can usually be expressed purely as source controlled manifests, and
kubeadm deployments can be simple shell scripts if you don't make them do
everything (i.e. only support one container driver, not all of them).

One major caveat: if you screw up your network layer, you basically have to
start over. This isn't strictly true, but it's the case where you are often
better off starting over when you need to make fundamental changes to your
network setup (like podCIDR and serviceCIDR or your network plugin). Pretty
much everything else can be made to work with multiple setups at once, or you
just need to delete and redeploy that component.

~~~
shaklee3
Kube-vip is another good alternative to MetalLB.

------
kissgyorgy
I would say that if you have to ask "Do I need Kubernetes?", you don't need
it, because the benefits are not immediately crystal clear to you. Also, the
author starts the evaluation from the wrong point of view: the decision should
not be so much technical as, like everything else, based on BUSINESS
REQUIREMENTS. Evaluating Kubernetes is no different:

- Do you have such high traffic that you need a distributed system?

- Will a unified framework solve all your distribution problems?

- Do you really need high availability?

- Can you swallow the cost of high availability?

- Can you handle the insane complexity of Kubernetes at a reasonable cost?

You should not start asking questions about "Pods, Ingress" or anything
Kubernetes-specific; those are just implementation details.

~~~
aliswe
> Can you handle the insane complexity of Kubernetes at a reasonable cost?

For me, this point is akin to asking "Can you handle the insane complexity of
Linux at a reasonable cost?"

And what I mean by that is: if no one exists at your company who can
administer it, then you shouldn't do it.

------
luord
The article outlines a very nuanced way of answering that question, but I have
a blunter first consideration: if you need Kubernetes, you don't need to ask
whether you need it.

Basically, I think that a team/product _knows_ when the time has come in which
the infrastructure has grown in complexity so much for it to need something
like kubernetes to orchestrate it. If there are doubts, then whatever current
setup is in place* is probably still enough and kubernetes is beyond what the
team requires.

I am very proud of the one time I managed to convince both my then tech lead
and project manager, in one of my past jobs, to move away from kubernetes into
a simpler architecture leveraging docker, compose and PaaS.

* Hopefully one using docker and compose or similar, as mentioned in the article.

------
bdcravens
Vendor lock-in aside, I've found ECS to be a great alternative.

~~~
bg24
Yes if you are on AWS. Significantly speeds up the engineering workflow.

Not sure if it is vendor lock-in. If a customer wanted to move from ECS to
Kubernetes, they would need to migrate the manifests, roles/role bindings,
etc. That is a small effort as long as they do not have to rewrite the code.

------
vasilakisfil
For small apps and projects where continuous delivery is not required, I would
start with LXD. I think it's the easiest way to containerize an app.

Once that feels too little, I would start looking at docker, and only when
docker feels again too little, to kubernetes.

So in essence, for 95% of the apps/people, the answer is no.

~~~
marcc
Can you elaborate on why you think that lxd is easier to use than Docker? I've
always found docker build && docker push to be a simple interface to
understand. What makes "start[ing] with lxd" more approachable or easier than
creating a Dockerfile, given the abundance of Docker-101 tutorials, advice,
and expertise available?

------
aliswe
I've sprinkled some comments in this thread, but as someone who is working
more or less full time with k8s infrastructure, architecture and maintenance
(and I do love k8s!), my take is that if you have to ask this question then
the answer is invariably: NO.

------
Axsuul
I need Kubernetes since we're outgrowing Docker Swarm. Docker Swarm has a lot
of issues we deal with on a constant basis so it's becoming quite painful.

Can anyone suggest a good migration guide from Docker Swarm -> Kubernetes?

~~~
ezekiel68
My gentle suggestion is to avoid phrases like "a lot of issues" and instead to
list some of the top ones. This could give some people a chance to share how
they have overcome some of those specific challenges.

At a recent employer, we moved cold turkey from Swarm to Kubernetes (K8s in
the rest of this reply), but we did it one microservice at a time. We didn't
want our resulting K8s solution to be compromised by a misguided attempt to
foist Swarm concepts onto the K8s way. Probably the biggest decision to make
is whether to manage the cluster from scratch (not recommended), use kops to
deploy on a cloud platform, or use a cloud-native solution like EKS on AWS.
After that, here's a good guide to help with the differences in configuring
the services[1].

[1] [https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/](https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/)

------
runxel
It would have been much funnier if this had been just a page with a single
word on it: NO.

See also:
[http://dowebsitesneedtolookexactlythesameineverybrowser.com/](http://dowebsitesneedtolookexactlythesameineverybrowser.com/)

------
kesor
Probably not.

------
airnomad
No.

------
kanobo
When a newspaper headline or blog post title ends in a question mark, the
answer is almost always 'no'.

~~~
dang
[https://news.ycombinator.com/item?id=24179969](https://news.ycombinator.com/item?id=24179969)

------
dennis_jeeves
No

~~~
kseistrup
Exactly!

Betteridge’s law of headlines:

Any headline that ends in a question mark can be answered by the word no.

⌘
[https://en.wikipedia.org/wiki/Betteridge%27s_law_of_headline...](https://en.wikipedia.org/wiki/Betteridge%27s_law_of_headlines)

~~~
2mol
I get it, but it's not really doing justice to the author, who's trying to
give a quite nuanced overview of where Kubernetes is a good fit and where it
isn't.

As somebody who's pretty skeptical about onboarding the big lump of complexity
that is k8s, I really appreciated the information in the article.

~~~
dennis_jeeves
You see, any pile of junk (Kubernetes being one of them) can have a use case
if you search hard enough. Do not learn/work with Kubernetes unless your
livelihood depends on it.

------
sytelus
> Kubernetes isn’t just a 2018-era buzzword.

And then he goes on to throw buzzwords. Yet another poorly written article, as
is now the norm for Kubernetes.

~~~
aliswe
Agreed, except the point about "the norm".

------
gatvol
Probably not. Why not utilise ECS/Fargate and attach managed services, rather
than tending a whole new flock of things?

~~~
aries1980
I find ECS/Fargate and managed Kubernetes not a bit easier to manage than
Kubernetes itself. On the other hand, the inflexibility and vendor specificity
of these requires as much learning as you'd need for FOSS tools. Also, those
skills are not transferable to another cloud vendor, which might happen to be
required if your primary cloud provider does not cover the markets you're
supposed to operate in.

