
You might not need Kubernetes - tannhaeuser
https://blog.jessfraz.com/post/you-might-not-need-k8s/
======
combatentropy
Some day I would like a powwow with all you hackers about whether 99% of apps
need more than a $5 droplet from Digital Ocean, set up the old-fashioned way,
LAMP --- though feel free to switch out the letters: BSD instead of Linux,
Nginx instead of Apache, PostgreSQL instead of MySQL, Ruby or Python instead
of PHP.

I manage dozens of apps for thousands of users. The apps are all on one
server, its load average around 0.1. I know, it isn't web-scale. Okay, how
about Hacker News? It runs on one server. Moore's Law shrank most of our
impressive workloads to a golf ball in a football field years ago.

I understand these companies needing many, many servers: Google, Facebook,
Uber, and medium companies like Basecamp. But to the rest I want to ask,
what's the load average on the Kubernetes cluster for your Web 2.0 app? If
it's high, is it because you are getting 100,000 requests per second, or is it
the frameworks you cargo-culted in? What would the load average be if you just
wrote a LAMP app?

EDIT: Okay, a floating IP and two servers.

~~~
wpietri
As somebody who has his own colocated server (and has since Bubble 1.0), I
definitely agree that the old-fashioned way still works just fine.

On the other hand, I've been building a home Kubernetes cluster to check out
the new hotness. And although I don't think Kubernetes provides huge benefits
to small-scale operators, I would still probably recommend that newbs look at
some container orchestration approach instead of investing in learning old-
school techniques.

The problem for me with the old big-server-many-apps approach is the way it
becomes hard to manage. 5 years on, I know that I did a bunch of things for a
bunch of reasons, but I don't really remember what or why. It mixes intention
with execution in a way that gets muddled over time. Moving to a new server or
OS is more archaeology than engineering.

The rise of virtual servers and tools like Chef and Puppet provided some ways
to manage that complexity. But "virtual server" is like "horseless carriage".
The term itself indicates that some transition is happening, but that we don't
really understand it yet.

I believe containers are at least the next step in that direction. Done well,
I think containers are a much cleaner way of separating intent from
implementation than older approaches. Something like Kubernetes strongly
encourages patterns that make scaling easier, sure. But even if the scaling
never happens, it makes people better prepared for operational issues that
certainly will happen. Migrations, upgrades, hardware failures, transfers of
control.

~~~
jerf
"5 years on, I know that I did a bunch of things for a bunch of reasons, but I
don't really remember what or why."

For my home servers, I've settled on "a default install of distro $X and an
idempotent shell script that sets everything up for me". You have to use
discipline to do everything in the shell script rather than simply fix the
problem, but if you can do that, you end up with documentation as to how your
server differs from a default install, and the ability to recover it again
_reasonably_ well if you store it in git somewhere or something.
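
The pattern looks something like this (a minimal sketch; the packages, user
and config path are just placeholders for whatever your server actually
needs):

    #!/bin/sh
    # setup.sh -- idempotent: safe to re-run against a default install
    set -eu

    # Packages: apt-get install is already idempotent
    apt-get update
    apt-get install -y nginx postgresql

    # Users: only create if missing
    id -u deploy >/dev/null 2>&1 || useradd -m deploy

    # Config: overwrite from the copy kept next to this script in git
    install -m 0644 files/nginx.conf /etc/nginx/nginx.conf
    systemctl enable --now nginx
    systemctl reload nginx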

It's only "reasonably" well because when you have one server running for years
at a time, your script decays more quickly than you are going to fix it. If
your server goes down three years later, and you decide to go with the latest
$X instead of whatever you used last time, then your script will be out of
date and need to be updated. It isn't nirvana. But it's the best bang for the
buck when you're in a situation where chef/ansible/puppet/etc. is massive,
massive overkill.

If you're already an expert with Docker, go nuts, but IMHO it's a bit silly to
run a server just to run two Docker containers, just so you can say you're
running Docker or something. Plus no matter how slick Docker has gotten, it's
still more of a pain than just setting a few things up.

~~~
codyb
Huh, I haven't found Docker to be a pain at all now that I sort of vaguely
have an idea of what I'm doing.

A Dockerfile takes maybe ten minutes, and is really documentation more than
anything.

That, with a tmuxp yml file to set up a tmux session for developing, can
pretty much outline both how the product is released and how it's developed
for anyone coming into the project.

Pretty neat, super easy, very cool.

I'm not really doing docker to say I'm doing docker but because once I
realized how easy it is to containerize things it's not much more than a few
steps to have a development environment as well as a production environment
even for my crappy little website.
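
For a small site the Dockerfile really is about this much (a sketch assuming
a hypothetical Node app; swap in whatever your stack is):

    # Dockerfile for a small site
    FROM node:10
    WORKDIR /app
    # Install dependencies first so this layer caches between builds
    COPY package*.json ./
    RUN npm install
    # Copy the rest of the source and run it
    COPY . .
    EXPOSE 3000
    CMD ["node", "server.js"]

Then it's just docker build -t mysite . and docker run -p 3000:3000 mysite.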

~~~
pcl
> _That, with a tmuxp yml file to set up a tmux session for developing, can
> pretty much outline both how the product is released and how it's developed
> for anyone coming into the project._

Would you mind sharing more about how your team uses tmuxp? Sounds like an
interesting alternative to a README for shared configuration etc.

~~~
codyb
Hey pcl. I discovered tmuxp relatively recently and am between jobs at the
moment, but I'll tell you that for my personal projects I can look at my yaml
file and immediately see that there's a gulp dev command which is run in the
front end directory, a sync bash script which is run, and a gmake run which
runs the server.

It's nothing groundbreaking, but it's nice to have it all laid out, and if I
got to the point where someone else was working on the same project they'd
probably find it useful to know these three commands without having to wonder
why their static assets weren't updating on change, or why make didn't work.

I think wherever I end up I'll likely start creating tmuxp files and possibly
docker files for any repos I work in, mainly so it's super easy for me to hop
on a terminal, type one command, and have a whole environment to work in. It
is pretty neat to have a server start, a watch, a sync, and two windows for
vim for front and back end.
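
From memory, the yaml file is roughly this shape (directory names and the
sync script are placeholders for my actual layout):

    # ~/.tmuxp/mysite.yaml -- start everything with: tmuxp load mysite
    session_name: mysite
    windows:
      - window_name: servers
        layout: even-vertical
        panes:
          - shell_command:
              - cd frontend
              - gulp dev        # watch and rebuild static assets
          - ./sync.sh           # the sync bash script
          - gmake run           # start the server
      - window_name: editors
        panes:
          - vim frontend/
          - vim backend/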

------
atleta
Yeah, you probably don't. And not only that, it probably makes your life
harder. I interviewed for a tech lead position at a company working with
freelancers, and I'm pretty sure the reason they ended up rejecting me was
that I mentioned to the technical interviewer that I think containers,
container infrastructures (like Kubernetes) and even cloud infrastructure are
being overused, without much thought given to them, as if they came free (in
the sense of setup and operating complexity). Too bad the interviewer started
rambling about how he was into Kubernetes these days :). (Actually, this was
the most technical part of the interview.)

I'm mostly working with startups and small companies creating MVPs, and that
was their client base too. Most of the time these are just CRUD apps, and
most of the time these apps don't see heavy usage for years (maybe ever).
Developers love technology and love to play with new(ish) things, so quite a
few of us will prefer using whatever is new and hip for the next project.
Right now it's containers and microservices. And it feels safe, because done
right, these will give you scalability. And once you convince the client/boss
that you need it, it's unlikely that anyone will come back in a year and say:
hey, it seems that we'll never need this thing that made the development $X
more expensive. (Partly because they won't know the amount.) So politically
it is actually the safe choice. But professionally and cost-wise it's usually
worse. It's a lot better to have to transition after seeing the need
(preferably from the projected growth numbers). At least you minimize the
expected value of the costs (because YAGNI).

~~~
djsumdog
I once got an interview from a company in the container space because one of
their execs read an article I published talking about the trouble with
container systems[1]. (Really good talk/interview, but I ended up not moving
forward because I didn't want to move back to the west coast.)

I've been in smaller shops that wasted a lot of time on K8s stuff and fell
behind on their timelines. If you want to run k8s, DC/OS, etc. you need a lot
of ramp-up time and at least 4-8 dedicated staff members. I've talked to
other startups that preferred running Nomad instead due to setup complexity.

I doubt k8s will go the way of OpenStack since it does actually work, but I
do think we'll see it limited to big enterprise systems while smaller
startups push forward with other, easier-to-build clustering technologies.

[1]: [https://penguindreams.org/blog/my-love-hate-relationship-wit...](https://penguindreams.org/blog/my-love-hate-relationship-with-docker-and-container-orchestration-systems/)

~~~
segmondy
4-8 dedicated staff members to run k8s? Seriously, how did you come up with
that number?

1) You can run k8s hosted on Google, DigitalOcean with zero effort.

2) I built a k8s cluster in 3 days with zero experience, after spending a
week playing with minikube and reading the docs at kubernetes.io
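
For anyone wondering what "playing with minikube" involves, the local loop is
only a handful of commands (a rough sketch; the image and names are
arbitrary):

    minikube start
    kubectl create deployment hello --image=nginx
    kubectl expose deployment hello --type=NodePort --port=80
    kubectl get pods
    minikube service hello    # opens the exposed service in a browser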

~~~
Already__Taken
> I built a k8s cluster in 3 days with zero experience, after spending a
> week playing with minikube and reading the docs at kubernetes.io

I'm pretty sure I can do it in an afternoon from scripts on GitHub. But if
something goes wrong all bets are off. Just getting something set up is not
building competence around it.

~~~
vkou
How well would you, with zero experience managing and deploying a stack, do
the same in an orthodox LAMP setting?

~~~
yebyen
Honestly, the answer is only relevant if you actually have zero experience
deploying any stack, the orthodox LAMP stack included.

If you have no experience with (stack Z), then you will have to go out and get
some experience before opting to use (stack Z). The problem is, many people
hear this and stop there.

While there are some barriers to experience and production-readiness, they are
not insurmountable, and there may be a pot of gold at the end of the rainbow.
There is a cost for everything. Sometimes it's an opportunity cost. (Sometimes
the cost can also come from not acting.)

------
maxxxxx
"Anyways, the point I am trying to make is you should use whatever is the
easiest thing for your use case and not just what is popular on the internet.
"

This is good advice in theory, but in the real employment world you are
killing your own career that way. At some point you get marked as a
"dinosaur" who hasn't "kept up". Much better to jump on the latest tech
trend.

~~~
brooksyd2
I get the sentiment here, but I don't think it's strictly true. The way I look
at new technology is that I need to know enough about it to either discount
it, or choose to use it. So long as I know what I'm talking about when I tell
a prospective employer that I advise not using technology X, then they
typically understand that I have the knowledge to make that decision.

So your advice should be: learn about the latest tech trend, try it out, and
then have an informed opinion.

~~~
maxxxxx
" try it out,"

How much time do most of us have to really "try out" something deeply enough
to have an informed opinion?

~~~
noxToken
Informed enough to talk (read: bullshit) your way through an interview.
Honestly though - think about all of the things on your resume that you said
you had experience with (when it was more of a resume of hope and less of a
CV of experience). Was it actual, working knowledge that was applicable to
your professional career, or was it passing knowledge from that time you
followed a few tutorials?

I don't ask that to denigrate you. I did it. Lots of my peers did it. It's
part of this silly game we play for employment. We complain about needing to
pad resumes to get our foot in the door, but when we get to make hiring
decisions, we automatically bin resumes of students who only put knowledge of
one language and a handful of basic tools.

------
elsonrodriguez
Most organizations don't need to manage servers or Ansible playbooks either.

The reason Kubernetes became so popular is because the API was largely
application-centric, as opposed to server-centric. Instead of conflating the
patching and configuration of ssh and kernels with the configuration of an
application, you had clearly separate objects meant to solve different
application needs.

The problem with Kubernetes is that to gain that API you need to deploy and
manage etcd. To bring your API objects to life you need the rest of the
control plane, and to let your objects grow into your application you need
worker nodes and a good grasp of networking.

This is a huge burden in order to gain access to k8s's simple semantics.

GKE helps greatly, but the cluster semantics still come to the forefront
whenever there's a problem, or upgrade, or deprecation, or credential
rotation.

Of course there's always a time for worrying about those semantics.
Specialized workloads might have some crazy requirements that nothing off the
shelf will run. However I think the mass market is ready for a K8s
implementation that just takes Deployments and Services, and hides the rest
from you.

In lieu of that, people will just continue adoption of App Engine and other
highly-managed platforms, because while you might not need Kubernetes, you
almost certainly don't need to go back to Ansible.

~~~
marmaduke
Ansible isn't just ssh though. In principle you could have a k8s_deployment
role, for example.

Most playbooks are host-oriented, but one could write k8s playbooks that are
cluster-oriented.
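
A sketch of what that might look like with the k8s module (the Deployment
definition is just a placeholder):

    # cluster-oriented playbook: talks to the k8s API, not to hosts over ssh
    - hosts: localhost
      connection: local
      tasks:
        - name: Ensure the web Deployment exists
          k8s:
            state: present
            definition:
              apiVersion: apps/v1
              kind: Deployment
              metadata:
                name: web
                namespace: default
              spec:
                replicas: 2
                selector:
                  matchLabels: {app: web}
                template:
                  metadata:
                    labels: {app: web}
                  spec:
                    containers:
                      - name: web
                        image: nginx:1.15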

------
Sahbak
I honestly don't understand the amount of negativity towards Docker and
Kubernetes sometimes.

All major cloud providers have a managed k8s service, so you don't have to
learn much about the underlying system. You can spend a few days, at most,
learning about Docker, k8s configuration files and Helm, and you're pretty
much set for simple workloads (and even Helm might be overkill).

Afterwards, deploying, testing, reproducing things is, in my opinion, much
better than managing your applications on random servers.

Might I be wasting some money on a k8s cluster? Maybe. Do I believe the
benefits outweigh the cost? Absolutely.

~~~
Enginerd3
I honestly think this website's negativity towards stuff stems from not
understanding use cases and being a general curmudgeon.

"All major cloud providers have a managed k8 service, so you don't have/need
to learn much about the underlying system. You can spend a few days, at most,
to learn about dockers, k8 configuration files and helm and you're pretty much
set for simple workloads (and even helm might be overkill)."

This is the reason why I use k8s. It is ridiculously easy to deploy
applications and I don't have to worry about hardening the VM.

------
chess44
I am interested in people's opinion on the "break even point" between using
Kubernetes and not using Kubernetes. Let's pretend that the only options are
Kubernetes and something substantially less powerful.

What is the simplest/easiest personal project where using Kubernetes might be
justified?

I am a junior software engineer trying to figure out how to contextualize all
of these container/container management systems.

~~~
eropple
This is a little bit negotiable, but it's where _I'd_ start considering
Kubernetes:

1\. at least six independent twelve-factor-app services with their own
datastores _and_ a need for high availability across all of them _and_ a near-
complete understanding of the high-availability interactions between instances
of your services

2\. an _inability_ to predict ahead of time where your system's hot spots are,
necessitating rapid scaling of different parts of the application

3\. a willingness to overspend on capacity to be able to respond to scaling
events or deploys in seconds rather than minutes

4\. a code-focused ops team (as opposed to a mouse-driven ops team) with
extremely strong diagnostic skills _and_ the bandwidth to babysit a service
with a potential pain-in-the-ass ceiling around that of a Cassandra cluster

Without #1, you don't have enough variation in systems to benefit; just stick
a monolithic application in an autoscaling group. (Most people should do
this.) Without #2, you can lean into the hot spots of your application by
scaling them horizontally--bear in mind that you'll be paying for capacity you
don't use with k8s in order to get that environmental reactivity, so you could
just spend that on making your hot spots faster. Without #3...well, that one's
pretty obvious when you look at things like EC2 instances, which are more
easily partitioned, can be spun up in smaller/cheaper groupings, and their
primary downside is that it takes longer than deploying a container. And
without #4, you're gonna go off the cliff.

Reasonable people can nibble at the edges. But to answer the thrust of your
question: it's probably never reasonable to design a personal project around
k8s _unless_ the point of the project is to be done on k8s.

~~~
tetha
> 4\. a code-focused ops team (as opposed to a mouse-driven ops team) with
> extremely strong diagnostic skills and the bandwidth to babysit a service
> with a potential pain-in-the-ass ceiling around that of a Cassandra cluster

Here it is running fine... running fine.. running fine... aaaaand there's a
compaction-and-gc cycle of death and fire and lost data and tears. Thank you
for this terrible memory.

~~~
eropple
I was going to say "we've all been there," but we haven't, and that's the
deceptive thing about the five-minute-demo culture that a lot of "devops" has
gotten into.

Everything is easy when it has nothing riding on it. When it isn't is where
the value of a tool comes into focus.

~~~
tetha
Yeah I'm dealing with crap out of that area atm.

I've recently been asked why I'm extremely restrictive and careful with our
primary production cluster. Well, we've got 20k+ full-time employees of our
customers depending on this system for their everyday work. An hour of
downtime on this thing will cost our end customers 20k man-hours of work done
in a worse way.

We're not touching the tooling this system sits on without good reason and a
lot of testing. And even then I'll be bloody scared. Sorry modern world, but
in this case, I'll be wearing my hard ops hat.

------
freehunter
Maybe someone here can help me figure out what I need, since the world of
containers is growing faster than I can understand.

I have one code base that I run on multiple servers/containers independently
of each other. Think Wordpress style. I used to run it on Heroku but I
switched to Dokku because it's substantially cheaper and I don't mind taking
care of the infrastructure. I like Dokku but I do worry about being tied to
just one server and not being able to horizontally scale or easily load
balance between multiple servers/regions. Ideally what I'd like is Dokku with
horizontal scaling built in. I've seen Deis and Flynn but they seem less
active/mature even than Dokku, which is saying something.

Is Kubernetes the right answer here or should I stick with Dokku and forget
about horizontal scaling?

~~~
tetha
Kubernetes isn't the only thing around. Kubernetes and Mesos are kinda the
heavyweight solutions, but there are smaller things around like HashiCorp's
Nomad and Swarm, and probably a lot more I don't know about.

We're currently evaluating nomad, and it's surprisingly pleasant. Nomad
doesn't solve every problem every application in every situation might have.
Nomad schedules containers, VMs or whatever else on hosts. This reduces
complexity a lot.

It took us like 1-2 man-weeks to get an almost arbitrarily scalable Nomad
setup which allows you to submit a bunch of jobs and mark some public ports
for a load balancer, be it MySQL, HTTP, whatever. And it's easy to understand
and operate. There are 3 nodes of Consul, 3 nodes of nomad-server and 2 hosts
of nomad-client, some certs in the middle, and consul-template + haproxy with
a config almost straight from a blog post. That's it. It has very few moving
parts and it's easy to understand and troubleshoot with the 2-3 main guys on
our ops team. (EDIT: this doesn't read clearly. We have 2-3 guys on our ops
team. They are not working on Nomad alone. Nomad atm is a low-maintenance
system and we're mostly dealing with other crap /EDIT)

And now we're just going with it for now. Our CI needs resources for on-demand
test-systems, so let's figure out how to make that happen. Our self-service
test system for demos / manual acceptance testing needs resources for systems
so let's figure that out. We might need to use gluster or something for
persistent storage if we want to migrate internal tooling to this. A sister
company might want to tinker around with Windows VMs scheduled by Nomad, or
Windows containers, so why not?

But the good thing: It took us 2 weeks to start delivering business value.
That's a relatively small up-front payment for an established company, even a
small one. Now we can leave it alone for some time, or we can invest some more
well defined packages of time to make it better in concrete, requested ways.
That's easy to schedule and prioritize.

~~~
heipei
Just came here to second this. We evaluated both Kubernetes as well as Nomad
for a relatively small cluster of some worker nodes and web services. In the
end, the ease of standing up a Nomad cluster and the whole feel of the thing
won us over.

Nomad is a single golang binary that you can run on your laptop to get a
fully working Nomad client and server, along with a built-in UI and command
line tools (same binary). The story for production is the same: throw the
binary on your server, set up a systemd unit to run it and you have another
Nomad node.

If you're evaluating container schedulers and are not sure what you need,
take an afternoon or so and just run it locally and play with it. If there
aren't any specific features of Kubernetes you could point to that Nomad does
not meet, my suggestion would be to get started with Nomad first.
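
Running it locally really is about this much (from memory, so treat it as a
sketch):

    # single binary; dev mode runs client + server, with the UI on :4646
    nomad agent -dev &
    nomad init                 # writes a sample job file, example.nomad
    nomad run example.nomad    # schedule the sample job
    nomad status example       # watch the allocation come up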

~~~
shaklee3
Kubernetes is a single binary, too (hyperkube).

------
bg4
You probably don't need microservices either - it's insane how much money
and time is being thrown away on these industrial-strength hammers by
companies that simply don't need them.

~~~
garysahota93
So true! I think the Rick & Morty reference alone speaks volumes for
everything. haha

------
frostyj
Depends on the scale. If I only have 10 containers to manage I'd throw them
on an m4 and let it be. The benefit of using k8s kicks in when your use case
gets complicated.

~~~
hasperdi
What's m4?

~~~
enigmango
Probably the AWS m4 instance type, meant for general purpose workloads (and
replaced with the m5 instance type about a year ago).

[https://aws.amazon.com/ec2/instance-types/m5/](https://aws.amazon.com/ec2/instance-types/m5/)

------
jammygit
It's a bit funny to ask this question in this thread, but here we go:

What are the important topics and technologies to learn in this area? My uni
experience didn't really include things like distributed systems or
containerization.

Ideally fundamentals that won't be invalidated in 5 years when 'the new
thing' becomes something else.

(I'd love good book recommendations on any subject a new grad should learn,
not just this topic.)

------
geo_mer
Kubernetes may be overkill for small projects, and it's actually hard to set
up for a single-machine cluster, but the idea of container orchestrators
(k8s, Docker Swarm, Nomad, etc.) is extremely useful. I understand that some
abuse the word "scale", but for me container orchestration is about far more
than just scaling. The features include:

1\. rolling updates

2\. decoupling configs and secrets from code and mounting/changing config
files easily

3\. robust and predictable testing/production environments

4\. centralized logging
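
To make 1 and 2 concrete, this is roughly what they look like in a Deployment
(a sketch with placeholder names):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web
    spec:
      replicas: 3
      strategy:
        type: RollingUpdate
        rollingUpdate:
          maxUnavailable: 0   # rolling updates: never drop below capacity
          maxSurge: 1
      selector:
        matchLabels: {app: web}
      template:
        metadata:
          labels: {app: web}
        spec:
          containers:
            - name: web
              image: example/web:1.2.3
              envFrom:
                - configMapRef:       # config decoupled from the image
                    name: web-config
              volumeMounts:
                - name: secrets
                  mountPath: /etc/web/secrets
                  readOnly: true
          volumes:
            - name: secrets
              secret:
                secretName: web-secrets   # secrets mounted, not baked in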

Also, the goal of microservices isn't really just "scaling" in my opinion;
there are other important advantages even if you have no intention to scale.
Aspects like modularity, separation of concerns, robustness and lowering
technical debt are just as important whether your app serves 1 or 10000 users
at the same time. Of course you can pull your Python app from your repo or
even rsync it and just execute it (just like you can develop any software
without using git or any revision control), and that might work very well,
but sooner or later you are going to regret it if you're a business.

------
sebringj
It was interesting to read about Workers and the use of WebAssembly within
V8, as this scenario could bypass the need for that complexity and memory
overhead while combining different programming languages on the server side.
Not that it could replace Kubernetes, as that is an amazing technology, but
if you are in a scenario where your tech could fit within Workers, it could
be interesting: [https://blog.cloudflare.com/introducing-cloudflare-workers/](https://blog.cloudflare.com/introducing-cloudflare-workers/).
I was amazed to think WebAssembly would be used for that purpose, but I guess
it does make sense after reading about how it is put together.

------
vemv
What bothers me about k8s is that it promises a lot ("15 years of experience
of running production workloads at Google" at your fingertips! yay!) but it's
in fact still a young, ever-changing solution.

Even _developing_ an app locally with minikube is a PITA for a lot of reasons.
From Helm to Telepresence to Skaffold, every tool out there is just unpolished
and overambitious.

Don't want to imagine how those problems might amplify in production.

~~~
shaklee3
Skaffold is only 5 months old. It's a little unfair to call it unpolished
and overambitious.

~~~
vemv
It's made by Google, which boasts about its 15 years of experience with
containers?

~~~
shaklee3
I've been using Skaffold for about 3 months. It works well, and they're
continually improving it. I'm not sure what else is expected from a new
project.

------
barbecue_sauce
Sometimes choice of technology acts as a signifier. If you're building a
startup and you want to communicate to investors that "hey, we may not have
the users yet, but we're built to scale!", Kubernetes and microservice
architectures and sophisticated ETL pipelines convey that image better than
saying "we've built for the minimal load that we're currently experiencing,
with a LAMP-based monolith." The reality may be that your product's
consumption patterns will never necessitate anything more than that, even at
a large scale. Your product may be great, you might easily be able to scale
manually, but someone who holds the purse strings and knows just enough to be
dangerous might decide that if you're not using the "hot" technologies, you
must not know what you're doing.

------
beiller
My experience is that Kubernetes is too complex for the average functioning
product. At our company, everyone is obsessed with it because it promises no
cloud vendor lock-in! But at what cost? The complexity. Also, the direction
cloud vendors are going, in my opinion, is more hardware-centric (e.g. TPUs).
How will you avoid cloud lock-in when only Azure offers image-tagging machine
learning as a service? How will Kubernetes solve that? I believe a balance of
accepting a small bit of lock-in while retaining environment freedom (open
programming languages like Python, JavaScript...) is the sweet spot for
cloud, e.g. a PaaS like App Engine, Azure App Service or Beanstalk.

~~~
weberc2
I'm not a k8s expert, but I'm pretty sure it has an extensible architecture
(plugins or similar), so it probably would allow for the definition of an
image-tagging service interface that could be satisfied by Azure or whomever.
Would love to hear from someone more knowledgeable than I am.

------
rcarmo
If you’re looking for a simple way to manage web apps on Linux, check out
[https://github.com/rcarmo/piku](https://github.com/rcarmo/piku)

I wrote it as a sort of micro-Heroku/Dokku replacement to run on small ARM
boards, and ended up deploying a few apps with it on Intel boxes (I also use
Docker Compose, but for simple stuff it’s overkill).

It uses uWSGI and is heavily Python-oriented, but I’ve run other stuff on it
(it’s basically a supervisor with automatic reverse proxy setup and a Procfile
approach to specifying what to run - git push to it and you’re in business).
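
A Procfile here is just the usual Heroku-style mapping of process names to
commands, e.g. for a hypothetical Python app:

    # Procfile -- one line per process type
    web: gunicorn app:app
    worker: python worker.py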

------
mfer
If you're going to use Kubernetes it's good to look at your business case or
other need. Don't use a hammer if you need to unscrew something.

Kubernetes has its place... I recently wrote a post on that...
[https://codeengineered.com/blog/2018/kubernetes-biz-case/](https://codeengineered.com/blog/2018/kubernetes-biz-case/)

But there are many times you just don't need it. Like, for my personal
sites... there just isn't a need there.

------
martinlaz
Yeah, but... Nobody ever got fired for using K8s.

~~~
mharroun
I'm sure many startups and other companies have closed by failing to hit
their product goals and business KPIs.

Then again I guess that's not being fired... just picking technologies your
company doesn't need that may ultimately kill everyone's jobs.

------
bashmonkey
I do my level best to stay away from containers. I don't think most people
even need them. It's a fad of sorts. I tend to stick with the tried and true
and not follow trends, cloud or otherwise. Nothing worse than having your
data on someone else's HW and losing connectivity through no fault of your
own.

Years ago, I worked for UUNET in Reston/Ashburn, VA, and built web servers
and the attendant HW/SW that ran them (usually Sun Solaris/Apache/Oracle). We
always had a "back net" into every device. Now? One NIC, one way in. I always
like having more than one way to get to a device, be it local or remote. With
the cloud, you tend to give this up. I recommend VMs over the cloud using
someone else's data and data centre.

Nothing worse than going to a tech conference with your boss, and him being
the "deer in the headlights" as it were with regard to buying into what's
being sold by the vendors. Last time we went, it took me the entire 3-hour
car ride home to convince him we didn't need half of what was on offer. I
tend to be old school and prefer to make do with Linux/FreeBSD VMs, and
whatever software is needed to make something work. I like being in control
of my own architecture.

~~~
devhead
I like being able to sleep at night knowing my infrastructure will self-heal
in almost all cases.

~~~
bashmonkey
HA is self-healing. And for things that don't do HA very well, there are hot
spares that can be activated in 30 seconds. Depends on the SLA, though, as
with most things. Most things in my firm can be stood up within two minutes'
time should there be an issue with something.

------
segmondy
If you don't have a microservices/SOA architecture then you don't need k8s.
Most people don't need skyscrapers, but there are still a lot of them in the
world. Just because you don't need one and don't have one doesn't mean that
others don't.

------
marghidanu
What bothers me the most is the assumption that adopting Kubernetes will solve
all design problems.

------
amai
Unfortunately, standalone Docker containers don't let you inject secrets
(keys, passwords, etc.) into the app in the container in a secure way. For
that it seems to be necessary to use an orchestration solution like Swarm or
Kubernetes.
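
With Swarm it looks roughly like this (a sketch; names and the image are
placeholders):

    # secrets require swarm mode; they show up as files under /run/secrets/
    docker swarm init
    printf 'hunter2' | docker secret create db_password -
    docker service create --name app --secret db_password myorg/app:latest
    # inside the container the app reads /run/secrets/db_password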

------
vander_elst
As much as I agree that in 95% of cases k8s is not needed, the problem most
likely lies in the overengineering around it. Take a managed k8s solution,
set up a deployment that serves through an ingress controller. Done.
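
i.e. roughly this much YAML on a managed cluster (a sketch; names, image and
host are placeholders, and the Ingress API group is the one current at the
time):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web
    spec:
      replicas: 2
      selector:
        matchLabels: {app: web}
      template:
        metadata:
          labels: {app: web}
        spec:
          containers:
            - name: web
              image: example/web:latest
              ports: [{containerPort: 8080}]
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: web
    spec:
      selector: {app: web}
      ports: [{port: 80, targetPort: 8080}]
    ---
    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: web
    spec:
      rules:
        - host: example.com
          http:
            paths:
              - path: /
                backend:
                  serviceName: web
                  servicePort: 80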

------
metalrain
You most likely need a few cheap servers, a load balancer and a great team.
So yeah: no Kubernetes, no microservices, no chaos monkey. What you need is
to know, really deeply know, your technology stack. This is where boring
software helps.

------
AzzieElbab
What is wrong with running your app on a managed kubernetes? Of course the
cost of maintaining your own cluster is overwhelming for a company with less
than 1k servers, but if Google is taking care of this for you, why not?

------
wstrange
If you are writing "business" apps (for some definition of business), you
should probably use a PaaS.

If you want to create a PaaS, Kubernetes is an excellent foundation (see
Knative).

------
konschubert
I just create a container image for my service and run it via the Heroku
container registry. Zero setup. I think it has autoscaling as well but I
never needed that.
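
For reference, the whole flow is about three commands (assuming the Heroku
CLI and a Dockerfile in the repo; the app name is a placeholder):

    heroku container:login
    heroku container:push web -a myapp       # build and push the image
    heroku container:release web -a myapp    # release and run it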

------
sjapkee
If you think you need kubernetes, docker, etc., you are wrong. This is the
only thing you need to know about these technologies.

------
amai
Isn't Kubernetes today the same thing Tomcat used to be for Java web
applications yesterday?

------
zwischenzug
A lot of people don't have a choice.

~~~
Walkman
The choice of not using Kubernetes?

~~~
ThePadawan
I believe the parent is referring to

Businessperson: "What do you mean <software> isn't in the cloud/on
kubernetes/<some other catchy phrase> yet?"

Engineer: "Well we believe there are pros and cons which don't yet justify the
additional cost or complexity for this use case..."

Businessperson: "All our competitors use Kubernetes! We can't sell <software>
if it isn't on Kubernetes! Get on it ASAP!"

~~~
bdcravens
As I commented to grandparent, the article's audience probably isn't the
powerless worker bee, but the architect who gets to make those choices.

------
xmly
K8S is too complex. I just want a small and easy to use cluster toooool

~~~
devhead
aws / gce

------
jasonlotito
This is another way of saying "You might need Kubernetes."

------
bg4
You don't need K8S, you don't need microservices.

------
effnorwood
You don’t.

~~~
beagledude
or Do you?

~~~
toomuchtodo
You don't.

EDIT: 99 times out of 100, you don't.

~~~
beagledude
but what if? Imagine.

~~~
toomuchtodo
I imagine if you have an ops team, they're going to continue to pray every
time you have to upgrade k8s or a supporting underlying service and the
expectation is that everything will continue to function without dropping an
inbound request. I imagine they are going to be less than impressed being on
call for something that is essentially still in beta. And if you have no ops
and your devs are responsible for it, god help you unless you're leaning
heavily on a Kubernetes managed service (which is of course, as we recently
saw with Google's outage [1], no guarantee everything will work flawlessly).

My hesitation and cautiousness doesn't come from being a greybeard curmudgeon,
it comes from a healthy dose of skepticism that this is The One True Path that
will Solve All The Problems. I am cautiously optimistic it might be a stable,
proven ecosystem, eventually. But it isn't today.

When making technology decisions, imagine that you're the one with the pager
at 3am and work backwards accordingly.

[1]
[https://news.ycombinator.com/item?id=18428497](https://news.ycombinator.com/item?id=18428497)

~~~
jeletonskelly
Man, I wish I could give you more upvotes for this sentiment. Making the
decision to put the livelihood of a company on any platform is not to be taken
lightly. As an SRE I feel like I have to take a slightly conservative approach
to new technologies.

~~~
toomuchtodo
Do me a favor and pay it forward. When technology decisions are made, look at
them with a conservative eye. Make absolutely sure the technology selected is
being selected for the right reasons (i.e. not resume-driven development,
premature optimization, culture signaling, or "it's new so it must be
better"). It'll pay dividends for whoever has to operate it, as well as the
business.

