
“Let’s use Kubernetes.” Now you have eight problems - signa11
https://pythonspeed.com/articles/dont-need-kubernetes/
======
atonse
The odd thing about having 20 years of experience (while simultaneously being
wide-eyed about new tech) is that I now have enough confidence to read
interesting posts (like any post on k8s) and not think "I HAVE to be doing
this" – and rather think "good to know when I do need it."

Even for the highest scale app I've worked on (which was something like 20
requests per second, not silicon valley insane but more than average), we got
by perfectly fine with 3 web servers behind a load balancer, hooked up to a
hot-failover RDS instance. And we had 100% uptime in 3 years.

I feel things like Packer (allowing for deterministic construction of your
server) and Terraform are a lot more necessary at any scale for generally good
hygiene and disaster recovery.

~~~
hinkley
I have, at various times in my career, tried to convince others that there is
an awful, awful lot of stuff you can get done with a few copies of nginx.

The first “service mesh” I ever did was just nginx as a forward proxy on dev
boxes, so we could reroute a few endpoints to new code for debugging purposes.
And the first time I ever heard of Consul was in the context of automatically
updating nginx upstreams for servers coming and going.

There is someone at work trying to finish up a large raft of work, and if I
hadn’t had my wires crossed about a certain feature set being in nginx versus
nginx Plus, I probably would have stopped the whole thing and suggested we
just use nginx for it.

I think I have said this at work a few times but might have here as well: if
nginx or haproxy could natively talk to Consul for upstream data, I’m not sure
how much of this other stuff would have ever been necessary. And I kind of
feel like Hashicorp missed a big opportunity there. Their DNS solution, while
interesting, doesn’t compose well with other things, like putting a cache
between your web server and the services.

I think we tried to use that DNS solution a while back and found that the DNS
lookups were adding a few milliseconds to each call. Which might not sound
like much except we have some endpoints that average 10ms. And with fanout,
those milliseconds start to pile up.

~~~
takeda
I personally would advise against using DNS for service discovery; it wasn't
designed for that.

The few milliseconds you are seeing, though, are most likely due to your local
machine not having DNS caching configured, which is quite common on Linux.
Because of that, every connection triggers a request to the DNS server. You
can install unbound, for example, to do the caching; nscd or sssd can also be
configured to cache lookups.
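
As a quick check (a rough sketch using only the Python standard library; the
hostname is just an example, substitute one of your own service names), you
can time repeated lookups to see whether a local cache is actually answering
them:

    # Rough sketch: time repeated lookups of the same name. Without a local
    # cache (unbound, nscd, sssd, ...) every call pays the round trip to the
    # DNS server; with one, repeat lookups should be well under a millisecond.
    import socket
    import time

    HOST = "example.com"  # substitute one of your own service hostnames

    def lookup_ms(host):
        start = time.perf_counter()
        socket.getaddrinfo(host, 80)
        return (time.perf_counter() - start) * 1000

    samples = [lookup_ms(HOST) for _ in range(20)]
    print(f"first: {samples[0]:.2f} ms, "
          f"median of the rest: {sorted(samples[1:])[9]:.2f} ms")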

~~~
totony
Why should DNS not be used for service discovery? The internet as a whole
currently uses it for service discovery.

~~~
takeda
The internet as a whole uses it to provide human-friendly names.

I'm saying it is not a good idea to use DNS for service discovery. There is a
way to use it correctly, but it requires software that does name resolution
with service discovery in mind, and you can be sure that the majority of your
software doesn't work that way.

Why shouldn't you use DNS? Because when you communicate over TCP/IP you need
an address; that's really the only thing you actually need.

If you use DNS for discovery you will probably set a low TTL on the records,
because you want to update them quickly. This means that for every connection
you make you will be querying the DNS server, adding extra load on the DNS
server and extra latency when connecting.

When a DNS server fails, even if you set a large TTL, you will see failures on
your nodes almost immediately; that's just how DNS caching works. Different
clients made their DNS requests at different times, so the records expire at
different times. And if you did not configure a local DNS cache on your hosts
(most people don't), then you won't cache the response at all and every
connection request will go to the DNS server, so upon a failure everything is
immediately down.

Compare this to having a service that edits (let's say) an HAProxy
configuration and populates it with IP addresses. If the source providing that
information goes down, you simply won't get updates for a while, but HAProxy
will continue forwarding requests to the known IPs (and if you use IPs instead
of hostnames, you also won't be affected by DNS outages).

Now there are exceptions to this: certain software (mainly load balancers such
as pgbouncer; I think HAProxy also added some dynamic name resolution) uses
DNS with those limitations in mind. It basically queries the DNS service at
startup to get the IPs, then periodically re-queries for changes. If there is
a change it is applied; if the DNS service is down, the old values are kept.

Since such software doesn't throw away the IPs when a record expires, you
don't have this kind of issue. Having said that, the majority of software uses
the system resolver the way DNS was designed to work and will have these
issues, and if you use DNS for service discovery, you, or someone in your
company, will use it with such software and you'll run into the issues
described above.
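
A rough sketch of that pattern in Python (standard library only; the hostname
and refresh interval are made up, and a real load balancer does much more):
resolve at startup, refresh in the background, and keep the last known-good
IPs whenever the DNS server is unreachable.

    import socket
    import threading
    import time

    REFRESH_SECONDS = 30          # illustrative refresh interval
    _backends = {}                # hostname -> last known-good list of IPs

    def _resolve(host):
        return sorted({info[4][0] for info in socket.getaddrinfo(host, None)})

    def watch(host):
        _backends[host] = _resolve(host)   # fail loudly only at startup

        def refresh():
            while True:
                time.sleep(REFRESH_SECONDS)
                try:
                    _backends[host] = _resolve(host)
                except OSError:
                    pass  # DNS is down: keep forwarding to the old IPs

        threading.Thread(target=refresh, daemon=True).start()

    def addresses(host):
        return _backends[host]

    watch("example.com")
    print(addresses("example.com"))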

~~~
totony
>Compare this to having a service that edits (let's say) an HAProxy
configuration and populates it with IP addresses.

Just edit the hosts file? If you have access to the machines that run your
code and can edit configuration, and you also don't want the downsides of
resolvers (pull-based instead of push-based updates, TTLs), DNS still seems
like a better idea than some new stack, plus you can push hosts files easily
via ssh/ansible/basically any configuration management software.

EDIT: The only issue I see with DNS for service discovery is that you can't
specify ports. But software should usually just use standard ports, and that's
never been a problem in my experience.

~~~
drybjed
You can specify ports using SRV resource records.
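
For example (a hedged sketch using the third-party dnspython package; the
record name below is purely illustrative and would be a name in your own
zone), an SRV lookup returns the port along with the target host:

    import dns.resolver  # pip install dnspython

    # SRV records carry priority, weight, port, and target for a named service.
    answers = dns.resolver.resolve("_myservice._tcp.example.com", "SRV")
    for rr in sorted(answers, key=lambda r: (r.priority, -r.weight)):
        host = rr.target.to_text().rstrip(".")
        print(f"{host}:{rr.port} (priority {rr.priority}, weight {rr.weight})")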

~~~
totony
You could, but there's no integration for that that I know of, so it'd be a
bit of work to get working, which is why I didn't include it.

~~~
imtringued
[https://www.haproxy.com/documentation/aloha/9-5/traffic-management/lb-layer7/dns-srv-records/](https://www.haproxy.com/documentation/aloha/9-5/traffic-management/lb-layer7/dns-srv-records/)

------
jorams
These kinds of posts always focus on the complexity of running k8s, the large
number of concepts it has, the lack of a need to scale, and the fact that
there is a "wide variety of tools" that can replace it, but the advice never
seems to become more concrete.

We are running a relatively small system on k8s. The cluster contains just a
few nodes, a couple of which are serving web traffic and a variable number of
others that are running background workers. The number of background workers
is scaled up based on the amount of work to be done, then scaled down once no
longer necessary. Some cronjobs trigger every once in a while.
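
Not a description of this particular setup, but as a rough sketch of what
"scale workers up and down based on the amount of work" can look like with the
official kubernetes Python client (the deployment name, namespace, and
queue_depth() helper are hypothetical):

    from kubernetes import client, config

    def queue_depth():
        """Hypothetical helper: number of background jobs currently pending."""
        return 120

    def scale_workers(name="background-worker", namespace="default",
                      jobs_per_worker=50, max_workers=20):
        config.load_kube_config()      # or load_incluster_config() inside a pod
        apps = client.AppsV1Api()
        desired = min(max_workers, -(-queue_depth() // jobs_per_worker))
        apps.patch_namespaced_deployment_scale(
            name, namespace, body={"spec": {"replicas": desired}})
        print(f"scaled {name} to {desired} replicas")

    scale_workers()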

It runs on GKE.

All of this could run on anything that runs containers, and the scaling could
probably be replaced by a single beefy server. In fact, we can run all of this
on a single developer machine if there is no load.

The following k8s concepts are currently visible to us developers: Pod,
Deployment, Job, CronJob, Service, Ingress, ConfigMap, Secret. The hardest one
to understand is Ingress, because it is mapped to a GCE load balancer. All the
rest is predictable and easy to grasp. I _know_ k8s is a monster to run, but
none of us have to deal with that part at all.

Running on GKE gives us the following things, in addition to just running it
all, without any effort on our part: centralized logging, centralized
monitoring with alerts, rolling deployments with easy rollbacks, automatic VM
scaling, automatic VM upgrades.

How would we replace GKE in this equation? What would we have to give up? What
new tools and concepts would we need to learn? How many of those would be
vendor-specific?

If anyone has a solution that is actually simpler and just as easy to set up,
I'm very much interested.

~~~
rawoke083600
"Pod, Deployment, Job, CronJob, Service, Ingress, ConfigMap, Secre"

Wow as a new developer coming onboard your company, I will walk out the door
after seeing that, and the fact that you admit its a small serivce.

~~~
nrb
It's an afternoon's worth of research to understand the basic concepts. Then,
with the powerful and intuitive tooling you can spin up your own cluster on
your computer in minutes and practice deploying containers that:

- are automatically assigned to an appropriate machine (node) based on
explicit resource limits you define, enabling reliable performance

- horizontally scale (even automatically if you want!)

- can be deployed with a rolling update strategy to preserve uptime during
deployments

- can roll back with swiftness and ease

- have liveness checks that restart unhealthy apps (pods) automatically and
prevent bad deploys from being widely released

- abstract away your infrastructure, allowing these exact same configs to
power a cluster on-prem, in the cloud on bare metal or VMs, with a hosted k8s
service, or some combination of all of them

All of that functionality is unlocked with just a few lines of config or
kubectl command, and there are tools that abstract this stuff to simplify it
even more or automate more of it.
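
To make "a few lines of config" concrete, here is a hedged sketch using the
official kubernetes Python client (the image name, port, and resource limits
are invented) that declares replicas, resource limits, a liveness check, and a
rolling-update strategy, i.e. the items listed above:

    from kubernetes import client, config

    config.load_kube_config()  # uses your local kubeconfig
    apps = client.AppsV1Api()

    container = client.V1Container(
        name="web",
        image="example/web:1.2.3",   # invented image name
        resources=client.V1ResourceRequirements(
            requests={"cpu": "250m", "memory": "256Mi"},
            limits={"cpu": "500m", "memory": "512Mi"},
        ),
        liveness_probe=client.V1Probe(
            http_get=client.V1HTTPGetAction(path="/healthz", port=8080),
            initial_delay_seconds=5,
            period_seconds=10,
        ),
    )
    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="web"),
        spec=client.V1DeploymentSpec(
            replicas=3,
            selector=client.V1LabelSelector(match_labels={"app": "web"}),
            strategy=client.V1DeploymentStrategy(type="RollingUpdate"),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "web"}),
                spec=client.V1PodSpec(containers=[container]),
            ),
        ),
    )
    apps.create_namespaced_deployment(namespace="default", body=deployment)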

You definitely want some experienced people around to avoid some of the
footguns and shortcuts but over the last several years I think k8s has easily
proven itself as a substantial net-positive for many shops.

~~~
scarface74
So why should I do all of that instead of throwing a little money at AWS, run
ECS and actually spend my time creating my product?

Heck, if my needs are simple enough, why should I even use ECS instead of just
putting my web app on some VMs in an auto-scaling group behind a load balancer
and use managed services?

~~~
Legogris
I don't think anyone is arguing that you should use k8s for a simple web app.
There's definitely some inherent stack complexity threshold before solutions
like k8s/mesos/nomad are warranted.

When you start having several services that need to fail and scale
independently, some amount of job scheduling, request routing... You're going
to appreciate the frameworks put in place.

My best advice is to containerize everything from the start, and then you can
start barebones and start looking at orchestration systems when you actually
have a need for it.

~~~
geggam
What requirement is driving containers?

How are you managing your infrastructure? And if you already have that
automated, how much effort is it to add the software you develop to that
automation, versus the ROI of adding another layer of complexity?

The idea everything needs to be in containers is similar to the idea
everything needs to be in k8s.

Let the business needs drive the technology choices; don't drive the business
with the technology choices.

~~~
Legogris
Portability - the idea being that you can migrate to an orchestration
technology that makes sense when you have the need. The cost and effort of
containerizing any single service from the get-go should be minimal. It also
helps a lot with reproducibility, local testing, tests in CI, etc.

Valid reasons to not run containerized in production can be specific security
restrictions or performance requirements. I could line up several things that
are not suitable for containers, but if you're in a position of "simple but
growing web app that doesn't really warrant kubernetes right now" (the comment
I was replying to), I think it's a good rule of thumb.

I agree with your main argument, of course.

~~~
geggam
The overhead of managing a container ecosystem to run production is not
trivial. If you are doing this in a service then by all means leverage that
packaging methodology.

If you are managing systems that already have a robust package management
layer, then by adding the container stacks on top of the OS layers you have
just doubled the number of systems your operations team is managing.

Containers also bring NAT and all sorts of DNS / DHCP issues that require
extremely senior well rounded guys to manage.

Developers don't see this complexity and think containers are great.

Effectively, containers move the complexity of managing source code into
infrastructure, where you have to manage that complexity.

The tools to manage source code are mature. The tools to manage complex
infrastructure are not mature and the people with the skills required to do so
... are rare.

~~~
Legogris
> If you are managing systems that already have a robust package management
> layer, then by adding the container stacks on top of the OS layers you have
> just doubled the number of systems your operations team is managing.

Oh yeah, if you're not building the software in-house it's a lot less clear
that "Containerize Everything!" is the answer every time. Though there are
stable helm charts for a lot of the commonly used software out there, do
whatever works for you, man ;)

> Containers also bring NAT and all sorts of DNS / DHCP issues that require
> extremely senior well rounded guys to manage.

I mean, at that point you can just run with host mode networking and it's all
the same, no?

~~~
scarface74
Or you can just use ECS/Fargate and each container registers itself to Route53
and you can just use DNS...

------
sho
This, and the other articles like it, should be required reading on any "how
to startup" list. I personally know startups for whom I believe drinking the
k8s/golang/microservices kool-aid has cost them 6-12 months of launch delay
and hundreds of thousands of dollars in wasted engineering/devops time. For
request loads one hundredth of what I was handling effortlessly with a
monolithic Rails server in 2013.

It is the job of the CTO to steer excitable juniors away from the new hotness,
and what might look best on their resumes, towards what is tried, true, and
ultimately best for the business. k8s on day one at a startup is like a mom
and pop grocery store buying SAP. It wouldn't be acceptable in any other
industry, and can be a death sentence.

~~~
marcinzm
>It is the job of the CTO to steer excitable juniors away from the new
hotness, and what might look best on their resumes, towards what is tried,
true, and ultimately best for the business.

Then they might simply join another startup or a big tech company as
competition for good engineers is fierce. Startups also famously underpay
versus larger companies so you need to entice engineers with something.

~~~
jerf
Well, when you pay your engineers 6-12 months of extra salary before you ship
anything because they _had_ to use Kubernetes-on-Highways to host this clever
NoNoNoPleaseNoSQL DB that some guy on Github wrote last week, hosted on
ZeroNinesAsAService.com and with a new UI built in ThreeReact (the hot new
React-based framework that implements an OpenGL interface that works on
approximately 3% of devices in the wild right now, and approximately 0% of
your target user base's devices), don't forget to account for that in the
investor pitch and salary offers.

I mean, seriously, this is a startup killer. Our host wrote an essay a long
time ago about beating standard companies stuck in boring old Java or C++ with
your fast, agile Python code, but in 2020 it seems to me it's almost more
important now to try to convince new startups to be a little _more_ boring.
Whatever your special sauce that you're bringing to market is, it _isn't_
(with no disrespect to the relevant communities) that you're bringing in Rust
or Nim or whatever for the first time ever, for Maximum Velocity. Just use
Python, cloud technologies, and established databases. Win on solving your
customer needs.

While by no means is everyone in the world using effective tech stacks well-
chosen to meet the needs and without over-privileging "what everyone else is
doing and what has always been done", enough people _are_ now that it's
probably not a competitive advantage anymore.

Honestly, you can beat most companies in just getting stuff out the door
quickly.

(Excuse me, off to file incorporation papers for ZeroNines LLC. Why _wonder_
whether your provider will be up when you can _know_? Nobody else in the
business can make that promise!)

~~~
StavrosK
I like to refer to this availability as "nine fives".

~~~
inkeddeveloper
We like to call it "A 9, a 4, and a 7". You pay depending on what order you
want those numbers to be in.

------
rossdavidh
Having been at a company that was starting to move things to Kubernetes, when
it had absolutely no reason to, I can say that it was being done because: 1)
the developers wanted to be able to say they knew how to use Kubernetes when
they applied for their next job (perhaps at a company big enough to need it),
2) the managers didn't really understand enough about what it was to evaluate
whether it was necessary, but 3) some of the managers wanted to say they had
managed teams that used Kubernetes, for the same reason as the developers.

Which is not to say that it should never be used. But we have a recurring
pattern of really, really large companies (like FAANG) developing technologies
that make sense for them, and then it gets used at lots of other companies
that will never, ever be big enough to have it pay off. On the other hand,
they now need 2-3x the developers they used to, because they have too many
things going on, mostly related to solving scale problems they'll never have.

Don't use a semi-tractor trailer to get your groceries. Admit it when you're
not a shipping company. For most of us, the compact car is a better idea.

~~~
Bombthecat
For those companies I recommend Rancher... It's kubernetes under the hood but
a lot of stuff is abstracted away.

~~~
colecut
So docker runs a bunch of system services but abstracts them away... And
kubernetes runs docker but abstracts that away... and rancher runs kubernetes
but abstracts that away..

Should I just wait a year for something that lets me use rancher without
knowing anything about it?

~~~
rumanator
The problem with infrastructure is that low-level interfaces are always
consumed by higher-level interfaces.

And if you want to run a process, but you want to distribute the apps and run
them as process containers, and you want to run them in an automatically
configurable cluster of COTS computers communicating through a virtual private
network...

Don't you understand where and why there are abstractions?

If anything, having people naively complain about how things are layered and
abstracted is a testament to the huge success of the whole tech stack, because
the complainers have formed such a simple mental model of how to distribute,
configure, run, and operate collections of heterogeneous services
communicating through a virtual network that they simply have no idea of the
challenge of implementing a workable system that does half of this.

But with docker+kubernetes it only takes a click, so it must be trivial right?

~~~
colecut
I haven't used kubernetes, but it must be a very difficult click if another
tool (Rancher) exists to make it easier.

I understand why abstractions exist, but the number of abstractions in the
chain I mentioned is amusing to me.

~~~
root_axis
Why is it amusing? Do you find the amount of abstraction between the CPU and a
browser similarly amusing? That judgement seems arbitrary. The reason an
abstraction is created is that it's sometimes helpful to have complexity
managed automatically if full control of the complexity is not necessary for
your needs. Your reaction seems to suggest "kubernetes doesn't need to be so
complex", but I am not sure if you really believe that.

I can understand the "kubernetes may not be the best engineering decision for
your needs" argument, but that's a different argument from kubernetes is _too
complex_.

~~~
colecut
I suppose amusement is arbitrary.

This comment chain started with: "Having been at a company that was starting
to move things to Kubernetes, when it had absolutely no reason to, I can say
that it was being done because: 1) the developers wanted to be able to say
they knew how to use Kubernetes... "

Someone responded by saying "For those companies I recommend Rancher... It's
kubernetes under the hood but a lot of stuff is abstracted away."

So if you don't need Kubernetes, and are just using it to learn Kubernetes,
you should throw an additional tool on top of Kubernetes that abstracts away
Kubernetes?

I'm sorry, that is amusing to me.

Some abstractions are necessary. Some aren't.

~~~
root_axis
I said the judgement is arbitrary, not the amusement.

> _Some abstractions are necessary. Some aren't._

It just seems bizarre to me that you can suggest that the abstraction is
unnecessary when you also claim to have never used the tool. What makes you
think it's unnecessary?

~~~
colecut
1. I didn't judge anything. I said I was amused. You inferred judgement.

2. I didn't say it wasn't necessary. The poster of the parent comment did. I
didn't work there, I don't know what was necessary. But it's safe to say, if
you don't need Kubernetes (which the parent poster said, not me), then you
don't need something to abstract Kubernetes (Rancher)...

And also, if I did know the environment, and the environment was incredibly
simple, I don't think it's necessary for me to have Kubernetes experience to
determine that it is not necessary... Sometimes a couple of VMs in different
zones behind a load balancer is just fine...

And if you don't agree, you probably also think a static landing page requires
React to be "done properly." How's that for inferring things you didn't say?
I've never used React either, I guess I'll never know if I really need it for
that landing page!

------
flowerlad
I am a solo developer (full stack, but primarily frontend), and Kubernetes has
been a game changer for me. I could never run a scalable service on the cloud
without Kubernetes. The alternative to Kubernetes is learning proprietary
technologies like "Elastic Beanstalk" and "Azure App Service" and so on. No
thank you. Kubernetes is very well designed, a pleasure to learn and a breeze
to use. This article seems to be about setting up your own Kubernetes cluster.
That may be hard; I don't know; I use Google Kubernetes Engine.

For others considering Kubernetes: go for it. Sometimes you learn a technology
because your job requires it, sometimes you learn a technology because it is
so well designed and awesome. Kubernetes was the latter for me, although it
may also be the former for many people.

The first step is to learn Docker. Docker is useful in and of itself, whether
you use Kubernetes or not. Once you learn Docker you can take advantage of
things like deploying an app as a Docker image to Azure, on-demand Azure
Container Instances and so on. Once you know Docker you will realize that all
other ways of deploying applications are outmoded.

Once you know Docker it is but a small step to learn Kubernetes. If you have
microservices then you need a way for services to discover each other.
Kubernetes lets you use DNS to find other services. Learn about Kubernetes'
Pods (one or more Containers that _must_ reside on the same machine to work),
ReplicaSets (run multiple copies of a Pod), Services (exposes a microservice
internally using DNS), Deployments (lets you reliably roll out new software
versions without downtime, and restarts pods if they die) and Ingress (HTTP
load balancing). You may also need to learn PersistentVolumes and
StatefulSets.
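
For instance (a small sketch; the Service name "orders", namespace, and port
are made up), code in one Pod can call another microservice simply by the
Service's DNS name:

    import json
    import urllib.request

    # Inside the cluster, a Service named "orders" in the "default" namespace
    # resolves as orders.default.svc.cluster.local (or just "orders" from the
    # same namespace); kube-dns/CoreDNS handles the lookup.
    def fetch_order(order_id):
        url = f"http://orders.default.svc.cluster.local:8080/orders/{order_id}"
        with urllib.request.urlopen(url, timeout=2) as resp:
            return json.load(resp)

    print(fetch_order("42"))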

The awesome parts of Kubernetes include the kubectl exec command, which lets
you log into any container with almost no setup or password, kubectl logs
to view stdout from your process, kubectl cp to copy files in and out, kubectl
port-forward to make remote services appear to be running on your dev box, and
so on.

~~~
FreeHugs

        If you have microservices then you need
        a way for services to discover each other
    

Why not run them in docker containers with fixed IPs?

~~~
petilon
What happens when the IP address changes? You need some way to lookup current
IP addresses. Why re-invent DNS? Also, how do you protect these services from
unauthorized access?

~~~
FreeHugs

        What happens when the IP address changes?
    

Changes how? It's not as if the IP of a server magically changes out of the
blue.

    
    
        Why re-invent DNS?
    

There is no reason to re-invent DNS. Each docker container will have to have
the information about where the other containers are. So you could write that
into the /etc/hosts of the containers, for example.

    
    
        Also, how do you protect these services
        from unauthorized access?
    

You need to do this no matter if you use Kubernetes or your own config
scripts.

------
jblake
I run a SaaS business solo, for eight years now, netting six figures, and I've
been on Heroku the entire time for just under $1,000 a month. Monolithic rails
app on a single database, 300 tables.

Sometimes I feel teased by 'moving to EC2' or another hot topic to save a few
bucks, but the reality is I've spent at most 2 hours a month doing `heroku
pg:upgrade` for maintenance once a year, and `git push production master` for
deploys and I'd like to keep it that way. I just hope Heroku doesn't get
complacent as they are showing signs of aging. They need a dyno refresh,
http/2, and wildcard SSL out of the box. I honestly have no idea what the
equivalent EC2/RDS costs are and I'm not sure I want to know.

~~~
winrid
Congrats on the business taking off! What is it?

~~~
eitland
Based on jblake's profile it seems to be guestmanager.com, but I might be
wrong.

(Also, for everyone who doesn't know:

- clicking on a username takes you to that user's profile

- clicking on the n minutes/hours/days ago takes you to a permalink directly
to that comment

)

------
BerislavLopac
Software engineering is the perfect example of the "blind scientists and the
elephant" problem. It is a very complex field, with a number of related but
distinct disciplines and activities required to make it work; it's impossible
to be an expert in everything, so we tend to specialise: we have back-end
engineers, front-end engineers, data engineers, SRE experts, devops
specialists, database experts, data scientists and so on. Additionally, the
software we are building varies wildly in terms of complexity, dependencies,
external requirements etc; and finally, the scale of that software and the
teams building it can vary from one person to literally thousands.

Articles like this one, and even more comments on HN and similar sites,
generally suffer from a perspective bias, with people overestimating the
frequency of their own particular circumstances and declaring something
outside of their needs as "niche" and generally misguided and "overhyped".

The reality is that various technologies and patterns -- microservices,
monoliths, Kubernetes, Heroku, AWS, whatever -- are tools that enable us to
solve certain problems in software development. And different teams have
different problems and need different solutions, and each needs to carefully
weigh their options and adopt the solutions that work the best for them. Yes,
choosing the wrong solutions can be expensive and might take a long time to
fix, but that can happen to everyone and actually shows how important it is to
understand what is actually needed. And it's completely pointless to berate
someone for their choices unless you have a very detailed insight into their
particular needs.

~~~
alexandercrohde
> Articles like this one, and even more comments on HN and similar sites,
> generally suffer from a perspective bias, with people overestimating the
> frequency of their own particular circumstances and declaring something
> outside of their needs as "niche" and generally misguided and "overhyped".

It's my experience the opposite is true. The blindness is people
overestimating their needs (or resume-padding) and using specialized,
overcomplicated tools meant for traffic in the billions (e.g. cassandra,
kafka, mapreduce) for 20-person startups that haven't hit rapid growth (most
of which never do).

~~~
BerislavLopac
I'm afraid you might be falling into the exact trap I have described.
Realistically, how many such cases have you seen? And of those you have, how
many did actually implement such a complex solution and ran it for a long time
without either closing down or transforming to something more suited to their
needs?

~~~
alexandercrohde
I kind of suspect you may be the one lacking experience.

I've worked at at least 8 different tech companies, mostly startups in SF or
NY. The vast majority used overcomplicated technologies that didn't fit the
needs of the project (most frequently microservices and NoSQL).

Off the top of my head I can't think of a single time such mistakes got
corrected. More often than not things would continue to be even more poorly
designed with the addition of new unnecessary technology.

In short -- I'm annoyed about this stuff because I've seen it first hand and
had to struggle with it for numerous years.

Your weird theory that people are inventing hypothetical situations to be
angry about... well I think you're the one inventing hypotheticals here...

~~~
RonanTheGrey
In 24 years I've only rarely seen scenarios that actually require something at
the level of complexity that k8s represents. I worked on Bing some years ago,
and it would definitely have benefited but MS rolled their own solution (which
has since been replaced by I don't know what).

I've seen k8s USED many times where it was wholly and completely unnecessary
and being pushed by juniors who wanted to go apply to Google in a year or two.

I am currently running a service that receives 3000 rpm spikes and averages
500k requests a day.

On a single server behind cloudflare deployed straight from Github.

We have a version of the service also running on ElasticBeanstalk with a
single server.

Neither experiences downtime.

People severely overestimate their needs.

Google, Facebook, Microsoft, Amazon? Are serving literally billions of
requests per minute. They have a need for that level of complexity.

Most of us here... do not.

------
SirensOfTitan
I disagree with the HN consensus here: I think managed kubernetes is really
useful for startups and small teams. I also commonly hear folks recommending
that I use docker-compose or nomad or something: I don't want to manage a
cluster, I want my cloud to do that.

We run a fairly simple monolith-y app inside kubernetes: no databases, no
cache, no state: 2 deployments (db-based async jobs and webserver), an ingress
(nginx), a load balancer, and several cron jobs. Every line of infrastructure
is checked into our repo and code reviewed.

With k8s we get a lot for free: 0 downtime deployments, easy real time
logging, easy integration with active-directory for RBAC, easy rollbacks.

------
igammarays
Overengineering is a real problem out there. I’ve seen k8s deployed for
internal back office apps that have literally 5 users - a raspberry pi
could’ve hosted it. Keeping things simple and reliable is often a harder skill
to learn than $BIGCO_TECH, and often confounded by political incentives.

~~~
Glyptodon
So if you do that another internal enforcement group will come and be like
"policy is that everything is cloud now, what's your migration plan?"
regardless of anything else, and the only answer is to have a plan that keeps
costs similar to the Raspberry Pi.

~~~
hedora
So, request budget to move the app to a cloud VM / container, and work out the
security issues, etc. The latter will be expensive, but the RPi deferred the
cost. Bean counters like deferred costs.

Then, ask finance to figure out the billing. It costs about $5 / month to rent
a raspberry pi equivalent, but multitenancy might reduce that.

------
WnZ39p0Dgydaz1
I'm getting tired of these "you don't need k8" posts. Sure, if you have a
simple web application with a REST API, don't use k8, unless it's for learning
purposes. But nobody does that anyway.

If you have something more complex with many moving parts that are separate
services, k8 is a great option. I've been using it in production for close to
2 years now - not a single service downtime, great fault-tolerance, and
absolutely zero management effort. Deploying complex applications, databases,
and monitoring systems is easier than ever before. I don't think using k8 is
overly complex. Yes, you need to invest some time to learn it, but that's the
case for every new technology.

~~~
bertil
We have a possible counter-example: our service is computing AB-test results
(essentially reading from a database, processing totals, writing significance
back). It have no non-system users, no dependencies, etc. All the test and
strangeness is handled internally.

We use k8s, which is indeed over-engineered — it ran fine as a reminder and a
local script for years. But the rest of the company has a release process that
they like, and we just integrate with it. Our service has a name in that
space, resources, and a schedule that other engineers can read. Our
description looks a little… film-school-credit-rolly because my name appears
as the lead, architect, project, emergency contact, etc.

I think the main oversight of those "you don't need k8s" posts is that most
projects are part of a system, and fitting into that system gives you
legibility to your peers that an nginx setup might not.

~~~
p_l
I have ~62 apps (that's applications, not instances) deployed on kubernetes
right now, for a single client.

It started out on 2 VMs with I think 2-4 CPUs, already running kubernetes. The
actual containers inside ran lighttpd and served static files while we fixed
up the sites that we had mirrored as static to run in containers.

If we had to run one, or maybe a few of those sites, it would have been easy
to run a single Apache + mod_php + vhost. We would have some annoying work to
do on the monitoring and logging side.

But we have 62. Some of them are mutually incompatible to run on a "standard"
distro, as they have mutually exclusive dependencies (for example, PHP
versions). This meant we ended up with containers to manage this in a somewhat
doable way (we are two people). We can't expend the manpower to do a complete
redo of the apps, though we have ideas on that (to go to a single common CMS
system for all of them).

K8s saved our sanity, because those "simple apps" altogether made for a
hard-to-manage setup, and the client doesn't like it when they are not
available, so we brought up HA as well.

Doing this as separate VMs would be hard and expensive. Doing it on Heroku
_is_ expensive - I made a calculation, and our original setup would result in
somewhere around ~1800 USD a month.

Our 2016-2017 spend on GCP (GKE, Cloud SQL, a VM to host Gitlab + network
traffic and DNS) was around 1000/month.

~~~
earthboundkid
Do they need to be 62 though? Without having looked, my ignorant guess is that
it could be reduced by an order of magnitude, it's just that doing so would
take time that no one has, so the simpler choice is just to shovel the
complexity into K8 instead of dealing with the complexity of reducing the
number of apps.

~~~
p_l
The guess is a big miss, yes.

Assuming certain pruning is done that I can see, we would reduce it to maybe
40-50. They are all separate concerns, independent from each other; the
pruning would merge the most mergeable elements back (those are, honestly,
tech debt and I'd welcome replacing them with one common app).

BTW, those 62 apps? They map to ~260 domain names. Those domain names and what
shows when you go there are what the client is paying us for.

------
sethammons
We used to manually ssh to deploy to our dozens of nodes, with just a handful
of developers: git pull, restart service.

Then we got to hundreds of nodes. Chef, chef, and more chef. Deploys were
typically run with a chef-client run via chef ssh (well, a wrapper around that
for retries). With dozens of services and many dozens of engineers, this
worked well enough.

Then we got to thousands of nodes. And hundreds of developers working on a
multitude of services.

We've adopted k8s. It has been a lot of work, but the deploy story is
wonderful. We make a PR and between BuildKite and ArgoCD we can manage canary
nodes, full roll outs, roll backs, etc. We can make config changes or code
changes easily, monitor the roll out easily, and revert anytime. I still don't
_like_ k8s mind you - I don't think programming with templates and yaml is a
good thing. But I've come to terms with that being the best we will have for
now.

~~~
NickKampe
We deploy small clusters everywhere in the same pattern, I love argocd. This
article fails to understand the use case for kubernetes, and arguably doesn't
fully understand the cloud.

Kubernetes is revolutionary; to think it's not is foolish.

------
pnathan
Kubernetes solves very real problems in a way that handles a full suite of
them.

This is very complex because the problem set is complex.

If you're running a substantially smaller system, k8s makes less sense.

That said, if you're familiar with running and monitoring k8s, a gke deploy
will solve a lot of the pain a traditional LB + EC2 ASG will incur out of the
gate. Let me explain:

Notionally, we need 4 basic services operationally for a single typical
service deployment. 1 of FooService, 1 load balancer, 1 database, 1
monitoring/logging system. All of these should tolerate node death; this means
roughly 3 pieces of hardware for this notional system. This is complexity that
k8s covers, at a high cost of knowledge. If you're bought into AWS, the
Beanstalk system will do this decently well, last I checked.

I think there is room for a k8s-like tool that is good for teams with < 10
services, and less than 10 engineers. Even k3s
([https://rancher.com/docs/k3s/latest/en/](https://rancher.com/docs/k3s/latest/en/))
has substantial complexity at the networking layer that, I think, can be
stripped for the "Small Team".

So I agree with the author in theory that k8s is overkill. But also other
infra types can start getting difficult to deal with in time, and "just deploy
onto a single big box" doesn't cover the operational needs.

~~~
theseadroid
Would AWS Elastic Beanstalk fit that <10 services profile?

~~~
pnathan
yes, it would.

Costs start really getting heavy with EB at a certain point, since you're
spinning up 1+ ASG & LB per service (a tier is an ASG and a LB, possibly a
DB). I wouldn't build a microservice architecture against EB, at _all_.

I'd say EB probably is cost effective up to, IDK, maybe 3 services with 3
nodes per ASG. Then you're breaking even or worse with k8s ops cost, and now
you're looking at "how much time (= money) is it to manage k8s with KOPS" vs
"how much are we spending on EB". KOPS is a very low-effort solution once you
get it rolling.

------
partiallypro
Probably unpopular, but I am generally opposed to using Docker/Kubernetes for
~75%+ of projects. I've been in arguments over this, but containers being
unmaintained and the complexity of Kubernetes can cause major issues. It's
over-engineering for smaller projects. That's just my opinion. I think a flat
VM is more appropriate most of the time. But there is no denying the
advantages of Docker when it's done right and used right.

A developer told me just a few weeks ago that you should "always" use Docker,
which I just found to be so ridiculous.

~~~
rantwasp
that’s not unpopular at all. everyone that has had to run k8s and keep it
uptodate or deal with unmaintained docker containers understands this 100%

~~~
kube-system
People who have issues with unmaintained docker containers are doing it wrong.
You still need to assess the quality of your dependencies for container images
just like you should be doing for any other dependency.

The issue is that docker lets some developers get in over their head very
easily. Many orgs have system admins to install and configure server operating
systems, but docker shifts some of those responsibilities back on to the
developer.

~~~
rantwasp
yes, but no.

Usually I would agree with you, but in today's world where we curl-install
stuff from the internet you'll always have someone pull a container to 'just
get it to work'. Once the prototype works, it's production. People that have
the discipline to actually research the quality of deps or... god forbid...
actually build the containers they rely on from scratch will not get into this
kind of issue, but again: kids these days...

------
thiago_fm
It's not that hard to use Kubernetes and it makes the developer's life easy.
It's very easy to deploy helm charts, and even though there are many gotchas
and complex things, if you want to deploy something simple, it is easy and
completely doable even solo.

(rant)

After over 10 years in development I've done and used literally all the things
people complain about a lot here: virtual machines, single-page apps, docker,
microservices, FP, and the list goes on. Even though I've struggled, I feel
very lucky to have been able to try all those things; they have been a joy to
use, and I've shipped shitloads of great code that is making a lot of money
for a lot of people and improving businesses in general.

I don't mean you need to use K8S or even like it, but there are definitely
developers who know their shit very well, who can make great single-page apps
using more than 3 different JS frameworks, also write good backend code and so
on. And they enjoy all of this and make companies genuinely successful. It
sickens me a bit how posts of this kind get so much attention when they could
be replaced by "yes, software, like everything in life, is complex!!!11". I
think the article itself is too shallow to actually touch on the difficulties
of using kubernetes and is mostly useless information. There are at least 10
posts with better and more structured criticism, but because it's cool to
complain about new things, this automatically gets traction on HN (which used
to be a place where people like new things...).

So... yes, you shouldn't use K8S everywhere (this also applies to
everything...), but it is the new thing (well, not really new...). Should we
just talk about Apache mod_php instead? It's natural that people want to try
new stuff and actually enjoy working with software. Not everybody sees
everything as problems. "Now you have eight problems, hehehehehe!!11".

Am I the only one who found this post completely useless and, to some degree,
toxic?

(/rant)

~~~
alexandercrohde
Well good for you that your SPAs work every time, and you never broke
production with kubernetes. Here's a cookie.

Now for the rest of us who work with engineers across all skill ranges and
experience levels, we actually do need to care about such factors.

The question is -- you hire a guy off craigslist to run your site, and every
minute of downtime costs $1,000. Are you going to want him to use Kubernetes
or a braindead simple hosted solution?

------
supermatt
I see the author is a proponent of docker-compose, which I use myself for
small projects. I have a docker-compose configuration in all my repos, and a
`docker-compose up` brings the app up on my laptop. I could use minikube in
almost exactly the same way. i.e. there is effectively no difference from a
development perspective.

If you are managing kubernetes yourself, on your own hardware, the moving
parts can indeed be a burden for a small team - but all of these pain points
go away with a managed kubernetes, as offered by most IaaS providers. i.e. if
you are using an IaaS provider, there is (usually) no difference from a
production perspective.

There are fewer moving parts in docker compose, and it's easier to run on a
single VM - but it doesn't offer any of the dynamic features of kubernetes
that you would want at scale. The same containers can run on both.

If you need to dynamically scale your application, or grow beyond a single
machine (I disagree with the vertical scaling proposed by the author - that's
for a very specific use-case IMHO), then docker-compose is simply no good.
Then you need to use docker-swarm. At this point, you either need to manage a
docker-swarm cluster or a kubernetes one. Kubernetes is the obvious choice
here. Fortunately, there is a trivial migration path from docker-compose to
kubernetes.

~~~
mosselman
> there is a trivial migration path from docker-compose to kubernetes

The migration path of docker-compose to swarm is basically:

    eval $(docker-machine env my_cluster)
    docker deploy --compose-file docker-compose.yml PROJECT_NAME

I have looked into k8s and it wasn't as easy as this.

~~~
supermatt
Yeah, it's not quite as easy as swarm - you basically need new configs - but
you can use the existing containers.

From experience, this was no more than a few hours work on an app consisting
of ~20 services - but I already had kubernetes experience so knew what I was
doing.

------
Hippocrates
There’s a lot of configuration to understand with k8s and even GKE. Badly
configured probes, resource budgets, pod disruption budgets, node affinities
etc. can have disastrous effects. I’m pushing my teams more towards serverless
since it takes out nearly all ops/scaling/rollout complexities. Right now
we're seeing our serverless apps on GCF, GAE, and Cloud Run easily outperform
our GKE apps in scaling, reliability, and simplicity (configuration and time
spent getting them deployed in a satisfactory manner).

~~~
spyspy
This is the lesson my last company learned hard. For anything serving less
than tens of thousands of requests a second, you just can't really beat GAE in
terms of simplicity and cost.

~~~
samblr
I'm planning to use GAE in a production environment.

Can you share the specifics on how GAE managed to scale, please?

------
hypewatch
It’s interesting that this critique of kubernetes is on a blog called “python
speed” because my most recent project with kubernetes was deploying a large
dask cluster. For this use case k8s was really valuable. It made the devops
part so much easier than it otherwise would have been, so we could put most of
our time into application logic. In other words, when we wanted to achieve
substantial “python speed” kubernetes was very helpful. For data engineering
projects, even with a small number of data engineers, it can be a big
productivity booster.

Personally, I like kubernetes and find it easier to use than other devops tool
sets, so it’s become my go-to tool. Probably wouldn’t recommend it to someone
who doesn’t know it and has a simple app architecture.

------
FlyingSnake
I've taken over a project containing 6 DB entities. Instead of building a
monolith (or a normal REST API), the Architects used 7 µServices based on k8s
and a NoSQL DB. Now simple development tasks take extra time, and anything
that affects multiple µServices needs n times the development effort. I wish
they had started with a simple monolith and refactored to µServices if needed.

~~~
theK
Your problem, my friend, is not Kubernetes or anything else technological. It
is the people around you that call themselves architects :-)

~~~
FlyingSnake
True that, I must add that k8s added to the delay for features and
enhancements.

Like most enterprise projects, by the time I got my hands on this project, all
the Architecture Astronauts had already moved to their next planet :-)

------
michaldudek
I’m a very happy user of Rancher 1.6 for years. Simple, nice GUI, got
everything I need, works fast, can deploy as many apps /services as you wish,
no new concepts to learn (if you know Docker that is).

Used it in my previous agency to manage clients websites and use it now in my
startup to manage multiple envs with few apps (api, front end, workers) and
nice and easy deployments via GitLab CI.

------
pm90
Heh, it’s quite amusing to see the posts here arguing that “you can do the
same thing with multi az deployments on aws with VMs, packer and ebs.
Kubernetes needs you to learn so much shit” ... do you even read what you
write?

Kubernetes is not gospel. It’s an opinionated, incomplete framework for
orchestrating container workloads. There are other ways to do the same thing
which are fine too. It works well for the most part but has disgusting failure
scenarios. So do other techs.

People who use and like kubernetes are comfortable with its trade offs and
portability. You may not be. It’s fine.

Shitting on kubernetes just because you're comfortable with another
technology, just because you can: that's not fine.

------
halbritt
> The more you buy in to Kubernetes, the harder it is to do normal development

This demonstrates the bias and perspective of the author. The best way I can
describe it is code-centric rather than system centric. If that's "normal"
then the article makes some very valid points. For example, I've seen quite a
few folks make the attempt to scale out badly when they could've scaled up
rather easily. Very many "bigdata" problems can be handled on a single machine
with a terabyte of memory.

If one shares that code-centric perspective, then yeah, k8s probably isn't for
you. The real benefit in overcoming the very validly criticized complexity of
k8s is the number of things that happen without intervention.

From a systems-level perspective, all these things are crucial. Services are
abstracted with endpoints by default. Liveness and readiness are built in.
Self-healing is built in. A consistent model by which apps are deployed is
built in. Logging, metrics, and SLA monitoring, while not built in, can all be
added and employed without intervention.

Ideally, these things abstract the infrastructure sufficiently well that it
allows developers to focus on development, rather than ancillary tasks like
deployment, monitoring, resilience, etc.

------
adieu
k8s is a raw technology, like the Linux kernel. You shouldn't use it directly;
that will be hard to maintain. There are a bunch of packaged solutions around
k8s, like Google GKE or AWS EKS. By leveraging them, you'll be working at a
higher level of abstraction and bring productivity back.

~~~
polskibus
What's the best package for running in-house?

~~~
pan69
Not sure about your use case but you could have a look at Minikube:

[https://kubernetes.io/docs/setup/learning-environment/minikube/](https://kubernetes.io/docs/setup/learning-environment/minikube/)

~~~
OJFord
I assume GP means Rancher (not a recommendation for nor against) or similar.

------
humbleMouse
Once you configure the kubernetes network layer with whatever hosting platform
you’re using, it’s really not difficult to administrate. It’s funny to me how
much kubernetes hate there is on hn.

~~~
turtlebits
Until you need to debug an issue and you're in way over your head due to all
the moving pieces.

~~~
humbleMouse
Then assign someone to debug it who understands that kubernetes is a wrapper
around common Linux functionality.

It’s not that hard to debug issues in kubernetes. Check status of pods, memory
levels, storage mounts, network configurations, and the stacktrace you’re
debugging. Not that difficult.
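
A hedged sketch of the first item on that checklist, using the official
kubernetes Python client (the namespace is an assumption): list pods that are
not Running and Ready, with their restart counts.

    from kubernetes import client, config

    config.load_kube_config()
    v1 = client.CoreV1Api()

    for pod in v1.list_namespaced_pod("default").items:
        statuses = pod.status.container_statuses or []
        ready = statuses and all(cs.ready for cs in statuses)
        if pod.status.phase != "Running" or not ready:
            restarts = sum(cs.restart_count for cs in statuses)
            print(f"{pod.metadata.name}: phase={pod.status.phase}, "
                  f"ready={bool(ready)}, restarts={restarts}")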

I’m not saying there aren’t edge cases, but if you set up your system with
centralized logging (filebeat) and have some way to scrape metrics (jmx, built
in tooling) you’ll be fine.

------
fock
it seems to me like kubernetes aims to replace the existing service-mesh
consisting of de facto microservices (load-balancers, remote-logging, systemd,
xinetd, ...) bonded by unixy conventions with.... a monolithic, proprietary
system. Proponents are then advocating building decoupled microservices on top
of this. Am I the only one who thinks this is schizophrenic?

(on the other hand: companies claiming to make the world a better place by
selling ads to the highest bidder are schizophrenic...)

~~~
theK
I’m positively attuned to the ... de facto Microservices bonded by unixy
conventions ... part. And it has worked well in the past. You had much more
freedom but everyone needed to pretty much roll their own.

Kubernetes is a compromise on that. You don't need to build your ops framework
from scratch now, and deploying something has very well defined APIs. OTOH you
are now in a world of finding out which k8s extensions/plugins/whatever you
should use and which are just "fancy", and answering questions like: is
Knative v0.8 good enough for our workloads? Because Ai Defi needs that for
some reason...

EDIT: Kubernetes is open source though right?

~~~
fock
Yeah, it is open source, but half a million LOC for a simple admin tool seems
like the perfect path to vendor lock-in ;)

------
skrebbel
I'm curious what HN would recommend as alternatives, especially for
small/early teams that are outgrowing single-machine setups.

It seems to me that there's something of a gap between "for single machine
setups" (eg docker-compose) and "for 500-engineer teams" (eg kubernetes).

~~~
eitland
Often you don't need docker-compose either as long as you aren't deploying
things that have weird dependencies.

Single monolithic application? Can run on a VM.

Multiple smaller applications written in Java, C# or Go? Can probably run side
by side on a VM.

~~~
skrebbel
Fwiw docker compose is great for running stuff side by side on a VM :-)

------
battery_cowboy
For my personal projects, k8s is so useful that I wouldn't ever build a server
by hand again. I can spin up my blog or whatever easily on one cluster, and if
it becomes too expensive I can just move elsewhere, or if I want to reduce my
costs, I could just run a single-node-cluster (I don't need HA) on a DO
droplet or something and still get the ease of being able to destroy and
rebuild my apps anytime I want to. It might be "overkill", but so are most of
the tools I use each day. Of course, I never create my own clusters, but it
isn't that hard to follow a tutorial if I had to.

------
daitangio
K8s is complex. For this reason, cloud providers sell it as a service. K8s and
microservices are a trending topic, so it is true you must think with your own
head before creating microservices at will.

But I think the article is a bit too negative.

For instance, in my humble experience the application server is always a
bottleneck before the database (i.e. if the database is Oracle or PostgreSQL).

Microservices move a bit of complexity to the client side and require smarter
clients, but offer a lot more resilience and fault tolerance.

The article focuses only on scaling and forgets the "single point of failure"
problem.

~~~
cmhnn
It's sold as a service because it drives compute. Compute is one of the top
money makers. The more people who buy into things like k8 that are essentially
VM deployment launchers the better. The providers aren't emotional about this
like 90% of the posters here. If a new flavor of the week gets people to
launch VMs next year then that will be sold.

------
gdm85
> you can use docker-autoheal or something similar to automatically restart
> those processes

I consider it sloppy to accept that a process will crash and become
unresponsive as a normal fact of life, and that it subsequently has to be
automatically restarted. A process should keep doing what it was designed to
do. Reasons for the crash/unresponsiveness should be investigated (memory
leaks, race conditions, etc.) and not swept under the carpet with an automatic
restart.

~~~
hinkley
Restarts should go into your reporting too.

Erlang seems to take the opposite approach. Processes are cheap, when one
wears out you dispose of it instead of trying to fix it.

------
devy
It would be unfair to simply mention a technology's "problems" without
mentioning even a single one of its "features".

IMO, Itamar Turner-Trauring has misrepresented Kubernetes.

The sole reason Kubernetes is popular and on the rise is that there are
genuine pros and features that make many people's lives easier instead of more
miserable.

So let me point out the pros here:

[1]: [https://www.infoworld.com/article/3173266/4-reasons-you-should-use-kubernetes.html](https://www.infoworld.com/article/3173266/4-reasons-you-should-use-kubernetes.html)

[2]: [https://hackernoon.com/why-and-when-you-should-use-kubernetes-8b50915d97d8](https://hackernoon.com/why-and-when-you-should-use-kubernetes-8b50915d97d8)

[3]: [https://opensource.com/article/19/6/reasons-kubernetes](https://opensource.com/article/19/6/reasons-kubernetes)

[4]: [https://www.weave.works/technologies/the-journey-to-kubernetes/](https://www.weave.works/technologies/the-journey-to-kubernetes/)

------
zaro
I'll take kubernetes over the myriad of AWS services any day.

Kubernetes is de facto the cloud standard. If I have to know stuff about the
cloud, I would like my knowledge to be transferable to the other cloud also. I
understand that these clouds have to compete and so on, but why do I have to
pay the price to learn their particular ways of naming things and private
apis, and what not.

So of course devs like kubernetes.

------
mancerayder
We're being pushed to move to Kubernetes/EKS (or, ECS as a plan B) away from
Elastic Beanstalk.

Elastic Beanstalk does the job, but it's slow to deploy things, it's
inflexible, and the Amazon AMIs that underlie it are riddled with ancient
packages that infuriate the people who run security scans.

Are we making a mistake moving to Kubernetes instead of redesigning our
infra as code using Docker + Terraform + Ansible + a CI pipeline? We run a
client-facing app, but it doesn't have super-low-latency requirements,
although it does need to be able to scale up to run jobs.

Something else: I have to be honest, the complaints here about "people are
doing it for their CV," while true, need to be understood in context: it's
extremely hard to change jobs without significant Kubernetes and Docker
experience.

------
johnmarcus
Lol. I've been using k8s for 5+ years and am currently interviewing for more
'traditional' sys admin roles. The number of traditional problems I _don't_
have by running even a small k8s cluster has become even more obvious as I go
through the process. For every alternative you may suggest, I can point out
dozens of problems you can run into that are solved with k8s. Just like any
other operations tool, yes, it takes time and effort to learn... that's why
sys admins exist. I always find these posts by non-sys admins humorous. Devs
often have so little respect for the knowledge base required to maintain a
stable, secure, scalable, and flexible system. It ain't nothing, no matter
the tool you use.

------
dan_quixote
> "You can get cloud VMs with up to 416 vCPUs and 8TiB RAM, a scale I can only
> truly express with profanity. It’ll be expensive, yes, but it will also be
> simple."

It's simple until you need to update. Good luck meeting any SLAs with your
fleet of singletons.

------
alexellisuk
We originally built OpenFaaS for Swarm, then moved to Kubernetes and support
both now. The complexity of K8s is harrowing, but by and large it works
well, if you can keep up with the pace of change. Try running a controller
you last modified 12 months ago on Kubernetes 1.17.

Now we've spent time looking into containerd and trying to provide
microservices/faas on top of that instead - without the clustering
([https://github.com/openfaas/faasd](https://github.com/openfaas/faasd))

Something I do like about K8s is the ecosystem - in 5 minutes I can automate
TLS with LetsEncrypt on a managed cluster.
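
For the curious, that typically means something like cert-manager plus a
single ACME issuer resource; a rough sketch, with the email and ingress class
as placeholder assumptions:

    apiVersion: cert-manager.io/v1
    kind: ClusterIssuer
    metadata:
      name: letsencrypt-prod
    spec:
      acme:
        server: https://acme-v02.api.letsencrypt.org/directory
        email: ops@example.com                 # placeholder contact address
        privateKeySecretRef:
          name: letsencrypt-prod-account-key   # secret holding the ACME account key
        solvers:
          - http01:
              ingress:
                class: nginx                   # assumes an nginx ingress controller

After that, annotating an Ingress with cert-manager.io/cluster-issuer:
letsencrypt-prod is roughly all it takes for certificates to be issued and
renewed automatically.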

------
ryanthedev
Kube isn't for small businesses. It's for enterprises.

Most developers aren't running the clusters; 99% of the time they are
managed by the cloud provider.

Why would I ever order VMs and manually run a kube cluster?

But as an operator needing to manage clusters and multiple teams, I spend my
time coding automation.

I'm not saying kube is simple, but it's a lot easier than managing
application runtimes on VMs.

Before kube I was managing 260+ VMs (Hyper-V, in internally managed
central/east DCs) for my product's in-house app. I had to essentially build
my own poor man's orchestration platform to manage applications and
deployments.

------
papito
I find the assumption of many companies that they NEED stuff like K8S because
they are going to be so "webscale", frankly, pretty arrogant. Most systems out
there can run on a 2010 laptop, if done right.

~~~
Saaster
In my previous company we had 20 million daily active users and we ran that on
4x M4.large EC2 instances. 4 instances not because it had any significant load
(probably ~10-15% sustained), but purely because of high-availability and the
ability to do a rolling release update.

------
k__
I'm coming from the serverless side of things where people always say that
only a few companies on earth even need K8s, like cloud providers.

How do you rationalize using it in your company?

For example, a learning platform that lets you integrate frontend and
backend code into your lessons uses Docker containers for their product. They
can't offer runtimes for every programming language used on the frontend and
backend, so they let teachers upload their own containers. I'd say they are a
good example of a company that could need k8s.

------
mosselman
I use docker swarm (mode? I am not sure) in production and it works great.

~~~
whatsmyusername
I really like docker swarm for the on-prem stuff we have.

------
sascha_sl
EndpointSlice is a really bad cherry pick. I have written K8s controllers
for 2+ years and only recently learned about it when I had to write an
_ingress controller_. Funnily enough, I also had to learn about
externalTrafficPolicy, because it turns out that if you have a lot of pods
behind an AWS ELB, the traffic distribution can be terrible; a daemonset with
local-only routing, which then round-robins to the pods, works wonders.
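
Roughly the shape of that: the ingress controller runs as a daemonset and is
exposed through a Service with externalTrafficPolicy set to Local, so the ELB
only routes to nodes that actually have a local pod (names below are
placeholders):

    apiVersion: v1
    kind: Service
    metadata:
      name: ingress-controller        # placeholder; backed by the daemonset's pods
    spec:
      type: LoadBalancer
      externalTrafficPolicy: Local    # keep traffic on the node the ELB picked; no extra hop
      selector:
        app: ingress-controller
      ports:
        - name: https
          port: 443
          targetPort: 443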

You need to know none of this if you're not even close to the level of scale
we are.

------
bdcravens
> there are wide variety of tools that will do just as well: from Docker
> Compose on a single machine, to Heroku and similar systems, to something
> like Snakemake for computational pipelines.

Even though it has the disadvantages of being vendor-dependent and not open
source, I've found ECS to be a very nice solution. Conceptually, it's very
similar to Kubernetes, with much of the plumbing that makes Kubernetes so
complex baked into the AWS platform (much of which you're already using if
you're on AWS).

~~~
akvadrako
I wouldn’t recommend ECS to anyone. It’s technically worse than k8s in every
way and tied to a vendor.

For me the biggest issue is speed of deployments. In practice it’s hard to get
deploys on ECS under 1 minute. With k8s, 5 seconds is easy.

------
TheKarateKid
Every time I feel secretly embarrassed for running my small projects on a
simple cloud VPS, an article like this comes along and restores my faith in
my decision not to over-engineer things.

This has come up on HN before, and it's a great read - "You are not Google":
[https://news.ycombinator.com/item?id=19576092](https://news.ycombinator.com/item?id=19576092)

~~~
privateSFacct
You can get pretty far with docker and things like ECS / Fargate etc too.

------
kirbypineapple
What service would the HN folks recommend for someone who needs to run a few
dozen different docker services that require persistent storage? It would be
nice to just have a pool of compute resources tied to persistent storage and
be able to spin up Docker instances at will. K8s has been suggested as the
correct solution, but it sounds like a lot of overhead for services that
require no scaling at all.

------
snicker7
I feel that the pro-monolith / anti-microservice attitude has become something
of a cargo cult (at least here on HN).

~~~
hedora
Cargo cults arose among islanders who saw cargo planes land. They didn’t
understand why the planes landed, but they did want to trade with them. They
built makeshift runways and put out boxes of cargo in the hope of attracting
planes.

If anything, the 5-person startup with 20 microservices “because
Google/Netflix” and “we want to scale” is the cargo cult in this debate.

Not saying one side’s wrong or right, just that “cargo cult” doesn’t seem to
be used correctly here.

(I admit I’m skeptical of microservices, since they add so much complexity,
and even Netflix suffers partial service outages on a regular basis...)

------
nova22033
Most of the "You don't need kubernetes" posts should be "you don't need
docker".

------
AcerbicZero
There is something amusingly ironic about a Python blog complaining about
k8s being overly popular relative to how difficult it is to actually use. K8s
is extremely complex at times, but at least it maintains some semblance of
semantic and logical consistency, unlike certain other tools.

------
yayajacky
Great tools make hard problems approachable. They also reduce impossible
problems to hard ones.

But it takes experience (having solved things both the easy way and the hard
way) to prefer easy problems and easy solutions.

It's a good time to be a consultant where you get to solve the same problems
using different approaches.

------
tyingq
It's also more common now to add on third-party functionality like a service
mesh, various serverless implementations, secrets management, logging
frameworks, etc., making it even more complex. Not disputing that some of
these add value, but the number of moving parts is high.

------
martythemaniak
How about this rule of thumb when it comes to the question of "Should you
Kube"

How many developers in your organization?

0-10: No

10-100: Maybe

100-1000: Yes

1000+: Definitely

------
sub7
You are the man. Preach on.

I wonder how many teams with active users in the 1000s have fully
dockerized/Kubernetes/microservice type shit designed for 10000x load that
they will likely never get to because they didn't spend their time iterating
on product.

------
acd
I think the network part of Kubernetes is hard to do right, and it's very
complex.

Furthermore, I wonder about the performance of a local monolith service on
metal, with a fully local CPU cache, compared to microservices making
distributed network requests.

Disclosure: I run Kubernetes in production.

~~~
p_l
It's IMO much easier if you don't run complex CNIs, but unfortunately not
everyone can afford that :(

------
tdons
We use k8s at $JOB so I decided to look into it on my own time. One wasted
weekend trying to install it and a profanity-laden #ragequit later, I moved
all my stuff over to OpenBSD instead.

Never been happier.

In the case of k8s: ignorance truly is bliss. Keep it simple people.

------
adgasf
So if Kubernetes makes management simpler and more robust for teams of 500+,
but is overly complex for teams of 5, what solution _would_ people recommend
for teams of 5?

~~~
imtringued
Docker compose and a bunch of shell scripts?
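
Something like this is usually enough (image names, ports, and the database
choice are placeholders):

    version: "3.8"
    services:
      app:
        image: example/app:latest      # placeholder application image
        ports:
          - "8080:8080"
        env_file: .env
        restart: unless-stopped
        depends_on:
          - db
      db:
        image: postgres:12
        environment:
          POSTGRES_PASSWORD: example   # placeholder; keep real secrets out of the file
        volumes:
          - db-data:/var/lib/postgresql/data
        restart: unless-stopped
    volumes:
      db-data:

The "bunch of shell scripts" part is then mostly docker-compose pull &&
docker-compose up -d wrapped in whatever your CI runs.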

------
Havoc
Yeah always seemed like madness to me. Docker compose seems to be the sweet
spot. Still sorta infrastructure as code via yaml without the fleet/swarm
logic overkill

------
AcerbicZero
There is something amusingly ironic about a Python blog complaining about
k8s being overly popular relative to how difficult it is to actually use. To
be fair, I've had nothing but mediocre experiences with Python, so I'm a bit
jaded.

On the actual content of the article... well, it gets worse somehow. There
are good arguments against using k8s (or any tool, really), but I don't think
any of them made it into this article. "Why scale with microservices when you
can just get a single massive VM" was probably my fav.

------
newcrobuzon
If your solution comprises only one or a few systems, and the primary reason
you are considering k8s is just to tackle clustering, scalability, or service
discovery, then you can always start by building simple clustering into your
system.

Here is how I built it into mine:
[https://www.titanoboa.io/cluster.html](https://www.titanoboa.io/cluster.html)

Obviously this will not always be the right solution, but in some cases it
might be a better fit than k8s...

------
rammy1234
Kubernetes local environment setup

Kubernetes debugging

Too many configurations to worry about

Ever-evolving features

No good practices

All of these apply to small teams that just want to get their products out to
market

------
zachguo
For our team with many containers scattered around ElasticBeanstalk and ECS,
K8S makes everything much cleaner.

------
i_dursun
News flash! Software engineering is hard.

------
marcodave
So, Kubernetes is the 2020 version of the enterprise application server of the
'00s ?

------
awinter-py
they say kube is greek for 'pilot' but it also means 'dice'

are you feeling lucky punk?

~~~
earthboundkid
Because Greek y and u are the same, "cyber" is from the same word as "kube".
"Cybernetics" was supposed to be the art of piloting, but it was very loosely
defined and the term was deliberately overhyped.

------
znpy
the proper way to use kubernetes is:

\- if your org is big enough: hire/train your devops engineers to manage the
kubernetes cluster(s)

\- if your org is small or waaay too big: use some form of managed kubernetes
cluster (aws, gke, do-k8s etc)

\- don't

------
rawoke083600
In the future my CV will also have a "Will Not Work With" skillz matrix.
Kubernetes goes on first... unless I'm applying to Google or something of
similar size.

------
peterwwillis
A lot of the problem with using Kubernetes is it appears to be the only option
for running microservices in a cloud environment. People choose it because
they think there's no other option (and they're somewhat right). But there's
Nomad, DC/OS, Docker Swarm (for a little bit anyway), ECS, GKE, etc. That's
still not a ton of options, but there are options.

That's just microservice orchestration. That's a small part of the totality of
things needed to implement a full-out SDLC. You can't just build Kubernetes
and think you're done; your code will need to integrate into a lot more stuff,
and you'll end up writing 10 layers of glue because that's just how many use
cases you have to support.

And it's weird that all that glue doesn't use standards. I mean, we have
TCP/IP & RPC & REST, we have pipes & filehandles, we have the OCI specs. That
gets us to a point where (at most) half of the stack of an architecture is
portable and interoperable with any system following those standards. But then
there's every _other_ component of the architecture that connects all the
pieces together, meaning you're writing glue that will only work for one
implementation. Change your implementation, and you have to change your glue,
and probably more stuff.

I think a lot of that non-reusable glue could be erased if it all followed
standards, such that the configuration and operation of each part followed a
standard interface, set of data types, etc. Tools and libraries could just
"talk container orchestrator" or "talk load balancer" or "talk object storage"
or "talk secrets management", and virtually any component could be integrated
into any other, by virtue of either a system-wide or application-specific
configuration.

You could argue we have something like that now with a "kubectl file" or
similar, but that's not only still platform-specific, but other tools don't
speak it, so K8s has to do everything, because it's the only thing that speaks
its language (config file/backend data store/IAM/secrets/roles/etc).

Rather than resign ourselves to those limitations, we could bundle everything
in an implementation-agnostic standard way with standard interfaces. The exact
same configuration (as code) could be used to run the same complete
architecture on a dozen different platforms, because every component would
speak the same language and handle all the other components in the same ways.
The backend services could all translate the standard based on how they were
configured, such that generic instructions are then translated into
implementation-specific actions. You could really write your architecture once
and run it anywhere, without the caveat of "anywhere, _on this platform
only_".

I feel like we're not talking about doing that because we keep getting caught
up in "Fuck, Kubernetes is pretty hard" conversations. Yeah, it's hard;
building and operating an 18-wheeler is hard. But what about the roads? What
about the gas stations? What about the containers we put on the trucks? All
that stuff is standard, and so we don't have to worry about what
implementation of gas station or road we use. I feel like we still don't have
those things in the cloud, and it's just weird.

------
shoulderfake
Then don't use it. Like wtf.

