
Maybe You Don't Need Kubernetes (2019) - WolfOliver
https://endler.dev/2019/maybe-you-dont-need-kubernetes/
======
fermigier
Previous discussion:
[https://news.ycombinator.com/item?id=19467067](https://news.ycombinator.com/item?id=19467067)
(315 comments).

~~~
alpb
Also
[https://hn.algolia.com/?q=don%27t+need+Kubernetes+](https://hn.algolia.com/?q=don%27t+need+Kubernetes+)

------
dustinmoris
All these "maybe you don't need X" posts die in an instant when the reader
already knows how to do X (i.e. when the learning curve argument is gone).

Let's get it right:

Kubernetes is really really cheap. I can run 20 low volume apps in a kubes
cluster with a single VM. This is cheaper than any other hosting solution in
the cloud if you want the same level of stability and isolation. It's even
cheaper when you need something like a Redis cache. If my cache goes down and
the container needs to be spun up again, it's not a big issue, so for cheap
projects I can save even more by running some infra, like a Redis instance, as
a container too. Nothing beats that. It gets even better: I can
run my services in different namespaces, and have different environments
(dev/staging/etc.) isolated from each other and still running on the same
amount of VMs. When you calculate the total cost savings here compared to
traditional deployments, it's just ridiculously cheap.
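
Roughly, and purely as a sketch (the app name and image are made up), the
namespace-per-environment setup is just:

    # One namespace per environment, isolated but on the same VMs
    kubectl create namespace dev
    kubectl create namespace staging

    # Deploy the same app into each environment
    kubectl -n dev create deployment myapp --image=registry.example.com/myapp:latest
    kubectl -n staging create deployment myapp --image=registry.example.com/myapp:latest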

Kubernetes makes deployments really easy. docker build + kubectl apply. That's
literally it. Deployments are two commands and it's live, running in the
cloud. It's elastic, it can scale, etc.
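
As a sketch of that flow (image and manifest names are made up; a push step
sits between the two commands so the cluster can pull the image):

    docker build -t registry.example.com/myapp:v2 .
    docker push registry.example.com/myapp:v2    # so the cluster can pull it
    kubectl apply -f deployment.yaml             # manifest references the new tag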

Kubernetes requires very little maintenance. Kubernetes takes care of itself.
A container crashes? Kubes will bring it up. Do I want to roll out a new
version? Kubes will do a rolling update on its own. I am running apps in kubes
and for almost 2 years I haven't looked at my cluster or vms. They just run.
Once every 6 months I log into my console and see that I can upgrade a few
nodes. I just click ok and everything happens automatically with zero
downtime.
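
For example, a rolling update can be driven with stock kubectl commands
(deployment and image names here are hypothetical):

    # Point the deployment at a new image; pods are replaced gradually
    kubectl set image deployment/myapp myapp=registry.example.com/myapp:v2

    # Watch the rollout finish; `kubectl rollout undo` reverts it
    kubectl rollout status deployment/myapp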

I mean yes, theoretically nothing needs Kubernetes (the internet worked just
the same before we had it), so it's certainly not needed, but it makes life a
lot easier. Especially as a cheap, lazy developer who doesn't want to spend
time on any ops, Kubernetes is really the best option out there next to
serverless.

If learning Kubernetes is the reason why it's "not needed" then nothing is
needed. Why use a new programming language? Why use a new db technology? Why
use anything except HTML 4 + PHP, right?

BTW, learning Kubernetes can be done in a few days.

~~~
candiddevmike
All of this glosses over the biggest issue with Kubernetes: it's still
ridiculously complex, and troubleshooting the issues that arise (and they will
arise) can leave you struggling for days, poring over docs, code, GitHub
issues, Stack Overflow... All of the positives you listed rely on super
complex abstractions that can easily blow up without a clear answer as to
"why".

Compared to something like scp and restarting services, I would personally not
pay the Kubernetes tax unless I absolutely had to.

~~~
wpietri
Exactly. A year or so ago I thought, hey, maybe I should redo my personal
infrastructure using Kubernetes. Long story short, it was way too much of a
pain in the ass.

As background, I've done time as a professional sysadmin. My current
infrastructure is all Chef-based, with maybe a dozen custom cookbooks. But
Chef felt kinda heavy and clunky, and the many VMs I had definitely seemed
heavy compared with containerization. I thought switching to Kubernetes would
be pretty straightforward.

Surprise! It was not. I moved the least complex thing I run, my home lighting
daemon, to it; it's stateless and nothing connects to it, but it was still a
struggle to get it up and running. Then I tried adding more stateful services
and got bogged down in bugs, mysteries, and Kubernetes complexity. I set it
aside, thinking I'd come back to it later when I had more time. That time
never quite arrived, and a month or so ago my home lights stopped working.
Why? I couldn't tell. A bunch of internal Kubernetes certificates had expired,
so none of the commands worked. In the end I just copy-pasted stuff out of
Stack Overflow and randomly rebooted things until it started working again.
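
(For kubeadm-built clusters, the usual fix for this failure mode is roughly
the following; exact subcommands vary by Kubernetes version, so treat this as
a sketch:

    # See which certificates have expired
    kubeadm certs check-expiration

    # Renew them all, then restart the control-plane components
    kubeadm certs renew all

On older releases the same commands lived under `kubeadm alpha certs`.)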

I'll happily look at it again when I have to do serious volume and can afford
somebody to focus full-time on Kubernetes. But for anything small or casual,
I'll be looking elsewhere.

~~~
busterarm
At work we're building an entire service platform on top of managed kubernetes
services, agnostic to cloud provider. We had already had bad experiences
running K8s ourselves.

Going into it we knew how much of a PITA it would be but we vastly
underestimated how much, IMO.

Would not do again -- I would quit first.

~~~
pnako
Fire and Motion [https://www.joelonsoftware.com/2002/01/06/fire-and-
motion/](https://www.joelonsoftware.com/2002/01/06/fire-and-motion/)

Written 18 years ago, so obviously not about Kubernetes, but it does explain
the same phenomenon. Replace Microsoft with cloud providers and it's more or
less the same argument.

------
flowerlad
I am a one-person team running Kubernetes on Google Cloud. It costs around $62
per month before tax. (One n1-standard-1 node for $35.34 per month, HTTP Load
Balancing $18.60 per month and SSD persistent disks for $8.50 per month).
Kubernetes gives me the ability to scale up quickly when the need comes (which
is soon, hopefully).

I evaluated AWS (Fargate & Kubernetes), Azure and GKE before settling on GKE.
Amazon charges $148 per month for cluster management alone. Google and Azure
charge $0 for cluster management. This ruled out AWS for me. AWS appears to
be a reluctant adopter of Kubernetes. They seem to want you to use Fargate
instead. I tried it and found it to be crap: very hard to get things running.

I was able to get Kubernetes running on Azure and GKE fairly easily. There
were minor hiccups on both clouds. On Azure, initial creation of an AKS
cluster failed because some of the resource providers weren't "registered" on
my subscription. On GKE it was hard to get ingress working: static IPs take a
long time to take effect, and in the meantime you are fiddling with your yaml
files trying to figure out what you may have done wrong, not realizing it is a
GKE issue.
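
(For anyone hitting the same thing: on GKE the static-IP dance is usually a
reserved global address plus an annotation on the Ingress. A sketch, with
made-up names; it can still take a while for the load balancer to pick it up:

    # Reserve a global static IP, then reference it from the Ingress
    gcloud compute addresses create web-ip --global
    kubectl annotate ingress my-ingress \
        kubernetes.io/ingress.global-static-ip-name=web-ip
)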

The awesome part of Kubernetes is that I barely had to learn anything about
Google Cloud to get everything working. I only had to learn Kubernetes.
If Google raises prices I can easily switch to Azure without learning any
Azure technologies. My knowledge as well as my application is completely
portable. I can't imagine doing any of this as a one-person team without
Kubernetes.

~~~
mpmpmpmp
Seconding the comment about ingress. By the way, if you have any idea how to
get ingress to work without taking down the services behind it while it
updates, that would be amazing. It would obviously work if we used a separate
ingress/load balancer for each host, but that seemed kind of wasteful, since
we are OK with scheduling downtime during off hours for our project.

~~~
nouveaux
I believe the Kubernetes/Docker way is to not update in place but to create
new instances. Can you spin up a new node pool/cluster and redirect traffic
there?
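
A minimal sketch of that blue/green idea, switching a Service's selector once
the new pods are healthy (all names hypothetical):

    # Run the new version alongside the old one
    kubectl apply -f deployment-green.yaml

    # Once it's healthy, repoint the Service at the new pods
    kubectl patch service myapp -p '{"spec":{"selector":{"version":"green"}}}'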

------
puhi
Kubernetes is here to stay.

What will change, or what will be enhanced:

\- Minimum requirements to actually run it (see k3s)

\- More managed services (GKE, Azure and AWS exist, but also DigitalOcean)

\- More/better handling of stateful services

\- A simple solution for write once read many (relevant for caching and for
CI/CD)

Being already at this pace with such a young project is great. This is huge.

And no one needs to migrate to Kubernetes right away! But it already does a
few things out of the box which reduce complexity:

\- easy cert management

\- internal load balancing

\- autoscaling (see the sketch below)

\- green/blue deployment

\- deployment
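
As an example of the autoscaling item above, a single command attaches a
horizontal pod autoscaler to a deployment (the name is hypothetical):

    # Scale myapp between 2 and 10 replicas based on CPU usage
    kubectl autoscale deployment myapp --min=2 --max=10 --cpu-percent=80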

But you do see how the industry is struggling with certain problems: with
Kubernetes, we are now moving into a cloud-native era.

Everyone now has Kubernetes available. There was no managed Mesos service
from Google, Azure and AWS. There was no managed Docker Swarm from Google,
Azure and AWS.

~~~
nailer
> But you do see how the industry is struggling with certain problems: with
> Kubernetes, we are now moving into a cloud-native era.

We were moving into a cloud-native era way before Kubernetes.

------
Fiahil
I'm usually fairly buzzword-averse, and I didn't follow the NoSQL/Spark/Big-
Data phase when it was hip. I was also hostile to Kubernetes until I had to
work with it against my will for a project.

Since then, I completely changed my mind about Kubernetes. It is a very good
technology for one reason, and it's NOT container orchestration: portability.
K8s is the missing piece that allows you to create a network of cooperating
computers independently of hardware, OS, and even architecture.

If I have a home-made k8s cluster on my Raspberry Pis at home, it's not
because it's lightweight and easy to manage (k8s adds significant overhead);
it's because it gives me the ability to unplug one of them, take the SD card,
format it, and plug in another board without any interruption or
configuration. I could plug my Intel laptop into that network and have some
pods running on it without having to change a single line in my
configuration. Finally, I can zip a folder and email a bunch of yaml files to
my friends (or have them git clone the repo), and they will be able to
replicate an exact copy of my home cluster with all, or some, of the
services. This is truly amazing.

~~~
pnako
I literally don't understand what you are achieving with your k8s raspberry
cluster thing.

~~~
optimuspaul
automatic failover obviously

------
schmichael
This post should probably have (2019) in the title.

I'm the Nomad Team Lead and can try to answer any questions. Since this post
was made the team has expanded, and the task/job restarting issue they link
has been (mostly) addressed. Also new since this post is our Consul Connect
integration which can accomplish similar goals to k8s network policies, albeit
opt in and with the actual discovery/networking code living in Consul/Envoy
respectively.

~~~
scarface74
I absolutely loved Nomad when I used it. We were stuck with Windows, legacy
apps and command line scripts, and had no time to learn Docker or anything
like K8s.

We were already using Consul, so bringing in Nomad to schedule _everything_
(batch files, .NET executables, some legacy programs, etc.) was a godsend.
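
For anyone curious what that looks like, a rough sketch of scheduling a plain
executable with Nomad's raw_exec driver (paths and names are made up, and
raw_exec must be enabled on the client):

    nomad job run - <<'EOF'
    job "legacy-report" {
      datacenters = ["dc1"]
      type = "batch"
      group "report" {
        task "run" {
          # raw_exec runs anything you could run by hand, no container needed
          driver = "raw_exec"
          config {
            command = "C:\\jobs\\nightly-report.bat"
          }
        }
      }
    }
    EOF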

Unfortunately, as much as I loved the Nomad+Consul combination, I really
couldn’t suggest it today. It is so much easier to find qualified K8s people
than Nomad+Consul people that I couldn’t in good conscience recommend it.

But this is all a moot point to me: if I were leading another on-prem project
I would use K8s, but we are all in on AWS+ECS+Fargate where I work now and we
really don’t care about the lock-in boogeyman.

Given a choice, I would still say at least if you’re on AWS, use the native
offerings. The value of hypothetically being able to migrate a large
infrastructure “seamlessly” is vastly overrated.

~~~
schmichael
> I absolutely loved Nomad when I used it.

Glad to hear it!

> Unfortunately, as much as I loved the Nomad+Consul combination, I really
> couldn’t suggest it today.

This is a fair critique and a problem any project living in a world defined by
a strong incumbent suffers. You made me realize we need resources to help k8s
users translate their existing knowledge to Nomad as many people looking at
Nomad will have k8s experience these days.

So thanks for this comment. Maybe with the right docs/resources we can at
least minimize the very real cost of using a non-dominant tool.

> But this is all a moot point to me. We are all in on AWS+ECS+Fargate where
> I work now and we really don’t care about the lock-in boogeyman.

This was me in past lives/jobs! HashiCorp's entire product line (with the
exception of maybe Packer and Vagrant) becomes much more compelling for multi-
cloud, hybrid-cloud, and on-prem orgs.

~~~
scarface74
In my world, there are two types of “architects” and “consultants”. You have
the kind that are “Smart People” (tm) and tell their clients that “this is the
way that it should be done” and you have those that “will meet the clients
where they are”.

From my experience with Nomad and the little I know about K8s, Nomad is the
latter. If you can run it from the command line, you can run it with Nomad.
This in and of itself is a great value proposition.

But Nomad does have the disadvantage I posted about above, and it has to
fight the “no one ever got fired for buying IBM” effect. I was able to get
buy-in only after I told the CTO, “it’s made by the same people who make
Consul and Terraform”.

------
errantspark
I feel like a lot of the problems K8s solves are fake problems that we have
because all our software links/imports so many libraries/packages that it's
impossible to tell what's going on. It feels like we have built an entire
infrastructure around hopefully capturing that one time we got it working and
being able to reboot to that state when things go wrong.

~~~
erulabs
You’re not wrong, but containerization is pragmatic, not idealistic. You can
still write simple, clean software packaged in a SCRATCH container, but you
can also isolate whatever crap you have to work with to get the job done.
Both the elegant app and the horrible one now share a reliable init system.
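
A sketch of the SCRATCH idea: a statically linked binary and nothing else in
the image (the binary name is made up):

    # Dockerfile with no OS layer at all
    cat > Dockerfile <<'EOF'
    FROM scratch
    COPY myapp /myapp
    ENTRYPOINT ["/myapp"]
    EOF
    docker build -t myapp:scratch .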

------
vicjicama
I am very happy working with Kubernetes in a very small team. I really like
the abstraction of the different entities and the declarative way to
configure them.

I think the learning curve is not that steep if you have already had to do
the same things with other alternatives. In my own experience I have been
discovering a lot of features that are very helpful, not just in production
but also in development environments, which were a real pain before.

I have a lab/cluster/blog running on Kubernetes that I am in charge of alone.
It is open source [1], and I version everything that goes into the cluster,
so you can see the evolution of the Kubernetes entities, the config, the
containers and the code. I started it from scratch and have been improving it
feature by feature; I think that might be a big factor in my positive
experience with Kubernetes.

I wonder if an issue with adopting Kubernetes is trying to migrate a big
system into it in a very limited time frame while pushing/forcing features to
work the way they did in the previous approach?

[1]: [https://github.com/vicjicaman/microservice-
realm](https://github.com/vicjicaman/microservice-realm)

------
siffland
A good friend and old coworker has been hitting me up for how to do stuff
with docker since his job now uses containers. He has about 15 containers (on
2 hosts) that do not change too much and was asking about setting up k8s (it
was a buzzword his manager heard). I talked him into just setting up swarm.

It took all of about an hour (over text messaging) to get it set up and all
stacks/services running. They could not be happier.

It comes down to the right tool for the job. If you don't need all the bells
and whistles, then keep it stupid simple. I realize swarm is not a 100%
"enterprise" solution; however, before, they were just issuing docker
commands after each reboot.
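
For reference, the whole setup is roughly three commands (the stack file is
whatever compose file they already had; the token is printed by init):

    docker swarm init                                     # on the first host
    docker swarm join --token <token> <manager-ip>:2377   # on the second host
    docker stack deploy -c docker-compose.yml mystack     # survives reboots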

~~~
seneca
> i talked him into just setting up swarm.

Not contesting the heart of your comment but, given the current state of
Docker, recommending Swarm to someone strikes me as bad advice. Nomad may be a
better call.

Mirantis has openly said Swarm's future is a two-year sunset with a
transition path to k8s.

~~~
LoSboccacc
I agree. I looked into swarm and routing was a mess. Even discounting the
bugs, lockups and moments of total disconnection, you had to rely on ugly DNS
hacks to know the live endpoints, making any kind of stateful service a
nightmare to set up, while Kubernetes just gives you an endpoints API. And
even then, there were no guarantees for local services that the swarm mesh
would route calls to the local instances, while in Kubernetes you can control
precisely how services are resolved by grouping them in pods, so that they
don't needlessly clog the pipes.

I find it very hard to find a swarm use case that a WAN, VPN, private segment
or Kubernetes cluster can't handle better.

~~~
netsectoday
I've been using Docker Swarm in production for about 2 years now... processing
about 1TB of data / month across 30+ containers. The networking and routing
has been rock solid except for that one day that the Docker dev team, in one
release, accidentally added random hashes to the internal DNS names of
services. Ever since that day I've used the docker-compose network alias for
internal routing [https://docs.docker.com/compose/compose-
file/#aliases](https://docs.docker.com/compose/compose-file/#aliases)

Discovering bugs in a technology you just started "looking into" actually
sounds like the learning curve.
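
The alias approach looks roughly like this in a compose file (service and
network names are made up):

    cat > docker-compose.yml <<'EOF'
    version: "3.7"
    services:
      api:
        image: myorg/api
        networks:
          backend:
            aliases:
              - api.internal   # stable DNS name, immune to name mangling
    networks:
      backend:
    EOF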

~~~
LoSboccacc
Are you sure you're not mixing up swarm and swarm mode?

~~~
mosselman
Ok, there is a difference? Do you have links to docs of both? This sounds hard
to search for online.

~~~
LoSboccacc
[https://sreeninet.wordpress.com/2016/07/14/comparing-
swarm-s...](https://sreeninet.wordpress.com/2016/07/14/comparing-swarm-
swarmkit-and-swarm-mode/)

------
pjmlp
Yep, still doing plain old VMs.

Just like NoSQL, BigData,..., in a couple of years k8s will be slowly
forgotten as everyone updates their CVs to the Next Big Thing™.

~~~
twic
Never bought into the VM fad myself. The physical hardware is chugging along
quite nicely over here.

~~~
aww_dang
Agreed. At this point HN could have a post about the merits of plain old
dedicated servers and it would seem novel.

~~~
hotsauceror
Just wait. In ten more years IBM and Deloitte will re-brand fat clients and
on-prem infrastructure as something else and they'll start selling it to
everyone. Replace "mainframe" and "dumb terminal" with "the cloud" and "web
app" and we had this same conversation 40 years ago. Then in another 20 years
we'll swing back to some version of consolidated+remote servers and
lightweight client access portals.

------
stevefan1999
Agreed. k8s is too bloated at this point, yet alternatives like Nomad and
swarm are missing some fundamental features, so we have had to adopt k8s,
unfortunately.

For example, swarm still has no fault tolerance, and Nomad relies on Vault,
another product from HashiCorp that is in a similarly limited state with
regard to documentation.

~~~
carlsborg
Try ECS on AWS. Rock solid and reasonably simple and does everything you need.

~~~
anaganisk
I dont think, the correct answer is to change your stack to fit some vendor

~~~
carlsborg
It's just docker; your containers remain exactly the same, only the
orchestration changes!

~~~
tekkk
You are pushing ECS quite hard, but why? It's nothing particularly amazing in
my opinion: good for what it does, but it requires all your other infra to
reside in AWS too. Which is not that cheap if you're really concerned with
cost. E.g. a NAT gateway is 5 cents per hour, and you need at least two of
them if you want to use private subnets.

------
moshloop
I think this post misses the point of where Kubernetes brings the most value:
multi-team environments. If you are a single team, then Nomad,
non-distributed docker, systemd units, etc. are probably a better choice
today due to the operational complexity of running Kubernetes (GKE, AKS and
EKS do lower the complexity, but you still need Kubernetes experts on call).

However, if you are in an organization with multiple teams (as are the vast
majority of developers), then Kubernetes provides a common language for
deploying, operating and securing your applications, which lets you go from a
process that could take days, weeks or even months to provision and configure
a VM to minutes or hours to provision a Kubernetes namespace.

------
Thaxll
It's funny, we use GKE and couldn't be happier. Once your CI/CD is set up you
don't have to think about it, and it makes your developer life much, much
easier.

I don't even think for one second about going back to uploading tar.gz or
DEB/RPM packages using scripts/Puppet etc.

------
jugg1es
I wish this article were titled 'Nomad is pretty great', since that was
really the point being made here. Nomad is not a true alternative to
Kubernetes: it does not support autoscaling out of the box, and it does not
have most of the bells and whistles that Kubernetes has.

However, it is just a single executable and it just works. Combined with
Consul and Fabio, you can run a container orchestration cluster with very
little fuss that has service discovery, internal load balancing and cluster-
wide logging.

It is a feature-rich task scheduler with a pretty good CLI. I highly recommend
it, but if you need stuff like autoscaling, you need to use something else.

~~~
schmichael
(Full disclosure: Nomad Team Lead here, so I'm biased!)

> Nomad is not a true alternative to Kubernetes.

You're right! We explicitly try not to be a standalone drop-in replacement for
k8s. Our comparison page goes into this a little bit (but now I realize it's
in need of a refresh!):
[https://www.nomadproject.io/intro/vs/kubernetes.html](https://www.nomadproject.io/intro/vs/kubernetes.html)

\- Nomad relies on Consul for service discovery

\- Nomad relies on Consul Connect (Envoy) for network policies

\- Nomad relies on Vault for secrets

\- Nomad relies on third-party autoscalers
([https://github.com/jippi/awesome-nomad#auto-scalers](https://github.com/jippi/awesome-nomad#auto-scalers)),
although there's more we can do to enable them.

\- Nomad relies on Consul Connect (Envoy) or other third parties for load
balancing

\- Nomad does not provide any cluster log aggregation (although logging
plugins are planned which should make third party log aggregators easier to
use)

Nomad still has many missing features such as CSI (coming in 0.11!), logging
plugins, audit logging, etc, but we never intend to be as monolithic a
solution as k8s. We always hope to feel like "just a scheduler" that you
compose with orthogonal tools.

------
luord
It's really odd that around 30% of the people who commented defending
kubernetes are merely echoing stuff that the article itself mentioned.

Also, it's weird that the argument of another 30% of people defending
kubernetes boils down to: "Using kubernetes is really easy, _just hire Google
/Azure/etc to do it for you_."

Can't begin to wrap my head around that one.

But what do I know, I prefer to KISS and I like nomad. In fact, I'd be using
swarm if its future wasn't spotty.

------
overgard
I agree with the sentiment, but I think the hardest part of Kubernetes is
dealing with the complexity of the resulting cloud. You end up having to
manage that complexity no matter what you're doing; the upfront cost of
figuring out Kubernetes is that maybe it takes a couple of days longer to
learn than something else, but then you have a huge ecosystem of technology
and people that are solving the same problems you have.

------
jaequery
docker compose and a jwilder proxy is all I need for most of my projects
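
For the curious, that setup is roughly one compose file (hostnames and image
names are examples):

    cat > docker-compose.yml <<'EOF'
    version: "3"
    services:
      proxy:
        image: jwilder/nginx-proxy
        ports:
          - "80:80"
        volumes:
          # the proxy watches containers come and go via the docker socket
          - /var/run/docker.sock:/tmp/docker.sock:ro
      app:
        image: myorg/myapp
        environment:
          - VIRTUAL_HOST=app.example.com  # nginx-proxy routes this host here
    EOF
    docker-compose up -d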

------
spicyramen
I see Kubernetes also being used for ML workflows. Yes... while you haven't
even figured out how to run an ML pipeline yet, add K8s (the Kubeflow
recommendation). As an ML researcher, solutions like Kubeflow have been very
painful, as I don't want to learn k8s; I need to keep focusing on ML work and
figure out my pipeline before I deploy to K8s.

------
chrismmay
There are lots of options out there. I'm currently exploring Cloud Foundry for
our platform, since our core API is written in Spring Boot. This page from
AquaSec has lots of links to articles comparing Cloud Foundry to Kubernetes.
[https://www.aquasec.com/wiki/display/containers/Kubernetes+v...](https://www.aquasec.com/wiki/display/containers/Kubernetes+vs+Cloud+Foundry)

I also found this book to be helpful
[https://www.amazon.com/gp/product/B07T1Y2JRJ/ref=ppx_yo_dt_b...](https://www.amazon.com/gp/product/B07T1Y2JRJ/ref=ppx_yo_dt_b_d_asin_title_o00?ie=UTF8&psc=1)

------
sgt
I think most Nomad users are well aware that Kubernetes can do all it can -
and vastly more.

If anything is wrong with Kubernetes, it would be the complexity of it and
that it has a steep learning curve.

It seems it's best to have a small team of people to manage it and to solve
the problems that arise.

We started using Nomad last year and one thing I can say is that it's
relatively easy to use and works well for its intended purpose, especially for
us hybrid-cloud folks.

------
jordanpg
The part of all this containerization technology that I don't understand is
that it was originally sold as a way (among other benefits) to unburden devs
from having to think about ops stuff and let them just code.

Well, my experience with Docker, k8s, and related technologies, is that I now
need to be an _expert_ with these things just to get through my day. It's
exhausting.

------
vi-mode
Before I started with k8s, I checked out Nomad many times but eventually went
with k8s. Nomad's learning curve didn't seem any better, the docs felt
sparse, and the much bigger ecosystem around k8s shouldn't be undervalued.

------
segmondy
If you don't need container orchestration, you don't need Kubernetes. If you
need reliability and, instead of using k8s, you orchestrate your containers
manually, you will have issues.

------
irq
Given all the talk here about Kubernetes’ learning curve, does anyone have any
resources for learning it for those that haven’t yet? Whether blogs, books,
video courses, etc.

------
ehutch79
What are you talking about, my blog that has an astounding 2 visitors a month
(one or both of which are accidental clicks) definitely needs to be on
kubernetes!

------
kresten
I built a big globally scalable system without Kubernetes or containers.

It’s very easy to understand and is highly reliable.

------
gfodor
Personally, I've found Chef Habitat to be a sane alternative. I don't have any
experience with k8s, but if you are not using containers especially, Habitat
gets you:

\- packaging

\- deploys

\- configuration management

\- supervision

\- service discovery

\- isolation

in a more classic "VM" environment, using unix jails. For anyone who isn't
interested in learning k8s, but also wants a relatively modern, all-in-one
solution to these problems, I recommend checking it out.

------
nomadnomad
Nomad is awesome. Just like the original author, our tiny team went with
Nomad for on-prem container orchestration instead of k8s. Since then we
haven't spent much time on any Nomad/Consul maintenance, and all the time we
got back we spent on developing the product.

Basically, I agree with others that k8s is both complex and not fully mature
yet.

------
candiddevmike
Anecdotally, about the worst place I've seen Kubernetes used is for CI/CD
pipelines where every pipeline gets a fresh cluster. It's truly a special kind
of hell--it takes at least 10 minutes to get a cluster up and running
(depending on cloud provider), and it causes all kinds of indeterminism and
frustration. You wait for a cluster to be spun up, wait for all of the extra
services to be added, have one of them fail (completely unrelated to your PR),
and have to do the whole thing over again. It's truly the new version of the
"compiling" XKCD.

If you ever want to see how brittle and bizarre Kubernetes can be, use it in a
100% automated fashion and hope for the best.

------
frou_dh
I heard the word Kubernetes one too many times from listening to podcasts put
out by The Changelog. The overhype bit got flipped in my head and now I'm
predisposed against it. Marketing kills!

------
Eikon
I know it's an unpopular opinion but I don't see the point of kubernetes. It's
a complexity monster which is not providing any real advantage over _way_
simpler solutions.

Even pod to pod communication, which would be trivial to do using any sane
solution is a _huge_ pain in kubernetes.

I would actually be frightened by running kubernetes in production. The
simplest things are so hard to do that I don't even want to think about how to
fix a weird issue when something goes wrong...

~~~
q3k
> I know it's an unpopular opinion but I don't see the point of kubernetes.
> It's a complexity monster which is not providing any real advantage over way
> simpler solutions.

The main selling point of k8s over any other tool is that it provides a
single, unified, secured, multi-tenant API endpoint for managing all of
production. Your developers updating their production app use the same API as
a CI system that wants to spawn some worker containers, and the same API as an
operator service maintaining a Redis cluster. All of this results in a single
view into production. If things go well, the end result is that you swap
daily interaction with a handful of different tools with disparate state
(Terraform, Ansible/Chef/Salt/Puppet, shell scripts and proprietary tools)
for just managing payloads on k8s.

> Even pod to pod communication, which would be trivial to do using any sane
> solution is a huge pain in kubernetes.

How is it a pain? A pod behind a Service gets a DNS name that you can send
requests to, and this handles the bulk of production traffic. If you want to
contact a particular pod that is not behind a Service, just use the k8s API
to retrieve details about it (like Prometheus does via its k8s pod service
discovery).
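
Concretely, a minimal sketch (assuming a deployment named redis in namespace
prod):

    # Expose the pods behind a stable virtual IP and DNS name
    kubectl -n prod expose deployment redis --port=6379

    # Any pod in the cluster can now reach it at a predictable name:
    #   redis.prod.svc.cluster.local:6379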

------
nailer
Of course most software companies (i.e., where most HN readers work) don't
need Kubernetes. At this point it's become a meme for wasted engineering.

Are you creating a cloud provider?

\- If you are creating a cloud provider, then yes you might want Kubernetes.
If you are making your own cloud provider, you should question why you are
doing that.

\- If you're not creating a cloud provider (for example, you're a software
company), then use whatever VM / container / MicroVM / etc. your cloud
provider gives you, rather than layering your own unnecessary complexity of
questionable value on top of what you pay for from your cloud provider.

~~~
UK-Al05
Normally cloud providers provide k8s to their customers.

~~~
nailer
Yes, that's the biggest use of Kubernetes. Most cloud providers provide Xen
VMs, containers (via whatever orchestration mechanism) or MicroVMs. Most
people making software shouldn't care which. It's a box to run code.

------
nautilus12
I get really sick of these types of posts, because they help fuel a weird
contrarian and/or fear-based avoidance of these technologies. Then the
company goes down a rabbit hole of inventing its own thing that is basically
an in-house version of Kubernetes, but worse. I think Kubernetes is pretty
easy; I'm not sure why companies are so against it, other than wanting to be
"even cooler than the cool kids".

~~~
superkuh
To start: not everyone is a company. Not everything is about companies. When
people bring overly complex tech that is mostly useless in this context, like
Kubernetes, to a personal environment, it's silly. But whatever, it's what
they know.

But when they start telling other people to learn Kubernetes because it's so
useful, it gets annoying. And there are _lots_ of people advocating for using
Kubernetes on small personal projects. When pressed, they all fall back to
the justification "you should use Kubernetes in personal projects to learn
Kubernetes", which is back to silly again.

That's where these kinds of critical articles come from.

