
Why Does Developing on Kubernetes Suck? - mooreds
https://blog.tilt.dev/2019/08/21/why-does-developing-on-kubernetes-suck.html
======
arpa
Ok, wow. Talk about mixed feelings towards the article... the author is
clearly knowledgeable, but it is entirely unclear why in the world he opts
for this kind of torture.

1\. Why Kubernetes for development? In most cases you don't need the overhead
of orchestrating across multiple nodes; docker-compose is just fine (minimal
sketch at the end of this comment). Incidentally, your push/pull problem is
solved because you pull the dependencies and BUILD your app locally.

2\. Want to have someplace to test your kube configurations? That's the ops
part of devops. Set up a proper testing cluster instead of trying to cram
everything in your devenv. Kube config should be in a separate project
anyways.

3\. No strace/bash. Look, man, this isn't even kube's problem. That's how you
build your images. Have separate dev images that use the prod image as a base
(sketch at the end of this comment). Deploy dev images to the testing env,
prod images to staging/prod. Build dev images locally.

4\. Network debugging. That's difficult on any system. If you don't have
access to the underlying host, you are screwed either way - call your devops
for support. Anything else (internal IPs, svc endpoints, ingresses) is trivial
to debug on kube. However, we're talking about development... just don't use
kube for development, period. Use docker-compose, which actually solves all
your problems. Right tools for the right job, or suffer the self-inflicted
consequences.
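
For point 1, a minimal compose file along these lines is all that workflow
takes (service names invented):

    # docker-compose.yml - hypothetical services, purely illustrative
    version: "3"
    services:
      app:
        build: .               # built locally, nothing pushed to a registry
        ports:
          - "8080:8080"
        depends_on:
          - db
      db:
        image: postgres:11     # dependencies get pulled; your code does not

And for point 3, the dev-image trick is a few lines of Dockerfile (tag made
up, assuming a Debian-based prod image):

    # Dockerfile.dev - layer debug tools on top of the prod image
    FROM myapp:prod
    RUN apt-get update && apt-get install -y bash strace procps \
        && rm -rf /var/lib/apt/lists/*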

~~~
dragonsh
1\. Kubernetes is for when your service needs Google kind of load, which 90%
of systems don't. Trade it for a simple VM or use LXD containers. Don't jump
on the next hype cycle. Just because something works for Google does not mean
it will work for you.

2\. Don't spend too much on it; just use traditional knowledge. You can still
use bare metal, VMs, or LXD containers. Focus on the application, not on
programming a tool designed to solve problems for Google-size deployments.

3\. Kubernetes is a completely different way of doing things. So if you want
to leverage old cluster and distributed-systems knowledge, you can use VMs or
LXD containers. Kubernetes is overkill.

4\. Abandon Kubernetes instead of complaining; just use simple systems.

~~~
OJFord
> Kubernetes is when your service needs google kind of load which 90% of
> systems don't.

This is the most frustrating but oft repeated nonsense about k8s.

Kubernetes is an amazingly helpful abstraction for any amount of load on any
larger than trivially tiny set of services/storage/etc.

When you, like 90% of systems, don't have Google kind of load, you don't need
Google number of nodes.

Machine count corresponds to load; abstraction doesn't.

~~~
yongjik
Kubernetes may be a useful framework, but "amazingly helpful" is not a phrase
I'd use for it. To me, it's a half-assed collection of five hundred nearly
identical (but all slightly different) yaml files masquerading as
"abstraction". And because they're yaml you can't refactor out the common
parts.

Well, to be fair, yaml itself doesn't exactly have a stellar reputation, but
nowhere else have I seen fifty lines of yaml convey so little information.

~~~
OJFord
Use a Helm chart (or alternative)?

When you're writing YAML you can absolutely reuse a block defined elsewhere in
the file (YAML calls these 'anchors' and 'aliases', which is... well, at least
it exists) in multiple places - though annoyingly not from other files.

When you're reading (partially) generated YAML (e.g. kubectl describe or
edit), it's IMO a feature that everything is in its place with no indirection.
(And as far as I'm aware this is YAML, not kubectl.)
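
For the unfamiliar, it looks like this (contrived snippet, not a full
manifest):

    resources: &default-resources      # define an anchor once...
      requests:
        cpu: 100m
        memory: 128Mi

    containers:
      - name: app
        resources: *default-resources  # ...and alias it wherever needed,
      - name: sidecar
        resources: *default-resources  # but only within the same file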

------
echopom
> Why Does Developing on Kubernetes Suck ?

IMHO because we are in a phase of transition.

Having worked in the software industry for years, I'm convinced we are halfway
through a much bigger transformation for software engineers, SREs, developers,
etc...

I work at a neobank (N26, Revolut, etc...); we are currently in the process of
re-writing our entire core banking system as microservices on top of
Kubernetes, with Kafka.

Not a single day passes without engineers needing to have an exchange about
defining basically all of the terms that exist in the K8s/Docker/Kafka world.

\- What's a pod? How does a pod behave if Kafka goes down? Do we really need
ZooKeeper? Etc....

Their workflows are insanely complex and require hours if not a day to deploy
a single change... obviously let's not even talk about the amount of work our
SREs have in the pipe to "package" the entire stack of 150+ services in K8s
through a single YAML file....

I'm sure this complexity is temporary. Some tech will automate all of this
away, and building, deploying and running will be much simpler. In its current
shape, K8s reminds me of the mainframe ecosystem I used to deal with at brick
& mortar banks.

Powerful systems, but they require a tremendous amount of work and experts to
be properly managed and taken full advantage of. Just like mainframes, K8s
leaves very little room for "mistakes" or "approximation".

~~~
rapsey
> Their workflows are insanely complex and require hours if not a day to
> deploy a single change... obviously let's not even talk about the amount of
> work our SREs have in the pipe to "package" the entire stack of 150+
> services in K8s through a single YAML file....

One should always keep in mind the famous aphorism:

> "All problems in computer science can be solved by another level of
> indirection, except for the problem of too many levels of indirection".

You may have just hit the "except" part.

~~~
meowface
That's a fantastic quote. Don't know how I haven't come across it before. It
definitely sums up our profession in a nutshell.

------
rexarex
Personally, I think the author is doing it wrong. Just spin up a real dev
cluster and use that. Why putz around with setups on your laptop? That's never
going to scale team-wise. And if you damn well insist on running your own
local k8s cluster for development, then the whole thing should be automated as
infrastructure-as-code anyway, started with a single command, so that other
developers get the same behavior.

Even after local dev the changes should be getting picked up and tested by a
testing cluster.

I think leaving it up to devs to come up with their own local testing k8s is
asking for bugs.

~~~
rooam-dev
The ability to run a production-like setup locally (at smaller scale, of
course) is a big plus imho. First, you know how it works; second, you get
smaller iteration/feedback cycles (restart locally vs. push and wait for tests
to run).

~~~
hmottestad
I agree with this. But usually you have both. A staging/test environment is
nice, but if you have 5 developers and one test env then devs will quickly end
up in line waiting for the test env to free up.

Remote debugging is also hard. Much easier when running locally.

~~~
LaGrange
The idea is that you have a dev cluster _per developer_. I ran with something
similar way back, working for a certain notable Perl shop well before
k8s/Docker became popular, and I have to say it has a lot going for it,
especially for more complex setups and more annoying database stuff. It was an
(on-demand, one-click-provisioned) _VM_ per service per dev, though, so the
biggest annoyance (no code reload, slow fs sync) was nullified by the popular
preference for either Vim or Emacs - with my current preference for VS Code
I'd probably be annoyed by it. Also it's a bit expensive, I guess, but you can
stuff a lot of VMs into a single big server.

------
pythonwutang
What an immature and rude way to criticize a young open source project. I’m
disappointed that so many in our community appreciate this disrespectful
writing style criticizing our k8s supporting peers.

Secondly, developing on Kubernetes “sucks” compared to what? Mesos? Docker
Swarm?

Maybe this engineer is still frustrated with Tilt’s failure as a business
([https://www.fastcompany.com/3069164/how-tilt-veered-off-cour...](https://www.fastcompany.com/3069164/how-tilt-veered-off-course)) and
Airbnb imposing changes on his workflows like how they deploy their apps. If
that’s true then I hope he finds more healthy and mature habits to manage his
anger.

~~~
ninkendo
> Secondly, developing on Kubernetes “sucks” compared to what? Mesos? Docker
> Swarm?

My thoughts exactly. I feel like most people who complain about these things
are just feeling grumpy about the overall experience, and aren't always quite
sure where to place the blame.

It takes a lot of introspection to know exactly where a system should be
better than it is... "this yaml is too complex", until you start thinking
about how you'd do it better, and then you realize all the problems each of
the similar approaches have, and that a lot of it was done for a reason, etc.

Or you start comparing k8s to completely different approaches, like just
rsyncing some files to a remote webserver and SIGHUP'ing it, which is much
simpler but has its own host of reliability/testability/reproducibility
concerns.

I think in reality, people are overwhelmed by what it really takes to adopt
best practices (declarative deployment, CI/CD, health checks, service
discovery, etc). Practices that have been hard fought and discovered over many
years of people trying things and failing. K8S IMO represents the state of the
art in a lot of them, but too often people place blame on k8s when what
they're really doing is questioning the best practices themselves. Practices
that are also shared across k8s's competitors like mesos and docker swarm.

------
pchico83
I share the vision of this article. We are working on the same problem at
Okteto Inc.

As an ex-Docker employee, I love working with docker-compose, and I agree it
is usually a good-enough abstraction for development.

But it is not only about the tool. I think the future of development is in the
cloud, and Kubernetes is a perfect fit for ephemeral dev environments:

\- You have effectively unlimited hardware and network resources, efficiently
shared by your whole team.

\- You can share endpoints for fast validation and easier integration with
external systems and webhooks.

\- You reduce k8s integration issues.

\- You have access to infra services like service mesh, logs aggregators and
metrics from development.

We just need to improve our dev tools to reach a great dev experience on
remote Kubernetes clusters. I have been working like this for 2 years now and
I can say my dev experience is much better than before. I have replicable dev
environments, I can spin up a new dev environment in a new namespace for each
git branch, switch dev environments with a single command, and deploy my
changes instantly.
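
The per-branch part needs nothing exotic, by the way. A sketch with plain
kubectl (paths invented, and assuming DNS-safe branch names):

    branch=$(git rev-parse --abbrev-ref HEAD)
    kubectl create namespace "dev-${branch}"
    kubectl apply -n "dev-${branch}" -f k8s/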

And 5G is just around the corner and it will make the experience even better.

------
aeyes
The only time I develop on Kubernetes is when I develop something which builds
on top of the Kubernetes API. In that case I use skaffold. For all the rest
Docker Compose is a better fit.

You don't develop your full CI pipeline on your machine either. So why care
about the orchestration?

~~~
metzby
I'd love to talk if you use skaffold; I work on Tilt (disclaimer) and suspect
it's a better fit, especially because it can work on either k8s, like
skaffold, or docker-compose, with a useful UI on top.

------
the8472
The one thing I have found tedious with k8s clusters is when you need to work
on the host system in relation to a specific pod. E.g. to diagnose kernel
issues or for profiling/debugging tools that need unrestricted root.

This can require jumping through several hoops. Find the node, get through
some bastion host, inject SSH keys into target node, build an SSH chain to get
into the host, then escalate from admin to root account.

It would be nice if kubectl offered an escape hatch for these kinds of things,
assuming one has sufficient permissions.

~~~
majewsky
Run a privileged container with the host's procfs mounted into it via hostPath
volume. Then you can use nsenter to break out of the container into the host
system.
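
A sketch of that idea (node name invented; this variant shares the host PID
namespace instead of mounting procfs, but the effect is the same):

    apiVersion: v1
    kind: Pod
    metadata:
      name: node-shell
    spec:
      nodeName: worker-1       # pin the pod to the node you're debugging
      hostPID: true            # see the host's process tree
      containers:
        - name: shell
          image: ubuntu
          command: ["sleep", "infinity"]
          securityContext:
            privileged: true   # required for nsenter to switch namespaces

Then pivot into the host via PID 1:

    kubectl exec -it node-shell -- nsenter -t 1 -m -u -i -n bash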

~~~
the8472
I already used a host mount to inject SSH keys. I guess I can just skip the
rest and automate the steps to pivot fully into the host.

------
sp527
Local development on Kubernetes is in a pretty decent state imo. Minikube +
Skaffold. Helm to manage/configure charts. And override resources to lower
mem/cpu consumption when running locally. Works just fine for most situations.
It's reproducible/consistent across developer environments and is as close as
you can get to mapping 1-to-1 with prod.
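
For example, a local values file (names and numbers illustrative) keeps the
footprint small:

    # values-local.yaml - shrink requests/limits for laptop use
    replicaCount: 1
    resources:
      requests:
        cpu: 50m
        memory: 64Mi
      limits:
        cpu: 200m
        memory: 256Mi

passed in with something like:

    helm upgrade --install myapp ./chart -f values-local.yaml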

~~~
mamon
So you install Minikube on each developer's machine? I thought Minikube was
created solely for running tutorials/exercises for people learning Kubernetes.

If you are developing microservices, I think you're much better off with a
development Kubernetes cluster and a single namespace for each developer
within it. This way the Kubernetes configuration can be identical to PROD, and
you do not need tons of CPU cores and RAM on each developer's laptop.

------
mgliwka
I've found [https://www.telepresence.io/](https://www.telepresence.io/) to be
helpful. It lets you seamlessly integrate a locally running process into a K8s
cluster, allowing for fast iteration and easy debugging.
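
If I remember the v1 CLI right (deployment name and command made up), the
workflow is roughly:

    # replace a cluster deployment with a process on your laptop
    telepresence --swap-deployment my-service --expose 8080 \
      --run python3 app.py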

------
deboflo
It’s trying to do too much, just like Google Wave, Angular, Google Web
Toolkit, etc. Avoid anything that tries to do everything. Instead, build on an
ecosystem of independent services that integrate well together.

------
hendry
If iterations take longer than 10s, then I'm really not interested.

Typical Docker-based CI builds take ten minutes. Never mind the deployment. A
ridiculous waste of time and energy.

~~~
the8472
If you have a fleet of services and are currently working on services A and B,
you can start up A-G in docker-compose based on CI-built images, but mount
your local build output into the A and B containers. Override their commands
to watch for filesystem changes so they reload themselves when you build.

This can easily be managed with a personal override file on top of a committed
compose file used by your team.
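
Concretely, something like this (paths and names invented; compose
automatically merges docker-compose.override.yml on top of the committed
file):

    # docker-compose.override.yml - personal, kept out of version control
    version: "3"
    services:
      service-a:
        volumes:
          - ./build/service-a:/app             # mount local build output
        # assumes a file-watching reloader like watchexec is in the image
        command: ["watchexec", "-r", "-w", "/app", "/app/server"]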

~~~
hendry
I agree the local dev environment UX is pretty much there. I use volume mounts
myself.

But when you're deploying on a remote host. OMG.

Just sharing a Docker image internally is anguish.

Developing as a remote team using Docker / Kubernetes workflow is just crazy.
You're way WAY better off using serverless, where some implementations take
~2s to deploy my Go binary.

~~~
the8472
This can be hacked together in a similar way. Wrap the container command in a
watcher that restarts on filesystem changes, then add a _kubectl cp_ as the
last local build step. Or push an image, set the deployment to pull on start,
and kill the pod.

For sharing docker images, assuming you have write access to some dev
repository, you can push to a tag tied to your development branch and others
can configure their docker setups to pull that on restart.

Dev images can be built faster by copying the build output into the image as
the last step; that way the other image-preparation steps stay cached. No need
to use whatever slow things CI is doing in its Dockerfile.

It only takes a few lines of shell or makefile to automate most of these
things. Of course local dev is still nicer.
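
The _kubectl cp_ variant, for instance, is only a few lines of shell (labels
and paths invented):

    #!/bin/sh
    # rebuild, then copy the fresh binary into the running pod;
    # the watcher wrapping the container command restarts the process
    make build
    pod=$(kubectl get pod -l app=my-service \
      -o jsonpath='{.items[0].metadata.name}')
    kubectl cp ./bin/server "${pod}:/app/server"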

------
ramanathanrv
For small setups, it is better to stay away from Kubernetes until it reaches a
good level of stability. We jumped on the K8s bandwagon early (before AWS EKS)
and paid a steep price for that. We continue to pay. Our AWS cost has doubled
since we transitioned to K8s from AWS Elastic Beanstalk.

I really wish I could say when the right time to embrace K8s is, but your
mileage may vary. When we made the switch, our service was at about 80 TPS
during regular load and would go up to 300 TPS during peak load.

------
ihcsim
The project that I work on depends directly on the Kubernetes API, admission
webhooks and API extensions. I do my daily development on a GKE cluster,
because local k8s environments are slow and inconsistent. What I really like
about tilt is that it enables a continuous development experience, where code
changes are continuously built and deployed to my remote cluster as I make
them in my editor. It helps shorten my feedback loop by replacing the series
of `docker build`, `docker push`, `kubectl apply` etc. commands with just a
single `tilt up` command.

I can sympathize with what the author said. A while ago, I worked as the only
devops engineer on a team of 10 full-stack developers. The tl;dr is that
management decided to migrate their existing workloads to k8s for reasons. The
developers were not too keen on all the new concepts, tools and techniques
they needed to learn as a result of that decision. They knew they had to own
their code from dev to prod, and couldn't just throw it over the fence
(because there was no fence). The issues brought up in the post definitely
resonated with some of the concerns the developers raised. I do think a tool
like tilt would have made the transition easier for that team.

Finally, I think the post could have been simpler if it had stayed focused on
just one tool, and on how that tool helps solve the problems that non-ops
developers have when writing code that needs to run on k8s.

------
aloer
What would you recommend for development of a private project on a local
server? No cloud cluster; bare metal and a great opportunity to learn.

Something on the order of 20-40 services for even the most trivial things.
Development speed above all else.

~~~
kkapelon
Tilt (the product created by the company behind this blog) is the implied
solution.

There is also Draft (Microsoft), Skaffold (Google) and Garden.io

Here is a comparison of the last 3: [https://codefresh.io/howtos/local-k8s-draft-skaffold-garden/](https://codefresh.io/howtos/local-k8s-draft-skaffold-garden/)

(I have no affiliation with any of those solutions, I just co-authored the
comparison article)

------
solatic
> For example, a common set up is to only allow your developers to access to
> create/edit objects in one namespace.

Not best practice. Best practice is to lock down access to production by
granting developers read-only permissions and forcing deployments to go
through a controlled pipeline. That pipeline can ensure that changes first go
through a staging environment that is essentially identical to production,
verify that the changes work there, and only afterwards deploy to production.
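
In RBAC terms that's roughly the following (a sketch, not a complete policy):

    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      namespace: production
      name: developer-read-only
    rules:
      - apiGroups: ["", "apps"]
        resources: ["pods", "pods/log", "services", "deployments"]
        verbs: ["get", "list", "watch"]   # no create/update/delete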

> Testing things like NetworkPolicies is also fraught.

That's what test, staging, and pre-production environments are for. Since the
developer typically won't have the entire environment set up on their machine,
and permissive network access is a working default, more restrictive network
policies are usually set up by whoever has responsibility for the larger
environment, and breakage is caught in later stages of the pipeline.

> ...test ingress changes, but even then changes can take 30 minutes to take
> effect and can result in inscrutable error messages.

Not Kubernetes's fault. Even then it's only partially the cloud provider's
fault (because of the inscrutability of said error messages) - infrastructure
doesn't provision itself in a split second, and that reality stands at odds
with kubectl's desire to return asynchronously - ironically enough, in the
name of a better developer experience than staring at a console waiting for
the cloud provider to respond within a timeout.

> Maybe in the future, SSH will be as anachronistic as the floppy disk icon,
> but for now, I want to log in to a container, poke around, see what the
> state is

Oh man. Please don't. Just don't. Improve your logging and metrics first, then
we'll talk.

> It’s reasonable that this image doesn’t have strace, and kind of reasonable
> that it doesn’t have bash, but it highlights one of the Kubernetes best
> practices that makes local development hard: keep your images as small as
> possible.

That's not a Kubernetes issue, that's a containerization issue. And a large
part of the philosophy behind why it's OK in containerland is, there are only
three things you need to effectively observe containerized stateless
applications, and they are logs (including request tracing), exported metrics,
and network traffic statistics (which really are a kind of exported metric,
just not one exported by the service).

> Even if you can avoid going out to the internet when pushing an image, just
> building an image can take forever. Especially if you aren’t using multi
> stage builds, and especially if you are using special development images
> with extra dependencies.

So... use multi-stage image builds?
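
For anyone who hasn't seen one, a multi-stage build keeps the toolchain out of
the final image (Go used as an arbitrary example):

    # build stage: the heavy toolchain lives here and stays cached
    FROM golang:1.12 AS build
    WORKDIR /src
    COPY . .
    RUN CGO_ENABLED=0 go build -o /out/server .

    # final stage: ship only the binary
    FROM alpine
    COPY --from=build /out/server /server
    ENTRYPOINT ["/server"]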

> What I want to do is just sync a file up to my pod... if your container is
> restarted for any reason, like if your process crashes or the pod gets
> evicted, you lose all of your changes.

By design. Your changes aren't going to propagate to staging, let alone
production.

> In dev, I want to tail the relevant logs so I can see what I’m doing.
> Kubernetes doesn’t make that easy.

[https://github.com/wercker/stern](https://github.com/wercker/stern)
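
Which makes tailing across pods a one-liner, e.g. (names invented):

    stern my-service --namespace dev --since 10m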

> If we want to empower developers to create end-to-end full stack
> microservices architectures we need to provide some way to get their hands
> dirty with networking. Until then that last push to production will always
> reveal hidden networking issues.

If you need to alter multiple microservices to deliver a single feature, your
architecture is probably screwed up, or your development process isn't
sufficiently structured to build up individual backend services incrementally
and in a serial, rollback-able fashion. You can't fix a poor architecture or
SDLC by throwing tooling at the problem.

~~~
pojzon
My take on it was that the author was talking about DevOps rather than just
development. And of course he did a pretty poor job of making his DevOps work
easy at the company.

------
orweis
With all the love for K8s - this post is spot on in many points.

Especially around accessing containers and observability. But... that's where
modern virtual-logging solutions come in, e.g.
[https://Rookout.com](https://Rookout.com)

When your software connects back to you and you can instrument it on the fly
(add log lines, non-breaking breakpoints, etc.), most of these pain points are
resolved.

------
CyanLite2
You guys do realize that Service Fabric solved these issues years ago, right?

------
foobar_
Because simplicity was an afterthought.

------
dilyevsky
Srsly? Is this meant to be satire?

------
elorant
So, how can the same story be submitted twice in a span of a few hours?

[https://news.ycombinator.com/item?id=20768531](https://news.ycombinator.com/item?id=20768531)

