
Will Kubernetes Collapse Under the Weight of Its Complexity? - tim_sw
https://www.influxdata.com/blog/will-kubernetes-collapse-under-the-weight-of-its-complexity/
======
hliyan
This whole image, to me, represents a big problem with software engineering
today:
[https://twitter.com/dankohn1/status/989956137603747840](https://twitter.com/dankohn1/status/989956137603747840)

The industry is full of engineers who are experts in weirdly named
"technologies" (which are really just products and libraries) but have no idea
how the actual technologies (e.g. TCP/IP, file systems, memory hierarchy etc.)
work. I don't know what to think when I meet engineers who know how to set up
an ELB on AWS but don't quite understand what a socket is...

~~~
tananaev
I have mixed feelings about this.

On one hand I definitely agree that it's good to know lower-level technologies,
and that's something I always ask people about in interviews. I think it's
important because I know it.

On the other hand, there is no end to how low you can go in the technology
stack. Do you need to know how sockets work underneath? Low-level network
protocols? Do you need to know how the hardware works, because it probably
influenced decisions made at the low levels of the software stack? I know what
a socket is, but maybe the only reason I know it is that I'm old and I started
coding at a time when raw sockets were the only way to implement network
communication. Now that we have modern libraries and frameworks, do we really
need low-level knowledge?

~~~
ShroudedNight
Being able to go all the way down the software stack makes it much, much
easier to keep said stack honest. Debugging is often a desperate attempt at
establishing ground truth. Without a low-level understanding, you're always at
the mercy of your tools, and how they have decided to curate the information
they're feeding you. Even the most well-meaning curation can be so
frustratingly deceiving as to incite violence, and god help you if something
you're interacting with has a smug sense of knowing what's best for you.

There have been a number of cases where I've had to rely on the ability to
debug binaries directly at -O3, or to resort to Wireshark to get the dose of
reality needed to challenge my [our] flawed mental model. If I hadn't been
able to do those things, I'd probably still be there, pondering those defects,
if not in body [due to declaring outright failure], at least in spirit.

~~~
erikb
You can make sense of all the back and forth in k8s networking? That needs
more than packet capturing, I suppose. How do you translate a million packets
into something useful?

~~~
kazen44
Most of k8s networking is actually not part of k8s? They don't really have an
integrated overlay network except for kube-proxy (which is lackluster).

If you are running k8s, you should run a proper overlay network first.

~~~
erikb
"Running k8s" means also running a CNI of course, otherwise k8s doesn't work.

------
manigandham
Microsoft Word is also incredibly complex software with decades of development
and features, and yet it's just a word processor. Everyone uses a small subset
of the actual functionality which is why the entire system can be complex and
simple at the same time, depending on your needs.

It's exactly the same with Kubernetes. It's just clustering software that ties
multiple servers together to give you a PaaS-like workspace to run
containers, but there are thousands of details you can use if you _need_ them to
build much more intelligence into operations. If you don't need them, you can
just run a single container and still benefit from simple declarative
deployments and automated oversight.

~~~
MBCook
But it’s trivial to start typing in Word. Spell checking is easy, as are basic
formatting operations. Loading/saving work the way you’d expect.

Yes you can write a dissertation with a ton of support from Word to make your
life easier, but doing simple things is simple.

It sounds like that’s what’s missing from kubectl. Even for a small start it
takes a lot of knowledge.

To continue the Word analogy, that sounds like LaTeX. It’s very powerful, but
no normal person is going to get started for the first time very fast for
basic tasks. Certainly not compared to Word.

The Rails analogy from the article seems very apt.

~~~
manigandham
Doing simple things in Kubernetes is also simple. It's 1 line to get a pod up
and running:

    
    
      kubectl run appname --image=yourimage
    

You can then graduate to a basic YAML config file with a few lines and update
with:

    
    
      kubectl apply -f yourfile.yaml
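
For illustration, the `yourfile.yaml` that command picks up really can be just a few lines. A minimal sketch (names and image are placeholders, and the exact apiVersion may vary with your cluster version):

    cat <<'EOF' > yourfile.yaml
    apiVersion: v1
    kind: Pod
    metadata:
      name: appname
    spec:
      containers:
      - name: appname
        image: yourimage
    EOF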
    

At some point, if you want to build a distributed application, you need to
know the concepts involved. Looking at it relatively, you needed to learn and
deal with many more low-level details before Kubernetes existed, so it's
actually quite an improvement over what we had just a few years ago.

~~~
vidarh
You skip over setting up Kubernetes. And keeping it running.

~~~
manigandham
If we're talking about the actual installation of a distributed system, then
installing a distributed database isn't "easy" either, and also requires
knowing the concepts so you know what you're doing.

Kubernetes is not going to be simpler or easier than the software that's
designed to run on top of it, but it's not that hard anymore either. The
installers work well, there are several distros with varying capabilities, or
you can just use the public clouds like most people do, where it's one button.

~~~
discodave
Compare to setting up a Lambda application with a DynamoDB or Aurora database.
Very simple, has limitations but you get a scalable distributed system for
(almost) no time investment.

~~~
pas
You can buy hosted k8s, one click, and your cluster is ready.

[https://cloud.google.com/kubernetes-engine/](https://cloud.google.com/kubernetes-engine/)
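
Or roughly one command, if you prefer the CLI to the console button (cluster name and node count here are placeholders):

    gcloud container clusters create my-cluster --num-nodes=3
    gcloud container clusters get-credentials my-cluster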

~~~
hedora
But that’s expensive.

Plugging in numbers for my $1000 Synology, which runs a half dozen Docker
containers, a VM, and is backed up off-site (for $120/yr), it tells me that
I’ll pay $135/month to run comparable Kubernetes at Google.

Even if you pretend my $75/month broadband connection is only used by the
Synology, and include power costs, the Synology still wins by tens of dollars a month.

(I included 3TB of storage for kubernetes, and 6TB usable for synology, with
3TB used, and sized for the high-memory 4 core machine, since dram is the
synology bottleneck, even with a low wattage cpu. The synology is
intentionally over-sized and is basically idle, so “but scalability!” isn’t a
valid complaint.)

Also, the 9’s I’ve observed were trouncing amazon for a while, though they may
be catching up, thanks to comcast...

~~~
pas
Yeah, it is. But that wasn't the question. MS Office SaaS or Adobe Creative
Cloud is expensive too. AWS/GCloud is already quite pricey as it is, and every
cloudified thing they roll out is just a money pump for people who don't know
better.

But if you want to roll your own, then you can, it's FOSS after all. kubeadm
works very well.

Comparing Synology with anything Google is silly, but useful. You can rent
dedicated machines easily. Leaseweb is nice. And then you can build your
infrastructure for cheaper.

------
hharnisch
I've gone to the last few KubeCons and given talks at two of them and I'd also
consider myself to be more of an app developer than ops. The tone has been
very much that Kubernetes is deeper in the stack than most developers want or
need to be thinking about. Mantras like "kubectl is the new ssh" have become
super popular. So Kubernetes ends up being the platform on which you build the
tools developers deploy their applications with -- if you work in ops. The problem
seems to be that there's not a lot of agreement on what those tools actually
look like. What Kubernetes does end up doing is providing a consistent API to
deploy workloads across (many, but not all) cloud providers. Over time we'll
see better and better developer facing solutions built on top of Kubernetes,
rather than part of Kubernetes.

~~~
flor1s
The problem with saying "kubectl is the new ssh" is that it is simply not true
in my opinion. Something more akin to "kubectl is to controlling a cluster as
ssh is to controlling a server" would be more accurate I think. The point the
OP makes about "Most Developers Don’t Have Google-Scale Problems" is true, I
don't think you should use Kubernetes if your app consists of just a website
and a database. But do people working on such (relatively simple) apps really
consider using Kubernetes?

~~~
geerlingguy
IMO, only if you're working on like 100+ of them. If you just maintain a
website using a traditional LAMP/LEMP stack, Rails, Node.js, or something like
that it still makes more sense (unless you want to be 'trendy') to stick to
primitives or use more managed hosting.

But if you're maintaining a fleet of independent sites, Kubernetes' scheduling
can make sense, despite the inherent complexity (TBH, you're going to have a
similar level of complexity managing the same kind of scale with any other
tool).

~~~
merb
K8s also makes sense if you have more than a single server. It's not easy to
keep all your servers up to date without some automation.

~~~
sidlls
You don't need K8s to automate maintenance of a small number of servers.

------
pat2man
Running OpenShift has felt like early Rails to me. You can get started with a
hosted version, switch to running on AWS easily, and dive deep down into
Kubernetes whenever you feel ready. It is also opinionated, so it’s easy to get
started on the golden path and modify it to suit your needs. The only real
frustration has been upgrading clusters, which has gotten easier with each new release.

~~~
keir-rex
I’m new to OpenShift (about a month in) after a year of low-level (hand-rolled
HA cluster) Kubernetes experience. I don’t rate the experience in OpenShift.
It seems like they are trying to tack on things which are superfluous to most
teams’ requirements, loosely defined, and not well advertised.

I’m constantly trying to figure out what it’s hiding from the Kubernetes layer,
or how that layer is being manipulated to provide OpenShift’s behaviour.

I personally wouldn’t recommend Openshift->Kubernetes but the other way round
would be a better approach once you know you need the additional
functionality.

(Edit: fix typo)

------
bigdubs
Kubernetes isn't supposed to be simple; it's supposed to be a box of tools
that you pull from to represent literally any workload.

Once you know what tools to ignore, and build scripts around the ones you
need, it's very powerful.

This line of thinking is like faulting the golang stdlib for having a lot of
useful stuff in it.

~~~
rorykoehler
Everything should be as simple as possible. It's the mark of good design.

~~~
geerlingguy
Minikube—and by extension, basing all the starting tutorials off minikube—is
approaching this ideal, IMHO. A year ago, the first time I tried it, it was
frustrating to even get up and running. This year I was actually able to get
some examples running locally... and that's progress :)

~~~
gant
As someone who uses minikube every day to work on an aggregated apiserver, my
impression is that minikube is incredibly fragile. I have to reset the VM more
than a few times a week. Which isn't that bad considering getting back up and
running is pushing one big yaml file down kubectl, but still. It could be much
better.

Same with kubeadm. It's pretty okay for a test cluster, but it can't even do an
HA setup out of the box. That's an absolute must-have if you have a project
big and serious enough to warrant using Kubernetes.

------
erikb
Exactly what I'm always saying. It's also nearly impossible in an Enterprise
IT environment to get Kubernetes working on your laptop. Minikube and Docker
Edge both seem to fail way too often.

As a developer one wants to spin up a system to work on, then work on it, then
push results to some repo. And this loop simply isn't possible (yet?).

Also what the author didn't mention is that even the vanilla k8s stuff is
already super complex. Let's assume you manage to set up a cluster somewhere
and it continues to work for more than 2 days (rarely seen in real-world work
environments). Then you are faced with deploying your hello world app to work
on. Just for a simple single-server nginx deployment with no files and no
config, you already need to understand multiple objects: deployments,
replicasets, pods, containers, nodes, hosts, services, nodePorts, port-
forwards, maybe even ingresses and controllers.
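
To make that concrete, here is roughly the minimum you end up applying for that hello-world nginx (a sketch; names, image tag and port are arbitrary), and it already drags in a Deployment, a ReplicaSet, a Pod and a NodePort Service:

    kubectl apply -f - <<'EOF'
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: hello-nginx
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: hello-nginx
      template:
        metadata:
          labels:
            app: hello-nginx
        spec:
          containers:
          - name: nginx
            image: nginx:1.15
            ports:
            - containerPort: 80
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: hello-nginx
    spec:
      type: NodePort
      selector:
        app: hello-nginx
      ports:
      - port: 80
        targetPort: 80
    EOF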

That means that even in a perfect environment you need several days or weeks
before you feel comfortable. And the documentation is not really helping you there.
Yes, it's better than most enterprise grade documentation out there, but still
it assumes a lot of stuff upfront. For instance, why should a developer even
know what an ingress is and that it might be something he needs?

Combine that huge learning overhead with nearly impossible network debugging,
beta-level stability, and the near impossibility of "just using it", and you
have a system that most developers will never touch.

Docker itself might survive though, and one of the good things about this CNCF
world is that alternatives to Docker also get a chance to improve on the
existing system.

~~~
pas
> It's also nearly impossible in an Enterprise IT environment to get
> Kubernetes working on your laptop.

huh? What does that mean? Maybe that means the environment is broken, not the
software you want to use/work on.

> For instance, why should a developer even know what an ingress is and that
> it might be something he needs?

Then probably that dev shouldn't work with k8s. At all.

If the dev wants/needs a hello world on a domain/IP, then they need a web
hosting service provider. (ghost.org) Or a PaaS (Heroku), or they can spin up
a VM on DigitalOcean and follow any of the thousand Ubuntu Nginx Website
Hosting tutorials on HowToForge.

If they already have dozens of VMs, scores of containers, and they are
fighting with monitoring and config management, then they might need k8s.

And a lot of folks do need this level of infrastructure and infrastructure
management (automation, abstraction, standardization, etc).

~~~
erikb
> > It's also nearly impossible in an Enterprise IT environment to get
> Kubernetes working on your laptop.

> huh? What does that mean? Maybe that means the environment is broken, not
> the software you want to use/work on.

That is what the term "Enterprise IT" means. If you ever work in a company
that makes more than a million USD per year, you will find an imperfect
network in a nearly unknown state, with proxies and firewalls making your life
hard, and automatic reconfiguration tools/scripts/antivirus software resetting
everything to "not working" the moment you take your eyes off the config files.

People and software that really want to make money in Enterprise need to be
able to handle that somehow. If you develop software on your MacBook in an
environment with the complexity of a Starbucks Wi-Fi, nobody can actually use
your software in Enterprise.

Btw. did I mention that Windows+Outlook+Lync is the high standard of
Enterprise laptops? Forget Enterprise users if you are not developing ON
Windows.

~~~
pas
... but ... but ... no one really cares about that. k8s is targeted at
startups who will do the sales dance with the big Enterprises and they'll do a
SaaS that's backed by k8s managed infra. (Or the Enterprise will use k8s on
their Linux servers. Maybe hosted on VMware, maybe on HyperV, maybe in Azure
maybe at AWS.)

k8s is a project, a lot of people find it useful. A lot of big corps have IT
R&D groups (basically all Fortune 500 have), and they have their own test
network. Or they test on their own rack, or on their own cloud, or on their
own AWS account.

I think I don't really understand what your belief with regards to k8s is, but
I'm interested, so could you give some details?

------
jrs95
The complexity of Kubernetes largely reflects the complexity of the problem.
Nobody has delivered anything significantly simpler that hasn’t had a much
smaller scope, and those tools approach the same level of complexity when
composed with others to get the same level of functionality. But setting up
Kubernetes is both well documented and automated on multiple cloud providers,
whereas something like Nomad & Consul doesn’t really have a good end to end
walk through to get you to the same destination as Kubernetes. I suppose if
you’re fine with pushing that complexity into service clients you can avoid
the need for a lot beyond what Nomad and Consul give you by themselves — but
then you end up with the downsides that something like the Netflix
microservices stack gets you. Fat clients ultimately leave the developer with
more complexity, and Kubernetes helps you eliminate that in favor of more
SRE/Ops/whatever-you’d-like-to-call-it complexity. Since cloud platforms can
take a lot of the edge off of Ops complexity, that’s my preferred approach.

~~~
atombender
I would also argue that Kubernetes is less complex than it seems at first
glance.

Yes, if you look at _all_ the possible parts, and at the current monolithic
codebase, there's a lot of complexity. It also supports umpteen cloud
providers, volume providers, networking stacks, etc., and comes with a whole
swathe of bootstrapping tools for various environments (e.g. AWS).

But if you strip it down, Kubernetes is "simple": There's a consistent object
store made out of JSON structures, and then there's a bunch of controllers
listening to changes to that store to make stuff real. That is the core.
Everything is, in principle, controllers mediating between the data model and
the real world. Very elegant and orthogonal.

You also have an API, a scheduler, and a thing called Kubelet that runs on
each node to manage containers and report node-specific metrics. And of course
you have Docker, though with 1.10 you can more easily run dockerless via
containerd, which is a great thing indeed.

The complexity comes from the operational part, when the pieces come together.
And as you say, there's not really any way around it.

~~~
hueving
>Everything is, in principle, controllers mediating between the data model and
the real world. Very elegant and orthogonal.

If you distill k8s down to this model alone, k8s becomes nothing but a pattern
that has existed for decades. Maintaining "desired state" and "operational
state" as separate things is not new.

~~~
lobster_johnson
You missed my point; I didn't say it's new, I said it was simpler than it
might seem, and that thinking of it as a state machine makes it easier to
understand what the core of Kubernetes really is.

And of course "nothing but a pattern" is nonsense. Pre-container systems like
Puppet and Chef -- which are also, vaguely, based on converging real state
towards desired state -- are firmly rooted in the traditional Unix model of
mutable boxes. You can't implement a consistent reconciliation loop if your
state can't be cleanly encapsulated (as with containers).

------
reacharavindh
I'm a sysadmin and I attended KubeCon recently. I came back with a similar
thought flow in mind. This one anecdote nailed the problem in my opinion -
"Kubernetes makes simple things hard, and hard things possible." So, if you
don't have things that you think are impossible, just don't pay the complexity
tax.

Real-life example :
[https://www.reddit.com/r/devops/comments/8byasq/is_kubernete...](https://www.reddit.com/r/devops/comments/8byasq/is_kubernetes_worth_it/)

Paraphrasing for discussion: Poster: A Rails project deployed to 6 servers
currently running in production.

Poster: During the asset compilation process, the servers often freeze.

Poster: I need to manually remove servers from the load balancer and deploy
one by one.

Poster: I looked a lot into Kubernetes and production containerization lately,
and as far as I read it, it should solve the deployment and uptime issues. I
imagine it'd be a lot easier to just switch containers instead of deploying
with Capistrano. I also really like the self-healing capabilities a lot.

So, he/she hopes that Kubernetes will magically solve the problem (asset
compilation freezes the server). I suppose in his/her mind, Kubernetes is the
snake oil.

Things that he/she failed to put thought into (and rather got revved up about
Kubernetes):

* Could I set up CI with a script that performs the asset compilation once on one server and just rsyncs the final result to the prod servers? (Something like the sketch after this list.)

* Could I spend a couple of hours understanding the asset compilation process and find out why it freezes the server?

* Could I learn more about load balancing, rolling deploys?
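
For that first option, the whole "pipeline" could plausibly be a few lines of shell on the CI box (paths, host names and the service name here are made up):

    # build the assets once, on the build/CI machine
    bundle exec rake assets:precompile

    # copy the compiled assets to each production server, then restart app servers one at a time
    for host in web1 web2 web3 web4 web5 web6; do
      rsync -az public/assets/ deploy@"$host":/var/www/app/public/assets/
      ssh deploy@"$host" 'sudo systemctl restart puma'
    done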

I think this is the real problem in the tech field. People are running after
shiny tools and hope to throw tools at their problems, all the while ignoring
the basics.

In this particular case, I think if they had a greybeard sysadmin who was
grumpy with the devs and enforced a strict release process, everyone would've
been happier.

------
EngineerBetter
I was at KubeCon, and had a similar experience. Lots of engineers excited
about all the technical possibilities, and less discussion of developer
productivity.

It reminds me a little of the 00's, when everyone thought their company should
write its own CMS. I think we're in danger of everyone writing their own PaaS.

This is why things like Deis and Cloud Foundry exist. Most app developers
should not have to understand the full depth and breadth of Kubernetes.

~~~
Jyaif
Amen. Use PaaS as much as possible and fall back to running VMs as a last
resort.

------
outside1234
> kubectl scaffold mysql --generate

This exists; it's called Helm, which in fact delivers the productivity gains
the author is looking for.
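
For example, something in the spirit of the author's hypothetical scaffold command (the chart name and flag syntax here are from the Helm 2 era and may differ in newer versions):

    helm install --name my-db stable/mysql
    helm status my-db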

~~~
GiorgioG
I have Kubernetes in Action on my desk and I haven’t cracked it open yet
because Kubernetes seems monstrously complex. Sure, Helm gets you up and
running. When something goes wrong in prod at 3am, what do you do?

~~~
barrkel
K8s is actually fairly simple and self-evident once you understand about 3
core ideas: etcd being the repository of state, in particular the spec;
controllers with control loops bringing status into line with spec (the core
mechanism in k8s, this is key); and a familiarity with the options on pods &
deployments, for initialisation, service discovery, liveness, readiness, etc.
that let the system make decisions globally while you only worry about local
status (this is most of what you need to know as a dev deploying a service).
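
As a rough illustration of that last point, the liveness/readiness side is just a few declarative fields on the container in a pod or deployment spec (a fragment only; path, port and timings are placeholders):

    containers:
    - name: app
      image: yourimage
      readinessProbe:
        httpGet:
          path: /healthz
          port: 8080
        initialDelaySeconds: 5
      livenessProbe:
        httpGet:
          path: /healthz
          port: 8080
        periodSeconds: 10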

Don't buy the FUD. There's a lot of it about. K8s commoditises cloud
providers. It's a strategic weapon against AWS lock-in.

------
mverwijs
"The number of things you need to create and worry about are a barrier to
starting a project and iterating on it quickly."

As an SRE / Ops person, it is my take that Kubernetes addresses much of the
complexity your startup _should_ be worrying about, but doesn't because "it is
not a feature".

Yes, it is massively complex. That is because Ops is massively complex.

------
enos_feedler
Kubernetes is the Android of the datacenter. Android OEMs are Samsung, HTC,
Huawei, etc. Kubernetes OEMs are Google, Amazon, Microsoft etc. With Android,
a large % of the consumer market gains access to an app ecosystem by choosing
Android. Similarly, businesses will _eventually_ gain access to an ecosystem
of B2B applications by adopting kubernetes. Kubernetes will be the OS for
business.

~~~
dfee
For a number of reasons, I think this metaphor fails.

K8s appears more open than Android, is not largely controlled by one
corporation, doesn’t come crippled with vendor services, etc.

~~~
enos_feedler
No metaphor is perfect. I highlighted where I thought it makes sense (in the
ecosystem sense of apps and modules being built on top of it).

------
hennsen
I basically agree that simple things should be simple and complex things
possible - and in my experience that's not yet the case with standard k8s.

And having no "end users" (app developers) at a conference about tools that
should serve exactly those people is an interesting observation worth
investigating further.

Having to install one more tool to get production-ready apps installed with
helm in one command is not asking too much, though.

Then, slightly unrelated, but it comes to mind:

I wonder if this happy-path thing works with Influx, where the author works.

Can I have a single simple command that installs everything I need to look at
logs from my app and DB server, see the most important performance stats and
HTTP/IP access logs, graphically as well as with notifications when certain
easily entered thresholds are met (with reasonable defaults like 80% or so for
CPU, IO, RAM and disk space)?

Can I do that with only the free open source tools, as the author expects from
the k8s ecosystem? Or do I only get it by buying Influx's professional
service?

So maybe it's the job of, and an opportunity for, commercial companies to
develop and sell such simplifying tools. At some point, developers' time to
develop all these things must be paid for. If millions of developers just use
the perfectly polished open source tools, and a high percentage don't even
help development with bug reports, let alone patches, what are the developers
going to live on while doing the polishing/simplifying?

~~~
pauldix
Author here. We have more work to do to make the happy path with the set of
Influx tools (the TICK stack) more turn key and easier to run. The entire
feature set is available in the open source versions. The thing we keep
commercial is HA and scale out clustering (either we operate for you on AWS or
you buy on-premise). Our work on 2.0 of the platform should make the happy
path much easier, but software is a process of continuous improvement. So I'd
expect 3.0 to be even better and so on.

~~~
hennsen
Thanks! Sounds great and I'll look into it for the next project/use case!

------
Pyxl101
Is Kubernetes really scalable to a meaningful extent? I feel like if I was
going to set up a Kubernetes system, I'd need to plan from the beginning to
have multiple clusters anyway, and then the utility of all the scheduling
features would be considerably diminished since I'd have to plan to load
balance applications across the clusters in some custom way. Yuck.

The Kubernetes website (1) currently claims that it supports clusters of up to
5000 nodes, which is a decent amount but not enough to avoid having multiple
clusters. Does anyone have experience operating multiple production clusters
in a single territory as partitions for scaling reasons? What's the experience
like?

(1) [https://kubernetes.io/docs/admin/cluster-large/](https://kubernetes.io/docs/admin/cluster-large/)

~~~
manigandham
5000 nodes has proven to be a high enough ceiling for basically all workloads,
especially given the size of individual servers available now in clouds.
Easier to run smaller numbers of bigger servers, as always.

Also Kubernetes does support federation for cross-cluster deployments (now
named multi-cluster). Some cloud services even support ingress load balancing
across these, or you can do that part yourself by simply sticking with the
same ports, but it all works fine today. Nothing custom needed.

------
gant
I think people misunderstand why Kubernetes exists. It is the reverse
OpenStack. Kubernetes has the potential to be the one unified API of the
cloud. A middleware for proprietary cloud APIs. A few resources, like load
balancers, are already at a point where you barely have to care about the
underlying cloud provider. With operators and aggregated API servers
(especially if they'll be offered as a service) provisioning resources could
follow one well-known standard. Calling it now: within the next 2 years, cloud
database providers like Compose will offer a way to CRUD resources via
CRD/apimachinery-compatible services. A few more years and we'll have a generic
YAML spec for these resources that works out of the box on multiple providers
(probably with a bunch of annotations that are vendor-specific).

------
mwcampbell
> available 99.5% of the time with decent alerting for operators to kick it

An operator should never have to "kick" a service. It should repair itself,
except for the occasional hardware replacement if one is working with bare
metal. And for anything that's being sold as a product, as opposed to an
internal tool, I think 99.9% availability should be the minimum.

But I don't know enough about Kubernetes to say whether it's overkill at the
scale of just a few servers.

~~~
smudgymcscmudge
> An operator should never have to "kick" a service.

Have you ever been a sysadmin? There are very few services that don’t need a
kick every now and then.

~~~
mwcampbell
Yes I have. And if a service ever needs a manual kick, I consider that a bug.
At least when running in a public cloud.

~~~
randallsquared
Sure, sure, but... everything is buggy, by that metric. I think that's what
the parent comment was getting at.

------
tootie
I think [https://draft.sh](https://draft.sh) is trying to address this but
it's still early in development.

------
jtwaleson
Heroku / Cloud Foundry offer exactly what the author points towards: a very
simple user interface for developers. InfluxDB of course needs stateful
applications, so it's not a good use case for them.

The Cloud Foundry community has started exploring a switch from their own
container management system to K8s. If that becomes real, CF would "just"
become a nice user interface on top of k8s. The right move imho.

~~~
troytop
The cloudfoundry-incubator/eirini project is where the CF+K8s scheduler work
is going on.

Related to this, SUSE and IBM have already released distributions of CF that
run on Kubernetes.

I'm biased (I work on one of these) but I really think this is the most
expedient way to enable PaaS features on K8s.

------
zzbzq
The way the author's 'scaffolding' idea should work is that you start by not
using Kubernetes at all, rather than using an easier version of it. Of course,
'not using it' already exists. But what doesn't exist is a sweet way to
transition.

In particular, you can start off with Azure AppService, AWS Elastic Beanstalk,
Google AppEngine. You could also go serverless. All these approaches allow
rapid development and deployment with low ops overhead, and they'll actually
scale and heal well. Ultimately, the services are doing the k8s type of stuff
for you. To state that inversely, running kubernetes is like trying to run
your own PaaS. (When put that way, it sounds dubious that so many people are
trying to jump into k8s, but I'm not an expert on the $$ economics of devops.)

The next-gen evolution of the cloud platforms could really take this migration
from PaaS to IaaS to a whole new level beyond where it is right now.

------
spockz
Firstly, I had the same experience at KubeCon this year: most people I
encountered were ops engineers or infra engineers. Maybe the application
developers were there but they were less vocal. I can also imagine the
subjects being rather specific and deep for your average application
developer.

Secondly, aren’t solutions like Helm supposed to take away the need for
scaffolding? The problem with scaffolding is staying up to date with the new
templates and usually results in the deployments not being updated anymore.

Additionally, I have to say that getting started with K8s was quite easy
because we already had experience with Docker. OpenShift was similar and has
source2build, which is very convenient. So I don't perceive K8s to be hard to
start using. To use it 'correctly' and to its full potential, yes, that is
harder, but that holds for any product.

------
flor1s
Isn't one of the appeals of Kubernetes to have a portable cloud environment,
which means I can easily switch between Windows Azure, Google Cloud, Amazon
Web Services, on premise, and even Minikube (localhost)? Is there any simpler
alternative for that?

~~~
bdcravens
For many use cases, I'd imagine an abstraction atop of Kubernetes (like
OpenShift or Rancher) is a good fit.

------
jgr447
Not having used it in any meaningful way, after a lot of reading I am
sometimes still unclear on the value proposition.

The value imho could be in being able to package distributed applications and
deploy across cloud providers or on prem, seamlessly.

I don't think this is true, though, short of putting a lot of effort into
abstracting access to a gcp/aws/azure managed service (say, a db), which is
probably a bad idea.

If you take that away, then a lot of the replication, autoscaling, load
balancing, failover etc. can be implemented using cloud providers without
having to manage the complexity of k8s.

Hope to be proven wrong here.

------
cmorgan8506
As an app developer, I can vouch for finding k8s to feel frustratingly complex.

My current client work has recently shifted to using k8s. I took the time to
get minikube working locally to get a better understanding. It definitely
helped, but I find the layers of abstraction hard to grok after not working
with it for a while.

I can see the value the tool offers, but I get the feeling it's supposed to be
reserved for higher degrees of scale than the average 2-8 node app.

Black box is how I feel about it sometimes. Hopefully I'll get more one on one
time with it in the future. It seems like a really cool technology.

------
superzadeh
This is exactly one of the reason we picked kontena.io in our startup, and
never regretted it. Also super excited about their approach to run on top of
k8 with pharos.sh.

------
gtirloni
TL;DR: Kubernetes needs to continue to focus on the developer experience, but
it's good enough for InfluxData's new cloud offering.

The project has been listening:
[https://github.com/kubernetes/community/blob/master/sig-apps...](https://github.com/kubernetes/community/blob/master/sig-apps/README.md)

~~~
hueving
Forming a SIG is just paying lip service. It remains to be seen if anything
will really change or if kubernetes will suffocate itself with its own bloat.

------
stormbeard
> Scaffold generators for common elements would be great. Need a MySQL
> database? Having a command like [...] to create a stateful set, service and
> everything else necessary would go a long way. Then just a single command to
> deploy the scaffold into your k8s environment and you’d have a production-
> worthy database in just a few console commands. The same could be created
> for the popular application frameworks, message brokers, and anything else
> we can think of.

Rook sort of does this. You deploy a Rook operator, then just one other
kubectl command to get an object store, database, shared filesystem, etc...

[https://blog.rook.io/rooks-framework-for-cloud-native-storag...](https://blog.rook.io/rooks-framework-for-cloud-native-storage-orchestration-c66278014df7)

------
hardwaresofton
This blog post starts under a false premise. Kubernetes is not for app
developers, it is the substrate on which applications, databases, and other
workloads run. Just like you wouldn't want an application developer SSHing
into machines in production (assuming you have ops people), you don't want
them to use kubernetes, except kubernetes has done one better -- it's
abstracted so well (especially with the introduction and widespread use of
Custom Resource Definitions AKA CRDs) that you _can_ let them write resource
definitions, which are declarative representations of the resources they will
need for their application, and run those.

Coming from someone who gave a talk at Kubecon I'm very surprised to read
something like this. Maybe I'm the one with the misunderstanding, but I'm
going to try and refute the things this article said/is implying.

1\. Kubernetes is complex

This is kind of right, but it's also kind of not -- Kubernetes is
_essentially_ complex, given that it encourages write-once solutions to all
the problems it faces. Here are the pieces that make a basic Kubernetes
"cluster":

\- apiserver => you send commands to this to change/query cluster state

\- controller-manager => works to ensure that the cluster is in the state you
want it to be (making workloads replicate/restart/etc)

\- scheduler => figures out where to put workloads

\- kubelet => runs containers -- one on each node that can do work

\- kube-proxy => maintains the routing infrastructure necessary to enable
containers on any node to hit a container on another one.

All of those pieces are needed -- the only concession I would make is that
they could all be in the same daemon (one executable), but that's actually
worse at scale, and harder to debug -- all of these services can produce a lot
of logs.
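
If you want to poke at those pieces on a typical kubeadm-style install, most of them are visible as ordinary pods (exact names vary by distribution):

    # control-plane components usually show up as (static) pods in kube-system
    kubectl get pods -n kube-system
    # the kubelet itself runs as a system service on each node
    systemctl status kubelet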

2\. Application developers can't use kubernetes as it is

Application developers can use kubernetes as it is. Learning to write a
kubernetes resource definition is not any harder than figuring out the
conventions and configuration you have to write for Heroku, or AWS
ElasticBeanstalk, or AWS ECS. In fact, I would argue that it's simpler.

We've touched on another problem here -- the competitor for kubernetes is not
SSH, it's not heroku -- it's tools like CloudFormation/ECS. I don't know if
you've used CloudFormation, but it's kind of a clusterfuck, hard to set up quite
right, and the dynamic yaml approach they've taken is enough rope for one
clever developer to hang you and your whole team with.

Bold prediction, but I think AWS is going to abandon CloudFormation and ECS in
favor of Kubernetes resources once it stabilizes.

OK, let's say you disagreed with everything I've said up until this point --
at the very least, you can deploy tools like the following to your kubernetes
cluster:

[https://gitkube.sh](https://gitkube.sh) => heroku workflow

[https://helm.sh](https://helm.sh) => cloud-formation/elastic-beanstalk
workflow (with kubernetes primitives)

And presto, you have a completely different interface to your cluster, WITHOUT
changing anything fundamental underneath.

3\. Developers who only focus on the application-level are the goal

Why would you even want this? Not only is it basically impossible to hide the
underlying infrastructure so well that the application developer doesn't have
to know about it, it's arguably not even a good idea.

Take session management -- if you want to handle it in the context of more
than one frontend running at a time, you generally outsource that state to a
cache like redis. An application developer who grew up in this imaginary world
where app developers never touch infrastructure is not who I want solving this
issue, assuming there isn't a qualified ops person. If you needed to optimize
even further, app-local caches could be deployed, but this requires knowledge
of "sticky sessions" -- this is very much a deployment/infrastructure-specific
question; again, that app-only developer is just about useless here.

I'm no hiring manager but the desire to stay an "application" developer who
only worries about that part of the stack when the "application" as a whole is
so much more would be a red flag for me. Even if you were delivering a desktop
application, the developer who worries about underlying OS-specific
enhancements (for example knowing how to optimize the app for MacOS) is the
one I want, the one I want to pay the big bucks for.

4\. It's hard to deploy the usual app+backing store+caching+worker pool
structure

The author touches on this a little bit with the "maybe operators and helm
charts solve this", and that's exactly what the operator pattern (Custom
Resource Definitions, AKA CRDs, plus custom controllers) was meant to solve
-- now you can actually give declarative specifications of what you want your
Redis/Postgres/Celery/whatever cluster to look like, and `kubectl apply`, and
the platform handles it. There's arguably no difference here between how you'd
use this and a tool like heroku.
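
As a sketch of what that looks like for the app developer (this is a made-up resource kind, not any real operator's schema):

    kubectl apply -f - <<'EOF'
    apiVersion: example.com/v1alpha1
    kind: RedisCluster        # hypothetical CRD installed by an operator
    metadata:
      name: session-cache
    spec:
      replicas: 3
      version: "4.0"
      persistence:
        size: 10Gi
    EOF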

Also, for the record you can trivially extend `kubectl`:
[https://kubernetes.io/docs/tasks/extend-kubectl/kubectl-plug...](https://kubernetes.io/docs/tasks/extend-kubectl/kubectl-plugins/)

~~~
pcthrowaway
> "the only concession I would make is that they could all be in the same
> daemon (one executable), but that's actually worse at scale, and harder to
> debug -- all of these services can produce a lot of logs."

This is already a thing, sort of, called hyperkube:
[https://github.com/kubernetes/kubernetes/blob/master/cluster...](https://github.com/kubernetes/kubernetes/blob/master/cluster/images/hyperkube/README.md)

The caveat is that each daemon has to be started separately still

~~~
hardwaresofton
hyperkube was what I used when I got my first cluster up and running (this was
when CoreOS still hosted Kubernetes bare-metal setup guides instead of just
pointing everyone to Tectonic); I remember it fondly :)

I didn't include it due to that caveat, and the general feeling that the
processes really are meant to be started separately. I haven't seen anyone try
to run these processes with some sort of supervisor, but it seems like that's
not "the way" and wouldn't really offer any benefits.

------
yeukhon
One major reason OpenStack is a messy project today is the number of companies
involved in the foundation early on. It was a big problem when it was Red Hat
vs HP vs Rackspace vs XYZ.

------
austinshea
These articles seem common, recently.

I don’t really understand what this sentiment is about.

It’s a really useful orchestrator, in some cases. In other circumstances, it’s
unnecessary complexity.

------
nicodjimenez
The funny thing about Kubernetes is that I read somewhere that it's not
actually widely used within Google (please correct me if I'm wrong). Given
that it's a complicated piece of software and that Google has extremely
complicated requirements, that's pretty concerning. To me, we've barely
started to solve deployments, and the idea of Kubernetes seems a little ahead
of its time. If you're using GCP I do hear it's really great. But then, what's
the huge benefit of Kubernetes in the first place? A tiny bit less vendor
lock-in?

~~~
atombender
Google uses Borg internally, and Kubernetes is really their third container
orchestration system. After Borg came Omega, which was never deployed, but
ended up being a test bed for a lot of innovations that were folded back into
Borg. But Borg is a decade old and has a lot of warts (according to its
designers), and with Kubernetes they aimed to learn from their mistakes and
improve on the design.

As far as I can tell, the aim with Kubernetes was never to replace Borg at
Google -- Google is far too invested in Borg, and it would take a considerable
engineering effort to migrate away from it. Rather, developers at Google saw
an opportunity to create an open source version based on what they had learned
and help the world along in adopting the same engineering principles as Google
has long practiced. Not all altruistic notions, of course -- Google benefits
from the commoditization of containers indirectly, by undermining competitors
such as AWS (where containers are still not well-supported) and making their
own cloud the best fit for Kubernetes.

Google does run stuff on Kubernetes, via GKE. As I understand it, new products
are encouraged to run on GCP. I don't know how many applications they run,
however. Maybe someone from Google can comment.

~~~
erikb
The devs totally forgot that in a normal environment you don't have the 10.000
other internal Google tools, though.

------
mancerayder
This is my personal experience on the matter as a DevOps consultant who
periodically interviews in the traditional manner (as opposed to getting gigs
from people that already worked with me).

Despite having 15+ years of *nix experience, including internals, having a
track record of building large scalable infra and knowing a few different
programming languages, what happened to me was this: I was getting filtered
out because I didn't have Docker and Kubernetes and even (at one point)
Cloudformation and/or Terraform. No problem - I learned those things (minus
Kubernetes, so far) quite quickly. Much more quickly than the grueling trial-
by-fire years of Unix administration. I like to know how things work, not just
how to use them.

So if you wonder and worry about the state of enterprise IT some days, look no
further than hiring managers themselves, who will pick a 25 year-old who
writes YAML for some abstraction-of-an-abstraction system that does
infrastructure under the hood, infrastructure that people kind of don't really
try very hard to understand. After all, it's disposable thanks to
infrastructure-as-code, right?

How do I know this? Well, I've seen shop after shop that's suffered a
spaghetti infrastructure, using all the latest and greatest, from AWS and
Kubernetes and Docker and other abstraction layers above AWS. And what happens
is that it gets so complex that no one knows what's really going on, and at
the very least two common symptoms arise: people are terrified during releases
and they take hours, with many people on a call together very late at night;
they spend a fortune on extra instances (in the case of AWS) because they
haven't properly worked out environment separations (they had trouble keeping
them the same, or one of many other problems).

A talented dev manager I used to work with used to complain that they had
trouble hiring people who knew Javascript well, but they had expert after
expert in some fancy JS framework come in to interview, unable to answer the
fundamentals-type questions. I think it's similar with enterprise
infrastructure.

I don't know what to say. I hope things go full swing and people who know how
things work under the hood can charge consulting dollars for fixing the
fuckups. It's not enough to know YAML, you also need to have wisdom in
maintaining complex infrastructure, understand the delicate balance between
change and stability, and be able to troubleshoot when it goes wrong WITHOUT
just 'rinsing and repeating' where you learn absolutely no lessons at all.

[edit:] One theory for all of this: some of the big shops (Google, FB,
Netflix, etc.) did it right, and now everyone is trying to copy that style of
infrastructure management, except they don't have the talent or wisdom to do
it well.

------
bitL
Kubernetes looks to me like one of those prototypical technologies where LEGO-
style use of Deep Learning to help set it up for whatever scenario is needed
would already be doable and beneficial. I am wondering if Google is working on
it already.

------
nimish
Kubernetes is a classic case of a tool designed for consultants and companies
to sell consulting services (including cloud services, which is why every
cloud provider leapt onto it).

In like 90% of the cases where someone used Kubernetes, Docker Swarm would have
easily sufficed.

~~~
chrisweekly
Docker Swarm is a hot mess.

~~~
tuananh
Isn't Docker Swarm declared dead by the Docker team?

~~~
BretFisher
nope [https://www.bretfisher.com/is-swarm-dead-answered-by-a-docke...](https://www.bretfisher.com/is-swarm-dead-answered-by-a-docker-captain/)

------
packetized
tl;dr: author believes “No”.

Betteridge’s Law still applies.

~~~
baxtr
Copied from Wikipedia because I didn’t know:

_Betteridge's law of headlines is an adage that states: "Any headline that
ends in a question mark can be answered by the word no." It is named after Ian
Betteridge, a British technology journalist, although the principle is much
older. As with similar "laws" (e.g., Murphy's law), it is intended to be
humorous rather than the literal truth._

