
The Kubernetes Kustomize KEP Kerfuffle - gk1
https://gravitational.com/blog/kubernetes-kustomize-kep-kerfuffle/
======
mfer
Since I was at the core of what the article wrote about I feel the need to
jump in. Some of the quotes are from me, after all.

kustomize vs Helm is a bit of an improper comparison. Kustomize and the ways
to use kubectl are often about configuration management, while Helm is about
package management. This is an important difference.

For example, consider Ansible doing configuration management while using Helm
to install a package, much the same way you might use an apt package elsewhere.

The key question around configuration management is: should kubectl provide
configuration management of things outside of the Kubernetes API, such as when
you manage the config in git? This is both about scope (is this a Kubernetes
concern?) and about how to build things (a monolithic CLI?).

If kubectl has a configuration management opinion outside of the Kubernetes
API, what does that mean for configuration management projects like Ansible,
Armada, Chef, and the oh so many others? And who working on Kubernetes gets
to choose the opinion on configuration management it has outside the API?

At this point imperative vs declarative comes up as a conversation point. In
all cases the configuration is declared to the Kubernetes API and stored by
Kubernetes in a declarative manner. It becomes a question of how far outside
the Kubernetes API things need to remain declarative. After all, me typing
YAML at a keyboard is imperative.

As for the KEP process as a whole and the graduation criteria that were
mentioned here, we know they need work. There are efforts under way to get
the process into better shape with better example guidance. There is actually
a PR being worked on for this right now.

Projects with a healthy ecosystem and people of strong opinions have to deal
with issues of scope. What do you hold onto and where should it go? How do you
curate a healthy ecosystem? It's not always easy.

Having a place to do some experiments outside of a project like Kubernetes is
part of the reason the CNCF added sandbox projects. Without being forceful
people can work together on something and see how it fares in the market.
Experiments in the ecosystem can happen.

In any case, if people have questions please feel free to ask.

~~~
marcc
Helm is a good package management tool for application-level configuration
items. The set of configuration options that a cluster operator might want
to change is fixed. The application packager knows what they may be and can
template them into environment variables or write them to configmaps. It
doesn't matter how they are delivered; the values.yaml is a great place to
collect this data, and Helm manages it nicely.
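A minimal sketch of that pattern, with made-up chart values (not from any
real chart): the packager decides up front which knobs exist, and values.yaml
is where the operator sets them.

```yaml
# values.yaml -- hypothetical defaults the packager chose to expose
image:
  repository: nginx
  tag: "1.25"
resources:
  limits:
    cpu: 500m
    memory: 256Mi

# templates/deployment.yaml (excerpt) -- only the templated fields are
# configurable; everything else is fixed by the packager:
#
#   image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
#   resources:
#     {{- toYaml .Values.resources | nindent 12 }}
```

Anything the packager didn't template simply isn't configurable through
values.yaml, which is the limitation the next paragraph gets at.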

However, when it comes to cluster operators, or "runtime configuration", the
application packager cannot possibly know all possible configuration changes
that a cluster operator might want to make. Some are obvious, and Helm
encourages putting them into values.yaml. For example, most charts include a
way to change resource limits or imagePullPolicy. But what if a cluster
operator wants a config map to be deployed as a Bitnami SealedSecret instead,
because it's more secure and more compatible with the infrastructure? Or what
if a cluster operator wants to deploy the Cloudflare Argo Tunnel ingress
controller, which requires more: not only an annotation, but also additional
fields? This cluster operator often ends up forking the Helm chart or
deploying these parts out-of-band. Differentiating between runtime config and
application config is why Kustomize and Helm are both useful, together.
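As a rough sketch, the runtime side can be expressed as a Kustomize overlay
on top of whatever the package produced, without forking it (the file,
resource, and annotation names below are made up for illustration):

```yaml
# kustomization.yaml -- hypothetical operator overlay
resources:
  - rendered-chart.yaml          # manifests produced by the upstream package
patches:
  - target:
      kind: Ingress
      name: my-app
    patch: |-
      # JSON6902 patch adding an annotation the chart never anticipated
      # ("~1" escapes the "/" in the annotation key, per JSON Pointer)
      - op: add
        path: /metadata/annotations/kubernetes.io~1ingress.class
        value: argo-tunnel
```

The upstream chart stays untouched; the operator-specific change lives
entirely in the overlay.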

I like Helm as a package manager. But to keep everyone using the upstream Helm
chart, wouldn't all Helm charts eventually have to look the same, and just be
a templated version of the entire Kubernetes API with all CRDs included?

I like the sandbox projects and how the CNCF is managing them. I'm actively
working on a different project (Open Policy Agent), and it's much younger than
Helm or Kubectl. I agree with your concerns about the KEP process, and that
it should be better defined and followed at all times.

~~~
mfer
Ya know what's great about debating tools? Because we have a separation,
different things can innovate at different rates and people can do different
things.

You can move from deploying to AWS to Kubernetes all while your CI pipeline
and config management stay the same. Maybe at another time you change your CI
system.

In all of this we can even have tool fights. Jenkins vs CircleCI. vim vs
emacs. And new players can come along like vscode.

Different departments in the same companies, sometimes billion dollar
companies, can even do this.

This is one of the reasons I personally like a separation of concerns with
projects.

There is one thing about Helm...

> But to keep everyone using the upstream Helm chart, wouldn't all Helm charts
> eventually have to look the same, and just be a templated version of the
> entire Kubernetes API with all CRDs included?

I don't think so. Without being long-winded, this is about user experience and
encapsulation. Kubernetes is hard to learn. Then you have to learn the
business logic for installing all the things you use. To use the old
WordPress example, should someone installing WordPress into a cluster need to
know the business needs of MySQL to create the k8s config for it? Most app
devs don't want to learn all the k8s object configs... it's a lot to learn.

As for CRDs, that's about dependency management.

But if we make Kubernetes really hard, with lots of CRDs, we have to start
dealing with which CRDs are in which clusters. And what experience does that
provide for end users? How do we keep the UX barrier to entry from getting
too high for the people who need to use it?

Are users interested in Kubernetes or are they interested in their apps and
core business logic?

~~~
marcc
> people can do different things

That's one of my favorite things about the composability of the Kubernetes
tools. I don't believe there's a reason for a "tool fight" in the Kubernetes
ecosystem. There might be disagreements about best practices, for example how
to manage code in a cluster (gitops vs kubectl apply vs helm install vs ...),
but that's not a tool fight as much as a methodology difference. And there's
room for more than one pattern to emerge.

I think that any tool that is able to deploy an open source project to my
cluster, while minimizing the amount of operational overhead I need to assume,
is the tool that I want. I don't care if that is Kustomize, Helm, Ksonnet, or
anything else, as long as it meets the requirements that a) it has to work
with my environment and b) it shouldn't introduce unnecessary operational
overhead.

You also mention that Kubernetes is hard to learn, which is absolutely a
problem. Adoption is growing, but it's getting harder to learn as more
features get merged in. And you are right, nobody should have to learn
Kubernetes YAML to deploy a standard, off-the-shelf WordPress installation.
But what about more complicated software that needs "last mile" customization
done to work in a specific environment? This blog post
([https://testingclouds.wordpress.com/2018/07/20/844/](https://testingclouds.wordpress.com/2018/07/20/844/))
shows a great way to combine the power of the Helm community and chart format
with the last-mile tooling that Kustomize can provide to help keep charts
simple but still flexible. That feels much better than forking the chart and
maintaining a separate copy of it just to make a few changes that are specific
to a single use case.

I have mixed feelings about Kustomize getting merged into kubectl. I don't
like the idea of Google "crowning" a winner, and I hope the sig-architecture
group and Google teams remain diligent to prevent that from happening.
Kustomize is not a replacement for Helm, it's a very good tool to handle
specific use cases that often involve Helm charts at the source.

------
jacques_chester
I've been on both sides of this sort of divide, as both insider and outsider.

It's easy to see it as politics. I do, pretty much all the time, but that's
because I have a background in politics. I'm primed to assume the worst.

I think the explanation is simpler. If you are a Googler, you mostly talk to
Googlers, you mostly come to quick consensus that way, and it all makes sense
to you and the Googlers you had casual whiteboarding sessions with. So you
just go ahead with the to-you-obviously-best decision and get surprised when
non-Googlers are mad at being presented with a _fait accompli_.

There is no ill-will and no conspiracy required. Some of the personnel
allocations are driven by corporate strategy, but folks engineering on the
line are all reasonable human beings. It's just simple personal dynamics. You
talk mostly to the people closest to you unless sustained effort is made to
keep stuff in the open.

~~~
justicezyx
You missed the key part.

This is why it was open sourced, as a balance between control leverage and
adoption.

~~~
gouggoug
> This is why it was open sourced, as a balance between control leverage and
> adoption.

I feel like I read this kind of statement all the time and I don't understand
the reasoning behind it.

Google came up with k8s. They could have kept it to themselves or
open-sourced it. They open-sourced it. It definitely benefits them, but it
also benefits us a lot.

Saying they "open sourced, as a balance between control leverage and adoption"
is so cynical. So I'd like to ask, what's the alternative? Not open-sourcing?
Who would that benefit?

What would be the non-ill-intentioned alternative that Google supposedly could
have chosen?

~~~
mfer
Why did Google create Kubernetes and open source it? Why did the people who
created Kubernetes all leave Google? What was the business strategy behind it
all?

After all, they created Omega but it never did replace Borg. Kubernetes wasn't
meant to replace Borg, was it?

Google didn't create and release Kubernetes out of the goodness of their
hearts. Have you ever pondered the strategy behind it all? Maybe they couldn't
have kept it to themselves because that would have defeated the underlying
purpose in the first place?

~~~
thockingoog
> Why did the people who created Kubernetes all leave Google?

Umm, we didn't?

> created Omega but it never did replace Borg

It did have material impact on Borg.

Sometimes the strategy is simply "we know how to do this, and we'd prefer not
to see you go through all the same pain to figure it out".

~~~
aberoham
Timmy!! When will knative and istio be handed over to a foundation? Does
LF/CNCF have enough carrots in Jim Zemlin's bag of tricks??

'tis fascinating to dig into open source serverless-on-Kubernetes options
only to realize that there are probably 100+ engineers working on
istio+knative, ~10x more than on any prior alternative.

When is KnativeCon coming?

~~~
jacques_chester
I don't think he's involved very closely.

Since folks get so worked up about allocations unofficially shaping decisions,
I suggest[^] that Knative be homed with the Cloud Foundry Foundation, where
the rules are written to give votes based on how many fulltime contributors
you assign. That way it's all out in the open.

Disclosure: I work for Pivotal, which is in both the CFF and CNCF. I was aware
of Knative relatively early.

[^] in a ha-ha-only-serious way

------
grantlmiller
Kustomize is definitely a huge step forward for configuration management, more
info on Kustomize here: www.kustomize.io

~~~
etxm
Kustomize is a great tool. I feel like it’s so much easier to reason about
environments. I’ve seen some terrifying helm templates.

------
aberoham
Author here. This was an extremely hard topic to write about. Happy to answer
questions but really hoping those from sig-cli and sig-architecture chime in
here. Is this Kerfuffle the root of the sudden resurgence of KEPs?

~~~
the_duke
Your writing style is hilarious, bordering on poetic.

It is somewhat hard to follow the actual information though, and non-native
speakers will probably struggle quite a bit.

~~~
qlk1123
Non-native reader/kubernetes learner here. I know only keywords from this
article. It's a pity that I cannot get the full meaning of it, but it still
provides a chance for me to learn some upstream news.

Thanks for speaking out for us!

~~~
aberoham
Hi! If you're up for it, I'd love to do a recorded video/voice call to listen
to your feedback and perhaps even try to fill in the topic areas that are
non-obvious to learners. Talking to folks about 'nettes is the highlight of
my day. Send a note to the email in my profile?

------
einstand
For me -- as I am an outsider -- it's hard to understand all the
non-technical arguments and social dynamics.

From a technical point of view, I really like the composition-based/
declarative approach of Kustomize.

I am testing different types of big data clusters in k8s (Hadoop, HDFS,
Spark, HBase, Hive, Ozone, etc.). I need a method to easily create/install
different types of clusters with a set of predefined components. Sometimes I
need Spark, sometimes not. Sometimes I need HDFS HA, sometimes not. With or
without Kerberos.

With Helm charts it's almost impossible: all the charts would be full of
different conditional branches, which are hard to maintain (and it's hard to
handle dependencies: for example, if both the HDFS and YARN clusters would
like to add something to the core-site.xml configuration).
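The composition style being contrasted with Helm's conditionals can be
sketched as a hypothetical Kustomize-style layout (all directory names here
are made up): each optional component lives in its own base, and a cluster
flavor is just the list of pieces it composes.

```yaml
# overlays/spark-ha-kerberos/kustomization.yaml (hypothetical)
# A cluster flavor is assembled by listing components, not by
# threading if/else branches through one giant chart.
resources:
  - ../../base/hdfs
  - ../../base/hdfs-ha         # include only when HA is needed
  - ../../base/yarn
  - ../../base/spark           # omit for clusters without Spark
components:
  - ../../components/kerberos  # opt-in security layer
```

Adding or removing a component is one line in the flavor, rather than a new
conditional in every affected template.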

The composition-based approach works better for my use-case, but I found
Kustomize very difficult to use. It has strict opinions/limitations about how
things should be defined and how they should work [1], which are very smart,
but sometimes not right for my use-case. It also requires very verbose
configuration.

Finally I just created my own tool, which is more flexible (shameless plug,
it's here: [2]), can do anything with the help of composition, and is easier
to use.

I respect the Kustomize developers and their work, and I am very happy that
it exists. (And I know that it's way more mature/smart than my tiny tool.)

I am just not sure if it's possible to cover all the use-cases (or the
majority of them) with one existing tool (neither with Helm nor with
Kustomize). Therefore I would prefer to keep all these tools in separate
repos. Personally I would prefer to have more tools like this with new ideas,
and a better and better (and more flexible) Kustomize.

But this is just my personal preference.

[1]: [https://github.com/kubernetes-sigs/kustomize/blob/master/docs/eschewedFeatures.md](https://github.com/kubernetes-sigs/kustomize/blob/master/docs/eschewedFeatures.md)

[2]: [http://github.com/elek/flekszible](http://github.com/elek/flekszible)

------
shaklee3
This is an awesome article, not just for the content; the writing style is
great too. There's a ton of information packed into links throughout, and
even if you're not interested in the topic, it's still fairly humorous.

------
warp_factor
If I understood the article correctly, the claim is that Google can introduce
features more easily into Kubernetes just because they are Google?

And I would answer: why are you surprised? I worked a bit with Kubernetes and
it's mostly dominated by Google, or by a top-level set of contributors who
are all friends and think alike. This is also probably the reason why Google
released Kubernetes as open source. It allows them to indirectly control the
project.

