
Show HN: konfd – Manage Kubernetes secrets and configmaps with Go templates - kelseyhightower
https://github.com/kelseyhightower/konfd
======
gabrtv
More great stuff from Kelsey...

My tldr; konfd writes out k8s configmaps based on other k8s resources like
secrets, configmaps, etc. Really useful for writing out complete config files
into a pod namespace without relying on external config backends.
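For readers unfamiliar with the mechanics, the core transformation is just Go's text/template applied to key/value data. A minimal standalone sketch (the template text and keys here are made up for illustration, not konfd's actual format):

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// renderConfig fills a Go template with key/value data, roughly the
// transformation konfd performs when turning secrets/configmaps into a
// rendered config file.
func renderConfig(tmpl string, data map[string]string) (string, error) {
	t, err := template.New("config").Parse(tmpl)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := t.Execute(&buf, data); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	tmpl := "host = {{.host}}\nport = {{.port}}\n"
	out, err := renderConfig(tmpl, map[string]string{"host": "db.internal", "port": "5432"})
	if err != nil {
		panic(err)
	}
	fmt.Print(out)
}
```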

Question: While looping on a syncInterval is certainly clean and
understandable, it feels suboptimal when all the template sources are
themselves watchable with a k8s client. Benefits of switching to a watch
model:

1. Speed: It'd be nice to have the template rendering fire immediately after
a source secret changes, versus waiting for syncInterval.

2. Resource Utilization: Switching from syncInterval to a watch should save
significant cycles by avoiding reprocessing templates when config hasn't
changed.

~~~
kelseyhightower
This is great feedback. I'm still on the fence about a watch mode for the
following reasons:

1. The template configmap will not change as often as the secrets and
configmap key/value pairs it references. This means you'll always need a sync
loop to regenerate the template and compare it with the current generated
version. This is how most configuration management systems work. Maybe we can
watch the data and use changes as triggers?

2. Watch is a bit racy when a template references multiple secrets and
configmaps. It's really hard to prevent a partial config. One idea is to scale
the konfd replicaset to 0 before changing a bunch of secrets and configmaps,
then scale konfd back to 1 to trigger the processing of the templates so they
pick up the new values. The other option is to wait out the syncInterval, but
then you run the risk of partial configs or needing multiple runs to generate
the complete config.

One possible way to avoid both issues is to reference only one external
configmap or secret per template resource. I'm still testing these patterns in
real life and will update the docs on which patterns work best for each
situation. With that said, I think I'll still add a watch mode so others can
test and provide feedback.

~~~
andrewstuart2
The second one has plagued me in a couple of Kubernetes clients I've written,
but something about your restatement of the problem suggested (IMO, of
course) a nice idiomatic solution: a "debounce" fan-in (ish) channel between
the ResultChan() and the consumer that bundles up watch notifications over
some configurable wait period. And actually, it's a common enough use case
(I'd definitely use it too) that I'll bet it would be useful to have in
client-go itself.

And secondly, I would _love_ to see this kind of concept upgraded to a first-
class citizen in kubernetes. Essentially, I think an equivalent of
`/etc/environment` for your cluster would be a powerful way of generalizing
app components. It would be great if just placing resources in your cluster
allowed them to pick up at least some of their required configuration, much
like customizing `/etc/environment` does for *nix machines.

------
otterley
Kelsey, I (and others) would kindly appreciate it if you'd stop calling
anything in Kubernetes as it exists today a "secret" and writing new code to
further encourage its use. People should be aware in no uncertain terms that
the "secrets" store in etcd is totally unencrypted and insecure.

If you'd like to find a place to help, I'd suggest focusing your efforts on
connecting Kubernetes to Hashicorp Vault, which is truly secure, and
deprecating the old unencrypted etcd-backed implementation.

~~~
kelseyhightower
First, thanks for taking the time to offer feedback regarding what we call
"secrets" in Kubernetes. As you've rightly pointed out, the current
implementation stores "secrets" unencrypted in etcd; something like
HashiCorp's Vault [1] would be a much better choice for that use case.

I've already begun exploring how an integration between Vault and Kubernetes
would work [2]. There are some things to work out, but those discussions are
under way [3]. The current prototype/example demonstrates one way of
leveraging Vault from applications deployed on Kubernetes, without changes to
the Kubernetes core.

While the vault-controller [2] works, we can do better. One idea is to
consider what a deeper Vault integration looks like. Ideally we can modify
parts of Kubernetes, mainly the kubelet agent that runs on every node, to
handle the secure generation and renewal of unique Vault tokens for each Pod
during the Pod creation phase.

Another idea would be to rethink "secrets" altogether and rebuild the entire
feature on top of the Vault API. We can "hide" Vault under the current
"secrets" API, which would let us remain backwards compatible. That would be
phase one. Phase two could introduce new Secret extensions, which would enable
users to declaratively manage Vault tokens, backends, and secret renewals
through the Kubernetes API.

[1] [https://www.vaultproject.io](https://www.vaultproject.io)

[2] [https://github.com/kelseyhightower/vault-controller](https://github.com/kelseyhightower/vault-controller)

[3] [https://github.com/kubernetes/kubernetes/issues/10439#issuecomment-263954184](https://github.com/kubernetes/kubernetes/issues/10439#issuecomment-263954184)

~~~
otterley
I appreciate your efforts here, Kelsey. What I'm looking for (and I realize
I'm not your manager; this is just my opinion borne of a year of experience
with K8S) is a focus on strengthening the core first, then going back to the
veneer. Great edifices start with a strong foundation.

Also, the engineering teams need to do a better job of integrating community
PRs that address implementor demands. For example, Consul K/V support has been
asked for since 2014 [1], yet nobody seems to be in a hurry to integrate the
(generously-provided) functionality [2].

Finally, I'd like to see deadlines and decision-makers appointed for making
decisions such as the one you discuss, so we can avoid endless debates and
make forward progress quickly.

[1]
[https://github.com/kubernetes/kubernetes/issues/1957](https://github.com/kubernetes/kubernetes/issues/1957)

[2]
[https://github.com/kubernetes/kubernetes/pull/31622](https://github.com/kubernetes/kubernetes/pull/31622)

~~~
kelseyhightower
We all want a strong foundation and will continue to work to create one. In
parallel, many of us like building and sharing things, and given that most
things in Kubernetes are just abstractions, the things we are building today
will continue to work tomorrow, even if the "secrets" backend is reimplemented
on top of something like Vault.

I'm hopeful people find things like konfd useful, even if they only use
configmaps and avoid secrets entirely -- a use case supported by konfd.

~~~
Terretta
Also consider Consul/Vault so you can leverage the remarkably uncoupled
portability of envconsul:
[https://github.com/hashicorp/envconsul](https://github.com/hashicorp/envconsul)

// PS. To help avoid confusion, I wish K8s (not really) secrets examples
didn't use the word "vault" unless using, you know, "vault".

------
fcantournet
Hi Kelsey, this is pretty nice! Is there any way to bribe you into slowing
down on the awesome sauce delivery pipeline so people can catch up!? Asking
for a friend.

If you find the time, can you elaborate on why you elected to use ConfigMaps
for the templates too, instead of building a third-party resource? That way
you wouldn't have to use annotations, and the definition might be a little
more terse (or not).

Is there something about ConfigMaps that makes implementation easier, or
provides additional behavior with respect to pod lifecycle, or something like
that?

~~~
kelseyhightower
ConfigMaps offer everything I need for a tool like konfd, and ConfigMaps work
with the majority of clusters deployed today. While I could have used a
ThirdPartyResource, that would require me to design a custom schema and write
a little more code up front.

Also, there are a few issues [1] with ThirdPartyResource objects, including
the lack of validation and inconsistencies when interacting with
ThirdPartyResource objects through the Kubernetes API [2]. That's not to say
ThirdPartyResources should not be used, but it's not a decision to take
lightly.

Now, if konfd were to grow or add more features, then I think a
ThirdPartyResource would be the way to go -- mainly because ConfigMaps require
the use of "reserved" keys and annotations to configure behavior.

[1]
[https://github.com/kubernetes/kubernetes/issues/22768](https://github.com/kubernetes/kubernetes/issues/22768)

[2]
[https://github.com/kubernetes/kubernetes/issues/29542](https://github.com/kubernetes/kubernetes/issues/29542)

~~~
fcantournet
That answers my question. Thank you for taking the time. :)

------
kozikow
I propose an alternative to templates: use Go objects to define your config.
You get better dynamism, readability, and type safety. See my post
[https://kozikow.com/2016/09/02/using-go-to-autogenerate-kubernetes-configs/](https://kozikow.com/2016/09/02/using-go-to-autogenerate-kubernetes-configs/).
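For anyone curious what that looks like without pulling in client-go, here's a stdlib-only sketch of the idea; the struct definitions below are hand-rolled stand-ins for the real Kubernetes API types, and the config values are made up:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Simplified stand-ins for the Kubernetes API types. With real client-go
// types the approach is the same: configs are plain Go values, so you get
// type checking, loops, and helper functions instead of text templating.
type ObjectMeta struct {
	Name      string `json:"name"`
	Namespace string `json:"namespace,omitempty"`
}

type ConfigMap struct {
	APIVersion string            `json:"apiVersion"`
	Kind       string            `json:"kind"`
	Metadata   ObjectMeta        `json:"metadata"`
	Data       map[string]string `json:"data"`
}

// newAppConfig builds a ConfigMap programmatically -- values come from
// function parameters rather than template placeholders, and the compiler
// catches type errors before anything reaches the cluster.
func newAppConfig(env string, replicas int) ConfigMap {
	return ConfigMap{
		APIVersion: "v1",
		Kind:       "ConfigMap",
		Metadata:   ObjectMeta{Name: "app-config-" + env},
		Data: map[string]string{
			"environment": env,
			"replicas":    fmt.Sprintf("%d", replicas),
		},
	}
}

func main() {
	out, err := json.MarshalIndent(newAppConfig("prod", 3), "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
```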

------
tonyhb
Aren't secrets stored unencrypted in kube? I'd hesitate to call them secrets,
though the config part looks good.

