Kubernetes at GitHub (githubengineering.com)
413 points by darwhy on Aug 16, 2017 | 137 comments



Love to see more Kubernetes success stories.

I work for an ISP and we are trying to write another success story ;) As an ISP, we have tons of constraints in terms of infrastructure. We're not allowed to use any public cloud services. At the same time, the in-house infrastructure is either too limited, or managed via spreadsheets by a bunch of dysfunctional teams.

For my team, Kubernetes has truly been a life saver when it comes to deploying applications. We're still working on making our cluster production-ready, but we're getting there very fast. Some people are already queuing up to deploy their applications on Kubernetes :D

What I especially love about Kubernetes is how solid the different concepts are and how they make you think differently about (distributed) systems.

It sure takes a lot of time to truly grasp it, and even more so to be confident managing and deploying it as Ops / SRE. But once you get it, it starts to feel like second nature.

Plus the benefits, in almost any possible way, are huge.


Red Hat's OpenShift makes it a lot easier by providing all of the infrastructure around it (docker registry, docker build from Git, Ansible integration and so on).

Best docs of all open source projects I've seen.


I second this. Have been PoCing OpenShift for a couple of months now and it's been a joy to use.


3rd for sure. We're a RH partner and specialize in OpenShift work. Tons of excitement with customers on using OpenShift. I love it.


Thanks! Hearing this makes it all worthwhile. We love helping make Kube awesome.


From the ops side, I would also suggest to take a look at Mesos, DC/OS and Marathon. Kubernetes, like Docker, is more developer-friendly; DC/OS is more ops-friendly. DC/OS can use Kubernetes natively.


>DC/OS can use Kubernetes natively.

Although this is possible, I've never actually heard of anyone using it in production. I'd be curious to know what kind of issues people run into with this in production.


No way I am running ZooKeeper AND etcd.


You might have a hard time avoiding both.


> During this migration, we encountered an issue that persists to this day: during times of high load and/or high rates of container churn, some of our Kubernetes nodes will kernel panic and reboot.

Considering that Kubernetes doesn't modify the kernel, this issue sounds like it's present in mainline, and kernel devs should be involved.


I would be interested to know what storage driver they're using for their nodes. High container churn puts a lot of stress on the VFS subsystem of Linux, and we've seen cases where customers have triggered lots of mounts/umounts, which results in filesystems causing panics. At SUSE, we do have some kernel devs debugging the issues, but the workaround is almost always "rate limit all the things". There are a few other kernel areas that are stressed by high container churn (like networking), but VFS is the most likely candidate in my experience.

While on paper containers are very lightweight, spawning a lot of them exercises kernel codepaths that probably haven't been subjected to that type of stress during development.


Hey, this is Aaron from GitHub. We're using devicemapper w/ LVM backed pools. Would love to hear about your experience there. We definitely see this problem during periods of high container churn.


That's funny, we have an internal bug open right now about kernel panics that happen with devicemapper (with XFS as the base filesystem). We found that the issue was exacerbated if you used loopback devices, but on paper it should still happen in non-loopback mode (the current theory is that it's a bug in XFS). Our kernel team is still investigating the issue, but they cannot seem to reproduce it with direct-lvm (and loop-lvm is inconsistent in reproducing it).
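For anyone following along, a quick way to tell whether devicemapper is on loopback (loop-lvm) or a real thin pool (direct-lvm); just a rough sketch, the exact output fields vary by Docker version:

    docker info | grep -A8 'Storage Driver'
    # loop-lvm shows lines like "Data loop file: /var/lib/docker/devicemapper/..."
    # direct-lvm shows the LVM thin pool name and no loop file lines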

If you can consistently reproduce the issue, would you mind providing the backtrace and/or coredump? Is it possible for you to reproduce the issue on a machine without needing to be hit by GitHub-levels of traffic, and if so can you provide said reproducer?

For reference, our backtraces show that the kernel dies in xfs_vm_writepage. Though of course different kernel versions may have varying backtraces.

You can reach me on the email in my profile, or asarai(at)suse.com.


My schroot tool used for building Debian packages could panic a kernel in under five minutes reliably, when it was rapidly creating and destroying LVM snapshots in parallel (24 parallel jobs, with lifetimes ranging from seconds to hours, median a minute or so).

This was due to udev races in part (it likes to open and poke around with LVs in response to a trigger on creation, which races with deletion if it's very quick). I've seen undeletable LVs and snapshots, oopses and full lockups of the kernel with no panic. This stuff appears not to have been stress tested.

I switched to Btrfs snapshots which were more reliable but the rapid snapshot churn would unbalance it to read only state in just 18 hours or so. Overlays worked but with caveats. We ended up going back to unpacking tarballs for reliability. Currently writing ZFS snapshot support; should have done it years ago instead of bothering with Btrfs.


In my work identity, we saw a similar problem in our testing, where blkid would cause undesired IO on fresh devices. Eventually, we disabled blkid scanning our device mapper devices upon state changes with a file /etc/udev/59-no-scanning-our-devices.rules containing:

    ENV{DM_NAME}=="ourdevice", OPTIONS:="nowatch"

Alternately, you could call 'udevadm settle' after device creation before doing anything else, which will let blkid get its desired IO done, I think.


Yes, we did something similar to disable the triggers. Unfortunately, while this resolved some issues such as being unable to delete LVs which were erroneously in use, it didn't resolve the oopses and kernel freezes which were presumably locking problems or similar inside the kernel.


A known (and now fixed) kernel issue affects the scheduler and cgroups subsystem, triggering crashes under Kubernetes load (fixed by 754bd598be9bbc9 and 094f469172e00d). The fix was merged in Linux 4.7 (and backported to -stable, in 4.4.70). So if you run an older kernel, maybe you are being hit by this?


Any particular reason you didn't choose something like overlay2?


Assuming they are using Red Hat, they only recently announced that overlay2 will be fully supported in the newest release, 7.4.


If you're running Kube on AWS, make sure you install the proper drivers! For Ubuntu, that's the `linux-aws` apt package.

https://github.com/kubernetes/kops/issues/1558

Missing ENA and ixgbevf can be a real performance killer!
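For reference, a minimal sketch of what that looks like on Ubuntu (assuming an ENA-capable instance type; interface names will vary):

    sudo apt-get update && sudo apt-get install -y linux-aws
    sudo reboot
    # after the reboot, confirm the enhanced-networking driver is in use:
    ethtool -i eth0    # expect "driver: ena" (or "ixgbevf" on SR-IOV instance types)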


FWIW, the stock kernel (and HWE/HWE-edge kernels) recently picked up current ENA drivers. ixgbevf, unfortunately, doesn't look like it's been updated in-tree so it still lags behind Amazon's recommendation (currently 2.14.2, whereas Xenial's in-tree driver claims to be 2.12.1 and Trusty has 2.11.3).


Is this used by vanilla docker / ECS, or just k8s?


That advice holds regardless of what software you're using, so long as the software does network traffic.

It holds for just running plain old nginx websites.

It doesn't really matter for small instance sizes where your networking is already rate-limited by Amazon so much that ENA drivers won't matter, but on beefy instances it's always good advice to make sure you're using ENA-supported drivers.

See https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/enhanced...


> Several qualities of Kubernetes stood out from the other platforms we evaluated: the vibrant open source community supporting the project, the first run experience (which allowed us to deploy a small cluster and an application in the first few hours of our initial experiment), and a wealth of information available about the experience that motivated its design.

It's interesting that the reasons they cite for choosing Kubernetes over alternatives are entirely driven by 'developer experience' and not at all technical. It shows how critical community development, good documentation, and marketing are to building a successful open source project.


I believe the developer experience of being introduced to a tool is paramount to its success. It gives a lot of confidence in what you're doing and keeps things moving forward. To me, it suggests that the application is built on a solid, simple concept instead of a convoluted, complex architecture. Some tools err in the opposite direction, though: very simple to set up, but very complicated to understand how to scale.


Really exciting stuff, happy to see the GitHub team launch this.

Kubernetes is becoming the go-to for folks needing both their own physical-metal presence and a cloud footprint too. And the magic of Kubernetes is that it has APIs that can actually give teams the confidence to run and reuse deployment strategies in all environments, even across clouds.

If you are like GitHub and want to use Kubernetes across clouds (AWS, Azure, etc.) and bare metal, and deploy/customize that infra using Terraform, check out CoreOS Tectonic[1]. It also tackles more of the subtle things that aren't covered in this article, like cluster management, LDAP/SAML authentication, user authorization, etc.

[1] https://coreos.com/tectonic


I'm still utterly perplexed as to what Tectonic actually -is-. I kinda get that it's a kubernetes setup, but is it a GUI over the top of it? The website is pretty confusing and I think I gave up really quickly when trying to set it up.


Tectonic is Enterprise Kubernetes. We start with pure upstream Kubernetes at the core and install it in a production ready setup with the Tectonic Installer[1] across clouds or bare metal. On top of those basics Tectonic provides things most organizations need:

- Authentication backed by LDAP/SAML/etc

- One-click automated updates of the entire cluster

- Pre-configured cluster monitoring/alerting

There is a bunch more in there and in the roadmap too but that gives you a taste.

The other thing is that we provide professional services, training, and support to customers on the whole stack from the VM or machine on up to the Kubernetes API. We have done neat collaborations with customers like the ALB Ingress Controller[2] too.

[1] https://github.com/coreos/tectonic-installer [2] https://github.com/coreos/alb-ingress-controller


I'm currently deploying Tectonic-flavoured Kubernetes at a large organisation and I can vouch for how great you guys are at supporting users (who are not yet customers) at any stage of the process - love that, and can't recommend you guys more for it. However, as the comment above says, Tectonic (and Quay for that matter) documentation is... just horrible, and if not for the engineers' support, I'd be pretty much stuck on quite a few things. Why don't you push your docs to a public repo, so I could do some writing and send some PRs? ;)


Happy to report that the Tectonic docs are open source and we would love to review your PRs: https://github.com/coreos/tectonic-installer/tree/master/Doc...

Any topics that stick out as needing the most attention? Glad you're enjoying your interaction with our engineers :)

(Product manager for Tectonic)


The bare-metal scaling documentation needs to be scrapped and rewritten from scratch. With very limited knowledge about Terraform, it's faster to start over with nothing than to try to get Terraform (apparently you need the installer-bundled one) to work with your assets.zip (which is not even mentioned in the installation documentation).


We're currently looking at moving our applications to k8s, and I was wondering what deployment tools people are using. This week we are evaluating Spinnaker, Helm and bash wrappers for kubectl. There is concern over adding too many layers of abstraction, and that KISS is the best approach.


At SAP, we're using Helm [1] to deploy OpenStack (plus a bunch of extra services like a custom Rails-based dashboard [2]) on baremetal Kubernetes. For image building, testing and deployment, we use Concourse CI [3], and OpenStack assets (like service users, service projects and roles) are deployed with a custom Kubernetes operator [4].

Our charts are at [5] if you want to take a look.

[1] https://github.com/kubernetes/helm

[2] https://github.com/sapcc/elektra

[3] https://concourse.ci

[4] https://github.com/sapcc/kubernetes-operators in the "openstack-operator" directory

[5] https://github.com/sapcc/openstack-helm and https://github.com/sapcc/helm-charts (two different repos since we're in the middle of modularizing the original monolithic OpenStack chart in the first repo into per-service charts in the second one)


We also did some evaluation and then decided to stick to KISS and chose kubectl commands combined with cat and kexpand. A really simple approach that allows dynamic Kubernetes deployments.

An example command is:

   cat service.yml | kexpand expand -v image-tag=git-135afed4 | kubectl apply -f -
The service.yml contains the full deployment configuration, service definition and ingress rules, so this works without preconfiguring anything in Kubernetes when deploying a new service.
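A rough sketch of what such a service.yml might look like (resource names are illustrative, and the {{image-tag}} placeholder stands in for whatever substitution syntax kexpand actually expects):

    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      name: my-service
    spec:
      replicas: 2
      template:
        metadata:
          labels:
            app: my-service
        spec:
          containers:
          - name: my-service
            image: registry.example.com/my-service:{{image-tag}}
            ports:
            - containerPort: 8080
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: my-service
    spec:
      selector:
        app: my-service
      ports:
      - port: 80
        targetPort: 8080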

An engineer only has to create the service.yml and Jenkins deploys it automatically on every master build.

*kexpand is a small tool which does something similar to sed, but in a simpler and less powerful way (keep it simple): https://github.com/kopeio/kexpand


We basically did the same thing except we used Ansible for our templating. This allows us to store all our shared "environment configuration data", e.g., name of RDS for services in prod environment, name of backup S3 bucket for services in dev environment, in an Ansible role then just pull that information into our templated deployment manifest file. So far, it's worked out pretty well for us.
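A hedged sketch of that pattern (variable, role and file names are made up):

    # group_vars/prod.yml (hypothetical shared environment data)
    rds_hostname: prod-db.example.internal
    backup_s3_bucket: my-backups-prod

    # roles/deploy/tasks/main.yml
    - name: render the deployment manifest with environment data
      template:
        src: deployment.yml.j2   # contains e.g.  value: "{{ rds_hostname }}"
        dest: /tmp/my-service-deployment.yml
    - name: apply it to the cluster
      command: kubectl apply -f /tmp/my-service-deployment.yml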


I do something similar, but envsubst does the job.
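For the curious, a minimal sketch (the variable name and file are made up):

    # deployment.yml.tpl contains e.g.  image: registry.example.com/app:${IMAGE_TAG}
    export IMAGE_TAG=git-135afed4
    envsubst < deployment.yml.tpl | kubectl apply -f -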


+1 on envsubst, it's the minimal solution to the problem of templating kubernetes (of course YMMV, we are a small team and don't need more complex stuff)


Ditto here: I use it to get my git sha into my image names and deploy with Deployments.


We (ReactiveOps) use a combination of CircleCI and some scripts wrapping kubectl https://github.com/reactiveops/rok8s-scripts


We're using helm, but that was chosen mostly based on gut feel. It's an official project, and has momentum. We didn't want to spend too much time choosing a tool until we knew our requirements, and we don't really have a firm grasp of requirements until we've used something for a while. It seems to be working well for us so far, but it's still early.

There are lots of answers here that aren't helm, so I'm curious if there are any particular reasons that people ruled out helm?


We ran into a number of issues with Helm when deploying - failures leading us to have to rollback, with rollbacks then failing, requiring manual changes to unblock.

I think that for third-party packages and related templating (which seems like the original use-case) it works well, but I would be wary of using it for high-res deploys of our own stuff.


Are you doing what Gitlab claims below "We (GitLab) use it mostly as a templating system as well" or really using it to manage complex apps? I think it doesn't really offer much for normal microservices.

I do use it for things like nginx ingress but for stuff I've built a service.yml / deployment.yml are fine.


I recently spoke about the approach we use at Ticketmatic: https://rocketeer.be/articles/coreos-fest-2017/


That must have been an awesome talk. Thanks for the write-up!


If you're using our GitLab consider using Auto Deploy. Our CTO recently made a quick start guide for it https://docs.gitlab.com/ee/ci/autodeploy/quick_start_guide.h...


FWIW, the next version of GitLab's Auto Deploy will use Helm under the hood (and let you bring-your-own-chart).


A lot of folks are using Helm, but I find it very opaque to debug when templates go wrong (and I feel quite strongly that we shouldn't be writing untyped templates for our models). Also I found writing reusable spec components to be very difficult, e.g. a reverse proxy that I add to a number of pods.

I use pykube (also worth looking at the incubator project client-python) to write deploy scripts in Python; client-python is particularly nice as it uses OpenAPI specs to give you type hinting on the API objects/fields that you're writing. Much more civilized than typing bare yaml into a text editor.

If Python isn't your thing you can generate your own client from the OpenAPI specs, though I've found the client generation process to be a bit buggy.


I've got a simple setup that makes use of YAML files, Rake tasks, and raw kubectl. I've yet to take a look at helm or spinnaker but it's on my list. You really can go a long way just with K8s' own tooling.


I started with creating Rake tasks. Then I found https://github.com/CommercialTribe/psykube which is opinionated. Then I ended up creating my own simple Ruby tool to manage Kubernetes with my own directory structure and configuration.


We've written a bunch about Spinnaker & K8s at http://go.Armory.io/kubernetes ; hope that's helpful!


I feel you. About a month ago I was fighting with the same feeling. In the end, decided to use Kubernetes only for a single piece of infrastructure so it's all pretty manageable through scripts. Managing secrets in particular is a pain in the ass.

One route I started checking but didn't commit to was using Ansible. They have a relatively good Kubernetes playbook and a facility to store secrets. That said, every damn task needs to be pointed to the K8S API endpoint, which is not the greatest.


Agree - we've been talking about how we can more natively tie the inventory into clusters, contexts, and apps. The host focus of Ansible doesn't always map to other domains, but I think it has a real chance with Kube.


I'd recommend looking into OpenShift. It's basically kubectl + cool deployment features. There are also free, paid, and dedicated online hosted options.

Disclaimer: I work on OpenShift.


You had a big announcement about openshift.io. Everyone on HN signed up, but it's been months and I'm still 'awaiting approval'.

Have you let anyone in? What's the value in all the marketing hype if you then don't let people in?


Remember that OpenShift.io is just one (newer) way to work with OpenShift, currently focused on deploying to the hosted OpenShift Online.

But if you are looking at doing self-hosted Kubernetes, you really should give OpenShift a look. It does a lot to give sane defaults for things like ingress and building containers, while simplifying the deployment story.

For just getting started and kicking the tires, you can use minishift to spin up a local VM with everything installed and configured: https://www.openshift.org/minishift/

Disclaimer: I also work for Red Hat. We're everywhere :-)


I can't speak much for OpenShift.io, which is a different product and as another commenter mentioned is just one way to work with OpenShift, with added team-based features on top of OpenShift.

For instant Online hosted access, our Pro tier will immediately allow you to create an account if you sign up today. On our Starter (free) tier we are working hard to provision as many new user accounts as we can (this week we were able to allow in 2,000 new users). Both of these are accessible at openshift.com

And of course, there is the possibility of hosting your own OpenShift cluster for free (the project is open source), which was more my intention of recommending a kubectl wrapper for the above poster.


When the Developer Preview of OpenShift Online closed, the next news I heard from OpenShift was about OpenShift.io. Before and after that, I have only received comms about webinars from the mailing list, and I've been watching for news that hosted options were coming back.

If the new $50/mo Pro Tier and free Starter account have already been advertised in any way, my experience should tell you that it was ineffective... (just so you know)

Maybe this was done on purpose because you can only allow a limited number of users in while remaining within capacity (so telling all of your old Developer Preview users they can now go ahead and hit up this new free tier all at once is maybe a recipe for exactly what situation you're trying to avoid)

But until I found out otherwise in this thread, I was still under the impression that the only remaining ways for me to use OpenShift (after the Developer Preview was ended) were, to spin one up by hand and host it for ourselves, or to pay something like $10,000 for a managed cluster on AWS.

Edit: I just found an announcement of OpenShift Online Pro Tier in an e-mail dated July 31. So it looks like I'm not actually too far behind the curve; and it was announced, I just didn't read it...


Seconded. The last e-mail I got from OpenShift.io indicated they haven't let anyone in. I've already practically lost interest. It was evidently only "coming soon" but that announcement really looked like "coming tomorrow."

The original OpenShift Developer Preview made you sign up, but you would be allowed into the platform within hours or days.


OpenShift.io is pre-beta, but we have begun onboarding in order to get early feedback. So far ~400 people are in, and we are slowly adding more as we stabilize and build out the product and underlying SaaS infrastructure.

We will be communicating our progress more openly and consistently with those on the waitlist. We really do appreciate your patience and are working hard to get people onboard.

Disclosure: I'm a PM in the Red Hat DevTools BU.


I don't mean to spout off in a public forum, don't think I don't understand how much work is involved in putting out something that we're not going to be disappointed with!

But I have been interested in OpenShift, and it is a good sign for me that so many people who obviously are involved in OpenShift are representing on this thread.

I will definitely give Openshift another look soon. We're building a group of experienced devs to help onboarding our new dev employees as they come on, and we're going to have some standard-setting capacity when it comes to showing off the tools we use.

(We are currently well behind in the K8S space imho, but getting better, and I think a more serious push for it is going to come soon. Our institution is large, and like all large institutions its Byzantine bureaucracy results in some extreme levels of inertia; we haven't really dived into containers near the level I had at my last position, which was still only token-level: most core infrastructure is still not containerized or even touching any containers.)

So, blank canvas!

I'm very much hoping to have an array of tools that I can share with new devs, so they can organically decide what works best for them. I had kind of discounted oc based on not having used it much myself, but I do remember some joy at reading the documentation: it was more centrally located, easier to "grok", and felt more complete.

Anyway both things that will be super helpful to anyone who is new at Kubernetes or Dev stuff in general. There is just so much to learn and the ecosystem is ofc constantly evolving!

I will keep openshift origin in the toolbox and take another look at pro offering.

Thanks for introducing yourself to me!


OpenShift.io is a different product than hosted OpenShift on its own. If you were to sign up for the Pro tier at openshift.com your account would be immediately provisioned. We are working on provisioning enqueued users for our Starter tier as well but again, these are separate from OpenShift.io


Yeah, OpenShift Pro is also quite expensive... starting at $10k for 5 nodes right?

I just checked out your pricing page and I'm 100% wrong about this. You have a $50/mo tier now! That's fantastic, thanks for pointing me at that.

You should definitely spam everyone on the OpenShift.io waiting list and let us know about your new pricing /s

No seriously, I would have liked to get some spam about this. Maybe you sent it and I missed it. I am a lot more interested at $50/mo than I would have been at $10k/yr!


We wrote an internal tool that wraps Helm and GPG. But we're really using Helm as a glorified templating system; since we deploy from git, Helm's release management is useless to us, and is even somewhat in the way. We might decide to drop Helm at some point, I think.


We went down a similar path and ended up using helm-template [0] to render our helm charts without tiller.

We also use an internal tool that:

- maps applications to namespaces within clusters for different environments (since we have 1 cluster per environment)

- does some magic around secrets to make them easier to interface with

[0] https://github.com/technosophos/helm-template
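For anyone wanting to try the same approach, a hedged sketch of rendering a chart without Tiller and piping it straight to kubectl (the chart path and values file are placeholders):

    helm template ./charts/my-app --values values/production.yaml \
      | kubectl apply -f -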


That's a good point. We (GitLab) use it mostly as a templating system as well. It's a step up from piping `sed` output to `kubectl`. But we have our own tools for managing redeploys and rollbacks.


We are using ecfg (from Shopify) and Jenkins+kubectl. We use ansible for a couple of things but it's largely only because of some parameters of our architecture for a legacy monolith.


You should take a look at Deis Workflow.

I say this in spite of the fact that it was announced last week[1] that the next release of Deis Workflow will be the last (under the current stewardship, and probably under that name).

It's just such a solid system. I would even more strongly recommend the (already EOL'ed early last year)[2] Deis v1 PaaS, except that you've already indicated you're moving to K8S, and Deis v2 is designed for K8S. I still recommend the v1 PaaS for people learning about principles of HA clusters. (Another disclosure: I have published[3] a piece on how to do a cheap HA cluster using the Deis v1 PaaS.)

I have a strong suspicion that Deis will live on after March under stewardship of new leadership from the community.

In the mean time, you have roughly 6 months of support from Microsoft, maybe I am overstating to say that they have committed to keeping the lights on for that long, but they have committed to merging critical fixes for that long (and we hope that in 6 months, Kubernetes will have solidified enough that we don't have to worry too much about breaking changes from upstream release mongers anymore.)

Personally I don't buy commercial support and it would not be the deal maker or breaker for me.

[1]: https://deis.com/blog/2017/deis-workflow-final-release/#futu...

[2]: https://deis.com/blog/2016/deis-1-13-lts/

[3]: https://deis.com/blog/2016/cheapest-fault-tolerant-cluster-d...


I'd advise against choosing core infrastructure components (that have a clear EOL deadline) based on strong suspicion.

Even more so in a landscape that's constantly changing like Kubernetes. You have zero guarantees that it'll be maintained and will keep up with new breaking changes.


You know it's open source, right? I have zero guarantees that any of my open source projects that I use for business critical infrastructure aren't going to pack up shop and quit maintaining their stuff tomorrow.

You should know how your infrastructure works well enough to maintain it for yourself. I (personally) will be maintaining this one in the future, if necessary! We're working it out now. What do you mean by "strong suspicion?"

Please don't downvote because you read a few words you didn't like. I was upfront about this EOL date because I don't want it coming back later that I was dishonest about it, but my perception is not that "EOL" means it's dead; it is that "EOL" means it's done. Stability is a good thing. Microsoft also EOL'ed MSPaint.exe, and I remember how the community reacted. I think the quote was about "works for 99% of users and has been stable for over a decade? sounds like a good candidate for deletion!"

The project is cancelled because it's not strategically important to Microsoft, not because it's not viable or having technical issues. The core devs have chosen to work on more kubernetes-native tooling. They aren't abandoning Kubernetes, and I'll bet you don't have a competing product you can show me that is guaranteed to keep the lights on for the next 6 months.


Unfortunately, neither it being open source nor its technical prowess is reason enough for some people. That's a simplistic analysis. Most people using Linux don't know the kernel code "well enough" to maintain it in the face of hardware changes and other external requirements.

I don't mean anything by "strong suspicions", you do: "I have a strong suspicion that Deis will live on after March under stewardship of new leadership from the community."

It seems you're implying someone is going to pick it up and offer a level of support that will justify it as a viable option. I don't have insider information to make that judgment but maybe you do.

I'm not making any judgment on its technical merits or the reasons that led Deis to sunset it.


OK, fair. It's still worth a look! That was my point. If you compare it to other alternatives and find it to be the best thing, it would be a shame to put it on the shelf instead when there are no alternatives that are as technically strong.

If you don't look at it, you have no basis to compare it to the alternatives that you're evaluating. I would rather pick a really good solution than a supported one. If I have to pick a supported solution, then I would rather at minimum compare it to a good one, so I can be honest about what shortcomings it has. It is telling that most of the recommendations in this thread are Helm.

Helm is great, but it ain't no Deis.

Deis Workflow is rock-solid and worth evaluating, even if support is a requirement for you. Other competing solutions are not as good. I don't even know any other that are really comparable. Maybe OpenShift, but it is not "really K8S"


> Maybe OpenShift, but it is not "really K8S"

Can you elaborate more? What makes you think that this is true?


It has incompatible resource types. They built OpenShift to handle authorization and permissions before Kubernetes RBAC was fully baked. So there are OpenShift solutions that don't exist on K8S, and vice-versa.

The BuildConfig and ImageStream for starters.

It's not a substantive difference that makes OpenShift much harder to learn, but it is a difference that means "if RedHat decides to 'Deis' OpenShift," we're stuck rebuilding everything for the better-supported K8S-proper mainline tree, and large parts of our tooling are going to need to be replaced because Kubernetes does not do ImageStream, and OpenShift does not do Helm.

Maybe the chances of that happening are low, but there are enough differences that from my understanding, I should not ever expect Kubernetes projects to be directly portable to OpenShift without modification (or vice-versa.)

It's also very expensive for an open-source project. Granted you are paying for support and cloud hardware, but I can take Kubernetes and spin it up anywhere. Try installing RedHat OpenShift onto arbitrary releases of CoreOS, Debian, and Amazon Linux like you might be able to do with kubeadm or kops.

That was one of the core promises of Kubernetes, to run anywhere that you can run Docker. My experiences with OpenShift were anything but that. (If I want to run OpenShift Origin, I'll be setting up a latest release of Fedora or RHEL to do it on, I guess.)

I will take Deis for my dev environments at least, because I think the chances that Kubernetes core devs are going to break the APIs in a way that makes it impossible for Deis to be kept alive by a ragtag bunch that figured out how the CI scripts work, pretty much nil. I can take Deis to any cloud provider on any operating system that can do Kubernetes, or onto my own metal (or on Minikube, or on Localkube, or ...)

You get the point by now... Kubernetes brings an ecosystem of options, and OpenShift narrows the scope and range of that ecosystem substantially.

When Deis v1 was EOL'ed, I got into a bit of an argument on HN with Bacongobbler about whether Deis v2 was a different product or not. I argued that it was, because it runs on a different platform now (K8S) and does not support running on the old platform anymore (Fleet).

Technically not true because you can run K8S on Fleet, and Deis v2 on that. But for a sysadmin, it was different, because I knew the rules about making Deis v1 with Fleet "High-Availability" and the rules were all different for Kubernetes, so I argued that it was different.

But for a user, the APIs are all compatible, and they may even bring API integrations such as deisdash.com with them to the "new" platform. (Deisdash is the only API integration for Deis that I am aware of, but when Deis v1 was EOL and Deis v2 was production-ready, you could use Deisdash with either.)

I've now fully eschewed Deis v1 (my old place of employment still has one standing, but it runs such a small amount of infrastructure that I could replace it with a severely less sophisticated setup and nobody would notice until it failed.) I'm on Workflow now, and I have approximately no regrets about it. I can take it anywhere that I can take helm and K8s.

I'll be looking forward to see what the Deis/Azure team bring out in the future that's going to obviate the need for me to be on unsupported EOL Workflow. Because according to Deis team lead @gabrtv, they are still just getting warmed up:

https://twitter.com/gabrtv/status/891096179089342464


> Maybe the chances of that happening are low, but there are enough differences that from my understanding, I should not ever expect Kubernetes projects to be directly portable to OpenShift without modification (or vice-versa.)

It's true that OpenShift goes a lot further in disabling things that are dangerous or not ready, i.e. preventing root containers, or not enabling third-party resources until they went to beta. But everything that runs on Kube and depends on a beta feature or higher runs on OpenShift.

Re: other OSes - a large part of what we do at Red Hat is making all the other stuff work - Docker, filesystems, selinux, security, NFS, volume drivers, network, etc. A lot of times it's not worth the extra effort to track five distributions of anything, but instead to focus on making something actually work. The behind the scenes work outside of Kubernetes is just as important as the Go code, and so we focus on those few operating systems and making it all work together.


The fact is, most Kubernetes projects I know are installed by Helm, and (it might have been you, personally who) explained to me that Helm is incompatible with a multitenant environment. I think they've made some strides since RBAC has gotten a little more polished, ... but please correct me if I'm wrong, OpenShift permissions model and RBAC are more compatible than I think.

The last I heard, you just can't really use Helm on OpenShift unless you go to some lengths to lock it down to a single namespace.

It would be amazing if someone could publish a Helm on OpenShift guide! Hmm, it seems you maybe already did: https://github.com/kubernetes/helm/issues/2517


Helm isn't incompatible, it's just not currently set up for dealing with different tenants. You can use Helm in a single tenant fashion on OpenShift just like you can use it in a single tenant fashion on Kube today.

Starting with OpenShift 3.6 (on Kube 1.6) all RBAC roles between Kube and OpenShift are treated equivalently, and from OpenShift 3.7 onwards the OpenShift RBAC rules are just a compatible API shim on top of Kube RBAC. The out of the box rules on OpenShift are more restrictive simply to ensure that full multi-tenancy is possible, but they can always be lifted.
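For anyone who wants to try it, a rough sketch of running Helm in that single-tenant fashion, scoped to one namespace (Helm 2 flags; the namespace and service-account names are made up):

    kubectl create namespace myapp
    kubectl create serviceaccount tiller --namespace myapp
    kubectl create rolebinding tiller-edit --clusterrole=edit \
      --serviceaccount=myapp:tiller --namespace myapp
    helm init --tiller-namespace myapp --service-account tiller
    helm install ./mychart --namespace myapp --tiller-namespace myapp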


Awesome. This makes me feel more optimistic about OpenShift, especially given that I probably can't realistically take Deis Workflow to production now.

(I don't know how much you've looked at Deis, but I couldn't think of anything better to compare it to than OpenShift. I could probably switch from Deis to OpenShift without too much hassle. Now I'm going to have to go ahead and try Deis _ON_ OpenShift, though :)


Hmm. It looks like[1] I'm wrong about one more thing, that being where you can run OpenShift. The current installation docs describe a single all-in-one binary that you can use to run OpenShift origin on any current Linux kernel, or Mac:

[1]: https://docs.openshift.org/latest/getting_started/administra...

I don't know if you need to have Docker for Mac installed, but I would guess you don't (it would be crazy to try to interface with arbitrary versions of Docker, it probably runs its own docker inside of a virtualized layer.)

I'm going to have to look at this again in some more depth. Looks better than when I saw it last time! (That's to be expected, I guess, but again it is encouraging!)


> I'd advise against choosing core infrastructure components ...

To be fair, I'd advise against choosing core infrastructure components based solely on a recommendation in an HN thread.

You should do your own research, obviously! I am comfortable enough with the Deis brand to say that it is a solution that does not even remotely rival the Linux Kernel in terms of complexity. Kubernetes has API stability, and you can count on APIs that are not marked "alpha" or "beta" to be around in the same form as long as it is still called Kubernetes.

Deis is made of small, totally understandable parts. It is not a monolith that you need to weigh heavily in your conscience whether to allow it into your infrastructure, in case you can't find support for it at some future date... you can mix and match components, if you find one part does something that your infrastructure was lacking!

I'd be glad to elaborate on my strong suspicions, but tl;dr the last few days I've been scrambling to figure out how I'll get my issues resolved, now that the Microsoft dev team I had working on them for free is going somewhere else.

I've been raising every issue I can think of so I can get eyes on it before it's too late. And so far, I don't see anything that I think I can't solve for myself. My short list of issues, I've been able to solve almost all on my own! I have a lot of knowledge about Deis, I've been around for a few years, but I am not a core dev and I have never contributed any commits.

My impression of the codebase is that it is uncompromising and extremely comprehensible. There are at least about half a dozen of us that it appears will be sticking around; we don't want to undermine Deis and Microsoft's EOL notice, because "no breaking changes" will make our job easier as maintainers into the future. The first person to say the wrong thing, may wind up responsible for a fork. It is a delicate time, but I hope to convince some people that it should not be completely discounted because of this news.

But I don't really know what's going to happen to the project. It could be that development continues on Raspberry Pi and the project loses its focus on cloud platforms, because it's cheaper to do the development on RPI. And that might be fine for cloud users (or, it could be fine until your cloud provider makes a breaking update! More likely I think than K8s breaking APIs that are marked stable.)


I'm still waiting for anyone who downvoted to tell me what better alternatives to Deis Workflow exist. I would rather have a better solution than a supported one.


I'm still waiting for anyone who downvoted to tell me what better alternatives to Deis Workflow exist

The kind of people that prefer to downvote you because you said something they didn't agree with are by definition not the kind of people that would have anything meaningful to contribute. Downvoting on HN is increasingly getting in the way of the utility of HN. I used to come here for the interesting discussions. I don't bother that much anymore.


Yeah I see a lot of downvotes, but I must be getting almost as many upvotes because I don't seem to be gray text...

Drive-by downvoters definitely bother me! I try not to let it bother me though.


> We enhanced GLB, our internal load balancing service, to support Kubernetes NodePort Services.

Everyone does this, because Kubernetes' Achilles heel is its ingress. It is still built philosophically as a post-load-balancing system.

This is the single biggest reason why using Docker Swarm is so pleasant.


Any load balancer can be configured or modified to target routable Pod IP addresses and skip node ports altogether. You'll have to integrate with the Kubernetes Endpoints API[1] and support dynamic backends. Another option would be to leverage Kubernetes' DNS and the SRV records[2] backing each service.

The reason node ports are used in the Cloud today is because most Cloud load balancing solutions only target VMs, not arbitrary endpoints such as containers, a limitation that will go away over time.

[1] Envoy with Kubernetes Endpoints integration: https://github.com/kelseyhightower/kubernetes-envoy-sds

[2] https://kubernetes.io/docs/concepts/services-networking/dns-...
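To make that concrete, a small sketch of both options against a hypothetical service (cluster DNS naming per the docs linked above):

    # dynamic backends a load balancer could target directly:
    kubectl get endpoints my-service -o yaml
    # SRV record for a named port on the service (default cluster domain assumed):
    dig +short SRV _http._tcp.my-service.default.svc.cluster.local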


Hi Kelsey, thanks for replying. You are definitely k8s secret weapon ;)

I know about this. However, ultimately the problem is that Kubernetes is not a gradual scale-up solution for most people. I have to be prepared to deal with building my own load balancer.

Basically, I cannot do an on-metal deployment very easily. Most of the questions on the k8s Slack for metal deployments were: how do I set this up with a few tweaks like SSL pass-through and source IP preservation.

It is not easy.

Either you build your own load balancer or you use a cloud-provided one. Now, Ingresses are not pleasant. I'm not sure about the state of source IP preservation, but last I remember, the nginx ingress had still not surfaced ssl_preread_server_name in the ingress configuration.

Now, what would have been nice is if it was ingress-all-the-way-down : ingress with something like istio/linkerd, maybe it is possible.

Tl;Dr - I'm not github. I can't build my own load balancer. Give me something that works out of the box. Yes, I know it may go down - I'll survive. Docker Swarm does this.


If you use a statefulset, there is no load balancing, regardless of bare metal or cloud. Every pod has a DNS record you can use to address all other pods, and it's carried over in case of a pod failure. Are you looking for a load balancer or not? If you are, as the other person mentioned, you can use the nginx one.
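For example (a sketch; assumes a StatefulSet named "web" with a headless governing Service also named "web" in the default namespace), each pod gets a stable DNS name like:

    web-0.web.default.svc.cluster.local
    web-1.web.default.svc.cluster.local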


For nginx: load balancing to pods from outside Kubernetes is one solution that I think is quite elegant on premise. https://github.com/unibet/ext_nginx


Since Kelsey won't pitch his book, I'll do it for him. Coming out soon:

https://www.amazon.com/Kubernetes-Running-Dive-Future-Infras...


Sorry to turn this into a different direction but I wanted to ask you about packaging + deployment.

I am about to try using helm for packaging my Kubernetes configs to make use of its templating. Being able to include Kubernetes changes in the release makes it less common to forget some new env variable etc.

The only thing I don't like is that it replaces kubectl in a way. And some comments here speak of problems during deploy and rollback which makes me wary.

What are some best practices around that topic? I was also thinking of deploying with spinnaker but could not confirm it works with helm.

Any info is appreciated!

P.S. have you thought about making a redis-cluster example repo? Redis 4.0 has a new `cluster_announce_ip` setting with which I made it work, but I still don't like my setup 100%.


This is a question you should ask in #helm-users in kubernetes.slack.com. Lots of Helm users are more than happy to help answer questions like this. :)


This is an apt point. Kubernetes models Borg, and Borg has no concept of ingress. That's an entirely different problem space.

Obviously that doesn't fly if there isn't an equivalent open solution, so we did what we could with the system to make it not terrible. We can do more.

The point about Swarm is interesting, and has been much on my mind. Some of Kubernetes' perceived complexity is because we go to great lengths to avoid ever having two users collide, with escape hatches for the people who really need "unfriendly" features. This is because, again, Kubernetes models Borg. Borg clusters are giant, shared, multi-user, multi-app animals, where the users are in different business units and chances of collisions are high.

Swarm, on the other hand, thinks of a cluster more as an application construct. Sharing is not a big problem, and coordination is easy and local. This allows them to make different tradeoffs. I doubt very much that you can run a large number of similar apps in a single swarm without having collisions on things like ports.

I still believe the large-shared-cluster model is right in the limit. There are so many efficiencies to be had. But there are legit reasons it is hard to achieve right now.

I'm very interested in ways to make Kubernetes easier to use, ESPECIALLY in this regard. Real user feedback is critical.


That's an interesting perspective - however, Docker Swarm also does that. Docker Swarm secrets have been GA for longer than k8s. The new UCP mechanism in the Docker Datacenter product is fairly interesting (it has not made it to Swarm yet). It's a paid product but makes RBAC pretty simple - https://success.docker.com/Architecture/Docker_Reference_Arc...

One very interesting tool that Docker makes available is https://store.docker.com/community/images/docker/docker-benc...

I think the issue with k8s is that it is competing with the "Ruby on Rails" of frameworks viz Docker Swarm. I think the pluggability of critical pieces like ingress and secrets was taken too far.

> I doubt very much that you can run a large number of similar apps in a single swarm without having collisions on things like ports.

I don't think that is true; it does manage its overlay networks pretty well. Which is, FWIW, another place where k8s took the non-opinionatedness too far. I think the number of bugs along the lines of "my stuff doesn't work with flannel but works with calico" should tell you that.

To be honest, Docker Swarm has some of these issues as well - https://github.com/moby/moby/issues/25526. But the fixes are included in the "batteries". On Kubernetes, I have to chase upstream projects with heterogeneous configuration (nginx vs haproxy ingress, or flannel vs calico configuration) to try and fix it.


> Docker Swarm secrets have been GA for longer thank k8s

I don't think that's true. Kube secrets were introduced 2015-02-17 and were considered GA in Kubernetes v1.0.

> I think the pluggability of critical pieces like ingress and secrets was taken too far.

I think the pluggability is not the concern but the lack of an included solution. Part of the problem is that SOME platforms have an included solution - e.g. Google Cloud, and some need 3rd party code like nginx.

> I dont think that is true, it does manage its overlay networks pretty well

Overlays are a waste for most people. I get that making it simple is attractive, but it's (IMO) not something everyone wants or needs. Again, we could/should have had a built-in option.

Last I looked (admittedly a while ago) Swarm had a pretty deeply rooted notion of exposing ports on all nodes in the swarm, which means that if you have multiple containers that need to expose the same port, it was a problem. Kube takes extra complexity here, to make it possible to share arbitrarily.

Anyway, it's not my intent to bad-mouth Swarm or try to convince you that you're wrong. Different trade-offs were chosen for the two systems. Your feedback is noted and appreciated. :)


Can you elaborate? I found Kubernetes ingresses to be one of the most pleasant parts - we use the `nginx-ingress` from helm and it works very well. Not exactly an industrial github-strength load balancer, but it will get you a long way surely?
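For reference, a hedged sketch of that kind of setup (chart, release and host names are illustrative; flags per Helm 2):

    helm install stable/nginx-ingress --name ingress --namespace kube-system

    # then per-app Ingress resources route hostnames to Services:
    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: my-app
      annotations:
        kubernetes.io/ingress.class: nginx
    spec:
      rules:
      - host: app.example.com
        http:
          paths:
          - backend:
              serviceName: my-app
              servicePort: 80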


I think he means on a global scale (multi datacenter/region/cloud)


I have to pitch our solution to this problem also, a fork of the nginx ingress controller, External nginx ingress controller.

https://github.com/unibet/ext_nginx

Basically just handling nginx.conf from information in k8.

We run in production with ECMP in our routers to load balance stateless over any number of nodes. Easy to understand and very scalable.


We use Kubernetes on AWS - it creates the ELBs for us when we provision a new service. We've never created a load balancer after deploying a service.


Can you elaborate? What are the issues you're seeing with Ingress?


The service I'm working on porting to kubernetes has bandwidth requirements that far exceed a single machine, and though we're on AWS, we can't use ELB for various reasons. We ended up with a ReplicaSet of haproxies using HostPort, and a separate app that watches for haproxy service changes and updates Route53 round-robin DNS. We're somewhat fortunate that we only have a single service that needs to use the http/https HostPort.

We could have just left our haproxies outside of kubernetes, and may eventually end up doing so if the network performance doesn't meet our needs. As it is, it all works but there are a ton of sidecar services all over the place.


This sounds like what Deis Router was created to do.

It sounds like you've already got it nailed down, but maybe like to have a look at this: https://github.com/deis/router

(It's probably tightly coupled to Deis Controller, but something to look at anyway!)


What does Swarm do differently here?


Swarm is a batteries-included system. You can use it in the way that Kubernetes is used... or (if you don't have all these sophisticated load balancers) you can allow it to load balance for you.

https://docs.docker.com/engine/swarm/ingress/#publish-a-port...

What it means is that when you create a docker swarm - it starts working.


I don't really know much about Swarm. Can you describe how that is different than NodePort with Kubernetes (which is the default for Services)?


Thanks for the link, but this sounds the same as Kubernetes' Service NodePort.


I think I hashlinked directly; I wanted to post the main page. Ingress also uses NodePort; the difference is not in NodePort, but in the ingress setup itself.

Docker Swarm's inbuilt ingress is now trying to build in proxy protocol and ipip mode for default usage.

Fwiw, you can use Swarm's inbuilt ingress with an external load balancer as well.


The link you posted highlights my exact concern with Swarm. If I use port 8080, then nobody else can use port 8080. That is a different tradeoff than Kubernetes is willing to make.

Functionally, we have this in NodePort, but because it is exclusively managed, you're very unlikely to have a meaningful conflict. BUT it depends on traffic ingress being managed.

If you just want to map port 8080 on every node into your kube Service, it's more complicated than Swarm. Granted. That's because we don't think you should do that - it doesn't scale.

If you just want to map a port on every node (and you don't care which port), kube Services have you covered. Swarm's model is a nearly-direct clone of this.
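For readers following along, a minimal sketch of that (names are illustrative; Kubernetes picks a node port from the default 30000-32767 range unless you ask for a specific one):

    apiVersion: v1
    kind: Service
    metadata:
      name: my-service
    spec:
      type: NodePort
      selector:
        app: my-app
      ports:
      - port: 80          # cluster-internal port
        targetPort: 8080  # container port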


Swarm is not limited to working with NodePort. I think I was not able to express what I was saying correctly.

Swarm has an ingress mode that is "built-in". It works like the Kubernetes ingress (with currently the same limitation of source-IP mapping). Let me re-emphasize: built-in.

Swarm also has a load balanced nodeport mapping (called host mode). That is also built-in.

In fact, what I'm trying to say is the opposite of what you are perceiving - Swarm is not superior. It is actually similar to Kubernetes.

However, Swarm has saner built-in defaults - both ingress and overlay networks (no more flannel vs calico vs weave incompatibility issues). Kubernetes has an escape hatch for defaults - kubeadm+kompose. I keep praying that kubeadm+kompose becomes the Keras to Kubernetes' TensorFlow: lots of defaults that let you get started quickly.


Are people happy with a built-in L7LB? I find most people are very particular about which LB they use.


Any favorite training for learning Kubernetes?

I found this one so far: https://classroom.udacity.com/courses/ud615

But any extra courses/trainings are always appreciated.



I've got that Pluralsight one earmarked for 'soon'. I thought Nigel Poulton's Docker courses were excellent, so looking forward to it...


I'm interested in this as well. I found Heptio https://www.heptio.com/support-services-and-training


How hard (and how realistic) is it to actually get a reasonable understanding of (and then stay up-to-date with) Kubernetes internals? Is there any go-to reading material?

We ran another large-footprint container management system (not K8s, but also popular), and when its DNS component started to eat all the CPU on all nodes, the best I could do quickly was scrap the whole thing and replace it with some quick-and-dirty Compose files and manual networking. At least we were back to normal in an hour or so. Obvious steps (recreating nodes) failed, logs looked perfectly normal, quick strace/ltrace gave no insights, and trying to debug the problem in detail would've taken more time.

But that was only possible because all we ran was a small 2.5-node system, not even a proper full HA setup or anything. And it resembled Compose closely enough.

Since then I'm really wary about using larger black boxes for critical parts. Just Linux kernel and Docker can bring enough headache, and K8s on top of this looks terrifying. Simplicity has value. GitHub can afford to deal with a lot of complexity, but a tiny startup probably can't.

Or am I just unnecessarily scaring myself?


I wouldn't say that you're unnecessarily scaring yourself at all. Kubernetes is extremely complex. I've been running it for a few months and I'm just starting to get my hands around it. Things will just stop working for what seems like no reason, and there are so many places to investigate you can easily burn most of a day troubleshooting.

It's a great system, but it's also relatively new, and most issues aren't well documented. You'll spend a lot of time in github issues or asking for help in the (very active, and often very helpful) community.

If you have a valid use case, I wouldn't steer you away from it, but your fears are well founded.


> Enhancements to our internal deployment application to support deploying Kubernetes resources from a repository into a Kubernetes namespace, as well as the creation of Kubernetes secrets from our internal secret store.

Would love to hear more about how this was accomplished. I'm currently exploring a similar issue (pulling per-namespace Vault secrets into a cluster). From what I've found, it looks like more robust secrets management is scheduled for the next few k8s releases, but in the meantime I have been thinking about a custom solution that would poll Vault and update secrets in k8s when necessary.


One thing I would have liked to have seen addressed in the article is whether the new architecture requires additional hardware (presumably) to operate and if so how much more.


We're on AWS, so this is tangentially related. At my company, we moved to k8s because we have quite a few low-usage services. Before k8s, each one of those services was getting its own EC2 instance. After k8s, we just have one set of machines which all the services use. If one service is getting more traffic, the resources for that service scale up, but we maintain a low baseline resource usage. In short, it's resulted in a measurable drop in our EC2 usage.


I've only dabbled in K8s and it strikes me that using it in production is a long term investment and, as it stands currently, a long term project to implement properly. You'll want to do exactly what Github did: setup a "review lab" or similarly comprehensive dev and test environment until you are absolutely comfortable with it in production. This will lead to the provisioning (and cost) of quite a bit of hardware - and when it is finally in production it'll likely be over-provisioned for quite some time until norms can be established and excess cut.

So basically it's a traditional devops migration. But you get quite a few goodies and arguably much better practices at the end of it.


I agree very much, and I'd like to add one point: When you build a lab environment for testing Kubernetes deployments (and verifying Kubernetes upgrades), make sure it's on the same hardware as your production environment.

When my team did the first Kubernetes deployment, we made the mistake of building a lab environment that did not match the anticipated production environment. (Two reasons: The BOM for the production environment was not yet decided upon at that time, and the lab was frankensteined together by taking hardware out of existing labs.) We learned the hard way that, just because the Kubernetes upgrade worked in the lab, it need not work on the production hardware.

Right now, we're stuck on last year's (i.e., ancient) Kubernetes 1.4 release because no one dares to upgrade production. (There's light at the end of the tunnel, though. A new lab is being built up in the datacenter around now.)


If you're on AWS you can use kops (https://github.com/kubernetes/kops) to significantly reduce the amount of time to get a cluster up. It took me about 1hr to get a basic k8s cluster up and running with it.


I'd be interested in hearing what kind of autoscaling system they use for their Ruby pods.

We're running a few (legacy — we're moving to Go) Ruby apps in production on Kubernetes. We're using Puma, which is very similar to Unicorn, and it's unclear what the optimal strategy here is. I've not benchmarked this in any systematic way.

For example, in theory you could make a single deployment run a single Unicorn worker, then set resources:requests:cpu and resources:limits:cpu both to 1.0, and then add a horizontal pod autoscaler that's set to scale the deployment up on, say, 80% CPU.

But that gives you terrible request rates, and it will be choking long before it reaches 80% CPU. So it's better to give it, say, 4 workers. At the same time, it's counter-productive to allocate it 4 CPUs, because Ruby generally won't be able to utilize them fully. And more workers obviously mean a lot more memory usage.

I did some quick benchmarking and found I could give them 4 workers but still constrain them to 1 CPU, and that would still give me decent qps.
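
To make the first scenario concrete, a sketch of the autoscaler half (using the Python client against a hypothetical deployment named "web" whose container already has requests.cpu and limits.cpu pinned to 1) could look like:

    from kubernetes import client, config

    config.load_kube_config()  # or load_incluster_config() from inside the cluster

    # "web" is a hypothetical deployment, assumed to already exist with
    # resources.requests.cpu == resources.limits.cpu == "1" on its container.
    hpa = client.V1HorizontalPodAutoscaler(
        metadata=client.V1ObjectMeta(name="web"),
        spec=client.V1HorizontalPodAutoscalerSpec(
            scale_target_ref=client.V1CrossVersionObjectReference(
                kind="Deployment", name="web"),
            min_replicas=2,
            max_replicas=20,
            # Add pods once average CPU across the pods passes 80% of the request.
            target_cpu_utilization_percentage=80,
        ),
    )
    client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
        namespace="default", body=hpa)

The tuning question above is then just the per-pod worker count and CPU limit; the autoscaler only sees aggregate CPU utilization across the pods.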


At GitLab we recommend using CPU cores + 1 as the number of Unicorn workers: https://docs.gitlab.com/ce/install/requirements.html#unicorn...


How do you configure that? A pod doesn't know what machine it's running on ahead of time. You can create nodepools and use node selectors to pin the pod to that nodepool, but I'm not sure I love the idea.


Our entrypoint configures the Unicorn workers before starting, using a Chef Omnibus call. We grab the memory and CPUs using Ohai: https://gitlab.com/gitlab-org/omnibus-gitlab/blob/master/fil... and do cpus + 1

^This is pretty much what we do by default for all our regular package installs, but in some of our Kubernetes Helm charts we instead statically configure the pod resources and Unicorn workers (defaulting to 1 CPU / 2 workers per front-end pod), e.g.: https://gitlab.com/charts/charts.gitlab.io/blob/master/chart...

As someone mentioned in this thread, using the downward api might be a cool way to configure the workers.


Using the downward API, your application can get the number of CPU cores or the millicore value via environment variables or volumes. This would let you configure your number of workers based on what resource limits you set on your container.
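
As a rough sketch, assuming the pod spec injects the container's CPU limit as an env var (say CPU_LIMIT, via a resourceFieldRef on limits.cpu with divisor "1", which rounds up to whole cores), an entrypoint could derive the worker count along the lines of the "cores + 1" rule above:

    import os

    # CPU_LIMIT is assumed to be injected by the downward API
    # (resourceFieldRef: limits.cpu, divisor "1").
    cpu_limit = int(os.environ.get("CPU_LIMIT", "1"))
    workers = cpu_limit + 1  # the "cores + 1" rule of thumb

    # Hypothetical hand-off: the unicorn.rb config is assumed to read
    # UNICORN_WORKERS for its worker_processes setting.
    env = dict(os.environ, UNICORN_WORKERS=str(workers))

    # Replace this process so unicorn receives signals directly.
    os.execvpe("bundle", ["bundle", "exec", "unicorn", "-c", "config/unicorn.rb"], env)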


Curious: GitHub being a RoR app, did it ever run on Heroku? (Obviously googling "github heroku" is just a million tutorials on how to integrate.)


EY, Rackspace, and then our own metal.


I think I remember hearing that they were on EngineYard back in the day, but never Heroku.


hey! We at bitmovin have been using k8s for quite a while for our infrastructure and on-premise deployments. In case you're interested in how we do multi-stage canary deployments, check out: http://blog.kubernetes.io/2017/04/multi-stage-canary-deploym...


I'm curious what this means for the existing Puppet code base: is it now irrelevant, or are there still uses for it in the k8s world?


They could easily still use standalone puppet to handle the config management for individual container images. I currently do this with salt-minion. It reduces the burden on the Dockerfile itself, and lets you embrace a declarative configuration state at build time.


It definitely seems like the wrong approach to me to have Puppet manage your base images. They're not VMs; they shouldn't have multiple services, they shouldn't require any complex configuration management, they should just be the minimum requirements to support your application's local runtime dependencies, and that's it.

From previous experience migrating from a puppet setup to one that used containers, puppet's vestigial use case ends up being to get the orchestration control plane itself setup (ie. kubernetes, networking configs, etc) and that's about it.


There's nothing inherent about Puppet that means it has to manage multi-service, "whole OS"-like installations. It can just as easily be put to the task of a Dockerfile: install dependencies and deployables for a single application. Its robust ability to manage things like user accounts, packages, scheduled jobs (e.g. for alerting, though you would have to install at least a second service for this: _crond) and the like makes it vastly superior to Dockerfile shell scripts for complex tasks.

Think of Puppet more as a way of simplifying your Dockerfiles so they have fewer crazy shell commands in total, rather than hiding the craziness in layers and hoping it all composes properly. If you do use lots of layers, Puppet can make your life much easier, since it can be better at detecting previous layers' changes and working around them (think redundant package install commands: even the no-op "already installed!" command takes time, and if you're installing hundreds of packages--many people are, for better or worse--that can eat up build time).

Puppet isn't just a VM provisioner; it can also be used as a replacement for large parts of your Dockerfile, or as a better autoconf to set up the environment/deps for your application to run in.

Edit: syntax.


The point about layer complexity is a great one I hadn't even considered. Your "config" step is no longer a mish-mash of dozens of COPY/RUN/etc. directives (resulting in N new intermediate image layers); it just results in a single atomic layer where you run the Puppet bootstrap.

Obviously you could accomplish this with shell scripts as well to constrain your config step into one docker RUN directive, but I prefer the declarative state approach to the imperative one in this case.


1) I think you missed my point entirely here; I probably didn't do a good job of explaining it. I was trying to say that you run Puppet once at build time to bootstrap the configuration for the image, and that's it. You could even uninstall it in the last build step if you want to reduce the final image size. The primary distinction here is declarative vs. imperative configuration management.

2) The one process-per-container dogma isn't necessarily the only way to run a successful docker stack. For example, I don't see anything wrong with using supervisor to manage whatever process you're running in your container.


This introduces a dependency on which pod runs on which host, unless you have puppet write the config for every service to every host.

People tend to use Puppet less to configure their applications as they move into containers, as a configuration change can just be made by rolling out a new image.


Needa bootstrap your k8s servers somehow ;)



