For folks not familiar with Helm, it's basically apt-get for Kubernetes, but with the ability to deploy complex multi-tier systems. It has now graduated out of the Kubernetes incubator.
And their Workflow product (also open source), is basically the smallest piece of software that lets you run Heroku buildpacks on top of Kubernetes. So, you can get a 12-factor PaaS workflow, and still have the full Kubernetes API underneath if and when you need it.
Update: And I left out my all-time favorite piece of marketing collateral, their Children's Illustrated Guide to Kubernetes (available both as children's book and video): https://deis.com/blog/2016/kubernetes-illustrated-guide/
(Disclosure: I'm the executive director of CNCF and Gabe has been a super valuable member.)
I use Helm on a couple of different projects as an integral part of CI/CD, for templating config files and Ingress resources. Coupled with a good CI/CD system (I'm using GitLab), Helm templating is pretty critical in my workflow for creating demo applications (one per PR) at a specific DNS endpoint, and then, after integration, moving the updated code out to multiple DNS endpoints.
Congrats to the team, and I look forward to seeing where Helm can go from here!
I'm currently working on porting my manual manifests + kubectl k8s workflow to Helm + GitLab CI (to automatically build and push images and then deploy to the cluster), but I'm only at the stage of creating the Helm charts, so I'm very interested in any literature on how to set up a CI pipeline that builds the Docker images, pushes them, bumps the chart version, deploys to a staging environment for every PR, and so on.
Admittedly, I'm probably only using 50% of Helm's capabilities, because I'm primarily interested in its templating abilities rather than managing a centralized repo of apps that can be installed.
My current pattern (which I'm still iterating on) is that I have a `chart` directory which contains the Helm chart and templates, and TBH I don't really bother updating the version in `Chart.yaml` because I'm managing it in-repo with git tags, branches, etc. Then my gitlab CI scripts just `--set` a few variables so that the right docker image, DNS, and ingress path values are used for the branch.
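As a sketch of that pattern (the job name, chart path, and value keys here are illustrative, not taken from the original setup), a per-branch deploy job might look like:

```yaml
# .gitlab-ci.yml (sketch): deploy one review app per branch/PR,
# overriding chart defaults with --set instead of bumping Chart.yaml
deploy_review:
  stage: deploy
  environment:
    name: review/$CI_COMMIT_REF_SLUG
    url: https://$CI_COMMIT_REF_SLUG.example.com
  script:
    - helm upgrade --install "app-$CI_COMMIT_REF_SLUG" ./chart
        --set image.repository=$CI_REGISTRY_IMAGE
        --set image.tag=$CI_COMMIT_SHA
        --set ingress.host=$CI_COMMIT_REF_SLUG.example.com
  only:
    - branches
```

`helm upgrade --install` creates the release on the first run and upgrades it on subsequent pushes, so the same job works for new and existing branches.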
This does mean that we're only using a small portion of Helm, and I wonder if a simpler tool wouldn't work just as well. All you need is templating, really.
I also don't understand why Helm requires "packaging" and "publishing" anything into a repo. If I commit the chart files to a Github repo, why can't Helm just go there and get them? I shouldn't need to run an HTTP server to serve some kind of special index. Git is a file server.
What I do like is the idea of Helm as a high-level deployment manager. Kubernetes deployments have nice rollback functionality, but you can't roll back multiple changes to things like configmaps and services, which I think Helm does? But Helm still wants those numbered versions...
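For reference, that release-level rollback is driven by exactly those numbered revisions (the release name here is a placeholder):

```shell
# List the numbered revisions Helm has recorded for a release
helm history myapp

# Roll the whole release (deployments, configmaps, services alike)
# back to a specific revision number
helm rollback myapp 3
```

This is the part raw `kubectl rollout undo` can't give you, since it only tracks revisions per Deployment.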
In my opinion, Helm ought to ditch the "package management for Kubernetes" approach, and instead focus on deployment. Right now it feels like it's straddling the line between the two, and coming up short because of that. Perhaps what I want should be a separate tool.
Edit: Two more things: First, I don't like how Helm abuses the term "release". To me, and also the rest of the world, a release is a specific version of an app. You upgrade to a release, you don't upgrade a release. I think you should rename this to something like "deployment", "installation", "instance", "target" or similarly neutral.
Also, I have to say that templating ends up being a bit boilerplatey. There are things you typically want parameterized across all apps — things like the image name/tag, image pull policy, resources, ports, volumes, etc. The fact that every single chart has to invent its own parameters for this — which means your org has to find a way to standardize them across all apps — isn't so cool.
Edit: One more thing: Release names seem to be global across namespaces. That very much violates the point of namespaces. Name spaces. :-)
Sorry for the confusion!
To use a URL, I think you'd either have to push up the packaged chart as a .tgz file in your repo (which is annoying), or you'd have to package up the chart and create a GitHub release with the resulting .tgz to be able to reference it that way. In GitLab you might be able to point to a pipeline artifact after packaging the chart using GitLab CI.
In one experimental project, I'm using GitLab Pages to package and publish the index. It works out surprisingly well, but has some shortcomings.
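That setup can be sketched as a Pages job that packages the chart and regenerates the repo index (group/repo names are placeholders):

```yaml
# .gitlab-ci.yml (sketch): publish a Helm chart repo via GitLab Pages
pages:
  stage: deploy
  script:
    - mkdir -p public
    - helm package chart/ --destination public/
    # Regenerate index.yaml for everything currently in public/
    - helm repo index public/ --url https://mygroup.gitlab.io/myrepo
  artifacts:
    paths:
      - public
  only:
    - master
```

One of the shortcomings: a fresh index only covers the charts packaged in this run, so keeping older chart versions available means either fetching the previous `index.yaml` and passing it to `helm repo index --merge`, or keeping old .tgz files in the repo.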
We're looking at making it easier for people to bring-your-own-helm-chart and have GitLab deploy it. There's value in keeping the chart in the project repo, and there's value in decoupling the chart and putting in a different repo. I'm fascinated to see which becomes best practice.
I can't say I'm thrilled to put deployment-specific values in the repo. Maybe the solution is to keep the chart itself, with its defaults, in the repo, and then have a configmap or third-party resource in Kubernetes that contains the values. To deploy anything, you use the Kubernetes API to find the app's values and merge them into the template.
I wouldn't be surprised if one day Kubernetes might support that kind of templating. I also wonder what kind of system Google uses internally. They have been rather silent about offering good advice here.
I'm not planning on using all of helm either, and specifically, the whole chart repo feature is something that I don't see myself using at the moment.
The workflow I'm envisioning is one where each project (say, Project-A) has a chart directory containing its own chart, instead of keeping charts in a centralised repo. The chart is used for deployment, but its version isn't bumped manually: after tests succeed, the CI builds a new Docker image, tags it with a version bump (or build number), and pushes it; the chart is then updated to use the new image, gets its own version bumped (or build number), and is finally deployed to the staging environment. There should also be a next step for taking a chart deployed to staging and promoting it to production, but I'm not sure how that should be done yet.
I'm also not sure whether the correct solution is to actually update the chart in the repo from CI, or to just supply values to helm at deploy time that override the ones hardcoded in the chart. In any case, I'm envisioning a workflow where, for each PR and version tag in a repo, a new Docker image is built and pushed and a new chart is produced and deployed, so that you have a complete history of images and charts for every PR and version.
This gist of mine may help you get started with the CI portion (CI and k8s is interesting, there is a lot you need to account for as you're not given errors if a pod fails to deploy, etc). It could easily be converted to helm upgrade instead of raw kubectl - I've kept it kubectl so people know what's going on.
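The helm equivalent of that raw kubectl step, as a sketch (release and chart names are illustrative), with `--wait` addressing the "you're not given errors if a pod fails to deploy" problem:

```shell
# Instead of: kubectl apply -f manifests/ (and hand-checking pod status),
# let Helm apply the chart and block until the new pods are actually ready.
# The CI job then fails on a bad rollout instead of silently succeeding.
helm upgrade --install myapp ./chart \
  --set image.tag="$CI_COMMIT_SHA" \
  --wait --timeout 300
```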
I'm hoping the monthly "Town Hall" style Zoom meetings will continue, though if so, I expect them to start getting much larger quickly with this news.
Obligatory disclaimer that I'm an engineer @ Deis working on Workflow daily.
It's true they'll get a decent amount of money, that from now on they'll have infinitely deep pockets, and that they'll have some of the best keyboards and mice, but it's also true their wiki will end up in SharePoint and their e-mails in Exchange.
Possibilities off the top of my head:
- Tighter integration of Helm, Workflow and Steward into Azure Container Service seems like an obvious one.
- Integration of Helm into Visual Studio Team Services?
- Option to deploy your app from Visual Studio to Azure Container Service
- Better container tools for Azure CLI
Something like Office or Windows, well, that's outside my wheelhouse.
(But I'd guess that's probably enough to shoot it down as a tool for Microsoft to ever use, right there. Not rightfully, but likely in fact... maybe that's the old Microsoft though.)
> "the Windows codebase has over 3.5 million files and is over 270 GB in size."
Except for helping with Windows Server support for Kubernetes and supporting Kubernetes in Azure Container Service.
When the Skype buyout was announced, my friends called and laughed at me. The tech evangelist working for Microsoft whom I ran across while in Stockholm on business at the time, though, was super stoked.
I know and lived those days, and I see the stories about the various "reporting back" -- that is outside my purview -- but when it comes to engagement with the OSS community, including Kubernetes, some simple searching will reveal just how much we've been involved.
One engineer working on the Azure Container Service team personally wrote most of the Azure cloud provider. An engineer from Red Hat contributed the initial persistent-volume-on-Azure support. An engineer in DX/TED is helping improve persistent volume support in 1.5.3 and 1.6. These things are easily revealed by looking at GitHub PRs and other activity.
It's easy to post the knee-jerk reply (I know, I see something and think the same), but sometimes a bit of searching will reveal surprises.
No cause-and-effect on Dokku. Both are separate projects managed by separate entities.
We do, however, wish them luck and are happy to see them succeed.
- Source: I am one of the primary Dokku maintainers.
Instead, we switched to running normal VMs using CoreOS and running Kubernetes on top of that. Same features and stability, but with the auto-update benefits of CoreOS.
I'm very familiar with docker, which we've been using for over 2 years.
But now we're trying to get k8s running, with kargo, kubeadm, deb packages, etc. They all failed with different bugs on different sets of clouds/settings (trying to stick to running it on Ubuntu Xenial).
Not sure if it's because 1.6.* just came out of the oven when I started...?
Thanks to Minikube, I understand how powerful k8s can be, and actually find kubectl quite simple to use, but I'm confused by how fragile and complex installation and setup seems to be.
I'm unsure how someone is supposed to maintain this system, considering how (overly?) modular it is and the bugs I've encountered. Knowing that Docker has a LOT of bugs, and k8s builds on top of it, I'm a bit scared. And there is no clean documentation on how to install it, with sections for all your choices, in a generic/agnostic way (deb and rpm distros, cloud integration or plain abstract VMs, ...).
What is your workflow? :)
It works well with Ubuntu, and we are working on CoreOS support (although no promises it gets merged back into core).
Kops can do upgrades although historically it was safer/easier to stand up another cluster alongside and migrate.
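For anyone landing here, the kops flow mentioned above boils down to a few commands (the cluster name and S3 bucket are placeholders):

```shell
# State store where kops keeps cluster configuration
export KOPS_STATE_STORE=s3://my-kops-state-bucket

# Create the cluster spec, then apply it to the cloud account
kops create cluster --name=k8s.example.com --zones=us-east-1a
kops update cluster k8s.example.com --yes

# Upgrades: bump the spec, then roll the nodes one at a time
kops upgrade cluster k8s.example.com --yes
kops rolling-update cluster k8s.example.com --yes
```

The stand-up-a-second-cluster-and-migrate approach mentioned above sidesteps the rolling-update step entirely, at the cost of duplicated infrastructure during the cutover.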
Happy to help if you have any questions - Matt [at] reactiveops.com.
You have to understand that there are still a lot of unfinished features (for example, almost no real documentation) and a lot of operational aspects left uncovered (for example, persistent volumes backed by local disk, for running software that needs low-latency I/O, e.g. DB servers).
The general installation flow is to beat it into submission. Drag the thing kicking and screaming into a cluster until it forms a quorum (etcd, apiserver + controller-manager + cloud bits, scheduler, kubelets), and don't forget about the overlay network. And that's it: ugly, but if it works, it works. As you say, there are too many bugs in docker/rkt (OCI, libcontainer, the container filesystem problems with overlayfs, aufs, btrfs, and devicemapper, AppArmor/SELinux label issues, and other Linux-kernel-related issues), in Kubernetes itself, and then there's the whole networking layer, still very much in flux.
But it's usable, because it's "antifragile", so if it can reach a working state, you can be pretty confident that it'll be able to reach it again if you add more nodes, nodes crash, load fluctuates, updates happen, deployments happen, etc.
I think minikube is a crucial early step in allowing at least agnostic development where a "professional" can then ease it into a cloud provider by turning knobs here and there. It'll probably stay that way for awhile but we'll see.
All for Kubernetes.
edit: links fix
Just from my perspective, having sat down and done a hackfest with the folks from Deis in Redmond, they bring a phenomenal amount of Kubernetes experience to Microsoft. We have a number of people who work in the area (contributing Helm charts, other PoCs, etc.), but more knowledge in an interesting and growing area is always great to have.
Additionally, Workflow is a great way to get started on containerized apps and it works quite well atop ACS.
If people are doing interesting things on Azure (ACS/acs-engine or even VMs) using Kubernetes or DCOS, I'd love to hear about it.
Because they're selling time on an IaaS. They don't have to pick winners when they present a fungible pool of resources.
By way of analogy, BP and Shell don't care where their petrol gets burnt. Ford, GM, it's all the same to them.
Disclosure: I work on such a platform, Cloud Foundry, on behalf of Pivotal. We have a close working relationship with both Microsoft and Google.
So it is Kubernetes AND Mesos for them.
Just to clarify, we never dropped the original PaaS. Deis (now named Workflow) is still in active development, uses Kubernetes as the underlying scheduler and has monthly releases. We actually just released v2.13.0 5 days ago. :) https://github.com/deis/workflow
There is no direct upgrade path from v1 to v2, the branding was changed (new product name entirely) to coincide with the release of v2, and the v1 LTS branch is no longer receiving updates, support, or new builds when issues are identified.
It's kind of like the axe that is passed down from generation to generation for 150 years. Can't really call it the same axe anymore when you've replaced the blade and the handle several times over.
(This coming from a happy Deis user that still has living installations of both v1 and Workflow.)
Congratulations on the acquisition!
To be fair, we did continue to support Deis v1 for 3 whole years(!), which is very long considering we're just a small startup. Being woken at 3 AM by yet another obscure etcd/fleet server failure really sucked, and systemd never truly got along well with Docker, making for some fun interactions with Fleet. Overall (speaking personally as an engineer and support engineer), we are very happy we made the decision to switch to Kubernetes. Mind you, we were as early an adopter of Fleet and etcd as you could get at the time, and etcd in particular has since been significantly better for us in terms of stability/error reporting.
> There is no direct upgrade path from v1 to v2
I agree that it sucks there was no upgrade path from v1 to v2, but we felt it was necessary to make breaking changes to move forward from Fleet into the world of Kubernetes. That doesn't change the fact that we never dropped the PaaS product as a whole, though.
> the branding was changed (new product name entirely) to coincide with the release of v2
This was actually more to do with us becoming "Deis the company" than with the v2 release. Lots of users were getting confused between "Deis the company" and "Deis the GitHub project", so we decided to rename the project Workflow to make it clearer in conversation.
> and Workflow is a completely separate and different product.
Curious to understand why you feel v2 is a completely different product. From a user's standpoint the product offering never changed. The API, CLI, and `git push` workflows were all still present in v2 and were drop-in replacements, save for backwards-incompatible database migrations (hence v2.0.0). It was just the administrator's point of view that changed (Fleet -> Kubernetes, deisctl -> helm). To me it still feels like the same product, but I'm curious to hear why you feel differently. :)
The kind of support I got from Deis the company is really without comparison when it comes to Open Source projects anywhere else.
The fact that v2 is a drop-in replacement for v1 really eases the sting of the fact that there is no direct upgrade path. I still have an old Deis v1 kicking around because I left the company, transitioned to hourly, and made an agreement to draw down my hours at this company, where we are in reality hardly using Deis at all. But in the small capacity we are using it, there is what I'd call "VMware Levels of Reliability" and so it successfully became a piece of the infrastructure there.
So it would be disingenuous for me to say that I was not able to upgrade my v1 installation. The reason I did not upgrade is that there was no strong interest in upgrading. The unsupported product is as good as our (also unsupported, but luckily not end-of-life) vSphere and vSAN environment.
It is a different product to me, in short because I am the one administering it. It runs on a different platform altogether, it has no distributed filesystem component where the old version did, and it has not really harmed me in any way that there is no upgrade path. It is just a couple of facts that led me to the conclusion that they are in fact distinct and other products that are not directly related to each other, except that they could easily pass for one another if you asked a user.
I am really happy for you guys, Microsoft is a real big name compared to EngineYard, and while I could get behind EY+Deis, it's a hard sell for the Design Review Board. But I can tell them "look, Microsoft is doing this now" and they will know what that means instantly. Big guns. No joke software.
This is how I've actually felt about Deis from the beginning, but now it's going to be a much easier sell to get Big Wigs to sign off on. Nobody ever got fired for buying Microsoft!
Your announcement to End-of-Life Deis v1 came what seemed like days before CoreOS announced their decision to kill Fleet. So, not like there's anything you could have done about it, save deciding to pick up supporting Fleet for yourselves.
(And I like fleet, but I understand thoroughly why it was a good decision for CoreOS and for Deis to end support for it. It was a wholly inferior solution, begging for a replacement.)
Distilling my previous comment, it's really this.
I navigated the waters of Fleet and Deis v1 to find an answer to "how can I make sure this does not go down with lots of lead time and half a dozen warnings well in advance before it does go down." (Aside, my datacenter at the time had famously reliable electricity on two grids that just does not go down.)
Now I have to renegotiate that position to get the same guarantees with Kubernetes. Before, I was worried about maintaining etcd quorum when N machines go down, and preventing split-braining. Now, I'm still worried about those things, but they're behind an abstraction layer of Kubernetes API and new suite of tools for managing it.
I'm not expected to manage my etcd quorum in the same way. I am sure that's good news. Or I am still expected to manage the etcd quorum, but it's buried behind a mound of Kubernetes, so it's hardly even clear that there is etcd running at all on a basic cluster without multi-master. You couldn't have a Deis v1 cluster without running at least 3 instances of etcd; you were forced right away to get to know those failure modes.
If it's not clear yet, I am a really small-time consumer of high availability.
The new system also forces many best practices on me, in ways I'm not accustomed to. (More not bad things...) I was once advised to split my control plane from my data plane (and possibly also my routing plane) back in Deis v1, to ensure reliability. I never got around to it with Deis v1... but in Kubernetes it's already been done for me by kubeadm.
I had a 5-node Fleet cluster with 5 etcd members that was "all control plane all the time" inside what amounts to a single AZ, and it's pretty clear that this is a totally wasteful design now. You wouldn't build a k8s cluster with 5 masters and no minions. But what's 40GB of RAM in a private data center? To ensure reliable service, sure, we had that kind of RAM just lying around. With Deis v1 we did exactly that. For a cluster the size of mine, I'd say it was a thoroughly researched and well-advised decision... at the time.
It's clearer to me from talking with you that, from a code perspective, there is just way too much code in common to call it a brand new product. For a cluster admin who doesn't get very deep into the code though, I feel like it's a much easier case to make that it is in fact a distinct and new product. The constraints have all shifted, and the guarantees are in quite a lot of cases not the same.
This all amounts to basically hand waving though, and I'll reiterate I am not passing judgement on the progression of Deis/Workflow and its place in the container ecosystem.
It's been a wild ride! Thanks for bringing along a community with you. The marching forward and continued progress of free software such as Deis and Kubernetes is a thing to behold.
I'm not sure how I feel about that statement.
PS: I'll take this opportunity to shamelessly promote my web based Deis UI: http://github.com/olalonde/deisdash
How often is a company, in effect, acquired twice?
Let the PaaS arms race begin.