Our charts are at the following locations, if you want to take a look:
- https://github.com/sapcc/kubernetes-operators in the "openstack-operator" directory
- https://github.com/sapcc/openstack-helm and https://github.com/sapcc/helm-charts (two different repos, since we're in the middle of modularizing the original monolithic OpenStack chart in the first repo into per-service charts in the second one)
An example command:
cat service.yml | kexpand expand -v image-tag=git-135afed4 | kubectl apply -f -
An engineer only has to create the service.yml, and Jenkins deploys it automatically on every master build.
*kexpand is a small tool that does something similar to sed, but in a simpler and less powerful way (keep it simple): https://github.com/kopeio/kexpand
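For a feel of what this kind of expansion does, here is a minimal Python sketch of sed-style placeholder substitution. This is an illustration, not kexpand itself; the `{{name}}` placeholder syntax and the `registry.example.com` image are assumptions for the example.

```python
import re

def expand(template, values):
    """Replace {{name}} placeholders with the given values (kexpand-style)."""
    return re.sub(r"\{\{([\w-]+)\}\}", lambda m: values[m.group(1)], template)

# Hypothetical manifest fragment; the registry host is made up.
manifest = "image: registry.example.com/app:{{image-tag}}\n"
print(expand(manifest, {"image-tag": "git-135afed4"}))
# image: registry.example.com/app:git-135afed4
```

The whole point is that anything fancier than this probably belongs in a real templating tool.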
There are lots of answers here that aren't helm, so I'm curious if there are any particular reasons that people ruled out helm?
I think that for third-party packages and related templating (which seems like the original use-case) it works well, but I would be wary of using it for high-res deploys of our own stuff.
I do use it for things like nginx ingress but for stuff I've built a service.yml / deployment.yml are fine.
I use pykube (also worth looking at the incubator project client-python) to write deploy scripts in Python; client-python is particularly nice as it uses OpenAPI specs to give you type hinting on the API objects/fields that you're writing. Much more civilized than typing bare yaml into a text editor.
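A minimal sketch of what such a deploy script can look like with pykube. The manifest-building helper and the image/name values are hypothetical; the pykube calls (`KubeConfig.from_file`, `HTTPClient`, `Deployment`, `exists`/`create`/`update`) follow pykube's documented API, but treat this as a sketch rather than production tooling.

```python
def build_deployment(name, image, replicas=2):
    """Build a Deployment manifest as a plain dict (vs. typing bare YAML)."""
    return {
        "apiVersion": "extensions/v1beta1",  # Deployment API group of that era
        "kind": "Deployment",
        "metadata": {"name": name, "namespace": "default"},
        "spec": {
            "replicas": replicas,
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {"containers": [{"name": name, "image": image}]},
            },
        },
    }

def deploy(manifest):
    """Create or update the object against the current kubeconfig."""
    import pykube  # third-party: pip install pykube
    api = pykube.HTTPClient(pykube.KubeConfig.from_file("~/.kube/config"))
    d = pykube.Deployment(api, manifest)
    d.update() if d.exists() else d.create()
```

With client-python you'd get typed model classes instead of raw dicts, which is where the OpenAPI-driven hinting pays off.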
If Python isn't your thing you can generate your own client from the OpenAPI specs, though I've found the client generation process to be a bit buggy.
One route I explored but didn't commit to was Ansible. It has a relatively good Kubernetes playbook and a facility for storing secrets. That said, every damn task needs to be pointed at the K8S API endpoint, which is not the greatest.
disclaimer: I work on openshift
Have you let anyone in? What's the value in all the marketing hype if you then don't let people in?
But if you are looking at doing self-hosted Kubernetes, you really should give OpenShift a look. It does a lot to give sane defaults for things like ingress and building containers, while simplifying the deployment story.
For just getting started and kicking the tires, you can use minishift to spin up a local VM with everything installed and configured: https://www.openshift.org/minishift/
Disclaimer: I also work for Red Hat. We're everywhere :-)
For instant hosted access, our Pro tier will let you create an account immediately if you sign up today. On our Starter (free) tier, we are working hard to provision as many new user accounts as we can (this week we were able to let in 2,000 new users). Both of these are accessible at openshift.com.
And of course, there is the possibility of hosting your own OpenShift cluster for free (the project is open source), which was more my intention of recommending a kubectl wrapper for the above poster.
If the new $50/mo Pro tier and free Starter accounts have already been advertised in any way, my experience suggests the advertising was ineffective... (just so you know)
Maybe this was done on purpose, since you can only admit a limited number of users while remaining within capacity (telling all of your old Developer Preview users they can now go ahead and hit up this new free tier all at once is perhaps a recipe for exactly the situation you're trying to avoid).
But until I found out otherwise in this thread, I was still under the impression that the only remaining ways for me to use OpenShift (after the Developer Preview ended) were to spin one up by hand and host it ourselves, or to pay something like $10,000 for a managed cluster on AWS.
Edit: I just found an announcement of the OpenShift Online Pro tier in an e-mail dated July 31. So it looks like I'm not actually too far behind the curve; it was announced, I just didn't read it...
The original OpenShift Developer Preview made you sign up, but you would be allowed into the platform within hours or days.
We will be communicating our progress more openly and consistently with those on the waitlist. We really do appreciate your patience and are working hard to get people onboard.
Disclosure: I'm a PM in the Red Hat DevTools BU.
But I have been interested in OpenShift, and it is a good sign for me that so many people who obviously are involved in OpenShift are representing on this thread.
I will definitely give Openshift another look soon. We're building a group of experienced devs to help onboarding our new dev employees as they come on, and we're going to have some standard-setting capacity when it comes to showing off the tools we use.
(We are currently well behind in the K8S space imho, but getting better, and I think a more serious push for it is coming soon. Our institution is large, and like all large institutions, its Byzantine bureaucracy produces some extreme levels of inertia; we haven't really dived into containers anywhere near the level I had at my last position, which was itself only token-level. Most core infrastructure is still not containerized, or even touching any containers.)
So, blank canvas!
I'm very much hoping to have an array of tools that I can share with new devs, so they can organically decide what works best for themselves. I had kind of discounted oc, not having used it much myself, but I do remember some joy at reading the documentation: it was more centrally located and easier to "grok," as well as feeling more complete.
Anyway, both are things that will be super helpful to anyone who is new to Kubernetes, or to dev work in general. There is just so much to learn, and the ecosystem is of course constantly evolving!
I will keep OpenShift Origin in the toolbox and take another look at the Pro offering.
Thanks for introducing yourself to me!
I just checked out your pricing page and I'm 100% wrong about this. You have a $50/mo tier now! That's fantastic, thanks for pointing me at that.
You should definitely spam everyone on the OpenShift.io waiting list and let us know about your new pricing /s
No seriously, I would have liked to get some spam about this. Maybe you sent it and I missed it. I am a lot more interested at $50/mo than I would have been at $10k/yr!
We also use an internal tool that:
- maps applications to namespaces within clusters for different environments (since we have 1 cluster per environment)
- does some magic around secrets to make them easier to interface with
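The real tool is internal, but the application-to-namespace mapping it does can be sketched in a few lines. Everything here (cluster endpoints, the `team-` namespace convention) is made up for illustration:

```python
# One cluster per environment, as described above.  Endpoints are hypothetical.
CLUSTERS = {
    "staging": "https://k8s-staging.example.internal",
    "production": "https://k8s-prod.example.internal",
}

def target_for(app, env):
    """Return the (cluster endpoint, namespace) an app deploys to in an env."""
    if env not in CLUSTERS:
        raise ValueError("unknown environment: %s" % env)
    # The namespace is derived from the app name by an assumed convention.
    return CLUSTERS[env], "team-%s" % app

print(target_for("billing", "staging"))
```

Centralizing this mapping means deploy scripts never hard-code a cluster or namespace.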
I say this in spite of the fact that, as announced last week, the next release of Deis Workflow will be the last (under the current stewardship, and probably under that name).
It's just such a solid system that I would even more strongly recommend the Deis v1 PaaS (already EOL'ed early last year), except that you've already indicated you're moving to K8S, and Deis v2 is designed for K8S. I still recommend the v1 PaaS to people learning about the principles of HA clusters. (Another disclosure: I have published on how to do this, a piece on building a cheap HA cluster using the Deis v1 PaaS.)
I have a strong suspicion that Deis will live on after March under stewardship of new leadership from the community.
In the meantime, you have roughly 6 months of support from Microsoft. Maybe it's overstating it to say they have committed to keeping the lights on for that long, but they have committed to merging critical fixes for that long (and we hope that in 6 months, Kubernetes will have solidified enough that we won't have to worry too much about breaking changes from upstream release mongers anymore).
Personally I don't buy commercial support and it would not be the deal maker or breaker for me.
Even more so in a landscape that's constantly changing like Kubernetes. You have zero guarantees that it'll be maintained and will keep up with new breaking changes.
You should know how your infrastructure works well enough to maintain it for yourself. I (personally) will be maintaining this one in the future, if necessary! We're working it out now. What do you mean by "strong suspicion?"
Please don't downvote just because you read a few words you didn't like. I was upfront about this EOL date because I don't want it coming back later that I was dishonest about it. But my perception is not that "EOL" means it's dead; it's that "EOL" means it's done. Stability is a good thing. Microsoft also EOL'ed MSPaint.exe, and I remember how the community reacted. The quote was something like: "works for 99% of users and has been stable for over a decade? Sounds like a good candidate for deletion!"
The project is cancelled because it's not strategically important to Microsoft, not because it's not viable or having technical issues. The core devs have chosen to work on more kubernetes-native tooling. They aren't abandoning Kubernetes, and I'll bet you don't have a competing product you can show me that has guaranteed to keep the lights on for the next 6 months.
I don't mean anything by "strong suspicions"; you do: "I have a strong suspicion that Deis will live on after March under stewardship of new leadership from the community."
It seems you're implying someone is going to pick it up and offer a level of support that will justify it as a viable option. I don't have insider information to make that judgment but maybe you do.
I'm not making any judgment on its technical merits or on the reasons that led Deis to sunset it.
If you don't look at it, you have no basis to compare it to the alternatives that you're evaluating. I would rather pick a really good solution than a supported one. If I have to pick a supported solution, then I would rather at minimum compare it to a good one, so I can be honest about what shortcomings it has. It is telling that most of the recommendations in this thread are Helm.
Helm is great, but it ain't no Deis.
Deis Workflow is rock-solid and worth evaluating, even if support is a requirement for you. Other competing solutions are not as good; I don't even know of any others that are really comparable. Maybe OpenShift, but it is not "really K8S."
Can you elaborate more? What makes you think that this is true?
The BuildConfig and ImageStream for starters.
It's not a substantive difference that makes OpenShift much harder to learn, but it is a difference that means that if Red Hat decides to 'Deis' OpenShift, we're stuck rebuilding everything for the better-supported K8S-proper mainline tree, and large parts of our tooling will need to be replaced, because Kubernetes does not do ImageStream, and OpenShift does not do Helm.
Maybe the chances of that happening are low, but there are enough differences that from my understanding, I should not ever expect Kubernetes projects to be directly portable to OpenShift without modification (or vice-versa.)
It's also very expensive for an open-source project. Granted you are paying for support and cloud hardware, but I can take Kubernetes and spin it up anywhere. Try installing RedHat OpenShift onto arbitrary releases of CoreOS, Debian, and Amazon Linux like you might be able to do with kubeadm or kops.
That was one of the core promises of Kubernetes, to run anywhere that you can run Docker. My experiences with OpenShift were anything but that. (If I want to run OpenShift Origin, I'll be setting up a latest release of Fedora or RHEL to do it on, I guess.)
I will take Deis for my dev environments at least, because I think the chances that Kubernetes core devs will break the APIs in a way that makes it impossible for Deis to be kept alive by a ragtag bunch who figured out how the CI scripts work are pretty much nil. I can take Deis to any cloud provider on any operating system that can do Kubernetes, or onto my own metal (or on Minikube, or on Localkube, or ...)
You get the point by now... Kubernetes brings an ecosystem of options, and OpenShift narrows the scope and range of that ecosystem substantially.
When Deis v1 was EOL'ed, I got into a bit of an argument on HN with Bacongobbler about whether Deis v2 was a different product or not. I argued that it was, because it runs on a different platform now (K8S) and does not support running on the old platform anymore (Fleet).
Technically that's not true, because you can run K8S on Fleet, and Deis v2 on that. But for a sysadmin it was different: I knew the rules for making Deis v1 on Fleet "High-Availability," and the rules were all different for Kubernetes, so I argued that it was a different product.
But for a user, the APIs are all compatible, and they may even bring API integrations such as deisdash.com with them to the "new" platform. (Deisdash is the only API integration for Deis that I am aware of, but when Deis v1 was EOL and Deis v2 was production-ready, you could use Deisdash with either.)
I've now fully eschewed Deis v1 (my old place of employment still has one standing, but it runs such a small amount of infrastructure that I could replace it with a severely less sophisticated setup and nobody would notice until it failed.) I'm on Workflow now, and I have approximately no regrets about it. I can take it anywhere that I can take helm and K8s.
I'll be looking forward to seeing what the Deis/Azure team brings out in the future that will obviate the need for me to be on unsupported, EOL'ed Workflow. Because according to Deis team lead @gabrtv, they are still just getting warmed up:
It's true that OpenShift goes a lot further in disabling things that are dangerous or not ready, i.e. preventing root containers, or not enabling third-party resources until they went to beta. But everything that runs on Kube and depends on a beta feature or higher runs on OpenShift.
Re: other OSes - a large part of what we do at Red Hat is making all the other stuff work - Docker, filesystems, selinux, security, NFS, volume drivers, network, etc. A lot of times it's not worth the extra effort to track five distributions of anything, but instead to focus on making something actually work. The behind the scenes work outside of Kubernetes is just as important as the Go code, and so we focus on those few operating systems and making it all work together.
The last I heard, you just can't really use Helm on OpenShift unless you go to some lengths to lock it down to a single namespace.
It would be amazing if someone could publish a Helm on OpenShift guide! Hmm, it seems you maybe already did: https://github.com/kubernetes/helm/issues/2517
Starting with OpenShift 3.6 (on Kube 1.6) all RBAC roles between Kube and OpenShift are treated equivalently, and from OpenShift 3.7 onwards the OpenShift RBAC rules are just a compatible API shim on top of Kube RBAC. The out of the box rules on OpenShift are more restrictive simply to ensure that full multi-tenancy is possible, but they can always be lifted.
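To make the "lock it down to a single namespace" idea concrete, here is a sketch of the kind of namespace-scoped RBAC Role (built as a manifest dict, in the same spirit as the pykube scripts mentioned earlier) you might grant Tiller so Helm stays confined to one namespace. The role name, namespace, and resource list are illustrative assumptions, not what the linked guide prescribes:

```python
def tiller_role(namespace="helm-apps"):
    """Namespace-scoped Role confining Tiller (and thus Helm) to one namespace."""
    return {
        "apiVersion": "rbac.authorization.k8s.io/v1",
        "kind": "Role",
        "metadata": {"name": "tiller-manager", "namespace": namespace},
        "rules": [{
            # Illustrative; widen or narrow to match what your charts create.
            "apiGroups": ["", "apps", "extensions"],
            "resources": ["pods", "services", "configmaps", "secrets", "deployments"],
            "verbs": ["get", "list", "watch", "create", "update", "patch", "delete"],
        }],
    }
```

Bind this to Tiller's service account with a RoleBinding in the same namespace, and Helm releases can't touch anything outside it.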
(I don't know how much you've looked at Deis, but I couldn't think of anything better to compare it to than OpenShift. I could probably switch from Deis to OpenShift without too much hassle. Now I'm going to have to go ahead and try Deis _ON_ OpenShift, though :)
I don't know if you need Docker for Mac installed, but I would guess you don't (it would be crazy to try to interface with arbitrary versions of Docker; it probably runs its own Docker inside a virtualized layer).
I'm going to have to look at this again in some more depth. Looks better than when I saw it last time! (That's to be expected, I guess, but again it is encouraging!)
To be fair, I'd advise against choosing core infrastructure components based solely on a recommendation in an HN thread.
You should do your own research, obviously! I am comfortable enough with the Deis brand to say that it is a solution that does not even remotely rival the Linux Kernel in terms of complexity. Kubernetes has API stability, and you can count on APIs that are not marked "alpha" or "beta" to be around in the same form as long as it is still called Kubernetes.
Deis is made of small, totally understandable parts. It is not a monolith that you need to weigh heavily in your conscience whether to allow it into your infrastructure, in case you can't find support for it at some future date... you can mix and match components, if you find one part does something that your infrastructure was lacking!
I'd be glad to elaborate on my strong suspicions, but tl;dr: for the last few days I've been scrambling to figure out how I'll get my issues resolved, now that the Microsoft dev team I had working on them for free is going somewhere else.
I've been raising every issue I can think of so I can get eyes on it before it's too late. And so far, I don't see anything that I think I can't solve for myself. My short list of issues, I've been able to solve almost all on my own! I have a lot of knowledge about Deis, I've been around for a few years, but I am not a core dev and I have never contributed any commits.
My impression of the codebase is that it is uncompromising and extremely comprehensible. There are at least half a dozen of us who, it appears, will be sticking around. We don't want to undermine Deis and Microsoft's EOL notice, because "no breaking changes" will make our job easier as maintainers going forward. The first person to say the wrong thing may wind up responsible for a fork. It is a delicate time, but I hope to convince some people that it should not be completely discounted because of this news.
But I don't really know what's going to happen to the project. It could be that development continues on Raspberry Pi and the project loses its focus on cloud platforms, because it's cheaper to do the development on RPI. And that might be fine for cloud users (or, it could be fine until your cloud provider makes a breaking update! More likely I think than K8s breaking APIs that are marked stable.)
The kind of people who prefer to downvote you because you said something they didn't agree with are, by definition, not the kind of people who would have anything meaningful to contribute. Downvoting on HN is increasingly getting in the way of its utility. I used to come here for the interesting discussions; I don't bother that much anymore.
Drive-by downvoters definitely bother me! I try not to let it bother me though.