
We're currently looking at moving our applications to k8s, and were wondering what deployment tools people are using. This week we are evaluating Spinnaker, Helm, and bash wrappers around kubectl. There is some concern about adding too many layers of abstraction, and a feeling that KISS is the best approach.



At SAP, we're using Helm [1] to deploy OpenStack (plus a bunch of extra services like a custom Rails-based dashboard [2]) on baremetal Kubernetes. For image building, testing and deployment, we use Concourse CI [3], and OpenStack assets (like service users, service projects and roles) are deployed with a custom Kubernetes operator [4].

Our charts are at [5] if you want to take a look.

[1] https://github.com/kubernetes/helm

[2] https://github.com/sapcc/elektra

[3] https://concourse.ci

[4] https://github.com/sapcc/kubernetes-operators in the "openstack-operator" directory

[5] https://github.com/sapcc/openstack-helm and https://github.com/sapcc/helm-charts (two different repos since we're in the middle of modularizing the original monolithic OpenStack chart in the first repo into per-service charts in the second one)


We also did some evaluation and then decided to stick to KISS and chose kubectl commands combined with cat and kexpand. It's a really simple approach that allows dynamic Kubernetes deployments.

An example command is:

   cat service.yml | kexpand expand -v image-tag=git-135afed4 | kubectl apply -f -
The service.yml contains the full deployment configuration, service definition and ingress rules, so this works without preconfiguring anything in Kubernetes when deploying a new service.

An engineer only has to create the service.yml, and Jenkins deploys it automatically on every master build.

*kexpand is a small tool which does something similar to sed, but in a simpler and less powerful way (keep it simple): https://github.com/kopeio/kexpand


We basically did the same thing, except we used Ansible for our templating. This lets us store all our shared "environment configuration data" (e.g., the name of the RDS instance for services in the prod environment, or the name of the backup S3 bucket for services in the dev environment) in an Ansible role, then just pull that information into our templated deployment manifest file. So far, it's worked out pretty well for us.
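
A rough sketch of the idea (file names and variables here are made up; in practice the env-specific values come from the shared role via a playbook):

    # render the Jinja2-templated manifest with env-specific vars, then apply it
    ansible localhost -c local -m template \
      -a "src=deployment.yml.j2 dest=/tmp/deployment.yml" \
      -e "env=prod image_tag=git-135afed4"
    kubectl apply -f /tmp/deployment.yml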


I do something similar, but envsubst does the job.
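
For example (a minimal sketch, assuming the manifest references ${IMAGE_TAG}):

    IMAGE_TAG=git-135afed4 envsubst < deployment.yml | kubectl apply -f -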


+1 on envsubst, it's the minimal solution to the problem of templating Kubernetes manifests (of course YMMV, we are a small team and don't need more complex stuff).


Ditto, me too: I use it to get my git SHA into my image names and deploy with Deployments.


We (ReactiveOps) use a combination of CircleCI and some scripts wrapping kubectl https://github.com/reactiveops/rok8s-scripts


We're using helm, but that was chosen mostly based on gut feel. It's an official project, and has momentum. We didn't want to spend too much time choosing a tool until we knew our requirements, and we don't really have a firm grasp of requirements until we've used something for a while. It seems to be working well for us so far, but it's still early.
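
For what it's worth, a deploy for us boils down to something like this (chart path and values are illustrative):

    helm upgrade --install myapp ./charts/myapp --set image.tag=git-135afed4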

There are lots of answers here that aren't helm, so I'm curious if there are any particular reasons that people ruled out helm?


We ran into a number of issues with Helm when deploying: failures that forced us to roll back, with the rollbacks then failing and requiring manual changes to unblock.

I think that for third-party packages and related templating (which seems like the original use-case) it works well, but I would be wary of using it for high-res deploys of our own stuff.


Are you doing what GitLab describes below ("We (GitLab) use it mostly as a templating system as well"), or really using it to manage complex apps? I don't think it offers much for normal microservices.

I do use it for things like the nginx ingress, but for stuff I've built, a service.yml / deployment.yml are fine.


I recently spoke about the approach we use at Ticketmatic: https://rocketeer.be/articles/coreos-fest-2017/


That must have been an awesome talk. Thanks for the write-up!


If you're using GitLab, consider using Auto Deploy. Our CTO recently made a quick start guide for it: https://docs.gitlab.com/ee/ci/autodeploy/quick_start_guide.h...


FWIW, the next version of GitLab's Auto Deploy will use Helm under the hood (and let you bring-your-own-chart).


A lot of folks are using Helm, but I find it very opaque to debug when templates go wrong (and I feel quite strongly that we shouldn't be writing untyped templates for our models). Also I found writing reusable spec components to be very difficult, e.g. a reverse proxy that I add to a number of pods.

I use pykube (also worth looking at the incubator project client-python) to write deploy scripts in Python; client-python is particularly nice as it uses OpenAPI specs to give you type hinting on the API objects/fields that you're writing. Much more civilized than typing bare yaml into a text editor.

If Python isn't your thing you can generate your own client from the OpenAPI specs, though I've found the client generation process to be a bit buggy.


I've got a simple setup that makes use of YAML files, Rake tasks, and raw kubectl. I've yet to take a look at helm or spinnaker but it's on my list. You really can go a long way just with K8s' own tooling.


I started by creating Rake tasks. Then I found https://github.com/CommercialTribe/psykube which is opinionated. In the end I created my own simple Ruby tool to manage Kubernetes with my own directory structure and configuration.


We've written a bunch about Spinnaker & K8s at http://go.Armory.io/kubernetes ; hope that's helpful!


I feel you. About a month ago I was fighting with the same feeling. In the end, I decided to use Kubernetes only for a single piece of infrastructure, so it's all pretty manageable through scripts. Managing secrets in particular is a pain in the ass.

One route I started checking but didn't commit to was using Ansible. They have a relatively good Kubernetes playbook and a facility to store secrets. That said, every damn task needs to be pointed to the K8S API endpoint, which is not the greatest.


Agree - we've been talking about how we can more natively tie the inventory into clusters, contexts, and apps. The host focus of Ansible doesn't always map to other domains, but I think it has a real chance with Kube.


I'd recommend looking into OpenShift. It's basically kubectl plus cool deployment features. There are also free, paid, and dedicated online hosted options.

Disclaimer: I work on OpenShift.


You had a big announcement about OpenShift.io. Everyone on HN signed up, but it's been months and I'm still 'awaiting approval'.

Have you let anyone in? What's the value in all the marketing hype if you then don't let people in?


Remember that OpenShift.io is just one (newer) way to work with OpenShift, currently focused on deploying to the hosted OpenShift Online.

But if you are looking at doing self-hosted Kubernetes, you really should give OpenShift a look. It does a lot to give sane defaults for things like ingress and building containers, while simplifying the deployment story.

For just getting started and kicking the tires, you can use minishift to spin up a local VM with everything installed and configured: https://www.openshift.org/minishift/
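
Kicking the tires looks roughly like this (from memory; check the docs for the exact steps):

    minishift start                                            # boot a local VM running OpenShift Origin
    eval $(minishift oc-env)                                   # put the bundled oc client on your PATH
    oc new-app https://github.com/openshift/ruby-hello-world   # build and deploy a sample app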

Disclaimer: I also work for Red Hat. We're everywhere :-)


I can't speak much for OpenShift.io, which is a different product; as another commenter mentioned, it's just one way to work with OpenShift, with added team-based features on top.

For instant Online hosted access, our Pro tier will immediately allow you to create an account if you sign up today. On our Starter (free) tier we are working hard to provision as many new user accounts as we can (this week we were able to allow in 2,000 new users). Both of these are accessible at openshift.com

And of course, there is the possibility of hosting your own OpenShift cluster for free (the project is open source), which was more my intention when recommending a kubectl wrapper to the above poster.


When the Developer Preview of OpenShift Online closed, the next news I heard from OpenShift was about OpenShift.io. Before and after that, I have only received comms about webinars from the mailing list, and I've been watching for news that hosted options were coming back.

If the new $50/mo Pro tier and free Starter accounts have already been advertised in any way, my experience should tell you that it was ineffective... (just so you know)

Maybe this was done on purpose, because you can only allow a limited number of users in while remaining within capacity (so telling all of your old Developer Preview users they can now go ahead and hit up this new free tier all at once is maybe a recipe for exactly the situation you're trying to avoid).

But until I found out otherwise in this thread, I was still under the impression that the only remaining ways for me to use OpenShift (after the Developer Preview was ended) were to spin one up by hand and host it ourselves, or to pay something like $10,000 for a managed cluster on AWS.

Edit: I just found an announcement of OpenShift Online Pro Tier in an e-mail dated July 31. So it looks like I'm not actually too far behind the curve; and it was announced, I just didn't read it...


Seconded. The last e-mail I got from OpenShift.io indicated they hadn't let anyone in. I've already practically lost interest. It was evidently only "coming soon", but that announcement really looked like "coming tomorrow."

The original OpenShift Developer Preview made you sign up, but you would be allowed into the platform within hours or days.


OpenShift.io is pre-beta, but we have begun onboarding in order to get early feedback. So far ~400 people are in, and we are slowly adding more as we stabilize and build out the product and underlying SaaS infrastructure.

We will be communicating our progress more openly and consistently with those on the waitlist. We really do appreciate your patience and are working hard to get people onboard.

Disclosure: I'm a PM in the Red Hat DevTools BU.


I don't mean to spout off in a public forum; don't think I don't understand how much work is involved in putting out something that we're not going to be disappointed with!

But I have been interested in OpenShift, and it is a good sign for me that so many people who obviously are involved in OpenShift are representing on this thread.

I will definitely give OpenShift another look soon. We're building a group of experienced devs to help onboard our new dev employees as they come on, and we're going to have some standard-setting capacity when it comes to showing off the tools we use.

(We are currently well behind in the K8s space IMHO, but getting better, and I think a more serious push for it is coming soon. Our institution is large, and like all large institutions its Byzantine bureaucracy results in some extreme levels of inertia; we haven't really dived into containers anywhere near the level I had at my last position, which itself was only token adoption: most core infrastructure is still not containerized or even touching any containers.)

So, blank canvas!

I'm very much hoping to have an array of tools that I can share with new devs, so they can organically decide what works best for them. I had kind of discounted oc, based on not having used it much myself, but I do remember some joy at reading the documentation: it was more centrally located, easier to "grok", and felt more complete.

Anyway, both things will be super helpful to anyone who is new at Kubernetes or dev stuff in general. There is just so much to learn, and the ecosystem is of course constantly evolving!

I will keep OpenShift Origin in the toolbox and take another look at the Pro offering.

Thanks for introducing yourself to me!


OpenShift.io is a different product from hosted OpenShift on its own. If you were to sign up for the Pro tier at openshift.com, your account would be immediately provisioned. We are working on provisioning enqueued users for our Starter tier as well, but again, these are separate from OpenShift.io.


Yeah, OpenShift Pro is also quite expensive... starting at $10k for 5 nodes, right?

I just checked out your pricing page and I'm 100% wrong about this. You have a $50/mo tier now! That's fantastic, thanks for pointing me at that.

You should definitely spam everyone on the OpenShift.io waiting list and let us know about your new pricing /s

No seriously, I would have liked to get some spam about this. Maybe you sent it and I missed it. I am a lot more interested at $50/mo than I would have been at $10k/yr!


We wrote an internal tool that wraps Helm and GPG. But we're really using Helm as a glorified templating system; since we deploy from git, Helm's release management is useless to us, and is even somewhat in the way. We might decide to drop Helm at some point, I think.


We went down a similar path and ended up using helm-template [0] to render our helm charts without tiller.
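
Rendering and applying looks roughly like this (chart path and flags are illustrative; check the plugin's README for exact options):

    # render the chart locally with no tiller, then pipe the manifests to kubectl
    helm template ./charts/myapp --set image.tag=git-135afed4 | kubectl apply -f -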

We also use an internal tool that:

- maps applications to namespaces within clusters for different environments (since we have 1 cluster per environment)

- does some magic around secrets to make them easier to interface with

[0] https://github.com/technosophos/helm-template


That's a good point. We (GitLab) use it mostly as a templating system as well. It's a step up from piping `sed` output to `kubectl`. But we have our own tools for managing redeploys and rollbacks.
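
For context, the `sed` approach I mean is roughly this (placeholder name made up):

    sed "s/__IMAGE_TAG__/git-135afed4/" deployment.yml | kubectl apply -f -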


We are using ecfg (from Shopify) and Jenkins + kubectl. We use Ansible for a couple of things, but that's largely only because of some constraints of our architecture for a legacy monolith.


You should take a look at Deis Workflow.

I say this in spite of the fact that, as announced last week [1], the next release of Deis Workflow will be the last (under the current stewardship, and probably under that name).

It's just such a solid system. I would even more strongly recommend the Deis v1 PaaS (already EOL'ed early last year [2]), except that you've already indicated you're moving to K8S, and Deis v2 is designed for K8S. I still recommend the v1 PaaS for people learning about the principles of HA clusters. (Another disclosure: I have published [3] a piece on how to build a cheap HA cluster using the Deis v1 PaaS.)

I have a strong suspicion that Deis will live on after March under stewardship of new leadership from the community.

In the meantime, you have roughly six months of support from Microsoft. Maybe it's overstating it to say that they have committed to keeping the lights on for that long, but they have committed to merging critical fixes for that long (and we hope that in six months, Kubernetes will have solidified enough that we don't have to worry too much about breaking changes from upstream release mongers anymore).

Personally I don't buy commercial support and it would not be the deal maker or breaker for me.

[1]: https://deis.com/blog/2017/deis-workflow-final-release/#futu...

[2]: https://deis.com/blog/2016/deis-1-13-lts/

[3]: https://deis.com/blog/2016/cheapest-fault-tolerant-cluster-d...


I'd advise against choosing core infrastructure components (that have a clear EOL deadline) based on strong suspicion.

Even more so in a landscape that's constantly changing like Kubernetes. You have zero guarantees that it'll be maintained and will keep up with new breaking changes.


You know it's open source, right? I have zero guarantees that any of my open source projects that I use for business critical infrastructure aren't going to pack up shop and quit maintaining their stuff tomorrow.

You should know how your infrastructure works well enough to maintain it for yourself. I (personally) will be maintaining this one in the future, if necessary! We're working it out now. What do you mean by "strong suspicion?"

Please don't downvote just because you read a few words you didn't like. I was upfront about the EOL date because I don't want it coming back later that I was dishonest about it, but my perception is not that "EOL" means it's dead; it's that "EOL" means it's done. Stability is a good thing. Microsoft also EOL'ed MSPaint.exe, and I remember how the community reacted. I think the quote was something like "works for 99% of users and has been stable for over a decade? Sounds like a good candidate for deletion!"

The project is cancelled because it's not strategically important to Microsoft, not because it isn't viable or is having technical issues. The core devs have chosen to work on more Kubernetes-native tooling. They aren't abandoning Kubernetes, and I'll bet you don't have a competing product you can show me that is guaranteed to keep the lights on for the next six months.


Unfortunately, neither it being open source nor its technical prowess is a sufficient reason for some people. That's a simplistic analysis. Most people using Linux don't know the kernel code "well enough" to maintain it in the face of hardware changes and other external requirements.

I don't mean anything by "strong suspicions", you do: "I have a strong suspicion that Deis will live on after March under stewardship of new leadership from the community."

It seems you're implying someone is going to pick it up and offer a level of support that will justify it as a viable option. I don't have insider information to make that judgment but maybe you do.

I'm not making any judgment on its technical merits or the reasons that led Deis to sunset it.


OK, fair. It's still worth a look! That was my point. If you compare it to other alternatives and find it to be the best thing, it would be a shame to put it on the shelf instead when there are no alternatives that are as technically strong.

If you don't look at it, you have no basis to compare it to the alternatives that you're evaluating. I would rather pick a really good solution than a supported one. If I have to pick a supported solution, then I would rather at minimum compare it to a good one, so I can be honest about what shortcomings it has. It is telling that most of the recommendations in this thread are Helm.

Helm is great, but it ain't no Deis.

Deis Workflow is rock-solid and worth evaluating, even if support is a requirement for you. Other competing solutions are not as good; I don't even know of any others that are really comparable. Maybe OpenShift, but it is not "really K8S".


> Maybe OpenShift, but it is not "really K8S"

Can you elaborate more? What makes you think that this is true?


It has incompatible resource types. They built OpenShift to handle authorization and permissions before Kubernetes RBAC was fully baked. So there are OpenShift solutions that don't exist on K8S, and vice-versa.

The BuildConfig and ImageStream for starters.

It's not a substantive difference that makes OpenShift much harder to learn, but it is a difference that means if Red Hat decides to 'Deis' OpenShift, we're stuck rebuilding everything for the better-supported K8S-proper mainline tree, and large parts of our tooling will need to be replaced, because Kubernetes does not do ImageStream, and OpenShift does not do Helm.

Maybe the chances of that happening are low, but there are enough differences that from my understanding, I should not ever expect Kubernetes projects to be directly portable to OpenShift without modification (or vice-versa.)

It's also very expensive for an open-source project. Granted, you are paying for support and cloud hardware, but I can take Kubernetes and spin it up anywhere. Try installing Red Hat OpenShift onto arbitrary releases of CoreOS, Debian, and Amazon Linux like you might be able to do with kubeadm or kops.

That was one of the core promises of Kubernetes, to run anywhere that you can run Docker. My experiences with OpenShift were anything but that. (If I want to run OpenShift Origin, I'll be setting up a latest release of Fedora or RHEL to do it on, I guess.)

I will take Deis for my dev environments at least, because I think the chances that the Kubernetes core devs break the APIs in a way that makes it impossible for Deis to be kept alive by a ragtag bunch that figured out how the CI scripts work are pretty much nil. I can take Deis to any cloud provider on any operating system that can do Kubernetes, or onto my own metal (or on Minikube, or on Localkube, or ...)

You get the point by now... Kubernetes brings an ecosystem of options, and OpenShift narrows the scope and range of that ecosystem substantially.

When Deis v1 was EOL'ed, I got into a bit of an argument on HN with Bacongobbler about whether Deis v2 was a different product or not. I argued that it was, because it runs on a different platform now (K8S) and does not support running on the old platform anymore (Fleet).

Technically not true because you can run K8S on Fleet, and Deis v2 on that. But for a sysadmin, it was different, because I knew the rules about making Deis v1 with Fleet "High-Availability" and the rules were all different for Kubernetes, so I argued that it was different.

But for a user, the APIs are all compatible, and they may even bring API integrations such as deisdash.com with them to the "new" platform. (Deisdash is the only API integration for Deis that I am aware of, but when Deis v1 was EOL and Deis v2 was production-ready, you could use Deisdash with either.)

I've now fully eschewed Deis v1 (my old place of employment still has one standing, but it runs such a small amount of infrastructure that I could replace it with a severely less sophisticated setup and nobody would notice until it failed.) I'm on Workflow now, and I have approximately no regrets about it. I can take it anywhere that I can take helm and K8s.

I'll be looking forward to seeing what the Deis/Azure team bring out in the future that's going to obviate the need for me to be on unsupported, EOL'ed Workflow. Because according to Deis team lead @gabrtv, they are still just getting warmed up:

https://twitter.com/gabrtv/status/891096179089342464


> Maybe the chances of that happening are low, but there are enough differences that from my understanding, I should not ever expect Kubernetes projects to be directly portable to OpenShift without modification (or vice-versa.)

It's true that OpenShift goes a lot further in disabling things that are dangerous or not ready, e.g. preventing root containers, or not enabling third-party resources until they went to beta. But everything that runs on Kube and depends on a beta feature or higher runs on OpenShift.

Re: other OSes, a large part of what we do at Red Hat is making all the other stuff work: Docker, filesystems, SELinux, security, NFS, volume drivers, networking, etc. A lot of the time it's not worth the extra effort to track five distributions of everything; instead we focus on making something actually work. The behind-the-scenes work outside of Kubernetes is just as important as the Go code, so we focus on those few operating systems and making it all work together.


The fact is, most Kubernetes projects I know of are installed with Helm, and someone (it might have been you, personally) explained to me that Helm is incompatible with a multi-tenant environment. I think they've made some strides since RBAC has gotten a little more polished... but please correct me if I'm wrong and the OpenShift permissions model and RBAC are more compatible than I think.

The last I heard, you just can't really use Helm on OpenShift unless you go to some lengths to lock it down to a single namespace.

It would be amazing if someone could publish a Helm on OpenShift guide! Hmm, it seems you maybe already did: https://github.com/kubernetes/helm/issues/2517


Helm isn't incompatible, it's just not currently set up for dealing with different tenants. You can use Helm in a single tenant fashion on OpenShift just like you can use it in a single tenant fashion on Kube today.

Starting with OpenShift 3.6 (on Kube 1.6) all RBAC roles between Kube and OpenShift are treated equivalently, and from OpenShift 3.7 onwards the OpenShift RBAC rules are just a compatible API shim on top of Kube RBAC. The out of the box rules on OpenShift are more restrictive simply to ensure that full multi-tenancy is possible, but they can always be lifted.
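
A rough, untested sketch of that single-tenant setup (namespace and account names are illustrative, and the exact role you grant depends on what the charts need):

    # create a project and a service account for tiller
    oc new-project helm-demo
    oc create serviceaccount tiller -n helm-demo
    # give the service account admin rights within that single namespace only
    oc policy add-role-to-user admin -z tiller -n helm-demo
    # install tiller into that namespace and use it from the helm client
    helm init --tiller-namespace helm-demo --service-account tiller
    helm install stable/redis --tiller-namespace helm-demo --namespace helm-demo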


Awesome. This makes me feel more optimistic about OpenShift, especially given that I probably can't realistically take Deis Workflow to production now.

(I don't know how much you've looked at Deis, but I couldn't think of anything better to compare it to than OpenShift. I could probably switch from Deis to OpenShift without too much hassle. Now I'm going to have to go ahead and try Deis _ON_ OpenShift, though :)


Hmm. It looks like [1] I'm wrong about one more thing, that being where you can run OpenShift. The current installation docs describe a single all-in-one binary that you can use to run OpenShift Origin on any current Linux kernel, or on a Mac:

[1]: https://docs.openshift.org/latest/getting_started/administra...
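
If I'm reading the docs right, it boils down to something like:

    oc cluster up    # spins up an all-in-one local OpenShift Origin cluster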

I don't know if you need to have Docker for Mac installed, but I would guess you don't (it would be crazy to try to interface with arbitrary versions of Docker, it probably runs its own docker inside of a virtualized layer.)

I'm going to have to look at this again in some more depth. Looks better than when I saw it last time! (That's to be expected, I guess, but again it is encouraging!)


> I'd advise against choosing core infrastructure components ...

To be fair, I'd advise against choosing core infrastructure components based solely on a recommendation in an HN thread.

You should do your own research, obviously! I am comfortable enough with the Deis brand to say that it is a solution that does not even remotely rival the Linux Kernel in terms of complexity. Kubernetes has API stability, and you can count on APIs that are not marked "alpha" or "beta" to be around in the same form as long as it is still called Kubernetes.

Deis is made of small, totally understandable parts. It is not a monolith whose admission into your infrastructure you need to weigh heavily on your conscience in case you can't find support for it at some future date... you can mix and match components if you find one part does something your infrastructure was lacking!

I'd be glad to elaborate on my strong suspicions, but tl;dr the last few days I've been scrambling to figure out how I'll get my issues resolved, now that the Microsoft dev team I had working on them for free is going somewhere else.

I've been raising every issue I can think of so I can get eyes on them before it's too late. And so far, I don't see anything that I think I can't solve for myself. Of my short list of issues, I've been able to solve almost all on my own! I have a lot of knowledge about Deis, and I've been around for a few years, but I am not a core dev and I have never contributed any commits.

My impression of the codebase is that it is uncompromising and extremely comprehensible. There appear to be at least half a dozen of us who will be sticking around; we don't want to undermine Deis and Microsoft's EOL notice, because "no breaking changes" will make our job easier as future maintainers. The first person to say the wrong thing may wind up responsible for a fork. It is a delicate time, but I hope to convince some people that the project should not be completely discounted because of this news.

But I don't really know what's going to happen to the project. It could be that development continues on Raspberry Pi and the project loses its focus on cloud platforms, because it's cheaper to do the development on RPI. And that might be fine for cloud users (or, it could be fine until your cloud provider makes a breaking update! More likely I think than K8s breaking APIs that are marked stable.)


I'm still waiting for anyone who downvoted to tell me what better alternatives to Deis Workflow exist. I would rather have a better solution than a supported one.


> I'm still waiting for anyone who downvoted to tell me what better alternatives to Deis Workflow exist

The kind of people that prefer to downvote you because you said something they didn't agree with are by definition not the kind of people that would have anything meaningful to contribute. Downvoting on HN is increasingly getting in the way of the utility of HN. I used to come here for the interesting discussions. I don't bother that much anymore.


Yeah I see a lot of downvotes, but I must be getting almost as many upvotes because I don't seem to be gray text...

Drive-by downvoters definitely bother me! I try not to let it bother me though.



