Hacker News
Deis to Join Microsoft (deis.com)
464 points by gabrtv on Apr 10, 2017 | 106 comments



Congrats to Gabe and the whole Deis team on the acquisition.

For folks not familiar with Helm, it's basically apt-get for Kubernetes, but with the ability to deploy complex multi-tier systems. It has now graduated out of the Kubernetes incubator.

And their Workflow product (also open source) is basically the smallest piece of software that lets you run Heroku buildpacks on top of Kubernetes. So, you can get a 12-factor PaaS workflow, and still have the full Kubernetes API underneath if and when you need it.
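To give a flavor of that 12-factor workflow, a typical interaction with the deis CLI looks roughly like this (the app name and config values here are made up for illustration):

```sh
# Hypothetical app; Workflow detects the language via buildpacks.
deis create my-app                            # register a new app on the cluster
git push deis master                          # build + deploy via buildpack
deis config:set DATABASE_URL=postgres://...   # 12-factor config as env vars
deis scale web=3                              # scale the web process type
```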

Update: And I left out my all-time favorite piece of marketing collateral, their Children's Illustrated Guide to Kubernetes (available both as children's book and video): https://deis.com/blog/2016/kubernetes-illustrated-guide/

(Disclosure: I'm the executive director of CNCF and Gabe has been a super valuable member.)


I think Helm is sold short when it's described as apt-get for kubernetes. I think it's probably substantially closer to chef/salt/puppet for k8s, with a little bit of apt-get in the form of service dependency management. Pulling down the binaries is all handled by docker, but reusing and retooling config files is where Helm's real strength is, IMO.

I use Helm on a couple different projects as an integral part of CI/CD for templating config files and Ingress resources. Coupled with a good CI/CD system (I'm using GitLab), Helm templating is pretty critical in my workflow for creating demo applications (one per PR) at some specific endpoint/dns but then after integration, moving the updated code out to multiple DNS endpoints, etc.
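As a concrete (hypothetical) example of that kind of templating, a per-PR Ingress template might look like this, with the host and path supplied per branch via `--set` -- the `.Values` key names are illustrative, not the poster's actual setup:

```yaml
# templates/ingress.yaml -- a Helm template; the .Values keys are
# hypothetical names, injected per-PR by CI via `helm ... --set`.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: {{ .Release.Name }}
spec:
  rules:
    - host: {{ .Values.ingress.host }}     # e.g. pr-42.demo.example.com
      http:
        paths:
          - path: {{ .Values.ingress.path }}
            backend:
              serviceName: {{ .Release.Name }}
              servicePort: 80
```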

Congrats to the team, and I look forward to seeing where Helm can go from here!


Have you written about your setup anywhere?

I'm right now working on porting my manual manifests + kubectl k8s workflow to helm + gitlab-ci (to automatically build and push images and then deploy to the cluster), but I'm only at the stage of creating the helm charts, so I'm very interested in any literature on how to set up a CI to build the Docker images, push them, bump the chart version, and deploy to a staging environment for every PR, and so on.


We use Helm together with GitLab's CI as well. To make that process smoother, we built landscaper [1]. It receives a desired state, which is a bunch of yamls; each yaml contains a chart reference and values (settings). When landscaper runs during CI, it obtains the actual state (the releases) and creates, updates, or deletes releases to reach the desired state. The desired state is under version control, so lately I rarely do any manual k8s work. Creating or updating a deployment is a matter of a pull request.

[1]: https://github.com/Eneco/landscaper
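To illustrate the idea (the exact schema is landscaper's own, so treat the field names below as approximate), each desired-state yaml pairs a chart reference with values along these lines:

```yaml
# Approximate sketch of one desired-state file: a chart reference
# plus the values (settings) for that release. App details are made up.
name: my-api
release:
  chart: our-repo/my-api:0.3.1
configuration:
  replicaCount: 3
  image:
    tag: "1.4.2"
```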


This is great, we'll discuss integrating this with GitLab https://gitlab.com/gitlab-org/gitlab-ce/issues/30748


Well this is awesome. Thanks for posting.


I haven't, but I've thrown together a gist [1] of the most-relevant configuration. In this case it's a pretty simple static page hosted on an nginx container, but the approach should scale to multiple services if I ever started using proper Helm dependency management. If you have any questions let me know, and I can also comment the gist a bit more as I get time today.

Admittedly, I'm probably only using 50% of Helm's capabilities, because I'm primarily interested in its templating abilities rather than managing a centralized repo of apps that can be installed.

My current pattern (which I'm still iterating on) is that I have a `chart` directory which contains the Helm chart and templates, and TBH I don't really bother updating the version in `Chart.yaml` because I'm managing it in-repo with git tags, branches, etc. Then my gitlab CI scripts just `--set` a few variables so that the right docker image, DNS, and ingress path values are used for the branch.
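That deploy step might look roughly like this (the value names and DNS scheme are illustrative; the `$CI_*` variables are GitLab CI's built-ins):

```sh
# Per-branch deploy: the release name and values are derived from the
# branch, so each PR gets its own endpoint. Value keys are hypothetical.
helm upgrade --install "app-$CI_COMMIT_REF_SLUG" ./chart \
  --set image.repository="$CI_REGISTRY_IMAGE" \
  --set image.tag="$CI_COMMIT_SHA" \
  --set ingress.host="$CI_COMMIT_REF_SLUG.demo.example.com"
```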

[1] https://gist.github.com/andrewstuart/8006a6f39ce5cb3fff7211e...


This is exactly the pattern that we are experimenting with. Glad to see that we're not alone in thinking that this is much better than the centralized chart repo way.

This does mean that we're only using a small portion of Helm, and I wonder if a simpler tool wouldn't work just as well. All you need is templating, really.


Templating is a good start; here are a few other things of value that Helm provides in this context: release management with upgrade/rollback, dependency management, the ability to create common charts and bring your own containers, and the testing framework for testing deployments on Kubernetes. I recently did a session at KubeCon about all of these use cases if you're interested in taking a look: https://youtu.be/cZ1S2Gp47ng Happy to chat more if you're interested.


Release management is desirable, but Helm's version-number-oriented release management is a bad match for Kubernetes and git. We don't deal with versions; we use commit IDs, and we use Github.

I also don't understand why Helm requires "packaging" and "publishing" anything into a repo. If I commit the chart files to a Github repo, why can't Helm just go there and get them? I shouldn't need to run an HTTP server to serve some kind of special index. Git is a file server.

What I do like is the idea of Helm as a high-level deployment manager. Kubernetes deployments have nice rollback functionality, but you can't roll back multiple changes to things like configmaps and services, which I think Helm does? But Helm still wants those numbered versions...

In my opinion, Helm ought to ditch the "package management for Kubernetes" approach, and instead focus on deployment. Right now it feels like it's straddling the line between the two, and coming up short because of that. Perhaps what I want should be a separate tool.

Edit: Two more things: First, I don't like how Helm abuses the term "release". To me, and also the rest of the world, a release is a specific version of an app. You upgrade to a release, you don't upgrade a release. I think you should rename this to something like "deployment", "installation", "instance", "target" or similarly neutral.

Also, I have to say that templating ends up being a bit boilerplatey. There are things you typically want parameterized across all apps — things like the image name/tag, image pull policy, resources, ports, volumes, etc. The fact that every single chart has to invent its own parameters for this — which means your org has to find a way to standardize them across all apps — isn't so cool.
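One mitigation (not something Helm enforces; this is purely a convention sketch) is an org-wide values.yaml shape that every chart agrees to honor, so a single CI script can set the same keys uniformly across apps:

```yaml
# Hypothetical org-wide values convention: every chart reads these
# same keys, so CI can --set them identically for every app.
image:
  repository: registry.example.com/my-app
  tag: latest
  pullPolicy: IfNotPresent
resources:
  requests:
    cpu: 100m
    memory: 128Mi
service:
  port: 80
```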

Edit: One more thing: Release names seem to be global across namespaces. That very much violates the point of namespaces. Name spaces. :-)


You don't have to publish the chart. Installing from a directory in a GitHub repo works fine.


But then you don't get dependency tracking, and you also have to clone the repo to install it, instead of something like "helm install github.com/foo/bar#e367f6d".


Really? I started playing around with it and it's not clear to me how to do this with helm.


Ah, I think I understand the confusion now. I guess you're wanting to do something like `helm install https://github.com/some/url` and that doesn't work. But I was assuming you were consuming the chart in the same repo, or including it via a submodule, such that the chart would be a local file reference.

Sorry for the confusion!

To use a URL, I think you'd either have to commit the packaged chart as a .tgz file in your repo (which is annoying), or package up the chart and create a GitHub release with the resulting .tgz file to be able to reference it that way. In GitLab you might be able to point to a pipeline artifact[0] after packaging the chart using GitLab CI.
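For reference, the packaging-and-index step looks roughly like this (the output directory and URL are examples):

```sh
# Package the chart into a .tgz and (re)generate the repo index that
# `helm repo add` clients expect; serve ./public over plain HTTP(S).
helm package ./chart -d ./public
helm repo index ./public --url https://example.gitlab.io/charts
```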

In one experimental project[1], I'm using GitLab Pages to package and publish the index. It works out surprisingly well, but has some shortcomings.

[0]: https://docs.gitlab.com/ce/user/project/pipelines/job_artifa...

[1]: https://gitlab.com/charts/charts.gitlab.io


`helm install path/to/chart/directory`


You're not alone and it's great to see others thinking this way.

We're looking at making it easier for people to bring-your-own-helm-chart[1] and have GitLab deploy it. There's value in keeping the chart in the project repo, and there's value in decoupling the chart and putting it in a different repo. I'm fascinated to see which becomes best practice.

[1]: https://gitlab.com/gitlab-org/gitlab-ce/issues/29969


We considered keeping all the charts together in one repo, but keeping each chart with its project is just simpler for developers.

I can't say I'm thrilled to put deployment-specific values in the repo. Maybe the solution is to keep the chart itself, with its defaults, in the repo, and then have a configmap or third-party resource in Kubernetes that contains the values. To deploy anything, you use the Kubernetes API to find the app's values and merge them into the template.

I wouldn't be surprised if one day Kubernetes might support that kind of templating. I also wonder what kind of system Google uses internally. They have been rather silent about offering good advice here.


I hope eventually you can use environment-specific variables[1] in GitLab to manage your deploy variables.

[1]: https://gitlab.com/gitlab-org/gitlab-ce/issues/20367


Awesome! I will be reading through the gist later today when I get back to working on it.

I'm not planning on using all of helm either, and specifically, the whole chart repo feature is something that I don't see myself using at the moment.

The workflow I'm envisioning is one where I have a Project-A. Inside Project-A there is a chart directory containing the project chart, so each project contains its own chart instead of having the charts in a centralised repo. The chart will be used for deployment, but versioning will not be bumped manually; instead it will be bumped by the CI after (if) tests succeed. At that point a new docker image is built, tagged with a version bump (or build number), and pushed; then the chart is updated to use the new docker image, gets its version bumped (or build number), and is finally deployed to the staging environment. There should also be a next step for taking a chart deployed to staging and deploying it to production, but I'm not sure how that should be done yet.

I'm also not sure what the correct solution is for actually updating the chart in the repo from the CI, versus just supplying runtime values to helm as you deploy the chart that overwrite the values hardcoded in the chart. In any case, I'm envisioning a workflow where, for each PR and version tag you make to a repo, a new docker image is produced and pushed and a new chart is produced and deployed, so that you have a complete history of images and charts for each PR and version.
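That flow maps fairly naturally onto a `.gitlab-ci.yml` along these lines (the stage layout, names, and the choice to override the tag at deploy time rather than committing it back are all illustrative):

```yaml
# Illustrative pipeline: build + push an image per commit, then deploy
# the in-repo chart with the image tag overridden at deploy time.
stages:
  - build
  - deploy

build_image:
  stage: build
  script:
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA"

deploy_staging:
  stage: deploy
  environment: staging
  script:
    - helm upgrade --install myapp-staging ./chart --set image.tag="$CI_COMMIT_SHA"
```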


Yeah, I'd think you'd want to take advantage of specifying the image tag as a variable rather than committing the image tag back to the repo. Kind of defeats the point of helm to hard-code everything. :) Only bump the chart version when the configuration itself (outside of the image tag) changes.


Ah, yes, that makes sense!


Skimming through the comments here, so I apologize if you are already aware of this, but please feel free to drop by the #helm channel on the Kubernetes Slack with any questions. :)


I've been meaning to make a blog post about doing this on k8s.camp. I'll try to get to it this week/weekend and I'll let you know. I use helm to deploy several hundred services and microservices (stateless, stateful, etc).

This gist of mine may help you get started with the CI portion (CI and k8s is interesting; there's a lot you need to account for, as you're not given errors if a pod fails to deploy, etc.). It could easily be converted to helm upgrade instead of raw kubectl - I've kept it kubectl so people know what's going on.

https://gist.github.com/mikejk8s/0f805c3e7d0704cbea63db846a0...


Here's something I've put together. Hopefully it helps. https://youtu.be/NVoln4HdZOY


Ha, I just watched that. Someone posted it to me on the #helm channel on the Kubernetes Slack just after I had watched your kubecon talk. Needless to say, I've watched your croc-hunter pipeline doing its thing a few times now.


How does it compare to DC/OS on Mesos? Interested in hearing from people experienced in both.


The Children's Illustrated Guide to Kubernetes is pretty to look at, but it's terrible as a guide. I previously commented as to why: https://news.ycombinator.com/item?id=11927711


So Deis PaaS is dead. Is this an acqui-hire to get Kubernetes onto Azure?

https://deis.com/blog/2017/deis-paas-v1-takes-a-bow/


Brendan Burns (k8s co-founder) joined Microsoft last July, so this is certainly not their first move in the Kubernetes area.


Definitely not dead. Actively developed and worked on. The release schedule has continued with an unbroken chain of monthly stable releases, while v1 platform was sunset with loads of advance warning time.

I'm hoping the monthly "Town-Hall" style Zoom meetings will continue, but if so I will also expect them to likely start getting much larger quickly, now with this news.


Support for v1 was dropped earlier this year. v2 is still actively developed and worked on. It's even mentioned in the blog post.

Obligatory disclaimer that I'm an engineer @ Deis working on Workflow daily.


Oh wow, the CIGtK is really awesome.


The industry is consolidating around the Kubernetes ecosystem. This acquisition is an example of many others that will follow as the major players want to build up their offerings and expertise.


Every time someone is acquired by Microsoft I can't avoid feeling sad for them.

It's true they'll get a decent amount of money, that from now on they'll have infinitely deep pockets, and that they'll have some of the best keyboards and mice, but it's also true their wiki will end up in SharePoint and their e-mails in Exchange.


I wonder what the ramifications will be?

Possibilities off the top of my head:

- Tighter integration of Helm, Workflow and Steward into Azure Container Service seems like an obvious one.

- Integration of Helm into Visual Studio Team Services?

- Option to deploy your app from Visual Studio to Azure Container Service

- Better container tools for Azure CLI


And what's the fate of their source control system?


Deis mostly has open source tooling on GitHub, I would expect it to stay that way. They have delivered many community projects that made it to https://github.com/kubernetes/ and https://github.com/kubernetes-incubator.


Microsoft is aligning on Git, so it's unlikely that anything will change (and the horror stories of the past are no longer applicable for new acqui-hires).


Vanilla git can't handle a repo the size of Microsoft's combined codebase though; surely there must be some additional secret sauce involved?



A lot of development around Azure tools is done in the open under various "groups" -- MicrosoftDx, Azure, etc -- there is no one "source repo" for all products. Even when I was at Skype, the repo for that was separate from that for S4B, etc -- I'm sure this has changed in the last few years. But, there isn't a monolithic repo.

Something like Office or Windows, well, that's outside my wheelhouse.


Well I'd suggest "repo" -- the tool that goes with Gerrit, and collects many git repos into reviewable release units with a manifest. That's what Android does.

(But I'd guess that's probably enough to shoot it down as a tool for Microsoft to ever use, right there. Not rightfully, but likely in fact... maybe that's the old Microsoft though.)


There is no combined Microsoft codebase


From the cited Microsoft link:

> "the Windows codebase has over 3.5 million files and is over 270 GB in size."


That's just Windows. There's no combined codebase akin to Google's.


TFS+git would be my guess.


Wait what? Didn't the Deis team just join EngineYard last year? Furthermore, why would Microsoft want Deis? They haven't really shown an interest in Kubernetes thus far.


> They haven't really shown an interest in Kubernetes thus far

Except for helping with Windows Server support for kubernetes[0] and supporting kubernetes in Azure Container Services[1]

[0]: http://blog.kubernetes.io/2016/12/windows-server-support-kub...

[1]: https://azure.microsoft.com/en-us/blog/kubernetes-now-genera...


And hiring the founder away from Google (Brendan Burns).


Well Microsoft did hire one of the Kubernetes Founders last year (http://www.crn.com/news/cloud/300081316/microsoft-hires-goog...) and have been working on getting Kubernetes running on Azure, so it doesn't surprise me that they'd want to strengthen that effort.


It's reported elsewhere[1] that Deis is being acquired from EngineYard.

[1] https://techcrunch.com/2017/04/10/microsoft-acquires-contain...


My stint in computers started back in the DOS days, diverged into *nix, OS/2, and other operating systems, but I also knew way too damn much about Windows platforms. By nature I was a BSD guy, and moved to Linux for some things. I've worked for MSFT for 6 years now.

When the Skype buyout was announced, my friends called and laughed at me. A tech evangelist working for Microsoft at the time, whom I ran across while in Stockholm on business, was super stoked.

I know and lived those days, and I see the stories about the various "reporting back" -- that is outside my purview -- but when it comes to engagement with the OSS community, including Kubernetes, some simple searching will reveal just how much we've been involved.

One engineer working for the Azure Container Service team personally wrote most of the Azure Cloud Provider. An engineer from Red Hat contributed the initial persistent volume on Azure support. An engineer in DX/TED is helping improve persistent volume support in 1.5.3 and 1.6. These things are easily revealed by looking at GitHub PRs and other activity.

It's easy to post the "knee-jerk" post, I know I see something and think such, but sometimes a bit of searching will reveal surprises.


I am confused by this as well. I thought Deis was part of EngineYard. Can someone clarify? And does this affect dokku by any chance?


> And does this affect dokku by any chance?

No effect on dokku. They are separate projects managed by separate entities.


Deis gave sponsorship money to Jeff Lindsay at some point to work on things that benefitted the docker paas ecosystem - think registrator or herokuish. At no point did any entity named "Dokku" get any money out of this, and Deis being acquired has no bearing on the project whatsoever.

We do, however, wish them luck and are happy to see them succeed.

- Source: I am one of the primary Dokku maintainers.


I think EngineYard is going elsewhere- Microsoft only acquired the Deis part of the business.


Well they've gotten kubernetes running on azure.


Yes you can run Azure Container Service using Kubernetes. When we migrated to Azure, we went with this and it worked reasonably well. However, they're using Ubuntu as their base OS, which we weren't the biggest fans of.

Instead, we switched to running normal VMs using CoreOS and running Kubernetes on top of that. Same features and stability, but with the auto-update benefits of CoreOS.



We literally e-mailed you asking for this last month. Fantastic.


Out of curiosity, why'd you choose Azure over GKE?


Cold hard cash. Our Azure credits have a longer expiry date.


Makes sense. Any regrets?


Based on my experience, there will be a migration to GCE when the credits run out and you total up the difference in costs. Azure is fine, but can't compete on price. (We migrated from AZ to GCE & K8S.)


Probably. But first we have to build the business up so we can afford to pay server costs before we start worrying about Azure vs. GCE.


I'm really curious how people administer their K8S clusters for installation, upgrades, etc.

I'm very familiar with docker, which we've been using for over 2 years. But now we're trying to get k8s running with kargo, kubeadm, deb packages, etc.: they all failed with different bugs on different clouds/settings. (Trying to stick to running it on Ubuntu xenial.)

Not sure if it's because 1.6.* just came out of the oven when I started...?

Thanks to Minikube, I understand how powerful k8s can be, and I actually find kubectl quite simple to use, but I'm confused by how fragile and complex installation and setup seem to be. I'm unsure how someone is supposed to maintain this system, considering how (overly?) modular it is and the bugs I've encountered. Knowing that docker has a LOT of bugs, and that k8s builds on top of it, I'm a bit scared. And there is no clean documentation on how to install it, with sections for all your choices, in a generic/agnostic way (deb+rpm distros, cloud integration or simple abstract VMs, ...).

What is your workflow? :)


We are using https://github.com/openshift/openshift-ansible which gets you a k8s cluster + user management and a great web interface. I actually learned a lot about cluster management from that repo since it goes through all the standard best practices.


We (ReactiveOps.com) use kops: https://github.com/kubernetes/kops

It works well with Ubuntu and we are working on CoreOS (although no promises it gets merged back into core).

Kops can do upgrades although historically it was safer/easier to stand up another cluster alongside and migrate.

Happy to help if you have any questions - Matt [at] reactiveops.com.


Use CoreOS, use kubeadm. Knowledge of Go is necessary (I hope this will change, as Go's readability is ... very lacking).

You have to understand that there are still a lot of unfinished features (for example, almost no real documentation) and a lot of operational aspects left uncovered (persistent volumes backed by local disk, for example, for running software that needs low-latency I/O, e.g. DB servers).

The general installation flow is to beat it into submission: drag the thing kicking and screaming into a cluster until it forms a quorum (etcd, apiserver + controller-manager + cloud-stuff, scheduler, kubelets), and don't forget about the overlay network. And that's it: if it works, even ugly, it works. As you say, there are too many bugs in docker/rkt (OCI, libcontainer, the container filesystem problems: overlayfs, aufs, btrfs, devicemapper, AppArmor and SELinux label issues, and other Linux kernel related issues), and in Kubernetes itself too, and then there's the whole networking layer/aspect, still very much in flux.
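For comparison, the kubeadm path compresses that component list into a few commands (kubeadm was still alpha around the time of this thread; the flannel manifest URL is just one example of an overlay network add-on):

```sh
# On the master: bootstraps etcd, apiserver, controller-manager and
# scheduler as static pods, and prints a join token.
kubeadm init --pod-network-cidr=10.244.0.0/16

# Apply an overlay network add-on (example: flannel).
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

# On each worker node, using the token printed by `kubeadm init`:
kubeadm join --token <token> <master-ip>:6443
```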

But it's usable, because it's "antifragile", so if it can reach a working state, you can be pretty confident that it'll be able to reach it again if you add more nodes, nodes crash, load fluctuates, updates happen, deployments happen, etc.


I agree with k8s not being environment agnostic. It's most of the way there, but your decision to run on GCP, AWS, or some other platform can play a large role in decision making and k8s adoption. I feel like right now development is weighted toward making k8s more friendly on GCP, probably because the GCP team is working really hard and using that momentum (and a lot of open source / conference presence) to keep that advantage going forward.

I think minikube is a crucial early step in allowing at least agnostic development, where a "professional" can then ease it into a cloud provider by turning knobs here and there. It'll probably stay that way for a while, but we'll see.


I really hope Microsoft doesn't hurt this. For example, there are great docs for AWS and Google, but given the way they've ruined Skype, I really hope they don't turn this into some kind of Azure-focused system while dropping support for AWS, etc. Congrats to the awesome Deis team -- let's just hope that Microsoft doesn't just run it into the ground when it comes to non-Azure platforms.


Congratulations Gabriel! Having worked on containers space in Microsoft Azure before, my opinion is that this is a great move by Microsoft. In the past years, I've seen the company struggle in finding great talent in OSS/Linux stack. Simply, there are a lot of areas Microsoft could expand, but there is not enough talent. Deis will definitely take a ton of expertise in open source software and community to Microsoft. Now that Kubernetes is a big part of Azure’s container service, Deis brings a lot of fresh blood to Microsoft. I hope it works out great for both companies (and the open source Kubernetes community).


I'm somewhat surprised. Why would Microsoft put weight into Deis vs. using something like Kubernetes or Mesos? I haven't kept up with Deis's growth and I'm obviously very happy for them, but I'm curious what the gain is. Based on HN posts and other devops forums, Kubernetes seems to have gained a lot of momentum recently.


Best to think of Deis as a Kubernetes company. We are much more than the PaaS solution many folks know us for.


Congratulations! There are few Kubernetes experts on the market right now. Deis is certainly a smart acquisition for Microsoft (along with hiring Brendan Burns).



Both of these links are the same?


Fixed the links ;)


Though that doesn't make the statement less true.


We have customers running DCOS workloads as well as Kubernetes workloads on Azure. Some use ACS, some use bare VMs (be it ubuntu, CoreOS, etc). Some use DCOS enterprise from the market place.

Just from my perspective, having sat down and done a hackfest with guys from Deis in Redmond, they bring a phenomenal amount of experience in Kubernetes to Microsoft. We have a number of people that work in the area (contributing helm charts, other Pos, etc), but more knowledge in an interesting and growing area is always great to have.

Additionally, Workflow is a great way to get started on containerized apps and it works quite well atop ACS.

If people are doing interesting things on Azure (ACS/acs-engine or even VMs) using Kubernetes or DCOS, I'd love to hear about it.


Microsoft, like Google, is working with several opensource platform providers.

Because they're selling time on an IaaS. They don't have to pick winners when they present a fungible pool of resources.

By way of analogy, BP and Shell don't care where their petrol gets burnt. Ford, GM, it's all the same to them.

Disclosure: I work on such a platform, Cloud Foundry, on behalf of Pivotal. We have a close working relationship with both Microsoft and Google.


This IS Microsoft investing in Kubernetes. They also have partnership with DC/OS.

So it is Kubernetes AND Mesos for them.


What are some good devops forums? I remember trying to find them a year ago, and I didn't seem to be looking in the right places


Deis has changed a lot since I last looked at them. They've dropped their original PaaS and developed an ecosystem on top of Kubernetes. Microsoft has been showing a lot of interest in Docker, so can see why this acquisition would make sense for them.


> They've dropped their original PaaS

Just to clarify, we never dropped the original PaaS. Deis (now named Workflow) is still in active development, uses Kubernetes as the underlying scheduler and has monthly releases. We actually just released v2.13.0 5 days ago. :) https://github.com/deis/workflow


In a sense it is the same product (for the users, Deis Workflow / Deis v2 is basically a drop-in replacement for Deis v1, and API Clients such as Deis Dash http://deisdash.com/ can work with both). But in a very real sense also, the old product was End-of-Lifed and Workflow is a completely separate and different product.

There is no direct upgrade path from v1 to v2, the branding was changed (new product name entirely) to coincide with the release of v2, and the v1 LTS branch is no longer receiving updates, support, or new builds when issues are identified.

It's kind of like the axe that is passed down from generation to generation for 150 years. Can't really call it the same axe anymore when you've replaced the blade and the handle several times over.

(This coming from a happy Deis user that still has living installations of both v1 and Workflow.)

Congratulations on the acquisition!


> the old product was End-of-Lifed

To be fair, we did continue to support Deis v1 for 3 whole years(!), which is very long considering we're just a small startup. Being woken at 3AM by yet another obscure etcd/fleet server failure really sucked, and systemd never truly got along well with Docker, making for some fun interactions with Fleet. Overall (speaking personally as an engineer and support engineer), we are very happy we made the decision to switch to Kubernetes. Mind you, we were as early an adopter as you could get with Fleet and etcd at the time, and etcd in particular has since gotten significantly better for us in terms of stability/error reporting.

> There is no direct upgrade path from v1 to v2

I agree that it sucks there was no upgrade path from v1 to v2, but we felt it was necessary to make breaking changes to move forward from Fleet into the world of Kubernetes. That doesn't change the fact that we never dropped the PaaS product as a whole, though.

> the branding was changed (new product name entirely) to coincide with the release of v2

This was actually more to do with us becoming "Deis the company" than with the v2 release. Lots of users were getting confused between "Deis the company" and "Deis the github project", so we decided to rename it Workflow to help make it more clear in conversation.

> and Workflow is a completely separate and different product.

Curious to understand how you feel like v2 is a completely different product. From a user's standpoint the product offering never changed. The API, CLI and `git push` workflows were all still present in v2 and were drop-in replacements, save for backwards-incompatible database migrations (hence v2.0.0). It was just the administration's point of view that changed (Fleet -> Kubernetes, deisctl -> helm). To me it still feels like the same product, but I'm curious to hear from you why you feel differently. :)


A huge kudos for keeping up your legacy support for 3 years! Making needed but intrusive architectural changes is hard for any open source project, and gets harder when you have paying customers. You typically never get praise for doing the right thing...thank you.


Please do not take anything I said as an attempt to throw shade or being critical of your team! As a non-paying customer, I have to say I'm extremely happy with the support you've delivered (and consistently.) I wish I could have got my company to throw money at you, but it did not work out.

The kind of support I got from Deis the company is really without comparison when it comes to Open Source projects anywhere else.

The fact that v2 is a drop-in replacement for v1 really eases the sting of the fact that there is no direct upgrade path. I still have an old Deis v1 kicking around because I left the company, transitioned to hourly, and made an agreement to draw down my hours at this company, where we are in reality hardly using Deis at all. But in the small capacity we are using it, there is what I'd call "VMware Levels of Reliability" and so it successfully became a piece of the infrastructure there.

So it is disingenuous for me to say that I was not able to upgrade my v1 installation. The reason I was not able to upgrade is because there was not strong interest in upgrading. The unsupported product is as good as our (also unsupported! but luckily not End-of-Life) VSphere and VSan environment.

It is a different product to me, in short because I am the one administering it. It runs on a different platform altogether, it has no distributed filesystem component where the old version did, and it has not really harmed me in any way that there is no upgrade path. It is just a couple of facts that led me to the conclusion that they are in fact distinct and other products that are not directly related to each other, except that they could easily pass for one another if you asked a user.

I am really happy for you guys. Microsoft is a real big name compared to Engine Yard, and while I could get behind EY+Deis, it's a hard sell for the Design Review Board. But I can tell them "look, Microsoft is doing this now" and they will know what that means instantly. Big guns. No joke software.

This is how I've actually felt about Deis from the beginning, but now it's going to be a much easier sell to get Big Wigs to sign off on. Nobody ever got fired for buying Microsoft!


Not to mention (but I'll mention)

Your announcement to End-of-Life Deis v1 came what seemed like days before CoreOS announced their decision to kill Fleet. So, not like there's anything you could have done about it, save deciding to pick up supporting Fleet for yourselves.

(And I like fleet, but I understand thoroughly why it was a good decision for CoreOS and for Deis to end support for it. It was a wholly inferior solution, begging for a replacement.)


If it's not totally clear, "VMware Levels" is also meant as a compliment!


> The unsupported product is as good as our (also unsupported! but luckily not End-of-Life) vSphere and vSAN environment.

> It is a different product to me, in short because I am the one administering it. It runs on a different platform altogether

Distilling my previous comment, it's really this.

I navigated the waters of Fleet and Deis v1 to find an answer to "how can I make sure this does not go down, and get lots of lead time and half a dozen warnings well in advance if it ever does." (Aside: my datacenter at the time had famously reliable electricity on two grids that just does not go down.)

Now I have to renegotiate that position to get the same guarantees with Kubernetes. Before, I was worried about maintaining etcd quorum when N machines go down, and preventing split-braining. Now, I'm still worried about those things, but they're behind an abstraction layer of the Kubernetes API and a new suite of tools for managing it.

I'm not expected to manage my etcd quorum in the same way. I am sure that's good news. Or I am still expected to manage etcd quorum, but it's buried behind a mound of Kubernetes, so it's hardly even clear that there is etcd running at all on a basic cluster without multi-master. You couldn't have a Deis v1 cluster without running at least 3 instances of etcd. You were forced right away to get to know those failure modes.
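For anyone who never had to think about it: the failure modes here come down to simple quorum arithmetic. A quick sketch (plain Python, nothing Deis- or Kubernetes-specific) of why v1 forced you to run at least 3 etcd members:

```python
# etcd (like any Raft-based store) needs a majority of members alive
# to accept writes; losing that majority is the split-brain / outage
# scenario discussed above.

def quorum(members: int) -> int:
    """Smallest majority of an etcd cluster of the given size."""
    return members // 2 + 1

def fault_tolerance(members: int) -> int:
    """How many members can fail before the cluster stops accepting writes."""
    return members - quorum(members)

# With 1 or 2 members a single failure kills the cluster; 3 is the
# smallest cluster that survives one failure, 5 survives two.
for n in (1, 2, 3, 5):
    print(f"{n} members: quorum {quorum(n)}, tolerates {fault_tolerance(n)} failure(s)")
```

Which is also why adding a 4th member buys you nothing over 3: quorum rises to 3, so you still only tolerate one failure.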

If it's not clear yet, I am a really small-time consumer of high availability.

The new system also forces many best practices on me, in ways I'm not accustomed to. (More not-bad things...) I was once advised to split my control plane from my data plane (and possibly also my routing plane) back in Deis v1, to ensure reliability. I never got around to it with Deis v1... but in Kubernetes it's already been done for me by kubeadm.
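For context on how kubeadm does that split: as I understand it, it applies a NoSchedule taint to the masters so ordinary pods only ever land on workers. A rough sketch (commands against a hypothetical cluster; the node name is a placeholder):

```shell
# Bring up the control plane on the master; kubeadm taints it by default
# so the control/data plane split falls out of the defaults.
kubeadm init

# Inspect the taint that keeps application pods off the control plane
# (typically shows something like node-role.kubernetes.io/master:NoSchedule):
kubectl describe node <master-node-name> | grep -i taint
```

So the separation I was advised to do by hand in v1 is just the out-of-the-box topology now.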

I had a 5-node fleet cluster with 5 etcd members that was "all control plane all the time" inside of what amounts to a single AZ, and it's pretty clear that this is a totally wasteful design now. You wouldn't build a K8S cluster with 5 masters and no minions. But what's 40GB of RAM in a private data center? To ensure reliable service, sure, we had that kind of RAM just lying around. With Deis v1 we did exactly that. For a cluster the size of mine, I'd say it was a thoroughly researched and well-advised decision... at the time.

It's clearer to me from talking with you that, from a code perspective, there is just way too much code in common to call it a brand new product. For a cluster admin who doesn't get very deep into the code though, I feel like it's a much easier case to make that it is in fact a distinct and new product. The constraints have all shifted, and the guarantees are in quite a lot of cases not the same.

This all amounts to basically hand waving though, and I'll reiterate I am not passing judgement on the progression of Deis/Workflow and its place in the container ecosystem.

It's been a wild ride! Thanks for bringing along a community with you. The marching forward and continued progress of free software such as Deis and Kubernetes is a thing to behold.


I think he is just referring to your Deis v1 PaaS, before you started on Workflow and k8s.


That is true; though technically the Deis v1 PaaS still lives on as Workflow. Internally the v1 PaaS bits are all there in Workflow, just switching out Fleet for Kubernetes. It was just renamed when we became "Deis the company" rather than "Deis the project". I just wanted to clarify that we never "dropped" doing the PaaS thing.


"Microsoft has a storied history of building tools and technologies that work for developers."

I'm not sure how I feel about that statement.


Does this mean a commitment by Microsoft to the k8s ecosystem, or simply a talent acquisition, or both?


They've just hired one of the founders of Kubernetes, so my guess would be yes: https://www.onmsft.com/news/google-engineer-kubernetes-found...


Congrats to the Deis team! I was an early adopter and the team was just fantastic at answering questions, implementing feature requests and resolving issues on GitHub.

PS: I'll take this opportunity to shamelessly promote my web based Deis UI: http://github.com/olalonde/deisdash


Congrats on the acquisition, I'm just sad I can't be there to congratulate you.


What!? I never would have expected this, quite a surprise. I hope it works out for them.


If Deis was that valuable, one could assume that EY agreed to this arrangement because they are low on cash and needed the money.

How often is a company, in effect, acquired twice?


Love Deis, we have used V1 and V2 (with kubernetes) at my current job with success, but also had weird stability and reliability issues from time to time.


The Deis team was extremely helpful over IRC when I was building on top of their PaaS. Great team and culture, Microsoft is lucky to have them.


Well, that's definitely one way to buy yourself into the Kubernetes ecosystem. Congratulations to the Deis team!


Congratulations to the Deis Team. It's been a pleasure working with them on the Helm project.


Next week, Flynn to Join Apple.

Let the PaaS arms race begin.



