Talk of a Split from Docker (thenewstack.io)
323 points by iamthemuffinman on Aug 30, 2016 | 204 comments



I doubt they would ever consider it, but I think Docker Inc's best move would be to push reset for Docker 2.0:

- Fully embrace Kubernetes for orchestration

- Drop Swarm

- Roll Docker Engine back to its pre-1.12 scope

- Get on board with standardizing the image format, now

- Stop fighting Google and instead let them help you succeed

A Docker distro of Kubernetes would do very well in enterprise on-prem or private cloud environments. They already have a great developer experience. Companies will pay for support on both.

Continuing to oppose Kubernetes risks damaging the significant brand equity they've accrued as containers in production become mainstream.


Sadly, the current situation seems to be optimal for Docker inc, even if it is not optimal for users of Docker.

K8s commoditizes docker. For example, I am running a GKE k8s cluster and I only use docker directly to take a Dockerfile and produce an image. I push images to Google's replacement for Docker Hub, Google Container Registry. I run things locally via minikube. If GKE were using something custom to run images, I would not even notice. I am close to the point where switching from docker to rkt would be barely noticeable.
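For the curious, that workflow looks roughly like this (a sketch; the project and image names are hypothetical, and the exact gcloud invocation may vary):

```shell
# build the image locally -- the only direct use of docker
docker build -t gcr.io/my-project/my-app:v1 .

# push to Google Container Registry instead of Docker Hub
gcloud docker -- push gcr.io/my-project/my-app:v1

# run it on the cluster (GKE, or minikube locally)
kubectl run my-app --image=gcr.io/my-project/my-app:v1
```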


OT: any chance you have done or will do a write-up on your setup and process? It sounds interesting!


This is pretty much what you would end up with when following official google cloud tutorials. Some time ago I started with https://cloud.google.com/python/django/container-engine. I still run django based on this tutorial in addition to a few more things.

Some things like minikube are somewhat bleeding edge, so maybe they are worth describing.


Unfortunately there is no way to justify a unicorn valuation if all that you have as an asset is an open source file format.

Consider Vagrant -- like Docker, it's essentially a wrapper around other people's facilities and libraries. Mitchell and co appear to be trying to escape the trap by moving into nearby areas of real value, but there's no meaningful way to monetize Vagrant no matter how popular it is.


> Unfortunately there is no way to justify a unicorn valuation

Then it is indeed unfortunate that Docker is building infrastructure code which is tainted by the game theory surrounding the need for a unicorn valuation. Maybe that's the problem, instead of Docker itself.


That is exactly the problem. Linux containerisation is done by building tooling that uses kernel-level features. You don't need a start-up with millions in VC funding to implement the basic feature-set. At the other end of the complexity scale, both flatpak and systemd-nspawn are more specialist containerisation tools, which are each largely the product of just one person.


Docker itself is part of that problem, not separate to it. They willingly accepted almost $200 million in venture capital. We all know what the VC game is. If they can't turn that $200M into a $5B+ exit, they lost.


> If they can't turn that $200M into a $5B+ exit, they lost.

Well, the VCs lost. Presumably, the people at Docker -- including the people who would do fantastically well with an exit that also made VCs happy -- are drawing paychecks, in some cases substantial ones, and will have in many cases done tolerably well (rather than have "lost") if no VC-friendly exit ever occurs.


Do you remember when open source meant people just working on stuff for fun and trying to provide free as in speech alternatives to pay crap?

I feel like this is just another step into the corporate OSS world, which is a far departure from what a lot of the original OSS architects envisioned.

Trying to monetize OSS often leads to rushed features and bad decisions like in most traditional non-OSS/enterprise products. If you strip off the polish and the fancy website, you should still have a usable, well documented thing that people need. That may also be a product you can sell support for or host, but it doesn't have to be.

Unfortunately, there aren't as many grants and not as much developer free time to go around anymore.


I'm not sure OSS ever meant that:

Many people believe that the spirit of the GNU Project is that you should not charge money for distributing copies of software, or that you should charge as little as possible—just enough to cover the cost. This is a misunderstanding.

Actually, we encourage people who redistribute free software to charge as much as they wish or can. If a license does not permit users to make copies and sell them, it is a nonfree license. If this seems surprising to you, please read on.

from:

https://www.gnu.org/philosophy/selling.en.html


> Actually, we encourage people who redistribute free software to charge as much as they wish or can.

What they don't mention is that, when every recipient of the software is free to redistribute it and compete against you, as much as you can rapidly becomes zero (or, at most, just enough to cover the cost of the most-efficient distributor), since free competition drives prices down to the marginal production cost.


Except of course, that the business model of free software is generally support-license based. Yes, someone can take your software and provide support instead of you but they usually are not as experienced as you with what you created. If they are, you should hire them.

Companies make money charging for free software all the time, and I really wish we would stop having to go through this discussion every time that free software shows up as a point. Companies have been making money from free software for more than 20-30 years at this point, and none of it required taking away the freedom of their end-users.


> Except of course, that the business model of free software is generally support-license based

It's charging for support (not "support-license", since the support contract isn't a license when the software is actually delivered under a Free license), because that's an alternative to charging for software; practically, you can't charge for Free software, for the reason discussed in the GP.

> Companies make money charging for free software all the time

No, they don't. They often make money charging for ancillary services related to Free software, which their involvement in contributing to the Free software may have positioned them to provide at an advantage to competitors, but not many make money charging for Free software.


> It's charging for support (not "support-license", since the support contract isn't a license when the software is actually delivered under a Free license), because that's an alternative to charging for software; practically, you can't charge for Free software, for the reason discussed in the GP.

The industry calls them support licenses (since generally you only support X machines running your software -- though of course you can have alternative models), so that's what I'm going to call them.

As for not being able to charge for Free Software: it is true that unless you have some value-add (preinstalling, or burning to physical media) then yes, you're going to have trouble selling the bits that make up a piece of software. But then again, why is that model taken as being the "right model", with the free software model being the "alternative"? In fact, many proprietary companies have the same model (Oracle will charge you for support too). How is it a better model that you buy a set of bits and that's all you get (no promise of updates, no support if something breaks, nothing other than the 1s and 0s that make up version X.Y of the binaries you bought)? In fact, I'm having trouble thinking of which companies have such a model, because it's so user-hostile (even for proprietary software).


OSS is a corporate-friendly fork of Free Software like GNU

https://www.gnu.org/philosophy/open-source-misses-the-point....


> Do you remember when open source meant people just working on stuff for fun and trying to provide free as in speech alternatives to pay crap?

OSS is a path bound to failure.

On the one hand, going OSS means the project will never have the resources because it ain't charging money. On the other hand, it will never be able to charge money because people don't want to pay for something that's advertised as free. That's a vicious circle.

The average non-crap software is taking astronomical effort to make and it's getting harder and more complex over time. A bunch of guys working for free in their spare time just don't have the resources to keep up.

P.S. When in doubt, remember that the difference between the great pyramid of Kheops and Microsoft Windows, is that Windows took more effort to make.


> because it ain't charging money.

Free software refers to freedom, not price[1]. Companies have been making money selling free software for more than 20-30 years at this point (even free software that is developed in the open).

[1]: https://www.gnu.org/philosophy/free-sw.html


> Free software refers to freedom,

Philosophically speaking, maybe. In practice, though, free software is considered because of pricing, not freeness.

Bring a PAID open-source product in for comparison in front of your directors and/or fellow engineers. I've done that many times, with subscriptions ranging from $1000/year to $3M/year.

Even the most devoted open-source aficionado will quickly reveal that he doesn't care about open source. All he thinks about is pricing.


I gladly pay for free software, and donate to several groups of developers developing free software. Maybe I'm in the minority, but at least I stick to my principles -- free software for me is about freedom (I contribute to many projects that I use every day, which I wouldn't be able to do without free software). Price is irrelevant.

In addition, I work for a free software company -- everything I work on is done in the open and all of our customers have freedom when they use our software. So I really do practice what I preach, and encourage others to follow suit.


> open source meant people just working on stuff for fun and trying to provide free as in speech alternatives to pay crap?

You're mixing quite a few things. "Open source" never meant that, and was always geared towards corporations. Free software does mean providing "free as in speech" alternatives to things, but it never meant stopping people from charging users (you having to "pay crap").


I wonder if the Docker Hub could be their monetization vehicle. They become the "github" of the container world.

It may not justify the valuation they want.. which is another issue.


I believe that their business model is providing a proprietary fork of the Docker engine (along with a bunch of other proprietary tooling) to enterprises as well as support for it. I won't lie, it really bothers me that code I've helped write is being used in proprietary software.


Then don't contribute to projects whose license you don't agree with?


Personally I make all of the projects I create GPLv3-or-later. However, certain projects are of such significance to the free software community that I consider the benefits to free software to outweigh the potential drawbacks of proprietary forks. Unfortunately, it still makes me unhappy to see people taking my work and forking it under a proprietary license.


> I won't lie, it really bothers me that code I've helped write is being used in proprietary software.

Then you'd better not contribute to anything that's not exactly GPL.


Or AGPL. This is a fair point, and I make all of the code I write GPL. Unfortunately, there is only one containerisation project under the GPL, and even then only its core (LXC is [L]GPL but LXD is not). It's quite worrying that so few Golang projects (because for some reason every containerisation project is written in Golang, seemingly in anger) are under the GPL.


Make it AGPL. The GPL can be circumvented by slapping an HTTP interface on the logic, which is how Google has gotten away with keeping so much private.


I've written about this before:

http://penguindreams.org/blog/the-philosophy-of-open-source-...

TL;DR is that we have a lot of corporate open source now for infrastructure, but the golden age OSS end products people hoped for in the 90's/00's never really happened.


Note: "Open Source" does not have a philosophy[1]. The free software movement does (it has a philosophy based on software freedom), but the "open source" movement does not discuss any of the ethical reasons why users should have freedom (it only discusses practical issues). Personally I think that the "open source" split is one of the reasons that the state of free software is so worrying today.

[1]: https://www.gnu.org/philosophy/open-source-misses-the-point....


That's a great point. The thing is, I don't see them as getting serious about it. For example, quay.io provides better functionality around registry. Docker Hub is "good enough" and gets traction from the sheer fact that Docker runs it.


As the proverb goes, "Choose, or someone will choose for you."

rkt is getting into Kubernetes as an alternative runtime engine:

http://thenewstack.io/kubernetes-chief-we-back-docker-althou... (2015)

http://blog.kubernetes.io/2016/07/rktnetes-brings-rkt-contai... (2016)

Back then it was, "It's sad to see so much drama; Docker engine has much more traction." Remember all of that?

The story is now, "hey, rkt lets us test out our framework for supporting alternative runtime engine".

There's a shifting attitude there. Goodwill from the community is eroding. I don't know how long this window will remain open, but it won't remain open indefinitely.


I think there's also momentum to add runc as an alternative runtime engine, which makes sense.

Also, Mesos has started shipping its own container runtime.

This feels like an inflection point in the history of containers.


> I think there's also momentum to add runc as an alternative runtime engine, which makes sense.

Actually, we're working on getting generic OCI runtimes supported (runC being an example of such a runtime). In principle, you should be able to swap out the OCI runtime and still get Kubernetes to work with that runtime. It's an exciting time. :D


Yes, I think the 'Not invented here' syndrome has reached epidemic proportions - Especially within open source communities where a lot of people create stuff from scratch just for the fun of it.

When I read the article announcing Docker v1.12, it was riddled with enthusiasm - That's a red flag - This rarely happens in our industry.

Working on open source is not like working in a startup; it's not like riding a roller coaster; it's more like riding the metro - You spend most of your time underground and when you finally arrive at your destination, the slow ascent up the escalator is rarely that exciting.


>A Docker distro of Kubernetes would do very well in enterprise on-prem or private cloud environments. They already have a great developer experience. Companies will pay for support on both.

Some early data here: https://twitter.com/dberkholz/status/720686568671940608


Interesting thing about that survey is that it covers PaaS solutions (Cloud Foundry, Kubernetes, etc) on top of OpenStack. Other folks seem to be going in the other direction using PaaS to deploy OpenStack. https://tectonic.com/openstack/ http://www.tcpcloud.eu/en/blog/2016/06/27/making-openstack-p...


One word for you: Investors

They could never do that and expect to survive as a company, as it would be leaving money on the table, so to speak.


[flagged]


Except, of course, that Google engineers probably have some experience with scalability. Not to mention that the current Kubernetes community has an incredibly open design process and is very community-oriented. So it's very far from its origins as an ex-Googler project.


Disclosure: I am one of the Google people who founded the k8s project. Product guy, not engineer though.

We are really concerned about 'writing letters from the future' to the development community and trying to sell people on a set of solutions that were designed for internet scale problems and that don't necessarily apply to the real problems that engineers have.

I spent a lot of time early on trying to figure out whether we wanted to compete with Docker, or embrace and support Docker. It was a pretty easy decision to make. We knew that Docker solved problems that we didn't have in the development domain, and Docker provided a neat set of experiences. We had 'better' container technology in the market (anyone remember LMCTFY?) but the magic was in how it was exposed to engineers, and Docker caught lightning in a bottle. Creating a composable file system and delivering a really strong tool chain are the two obvious big things, but there were others. I remember saying 'we will end up with a Betamax to Docker's VHS' and I believe that remains true.

Having said that, there are a number of things that weren't obvious to people who weren't running containers in production and at scale. It is not obvious how containers judiciously scheduled can drive your resource utilization substantially north of 50% for mixed workloads. It isn't obvious how much trouble you can get into with port mapping solutions at scale if you make the wrong decisions around network design. It isn't obvious that label-based solutions are the only practical way to add semantic meaning to running systems. It isn't obvious that you need to decentralize the control model (a la replication managers) to accomplish better scale and robustness. It isn't obvious that containers are most powerful when you add co-deployment with sidecar modules and use them as resource isolation frameworks (pods vs discrete containers), etc, etc, etc. The community would have gotten there eventually; we just figured we could help them get there quicker, given our work and having been burned by missing this the first time around with Borg.
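A tiny manifest illustrates two of those ideas at once: labels carrying the semantics, and a sidecar co-deployed in the same pod. (A minimal sketch; the names and images here are made up.)

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
  labels:
    app: web            # labels, not hostnames, carry the meaning;
    tier: frontend      # services and controllers select on them
spec:
  containers:
  - name: app
    image: example/web:1.0          # the main workload
  - name: log-shipper
    image: example/log-shipper:1.0  # sidecar sharing the pod's network and volumes
```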

Our ambition was to find a way to connect a decade of experience to the community, and work in the open with the community to build something that solved our problems as much as outside developers problems. Omega (Borg's successor) captured some great ideas, but it certainly wasn't going the way we all hoped. Kind of classic second system problem.

Please consider K8s a legitimate attempt to find a better way to build both internal Google systems and the next wave of cloud products in the open with the community. We are aware that we don't know everything and learned a lot by working with people like Clayton Coleman from Red Hat (and hundreds of other engineers) by building something in the open. I think k8s is far better than any system we could have built by ourselves. And in the end we only wrote a little over 50% of the system. Google has contributed, but I just don't see it as a Google system at this point.


Thank you, a very insightful comment. FWIW I think you made a smart choice by leveraging docker and its ecosystem (not just the tools; the mind share and the vast amount of knowledge spread via blog posts and SO have to be considered too). While docker's technical merits over competing solutions were indeed arguable then, its UX was a game changer in this domain; everyone got the concept of a Dockerfile and "docker -d", and many embraced it quickly.

Kubernetes came around exactly at the time when people tried to start using Docker for something serious and things like overlay networking, service discovery, orchestration, and configuration became issues. For us, k8s, even in its early (public) days, was a godsend.

Also, I can confirm: the k8s development process is incredibly open. For a project of this scale, being able to discuss even low-level technical details at length and seeing your wishes reflected in the code is pretty extraordinary. It will be interesting to learn how well this model will scale in terms of developer participation eventually; I figure the amount of mgmt overhead (issue triaging, code reviews) is pretty staggering already.


Thanks for the insightful response. I didn't mean to say that the entire project came from some shot of inspiration by some Google engineer, nor do I think of Kubernetes as a "Google system" -- I'm well aware of that separation.

The main thing that I think makes Kubernetes the perfect example of a modern free software community is the open design process and how the entire architecture of Kubernetes is open to modification. You don't see that with Docker, and you don't see it with many other large projects that have many contributors.


So is there ongoing work to replace borg with K8s (afaik the adoption of omega pretty much stalled and/or died)?

I myself would feel more confident in K8s if there were usage of K8s outside of GKE and the dogfood being created were eaten by the rest of Google as well (or at least there were a publicized plan to make that happen), because otherwise it feels more like K8s is an experiment that Google folks are waiting to see play out before the rest of Google adopts it.

Thoughts?


Well, they're about a decade ahead of everyone else, at least. It's not about if you feel "tired", it just is. If you don't like it try to learn from them and do better.


Agreed! If the issue is that Google and other tech giants feel their orchestration tools are being threatened by Swarm, too bad. I fully agree with Docker Inc here. Innovation involves risks, and if you don't appreciate that, then don't use it. However, I do agree that more attention and more thorough testing should be performed before something is declared "production ready". Keep innovating, Docker!


I'm not saying those looking to fork Docker are wrong. I don't think that they are. But I think Docker's approach to Swarm is more useful than the roadmap that those organizations considering a fork wish to pursue.

Kubernetes, Mesos, etc. appear to be great orchestration tools for an organization with a few [or many] engineers dedicated to operations. They're not so great for a small team [or individuals] who are just trying to deploy some software.

As I see it, Swarm seeks to solve orchestration analogously to the way Docker seeks to solve containers. Before Docker, LXC was around and the Googles of the world had the engineer-years on staff to make containers work. Docker came along and improved deployment for the ordinary CRUD on Rails developer who just wants to go home at night without worrying about the pager going off.

To put it another way, it looks to me like the intent of Swarm is to provide container orchestration for people who don't run a data center. Like Docker, it is an improvement for those scaling up toward clusters not down from the cloud.

None of which is to say that moving fast with Swarm isn't a business strategy at Docker. There's a whole lotta' hole in the container market, and part of that is because the other organizations currently supporting development of container orchestration tools have business interests at a much larger scale... Google doesn't see a business case for pushing Kubernetes toward the Gmail end of the ease-of-use spectrum.

The desire to fork is based on the needs of the cathedral not those in the bazaar.


> Kubernetes, Mesos, etc. appear to be great orchestration tools for an organization with a few [or many] engineers dedicated to operations. They're not so great for a small team [or individuals] who are just trying to deploy some software.

In my opinion, the opposite is true about Kubernetes (though it may be true about the Mesos stack). It's a great way to reduce the need for dedicated ops engineers.

Once you have a Kubernetes cluster running, it's essentially a "platform as a service" for developers. They can deploy new pods and configure resources all with a single client. You generally no longer need to think about the servers themselves. You only need to think in terms of containers. No package management, no configuration management (in the Salt/Puppet/Ansible/Chef sense), no Unix shell access needed.
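Concretely, a developer's whole interaction can look something like this (a sketch with hypothetical image names, using kubectl subcommands from roughly this era):

```shell
kubectl run api --image=example/api:2.3 --replicas=3   # schedule 3 pods as a deployment
kubectl expose deployment api --port=80                # give it a stable service IP
kubectl set image deployment/api api=example/api:2.4   # rolling update to a new image
```

No SSH, no package manager, no config management run; the API server is the only interface.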

It's hugely liberating to treat your cluster as a bunch of containers on top of a virtualized scheduling substrate.

Kubernetes itself is very lightweight, and requires minimal administration. (If you're on GKE, K8s is managed for you.)

Swarm Engine is very much knocking off Kubernetes' features. Swarm offers the convenience of building some orchestration primitives into Docker itself, whereas Kubernetes is layered on top of Docker. Other than that, they're trying to solve the same thing. I'd say Kubernetes' design is superior, however.


You still need to provision the Kubernetes/Docker servers? You still need Ansible/Puppet for that part.

There aren't a lot of "Here's an API to throw up your containers and provision IPs" services out there. I see that Amazon has one (I avoid Amazon; they're the new Wal-Mart), but no one else really. I mean DigitalOcean provides CoreOS, so in theory you can provision one of those droplets and toss up some containers, but there's not a real API for "use this endpoint to deploy your container."

In the corporate world, yes .. we have dev ops teams to try out and create provisioning systems to throw up Mesos or CoreOS or OpenStack clusters. Once they're up and your devs are using them, they're lower maintenance than provisioning individual servers for individual services. For home devs, it'd be nice to have a plug-and-forget option for docker containers (other than Amazon).

The other thing I still don't get about Docker: storage. MySQL containers have instructions for mounting volumes, but it doesn't feel like there's a good, standard way to handle storage. I'm sure that makes it more versatile, but like I said previously: plug and forget for home devs and startups. You want database as a service? Get ready to shell out $80/month minimum for Amazon RDS, or create a set of scripts around your docker containers that ensure you have slaves and/or backups.


Try Joyent's Triton. It literally is

> a real API for "use this endpoint to deploy your container."

I've been using Triton since as early as possible and it's fantastic.

You can tag/label containers and they show up in the SDN DNS by tag (called CNS), no need for Consul or other service discovery.

I do still use Consul for many reasons but CNS is awesome for bootstrapping a cluster as well as public dns routing/failover.


> You still need to provision the Kubernetes/Docker servers? You still need Ansible/Puppet for that part.

Also true about Docker Swarm.

> There aren't a lot of "Here's an API to throw up your containers and provision IPs" services out there.

I strongly believe this (in particular, "hosted Kubernetes as a service") will start arriving soon. It's a great business opportunity.

> The other thing I still don't get about Docker: storage. [...]

Kubernetes solves this. For example, if you're on GCE or AWS, you can just declare that your container needs a certain amount of disk capacity, and it will automatically provision and mount a persistent block storage for you. Kubernetes handles the mounting/unmounting automatically. Kill a pod and it might arise on some other node, with the right volume mounted.
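For instance, a claim along these lines (a minimal sketch; the name is made up, and exact provisioning behavior depends on your cloud and cluster setup) is all the application side has to declare; the cloud provider supplies the actual disk:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-data
spec:
  accessModes: [ReadWriteOnce]   # mountable read-write by a single node
  resources:
    requests:
      storage: 10Gi              # "I need 10 GiB"; the provider picks the disk
```

A pod then references `mysql-data` in its volumes stanza, and Kubernetes handles attaching and mounting wherever the pod lands.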


Don't try to put storage into containers at this point. You'll just be in for a bad time.

Either run a separate vps for MySQL or use a hosted solution.

Containerization will get there, but I think putting some types of old-school apps in containers is a bad idea.

E.g. redis, beanstalk, and similar fit nicely into clusters and handle restarting nodes without too much issue or downtime.

MySQL, if your install is large enough, you never want to go down. This pretty much goes against everything to do with containerization, and your developers also gain no benefit from it being in a container (apart from maybe running a small db in development only, where provisioning a vm would be too painful).


I don't disagree. I think the differences lie in reducing versus eliminating a requirement for operations engineers and the level at which abstractions are surfaced. To me, Swarm looks headed toward abstractions at a higher level than Kubernetes much like Docker surfaces higher level abstractions than LXC.

Maybe that's the difference between a product and a solution. I don't know.


> To me, Swarm looks headed toward abstractions at a higher level than Kubernetes

Kubernetes admittedly takes a slightly different, modular, layered approach, whereas Swarm is simple to the point of naivety.

This simplicity is potentially a threat to its future ability to adapt to different requirements, whereas Kubernetes offers a separation of primitives that allows it to scale from "simple" to "complex".

For example, in Kubernetes, a deployment is a separate object from the pods themselves. You create a deployment, which in turn creates a replica set, which in turn creates or tears down pods as needed.

But you don't really need to work directly with replica sets or know exactly how they work, but they're there, and can be used outside of a deployment. If all you care about is deploying apps, then you only need to deal with the deployment.
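To make that layering concrete, here is a minimal deployment (a hypothetical sketch using the deployments API of that era); this is the only object you author, and the replica set and pods beneath it are created for you:

```yaml
apiVersion: extensions/v1beta1   # deployments API group as of 2016
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                    # the deployment creates a replica set,
  template:                      # which creates/tears down pods to match
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: example/web:1.0   # hypothetical image
```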


> Swarm is simple to the point of naivety.

Exactly. To me, that approach is what allowed Docker to make containers on the laptop ubiquitous. Kubernetes is unlikely to be the tool that takes schedulers to that universe. Swarm might and I think that's the goal.

I mean, I don't really want to care about pods and replica sets. They're obstacles in between me and what I want when what I want is more horsepower behind an app. For the same reasons I probably am better off with garbage collection than malloc [1]. I've only got so many brain cells.

[1]: http://linux.die.net/man/3/malloc


It's not useful though. You can't take Docker "laptop containers" and use them in production. At best it's a development tool; at worst it creates yet another environment to support and to have test differences in.


The nice thing about Kubernetes is that you don't really need to know much about those things, but if you do need to "level up" in terms of production complexity, those tools are there to use.

I'm sure Swarm has its place, though.


I'm not sure Swarm has its place yet; rather, it is actively looking for it, and that's the basis for the complaints. Kubernetes and Mesos have market segments that are distinctly different from the one Swarm appears to target. It's roughly those capable of moderately sophisticated devops versus anyone who can use Docker.


And, there is enough documentation where you can:

(1) Write your own custom deployer https://github.com/matsuri-rb/matsuri

(2) Write your own controllers https://deis.com/blog/2016/scheduling-your-kubernetes-pods-w...

Those two are just examples. My point is that if you need to dig in, you can. You can do this because those higher-level behaviors are built on a solid foundation of well-thought-out primitives.

What I expect to happen is that we will have a diverse set of things built on top of Kubernetes. Some of it will get folded into Kubernetes core. Some will not. I think the center of gravity for containers has been shifting towards Kubernetes.


Kubernetes deployments are as high level as you can get while still having anything to do with actual operational environments. You don't have to use the lower level constructs if you don't want to. If you don't understand what either is doing, you're not going to be able to operate it anyway, and all of these still require operators.


When you say K8s is lightweight, I presume you're purely talking about GKE deployments, and not on-prem or alternate cloud deployments?

My experience of K8s so far is that GKE is happy path and most of the demonstrations/documentation is focused on that use case. When you step off that path, you can either go for something scripted which does a lot of things in the background, or what seems like quite an involved manual setup (etcd, controller node setup, Certificate setup, networking etc)


Disclosure: I work at Google on Kubernetes

We _absolutely_ want to push Kubernetes to be "gmail-easy" - here's a recent demo:

https://www.youtube.com/watch?v=Bv3JmHKlA0I

If you don't feel like watching a video, here's the commands to get a fully production-ready cluster going:

  master# apt-get install -y kubelet
  master# kubeadm init master
  Initializing kubernetes master...  [done]
  Cluster token: 73R2SIPM739TNZOA
  Run the following command on machines you want to become nodes:
    kubeadm join node --token=73R2SIPM739TNZOA <master-ip> 

  node# apt-get install -y kubelet
  node# kubeadm join node --token=73R2SIPM739TNZOA <master-ip>
  Initializing kubernetes node...    [done]
  Bootstrapping certificates...      [done]
  Joined node to cluster, see 'kubectl get nodes' on master.
All certs, all configurations, etc - done. No bash, no mess. All you need is a package manager and docker (or your container runtime of choice) installed.

If you have thoughts on how we should do it better, PLEASE join us! https://github.com/kubernetes/community/tree/master/sig-clus...


I think one of the challenges for me in learning Kubernetes is that a lot of the tutorials take a "run this script and things happen" approach, which is great in that it gets you up and running quickly, but less good for understanding what's under the hood, which is really needed before some people will feel fully comfortable with the technology.

Obviously there's Kelsey Hightower's great Kubernetes the hard way tutorial, but even there some details are quite GKE specific.

It'd be nice to see some tutorials that make no assumptions about running on a specific cloud provider or using automated tools to bootstrap the cluster.


Oh, for sure! Being totally script free (and cloud neutral) is the goal of this effort. As you see above, there's absolutely ZERO script/bash/automation/etc.


I have a somewhat confused feeling that your definition from above of 'fully production-ready cluster' is different than mine and others ;)


I appreciate your thoughts.

My suggestion is to design an interface for Kubernetes in ways less reflective of Conway's Law. That's how the needle moves from 'solution' toward 'product'. In essence, that's what I think 'Gmail Easy' boils down to. Gmail has user stories that are based on people (much)+ less technically sophisticated than a new fresh from school Google SRE.

I think that the call to action suggests why Docker is pursuing Swarm.

In a decade of container orchestration development, creating a good onboarding experience with the diverse demographic of Docker users has never been a priority within Google. If it had been a priority, Gmail-easy Kubernetes integration might have long since been a pull request from Google. That's what it says right on the label of the open-source software can.

The talk about forking Docker seems consistent with the absence of pull requests. It's just another business decision.


The people on the bazaar you mentioned, who just want to deploy a CRUD web app, shouldn't be getting into containers/orchestration in the first place. Just go for some PaaS or rent a server or two, write shell or Python scripts to set them up, and save yourself all the hassle.


The benefits of containerization are not limited to production deployment. Docker makes it trivial to run the exact same container in QA/dev as well - it's a 10-line Dockerfile and three commands (create, build, run) vs. having to build custom shell/Python scripts. Don't underestimate how useful that is, especially in smaller offices where you don't have the staff dedicated to automating everything. Even small shops benefit from the ephemeral nature of containers - redeploying a container is a lot quicker and easier than redeploying an entire VM/physical server. PaaS isn't without its own issues either (you have to learn the AWS/Google/OpenShift/Azure/Heroku way of doing things) and can be cost-prohibitive.
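For a sense of scale, the kind of ~10-line Dockerfile being described might look like this (the base image, port, and entrypoint here are placeholders, not any particular project's real setup):

```dockerfile
# Hypothetical dev/QA image for a small web app; all names are placeholders.
FROM python:3
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt   # bake dependencies into the image
COPY . .
EXPOSE 8000
CMD ["python", "app.py"]
```

After that, `docker build -t myapp .` and `docker run -p 8000:8000 myapp` give everyone on the team an identical environment, regardless of what's installed on their laptops.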


What does Docker bring to this? This is basically the argument "use your production deployment mechanism to create your development environment". There are lots of ways other than Docker to do this, Vagrant being one of the most prominent.


One of the differences between a containerized deployment and scripted provisioning with tools like Vagrant is the state of a node following a deployment failure.

Deployment of a container to a node is roughly an atomic transaction. If it fails due to a network partition, server crash, etc., the container can simply be deployed to the node again. By comparison, a partially executed provisioning script can leave the target node in an arbitrary state, and what needs to happen next to bring the node online depends on when and how the deployment script failed, the nature of any partial deployment...and whatever state the server was in prior to deployment.
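A toy sketch of that difference (this models the concept only, not Docker's actual internals):

```python
# Toy model (not Docker itself) of why replace-the-whole-unit deploys are
# safe to retry, while step-by-step provisioning scripts are not.

def deploy_container(node, image):
    """Atomic-style deploy: the node either runs `image` or is unchanged."""
    node["running"] = image  # a single switch, so retrying is harmless

def provision_with_script(node, steps, fail_at=None):
    """Script-style deploy: each step mutates the node; a crash leaves partial state."""
    for i, step in enumerate(steps):
        if i == fail_at:
            raise RuntimeError("crashed during step: " + step)
        node.setdefault("applied", []).append(step)

node_a, node_b = {}, {}

# Container deploy: a blind retry after a simulated failure converges anyway.
deploy_container(node_a, "app:v2")
deploy_container(node_a, "app:v2")  # retry is a no-op
assert node_a == {"running": "app:v2"}

# Script deploy: a mid-script crash leaves the node neither old nor new.
try:
    provision_with_script(node_b, ["install deps", "write config", "start service"],
                          fail_at=1)
except RuntimeError:
    pass
assert node_b == {"applied": ["install deps"]}  # partial, ambiguous state
```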


That's a reason why Docker is great for production deployment. If you don't use Docker for production your production deployment scripts have to deal with that.

But if you use your production deployment scripts for development deployments, then that problem has been dealt with one way or another.


Absolutely. One can reinvent any part of the Docker [+ GKE] setup from first principles on either dev or prod axis. That being said, it's nice to spend a day to setup Docker [+ GKE], and have [most of] the "babysit apt-get" and "babysit prod runtimes" taken care for.


Not being a containerization user currently, I cannot vouch for the accuracy of the above assertions, but this reasoning is why I've been pondering moving from VMs to containers at our consultancy. We regularly have to "approximate" customer setups for development, and distributing an environment definition to peer developers seems more appropriate than the umpteenth VM clone.


I can very much recommend that approach. Working in a very small team we're using docker solely for reproducible development environments with very small footprint. Our projects are usually smaller low range backoffice sites.

We're still hesitant to deploy Docker in production, though it's on our list. For now we rebuild the production environment from the recipes in the Dockerfile and docker-compose.yml.

Before that we were using VMs with Vagrant. I never really liked how resource-intensive and slow they were. And without VMs I also don't see the need for Vagrant anymore if I can use Docker directly.

So after 2 years of using docker I'm really very happy with our development setup.


Virtualbox sucks. Using a VM other than Virtualbox with Vagrant would have solved your problems too.


Main question with vagrant was: why?

It's just adding yet another layer to the stack, so another interface to learn. And if things don't work you have to deal with the “low level“ docker setup anyway.

Seriously, I've never looked back at Vagrant ever since.


> People on the bazaar who just want to deploy a 3-node web app with some web server, some SQL database and some caching layer shouldn't be getting into containers/orchestration in the first place.

But, that goes against the main hype driven development tenant. If it is on the front pages of HN, CTOs should quickly scoop up the goodies and force their team to rewrite their stack without understanding how technology works. Things will break, but hey, they'd be able to brag to everyone at every single meetup how they are using <latestthing>. Everyone will be impressed, and it also looks good on the resume.

/s (but I am only half-joking, this is what usually happens).


s/tenant/tenet/g


Really? I use kubernetes right now for a small use case (that hopefully grows) like this. I find it to be pretty exceptional, although the initial cluster setup can be difficult.

I also like using multiple vendors for a tool stack. Helps keep concerns separated.


People in the bazaar know the math and figured out that they can get a second-hand Xeon server or two from eBay, put them into a cabinet with AC and electricity, and within a few months it will be cheaper than continued renting of PaaS.

For internal apps, that might be way better and cheaper than to run on AWS or GCP.


And faster. It seems like we've forgotten that latency is a thing.


In my comment I considered mentioning that Google might be seen as having a commercial interest in a trend toward CRUD-on-Rails developers paying for PaaS/IaaS on its proprietary platforms as a reason for not making Kubernetes as dumb-easy as Docker for the average developer.

On the other hand, I don't think that roll-my-own shell scripts are going to solve the problem Swarm/Kubernetes/Mesos address -- scheduling -- as well as those tools will. The reason I think that is that many scheduling problems are NP-complete [1], and dynamically optimizing a scheduler for a particular workload is a non-trivial algorithmic challenge even at the CS PhD level.

[1]: Wikipedia lists five 'scheduling' problems: https://en.wikipedia.org/wiki/List_of_NP-complete_problems
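To make the hardness point concrete: placing containers on nodes is essentially bin packing, which is NP-hard, so real schedulers rely on heuristics. A toy sketch of one such heuristic, first-fit decreasing (an illustration only, not Kubernetes' actual scheduling algorithm):

```python
# First-fit decreasing: a classic heuristic for the (NP-hard) bin-packing
# problem that container schedulers face when placing workloads on nodes.
# Illustration only -- real schedulers juggle many more constraints.

def first_fit_decreasing(workloads, node_capacity):
    """Place each workload (resource demand) on the first node with room,
    opening new nodes as needed; return how many nodes were used."""
    nodes = []  # free capacity remaining on each opened node
    for demand in sorted(workloads, reverse=True):
        for i, free in enumerate(nodes):
            if demand <= free:
                nodes[i] -= demand
                break
        else:
            nodes.append(node_capacity - demand)  # open a new node
    return len(nodes)

# Six workloads onto nodes with 10 units of capacity each:
print(first_fit_decreasing([7, 5, 4, 3, 2, 2], node_capacity=10))  # 3
```

The heuristic is provably within a constant factor of optimal, but picking and tuning one for a particular mix of workloads (and adding affinity, memory, and disruption constraints on top) is exactly the kind of work you don't want to reinvent in shell scripts.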


Sometimes CRUD on Rails turns into Twitter. Containers might have pushed the switch to Microservices down the road and allowed an earlier focus on monetization.


So it's docker + some massive orchestration tool but never worth it to just do docker?


> Kubernetes, Mesos, etc. appear to be great orchestration tools for an organization with a few [or many] engineers dedicated to operations. They're not so great for a small team [or individuals] who are just trying to deploy some software.

I disagree. I worked with Kubernetes on a small team, and its core abstractions are brilliant. Brilliant enough that Docker Swarm is more or less copying the ideas from Kubernetes. Even so, Docker Swarm just isn't there yet.

> As I see it, Swarm seeks to solve orchestration analogously to the way Docker seeks to solve containers.

That's how I see it too. However, multi-node orchestration is MUCH harder. The Kubernetes folks know that. Setting up K8S from scratch on a multi-node setup is still difficult, and the community acknowledges it. Contrast that with the way Docker is handling it. Docker seems to be trying to sweep difficult issues under the rug. Look at the osxfs and host container threads on the Docker forum as examples. People had to ask the developers for more transparency before it started showing up.

I don't blame Docker for how they got to "sweep difficult issues under the rug", since that is part of their product design DNA. It's just that things are coming apart at the seams.

> Google doesn't see a business case for pushing Kubernetes toward the Gmail end of the ease of use spectrum.

That's correct. That is what GKE is for. On the other hand, that is what Deis/Helm is for. Turning the project over to the Cloud Native Computing Foundation, listening to the developer community ... there are a lot of things Docker can learn from the Kubernetes project when it comes to running an open-source project.


Docker Inc. is the poster child of the cathedral.


I work on a small team. We use Mesos and it works for us.


So what should cathedrals do? Rely on the unpredictive bazaar?

  They're [Kubernetes, Mesos, etc] not so great for a small team [or individuals] who are just trying to deploy some software
The industry consists not only of small teams and individuals; we (by this I mean all of IT) also use services provided by cathedrals (AWS), which have to have something stable to rely on.


Makes sense.

Since the late '90s/early '00s, when Linux won the data center, most people have really enjoyed the "we don't break userland" motto.

Once a kernel interface went live, it stayed that way, ugly spots and all. Containers are starting to become a fairly important part of IT/cloud infrastructure, easily comparable to the OS itself. Logically, those involved with maintenance would demand the same.

Yes, I'm aware Docker is more of a control program for interacting with cgroups, setting quotas, installing packages, and isolating processes, not the OS itself. It is an abstraction over the OS, hence for most developers it feels like part of the OS. So logically they'd demand it be as robust as the OS.


I think this is part of a more general trend of companies trying to use software that definitely isn't ready for prime time. Sure, it's trendy and being hyped, but those are not sufficient reasons to use it in production.

These days some new technologies become "trendy" (on HN and elsewhere) and start being used by developers. That in itself is great, but many of those developers are also young and inexperienced and do not realize that production systems have different requirements.

There are many symptoms: the docker situation, npm (need I say more), libraries like Semantic UI that can't be built in a CI environment from the command line (require user interaction). Even small things, like tools that change their behavior based on files in one of the parent directories (npm again, but not only), or the proliferation of fancy progress bars and useless drivel being spewed to the terminals, are symptoms. Those are tools designed by developers working on their laptops, for developers working on their laptops, and do not (at this stage) fit the requirements of a production server environment.

Maturity and stability are valuable traits in software, especially in larger systems.


That is actually what I like about being a senior developer on enterprise consultancy.

We see these fads come and go, wait for the dust to settle and use what is actually mature to be deployed in real production environment.

It also helps that our customers don't sell software, as such they view of what technology stack is supposed to look like is quite different from what the typical Starbucks developer thinks about software.


I understand that you are probably referring to developers who work in a Starbucks, not for them. But I found your comment funny because Starbucks is one of our clients and our stack is exclusively mature, production ready technology.


>It also helps that our customers don't sell software, as such they view of what technology stack is supposed to look like is quite different from what the typical Starbucks developer thinks about software.

Selling software to people who use software vs those who develop software is a vastly different experience. As are the questions and rigor they'll put a consultancy agency though. As well as what they expect out of a product.

>We see these fads come and go, wait for the dust to settle and use what is actually mature to be deployed in real production environment.

Yup nothing like Oracle SQL + Java App driven by Tomcat or Spring. Hell you can even get an Oracle Rep to appear in your sales pitch. The suits really like that.


One of the customers I was dealing with recently still requires .NET 4.0 due to XP support.


Well, these customers are extreme in the other direction... JBoss 4 on Windows 2003? What could possibly go wrong, it has been working fine for years...


Personally not so far, but Java 1.4, Websphere 6, JSF 2.0, .NET 4.0, XP support still are quite actual subjects.

But it surely is more fun than arguing what are this month JavaScript best practices and frameworks. Been there and don't miss it.


Just finished my espresso macchiato. I'm now waiting for it to influence my views on software. (jk, but it is funny, the assumed correlation between workplace/lifestyle preferences and software quality.)


The one that springs to mind for me is terraform. We made the decision to use it in production and it is costing us a lot of time and money. It feels like it will continue costing us a lot of time and money for a long time. It 100% feels like alpha software, yet it's being hyped and overhyped by the tech community.

I personally think using it was the right call regardless (the alternatives are worse in the long run), but it's such a massive upfront investment, I couldn't recommend it.

One thing though:

> tools that change their behavior based on files in one of the parent directories

You mean like git? I agree with your general sentiment but you're backing it up very weakly, with essentially irrelevant things. Terminal interactivity is not a bad thing; TUI software is built for humans as well as machines and well behaved components will detect when they are not in a tty.


I really want to use terraform but after dealing with the collection of hacks called vagrant I am not so excited to do so. (speaking of randomly merging up parent directories)


> > tools that change their behavior based on files in one of the parent directories

> You mean like git?

In case of interactive tools like git it is something you would expect. But it came to me as a rather huge surprise that if you type "npm install" in a directory where you just unpacked sources, what will actually happen depends on what exists in the PARENT directory. Not to mention writing to the parent directory.

I am not alone in being caught off-guard by this: https://github.com/npm/npm/issues/2896

I'd say this is unacceptable in the case of tools that are "make-like".


That seems pretty normal, if I'm in a subdirectory of my project and install a dependency, it makes sense that it would install it in the root of my project where the node_modules/ folder is.

When I use git in a subdirectory of my project, it looks for .git/ up the directory tree until it finds it.
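That upward search is simple to sketch (a generic illustration of the lookup both tools perform, not npm's exact resolution algorithm):

```python
# Sketch of the "walk up the directory tree" lookup that git (.git/) and npm
# (package.json / node_modules) both use to decide which project you're in.
import os
import tempfile

def find_project_root(start, marker):
    """Return the nearest ancestor of `start` containing `marker`, or None."""
    path = os.path.abspath(start)
    while True:
        if os.path.exists(os.path.join(path, marker)):
            return path
        parent = os.path.dirname(path)
        if parent == path:  # reached the filesystem root
            return None
        path = parent

# Demo: a marker two levels above the working directory is still found.
root = tempfile.mkdtemp()
open(os.path.join(root, "package.json"), "w").close()
sub = os.path.join(root, "src", "deep")
os.makedirs(sub)
print(find_project_root(sub, "package.json") == root)  # True
```

Whether that behavior is expected or surprising depends entirely on whether the user knows the tool does this lookup, which is the disagreement in this subthread.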


It's an npm convention according to the dependency resolution algorithm, not a flaw. If you have a .git folder in the parent directory you also get different/possibly unexpected behavior. It's not unacceptable if you have read how it works. Both cases are documented.


What's make-like about a package manager?


Do you really think that npm doesn't build software using dependency management?


It sure does "build software", but its primary function is package management. We could call Emacs an email client by the same token. The equivalents of make in the JS world are Gulp, Grunt, etc.


What issues do you have with Terraform? A few people in my org are considering implementing it.


Oh where to start...

The state file is atrocious, a constant source of pain. It is hard to set up in the first place, it is hard to import resources into it, it is hard to rename resources within it.

Sometimes, resources don't canonicalize correctly and will always tell you there's something to modify, even though there isn't. Other times, resources don't destroy properly because you are using some untested settings within them.

"It's alpha software" is the best description I can give. It has a ton of rough edges.

The pros: It's decently fast, and a lot more workable than the alternatives (cloudformation or simply tracking stuff by hand). It supports more than just AWS, including fairly obscure stuff like Cloudflare DNS records. It's conceptually solid.

Don't say no to it outright, but you should know what you are getting into.


That's interesting. These guys tend to have pretty good luck with it - https://segment.com/blog/the-segment-aws-stack/


Seen the article before. It's not wrong - Terraform is pretty great. But it's leaving out the parts where a lot of stuff breaks, behaves in unexpected ways, is missing critical features etc. A lot of engineering time has to be dedicated to terraform if you're going to use it in a serious stack (and it's overkill as just a toy).


I'll keep that in mind. Thanks!


Been using Terraform in anger, and I agree with your statement. It is not for everyone, not yet anyway, but I love using it.

Btw, Terraform 0.7 comes with initial support for import and state manipulation commands.


Thanks for posting this. So it sounds like the main source of frustration is really just the state file? Is that accurate? I have heard this from someone else.

Also you mentioned that Terraform was better than the alternatives. Could I ask what other alternatives you looked at and why you ended up choosing Terraform over them?

The reason I ask is that I am also considering Terraform. However other providers also seem to support more than just AWS and are fast enough.


The syntax abstraction over CloudFormation alone is worth using Terraform for, in my view. The state file I relegate to tearing the infrastructure down, no more.

Use the AWS API and your config management API ( chef ) to manage the state of the systems.


> and a lot more workable than the alternatives

That is pretty much it right there.

I've yet to see another solution that beats it. Warts and all, it's the best I've seen.


Definitely seeing a lot more of this type attitude in the industry:

"You old developers just don't get it! The young are building the future! We LIVE on the bleeding edge, man. Who cares if there's a few bugs, that's what you MAINTENANCE developers do, fix the bugs that are beneath me. Besides, I'm out in like six months anyway. On to my next gig."


I find it refreshing, as a near-40 year old who's been in the industry for over 10 years, that there is finally movement against the absurd notion that one must spend more than a couple of years at most (or all) their jobs in order to be taken seriously.

There are two extremes. The one you point out leads to developers with lots of breadth and no depth. But the other is still the industry standard today: one that values depth in a particular (and possibly completely irrelevant outside of the company) domain.

I hope we keep moving towards the middle of the two.


And I would love to see the "developers" who stomp their foot and say they're not going to fix bugs because that's maintenance shown the door.

Yes, this happens and yes it is very immature.


No doubt. I've even encountered somebody who stomped his feet and said they weren't going to write test code for QA because that was beneath them.

I think that's because the entire industry devalues both QA/testing and maintenance. Even in places that aren't the Valley these two roles are given a lower social status than other developer roles. The budgets and recognition for teams in both domains is typically much smaller than others, and QA especially is usually among the first on the chopping block during a retrenchment.

We as a whole have made these roles seem less valuable and less valued, so to me it's not surprising that folks don't want to do them.


Presumably you'd want to strike a balance. I know I'd find a job rather dull if all I were doing was fixing bugs. Fixing bugs is very important, but I don't think it's unreasonable to ask for a a reasonably diverse workload.


Which is what I'm talking about. These people only want to write "new" code. Only. They don't want to fix ANY bugs. It's "beneath" them.


It really depends on personal traits. At our company we keep weekly training sessions to align everyone to have some minimal breadth.


But most software is promoted as "Production Ready" to these companies, because their producers need "Rapid Growth"-TM.

If this went bad, you see something like MongoDB happen.


> If this went bad, you see something like MongoDB happen.

What did we see? There's still buttloads of software being written whose authors flat out refuse to support a sane database.


I'm not really sure why the proponents of this split don't just put their efforts into improving one of the alternatives which are already available (e.g. rkt). A split would seem to be a bad outcome allround (confusion in the market, divided resources, duplicated features), whereas competing products might bring out the best in each other.

Also it does seem a little odd to me to see people suggesting that Docker needs to be more stable and "boring" (from this article https://medium.com/@bob_48171/an-ode-to-boring-creating-open... referenced in the main link) to fit in with other projects in this space like Kubernetes, when it seems that most/all of these projects including Kubernetes are moving as fast as each other...


The container runtime is a basic building block. If it moves too fast (and in broken ways), stuff on top of it breaks. If Intel processors came with a completely new instruction set in every generation, it'd be hell.

I don't think people are suggesting that Docker Inc. should be boring. The criticism is more targeted towards Docker Engine, the container runtime.


Personally I think the entire container space moves too quickly to be a good target for stable long lived services. Pretty much every element is under fast development.

Stability is more the attribute of more mature services like Virtual machines, which have been around as a heavily used technology for quite a long time now...


The funny part about this is CoreOS is talked about being one of the players in the article, and they are the developers of rkt.


From the article:

> Spokespeople from ... CoreOS denied any knowledge of Docker-related talks.


Indeed, I did think that was rather odd...


IMO, they shouldn't get behind rkt, because it too is owned and primarily developed by a company who uses it as a commodity to sell their own products as well, and the same concerns arising from Docker wholly owning development against Docker Engine could very easily arise against rkt as well.

I believe an independent project managed by a board of stakeholders would be better for us all.


We (CoreOS) built rkt because we feel strongly that the container engine should be a stable and fundamental building block of software. By itself rkt is not a product and we want to ensure it is something that can be integrated into other systems. For example, we continue to add rkt support to Kubernetes[1] to provide a user transparent option for container engines. Similarly, there are integrations with Mesos and others.

Also, rkt has had significant external contribution like the virtual machine isolation work from Intel[2]. We have spun out independent projects like the Container Networking Interface (CNI)[3] that is now used in Kubernetes, Cloud Foundry, and Mesos.

Stable and robust components are necessary to make this ecosystem work for people operating them. And we built rkt to help the entire ecosystem (and yes ourselves) be successful with that. The product strategy ends there. If a community of folks felt putting rkt into an independently managed foundation would help achieve that goal we would be happy to do that work with others.

But, I think the project has a good track record of working with others and focusing on a narrow feature set.

[1] http://blog.kubernetes.io/2016/07/rktnetes-brings-rkt-contai... [2] https://coreos.com/blog/rkt-0.8-with-new-vm-support/ [3] https://github.com/containernetworking/cni


Out of curiousity, where do you feel that Docker has less of a focus on stability than rkt, is it just in releasing swarm or in other areas?

Whilst I'm more familiar with Docker than rkt it seems that both projects are still changing quite quickly and have a lot of movement in terms of features being added & changed...


rkt tries to stay out of the way for everything that isn't "download, validate, and execute a container". For example, it doesn't know about networks but will hook out to CNI to setup a network namespace. It isn't an init system and will instead rely on a system init system to monitor it. It doesn't do any clustering but can be used by things like Kubernetes.


I'd agree that if it seems that containerization is a fundamental part of an OS that it would make sense to have a neutral platform for it.

That said there were containerization efforts in this area in the past (e.g. LXC) which never really seemed to attract the same kind of traction as Docker.

Ultimately, if people want to create containers, it's entirely possible to do so with pure Linux commands and no upper-level software at all; it's just that Docker makes it much easier/nicer to achieve :)


> Ultimately, if people wanted to create [text files], it's entirely possible to do with pure linux commands with no upper level software at all, it's just [Sublime Text] makes it much easier/nicer to achieve.

It's absolutely possible (see: Bocker), but not a very productive way to go about things. It's much more productive to take an existing technology and make it better than to start from scratch (yet again), especially with permissive licenses like those used in Docker Engine.


Possibly, although unless they got a decent number of the original developers across in a fork (which seems somewhat unlikely), it would be a pretty difficult thing to pull off to maintain/develop the codebase.

Not sure I can think of a case where a fork has been successful without a decent percentage of the original developers supporting it.


I think that LXC was on a good path if you were an Ubuntu shop, however I think that tying LXD to Openstack was an odd choice.


How is LXD tied to openstack? You can use LXD in exactly the same way you used LXC (with a nicer CLI interface, no less).

nova-lxd adds lxd as a "hypervisor" choice for openstack, but it's a completely optional way to use it.

Try it on ubuntu right now: sudo apt-get install lxd; lxc list (yes, the client for lxd is called lxc, it's kind of confusing. Sorry.)

Disclaimer: I work for Canonical, but not on LXD (I use it daily outside of openstack though)


I don't think rkt is a valid candidate. People are already used to the basics of the Docker commands. There is no reason to change to something else. If rkt decides to reproduce the feature set, how is that different from a fork?


People being used to the Docker commands is a very weak reason rkt isn't a valid candidate. Let's not forget the most important aspect of rkt, and that is that it follows the container spec standard.


> People are already used to the basic of Docker command.

The tiny amount of existing early adopters can be made to adopt something else one more time.


Not to mention that they constantly break the Docker API and interactions. Knowing the commands today is not helpful in the future. Once you get past "docker run <image>", so much has changed in the last couple of years that it is pointless to hold that up as a virtue.


So the challenge, to me, is that if you fork docker, then people are going to have to get used to a different command structure anyway as the forks will diverge.

One of the reasons to avoid a fork is the confusion caused by two kind of similar but not quite the same implementations that diverge slowly over time.

rkt is unlikely to just copy the feature set of Docker, but they provide competition in implementing new ideas and as it's not intended to be identical there's less confusion.


Docker's mistake is bundling Swarm and throwing away regular docker-compose with services. The bigger mistake was presenting them as if they worked perfectly, just because, for the sake of a badly timed DockerCon (seriously, why the hell do we need DockerCons spaced six months apart?), they released something that was not complete and was fundamentally different from the previous way of running containers. I understand their urge to monetize, but events like this really leave bad memories. By the way, does anyone remember the service command that was introduced in ~1.8 or 1.9 and just vanished? It was a similar mess.


Wait did they throw away docker-compose with services in this latest release? I'm not totally up on this release obviously.


Not actually, but Swarm services with the Docker application bundle (dab) seem like a replacement for docker-compose and make it redundant. All they had to do was promote compose from a CLI tool into the docker daemon itself and it would have been great, not a major fundamental change. Now I cannot even get the logs of a service because it is failing too fast and I can't catch up with it.


That sounds like a bad idea to me. I do agree that Docker rushed things a bit with Swarm (on the orchestration front) but I think that they're doing an excellent job with the containers themselves.

I don't think a fork will help - I think a fork would make sense if there were concerns about Docker's level of 'openness' but I think that's not the problem here.

Forking/duplicating a technology whose main premise is being "a single consistent environment for running apps" sounds like a contradiction to me.


There are concerns at every level about the decisions coming from Docker, Inc. and there have been consistent issues for a long time. Swarm is just the latest issue, and arguably the worst, since none of the key decisions are defensible: Why is it in the core product? Why was it added in a point release? Why was it included at all when it is clearly not stable? Did any major contributor outside Docker, Inc. have any input into this?

Ultimately, forking is what happens when the contributors can't work with the maintainer, and it seems pretty clear that Docker, Inc. can't act as responsible maintainers.


I don't feel like that. The Windows release can cause the machine to permanently lose networking, and on Ubuntu 15 the service is causing ridiculous CPU usage on some machines and locking up. Something is broken in the recent releases of the basic engine; it's not stable.


Every time I think of Docker recently, the image of a huge container-ship accident shows up in my head. This thing needs to be standardized, and things are not going that way at the moment.


This seems rather manufactured. In the span of days, we've seen article after article coming out and condemning docker, advising kubernetes, and now this fork. Who benefits?


People who want a stable container orchestration engine and through that, the users.


Imagine there were a "docker engine for running in production" outside the control of Docker Inc., with backing from the rest of the orchestration industry. It could be considered a tool for compelling Docker Inc. to play more nicely, rather than a total fork.

- Developers interested only in the Docker engine would consider using it for higher reliability, fewer breaking and rushed changes, and no force-bundling of things they don't want.

- If enough developers were using it, Docker Inc. would be compelled to maintain compatibility to avoid "mainstream" Docker ending up as a tool only for dev/CI.


Beyond API flux, one thing Docker could really focus on to alleviate a lot of pain would be more stable and sane version management. If you need to break the API to bring in new features, so be it, but an older API client should still be able to interface with newer versions through backwards API support. When Docker was new there was maybe a case to be made for stricter version matching, but at this point it's just a sign of immaturity that new versions of Docker upset the rest of the ecosystem so greatly.


I'm fairly sure that old Docker clients can communicate with new Docker servers (the docker.sock API is versioned and is backwards compatible). If that doesn't work, you should file a bug (there's lots of code inside the API handlers that deals with setting older API version defaults).
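For what it's worth, the negotiation can also be forced from the client side. A quick way to check whether an older API version still works against a newer daemon (assuming a local daemon on the default socket, and curl >= 7.40 for Unix-socket support):

```shell
# Pin the Docker CLI to an older API version via the environment:
DOCKER_API_VERSION=1.23 docker version

# Or hit a versioned endpoint directly over the Unix socket:
curl --unix-socket /var/run/docker.sock http://localhost/v1.23/info
```

If the daemon has dropped support for the pinned version, these calls fail with an explicit "client version too old" style error rather than silently misbehaving, which is the behavior worth filing bugs against.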


You are talking about the web generation here, no F-ing way do they care about API backwards compatibility...


Something like the "tick-tock" strategy that Cassandra has adopted?


While Docker is probably not the last word in containerization technology (which is a good thing), the idea behind it is quite powerful: Small, lightweight, self-contained objects that perform a given function and that we can plug together in many ways. I think the impact of having something like this will not be limited to traditional DevOps but will permeate many other areas as well, like data analysis and the delivery of end-user applications.


> Small, lightweight, self-contained objects that perform a given function and that we can plug together in many ways.

So Unix philosophy?


You bring up a really good point - that is the UNIX philosophy, but UNIX didn't win the datacenter wars: Linux did. And Linux doesn't entirely share that philosophy; depending on the Linux release it can be the polar opposite; piling on feature after feature into a monolithic block like systemd.

That's not an indictment of systemd... it's just an example of how I'm not sure everyone has glommed on to the fact that just as GNU isn't UNIX, neither is Linux.


systemd is a really recent development. Linux won the datacenter wars well before any of the recent desktop stuff started encroaching, and it did it on the back of being a free Unix clone.


Yep. Linux offered a free unix that was not touched by the AT&T lawsuit and could run on commodity hardware.

Then we had the whole dot-com bust that freed up a whole lot of hardware to run LAMP stacks on, and things really got rolling.


It is, but even the raft of features found in some GNU command line utilities are probably beyond the classic UNIX philosophy of small and simple tools.


Almost: Containers go beyond the Unix philosophy in the sense that they (in theory) can run completely untrusted code, which isn't really advisable in a Unix system (although the system provides some means for it).

Concerning composability and interoperability I'm with you, and I think it's a shame that Docker decided to build their own stack instead of improving LXC.


I feel bad for shykes. This can't have been a very fun release. He tries really hard, and I get the feeling he takes a lot of the feedback to heart.


He's aggressive and isn't super consistent. In the end I think the conclusion of the Twitter conversation I had with him was good, but then you look at the rest of it and it's just aggressively negative. Conspiracies everywhere.


Didn't this happen like two years ago, when CoreOS created the Rocket project?

https://github.com/coreos/rkt/


When I noticed the posting mentioning CoreOS and Red Hat, I found myself thinking that neither of them would mind seeing Docker go belly up. This is because both are in deep with systemd, which at this point offers a straight-up competitor to Docker.


I hope rkt, runc and similar get donated to the Cloud Native Computing Foundation (CNCF) and get a direction there


runC is part of the Open Containers Initiative (which is a project by the Linux Foundation). So there wouldn't be much sense moving runC between two different Linux Foundation projects (especially since the same person [Chris Aniszczyk] is currently managing both projects). Also, since Kubernetes is part of the CNCF I'm hoping that will mean we'll get some support to adding OCI support to Kubernetes (something that we're currently working on).


I agree. I said the same thing on a Reddit post.


"What’s happening right now, if we are not careful, will fragment the container ecosystem, and it make it impossible for single containers to target multiple runtimes,"

UNIX had this problem and look how long it took before things settled. Linux was the result. Maybe this is a good thing?


Have they really settled?

In the 90's we had the UNIX wars.

Nowadays we have the GNU/Linux wars.

Each distribution does its own thing and every disagreement leads to yet another fork.


I don't mind the forks, as they are all happening out in the open.

What I do mind is how existing, well-established distros are being hijacked by starry-eyed techies, rather than those techies spinning off their own proof of concept that others can adopt if they want to.

Especially when said hijacking happens not overtly but subversively by tightly integrating previously separate projects to create a TINA situation.


What does TINA mean?


There Is No Alternative. A mantra from Margaret Thatcher...


Right, so in that context who is the FreeBSD of containers?


Illumos (formerly OpenSolaris) containers perhaps?


Those are the ones!


SmartOS. The Solaris zones that SmartOS sports were inspired by BSD jails.

I upvoted your question because as far as I can tell, you are the only person here to ask it.


The various distros have converged enough that you can typically run any userland program on any recent version of any distro. The recent adoption by virtually everyone of systemd is a sign of this, it seems.


And yet, the major commercial applications I use (Chrome, Steam, Matlab) all come with their own copies of standard system libraries.


meh, that was fully possible before systemd.

The real sticking points were distro package managers and the odd file-placement difference (mostly regarding where to put the rc script to have things start on boot).

This meant you could not just put a package out there and tell some clueless web monkey to go curl it straight into production as root. And IMO that was a good thing.


> that was fully possible before systemd.

The wide acceptance of systemd was an example of the convergence of the main distros. I didn't mean you to infer it was some sort of prerequisite.


And that convergence is more politically than technically driven.


Linux distros don't compete, that's pretty clear.


Sure they do. You can only install one distro on a machine. Since you have to choose one, they are competing for that spot.


Sounds like the Docker container format should be split off into an independent foundation, because everybody wants to use it but there's no money to be made off it. Then companies can compete on how to run Docker in production.


That is/was supposed to be what OpenContainers.org (OCI) is all about.
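For the curious, the OCI image format is essentially content-addressed JSON. A trimmed, illustrative manifest (digests and sizes elided or made up here) looks roughly like:

```json
{
  "schemaVersion": 2,
  "config": {
    "mediaType": "application/vnd.oci.image.config.v1+json",
    "digest": "sha256:...",
    "size": 1510
  },
  "layers": [
    {
      "mediaType": "application/vnd.oci.image.layer.v1.tar+gzip",
      "digest": "sha256:...",
      "size": 32654
    }
  ]
}
```

Because everything is addressed by digest, any runtime that can fetch blobs and unpack tar layers can consume the format, which is exactly why it makes sense to park it in a neutral foundation.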


Good point. I guess there's more that should be a shared commonly funded resource: probably the core engine.


Support for legacy OSes in newer releases would be wonderful but I don't know how hard that is... There's a lot of talk about how the older kernel makes it really difficult.

If you're using CentOS 6 you're stuck with Docker 1.7. There are a lot of enterprise companies out there (I'm looking at you big banking) that aren't ready to move to CentOS / RHEL 7 and trying to get stable usage out of Docker 1.7 doesn't "Just Work" in my experience.

Anyone here use rkt with CentOS 6??


> The Docker orchestration capabilities are opt-in; they must be activated by the user. Though not opting in may lead to backward compatibility issues down the road.

Whoa, citation needed! Not activating swarm features has the same probability of causing problems as relying on fork() to create a process. Maybe not, but I don't see Docker suddenly forcing everyone into using swarm. It seems unreasonable to even suggest this, and a bit of a scare tactic.


I'm not developing anything against Docker except for automation tooling, and I would kill for a stable Docker Engine; stable disk drivers, stable CLI arguments, stable configuration file formats, a stable daemon, and so on.

That said, it's a hard problem, and I certainly don't have the time to work on it myself; nor can my employer spare me to work on them either.


I am sort of curious.

1. Who is running Docker in production?

2. If you are running Docker in production, what sort of money are you making with it?


I think a lot of these issues were already nascent when CoreOS decided to fork Docker. People are asking for 'boring container infrastructure' -- and it's called rkt. I remember the brouhaha at the time, with the Docker folks pissed at the CoreOS folks for doing so. That was Dec 2014. It looks like the way Docker handled the 1.12 release is shifting the sentiment.

For example: I ended up writing this to ask the Docker team for more transparency on this issue: https://forums.docker.com/t/file-access-in-mounted-volumes-e...

And they responded with an awesome reply addressing it: https://forums.docker.com/t/file-access-in-mounted-volumes-e... and it went a long way towards helping the community understand the issue and what to do about it.

However, there are also threads like these that ask about the same issue:

https://forums.docker.com/t/should-docker-run-net-host-work/... https://forums.docker.com/t/access-host-not-vm-from-inside-c... https://forums.docker.com/t/explain-networking-known-limitat...

They kinda left the community in a limbo here, and quietly added a line in the documentation saying it won't happen. But without the transparency, we don't really know what's going on here.

Back then, at the time of the rkt split, the Docker design was geared towards users so that there was as little friction as possible. It worked all right when it was just Docker Engine on Linux. Over time, due to differences in distros, you could see the container abstraction leaking here and there. Generally manageable.

In the 18 months since, it's become clearer that Docker is drunk on their own story. It seems like more and more of the leaks in the abstraction are getting swept under the rug while they try to make a land grab for orchestration. Yet Docker is doing it in a way that sacrifices the goodwill of the community. It starts with the third-party vendor relationships, but as you can see from these forum posts, it is beginning to leak into the relationship with end users as well -- the developers.

There's still time to turn the (ahem) ship around. But a big part of what is driving Kubernetes' success isn't that its abstractions are brilliant, but rather that the project is very transparent and communicative with the community about what it is doing and intending. I get that Docker is trying to do that with Docker Swarm, yet I think they missed a critical part of why and how Kubernetes gained so much traction so quickly.


Great summary of the issues. Though I've been only tangentially involved, I agree with all the points, especially "drunk on their story".


> the way Docker handled the 1.12 release

Could you elaborate on that, please?


So instead of going to working, stable alternatives to Docker like SmartOS, people still cling on to it and try everything and anything to save it!

I'll be damned if I understand why someone would continue to cling on to broken software[1] when there is a working alternative. Can someone explain this irrationality in terms which I can understand?

[1] https://news.ycombinator.com/item?id=12377457


Maybe Docker should do what Ubuntu does: periodic LTS releases with a guaranteed support timeframe. They can experiment with the newer releases, and the LTS releases will be there for people who need stability and don't need bleeding-edge, unproven features.


This sounds like a bluff to me. "Oh, you want fast moving changes, mobility up the stack, and centralized control? We want none of those things. Either you soften your stance and start listening or we'll fork."


I think this is all a push by Docker's management to get the company acquired



The real reason everyone is upset is that I can now say: goodbye Kubernetes, goodbye Mesos, goodbye CoreOS, you are all complicated. I'm thrilled about Docker 1.12 and Swarm mode.


That's not why. Good for you on using Docker 1.12 with Swarm mode, if that's what works for you. Have fun in production.


I agree. The big guys are scared. I've played with it a lot, and the new Docker Swarm mode is really, really great. It's the DigitalOcean of orchestration. Granted, 1.12 is the first incarnation, has flaws, and is unfit for production, but that will change. I can wait a few months.



