So the 'smartness' of HN when it comes to containers is in question: it failed to hold the 'marketing message' to account and single-handedly hyped Docker without any reference to LXC, which had been in development since 2007, a full six years before Docker 'discovered' it at dotCloud.
Why Docker took LXC containers and ran them without init was never explained, nor was the resulting trade-off in complexity compared or questioned; that is the kind of basic scrutiny you would expect from a technical forum. The fact that most users remain confused about LXC versus Docker in threads even today speaks for itself.
Now Docker promotes itself as the originator of container technology, as if LXC, on which it was based, does not exist.
And not just LXC: the work of other critical projects like aufs and overlayfs is pushed into the background so Docker can 'claim ownership of containers'. Has Docker contributed code or resources to any of the projects it built on? Do Docker users even know the names of the developers?
So: no recognition, no support. The message to open source developers is that you can toil for decades, and any marketing-savvy, VC-funded company can take your work to market, effectively erase your contributions, and no one will hold them to account. HN has been fine with this kind of predatory marketing since 2013.
I don't want to make a political point here, but this is essentially the risk you take with a more permissive license. Someone absolutely can come along and build a product that subsumes yours. They can create proprietary extensions that you cannot use, gaining the benefit of your work without having to return the favour. You then end up, in effect, competing against yourself.
With a copyleft-style license, it is much harder (though not completely impossible) for someone to do that. You will most likely be able to use their extensions, and you can compete on the basis of who executes better.
There are serious downsides to copyleft-style licenses for things like containerisation, though. The barrier to entry is really high, and you will cut out a lot of players who could otherwise help you. So it's really a matter of strategy. But if you go for a permissive license, you'd better have all your ducks in a row, because you should expect something like this to happen.
Free and open source business models are still pretty naive, and I think it will take a few more decades before we have a really good idea of the best way to proceed.
If anything happened in 2016 in the containerization space, it was Docker playing a lot of catch-up to Kubernetes. It seems like they rushed things out, and then they botched the OS X native release of Docker in Q2 2016 and lost a lot of goodwill among individual developers (more so than CoreOS did with rkt? That's how it feels to me, anyway. Back then, people thought CoreOS were the assholes for forking; I think that sentiment has shifted). I have not tracked how Docker is doing in the enterprise, but I assume they are doing well?
I kinda doubt Docker (specifically, Docker Swarm, and the orchestration tools Docker forced into docker-engine) will ever catch up to Kubernetes, but hey, things move fast, right?
Having said all of that, I like this description: https://news.ycombinator.com/item?id=14141832 ... which makes _much_ more sense. Maybe now, someone will use Moby to create a native OSX Docker kit that actually has decent file sync performance (either with unisonfs, rsync, or nfs, and not try to layer it as another FS).
Docker's biggest asset right now is that everyone has written Dockerfiles already, but unfortunately, Dockerfiles are terrible and people can't wait to get off of them.
Long-term, I expect the platform to be k8s + CoreOS + rkt. Docker's future, if anything, will be in hosting the image repository.
I have been dabbling with that same thought for a while too. One thing I have not yet figured out, though, is CoreOS's vision for using rkt as a developer on a Mac, which is where a lot of developers work.
We're using Google Container Engine with containervm, which I understand is very similar to CoreOS.
To me, the leading edge and center of gravity have already moved to Kubernetes (and the various orchestration systems attempting to compete with it). Even though the buzz around Docker is still growing, the ideas coming out of the Kubernetes community are what influence and lead containerization technology. Small movements in Kubernetes result in large movements among people using Docker.
The core functionality of Docker can be implemented in roughly 100 lines of bash script.
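As a rough illustration of that claim, here is a heavily condensed sketch of the three Linux primitives such a script leans on: cgroups for resource limits, namespaces for isolation, and chroot for the filesystem. The `./rootfs` directory is a hypothetical extracted base image (e.g. an Alpine mini rootfs tarball), and the cgroup paths assume the v1 layout of that era; actually running the isolated shell requires root.

```shell
# Minimal sketch (not the full ~100-line script) of Docker-style isolation
# built from stock Linux tools. ROOTFS is a hypothetical prepared directory.

ROOTFS=./rootfs

run_container() {
    # 1. Resource limits: put ourselves in a memory-capped cgroup (128 MB),
    #    using the cgroup v1 filesystem layout.
    mkdir -p /sys/fs/cgroup/memory/demo
    echo $((128 * 1024 * 1024)) > /sys/fs/cgroup/memory/demo/memory.limit_in_bytes
    echo $$ > /sys/fs/cgroup/memory/demo/tasks

    # 2. Isolation: fresh PID, mount, UTS, IPC and network namespaces,
    #    then chroot into the container filesystem and remount /proc.
    unshare --pid --mount --uts --ipc --net --fork \
        chroot "$ROOTFS" /bin/sh -c 'hostname container; mount -t proc proc /proc; exec /bin/sh'
}

# Guard so the sketch degrades gracefully outside a prepared root environment.
if [ "$(id -u)" -eq 0 ] && [ -d "$ROOTFS" ]; then
    run_container
    STATUS=ran
else
    STATUS=skipped   # demo only: needs root and a prepared $ROOTFS
fi
echo "container demo: $STATUS"
```

A real version would also need image layering (aufs/overlayfs) and network plumbing (veth pairs), which is where most of the remaining lines would go.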
More on topic, though: most of the article is marketing BS, so developers have lots of buzzwords to give to their bosses when requesting approval for stuff related to Moby.
Disclosure: I work on Cloud Foundry on behalf of Pivotal.
I think this was mainly intended as an answer to the criticism Docker has been receiving (from Kubernetes maintainers and others) ever since they decided to ship Swarm with Docker. I think this move is great, as it goes a step further and allows you to swap out build systems and volume management too. Even though I did not mind Docker shipping with Swarm, others in the community did, and this shows Docker listened, which is great.
EDIT: grammar fix
It's fine to offer off-the-shelf supported configurations but why start with everything and the kitchen sink and then try to backtrack?
"Hey cool project, I want to try it!"
To having it set up: under a minute, depending on whether I'm at home or not.
Also, the VMs start really fast, so it's good for quickly getting exotic compiler toolchains running on Windows. Everything has a Dockerfile.
I will try to summarize: when we build Docker for Mac, Docker for Windows, Docker for AWS etc, we assemble a lot of individual components into a complete platform. Then we pack that platform into a bootable artifact for the target environment. That's a lot of work, and it gets harder as the number of targets multiplies. We developed a framework to make this more efficient. That framework has become the de-facto upstream of the Docker platform - it sits between the individual upstream projects and the finished product. So we're open-sourcing it as Moby, moving all of our open-source process under it, and inviting the community to come play. Think of it as the "Fedora of Docker".
Here are more technical details from the README: https://github.com/moby/moby/blob/moby/README.md
- If you're a Docker user, nothing changes: Docker remains the same
- If you're a Docker open-source contributor, you're now a Moby contributor. Everything is basically the same, except more modular and more open, and you are less tied to Docker.
- If you're building non-Docker container platforms, it's easier to share components and ideas with the Docker community, without being forced into anything you don't like.
The Moby tooling itself is pretty neat: you define all the components in your system (including the OS and hypervisor, if required), then pack them into the artifact of your choice. For example you can assemble LinuxKit+ContainerD+Redis into a tiny "RedisOS", and then boot it straight from bare metal; or virtualize it with HyperKit and run it on a Mac; or virtualize it with HyperV and run it on Windows. Moby does all of this for you automatically (this is one of the keynote demos).
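To make the assembly idea concrete, here is a hypothetical sketch of what such a "RedisOS" definition might look like in LinuxKit-style YAML. The component names and image tags below are placeholders, not taken from the actual keynote demo:

```yaml
# Hypothetical "RedisOS" assembly (LinuxKit-style YAML; tags are placeholders)
kernel:
  image: linuxkit/kernel:4.9.x
  cmdline: "console=tty0"
init:
  - linuxkit/init:latest
  - linuxkit/runc:latest
  - linuxkit/containerd:latest
services:
  - name: redis
    image: redis:alpine
```

The Moby tooling would then build this into the artifact for the chosen target - a bootable image for bare metal, or a VM image for HyperKit or Hyper-V - with something like `moby build redis-os.yml` (command shape assumed from the project's early tooling).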
We also showed a "Kubernetes in a box" assembly, to show that you don't have to stick to Docker-built components.
How do you (a for-profit company built on open source) enable your community to build and sell their own products while also enforcing your trademark? If anyone (everyone?) can build, distribute, and call their build `docker`, it becomes really hard to protect your investment (as a company with a fiduciary responsibility to its stockholders) from dilution of TheBrand(tm).
This is not cynicism, and I don't fault Docker (the company) at all; it's just the reality of doing business. It's why Canonical requires a contractual relationship for infrastructure clouds to distribute Ubuntu images, why you have things like Firefox and IceWeasel, and so forth.
None of these systems scales smoothly from a single-node setup up to a full distributed cluster. Here was my attempt at getting DC/OS to run as a minimal cluster on Digital Ocean:
We use DC/OS at work, and the web UI frequently transfers over 1 MB of JSON a second; the failed-containers tab can max out a CPU at 100% O_o
Marathon, Kubernetes, Nomad, and Swarm all have their own orchestration files in totally different formats. It gets even more confusing when you factor in pluggable network layers (Weave Net, Flannel).
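To make the fragmentation concrete, here is the same trivial service (three replicas of an nginx container, chosen purely as a placeholder) sketched in three of those formats: Kubernetes and Compose in YAML, then Marathon in JSON. Field names and API versions are illustrative of that era, not authoritative:

```yaml
# Kubernetes Deployment (extensions/v1beta1 was current around K8S 1.5/1.6)
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:alpine
---
# Docker Compose / Swarm stack file (version 3 schema)
version: "3"
services:
  web:
    image: nginx:alpine
    deploy:
      replicas: 3
```

```json
{
  "id": "/web",
  "instances": 3,
  "cpus": 0.25,
  "mem": 128,
  "container": {
    "type": "DOCKER",
    "docker": { "image": "nginx:alpine", "network": "BRIDGE" }
  }
}
```

The Marathon JSON describes the same thing a third time; none of the three definitions can be fed to the other two schedulers.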
I'd _like_ to think Docker is trying to bring some standardisation to clusters, networking and scheduling ... but I'm not sure if that's the case.
You wouldn't want to run a single node in production, of course, because that defeats the point (and introduces a single point of failure). But there's nothing technically preventing you from doing it.
As a Kubernetes user, I hope Docker as an application is going away, because it's increasingly in the way. Kubernetes would benefit from managing its containers with a more direct, more lightweight mechanism (like rkt or containerd).
If you're on a Debian/Ubuntu-based OS, you basically `apt-get install kubeadm kubelet kubectl`, then `kubeadm init` (and you have a 1-node cluster).
Then you `kubeadm join --token $token $master_node_ip` and you have the second node. Repeat as necessary, throw in some proper automation when you have enough (3+) nodes.
There is a disclaimer that it's beta, though.
Still, Docker feels much simpler, and I really believe "simpler" means "better". Maybe that's just my prejudice, but Kubernetes has an awfully enterprisey aftertaste and feels full of magic. If Docker or Swarm goes haywire (and every piece of software has bugs), I can check almost every component, piece by piece, down to what the kernel is told to do; thankfully, they're still not too far from the actual OS primitives. If the number of affected nodes is tolerably small, I can even "downgrade" to Compose and manual scheduling and networking kludges with relative ease. I'm unsure what I would do if Kubernetes someday insisted on malfunctioning.
I agree with you; Kubernetes seems insanely complicated. I haven't used it in production, though. I'm in a Marathon shop, and it is really slick once you have a working DC/OS cluster ... but setting up that cluster requires a full-time team. It's not trivial and far from simple.
What do you mean? This is exactly what they do.
> Running Kubernetes Locally via Minikube
> Minikube is a tool that makes it easy to run Kubernetes locally. Minikube runs a single-node Kubernetes cluster inside a VM on your laptop for users looking to try out Kubernetes or develop with it day-to-day.
We use 1-node for dev work and then n-node cluster on staging and production. My team's dev workflow spans from 1-node to n-node just fine.
Granted, I also wrote Matsuri to consolidate managing dev and production systems. But I also know that K8S 1.5 and 1.6 introduced a lot of resources that make it easier to manage things like that (such as ConfigMaps). I also know that Helm has been doing a lot in this area (I have never used it, because it overlaps with what Matsuri does).
Of course, there are very good reasons for requiring 3 machines, and since machines are so cheap, it should be an absolute no-brainer to spawn a $5 node to get that quorum running. That, I think, is the argument for it.
You run 3 nodes for availability and resilience. You can spread the 3 nodes across availability zones (isolating failure domains). If there is sufficient capacity, one of the three nodes can go down and the K8S master will reschedule the pods from the downed node. This also allows for controlled upgrades. If those are not requirements (your prototype doesn't need that kind of uptime), then why not run 1 node?
The only subsystem I know of that has a quorum mechanic is the etcd backing the k8s master. The docs recommend running the backing etcd in a 3-node configuration, again for availability. However, I run a k8s worker, the k8s master, and a 1-node etcd on the same dev box. (The trick is to tell etcd to listen on three ports and tell the single etcd node that those other ports are part of the quorum.) There is no reason a 10-user app that can tolerate some downtime cannot run like that either.
I do think, though, that K8S has not yet closed the gap for people who want to prototype quickly and get something out. Heroku is a great platform for that. There is a migration path from Heroku to Deis (which is built on top of Kubernetes and works with Heroku buildpacks).
You can run Kubernetes locally without a VM just fine, but a VM encapsulates everything nicely without needing to configure the host's network and so on. It doesn't "emulate" a cluster, it creates a cluster that happens to contain only one node. Plus, it works on non-Linux machines.
I've thought about using minikube for local development myself, as a way of reusing Helm charts: instead of having one setup for dev (perhaps docker-compose, or simply the stack's standard way) and one for staging/production (Helm), it's all the same Helm chart with just different values. Another reason is to encapsulate dev environments better. Right now, every stack I use has a different way to do it (Python's virtualenv, for example); with minikube, everything would use the same encapsulation. I've tried setting it up, and it works fine, using host mapping so files are still reloaded when they change locally. One thing I haven't solved yet: in this setup, stack-specific packages/modules are only installed inside the container, which is a problem because my editor/IDE needs the packages/modules available locally to do "intellisense".
Have you and your team dealt with the same problem, or other problems? How have you solved it? Any tips and thoughts are much appreciated!
I wrote Matsuri as a way to manage this. https://github.com/matsuri-rb/matsuri This was written with K8S 1.1 in mind, and later somewhat updated to 1.4.
It was primarily intended to solve:
(1) Consolidate interfaces for working with dev and production (or other environments such as staging, demo, etc.)
(2) It implements this by being a manifest generator that feeds the manifest into `kubectl`.
This was designed and written before ConfigMaps existed, so some of what Matsuri solved is now solvable with ConfigMaps.
Matsuri is written in Ruby, so it takes advantage of class inheritance and method overriding to achieve what it does. The class definitions used to generate the manifest are broken down into composable chunks, which can be overridden at any point.
In this way, I can have a base template for, say, a web pod shared across dev, staging, and production. In dev, I method-override what is returned for volume mounts and volumes; that way, I can specify source mounting and SSH agent sock mounting for dev, but not for staging or production. In another example, I can use method overriding to augment the environment variables passed into the container, capturing the uid/gid so that the container can reconfigure itself to use them.
There are helper methods to generate things like the correct format for ports or volumes. None of them are necessary if you already know how to format it.
Likewise, since a manifest is just a class definition, replication controllers, replica sets, and deployments all reference a pod and use it to create the pod template spec.
I then have an extra resource called an "app" that does not map to a K8S resource. Apps define a bundle of K8S resources (or other apps) that can be converged together, which lets someone quickly set up a dev environment with all the moving pieces. They also include hooks for things like defining the command for shelling in (with the correct uid/gid) or for a console (which varies; a redis console is invoked differently than a rails console). This allows a consolidation ... for example, to log into my dev rails console, I would do something like:
bin/dev console myapp
And to do that on production:
bin/production console myapp
(Powerful, but I suspect there will be some asks around security here).
I had flirted with the idea of allowing plugins or packages ... and then Helm was announced. I figured more people will use that than Matsuri. But one possibility might be to get Matsuri to work with Helm.
There are a lot of rough spots in Matsuri. The command-line interface could be done a lot better (right now, it assumes you will always be in the root of your platform repo). There is no documentation and there are no examples. It does not implement all of the resources. It also helps a lot if you know Ruby.
Infrastructure is not my full-time role on my team. Being a small team and early stage, we have had to wear multiple hats. I had not had the energy or time to document and promote this. If you want to chat about this further, feel free to email me at hosheng.hsiao at gmail.
One of the headaches we run into comes from using Alpine as a base image: it is compiled against musl instead of glibc. You always have to install gems from inside the container, but we can do that with host mounting and then running the install from inside the container. That wasn't a problem until we tried to install the google-api gem and found out grpc doesn't play well with musl.
Like others, I found the press release incomprehensible so the pull request linked to in the docker/docker README is where I'm drawing this understanding from: https://github.com/moby/moby/pull/32691
As far as why I think Docker might be doing this:
It seems like a continuation of a similar strategy that they did with containers in the first place back when they were known as dotCloud. Rather than continue to compete with Heroku as a PaaS they found it better to open source their container technology and start a container movement (for lack of a better word).
Now that the container movement has happened, there are a lot of competing tools at the container runtime level (Docker, rkt, systemd-nspawn, LXC, etc.) and the management level (Swarm, Kubernetes, Mesos, Nomad, various tools that delegate to AWS ECS, etc.). Rather than compete with all of those, they are pushing themselves up a layer of abstraction, making it easy for companies to create container management tools specific to their needs.
Rather than being just the Docker engine, it will coordinate docker, swarmkit, infrakit, and linuxkit in a single project.
These will be swappable, so for example you could a) swap swarmkit for Kubernetes, b) swap linuxkit for Debian, or c) swap infrakit for Terraform.
Just as "Docker for Mac/Windows/AWS/Azure/GCE" etc. already exist, Moby will likely house all these variations and allow the creation of custom "Docker/Other for X/Y/Z".
Is this really necessary? Seems like it's just going to create tons of confusion.
When was the whole Docker circus not confusing?
Seems like a move made in a rush. It would have been better to spin out the dependencies of docker into moby instead of moving the project back and forth.
Agreed that it is very confusing at this stage.
You've just summarized everything Docker Inc. does.
Moby is NOT recommended for:

- Application developers looking for an easy way to run their applications in containers. We recommend Docker CE instead.
- Enterprise IT and development teams looking for a ready-to-use, commercially supported container platform. We recommend Docker EE instead.
- Anyone curious about containers and looking for an easy way to learn. We recommend the docker.com website instead.
Basically, make all the building blocks of Docker available as a library, with some framework to use to connect them.
Probably the Docker project would become the blessed customer of this new framework.
I can't really see rkt, snap, OpenShift adopting this technology, though.
Edit: as many of the things they want this framework to do belong to orchestration (if there are several containers involved, and there often are), this might also look like a jab at Kubernetes.
For now you have to watch the DockerCon keynote to understand; it's intended as a "common assembly line for container systems".
That said, what do I know. The folks at docker are brilliant marketers (with a great product). The marketing is the main reason docker is so wildly popular.
- Moby VM is (was?) the name of the Linux VM that Docker for Mac (and Windows too, I think) runs as the host for Docker containers.
- Moby Dock is the name of the whale in Docker's logo
A Google search for "moby docker" restricted to results before April 2017 shows plenty of results
Edit: sample search result: http://lucjuggery.com/blog/?p=753