
On the topic of Docker and multi-container, multi-machine orchestration... is there a comprehensive "Docker deployment for dummies" guide out there? For example, let's say I have a couple of web applications with their Dockerfiles ready, plus a database and a Redis instance on the software side, and then a couple of server instances for it all to run on. Where do I go from there? What's the best process to package everything up and get it running on those servers, and to deliver updates to those applications, preferably in a zero-downtime manner? I have a vague notion that my CI should be building the images and pushing them to something called a Docker registry. But how are those secured? Is that a paid service? And what happens then? How do servers know to fetch and run the new version?

I've implemented a zero-downtime continuous deployment pipeline with Jenkins and Docker; see the project here: https://github.com/francescou/docker-continuous-deployment

I wouldn't call it comprehensive, but I did this:


This space is still (fairly) new, so the general answer seems to be that there are multiple solutions to each problem, some that work well with others and some that do not.

For orchestration, offhand the most active projects seem to be Kubernetes [1], Swarm [2], Deis [3] and Mesos [4]. Kubernetes is built primarily by Google, Swarm by Docker and Deis by Engine Yard, with each team having experience in a different area (orchestration, containers and full-stack solutions, respectively).

[1] http://kubernetes.io/ [2] https://docs.docker.com/swarm/ [3] https://github.com/deis/deis [4] http://mesos.apache.org/

Kubernetes, Swarm and Mesos handle the orchestration portions only, while Deis is a more feature-complete solution that handles the CI and registry portions as well.

Delivering updates to these solutions, and doing so with zero downtime, is still a very young area as well. Kubernetes has a rolling-update mechanism, but it can still occasionally result in downtime if not set up correctly. Deis handles updates via git push and will ensure that new containers are in place before the old ones are taken out of service. As for Swarm, my personal knowledge of its rolling-update story is limited, so I'll leave that for someone else to fill in.
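To make the "if not set up correctly" part concrete: a Kubernetes rolling update replaces pods one at a time, and without a readiness check it can send traffic to containers that aren't serving yet. A minimal sketch of a ReplicationController with a readiness probe (all names, the image, and the /healthz path are made up for illustration):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: webapp-v1
spec:
  replicas: 3
  selector:
    app: webapp
    version: v1
  template:
    metadata:
      labels:
        app: webapp
        version: v1
    spec:
      containers:
      - name: webapp
        image: quay.io/example/webapp:v1
        ports:
        - containerPort: 8080
        # Without this probe, a rolling update can route traffic to
        # containers that are not yet ready, causing brief downtime.
        readinessProbe:
          httpGet:
            path: /healthz
            port: 8080
```

The update itself is then something like `kubectl rolling-update webapp-v1 --image=quay.io/example/webapp:v2`, which swaps pods out one by one while the probe gates traffic.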

For building and delivering images there are likewise multiple solutions. The common approach is to use a Docker-compatible registry such as Quay [5] (disclaimer: I'm a lead engineer on the Quay team) or the Docker Hub [6]. In addition to supporting simple image pushes, both registries can also build images in response to GitHub or Bitbucket pushes, so they can serve as an integrated CI of sorts. Both services are paid for private repositories. Docker also has an open-source registry [7] which can be run on your own hardware or on a cloud provider.
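The "simple image push" path from a CI job is just a build, a tag and a push; a sketch with a hypothetical Quay repository name (tagging with the git commit is one common convention, not a requirement):

```shell
# Build the image from the app's Dockerfile, tagged with the short git SHA.
TAG=$(git rev-parse --short HEAD)
docker build -t quay.io/example/webapp:$TAG .

# Push to the registry; CI must already be logged in (docker login).
docker push quay.io/example/webapp:$TAG
```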

Registries are secured by running under HTTPS at all times (unless explicitly overridden in Docker via a flag) and by requiring user credentials for pushing and (if necessary) pulling images. Registries typically offer organization and team support as well, to allow for finer-grained permissions. Finally, some registries (such as Quay) offer robot credentials or named tokens as an alternative to using a password for pulls that occur on production machines.
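On a production machine that only ever pulls, the robot-credential flow looks roughly like this (the `org+robotname` form follows Quay's naming convention; the names and token variable here are illustrative):

```shell
# Log in once per host with a read-only robot account instead of a
# real user's password; credentials are cached in ~/.docker/config.json.
docker login -u 'example+deploy_robot' -p "$ROBOT_TOKEN" quay.io

# Subsequent pulls on this host use the cached robot credentials.
docker pull quay.io/example/webapp:v2
```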

[5] https://quay.io [6] https://hub.docker.com/ [7] https://github.com/docker/distribution/blob/master/docs/depl...

In terms of how servers know when updates are available, it all depends on which orchestration system is being used. For Kubernetes, we at CoreOS have been experimenting with a small service called krud [8], which reacts to a Quay (or Docker Hub) image-push webhook and automatically calls Kubernetes to perform a rolling update. Other orchestration systems have their own means of pushing or pulling the fact that the image to deploy has changed.

[8] https://github.com/coreos/krud
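The core of such a webhook-to-rolling-update bridge is tiny: parse the push event and construct the update call. A minimal sketch in Python; the payload field names below are made up for illustration and are not Quay's or krud's exact schema, so check your registry's webhook documentation for the real ones:

```python
import json

def rolling_update_command(payload, controller):
    """Turn a registry image-push webhook payload into a kubectl
    rolling-update invocation. Field names are illustrative only."""
    event = json.loads(payload)
    image = "%s:%s" % (event["docker_url"], event["updated_tags"][0])
    return "kubectl rolling-update %s --image=%s" % (controller, image)

# Example push event (made up for illustration):
push = json.dumps({
    "docker_url": "quay.io/example/webapp",
    "updated_tags": ["v2"],
})
print(rolling_update_command(push, "webapp-v1"))
```

A real service would run this behind an HTTP endpoint registered as the registry's webhook URL and shell out to (or call the API of) the orchestrator.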

Hope this information helps! (and if I forgot anything, I apologize)

The Docker ecosystem is hard to follow. As you've just mentioned, there are multiple solutions to each problem. Docker's own solutions for orchestration (Swarm), storage (v1.9) and networking (v1.9) overlap with the offerings from Kubernetes, Mesos, Flocker and a whole bunch of others.

It's hard to know whether to wait for Docker to provide a solution or to use something that already has momentum. Take networking, for example. Solutions have been bandied about for the last year or so, and only now do we have something that's production-ready. Do I rip out what I already have for something that is Docker-native, or do I continue with the community-based solution?
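For context, the Docker-native option being weighed here is the v1.9 overlay network; a sketch of how it looks in practice (image names illustrative, and the daemons must already be configured with a key-value store such as Consul for multi-host mode):

```shell
# Create a multi-host network using the built-in overlay driver.
docker network create -d overlay mynet

# Containers attached to "mynet" can reach each other by container name.
docker run -d --net=mynet --name=redis redis
docker run -d --net=mynet --name=web quay.io/example/webapp:v2
```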

Storage (data locality) follows a similar path. Kubernetes provides a way to make network-based storage devices available to your containers. But now, with the announcement of Docker v1.9, do I go with the native solution or with something that has been around ~6 months longer?
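The v1.9 native option here is named volumes with pluggable drivers; the shape of it, with an illustrative volume name (the default local driver is shown, but third-party drivers such as Flocker's plug in the same way):

```shell
# Create a named volume; -d selects the volume driver (local by default).
docker volume create --name pgdata

# Mount it into a container; the data outlives the container itself.
docker run -d -v pgdata:/var/lib/postgresql/data postgres
```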

I've been working with these technologies for the past year, and it has not been easy building something that is stable with a reasonable amount of future-proofing baked in.

My advice would be to think hard about your requirements and pick something which meets them. Don't fret about the "best" solution - you and your team have more important problems to solve. If something works for you then you have made the right choice. All the solutions you would pick today will still be around tomorrow.

Try writing a book on it! Maddening.

Great writeup, most helpful. Thanks a lot!

If you're comfortable deploying to AWS, I'm building an open source and free platform, Convox, that addresses your setup and deployment questions.

Here are a couple guides that walk you through your first Docker cloud deployment:

http://convox.github.io/docs/getting-started/ http://convox.github.io/docs/getting-started-with-docker/

This gives you a private build and registry service that is secured in your own VPC and accessible only through authenticated API calls.

The software that sets this all up is open source and free, but you do pay for your AWS usage (EC2, ELB and S3).

Servers know how to fetch the new version via a single `release` command that triggers a zero-downtime rollout on the EC2 Container Service (ECS).
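End to end, the workflow is only a few commands; a sketch using command names from around this era (they may have changed, so defer to the getting-started guides linked above):

```shell
# One-time: provision the cluster and supporting AWS resources.
convox install

# Per app: create it, then build from the local Dockerfile and roll it out.
convox apps create myapp
convox deploy
```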

