Lessons learned from using Docker Swarm mode in production (bugsnag.com)
92 points by gkze on Sept 15, 2016 | 52 comments



I am kind of surprised by the negativity about Docker and Swarm. Both of them put UX first and technology second. This is the correct approach to getting adoption (and also the reason they are popular). Getting started with Docker and Swarm is really, really simple, and it's very hard to dislike simple things. Compared to that, Mesos, OpenStack, and other tech stacks like that are gargantuan and IMO have very bad UX.


Consider that they have different target audiences.

Docker itself is mostly targeted as a tool for developers: you, the developer, dockerize your application, resulting in a container-image. Sure, that container-image then has to get deployed by someone (who isn't necessarily you), but the reason it's getting deployed at all is that a developer, at some point, made a decision to use Docker as part of the development process. Everyone else has to just deal with that.

Docker Swarm, meanwhile, is infrastructure, pure and simple. Developers don't touch it; ops people do. And ops people have very different opinions on what makes for a good piece of software than developers do. "Good UX" comes second to things like "stable" and "low overhead" and "predictable failure modes" and "configurable from a central source of truth."


In big companies ops trumps devs, and that's correct, because they develop for 6-12 months and then operate that software for 6 or 12 years. Been there, seen that (at a mobile phone operator). So if ops says that Docker is a no-no for deployment, the dev has to work with another technology or convince them that everybody is going to benefit from it.

Startups begin as small companies, and small companies have a single team that decides how to develop and deploy. Usually the developers deploy and take care of production too. What's convenient for development often trumps what's convenient for production, at least for the first months or years.


You're presuming that the same company develops and deploys the software. My company runs many third-party Docker container-images in production—precisely because a Docker container-image is the only format that software comes in.


I've seen ops agree to run a couple of services on Windows and Linux at a time when everything was HP-UX and Solaris. There were no good alternatives for those services, so ops were not happy but had to learn how to operate those servers. Can I assume you went through the same?


this +100

I (the single dev at my startup at that time) adopted Docker 0.4 alpha and have grown with the Docker ecosystem. Today we pay for Codeship, etc.

There is zero chance I would have gotten Docker buy-in if I could not have gotten started back then. The same is true for Docker Swarm and k8s today.

I'm struggling with k8s, while the evolution of Docker -> Docker Compose -> Docker Swarm is fairly easy and incremental.

In 2 years' time, when I have a large devops team, I will spend money on Swarm. Docker Swarm is conquering from the bottom. K8s still has a chance, but it is choosing to compete with OpenStack rather than Docker Compose, which is a big mistake IMHO.


You have a good point, but startups do not have the luxury of a separate ops team. It's the dev teams that deploy the code as well.


I have been wondering: what's the business model for Docker itself?


Selling training, support, and Docker Datacenter (http://www.docker.com/products/docker-datacenter)


I am a broken record lately on here... Nomad is really awesome for avoiding configuration hell and having to manage multiple services for container orchestration. It's a single binary and very, very easy to set up and run. I actually prefer it to Swarm, but YMMV.

It uses Consul under the hood and has so far been bulletproof. (They all have their drawbacks / idiosyncrasies).

https://www.nomadproject.io/


The point I'm trying to make is that these clustering capabilities are built directly into Docker, so you don't have to run anything else but Docker. That is a huge win for us.


The one feature that to me seems to be essential but appears to be missing from all these container orchestrators is the ability to tie a remote volume (Ceph/Gluster/Lustre/etc) to a container so that if a container is scheduled to run on a certain node, the volume will automatically be mounted on the same node.

It seems from the mailing list that at least Nomad will have that at some point, but I have not seen much talk about it from Kubernetes or Docker Swarm.


Kubernetes supports this via PersistentVolumes. Supported types include:

GCEPersistentDisk, AWSElasticBlockStore, AzureFile, FC (Fibre Channel), NFS, iSCSI, RBD (Ceph Block Device), CephFS, Cinder (OpenStack block storage), Glusterfs, VsphereVolume, and HostPath (single-node testing only; local storage is not supported in any way and WILL NOT WORK in a multi-node cluster)

http://kubernetes.io/docs/user-guide/persistent-volumes/
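For illustration, here is a minimal sketch of how a Ceph RBD volume could be declared as a PersistentVolume and then claimed, so it follows the pod wherever it is scheduled (all names, the monitor address, and the pool/image are hypothetical placeholders; see the docs linked above for the authoritative schema):

```yaml
# Hypothetical PersistentVolume backed by a Ceph RBD image
apiVersion: v1
kind: PersistentVolume
metadata:
  name: app-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  rbd:
    monitors:
      - 10.0.0.1:6789      # hypothetical Ceph monitor
    pool: rbd
    image: app-data        # hypothetical RBD image name
    secretRef:
      name: ceph-secret    # hypothetical secret holding the Ceph key
---
# Claim that a pod can reference; Kubernetes binds it to a matching PV
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```

When a pod referencing the claim is scheduled, the kubelet on that node attaches and mounts the RBD image there, which is exactly the "volume follows the container" behaviour the parent comment is asking about.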


Interesting, thanks! To be honest I haven't looked too much into Kubernetes so far because of its emphasis on cloud deployment, while my interest is in setting it up on bare metal.

This feature might make me give it a try though!


There are plenty of people running Kubernetes on bare-metal. There are a number of resources out there for PXE-booting a Kubernetes cluster on bare-metal.

Might be time to take a closer look ;).


Yes, but the instructions that I've found for doing so are based on installation via shell scripts, which I'm not comfortable with. I'd like to understand it well before running it, even in a test setup.


The top result for "PXE Boot Kubernetes" is an extremely comprehensive step-by-step guide from CoreOS. I didn't see any magic shell scripts. The second result is an Ansible-based guide, with links to the constituent Ansible playbooks.

If you want insight into how the pieces fit together, the lessons from Kelsey Hightower's Kubernetes the Hard Way will certainly map to bare-metal environments as well.


CoreOS has some pretty comprehensive docs on deploying Kubernetes on bare metal:

https://coreos.com/kubernetes/docs/latest/kubernetes-on-bare...


True. I didn't know that was even being discussed. I'd never really thought about the container host dynamically pulling in a volume as a dependency for a container.

The best I've been able to do is use AWS EFS on my container hosts so that my ECS tasks with volume mounts find the same stuff everywhere.


EFS on AWS is an interesting approach. It's an NFS mount that every instance and every container can share.

I wouldn't run a database off it but it's been great for simple file synchronization across containers and container restarts.
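Concretely, it's an NFSv4.1 mount; an fstab entry along these lines (the filesystem ID and region are hypothetical) makes the same EFS share appear on every container host:

```
# Hypothetical /etc/fstab entry for an EFS filesystem
fs-12345678.efs.us-east-1.amazonaws.com:/  /mnt/efs  nfs4  nfsvers=4.1,hard,timeo=600,retrans=2  0  0
```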


We tried using EFS for shared storage but quickly depleted our I/O burst credits, and our throughput ground to a halt, because our app's workload is both read- and write-intensive on disk. No solution yet.


If you need more performance than EFS for your shared filesystem storage, you could give our ObjectiveFS (https://objectivefs.com) a try. We see significantly higher read/write performance, especially for smaller files.


Unrelated, just some color commentary: I put in my first request that took over 24 hours to be approved... 150k+ provisioned IOPS. Seems they don't like doing that.


Just curious: does it provide rolling updates to the jobs that you're currently running?


Nomad can do it, but I've not used that feature.
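For reference, rolling updates in Nomad are configured with an `update` stanza in the job file; a minimal sketch (the job name and values are illustrative):

```hcl
job "web" {
  # Roll out changes one allocation at a time,
  # waiting 30s between each batch
  update {
    max_parallel = 1
    stagger      = "30s"
  }

  # ... group/task definitions go here ...
}
```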


This is a great write up, thanks for sharing the lessons learned.

I wonder if the open questions about instance management are solved by the "Docker for AWS" beta.

We are entering the commodity phase for orchestration software.

Blogs and HN comments are full of success stories on Swarm, Kubernetes, Mesos, Nomad and ECS.

There are also a few warnings, like the routing issue in this review, but it's simply a matter of time before those get sorted.

What's really going on here is that we are all learning how to handle the complexities of distributed systems in the cloud. These new foundations mean we can run more sophisticated apps more easily and more reliably.


Nothing whose version ends in "-rc4" is used in "production". You're using it in a very hot beta test.


We ran RC4 in production, and we're running 1.12.1 GA in production right now as well. We have been making money while running this and serving live customer traffic, so we consider it production :)


I hope you have a plan for your paying users when it breaks in production.


Honestly, you should try kubernetes. The experience is pretty much the same and the feature set is much more mature.


My experience with Kubernetes is limited, so take all this with a grain of salt. I've been using the integrated swarm since beta.

The new integrated swarm is a real game-changer in that it is much simpler to use compared to other solutions. With swarm, it's simply:

    docker swarm init
    docker swarm join --token <blah> <blah:2377>  
That being said, I found that Kubernetes offers more granularity in the level of control over the cluster. That's not something that __I__ need necessarily, though obviously YMMV.
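Rolling updates are similarly terse in 1.12 swarm mode; a sketch (the service name and image tags are illustrative):

```shell
docker service create --name web --replicas 3 nginx:1.11

# Later, roll the image forward one task at a time,
# waiting 10s between tasks
docker service update --image nginx:1.12 \
  --update-parallelism 1 --update-delay 10s web
```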


I've had the same great, simple experience with Flynn. The terminal commands you give are barely different from Flynn's. And I like Flynn's capabilities better, at least for now. They've already figured out the routing fabric (unlike Docker, per the article) and they have a great redundant DB capability, sorely missing from other PaaSes, even k8s.

I have no doubt that docker will eventually catch up though.


That's cool. I've never looked at Flynn, but you've motivated me to give it a look!


People are working on this feature for Kubernetes:

  kubeadm init master
  kubeadm join node --token=73R2SIPM739TNZOA <master-ip>
https://github.com/kubernetes/kubernetes/pull/30360


The big takeaway for me is that first impressions matter. Although the bulk of the post is about hard-earned knowledge and workarounds for completely unreliable features, getting a proof of concept in twenty minutes sealed the deal.


While we're on the topic can anyone recommend a system for rolling out (Java) applications across server farms that doesn't use containers? We have a bunch of shell scripts that are pretty horrible.

We could containerize, but we don't need that right now.


I know it's counter to your question, but it really is quite trivial to containerise a Java app, for example:

  FROM openjdk:8u92-jdk-alpine
  COPY file.jar file.jar

  CMD java -Duser.timezone=UTC -cp file.jar com.foo

And to stay on-topic: you can run Java apps in Mesos without a Docker wrapper :)


Ansible. It isn't perfect, but is far better than shell scripts for application deployment.
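As a rough sketch of what replacing those shell scripts could look like, here is a playbook that rolls a jar out one host at a time (the hostnames, paths, and service name are hypothetical, and this assumes the app is managed as a system service):

```yaml
- hosts: appservers
  serial: 1                    # rolling deploy: one host at a time
  tasks:
    - name: Copy the application jar
      copy:
        src: build/app.jar     # hypothetical local path
        dest: /opt/app/app.jar

    - name: Restart the application service
      service:
        name: app              # hypothetical service name
        state: restarted
```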


This is what I use. Other players include Puppet, Salt, and Chef


Distelli. It'll be a direct translation of your shell scripts to their format. I've had a great experience with them.


I haven't used it, but Nomad has specific support for Java apps.


From their site: Nomad has extensible support for task drivers, allowing it to run containerized, virtualized, and standalone applications. Users can easily start Docker containers, VMs, or application runtimes like Java.


The Java driver is documented here: https://www.nomadproject.io/docs/drivers/java.html
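A minimal sketch of a Nomad task using that driver (the URL, paths, and options are illustrative):

```hcl
task "app" {
  driver = "java"

  # Fetch the jar into the task's local/ directory
  artifact {
    source = "https://example.com/app.jar"  # hypothetical URL
  }

  config {
    jar_path    = "local/app.jar"
    jvm_options = ["-Xmx512m"]
  }
}
```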


I don't get it. Containerizing your apps will help with many things including reliable rollouts. It's trivial to containerize a Java app. Do you just not want to learn about containers?

Using a container orchestrator for deployment is pretty much better than using a CM tool in every way... and it's certainly better than trying to half-ass one with bash scripts.


It is not always trivial to get buy-in from your manager to shift your entire datacenter to running Docker, or from the DB admins or the other dev teams; nor might it be trivial to tell the ops guys to go figure out how to run it.

If you are all of the above yourself, things are much easier.


NixOps?


> This might be a wishlist item (since we don’t find ourselves doing it frequently enough to merit an automated solution), but it would be very nice to be able to simply bake a new AMI, the completion of which would trigger a job that could swap out instances one or several at a time, such that we would be able to perform zero-downtime upgrades automatically. This can still be done, but right now it’s by hand.

BOSH[0] does rolling deploys, with canaries, out of the box.

At Pivotal we completely upgrade Pivotal Web Services to the latest Cloud Foundry within about a day of its release. PWS is us dogfooding the hard way: with a flagship platform that some of our customers have sue-you-to-dust-if-it-fails support contracts for.

Thousands of apps, tens of thousands of containers, thousands of VMs.

None of which know that we restarted the entire infrastructure beneath them.

Disclosure: I guess that to the degree that Docker Inc realises that platforms are where the money is, my employers at Pivotal are competitors. But BOSH is still a fit for what you want.

[0] http://bosh.io/
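The rolling-deploy behaviour described above is driven by the `update` block of a BOSH deployment manifest; a sketch (the values are illustrative):

```yaml
update:
  canaries: 1                      # update one canary instance first
  canary_watch_time: 30000-60000   # ms to wait for the canary to settle
  max_in_flight: 2                 # then at most 2 instances at a time
  update_watch_time: 30000-60000
```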


I respectfully disagree.

When running on AWS, you want CloudFormation controlling an Auto Scaling group for this.
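Specifically, an `UpdatePolicy` on the Auto Scaling group lets CloudFormation replace instances in batches when the AMI in the launch configuration changes; a sketch (the names and sizes are hypothetical):

```yaml
AppGroup:
  Type: AWS::AutoScaling::AutoScalingGroup
  UpdatePolicy:
    AutoScalingRollingUpdate:
      MinInstancesInService: 2   # keep capacity up during the roll
      MaxBatchSize: 1            # replace one instance at a time
      PauseTime: PT5M
  Properties:
    LaunchConfigurationName: !Ref AppLaunchConfig  # hypothetical LC holding the new AMI
    MinSize: 2
    MaxSize: 4
```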


OK, for disposable units, it will make sense. I'm less comfortable with entrusting highly stateful services to AWS alone.

As a footnote, BOSH works on AWS, OpenStack, vSphere, Azure, and GCP, and there are experimental CPIs for RackHD and Photon.


As long as you're willing to re-architect your app so that workers pull web requests from a queue, rather than expecting to just... serve traffic normally.

Or did I misunderstand that section?


Incorrect: that's what we're running currently, but as soon as the routing mesh issues are resolved, you can start running apps that listen on ports too.


Docker has been copping some flak recently for releasing things that seem rushed and incomplete (the routing mesh being a good example). Doesn't it worry you that you're using a release candidate in production? It seems risky considering their official releases are still pretty buggy.



