
We've been using Docker for YippieMove (www.yippiemove.com) for a few months now, and it works great.

Getting your head around the Docker philosophy is the biggest hurdle IMHO, but once you're there it's a delight to work with. The tl;dr is to not think of Docker as VMs, but rather as fancy `chroots`.

In any case, to answer your question, for us it significantly decreased deployment time and complexity. We used to run our VMs and provision them with Puppet (it's a Django/Python app), but it took a fair amount of time to provision a new box. Moreover, there were frequently issues with dependencies (such as `pip install` failing).

With Docker, we can more or less just issue a `docker pull my/image` and be up and running (plus some basic provisioning of course that we use Ansible for).
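For illustration, a rough sketch of what that looks like on a fresh host (the image name is the one from the comment; container name and port are placeholders):

    # pull the prebuilt image, then start it
    docker pull my/image
    docker run -d --name webapp -p 8000:8000 my/image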




How do you do restarts when you update the app? I assume you have to take the app server out of the server pool (remove it from the load balancer or nginx) and shut it down, then `docker pull` your image.

I'm doing deploys with Ansible and it's just too slow.


Actually, we have Nginx configured with health checks (http://nginx.org/en/docs/http/load_balancing.html#nginx_load...). Hence, it will automatically take a given appserver out of the pool when it stops responding. Once the node is back up, Nginx will automatically bring it back into rotation.
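For reference, open-source Nginx does this with passive health checks on the upstream block; roughly something like the following (IPs, ports and thresholds are placeholders):

    upstream appservers {
        # after max_fails failed requests, the server is pulled from
        # rotation for fail_timeout, then quietly retried
        server 10.0.0.11:8000 max_fails=3 fail_timeout=30s;
        server 10.0.0.12:8000 max_fails=3 fail_timeout=30s;
    }

    server {
        listen 80;
        location / {
            proxy_pass http://appservers;
        }
    }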

Also, we actually use a volume/bind-mount to store the source code on the host machine (mounted read-only). That way we can roll out changes with `rsync` and just restart the container if needed.
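As a rough sketch of that workflow (host names and paths are made up; the container is assumed to have been started with the source directory bind-mounted read-only):

    # container started once with the code mounted read-only
    docker run -d --name webapp -p 8000:8000 \
        -v /srv/app/src:/srv/app/src:ro my/image

    # roll out a change: sync the code to the host, restart if needed
    rsync -az --delete ./src/ deploy@app1:/srv/app/src/
    ssh deploy@app1 'docker restart webapp'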

The only time we need to pull a new update is if the dependencies/configuration of the actual container change.


How do you deal with connections that are in progress to the app server? If you just take it down, you're potentially throwing away active connections.


Yes, that's absolutely true and something we're aware of. It would of course be possible to solve, but would increase the complexity by a fair amount.

It is also worth mentioning that the service is more back-end heavy than front-end heavy. Since each email migration runs isolated in its own Docker container, a given customer can generate hundreds of Docker containers (see the sketch below).

Hence, given the relatively low volume of users on the web app, and the fast restart time, the chance of throwing away an active connection is relatively low.
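Roughly what launching one of those per-migration containers could look like (the image name and command are hypothetical; it's a Django app, so a management command is a plausible entry point):

    # hypothetical: one short-lived container per migration job
    docker run -d --name migration-1234 my/worker \
        python manage.py run_migration --job-id 1234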


OK, thanks. I have several app servers; for each one I take it out of the nginx server list, stop it gracefully, git pull and configure (slow, I want to get rid of this step), put it back into the nginx list, and move on to the next one.

Tedious, although my whole deploy-to-all-servers is a single command.


Yeah, that sounds pretty tedious, but I guess it could still be automated (but somewhat tricky).

Once CoreOS becomes more stable, we're looking to move to it. The idea is then to use `etcd` to feed the load balancer (probably Nginx) with the appserver pool. That way you can easily add new servers and decommission old ones.
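A rough idea of the registration side, assuming etcd's v2 `etcdctl` syntax (the key names are made up, and a watcher such as confd would regenerate the Nginx upstream block from the prefix):

    # register / deregister an appserver under a well-known prefix
    etcdctl set /services/appservers/app1 '10.0.0.11:8000'
    etcdctl rm  /services/appservers/app1

    # a watcher (e.g. confd) rewrites nginx.conf from this prefix
    etcdctl ls /services/appservers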


We automated this pretty trivially at my last job using Fabric[0]. All we had to do was cycle through a list of servers and apply the remove-from-LB, update, add-to-LB steps. Removing from the LB should simply block until connections drain (or some reasonable timeout). It makes deploys take longer for sure, but avoiding the inevitable killing of user connections was worth it.

[0] http://www.fabfile.org/
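A minimal sketch of that loop with Fabric 1.x; `lb-remove` and `lb-add` are stand-ins for whatever mechanism your load balancer exposes for draining and re-adding a backend, and the host names and paths are placeholders:

    # fabfile.py (Fabric 1.x) -- rolling deploy, one host at a time
    from fabric.api import env, run, execute

    APP_SERVERS = ["app1.example.com", "app2.example.com"]

    def deploy_one():
        run("sudo lb-remove %s" % env.host)   # drain and remove from the pool
        run("cd /srv/app && git pull && sudo service app restart")
        run("sudo lb-add %s" % env.host)      # back into rotation

    def deploy():
        for host in APP_SERVERS:
            execute(deploy_one, hosts=[host])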


I will add that if you're using Docker (which we weren't) it might be easier to deploy a new set of Docker containers with updated code and just throw away the old ones.
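Roughly like this (container names and ports are placeholders; the load balancer gets repointed between the two steps):

    # pull the new image and start a fresh container alongside the old one
    docker pull my/image
    docker run -d --name webapp-new -p 8001:8000 my/image

    # repoint the load balancer at :8001, then retire the old container
    docker stop webapp-old && docker rm webapp-old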


That's exactly what I'm doing, but with Ansible. I used to do it with Fabric before that.

It's slow, but I go have a cup of tea while it works.



