Nice article, but there's a fundamental thing missing: how to deploy and manage a fleet of containers without downtime (e.g. a systemd service that stops the container(s) and starts them with the new image is not optimal), and how to centralize logging.
From my experience, "deploying" containers is pretty easy: build the image, push the image, pull the image, start the container (roughly the commands sketched below). The hard parts are: how to deal with persistence (I decided to let RDS manage my database), how to centralize logging and monitoring (just CloudWatch? Swarm? Kubernetes?), how to make sure the cluster is healthy at all times (e.g. if a container crashes, is it being replaced? How long does it take?), and how to make sure you have the correct number of containers running (e.g. I want 2 containers for the app server, 3 as workers, and 1 as a load balancer).
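For the easy part, the whole cycle is more or less four Docker CLI commands. A minimal sketch, assuming a private registry at registry.example.com and an image called myapp (both made-up names):

    # On the build machine / CI
    docker build -t registry.example.com/myapp:latest .
    docker push registry.example.com/myapp:latest

    # On the server
    docker pull registry.example.com/myapp:latest
    docker run -d --name myapp -p 8000:8000 registry.example.com/myapp:latest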
If you really think about it, once you set those advantages (the ones you get from containers, that is) aside, the difference between running a container and having a script that provisions a virtualenv and adds a supervisor/systemd/init.d service is negligible.
The post is about deploying a Django app on what would normally be a single server. What you describe can be done with various tools like Kubernetes, but it's outside the scope of the post. The difference is, indeed, not as big when you only have a single server, but it's still nice to not need to run a dozen commands to go from "blank server" to "running app".
If only it were easier to have seamless upgrades, it would be perfect.
Hey slig. I use Git (or GitHub) flow, so yes, every commit to master is necessarily a deployment (that's why I only have the build and deployment running on the master branch).
Captain Webhook just restarts the service. The systemd service file looks like this, and auto-pulls on every restart: https://www.pastery.net/xrutdm/
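A unit in that style typically looks something like the following; the container name, image, and port here are hypothetical, not the actual file from the paste:

    [Unit]
    Description=My Django app container
    After=docker.service
    Requires=docker.service

    [Service]
    Restart=always
    # Clean up any old container, then pull the latest image before every (re)start
    ExecStartPre=-/usr/bin/docker rm -f myapp
    ExecStartPre=/usr/bin/docker pull registry.example.com/myapp:latest
    ExecStart=/usr/bin/docker run --name myapp -p 8000:8000 registry.example.com/myapp:latest
    ExecStop=/usr/bin/docker stop myapp

    [Install]
    WantedBy=multi-user.target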
Yes, I think the way to do that is to have nginx switch new requests to the updated app server, and shut down the first one when it's not serving requests any more.
I don't know how you'd communicate with nginx inside the container to tell it to switch, though, or how you would be able to know when all requests were done on the old container. Hopefully there's an easy way.
Too bad that a single SIGHUP becomes so complicated with containers, but I guess it's a tradeoff.
About Beanstalk, I haven't looked into it too much; maybe it can help with deployment. I'd be grateful if you could let me know how it works out, if you ever try it.
Very neat. I used to run Django + Docker in production myself, but we handled deployments via CircleCI. Essentially, the build version would be published into Consul, and CircleCI would handle generating a new build version from git. The only crappy part was Docker tooling breaking our builds every other week. We relied on some home-built code that handled publishing to the Docker registry and updating our Consul cluster. That said, we never included the database in our deployment strategy and relied on it being up and working 24/7. Also, Consul was a constant pain in the ass.
> We have to do some contortions with the Django devserver, because Docker doesn’t care if Postgres is ready before starting the server, so Django sees that it can’t contact the database and quits. So, we just wait until port 5432 is ready before starting the devserver.
docker-compose starts all services simultaneously and doesn't want you to use an init system inside the container, which means that any dependency that requires a service to be actually available needs some sort of waiting hack.
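A typical waiting hack is a small wrapper script used as the app container's command, something like the sketch below (it assumes the Postgres service is named db in docker-compose.yml and that netcat is available in the image):

    #!/bin/sh
    # Block until Postgres accepts TCP connections, then hand off to the devserver.
    until nc -z db 5432; do
      echo "Waiting for Postgres..."
      sleep 1
    done
    exec python manage.py runserver 0.0.0.0:8000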
Oh, so there actually isn't a way to start the services in a particular order? I've always wondered about that; I just have my services restart if they can't connect.
Ah, that sounds good. How does Kubernetes handle updates safely? That requires pretty deep integration with the thing running in the container, doesn't it?
It supports rolling updates, where one pod (usually a single container) is updated at a time, with traffic being sent to the other pods during that time.
I think the best practice right now is to use a Deployment (alternatively, you can initiate a rolling update manually). Using a Deployment makes updates as simple as "kubectl patch ...".
It does require a load balancer, which varies from platform to platform. On AWS, I assume it uses an ELB. On-premises, you might use contrib/service-loadbalancer (HAProxy).
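As a rough illustration (the Deployment and image names are made up), driving a rolling update from the command line can be as simple as swapping the image on the Deployment and watching the rollout:

    # Point the Deployment's pod template at the new image; this triggers a rolling update
    kubectl set image deployment/myapp myapp=registry.example.com/myapp:v2

    # Watch pods being replaced one at a time
    kubectl rollout status deployment/myapp

    # Roll back if the new version misbehaves
    kubectl rollout undo deployment/myapp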