Using Docker to develop and deploy Django apps (stavros.io)
115 points by stavros on Oct 3, 2016 | 27 comments



Nice article, but there is a fundamental thing missing: how to deploy and manage the fleet of containers so as to avoid downtime (e.g. having a systemd service that stops the container(s) and starts them with the new image is not optimal), and how to centralize logging.

From my experience, "deploying" containers is pretty easy: build the image, push the image, pull the image, start the container. The hard parts are: how to deal with persistence (I decided to let RDS manage my database), how to centralize logging and monitoring (just CloudWatch? Swarm? Kubernetes?), how to make sure the cluster is healthy at all times (e.g. if a container crashes, is it being replaced? How long does it take?), and how to make sure you have the correct number of containers running (e.g. I want 2 containers for the app server, 3 as workers, 1 as load balancer).

If you really think about it, once you remove these advantages (from using containers, that is), the difference between running a container and having a script that provisions a virtualenv and adds a supervisor/systemd/init.d service is negligible.


The post is about deploying a Django app on what would normally be a single server. What you describe can be done with various tools like Kubernetes, but it's outside the scope of the post. The difference is, indeed, not as big when you only have a single server, but it's still nice to not need to run a dozen commands to go from "blank server" to "running app".

If only it were easier to have seamless upgrades, it would be perfect.


I used to be in this situation. Then I started using https://convox.com/ and all my problems went away.


Hi Stavros,

Could you please clarify how you deploy a new version?

Not every commit to master is necessarily a deployment, right? So how do you trigger it?

How did you configure Captain Webhook to get the new image and replace the current one that was running?


Hey slig. I use Git (or GitHub) flow, so yes, every commit to master is necessarily a deployment (that's why I only have the build and deployment running on the master branch).

Captain Webhook just restarts the service. The systemd service file looks like this, and auto-pulls on every restart: https://www.pastery.net/xrutdm/
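
Roughly, the unit just does a stop/pull/run cycle; something along these lines (the service, image name, and port here are placeholders, the actual file is at the link):

  [Unit]
  Description=My Django app container
  After=docker.service
  Requires=docker.service

  [Service]
  Restart=always
  # Ignore failures if no old container exists yet.
  ExecStartPre=-/usr/bin/docker stop myapp
  ExecStartPre=-/usr/bin/docker rm myapp
  # Pull the latest image on every (re)start, so restarting the service deploys the new version.
  ExecStartPre=/usr/bin/docker pull registry.example.com/myapp:latest
  ExecStart=/usr/bin/docker run --name myapp -p 8000:8000 registry.example.com/myapp:latest
  ExecStop=/usr/bin/docker stop myapp

  [Install]
  WantedBy=multi-user.target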

I will update the post with the above, thank you.



Sorry about the default duration, I've changed mine to never expire, thanks.


How do you prevent downtime from Docker restarting?


That is an excellent question! I don't, right now, but I'll find a way!


I think the "right" answer here is to put something like nginx in front with load balancing, and rolling restarts, or something like that.
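
In nginx terms, that's roughly an upstream with more than one copy of the app behind it, so each one can be taken out, updated, and put back in turn (the ports here are made up):

  upstream app {
      # Two app containers published on different host ports.
      server 127.0.0.1:8001;
      server 127.0.0.1:8002;
  }

  server {
      listen 80;
      location / {
          proxy_pass http://app;
      }
  }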

I'm in a very similar situation to what you've described, and I'll start on that part of the project sometime in the next month or so.

Also been looking at Elastic Beanstalk for deployments. Have you looked at that yet?


Yes, I think the way to do that is to have nginx switch new requests to the updated app server, and shut down the first one when it's not serving requests any more.

I don't know how you'd communicate with nginx inside the container to tell it to switch, though, or how you would be able to know when all requests were done on the old container. Hopefully there's an easy way.

Too bad that a single SIGHUP becomes so complicated with containers, but I guess it's a tradeoff.

About Beanstalk, I haven't looked into it too much; maybe that can help with deployment. I'd be grateful if you could let me know how it worked, if you ever try it out.


Very neat. I used to use Django + Docker in production myself, but we handled deployments via CircleCI. Essentially, the build version would be published into Consul, and CircleCI would handle the generation of a new build version from git. The only crappy part was Docker tooling breaking our builds every other week. We relied on some home-built code that handled publishing to the Docker registry and updating our Consul cluster. That being said, we never included the database in our deployment strategy and relied on it being up and working 24/7. Also, Consul was a constant pain in the ass.


"# Docker hack to wait until Postgres is up, then run stuff."

wat


FTA:

> We have to do some contortions with the Django devserver, because Docker doesn’t care if Postgres is ready before starting the server, so Django sees that it can’t contact the database and quits. So, we just wait until port 5432 is ready before starting the devserver.


This is unfortunately necessary.

docker-compose starts all services simultaneously and doesn't want you to use an init system inside the container, which means that any dependency that requires a service to be actually available needs some sort of waiting hack.
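
The waiting hack is usually just a small loop in the entrypoint, something like this minimal sketch (assuming the Compose service is called "db" and netcat is available in the image; the article's exact script may differ):

  #!/bin/sh
  # Block until Postgres accepts TCP connections on db:5432, then start the devserver.
  until nc -z db 5432; do
      echo "Waiting for Postgres..."
      sleep 1
  done
  exec python manage.py runserver 0.0.0.0:8000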


Oh, so there actually isn't a way to start the services in some sort of timed order? I've always wondered about that; I just have my services restart if they can't connect.


Nope, not yet :/


In Dockerfile:

# Remove the git repo to save space.

RUN rm -rf /code/.git

Pretty sure that will not save any space, because the .git directory will still be present in an earlier layer; removing it in a separate RUN only adds a new layer on top. If you want the space actually saved, the clone and the cleanup have to happen as one command:

RUN git clone ... /code/ && rm -rf /code/.git


Ah, thanks for that! I wasn't sure if docker was sending all the commits or just the latest one. Will change, thanks!


If you only have one server, try Dokku.


Thank you, I'd heard about Dokku but wasn't sure if it fits. I'll give it a look!


I still prefer to use kubernetes for this: https://dewyatt.github.io/articles/continuous-deployment-of-...


Is it worth it for just one server? Also, how do you handle the database? Do you put that in a container too?


For production, it may be. Kubernetes would handle updates in a safer way, avoiding service interruptions.

Yes, the database can be in a container as well, with a persistent volume.

It's not that much more complex than what you just went through.


Ah, that sounds good. How does Kubernetes handle updates safely? That requires pretty deep integration with the thing running in the container, doesn't it?


> How does Kubernetes handle updates safely?

It supports rolling updates, where one pod (which usually means one container) is updated at a time, with traffic being sent to the other pods during that time.

I think the best practice right now is to use a Deployment (alternatively you can initiate a rolling update manually). Using a deployment makes updates as simple as "kubectl patch ...".
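
For example, something like this (deployment, container, and image names made up) switches the image and lets Kubernetes replace the pods one at a time:

  kubectl patch deployment myapp -p \
    '{"spec":{"template":{"spec":{"containers":[{"name":"web","image":"registry.example.com/myapp:v2"}]}}}}'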

It does require a load balancer, which varies from platform to platform. On AWS, I assume it uses an ELB. On premises, you might use contrib/service-loadbalancer (haproxy).


Sounds ideal, thanks. I'm reading the docs right now, and it looks pretty simple, conceptually.



