
Using Docker to develop and deploy Django apps - StavrosK
https://www.stavros.io/posts/how-deploy-django-docker/
======
lambdacomplete
Nice article, but there is a fundamental thing missing: how to deploy and
manage the fleet of containers to avoid downtime (e.g. a systemd service
that stops the container(s) and starts them with the new image is not
optimal), and how to centralize logging.

From my experience, "deploying" containers is pretty easy: build the image,
push the image, pull the image, start the container. The hard parts are: how
to deal with persistence (I decided to let RDS manage my database), how to
centralize logging and monitoring (just CloudWatch? Swarm? Kubernetes?), how
to make sure the cluster is healthy at all times (e.g. if a container crashes,
is it being replaced? How long does that take?) and how to make sure you have
the correct number of containers running (e.g. I want 2 containers for the
app server, 3 as workers and 1 as a load balancer).

If you really think about it, once you remove these advantages (from using
containers, that is) the difference between running a container and having a
script that provisions a virtualenv and adds a supervisor/systemd/initd
service is negligible.

~~~
StavrosK
The post is about deploying a Django app on what would normally be a single
server. What you describe can be done with various tools like Kubernetes, but
it's outside the scope of the post. The difference is, indeed, not as big when
you only have a single server, but it's still nice to not need to run a dozen
commands to go from "blank server" to "running app".

If only it were easier to have seamless upgrades, it would be perfect.

------
slig
Hi Stavros,

Could you please clarify how you do the deployment of a new version?

Not every commit to master is necessarily a deployment, right? So how do you
trigger it?

How did you configure Captain Webhook to get the new image and replace the
current one that was running?

~~~
StavrosK
Hey slig. I use Git (or GitHub) flow, so yes, every commit to master is
necessarily a deployment (that's why I only have the build and deployment
running on the master branch).

Captain Webhook just restarts the service. The systemd service file looks like
this, and auto-pulls on every restart:
[https://www.pastery.net/xrutdm/](https://www.pastery.net/xrutdm/)
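
The linked paste is authoritative; in case it's unreachable, a hypothetical sketch of such a unit (image name, container name and port here are made up):

```ini
# Hypothetical sketch only -- the actual unit is in the pastery link above.
[Unit]
Description=My Django app container
Requires=docker.service
After=docker.service

[Service]
# Pull the latest image and remove any stale container before each (re)start.
# The leading "-" tells systemd to ignore failures (e.g. nothing to remove).
ExecStartPre=-/usr/bin/docker pull myregistry/myapp:latest
ExecStartPre=-/usr/bin/docker rm -f myapp
ExecStart=/usr/bin/docker run --name myapp -p 8000:8000 myregistry/myapp:latest
ExecStop=/usr/bin/docker stop myapp
Restart=always

[Install]
WantedBy=multi-user.target
```

With a unit like this, `systemctl restart myapp` is the whole deploy: the `ExecStartPre` lines fetch the new image before the container comes back up.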

I will update the post with the above, thank you.

~~~
CroCroCro
clone: [https://www.pastery.net/yjqmbz/](https://www.pastery.net/yjqmbz/)

~~~
StavrosK
Sorry about the default duration, I've changed mine to never expire, thanks.

------
man5quid
Very neat. I used to use Django + Docker in production myself, but we handled
deployments via CircleCI. Essentially, the build version would be published
into Consul, and CircleCI would handle the generation of a new build version
from Git. The only crappy part was Docker tooling breaking our builds every
other week. We relied on some homebuilt code that handled publishes to the
Docker registry and updating our Consul cluster. That being said, we never
included the database in our deployment strategy and relied on it being up
and working 24/7. Also, Consul was a constant pain in the ass.

------
languagehacker
"# Docker hack to wait until Postgres is up, then run stuff."

wat

~~~
rspeer
This is unfortunately necessary.

docker-compose starts all services simultaneously and doesn't want you to use
an init system inside the container, which means that any dependency that
requires a service to be _actually available_ needs some sort of waiting hack.
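
The usual shape of that waiting hack, as a minimal sketch (the service name "db" and port 5432 are illustrative, not from the article):

```python
# Poll a TCP port until it accepts connections, then let the real
# command (migrations, the app server, etc.) run.
import socket
import time


def wait_for_port(host, port, attempts=30, delay=1.0):
    """Return True once host:port accepts a TCP connection, False on timeout."""
    for _ in range(attempts):
        try:
            # A successful connect means the server is at least listening.
            with socket.create_connection((host, port), timeout=1):
                return True
        except OSError:
            time.sleep(delay)
    return False


# In a container entrypoint you might do something like:
# if wait_for_port("db", 5432):
#     os.execvp("gunicorn", ["gunicorn", "myproject.wsgi"])
```

Note this only proves the port is open, not that Postgres has finished initializing, which is why these hacks sometimes also retry the first real query.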

~~~
hobolord
Oh, so there actually isn't a way to start the services in some sort of timed
order? I've always wondered about that; I just have my services restart if
they can't connect.
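
That restart-on-failure approach can be expressed directly in docker-compose; a hypothetical fragment (service and image names are made up):

```yaml
# "web" crashes if "db" isn't accepting connections yet, and Docker
# restarts it until the connection succeeds.
version: "2"
services:
  db:
    image: postgres
  web:
    image: myapp
    depends_on:
      - db        # controls start order only, not readiness
    restart: on-failure
```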

~~~
StavrosK
Nope, not yet :/

------
interrrested
In the Dockerfile:

    # Remove the git repo to save space.
    RUN rm -rf /code/.git

Pretty sure that this will not save any space, because the removal happens in
a separate layer. If you want the space to be saved, it should be done as one
command:

    RUN git clone ... /code/ && rm -rf /code/.git
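
To make the layering point concrete, a hypothetical before/after (the repo URL and base image are made up; the article's actual Dockerfile differs):

```dockerfile
FROM python:3

# Bad: the clone layer still contains .git; a later "RUN rm -rf /code/.git"
# only masks it in a new layer, so the image doesn't shrink.
# RUN git clone https://example.com/repo.git /code/
# RUN rm -rf /code/.git

# Good: clone and clean up inside a single RUN, so no layer ever
# contains the .git directory.
RUN git clone https://example.com/repo.git /code/ && rm -rf /code/.git
```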

~~~
StavrosK
Ah, thanks for that! I wasn't sure if docker was sending all the commits or
just the latest one. Will change, thanks!

------
atrudeau
If you only have one server, try Dokku.

~~~
StavrosK
Thanks, I'd heard about Dokku but wasn't sure if it was a good fit. I'll give
it a look!

------
dewyatt
I still prefer to use kubernetes for this:
[https://dewyatt.github.io/articles/continuous-deployment-of-...](https://dewyatt.github.io/articles/continuous-deployment-of-pastely-with-gke-kubernetes-ansible-jenkins)

~~~
StavrosK
Is it worth it for just one server? Also, how do you handle the database? Do
you put that in a container too?

~~~
dewyatt
For production, it may be. Kubernetes would handle updates in a safer way,
avoiding service interruptions.

Yes, the database can be in a container as well, with a persistent volume.

It's not that much more complex than what you just went through.

~~~
StavrosK
Ah, that sounds good. How does Kubernetes handle updates safely? That requires
pretty deep integration with the thing running in the container, doesn't it?

~~~
dewyatt
> How does Kubernetes handle updates safely?

It supports rolling updates where one pod (this means container, usually) is
updated at a time, with traffic being sent to other pods during that time.

I think the best practice right now is to use a Deployment (alternatively you
can initiate a rolling update manually). Using a deployment makes updates as
simple as "kubectl patch ...".

It does require a load balancer, which varies from platform to platform. In
AWS, I assume it uses an ELB. On premise, you might use
contrib/service-loadbalancer (haproxy).
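
As a concrete sketch, a hypothetical Deployment for an app like the one in this thread (image name, replica count and port are made up, and the API group has moved between Kubernetes versions):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: django-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: django-app
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # take down at most one pod at a time
      maxSurge: 1         # allow one extra pod during the rollout
  template:
    metadata:
      labels:
        app: django-app
    spec:
      containers:
        - name: web
          image: myregistry/myapp:v2
          ports:
            - containerPort: 8000
```

Changing the image (via `kubectl patch` as mentioned above, or `kubectl set image deployment/django-app web=myregistry/myapp:v3`) then triggers the rolling update.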

~~~
StavrosK
Sounds ideal, thanks. I'm reading the docs right now, and it looks pretty
simple, conceptually.

