
Scalable and resilient Django with Kubernetes - hnarayanan
https://harishnarayanan.org/writing/kubernetes-django/
======
throwaway13337
I'm really struggling to understand why most developers feel they need all
this deployment complexity.

I've run Django and other web deployments with simple shell scripts and
occasionally some Python to glue it together.

Most recently I'm running a Django web server and some other custom stuff (a
real-time stateful server, database, nginx, etc).

A somewhat complex setup (or at least as complex as it needs to be), and I
have no need for Docker or Kubernetes. If I want a new server, I just change a
few parameters and run the script to deploy. It's straightforward and without
magic.
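The kind of script being described here can stay very small. A minimal sketch, assuming a hypothetical host, app path and service name (with DRY_RUN=1, the default below, it prints each step instead of running it over SSH):

```shell
#!/usr/bin/env bash
set -euo pipefail
# Minimal sketch of a shell-script deploy for a Django app.
# HOST, APP_DIR and SERVICE are hypothetical placeholders; override them
# via the environment. DRY_RUN=1 (default) prints the commands only.
HOST="${HOST:-deploy@example.com}"
APP_DIR="${APP_DIR:-/srv/myapp}"
SERVICE="${SERVICE:-gunicorn}"

# Run a command on the target host, or just print it in dry-run mode.
run() {
  if [ "${DRY_RUN:-1}" = "1" ]; then
    echo "ssh $HOST -- $*"
  else
    ssh "$HOST" -- "$@"
  fi
}

run "cd $APP_DIR && git pull --ff-only"
run "$APP_DIR/venv/bin/pip install -r $APP_DIR/requirements.txt"
run "$APP_DIR/venv/bin/python $APP_DIR/manage.py migrate --noinput"
run "$APP_DIR/venv/bin/python $APP_DIR/manage.py collectstatic --noinput"
run "sudo systemctl restart $SERVICE"
```

Deploying to a new server is then just changing HOST (and friends) and re-running, which is the straightforwardness being argued for.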

The previous deployment flavor of the month was DSLs like Ansible, Puppet,
etc. Did those solve real problems? I found they added complexity without
adding much. How is the new generation different?

Caveat: I'm not Google, and I don't deploy huge server farms. There is a use
case, but the vast majority of us aren't that.

~~~
hnarayanan
I'm the author of the original piece.

I echo exactly what you say in a giant caveat way up top in the piece. :)

Most of the first half of the article beyond that point basically tries to
motivate why you'd want to try this beyond just using a classical VM approach.
But the basic idea is that it raises your level of abstraction from working
with machines to working with your application components (on abstracted
hardware).

~~~
jordic
There are a lot more advantages to abstracting the hardware from the app.
First of all, you don't need one machine per service to scale the app. All
services run on the cluster, and you can add or remove machines as needed.
(At night you can shrink your cluster, and grow it again during the day.)
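That day/night resizing can be done with a scheduled call to gcloud. A minimal sketch, with placeholder cluster and zone names (DRY_RUN=1, the default here, prints the command instead of calling gcloud):

```shell
#!/usr/bin/env bash
# Sketch of scheduled GKE cluster resizing, e.g. invoked from cron with a
# small node count at night and a larger one in the morning. The cluster
# and zone names are placeholders; DRY_RUN=1 (default) only prints the
# command rather than calling gcloud.
CLUSTER="${CLUSTER:-my-cluster}"
ZONE="${ZONE:-europe-west1-b}"
NODES="${1:-1}"   # e.g. 1 at night, 3 during the day

cmd=(gcloud container clusters resize "$CLUSTER" --zone "$ZONE" --num-nodes "$NODES" --quiet)

if [ "${DRY_RUN:-1}" = "1" ]; then
  echo "would run: ${cmd[*]}"
else
  "${cmd[@]}"
fi
```

A pair of cron entries calling this with different node counts gives the night/day schedule; GKE's cluster autoscaler is the more automatic alternative.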

But you can also use the same cluster to run your preproduction environment,
or use it for CI/CD (check out Deis). As an example, we have a small cluster
with two machines, and on it there are:

- the main app,
- the preprod environment,
- two more feature branches (that need to be reviewed),
- playground environments that the sales team deploys for demos and trials.

All are independent apps, sharing resources, on the cluster.

In summary, on a daily basis we maintain 7 to 10 independent app instances,
and we do regular updates (as new revisions arrive). As an example, the
trials and feature branches are single pods (all included: db, redis, app and
worker).
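One of those "all included" pods might look roughly like the manifest below. Every image name, command and port is a placeholder; co-locating the database and Redis with the app like this only makes sense for throwaway trial environments, not production:

```yaml
# Sketch of a single all-in-one pod for a trial/feature-branch environment.
# All images, names and ports here are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: feature-branch-trial
  labels:
    app: myapp-trial
spec:
  containers:
    - name: app
      image: registry.example.com/myapp:feature-branch
      ports:
        - containerPort: 8000
    - name: worker
      image: registry.example.com/myapp:feature-branch
      command: ["python", "manage.py", "runworker"]
    - name: db
      image: postgres:9.5
    - name: redis
      image: redis:3
```

All four containers share the pod's network namespace, so the app reaches postgres and redis on localhost, and tearing the whole environment down is a single `kubectl delete pod`.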

Kubernetes is not for deploying pet projects, but as soon as you start working
on a real project it's a must. You can manage complex deployments with a lot
of services while keeping costs as low as two n1-standard-1 machines. And as a
plus, on GKE you get monitoring and log aggregation.

~~~
drdaeman
I think the question was "is this complexity necessary?"

If the same thing (deployment and upgrades for production, staging and
testing environments, for any number of hosts) can be achieved with a small
shell script, doesn't this mean that the cluster thing isn't really useful?

I think a lot of "real" projects (smaller ones) work perfectly well without
any complicated cluster management tools, and aren't really hindered by the
lack of what those offer.

------
siliconc0w
The problem with these tutorials is that they still tell you to run the
database yourself, to emphasize the 'component' or one-solution-fits-all idea
of Kubernetes/Mesos.

However, in practice, if you're using AWS or GCloud this is usually a bad idea
- just use the managed database solutions provided. They have things like
backups, snapshots, restores, upgrades, HA, monitoring, and alerting baked in.
These are non-trivial to do yourself.

~~~
hnarayanan
I'm the author of this piece, and I agree with you in general.

I am personally experimenting with this within the context of containers
because I'm trying to construct something like Vitess[1] from first principles
as an intellectual exercise.

[1] [https://github.com/youtube/vitess](https://github.com/youtube/vitess)

~~~
danpalmer
Thanks for the blog post, it was really informative; I'll definitely be
sharing it at work.

~~~
hnarayanan
Thank you! And please let me know of ways I can improve it. I already have a
couple of ideas that I've indicated in the conclusion, but the more the
merrier.
:)

------
smitec
Putting the 'should we or shouldn't we' aside, I'd just like to say thanks
for putting together an article that goes all the way from a 'classic' setup
to something running on a cluster.

I often find that posts like this are a bit too meta: in trying to write
things in a general way, they leave out some critical step, which makes
replicating their ideas difficult.

~~~
hnarayanan
OP here. That was the plan and I'm glad you appreciate it. :)

I wanted to start from something I assumed people knew, and tried to motivate
why one might want to improve on that. Only then did I introduce the new
solution.

A lot of tutorials I found jumped too quickly to the 'how' without spending
enough time on the 'why' I should care.

------
jamespacileo
Hey Harish, thanks for your talk at the March London Django Meetup.

I'm starting to play around with Google Container Engine as you suggested,
and I'm having a good time.

~~~
hnarayanan
Welcome!

------
Mizza
It's interesting that there seem to be two parallel and opposite trends
developing in application deployment right now.

One is containerization, where developers are responsible for maintaining fat
application stacks that can easily be redeployed and moved around.

The other is towards serverlessness, with things like Django-Zappa
(https://github.com/Miserlou/django-zappa), where scalability is handled
automatically by cloud providers.

My bias is quite clear - deploying apps should be easy.

