The snippets are not false, but there's so much context missing that it's easy to make things worse, especially for beginners, who seem to be the target audience.
First, this guide should emphasize the need to measure before doing anything: django-silk, django-debug-toolbar, etc. Of course, measure after the optimizations too, and measure in production with an APM.
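Even without a full profiler, Django's own test utilities can show you what a code path actually costs. A minimal sketch, with a hypothetical Book model:

```python
from django.db import connection
from django.test.utils import CaptureQueriesContext

from myapp.models import Book  # hypothetical model

# Measure first: count the queries a code path actually issues
# before reaching for any optimization.
with CaptureQueriesContext(connection) as ctx:
    titles = [b.title for b in Book.objects.all()[:100]]

print(f"{len(ctx.captured_queries)} queries executed")
for q in ctx.captured_queries:
    print(q["time"], q["sql"])
```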
Second, some of these only work sometimes: select_related / prefetch_related / iterator can lead to giant SQL queries with nested joins all over the place and end up exploding RAM usage. They help at first, but soon enough you pay for any missing SQL knowledge or naive relationships. For example:
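A sketch of the trade-off, assuming a hypothetical Book model with a ForeignKey to Author:

```python
from myapp.models import Book  # hypothetical models: Book -> Author FK

# N+1: one query for the books, then one more per book for its author.
for book in Book.objects.all():
    print(book.author.name)

# One query with a JOIN. Fine here, but chaining select_related across
# deep or wide relationships builds one giant JOIN whose duplicated
# rows can blow up memory.
for book in Book.objects.select_related("author"):
    print(book.author.name)

# Two queries (books, then authors matched up in Python) -- usually the
# safer option for many-to-many or reverse relations.
for book in Book.objects.prefetch_related("author"):
    print(book.author.name)
```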
Third, caching without taking the context into account will probably lead to data corruption one way or another. Debugging stale-cache issues is not fun, since you cannot reproduce them easily.
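One defensive pattern is to bake the row's last-modified timestamp into the cache key, so an update can never serve a stale entry. A minimal sketch, assuming a hypothetical model with an auto-updated `updated_at` field:

```python
from django.core.cache import cache

def cached_book_html(book):
    # The timestamp makes staleness structurally impossible: editing
    # the row changes the key, and orphaned entries age out via timeout.
    key = f"book-html:{book.pk}:{book.updated_at.timestamp()}"
    html = cache.get(key)
    if html is None:
        html = render_book(book)  # hypothetical expensive rendering
        cache.set(key, html, timeout=3600)
    return html
```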
Fourth, Celery is a whole new world, which requires workers, retry logic, idempotent tasks, etc.
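A sketch of what that world minimally looks like (the model and payment gateway names here are hypothetical):

```python
from celery import shared_task

from myapp.models import Order  # hypothetical model
from myapp.payments import gateway, TransientGatewayError  # hypothetical

@shared_task(bind=True, max_retries=3)
def charge_customer(self, order_id):
    order = Order.objects.get(pk=order_id)
    # Idempotency guard: a redelivered or retried message must not
    # charge the customer twice.
    if order.charged:
        return
    try:
        gateway.charge(order)
    except TransientGatewayError as exc:
        # Back off a little more on each retry.
        raise self.retry(exc=exc, countdown=60 * (self.request.retries + 1))
    order.charged = True
    order.save(update_fields=["charged"])
```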
Finally, scaling is also about code: architecture, good practices, basic algorithms, etc. This is darn hard if you are a beginner in a framework; loops within loops still bite me after reality runs the integration test for me, especially when trying to do something simple. By scaling I just mean normal production: going from 2 developers to a couple of thousand customers.
To my mind this is an area where the Django guide could be expanded a bit, in order to help scaffold a simple but "open to the future" code architecture.
For instance, I would warn against fat models and propose a very light "service pattern" architecture, along these lines:
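A rough sketch of what I mean (names are hypothetical): views stay thin, models stay mostly data definitions, and the business rules live in plain functions:

```python
# services/orders.py -- hypothetical service module
from django.db import transaction

from myapp.models import Order, OrderLine  # hypothetical models

@transaction.atomic
def place_order(user, cart):
    """All the business rules for placing an order live here,
    not on a fat Order model and not in the view."""
    order = Order.objects.create(user=user)
    OrderLine.objects.bulk_create(
        OrderLine(order=order, product=product, quantity=qty)
        for product, qty in cart.items()
    )
    return order

# The view just translates HTTP into a service call:
#   order = services.orders.place_order(request.user, cart)
```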
Probably 80% of notable performance problems I’ve seen in the kinds of systems that things like Django and Ruby get used for have been terrible queries or patterns of database use (I’ve seen 1,000x or worse costs for this versus something more correct), and nearly all of the other 20% have been areas that plainly just needed some pretty straightforward caching.
The nice thing about that is that spotting those, and the basic approach to fixing them, if not the exact implementation details, are cross-platform skills that apply basically anywhere.
I actually can’t recall any other notable performance problems in those sorts of systems, over the years. Those are so common and the fixes so effective I guess the rest has just never rated attention. I’ve seen different problems in long-lived worker processes, though (“make it streaming—everything becomes streaming when scale gets big enough” is the usual platform-agnostic magic bullet in those cases).
A bunch of TFA is basically about those things, so I’m not correcting it, more like nodding along.
Oh wait, I just thought of another one I’ve seen: serving large files through a scripting language, as in reading the file in and writing it back out in the script. You run into trouble at even modest scale. There’s a magic response header for that: have Nginx or Apache or whatever serve it for you. The fix typically means deleting a bunch of code and replacing it with one or two lines. Or else just use S3 and maybe signed URLs like the rest of the world. Problem solved.
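For Nginx that header is X-Accel-Redirect (Apache's mod_xsendfile equivalent is X-Sendfile). A minimal sketch, assuming an `internal` Nginx location mapped to the files:

```python
from django.http import HttpResponse

def download(request, path):
    # ... authenticate/authorize the request here ...
    response = HttpResponse()
    # Hand the actual file transfer to Nginx instead of pushing bytes
    # through Python. Requires something like:
    #   location /protected/ { internal; alias /srv/files/; }
    response["X-Accel-Redirect"] = f"/protected/{path}"
    del response["Content-Type"]  # let Nginx set it from the file
    return response
```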
Knowing SQL and how relational databases actually work is one of the best superpowers a backend developer can have.
If you want to go deeper than your database manual, the best place is Andy Pavlo's DB course, freely available on YouTube. I don't write databases, but after watching it I understand trade-offs and performance considerations much better, and feel much more comfortable reading the PostgreSQL manual.
> Probably 80% of notable performance problems I’ve seen in the kinds of systems that things like Django and Ruby get used for have been terrible queries or patterns of database use (I’ve seen 1,000x or worse costs for this versus something more correct)
The ActiveRecord pattern saves you a few lines of code now, and blows your foot off later.
I have Django code that creates a tar file on the fly from a list of requested files, and it works well. It doesn't use intermediate storage. The tar format can be pretty simple. I got most of the way into implementing an uncompressed zip version, but then I realised that tar was good enough for my site.
Mmm. If you had the right library, you might be able to stream it as it's being created, which might help at least with perceived performance. But yeah, that's a fun one.
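The stdlib can actually do this: open tarfile in streaming mode ("w|") over a tiny write-buffer object and yield chunks into a StreamingHttpResponse. A rough sketch, with hypothetical file paths:

```python
import os
import tarfile

from django.http import StreamingHttpResponse

class _Buffer:
    """Minimal file-like object: tarfile writes into it, and we hand
    the accumulated bytes straight to the response."""
    def __init__(self):
        self.chunks = []

    def write(self, data):
        self.chunks.append(data)

    def pop(self):
        data = b"".join(self.chunks)
        self.chunks = []
        return data

def stream_tar(paths):
    buf = _Buffer()
    # "w|" = write-only streaming mode: tarfile never seeks, so it can
    # write into a non-seekable buffer like ours.
    tar = tarfile.open(mode="w|", fileobj=buf)
    for path in paths:
        tar.add(path, arcname=os.path.basename(path))
        yield buf.pop()
    tar.close()  # writes the end-of-archive blocks
    yield buf.pop()

def download_view(request):
    paths = ["/srv/files/a.txt", "/srv/files/b.txt"]  # hypothetical
    response = StreamingHttpResponse(
        stream_tar(paths), content_type="application/x-tar"
    )
    response["Content-Disposition"] = 'attachment; filename="files.tar"'
    return response
```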
FWIW, I'd advise against template caching. It's awkward to cache-bust, and a network round trip to your cache will almost certainly be more expensive than the Python operations to render the template, even with stock Django templating, which is slow.
The only place it's possibly worth it is if you do a lot of database queries from your template rendering, and you're therefore caching database results (as rendered text). In that case, it's an easy patch. However, a much better solution is to fetch all database results up front.
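Concretely, resolving everything before render keeps the template a pure function over plain data (model names here are hypothetical):

```python
from django.shortcuts import render

from myapp.models import Book  # hypothetical model

def book_list(request):
    # Force every query the template will need before rendering starts;
    # the template itself never touches the database.
    books = list(
        Book.objects.select_related("author").prefetch_related("tags")
    )
    return render(request, "books/list.html", {"books": books})
```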
In my previous company we had a very significant Django codebase with plenty of templating, and found that using the templating system for (lazy-loaded) database queries or caching was more hassle than it was worth, and avoided it as much as possible. Treating template rendering as a pure CPU-bound function was always better.
My point was that you shouldn't be doing DB queries in the template. If you're doing the DB queries before templating then you should also be doing the cache queries before templating too.
It would be nice to include the generated SQL queries along with the code samples, though. I've been on a similar path recently, and being able to see the queries was really helpful (even the ones that failed!).
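For anyone on the same path, Django will show you both the SQL it plans to send and what actually hit the database (the latter requires DEBUG=True); model names here are hypothetical:

```python
from django.db import connection, reset_queries

from myapp.models import Book  # hypothetical model

qs = Book.objects.select_related("author").filter(published=True)

# The SQL Django plans to send (parameters unquoted; inspection only):
print(qs.query)

# What actually executed -- requires DEBUG=True:
reset_queries()
list(qs)
for q in connection.queries:
    print(q["time"], q["sql"])
```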
And also, when possible, try to use a key manager over environment variables.
Using a library like keyring [1] is a significant step up from a .env file sitting in your dev environment.
In other words:
- Store secrets in settings.py (bad)
- Store secrets in .env file (better)
- Store secrets in OS-level key vault (even better)
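A minimal sketch of the keyring approach (the service and key names here are made up):

```python
# One-time setup, from a shell on the machine in question:
#   python -c "import keyring; keyring.set_password('myapp', 'SECRET_KEY', '<value>')"

# settings.py
import keyring

# Pulled from the OS keychain (Keychain on macOS, Credential Locker on
# Windows, Secret Service on Linux) -- no secret sits on disk in the repo.
SECRET_KEY = keyring.get_password("myapp", "SECRET_KEY")
```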
When the secrets are in a plaintext .env file, that file can leak in many non-obvious ways: your antivirus uploads a copy, your IT department runs backups, someone on the team clones your git repo into a OneDrive/Dropbox folder and puts the .env file there. Then if any of those services has a leak, or any of the services those services use has a leak (an improperly configured S3 bucket, etc.), your secrets are exposed.
This sort of article seems perfectly poised to be useless to beginners (no context, doesn't tell you how to use the things) and experts (no nuance, just listing basic features) alike. Who is it for? Why does it exist? Why is it posted here?
The basic outline of this post isn't bad; the problem is that's all there is - a basic outline. If you haven't dealt with these problems before, the checklists are meaningless. If you HAVE dealt with these problems before, the checklists are redundant.
I'll end by linking to more complete resources:

- https://docs.djangoproject.com/en/5.1/topics/performance/
- https://loadforge.com/guides/the-ultimate-guide-to-django-pe...
- https://medium.com/django-unleashed/django-application-perfo...