> Use celery immediately if you have any "long lived" tasks such as email
Hey, quick question from a relative newbie who is currently trying to solve this exact problem.
Besides Celery, what are good options for handling long-running requests with Django?
I see 3 options:
- Use Celery or Django Q to offload processing to worker nodes (how do you deliver results from the worker node back to the FE client?)
- Use Django Channels, which I think supports all sorts of non-trivial use cases (jobs, websockets, long polling).
- Convert sync Django to ASGI with async views and run it under uvicorn. This option is super convoluted based on this talk [0], because you have to ensure all middleware supports ASGI, and because the ORM is sync-only, so it seems very easy to shoot yourself in the foot.
The added complication, like I mentioned, is that my long-running requests need to return data back to the client in the browser. Not sure how to make it happen yet -- using a websocket connection, or long polling?
Sorry I am ambushing you randomly in the comments like this, but it sounds like you know Django well so maybe you have some insights.
Use anything except Celery, is my vote. Even if that "anything" is something you roll yourself.
Celery is mature, but has bitten me more than anything else.
For scheduling, there are many libraries, but it's good to keep this separate from Celery IMO.
For background tasks, I think rolling your own solution (using a communication channel and method tailored to your needs) is the way to go. I really do at this point.
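To illustrate the "roll your own" idea, here is a minimal sketch of the pattern: one database table as the communication channel, and a worker that claims one pending task at a time. It uses `sqlite3` directly so it runs standalone; in a real Django app the table would be a model and the claim step would use something like `select_for_update(skip_locked=True)`. The table and function names are invented for the demo.

```python
import json
import sqlite3

# A single table acts as the task queue / communication channel.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE task (
    id INTEGER PRIMARY KEY,
    payload TEXT NOT NULL,
    status TEXT NOT NULL DEFAULT 'pending',
    result TEXT)""")

def enqueue(payload: dict) -> int:
    """Called from the web process: record the work and return immediately."""
    cur = conn.execute("INSERT INTO task (payload) VALUES (?)",
                       (json.dumps(payload),))
    conn.commit()
    return cur.lastrowid

def claim_and_run(handler) -> bool:
    """Called from the worker loop: claim the oldest pending task and run it."""
    row = conn.execute(
        "SELECT id, payload FROM task WHERE status = 'pending' "
        "ORDER BY id LIMIT 1").fetchone()
    if row is None:
        return False
    task_id, payload = row
    # The WHERE guard keeps two workers from grabbing the same row.
    claimed = conn.execute(
        "UPDATE task SET status = 'running' "
        "WHERE id = ? AND status = 'pending'", (task_id,)).rowcount
    conn.commit()
    if not claimed:
        return False
    result = handler(json.loads(payload))
    conn.execute("UPDATE task SET status = 'done', result = ? WHERE id = ?",
                 (json.dumps(result), task_id))
    conn.commit()
    return True

task_id = enqueue({"n": 21})
claim_and_run(lambda p: {"doubled": p["n"] * 2})
status, result = conn.execute(
    "SELECT status, result FROM task WHERE id = ?", (task_id,)).fetchone()
print(status, result)  # done {"doubled": 42}
```

Because the queue lives in your existing database, you get transactional enqueueing for free: the task row commits atomically with the rest of the request's writes.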
It definitely is not using async, though; I think that will bite you and not be worth the effort.
Django ORM has supported async syntax for some time now, and it can work fully async starting with Django 4.2 (and psycopg3). There are still a few rough edges (such as not being able to access deferred attributes from async contexts) but there are workarounds.
I usually use `asyncio.create_task` from async views for small, non-critical background tasks. Because they run inside the server process (on the event loop, not in a separate worker), you will lose them if the service crashes (or Kubernetes decides to restart the pod), but that's fine for some use cases. If you need persistence, use Celery or something similar.
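A minimal sketch of that fire-and-forget pattern, shown with plain asyncio so it runs standalone (`signup_view` and `send_welcome_email` are invented stand-ins for a real async Django view and its background work):

```python
import asyncio

# Fire-and-forget: respond immediately, let the work finish on the event
# loop. If the process dies, the task is lost -- acceptable for
# non-critical work like notifications or cache warm-ups.
background_tasks = set()

async def send_welcome_email(user_id: int) -> None:
    await asyncio.sleep(0.01)  # stand-in for the real I/O
    print(f"email sent to user {user_id}")

async def signup_view(user_id: int) -> dict:
    # An async Django view would do exactly this before returning its
    # response object.
    task = asyncio.create_task(send_welcome_email(user_id))
    # Keep a strong reference so the task isn't garbage-collected mid-flight.
    background_tasks.add(task)
    task.add_done_callback(background_tasks.discard)
    return {"status": "accepted"}

async def main():
    response = await signup_view(42)
    # Only so the demo exits cleanly; a server would just keep running.
    await asyncio.gather(*background_tasks)
    return response

print(asyncio.run(main()))
```

The strong-reference set matters: the event loop only keeps weak references to tasks, so without it a fire-and-forget task can vanish before it runs.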
Django combined with an async-ready REST framework such as Django Ninja is very powerful these days.
---
[0] Async Django by Ivaylo Donchev https://www.youtube.com/watch?v=UJzjdJGS1BM