
Completely offtopic, but I replied to the OP (on reddit) and remembered a problem of my own that I'm having, and I can't quite figure out how to solve.

Nginx + Django -- I have wildcard DNS on mydomain.com and want to support customers such that customer1.mydomain.com gets its own database. I thought I'd be able to use fastcgi_param to inject a variable into the FastCGI environment, read it with os.environ.get('fastcgi_param'), and then branch in settings.py to import customer1's settings (to override the database), but I can't seem to make it work.

Any thoughts?

There are several options here. First, you could use the Django "sites" framework, which has the advantage of being idiomatic and familiar to other Django developers:
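The sites framework boils down to looking up a Site row by the request's Host header and hanging per-customer configuration off it. A minimal sketch of that lookup, with a plain dict standing in for the Site table so it runs without a Django project (the table contents and helper name are assumptions):

```python
# Hedged sketch: map the incoming Host header to a per-customer database
# alias, the way you'd key per-customer config off Site.domain in the
# sites framework. The mapping and db_for_host() are illustrative only.
CUSTOMER_DATABASES = {
    "customer1.mydomain.com": "customer1",
    "customer2.mydomain.com": "customer2",
}

def db_for_host(host, default="default"):
    # Strip an optional port ("customer1.mydomain.com:8000" -> bare host)
    # before looking up the customer's database alias.
    return CUSTOMER_DATABASES.get(host.split(":")[0], default)
```

In a real deployment the dict would be the Site table (plus a related per-customer settings model), and the host would come from request.get_host().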


Second, you could define a database backend per customer (using the Django 1.2 multi-DB support) and write a custom database router that is aware of your custom FastCGI environment variables:
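A router in Django's multi-DB support is just a class exposing db_for_read/db_for_write; those method names are real Django API, but the thread-local plumbing that carries the per-request customer name (presumably set by middleware from the FastCGI environ) is an assumption sketched here:

```python
# Sketch of a Django 1.2-style database router that sends all queries to
# the current customer's database alias. set_current_customer() would be
# called from middleware once per request; it's illustrative, not Django API.
import threading

_state = threading.local()

def set_current_customer(name):
    """Stash the customer's database alias for the current request/thread."""
    _state.customer = name

class CustomerRouter:
    """Route reads and writes to the current customer's database."""

    def db_for_read(self, model, **hints):
        # Returning None tells Django to fall back to the 'default' alias.
        return getattr(_state, "customer", None)

    def db_for_write(self, model, **hints):
        return getattr(_state, "customer", None)
```

You'd register the router via DATABASE_ROUTERS in settings.py and define one entry per customer in DATABASES.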


Third (and best, IMHO) would be to run separate FastCGI processes for each customer, and route appropriately from Nginx. This approach is obviously the most complex setup, but it has the major advantage of letting you run every customer's FCGI backend under a different user id, offering yet another level of protection against data leakage and other security issues.
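The Nginx side of that setup can be as simple as one server block per customer, each passing to its own socket (server names and socket paths below are assumptions):

```nginx
# Hedged sketch: one FastCGI backend per customer, each on its own unix
# socket, each backend process running as its own user.
server {
    server_name customer1.mydomain.com;
    location / {
        include fastcgi_params;
        fastcgi_pass unix:/var/run/customer1.sock;
    }
}

server {
    server_name customer2.mydomain.com;
    location / {
        include fastcgi_params;
        fastcgi_pass unix:/var/run/customer2.sock;
    }
}
```

Each backend then gets its own settings.py pointing at its own database, so no per-request switching logic is needed at all.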

I like the third option quite a bit (though what I was trying to do looked more like the second), but the problem I'm having, more than anything, is getting a FastCGI parameter as a variable in my settings.py.
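That's likely the root of the trouble: values set with fastcgi_param are delivered in the per-request FastCGI environ, which Django exposes as request.META, not in os.environ, and settings.py is only imported once at process startup, so it can never see a per-request value. Middleware can read the parameter each request instead. A sketch, where the CUSTOMER parameter name is an assumption (it would be set in Nginx with something like `fastcgi_param CUSTOMER customer1;`):

```python
# Hedged sketch: read an nginx fastcgi_param from the per-request environ.
# request.META is real Django API; the CUSTOMER key and the attribute we
# stash it on are assumptions for illustration.
class CustomerMiddleware:
    def process_request(self, request):
        # fastcgi_param values show up in request.META, not os.environ.
        request.customer = request.META.get("CUSTOMER", "default")
```

Downstream code (or a database router) can then consult request.customer instead of trying to branch in settings.py.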

I'll carry this over to SO, as the downvotes suggest that this is not only offtopic, but inappropriately placed altogether, but thanks a million for your insight.

The other advantage of the third option is that you don't have to write a whole bunch of auth/filtering code to only show customers their own data. Row-level permissions are relatively straightforward in Django, but it's still easier not to have to deal with them.

Well, thanks for the response, despite the downvote. I know it may seem like a 'bad' idea, and I understand the pitfalls of keeping a consistent schema across n databases, but it still seems far more performant than building a customer table and paying for an extra join on every read query, for every ticket, for every logged-in user.

I didn't downvote you, bmelton.

    it also seems like a much more performant idea
Why is performance important to you? Presumably to make the app cheaper to run, in which case I suggest you weigh the cost of coding and maintaining multiple databases against the cost of that extra join. Note that the multiple-database approach is already costing you: you're still trying to get it to work, when you could already have the single-database approach in place relatively painlessly.
