Django Settings for Production and Development: Best Practices (sparklewise.com)
97 points by toumhi on March 7, 2012 | 33 comments



It's annoying how there is no canonical way to do it. You need to spend some mental energy learning how to do this when starting up a Django project. Django should really integrate one solution into the Django core, and then everyone can just use that. That would make it much easier.


I like Django quite a bit, but it boggles the mind how many developer hours have been spent solving the same basic setup problem over and over again.


I feel the opposite. Over the three years or so I've worked with Django full time, there have been very good reasons to have different setups wrt environment, virtualenvs, and web servers.

That said, I tend to use env vars to identify which environment an app is running in and build everything off that: which branch, DB, logging target, etc. to use.
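
Something like this sketch, assuming a hypothetical APP_ENV variable (the name and values are illustrative, not necessarily what I use):

    # settings.py -- sketch: branch on an APP_ENV environment variable;
    # the variable name and the values here are assumptions.
    import os

    APP_ENV = os.environ.get('APP_ENV', 'dev')  # dev/test/staging/production

    if APP_ENV == 'production':
        DEBUG = False
        DATABASES = {'default': {
            'ENGINE': 'django.db.backends.postgresql_psycopg2',
            'NAME': 'myapp',
        }}
    else:
        DEBUG = True
        DATABASES = {'default': {
            'ENGINE': 'django.db.backends.sqlite3',
            'NAME': 'dev.sqlite3',
        }}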


Having an opinionated, best-practice default doesn't have to prevent customization. I have always thought Rails is better than Django in this sense: there's one clear default way to do things, and then you can customize if you need to.


Not the best week to boast of Rails' opinionated, best-practice defaults in a Django thread.


Yeah, screwing up one thing obviously means that every single other idea they've had is boneheaded.


Environment variables work extremely well when deploying on Heroku [1]. You can't exactly commit a local_settings.py file to your repo, so setting env variables is the only way to exclude sensitive keys and passwords from your source control.

[1] http://rdegges.com/devops-django-part-3-the-heroku-way


> use env vars to identify which environment an app is running in

Be aware of issues between mod_wsgi and that approach: http://code.google.com/p/modwsgi/wiki/ApplicationIssues#Appl...


I'm aware of those; thankfully the things I set won't change within the life of a single environment. A single variable to say dev/test/staging/production can easily be set in bashrc & the wsgi script.
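
For example, in the wsgi script (a sketch; the variable name is just illustrative, and mod_wsgi won't inherit your shell's environment, hence setting it here):

    # myapp.wsgi -- pin the environment name in the WSGI script itself,
    # since mod_wsgi won't see your shell's environment variables
    import os

    os.environ['APP_ENV'] = 'production'  # illustrative name
    os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'mysite.settings')

    import django.core.handlers.wsgi
    application = django.core.handlers.wsgi.WSGIHandler()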


Am I the only person that uses pure and simple symlinks? https://en.wikipedia.org/wiki/Symbolic_link


"Am I the only person ..."

The answer to this is always no.


The problem with symlinks is that there's manual intervention needed when you deploy.


Not at all. Store settings_dev.py and settings_prod.py in your repository. What differs between the production machine and the local one is the symlink called settings.py.


A fourth way: the Django config is a template, with entries like:

settings.py:

    CACHES = {
        'default': {
            'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
            'LOCATION': [{% for IP in MEMCACHED_SERVERS %}'{{IP}}:{{MEMCACHED_PORT}}', {% endfor %}],
        },
    }

Yes, that's a Django template in Python code. Upon deployment, my fabfile renders the settings file along with various other config files. This makes sure that {{MEMCACHED_PORT}} doesn't accidentally say one thing in settings.py, and another in memcached.conf.

This also allows me to keep my sitewide settings in a single file or two, namely a module in the fabfile folder.

It feels a little dirty to do things this way (though I can't figure out why), but it's saved me a lot of headaches.
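
In sketch form, the rendering step looks something like this (file names and the SITE_CONFIG values are made up for illustration):

    # fabfile helper -- render settings.py from a template so values
    # like MEMCACHED_PORT stay consistent across all config files
    from django.conf import settings as django_settings
    from django.template import Template, Context

    django_settings.configure()  # standalone template rendering needs this

    SITE_CONFIG = {
        'MEMCACHED_SERVERS': ['10.0.0.1', '10.0.0.2'],  # example values
        'MEMCACHED_PORT': 11211,
    }

    def render_config(template_path, output_path):
        # Fill a config template in with the sitewide settings.
        with open(template_path) as f:
            template = Template(f.read())
        with open(output_path, 'w') as f:
            f.write(template.render(Context(SITE_CONFIG)))

    render_config('settings.py.tmpl', 'settings.py')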


I do something similar with Chef. Chef creates a local_settings.py file containing Python with the server IPs and ports for the various services.


Seems complicated. I always use four simple files:

  settings.py
  settings_development.py
  settings_staging.py
  settings_production.py
Where the first one has the defaults (usable for local development), and the others contain any overrides needed to run with Gunicorn on the corresponding dev/staging/production servers.
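
For instance, an override file might look like this (a sketch; names and values are illustrative):

    # settings_production.py -- start from the shared defaults,
    # then override only what differs in production
    from settings import *

    DEBUG = False
    TEMPLATE_DEBUG = False

    DATABASES = {
        'default': {
            'ENGINE': 'django.db.backends.postgresql_psycopg2',
            'NAME': 'myapp',
            'USER': 'myapp',
            'PASSWORD': 'not-in-version-control',  # placeholder
            'HOST': 'db.example.com',
        }
    }

Gunicorn then just gets pointed at the right module, e.g. via DJANGO_SETTINGS_MODULE=settings_production.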


We have a very similar approach, except we encapsulate settings into its own package just for readability/maintainability. So it looks something like the following:

    settings/
        __init__.py
        defaults.py
        dev.py
        live.py
        testing.py
        users/
            __init__.py
            *user specific settings*
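
For illustration, the package's __init__.py might tie it together like this (a sketch; the getpass-based per-user lookup is an assumption, not our exact code):

    # settings/__init__.py -- load the shared defaults, then layer on
    # per-user overrides if a matching module exists under users/
    import getpass

    from settings.defaults import *

    try:
        _user = __import__('settings.users.%s' % getpass.getuser(),
                           fromlist=['*'])
        for _name in dir(_user):
            if _name.isupper():
                globals()[_name] = getattr(_user, _name)
    except ImportError:
        pass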


I wonder what kind of maintainability benefits the separate packages bring in this case. To me, having a lot of empty __init__.py files and subdirs reduces readability, so I like to maximize simplicity.


I do something very similar, but I just have dummy files in the folder depending on the environment (so I do something like 'touch PRODUCTION' or 'touch STAGING'), and based on which file is present I know which settings to serve.

Here's a good example (start at line 207): https://github.com/armon/DjangoProjectExample/blob/master/pr...
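
In sketch form (module names are illustrative):

    # settings.py -- the marker-file approach: 'touch PRODUCTION' on
    # the prod box, 'touch STAGING' on staging, nothing locally
    import os

    ROOT = os.path.dirname(os.path.abspath(__file__))

    if os.path.exists(os.path.join(ROOT, 'PRODUCTION')):
        from settings_production import *
    elif os.path.exists(os.path.join(ROOT, 'STAGING')):
        from settings_staging import *
    else:
        from settings_development import *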


I have a common settings.py file which starts with:

  import os

  # Pick the config module based on where this checkout lives.
  ROOT_PATH = os.path.dirname(os.path.abspath(__file__))
  configs = {
      '/path/to/my/local/dev/folder': 'alex',
      '/var/www/www.mywebsite.com/test/private/django/mywebsite': 'test',
      '/var/www/www.mywebsite.com/prod/private/django/mywebsite': 'prod'
  }
  # Copy every UPPERCASE name from the chosen config module into
  # this module's namespace.
  config_module = __import__('config.%s' % configs[ROOT_PATH],
                             globals(), locals(), ['mywebsite'])
  for setting in dir(config_module):
      if setting == setting.upper():
          locals()[setting] = getattr(config_module, setting)
Then I have alex.py, prod.py and test.py all in a config subfolder. All under version control, with my settings.py automatically choosing the right environment based on where it's deployed to.


A lot more information here: https://code.djangoproject.com/wiki/SplitSettings

I typically use this set up:

- settings.py contains global settings that aren't affected by deployment level.

- at the bottom of that file you have

      try:
          from settings_local import *
      except:
          pass
- in settings_local.py you have your deployment level dependent settings

- then just git ignore settings_local.py


That's an

    except ImportError:
right? ;)


I've tried a few different strategies and found this method works best for me.

In my fabfile I have environment 'setters' that precede regular commands. So I can do 'fab qa deploy' or 'fab prod deploy' and the deploy command grabs the correct settings_local file for the target environment.
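
Roughly like this, as a Fabric 1.x sketch (hosts and paths are made up):

    # fabfile.py -- environment 'setters' precede regular commands,
    # so 'fab qa deploy' or 'fab prod deploy' picks the right settings
    from fabric.api import env, put, run

    def qa():
        env.hosts = ['qa.example.com']
        env.settings_file = 'settings_local.qa.py'

    def prod():
        env.hosts = ['www.example.com']
        env.settings_file = 'settings_local.prod.py'

    def deploy():
        # Push the environment-specific settings into place, then reload.
        put(env.settings_file, '/srv/myapp/settings_local.py')
        run('touch /srv/myapp/deploy.wsgi')  # assumed reload mechanism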


Even better than gitignoring your local_settings -

Save your local settings as settings_local.py.exmp and then

    ln -s settings_local.py.exmp settings_local.py
This gives you working local settings, and a versioned copy of them that is useless unless linked properly.


I think you're missing an 's' there:

https://code.djangoproject.com/wiki/SplitSettings


Oops, fixed.

Thanks


I toss most of my core settings into a settings folder's __init__.py, and then from there have development.py, staging.py, and production.py.

I can then, in each environment-specific file, just do "from settings import *" and have direct access to all of the variables I set up in __init__.py. Most of the time you'll just have different database and cache settings for these environments.

Before you yell at me about the * import: yes, it normally is a bad idea, but in this case it is good. It often gets abused, and that is why so many people see it as "bad".


I like how Heroku does it; it's very smart. They basically inject the database settings into your settings.py: the injected code just grabs the connection settings from the environment, so you don't need to worry about it.

More info: http://devcenter.heroku.com/articles/django
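
The gist of the injected code is parsing DATABASE_URL from the environment; a minimal sketch (not Heroku's exact snippet) would be:

    # settings.py -- build DATABASES from the DATABASE_URL env var,
    # in the style Heroku injects (a sketch, not their exact code)
    import os
    import urlparse  # Python 2, current for Django at the time

    url = urlparse.urlparse(os.environ['DATABASE_URL'])

    DATABASES = {
        'default': {
            'ENGINE': 'django.db.backends.postgresql_psycopg2',
            'NAME': url.path[1:],
            'USER': url.username,
            'PASSWORD': url.password,
            'HOST': url.hostname,
            'PORT': url.port,
        }
    }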


OK, HN front page effect: the server crapped out. Any advice on how to handle this? I cannot even SSH into the server.


tl;dr: 'Scaling' WordPress is simple: install a caching plugin like WP Super Cache.

WP gets a bad rap because it seems to spit out the oh-so-unattractive "Error Connecting to Database" message under the slightest load. Perhaps it's inefficient; I'm not sure why it seems to die so easily.

But... good news is, there is a simple solution. WP renders everything dynamically on every request. It fetches the post from the DB each time you load the page. But, more often than not, the content is only going to change when 1) you write a new post, or 2) a comment is made to a post.

A simple caching plugin solves this. It will render the page once, cache it with a specific timeout (3600 seconds, i.e. 1 hour, is the default I think), and then just serve up that HTML rather than hitting the DB. This will solve 99% of your problems. I've never really seen a WP site die when used with something like this. Duh, because it's just static HTML at that point, haha.

Combine this with a PHP cache like APC (my personal favorite opcode cache of the moment) and a fast webserver like nginx, and you're gonna pretty much survive anything.


yep, it's back up again, will install it. Thanks.


For the first one, another disadvantage (mentioned but not as a disadvantage) is that local_settings is not under version control.


Environmental localizations should be controlled by a configuration management system, not the application source version control system.



