1. Testing. Automated tests, ideally written before the code they cover, make a world of difference. Run the suite in your continuous integration system, and nurture a culture around it. If your tools and language allow it, a lint or style checker as part of your build process is also very handy.
2. Development / Debug Mode. There should be a switch you can flip when working on localhost, development, or staging to immediately get a wealth of data to aid you in debugging your application. Django-Debug-Toolbar is the holy grail among out-of-the-box solutions. Seeing breakdowns of the queries executed, time spent in various functions, quick log lookups, etc. is essential.
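For Django specifically, such a switch can be sketched in settings.py along these lines. This is only a sketch: DJANGO_DEBUG is an illustrative variable name, and it assumes django-debug-toolbar has been installed.

```python
import os

# DJANGO_DEBUG is an assumed variable name, not something Django defines;
# flip it on for localhost/dev/staging, leave it off in production.
DEBUG = os.environ.get("DJANGO_DEBUG") == "1"

INSTALLED_APPS = [
    # ... your apps ...
]
MIDDLEWARE = [
    # ... your middleware ...
]

if DEBUG:
    # Enable the toolbar only when the debug switch is flipped.
    INSTALLED_APPS += ["debug_toolbar"]
    MIDDLEWARE = ["debug_toolbar.middleware.DebugToolbarMiddleware"] + MIDDLEWARE
    INTERNAL_IPS = ["127.0.0.1"]
```

With this in place, production stays clean while any developer can flip the switch locally.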
3. Production Error Tracking. If you're still relying on grepping logs or being sent 5xx error emails as your only form of error control, you're in the stone age. Use a tool or service like Sentry to turn this into a centralized system.
4. Issue Tracking. Along the same lines as version control, this should be a no-brainer: a centralized resource for keeping track of formalized bugs, errors, and planned features.
5. Analytics and Statistics. Can you tell me, right now, the delta in 4xx and 5xx errors in your application between your two most recent deploys? How healthy is your app over time? How many sign-ups, returning users, login failures, and purchases have been made? Tracking key business metrics and actionable events will give you fantastic insight into your product. It's also a great way to catch critical system errors before they take down your site for hours on end, via tracked spikes in latency, cache misses, queue size, etc.
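A toy sketch of the kind of tracking meant here: tally responses by status class so the deploy-to-deploy delta is a simple subtraction. All names are illustrative; a real setup would ship these counts to a metrics backend.

```python
from collections import Counter

# Tally responses by status class (2xx/4xx/5xx) so you can compare
# the counts recorded before and after a deploy.
status_counts = Counter()

def record_response(status_code):
    status_counts[f"{status_code // 100}xx"] += 1

for code in (200, 201, 404, 500, 503):
    record_response(code)
# status_counts: 2xx=2, 4xx=1, 5xx=2
```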
Concur. It is an immensely useful tool, simple as it may be to set up. There are services that do it for you (getexceptional), but it literally takes only a few hours to throw your own together.
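A sketch of how little "your own" needs to be to start, using Python's logging hooks. This toy version collects errors in a list; a real one would POST each event to a central server instead.

```python
import logging

class CentralizedErrorHandler(logging.Handler):
    """Toy centralized tracker: capture every error in one place
    instead of grepping per-machine logs."""

    def __init__(self):
        super().__init__(level=logging.ERROR)
        self.events = []

    def emit(self, record):
        # A real implementation would send this to a collection service.
        self.events.append(self.format(record))

handler = CentralizedErrorHandler()
logger = logging.getLogger("app")
logger.addHandler(handler)
logger.propagate = False

logger.error("payment gateway timeout")
```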
Issue tracking is probably moving too much into management territory for this list, which is why I think they left it out.
It's probably for the same reasons that they don't touch upon server management/configuration, but I have personally found Puppet (or a similar tool) very useful. It's basically source control for your server configuration; once you start using it, not having it seems reckless.
OK, guess I'm a moron here, but... surely the suggestion isn't that a huge set of env vars is entered at the console manually. They have to be in some shell script or file that produces them... which is then... the config file! Someone help me see what I'm missing here.
Unless the idea is, "well it's a shell script! that's nothing like a config file!" in which case I'll be searching for the HN downmod button I can't wait to have someday.
If you use `foreman` to manage application process formations, it will source a `.env` file before running.
You could call either pattern a "config file", but the important part is that it's actual Unix environment variables, and there is a safe and secure place to store the variables outside of the code repository.
Then... you can take a huge leap forward when running on Heroku. Heroku has an API to set environment variables:
$ heroku config:add API_PASSWORD=abc123
The really clever part is that 3rd party add-on providers can also set environment variables on your app with a similar API. So if your database is an add-on:
$ heroku addons:add redistogo
$ heroku config
REDISTOGO_URL => redis://user:pass@host-1:9492/
$ heroku addons:upgrade redistogo:medium
$ heroku config
REDISTOGO_URL => redis://user:pass@host-2:9133/
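The app side of this is just an environment lookup at boot; when the provider rewrites REDISTOGO_URL, a restart picks up the new host with no code change. A minimal sketch (the localhost fallback is an assumed development default, not something Heroku provides):

```python
import os

# REDISTOGO_URL is set by the add-on provider in production; the
# fallback is an illustrative development default.
redis_url = os.environ.get("REDISTOGO_URL", "redis://localhost:6379/0")
```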
I'd prefer that a project can be set up for development as quickly as possible, so my current approach is to check in default configurations for the development environment that are overridden with environment variables in production.
2) There are several options. A developer ("gold CD") virtual machine is one, where a VM is set up and configured from the current configuration by an automated script. Another is an environment script that sets up the development variables on the developer's workstation (just make sure it's stored in version control, independent of the code). Each works well; it just depends on preference. There are a lot of ways to set this up, so try a few and see which one best fits your development culture.
I have never seen this done before. Is this really a best practice? I mean, at some point your app has to depend on certain tools put in place by the OS, right?
Examples could be Varnish, nginx, and PostgreSQL.
You probably don't need to package postfix.
(1) One codebase tracked in revision control, many deploys (running instances, typically including a production and one or more staging sites)
(2) Explicitly declare and isolate dependencies (e.g. a Gemfile for Ruby)
(3) Store config in the environment NOT code, "A litmus test for whether an app has all config correctly factored out of the code is whether the codebase could be made open source at any moment, without compromising any credentials."
(4) Treat any service the app consumes over the network as attached resources, a 12-factor app "should be able to swap out a local MySQL database with one managed by a third party (such as Amazon RDS) without any changes to the app’s code."
(5) Strictly separate build, release, and run stages.
(6) Execute the app as one or more stateless processes. "The twelve-factor app never assumes that anything cached in memory or on disk will be available on a future request or job."
(7) Export services via port binding.
"The twelve-factor app is completely self-contained and does not rely on runtime injection of a webserver into the execution environment to create a web-facing service. The web app exports HTTP as a service by binding to a port, and listening to requests coming in on that port."
(8) Scale out via the process model.
"processes are a first class citizen... the share-nothing, horizontally partitionable nature of twelve-factor app processes means that adding more concurrency is a simple and reliable operation."
(9) Maximize robustness with fast startup and graceful shutdown.
"Twelve-factor app’s processes are disposable... should strive to minimize startup time...shut down gracefully when they receive a SIGTERM signal from the process manager...should also be robust against sudden death"
(10) Keep development, staging, and production as similar as possible.
"Twelve-factor app is designed for continuous deployment by keeping the gap between development and production small... resist[ing] the urge to use different backing services between development and production"
(11) Treat logs as event streams.
Twelve-factor app "never concerns itself with routing or storage of its output stream"
(12) Run admin/management tasks as one-off processes.
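Factor 9's disposability requirement is mostly a matter of wiring up a signal handler and keeping startup cheap. A minimal Python sketch of the graceful-shutdown half:

```python
import signal
import sys

def handle_sigterm(signum, frame):
    # Finish or requeue in-flight work, release connections, then exit
    # cleanly. A real handler would do that teardown here.
    sys.exit(0)

# The process manager sends SIGTERM on shutdown; responding promptly
# is what lets the platform kill and restart processes freely.
signal.signal(signal.SIGTERM, handle_sigterm)
```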
How does one go about implementing #4? I can abstract away a REST API call for Avatars, but how does one do that for a database? Or is this just a fancy way of saying (in Java-speak) "use interfaces"?
[Edit: the site is back up and I was able to read it. The premise reads differently than how the parent wrote it. I read it mainly as, the sysadmin should be able to switch to a new host, change the configuration file, bounce the application, and everything should work.]
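Concretely, the swap in factor 4 is usually no fancier than keeping the resource's URL in config and parsing it at boot, so pointing at a new host is a config change, not a code change. A sketch (DATABASE_URL and the fallback value are illustrative):

```python
import os
from urllib.parse import urlparse

# DATABASE_URL is an assumed variable name; swapping a local MySQL for a
# hosted one means changing this value, not the code below.
db = urlparse(os.environ.get("DATABASE_URL",
                             "mysql://app:secret@localhost:3306/appdb"))

host, port, name = db.hostname, db.port, db.path.lstrip("/")
```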
I like that the article was written as a response to observing problems deploying real apps. I just emailed this link to two of my customers, just in case they did not see it. There are too many good points in the article to comment on.
For example, I would build an app with a dependency on Smarty templates. The main app is in a git repo, but Smarty is required to be also installed on the system. The config file of the app defines the location of Smarty on the system.
Then, Phing will read in that file and convert those to properties. From there, I can wget the versions of those packages, untar them, and place them in lib/vendors/.
This makes it very simple to test with a newer version. Just update the file and rebuild and a newer version will be downloaded and installed.
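The same pattern is easy to sketch in any language: parse the pinned versions out of a key=value properties file and derive download URLs from them. This sketch makes up both the file contents and the URL layout for illustration.

```python
def parse_properties(text):
    """Parse simple key=value lines, skipping blanks and # comments."""
    props = {}
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#"):
            key, _, value = line.partition("=")
            props[key.strip()] = value.strip()
    return props

# Illustrative properties file and download URL, not Phing's real format.
props = parse_properties("# pinned vendor versions\nsmarty.version=3.1.0\n")
url = "https://example.com/smarty-{}.tar.gz".format(props["smarty.version"])
```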
1) I'm certainly not going to set env variables with database passwords and such by hand. I'm going to have a script to do it for me. I might even call it, oh, I dunno, settings_local.py. In which case, I'm right back at square one, needing to not check this into source control. In other words, how do you avoid having the database passwords in SOME file, SOMEWHERE? Does it "fix" anything if it's named "fabfile.py" instead of "settings_local.py"? I can't see how.
2) How do I deploy multiple apps to the same server? Let's say I've got a linode (or EC2 instance, or whatever) running a test, dev, and production deployment of three different apps. With config files, I just have a different settings_local.py file in each deployment's project directory. With environment variables...what, do I use prefixes? app1_test_dbpass = 'foo'; app2_prod_dbpass = 'bar', and so on? Except they say NOT to group stuff together into environments by name. So uh...how do you manage name collisions?
Basically, they seem to have identified a real problem, then proposed a solution that doesn't fix it and doesn't work. What am I missing?
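One piece worth noting on the collision question: environment variables are scoped per process, so a process manager can hand each deployment its own set; nothing is global to the box. A sketch of that isolation (DBPASS is an illustrative name):

```python
import os
import subprocess
import sys

# Two "apps" on the same machine, each launched with its own environment.
# Neither can see the other's DBPASS, because the variable lives in the
# child process, not in any machine-wide namespace.
def run_app(dbpass):
    result = subprocess.run(
        [sys.executable, "-c", "import os; print(os.environ['DBPASS'])"],
        env={**os.environ, "DBPASS": dbpass},
        capture_output=True, text=True,
    )
    return result.stdout.strip()

print(run_app("foo"), run_app("bar"))  # each child sees only its own value
```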
A few good reasons:
1) Language agnostic -- I can use common configuration for Ruby, Node.js, Makefiles, Bash scripts, etc., without having to find or use a parser for my config file format, or worry about executing a Ruby script from a Python process, or whatever.
2) Real programming constructs -- I can easily test a boolean condition, interpolate variables, and otherwise minimize copy/paste between configuration sections.
3) Available at bootstrap -- Don't need to install Ruby or Python or anything like that. The very first scripts I run can use the environment without compromise. Particularly useful because machines are bootstrapped with Makefiles, which play well with the environment.
I modeled my app env script on the behavior of the `env` command (in fact, I delegate to it). See an example script in this gist:
Note the examples. By delegating to `env`, you get a few features for free:
1) Executing any process within the environment
2) Passing additional environment variables on the command line
3) Printing of environments (great for diffing!)
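A rough Python equivalent of that delegation pattern, assuming a simple KEY=VALUE file format (this is a sketch, not the gist's actual script):

```python
import os
import subprocess

def load_env_file(path):
    """Parse simple KEY=VALUE lines, skipping blanks and # comments."""
    env = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#") and "=" in line:
                key, _, value = line.partition("=")
                env[key.strip()] = value.strip()
    return env

def run_with_env(env_file, argv):
    """Run argv with os.environ plus the file's variables (like `env`),
    or print the merged environment when no command is given."""
    env = {**os.environ, **load_env_file(env_file)}
    if argv:
        return subprocess.call(argv, env=env)
    for key in sorted(env):
        print(f"{key}={env[key]}")
    return 0
```

Delegating to the merged environment this way gives you the same three freebies: run any command under the app's environment, layer extra variables on top, or dump the result for diffing.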
1. Have a script that defines the environment variables your app uses, and name that script ~/.bashrc. User accounts are cheap and provide an excellent way to have isolated environments.
2. Check an example copy into the project itself. The application does not source this script itself. This documents the dependencies of the app.
Disclaimer: I've never deployed on anything that wasn't unix-like.
The production / staging environment Chef configuration is stored in a tightly locked-down git repository that can only be accessed from production.
Chef reads the production configuration information and puts the environment variables into envdir-compliant files in a directory; runit uses chpst/envdir to start the process with the correct environment settings... and that's pretty much it.
Just define an env variable or something that contains the location of settings.py, in a directory that's managed by something like Chef/Puppet.
So obviously I need something that contains and applies these settings. What makes this any less likely for me to accidentally commit that than the files they're objecting to?
At the moment, I have some apps deployed with nginx > uwsgi > django, taking advantage of the fact that nginx has built in support for uwsgi these days. This breaks rule 7, since my app isn't just binding to a port - but would I really be better off by using nginx > tornado > django?