You would still want to use Chef, Puppet, or some home-rolled package-deployment toolchain, if for no other reason than to ensure a consistent state and bring up new nodes quickly.
The idea of doing this was kindled when we started deploying Django apps – you can see in the second example that it’s rather intricate. The dependencies are just a matter of `pip install -r requirements.txt`.
Separate the pull into two steps: a `git fetch` into a shared bare git repository (if you care about efficiency), which is atomic, and then a `git checkout` into a new directory followed by a single symlink(2) and rename(2), which is also atomic because rename is.
> Imagine you’re pulling from git and at the same time, the app decides to import a module that changed profoundly since its last deployment. A plain crash is the best case scenario here.
> On the other hand, deploying using self-contained native packages makes the update of an app a near-atomic,
You can't have your cake and eat it too. Native packages have the exact same race conditions as an (in-place) git update.
It's clear that some of the points can be mitigated using more sophisticated git+fab magic. Nevertheless, the other points remain true and the combination of them all makes me believe that native packages are the better way.
1. activate virtualenv
2. run `easy_install -U app`
and no root permission is required. It also has the added benefit of allowing multiple copies of the same package on the same machine, which can't be done with native packages.

In one deployment, we have a staging instance and a production instance on the same machine...
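A sketch of that flow, assuming one virtualenv per instance (the paths and the `app` package name are illustrative):

```shell
# Two independent copies of the same app on one machine, no root needed.
virtualenv /home/deploy/staging/venv
virtualenv /home/deploy/production/venv

# Upgrade only the staging copy; production keeps its current version.
. /home/deploy/staging/venv/bin/activate
easy_install -U app
deactivate
```

Each virtualenv carries its own `site-packages`, so the two installs never see each other.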
I find native packaging to be really helpful in mixed environments. On a daily basis I need to deploy Rails/PHP/Python, and using a native package manager is a nice common denominator.
When I first heard about this strategy I thought it was crazy -- but have since come to really like it.
That’s the whole point of it: have a consistent and reliable way to deploy whole applications including all dependencies.
Until emerge/apt-get/yum/etc add support for virtualenv, I will stick with .egg packages and be python-native rather than system-native.
I've used that sort of setup in various configurations and it's solid. After all, that's what our favorite distros use (whether it's deb-based or not is irrelevant, the concepts are similar).
If and when configuration needs to be "shared" or "propagated" between machines, I store it in a store of some kind (RDBMS, KV).
Of course, this is true of any compiled language that can produce self-contained binaries, but Go has the added benefit of being well adapted to web app development.
Even with Go, you'd have to package up your single binary in order to get the same experience. (Although, that package would be pretty small!)
> virtualenv /vrmd/$APP_NAME/venv
This step could be skipped if you built different debs for different distro versions (in chroots for example, as sbuild does).
Might be too much of a hassle, though, for little gain. It just looks slightly wrong to me that you first install a deb and then go and overwrite many of the files the deb installed.
Please look at the FPM wiki, you'll see that it offers many more options for integration than checkinstall (which I used to use, and hated):
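For example, wrapping a pre-built directory into a .deb is a one-liner with fpm. The package name, version, prefix, and the `postinst.sh` hook script below are all illustrative:

```shell
# Package the contents of built-app/ under /vrmd/myapp as a .deb,
# running a hypothetical postinst.sh via dpkg's after-install hook.
fpm -s dir -t deb \
    -n myapp -v 1.0.0 \
    --prefix /vrmd/myapp \
    --after-install postinst.sh \
    built-app/
```

The same command with `-t rpm` produces an RPM from the same inputs, which is a big part of fpm's appeal.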
And why would you want to create source packages? One of the benefits of generating binary packages is that you compile once and roll it out to hundreds of servers. I'd rather not have to compile, say, Erlang on every target server.
How do you differentiate builds on, say, Red Hat / BSD / Windows as opposed to Ubuntu? I would guess vrmd.fabric.fedora.deploy contains a lot of the same code except for pathname changes?
We have no Windows and no BSD _yet_. There might come some FreeBSD for ZFS's sake soonish, then I’ll have to refactor. :)