

Speed up your Fabric deployments with Parex - enki
http://paulbohm.com/parex-parallel-execution-for-python-and-fabric/

======
njharman
Why isn't Fabric doing this? This would be a basic feature of deployment
software, I would think. I vaguely remember the old, old Fabric (before they
rewrote everything) did this.

~~~
enki
i think they've been planning to for a while:
<http://code.fabfile.org/issues/show/19>

i haven't really followed their progress too closely, but they seem to be
close to releasing something for 1.0.

I think they do client-side threading while i prefer to do fork() on the
server to avoid multiple ssh connections and retain maximum control.

i'll reevaluate when they ship. till then parex works for me.
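[ed: the server-side fork() approach described above can be sketched roughly like this. This is an illustrative sketch only, not Parex's actual code — the task functions and helper name are made up.]

```python
import os

def run_parallel(tasks):
    """Fork one child process per task on the server, then wait for all of them.

    Everything runs over a single ssh connection because the parallelism
    happens server-side, after the fabfile has already landed there.
    """
    pids = []
    for task in tasks:
        pid = os.fork()
        if pid == 0:          # child: run the task, then exit immediately
            task()
            os._exit(0)
        pids.append(pid)      # parent: remember the child's pid
    for pid in pids:
        os.waitpid(pid, 0)    # block until every child has finished

# hypothetical deploy steps run concurrently on the server
run_parallel([lambda: print("restarting app"),
              lambda: print("warming cache")])
```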

~~~
goosemo
The branch actually does client side forks, one for each server in a task.
Then each call inside a task is run sequentially, as expected in normal fabric
usage. I've also included the ability to set a pool size, so that one can
manage the number of ssh connections open simultaneously.
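[ed: the pool-size idea — fan out across many servers, but cap how many are worked on at once — can be approximated with the standard library. A sketch under assumptions: `deploy` stands in for a real per-host task, and `multiprocessing.Pool` stands in for the branch's own fork management.]

```python
from multiprocessing import Pool

def deploy(host):
    # stand-in for running a fabric task against one host over ssh
    return "deployed to %s" % host

def deploy_all(hosts, pool_size=2):
    # pool_size caps how many hosts are handled simultaneously,
    # i.e. how many ssh connections would be open at once
    with Pool(processes=pool_size) as pool:
        return pool.map(deploy, hosts)

print(deploy_all(["web1", "web2", "web3", "web4", "web5"]))
```

`Pool.map` preserves input order in its results even though the workers run concurrently, so per-host output stays easy to correlate.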

~~~
enki
ah - so what you are doing speeds up deploys to multiple servers, while what
i'm doing is meant to speed up an individual deploy.

i mostly just got tired of waiting for an individual deploy to my staging
server.

btw thanks for emailing me! - i'll definitely look at your branch and see if i
can help! (wasn't aware of it until today)

~~~
goosemo
Yeah, that's the goal anyways. I tend to reach out to 50+ servers at a time,
and iterating through that on any tasks that take over a min is mind numbing.

Well, through this post perhaps the branch will be better known, but I haven't
really posted much about it because I thought it'd be done sooner, and any
mention of a branch would be negated by its inclusion in master.

------
goosemo
Just to note/ask though: this requires that you have a python script on the
server that uses this as a lib? And requires tornado?

~~~
enki
i just use the same fabfile on the server. and it doesn't require tornado on
linux (if epoll is available)
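[ed: the epoll fallback mentioned above presumably amounts to feature detection on `select.epoll`, which the stdlib only exposes on linux. A minimal sketch of that pattern — the `wait_readable` helper is hypothetical, not Parex's API:]

```python
import os
import select

def wait_readable(fd, timeout=1.0):
    """Wait until fd is readable, using epoll where the platform provides it."""
    if hasattr(select, "epoll"):            # linux: no tornado needed
        ep = select.epoll()
        ep.register(fd, select.EPOLLIN)
        events = ep.poll(timeout)
        ep.close()
        return bool(events)
    # portable fallback for platforms without epoll
    readable, _, _ = select.select([fd], [], [], timeout)
    return bool(readable)

r, w = os.pipe()
os.write(w, b"x")
print(wait_readable(r))   # the pipe has data waiting, so this is True
```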

