This is a pretty neat hack, but not really good for a true production deployment system. Rsync is a far superior alternative. That being said, git should definitely be incorporated into the workflow such that, for example, you have a "live" branch which always reflects what is to be on production frontend nodes. From there you do 1) git pull origin live 2) rsync to live servers 3) build/configure/restart/etc. Set -e on that script obviously...
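A minimal sketch of that script, assuming the host, paths and restart command are just placeholders for whatever your setup actually uses:
#!/bin/bash
set -e                       # abort the deploy on the first failure
git pull origin live
rsync -az --delete --exclude='.git' ./ deploy@web1.example.com:/var/www/app/
ssh deploy@web1.example.com 'cd /var/www/app && ./build.sh && sudo service app restart'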
Edit: I should also mention, if you are stuck on something like restricted hosting with CPanel which severely limits your deployment options (some of my clients are in this boat), then http://ftploy.com/ is a really cool solution. But you should really get your ass off cpanel asap.
Double edit: Some of the replies below have made some good points that I had not considered which weaken my argument. So while I'm now more ambivalent than dismissive towards the idea of using git to deploy, there are several modifications that should be made to this particular system to make it production-ready. See avar's and mark_l_watson's comments below and mikegirouard's comment elsewhere for some ideas.
Both Git and rsync are inadequate deployment tools, for many reasons I won't go into in a comment, as there are quite a few articles expounding the virtues of not deploying from your VCS. There are numerous reasons why rsync is inappropriate too (what if you push up a nasty bug and have to revert? Oops, better go back to my tagged release and rsync again - which is ugly compared to versioned releases combined with proper use of the operating system's dominant package manager, where you can upgrade/downgrade an application based on its version...)
I generally see four stages in the devops maturation of a programmer:
1. I rsync my code using pre-built commands in Fabric when I'm ready to push.
2. I write code and have hooks on the server to pull the repo when I tag a release in my VCS.
3. I use my language's package management system to build a source distribution that includes all of the necessary static assets, the web application, and any database migration code; I also use a sane versioning scheme to keep track of releases. When I want to push I use a build system that hooks into my continuous integration server and builds a distribution whenever the senior programmer tags a release. It is then made available to the production server in a deb or rpm repository where the senior programmer can then just run an update command (that updates with the new distribution and runs any necessary database migration or post-upgrade hook scripts).
4. You are so big that you've got a custom deployment system built on-top of BitTorrent (ala Facebook) or something similar.
It should be obvious where I'm at - I progressed from being an adherent to VCS deployment, to rsync only, to a proper source distribution release system. I haven't managed the devops for a team/application the size of Facebook yet but I'm sure I will get there soon.
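To illustrate stage 3: the rollout on a production box can be as small as a pinned package install (package name and versions are made up, and the exact commands depend on your distro):
apt-get update
apt-get install myapp=1.4.2-1   # upgrade to the tagged release
apt-get install myapp=1.4.1-1   # or roll back to the previous version if needed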
Benefits of versioned archives over VCS for deployments: easily checksum and cryptographically sign; easily integrate with existing distribution specific package databases; deploy without requiring a VCS (and all its dependencies, including maintained and accessible VCS repo-hosting deployment infrastructure), probable security and speed benefits of the resulting (ie. minimalist) approach (both at the level of the host and the network).
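For example, checksumming and signing a versioned archive is a one-liner each (file names hypothetical):
sha256sum myapp-1.4.2.tar.gz > myapp-1.4.2.tar.gz.sha256
gpg --armor --detach-sign myapp-1.4.2.tar.gz      # produces myapp-1.4.2.tar.gz.asc
# on the target, verify before unpacking:
sha256sum -c myapp-1.4.2.tar.gz.sha256
gpg --verify myapp-1.4.2.tar.gz.asc myapp-1.4.2.tar.gz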
Personally I use a combination of versioned archives and named and versioned target environments, each of which can be tested both individually and in combination (including regression tests). This works well for me.
There are many package management systems which suck at this.
I suppose then that you mean "rpm" or "debs" or the like, not the "language package management system" the previous poster mentioned, because I've yet to see one that truly supports more than tar xzf <list of deps>.
Even when they have signing support, none of the packages are signed anyway.
Anywhere that I have a say in the matter, FTP is disabled. I've been a fan of rsync for years and have a bunch of scripts that can make the whole process seamless. That said, I'm starting to be won over by git deploys.
The reason I've started to like git is deletes. You can handle them with rsync:
rsync --delete
The problem is that some projects have content uploaded in the same file tree (simple CMS installs). This might not be an issue if it was structured differently (symlink to another directory), but sometimes it's what I have. Using "rsync --delete" would remove newly uploaded user content. Yeah, I could use the "--exclude" option as well.
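Something like this, roughly (paths hypothetical); an excluded directory is also protected from --delete unless you add --delete-excluded:
rsync -az --delete --exclude='uploads/' ./ deploy@example.com:/var/www/site/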
With git, I can just "git rm ..." and the file will be removed on deploy. Content can be mixed in the same tree and hidden with a .gitignore file. File content can be managed separately with rsync, if that's the best way. Just not FTP. Please.
> Content can be mixed in the same tree and hidden with a .gitignore file
Note that rsync also allows fairly powerful in-tree tweaking of details: if you give it the "-F" option, it will look for ".rsync-filter" files (see man page for details).
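For example, a .rsync-filter file at the project root like this protects user uploads from --delete and skips transferring them (rule syntax per the man page; paths hypothetical):
# contents of .rsync-filter:
#   P uploads/
#   - uploads/
rsync -azF --delete ./ deploy@example.com:/var/www/site/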
Where I work, we've been moving away from rsync to Git for our code syncing.
I'm not saying there aren't uses for rsync, but your dismissal of git as not being suitable for a "true production deployment system" isn't supported in any way. And stating that rsync was "specifically made for this kind of thing" without comparing any of the trade-offs involved is just appealing to authority.
Some things you may have not considered:
* rsync is meant to sync up *arbitrary filesystem trees*, whereas with Git you're snapshotting trees over time. When you transfer content between two Git repositories, the two ends can pretty much go "my tree is at X, you have Y, give me X..Y please". You get that as a pack, then just unpack it in the receiving repository. Whereas with rsync, even if you don't checksum the files you still have to recursively walk the full depth of the tree at both ends (if you're doing updates), send that over the wire etc. before you even get to transferring files.
* Since syncing commits and actually checking them out are two different steps, you can push out commits (without checking them out!) to your production machines as they're pushed to your development branches. Then deploying is just sending a message saying "please check out such-and-such SHA1" and the content will already be there! (See the sketch after this list.)
* You mentioned in another post here that rsync has --delay-updates; this is just like "git reset --hard" (but I'll bet Git's is more efficient). With Git you can do the transfer of the objects and the checking out of the objects as separate steps.
* It's way easier for compliance/validation reasons to not get the data out of Git, since you can validate with absolute certainty that what you have at a given commit is what you have deployed (just run "git show"). If you check the files out and then sync them with some out-of-band mechanism you're back to comparing files.
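A minimal sketch of that two-step rollout, assuming a bare repo on each production box with the work tree elsewhere (paths and the SHA1 placeholder are made up):
# step 1, on every commit: get the objects onto the box without touching the work tree
git --git-dir=/srv/app.git fetch origin
# step 2, at rollout time: the objects are already local, so this is just a checkout
git --git-dir=/srv/app.git --work-tree=/srv/app reset --hard <deploy-sha1>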
Edit: One thing I forgot: it's distributed, which gives you a lot of benefits. Consider this problem: you have 1000 servers running your code and you've decided that you want to deploy now from a staging server.
Trying to rsync to 1000 servers at once from one box (the naïve implementation with rsync) would take forever and overload that one box, especially if you wanted to take advantage of pre-syncing things on every commit so the commit will already be there if you want to roll out (constant polling and/or pushing).
You can mitigate this by having intermediate servers you push to, but then you've just partitioned the problem: what if you need to swap out those boxes, they go down, etc.?
With Git you can just configure each of the 1000 boxes to have 3 other boxes in the pool as a remote. Then you seed one of them with the commit you want to roll out. The content will trickle through the graph of machines, any one machine going down will be handled gracefully, and if you want to roll out you can just block on something that asks "do you have this SHA1 yet?" returning true for all live machines before you "git reset --hard" to that SHA1 everywhere.
You've described some admirable utility that can be achieved by using Git. However, it can all be accomplished with other tools and without needing the entire deployment history stored on each production machine.
As for your comment about being "back to comparing files", that's all Git is doing internally anyway. You can do the same with other deployment tools and sha1 hashes etc.
Sure, it can be accomplished with other tools, but if Git is sufficient, introducing other tools just increases the complexity of your stack, and the complexity of e.g. validating that a Git tag corresponds to what claims to be rolled out as that tag.
> and without needing the entire deployment history stored on each production machine.
This is a constraint a lot of people seem to think they need but don't actually need. If someone gets your current checkout they'll have current code / passwords (if you accidentally checked in a password but removed it, you should change that password). Getting the code history will just satisfy historical curiosity. Hardly a pressing concern for an attacker.
> As for your comment about being "back to comparing files", that's all Git is doing internally anyway. You can do the same with other deployment tools and sha1 hashes etc.
Yes, but the point is that it just gives you that for free, without you having to hack anything extra on top of your syncing mechanism.
You'd be pleasantly surprised how much checking/validation/syncing logic you have to write around e.g. rsync when syncing a Git repo just disappears entirely if you use Git to sync the files.
Note that git could be used as the developer/ops-facing deploy interface, while under the hood you do something more complicated/robust like Capistrano or rsyncing to multiple machines, or whatever.
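E.g. the server-side hook can stay a thin git-push interface while a heavier tool does the real work; a sketch (the cap invocation is purely illustrative):
#!/bin/sh
# post-receive hook: accept the push, then hand off to the real deployment machinery
cd /home/deploy/app && cap deploy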
Maybe you start out on Heroku. Then you switch to your own machines and use this simple hack, or Dokku or something. Then something home grown. The complexity of deploy scripts can grow while the interface stays the same.
I've been using this method of deployment for several production sites for a couple of years now. I don't really see why rsync is better, or how using git is meaningfully different from having a live branch that you Rsync from. As long as you're checking out into a detached work tree it is functionally identical to rsync.
Part of my reasoning is that rsync is specifically made for this kind of thing, whereas git is specifically made to synchronize coding among multiple developers. So my argument is partly theoretical and less practical.
But for an argument based in pragmatism, rsync has tools such as the --delay-updates flag, which allows your entire deployment procedure to become a pass-or-fail atomic operation. This kind of assurance slows my hair loss as a systems administrator. AFAIK git has no such tools, but I'm certainly open to being corrected.
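Roughly like this (paths hypothetical); --delay-updates stages each changed file in a temporary area and only moves everything into place at the end of the transfer:
rsync -az --delete --delay-updates ./ deploy@example.com:/var/www/site/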
I'm a huge Git fan and use it every day, but Git was never designed as a deployment tool. There may be situations where you want the entire history of your development to be included on your live server, but often this just isn't appropriate. Also, when deploying to multiple servers you have to invent ad hoc methods to handle configuration differences. Even native ssh seems like a more prudent deployment method than Git. This smacks of using the closest hammer at hand, rather than choosing the best tool.
> There may be situations where you want the entire history of your development to be included on your live server, but often this just isn't appropriate.
Are you concerned about being wasteful with disk space? Or is there some other concern here? Some security issue perhaps?
I once committed my DB settings (Mercurial) and noticed my mistake only later. It's very hard to get it out of the history. Of course it could be fixed, but this is one example.
IMHO version control could be used for deployment, but only when you use the release branch of your project.
And of course NEVER put your config in version control ;)
> And of course NEVER put your config in version control ;)
I'm not sure I'd make that blanket statement. Version control seems like a great place for configuration. It allows you to centrally manage configuration details and provides an audit trail for debugging. You just want to make sure it is in a separate, secure repository and not mixed in with your app development.
FWIW, removing things from history entirely is easy with git using git-filter-branch. The real problem is realizing that you need to do that in the first place.
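For example (the file path is hypothetical; note that everyone has to re-clone or hard-reset afterwards, since history is rewritten):
git filter-branch --force --index-filter \
  'git rm --cached --ignore-unmatch config/database.yml' \
  --prune-empty --tag-name-filter cat -- --all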
> And of course NEVER put your config in version control ;)
Why wouldn't you want to version your configurations in general? Maybe I am not sure what you mean by "config" but in general version controlled configuration is always good. I even set up git in my etc sometimes to track changes I make to it manually (not in production, on my home machine).
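Even something as bare-bones as this works for tracking /etc by hand (no extra tooling):
cd /etc
sudo git init
sudo git add .
sudo git commit -m "baseline of /etc"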
I think what everyone is getting at here is to be smart about when to use git for deployment. Rsync is great for small static sites the same way git is. Now, I'm not going to use it on my medium-to-large web app (actually I do for the testing server, but that's another story). It's just another way to get a site deployed. I think it's great for small static sites and prefer it over rsync for no other reason than I'm already using git and it's just an extra git push when I'm ready. I actually use different remotes for deployment rather than a branching strategy.
Mostly it's because it seems people are using Git to deploy without a good reason. At least I haven't heard of an advantage enjoyed by those using Git for deployment.
There are some obvious disadvantages, so what is the compensation? It seems the only reason is that it's easy to type "git push". But of course any deployment method can be wrapped in an equally easy script command.
Okay, I have to admit to knowing of one advantage, and that is that only the delta of your changes will be transmitted over the wire, rather than a complete checkout. It's just that in practice the savings aren't usually enough to warrant the potential downsides. For my money I'd prefer rsync or any number of other solutions.
The way we have it set up in capistrano, git is used as the distribution mechanism. It offers a lot of flexibility in that you can deploy a tag or a revision hash or whatever without having to worry about consistencies between users' machines or having to deal with an external packaging machine.
Once the repo has been fetched, we just check out the right tag/revision and do a local copy from the git repo into the app directory. At this step you can exclude .git if you want.
This process has an advantage over direct git checkout in that if you (heaven forbid) ssh onto the server and directly modify anything, you won't end up with conflicts.
Are you 100% sure that there is nothing you are exposing via your git repo that you want to keep away from the person who manages to hack your server or discover some means to reach the repo externally?
Getting hacked is not inevitable, but if you treat your systems as if it were you'll be a lot safer if it does ever occur.
If you push via git or via rsync, you're typically going over SSH in both cases. As far as the .git directory, my post-receive hook also does a "cp -R" of the files to the actual web-served directory (there's a build step in between anyway), so there's no .git exposed. As far as security, as long as one knows to handle the .git directory, there's no difference.
1) You can go through the trouble of identifying and resolving all the edge cases that you encounter when using Git as a deployment tool, keeping in mind that one of those may result in an embarrassing security disclosure. Whoops.
2) You can use a deployment tool that was developed for that purpose, has existed for years, and has had many sets of eyes on it; many of which are inevitably more experienced than you. And you still may end up with an embarrassing security disclosure, but the chances are better that you'll hear about it through responsible disclosure channels first, rather than waking up at 3 AM to the voice of your boss/client asking why the site is redirecting users to buy Viagra at a discount.
A bonus third choice:
3) You look at existing deployment tools and ask yourself "I wonder why they do that?" Then, maybe ask around a bit. Once you've got a good idea of all idiosyncrasies involved with deploying software, then you embark upon building your own tool. I think you'll find that simply `git pull`ing from your httpd document root and `rm -R`ing the .git/ directory won't be your final solution.
The audit of commits in the general case (Checking for errors in the code) and the audit for the deployment case (Checking for sensitive data that may be exposed in a security breach) are different audits. I don't think many tend to do the second kind of audit.
Also, minimizing your exposure in case of a security issue is probably a good idea, so the convenience of deploying with git may or may not be worth this extra exposure.
I agree on the configuration issue, but you can export the working tree[1] in the hook to avoid including the Git history (and in fact it is a sensible choice).
The configuration can be handled in the post-receive hook too.
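A sketch of that kind of hook (paths made up); git archive exports the tree without any .git directory:
#!/bin/sh
# post-receive: export the latest tree into the web root; the history stays behind in the repo
git archive master | tar -x -C /var/www/example.com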
I advise against using Git as a deployment tool for serious development (you should use Puppet instead), but for quick hacking and personal projects it's perfectly fine.
When you git push to a live server as the article in question suggests, you're sending the entire repo history, not just the latest commit. The remote hook only checks out the latest change, but the entire history is sitting right there on the live server, which is silliness in most cases.
As is suggested in the SO article you link, it's more appropriate to locally export the version you want to deploy and use ssh (or whatever) to transfer it to the live server. Nobody ever seems to try to justify _why_ you would want to use git-push, they just go about explaining how you do it.
In this example they are checking out into a work tree that is detached from the central repo, which is bare (or could be.) Therefore there is no .git directory in the work tree and the history is inaccessible from the outside world. I don't see how having the entire history sitting in a private directory on the server is silly, as long as only you can access it.
Honestly we are just talking about transferring files here. However you automate it, as long as it gets the files from point a from point b, is fine. I happen to find it most convenient to use git since that is how I send and receive code changes everywhere else, and it seems foolhardy to introduce another file transferring tool without a really good reason why. Moreover it lets me very easily tell exactly what revision is sitting on the server, and also causes me to pause before I push. It is also really easy to integrate git, through hooks, with a continuous integration setup.
Many of us like to have some recent past releases sitting on the production servers in order to make instant rollbacks if we discover a bug after the code has been deployed. Git provides that for free.
This is almost exactly what Capistrano does. What's hilarious is that this entire thread reads like a discussion that could have lead to the development of Capistrano.
* Capistrano can deploy via git (it uses export)
* Capistrano keeps a configurable number of releases around in case you need to rollback
* Capistrano provides an ordered task system with before/after hooks at every one of its pre-defined tasks
* Capistrano can be just as lightweight as using git to deploy:
cap deploy
git push
Two additional characters!
* You may not need all the stuff that Capistrano provides today, but as your project grows, you will need it. Why waste your time with a compromised deployment hack when better tools are available and easy to use?
As I said you shouldn't use this for critical services, but it works great for quick hacking. It's also very convenient for non-production (i.e. testing/staging) machines, to automate continuous integration.
Puppet does add some significant runtime lag unless you add in other orchestration tools. Additionally, if operations include cross-host dependencies, Puppet is probably the wrong tool to use.
I'm sad that so few people seem to build native OS packages for deployments. My build system creates a release package and sticks it in an apt repo, then puppet installs latest version of package when it runs.
Second that. I think it only comes with maturity. If there's a small start-up with a couple of Ubuntu servers, they'd start with just syncing their source repo and running 'make && make install' by hand via ssh.
After a while, if they use Python, Node or Ruby, maybe they start generating language-specific packages (pip, virtualenv, etc.).
The next phase is when the need for upgrade/downgrade transactions comes about, along with handling transitive dependencies (my package needs another package, which in turn requires a third package to be upgraded, which is a base system package). Now 'make && make install' looks silly and messes up the file system with leftover files. Deployment ssh scripts become a tangled mess and so on. Then slowly they think: "It would be cool if there was a system that could transitively handle package versions, and maybe provide transactions with pre- and post-install scripts."
If they are lucky, someone will point them to apt or rpms or they'll write a broken version of those things from scratch.
That would make your deployment system very OS-dependent. All other things being equal, I would generally prefer a completely platform-agnostic mechanism over something platform-specific.
Late reply, sorry. I went with reprepro to maintain my apt repo.
It's nice, but has one major drawback which may be a showstopper, depending on your use case. It can only keep one version of a package at a time in each distribution.
I use git to deploy about 30 sites and have found it to be a really useful workflow. It's particularly useful over SSH when using key-based authentication.
For my post-receive hook, I always add a tag to mark a deployment:
git tag deployment-`date +'%Y%m%d%H%M%S'`
You can see all past deployments with a git log:
git log prod/master --oneline --decorate
On all my developer machines, I have them add a `git-deploy` script to their $PATH, which looks a little something like:
#!/bin/bash
git push $1 +HEAD:master
git fetch $1
You can just run `git deploy prod` (assuming your deployment repository is named 'prod').
The extra `git fetch` will pull down the auto-generated tags so you can see them locally w/a simple `git tag`
Edit: Forgot to mention, that since git ships w/a bash shell for Windows, most of this should work for Windows-based dev setups as well.
You might be interested in checking out git-deploy. It's a tool we wrote to manage tag creation and completely pluggable rollouts/rollbacks with sync hooks you write: https://github.com/git-deploy/git-deploy
It's basically a more advanced version of what you're doing.
Git is an SCM and should not be on production systems.
Instead you should have a build server which builds a package (rpm, deb, tarball?) which is then used to deploy across the production environments.
You should also not compile JS/CSS etc. on production systems; that is what the build server is for.
Anything installed on a production system should be 'required' for the app to actually run.
-
That said, you can use Capistrano (and other tools like it) to update 'demo' environments and dev environments (with git); however, the actual TEST and STAGING environments should mirror the PROD environment (packaging).
I'm planning on writing about this in more depth later, but this is essentially the route every cloud hosting company is taking right now, and I think it's bad to only allow that kind of deployment.
Don't get me wrong, I love Capistrano, git deploy hooks, ruby gems that do deploys (heroku), but most cloud hosts are only offering this mechanism to deploy apps. FTP became popular because of the ease of use for designers and webmasters. You don't always need to deploy your entire application for simple changes. Another big one is the ajax file editor in the browser.
For trivial changes a simple file change would suffice. When you do an entire deploy for app like this, depending on your dependencies and payload, it could take a long time. What if you had the wrong price and need to make a change immediately? Of course maybe now there are multiple environments which play a factor too.
I do realize that was before we had multiple web servers running the app and that is part of the reason, but there are still ways to make it work (file mounts).
I'm hoping for more deployment options in the future, and that cloud hosts realize the need is still there from traditional hosting.
My thought is that could work together with other tools. Say you do your next deploy with heroku, it says there are unsaved changes and asks you to first do a 'heroku pull' to pull the changes and you can then commit them or you could blow them away with 'heroku push -f'.
I'd say 80% of the sites online are managed by one or two people. They may need the scalability of the cloud for traffic bursts, but we can't say cloud is the future if all of our existing tools and workflows are completely broken.
For the past year I was building a cloud competitor to Heroku. We had a traditional host (like HostGator) and we talked to those customers about moving to the new cloud infrastructure and all the benefits to why. Most people said it was too complicated and were stuck in their work flows (FTP and file managers). Which is why I wanted to chime in with FTP is not dead.
I'm trying git to get my feet wet with it but man this looks way complicated to me. I didn't even know about "git config" or even that there was a checkout command for git. I usually cd into the directory I want to turn into a repo and use "git init" then after changing files, run gitk or if in Eclipse I use EGit. I'm not even sure what happens in gitk when I do a commit, is it that long "push master origin" stuff? Does that mean master is my local repo and master origin is like the overall master? I guess with SVN it's clear even at the command line but with git there are just so _many_ options. Then there's custom scripts to make all this work? I'll stick with scp or rsync for distribution for now. The author might want to look into Hudson or Jenkins, they work wonders.
git push/pull is easy, but there's no getting around the fact that Git is a distributed version control tool, not deployment software. Using Git for deployment is probably fine for simple deployments where you're just getting a bunch of static files onto a single box, but as soon as you stray into the realm of non-trivial web application deployments then things change. Factors like database migration, dev/prod environment parity, dynamically spinning up new server instances, continuous integration etc. mean that the act of simply copying your files becomes the least of your worries. Sure, Git will play an important part in getting a snapshot of the codebase from a dev's workstation into the deployment flow, but that's where it ends and tools like Chef and Puppet take over.
My thoughts exactly. This workflow is nothing more than hacking Git using post-commit hooks to do something it was never built for.
If you want to deploy using Git then the smart thing to do is to use one of the many continuous integration tools out there that were built specifically for this kind of workflow. I use TeamCity to run my tests and to build/deploy my website whenever I push to my default branch. This works really well for some of my sites, and although I'm looking for a way to refine this so I can also deploy database changes between local/staging/web servers I can't think of a better way of doing this.
The issue here is that there are still numerous web hosts that don't grant you SSH access, so you're not able to set up a git repository there anyway and are still stuck with FTP. Hopefully this will go away soon, or at least move more in the direction of Heroku and the likes.
Take Hetzner (large German provider) as an example - while they do offer managed servers and root servers, those are much more expensive. I'm not advocating the use of such products, but merely pointing out that they're still around a lot.
I came up with deliver https://github.com/gerhard/deliver to address this very problem. It's a bash utility that automates git-based deploys and comes with pre-built strategies for the most common deployment scenarios: generated sites (think Jekyll), shared (WordPress, PHP etc.), ruby, node-js, S3 etc. I did a talk on it at my London Ruby User Group in March: https://speakerdeck.com/gerhardlazu/deliver
That's very cool. I really dig the minimalism of it. I'm currently using a system that involves about the same amount of config and leans on Capistrano for the heavy lifting, but I think I might investigate using deliver for my next project.
I'm not sure I like the approach of serving files from your repository. I'm not sure about how git works in detail, but are repository updates even atomic?
When I deploy my website, I use a different approach: My webroot is just a symlink. My deployment script exports the repository to a directory with a unique name for every commit. When the export succeeds, the symlink is updated to point to the new directory.
The advantage: Changing to the new version is instantaneous. If something should go wrong, I can immediately revert by changing the symlink back to the old dir.
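A minimal sketch of that flow (paths hypothetical); ln -sfn swaps the symlink in a single step:
sha=$(git rev-parse --short HEAD)
mkdir -p /var/www/releases/$sha
git archive HEAD | tar -x -C /var/www/releases/$sha
ln -sfn /var/www/releases/$sha /var/www/current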
No, they are not. During the push, there will be a short window in which some parts of the website will be operating on new code while others will be operating on old code. If many components are in play (i.e. using libraries) you may end up breaking things if a new request comes in at the right time.
You can combine this approach with updating the code with a git pull to get all the benefits of git. Capistrano, the conventional Ruby deploy script, does this by default.
Here's one reason not to use git for deploy - if you don't want your source code on production servers where clients can access it, or where it could be found by hackers.
I work on a closed source system, so we will never deploy our code via git and then build on the server. So, in this case build locally (or on a build server), and rsync from there using deploy scripts.
I'm surprised Fabric [http://fabfile.org] has not been mentioned in this thread. I'm not a Python developer but I love Fabric specifically for a tool to handle deploying code. If you feel like a Git deployment is lacking, be sure to check out Fabric, especially for multi-server deploys.
Fabric is great for running commands. But what a lot of programmers do not realize is that deployment is a process. That process involves running automated tests, packaging all of the assets, using package managers to upgrade/downgrade based on versioned release schemes, database migration, post-upgrade scripts, and quick/painless rollback (downgrade) if there's a major blocking bug for users.
FTP still works. FTP isn't 'broken.' This 'replacement' adds huge unnecessary complexity and doesn't work on nearly as many servers as FTP does (which is all of the servers).
And yes, I have deployed with git, so I'm not speaking out of complete backwards ignorance. I can still see a use for both.
> two connections
Not a problem if you're only occasionally updating one site at a time, though I'd agree it doesn't scale up the way git would.
> NAT issues, no standard
valid points. I've never had issues with either but I don't work on the kind of projects a lot of HN users do, so I really tend to only care if it stripped the line breaks or not.
It doesn't work on "all of the servers" either. All of my servers have ssh/scp available and will never have ftp.
I stand corrected.
It is good to actually see the arguments against FTP at least.
Yeah, Python and other technologies are also from the '90s or even older - what is your point? Are proven technologies that work belittled to promote today's agenda?
I would tend to dismiss this kind of article and suggestion even if it is OK - only because it promotes by appealing to fashion.
A beautiful thing that this demonstrates is that no matter how old a concept is, there's always room to explain it clearly so that those who didn't already understand it have the benefit of finally being enlightened. I learned this many years ago as an author, thinking that my most basic ideas weren't worth writing down. It turns out that what's obvious to one person isn't obvious to everyone else.
Thanks for the reminder and for the clear explanation of how git deploy might work!
In the Drupal community, we're all stumbling over each other to find the best Git deployment strategy. I'm surprised that Git deployment would still be news for anyone. I don't know who this article can reach that's not already competent enough to be using Git (at least for dev).
On the other hand, if your site is just static (HTML/JS) files, I think it makes great sense to use Git to deploy, as there is no configuration to worry about.
Was looking for an email address for the author, and failed. Anyway, just wanted to comment that there's no publish date attached to the article, or at least not an obvious one. I have no idea when it was authored.
And here I was thinking that this was obvious and most people used something like Capistrano... It's shocking to me the amount of people not using some sort of SCM-based deployment method. =P
I'm more surprised that people aren't using something more reliable and proven, like the package manager their system already uses. Why introduce another piece?
Seconding git-ftp. I use it almost daily and find it excellent for deploying on shared hosting where ssh either isn't provided or is a hassle to set up. git-ftp is an awesome, awesome tool.
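If you haven't tried it, the basic flow looks roughly like this (double-check the project's README for the exact options):
git config git-ftp.url "ftp://example.com/public_html"
git config git-ftp.user "ftpuser"
git ftp init   # first run: upload all tracked files
git ftp push   # later runs: upload only what changed since the last push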
No, because it needs to be checked out (that's the deployment). For a usual git server, yes, --bare is the way to go. In this case you could alternatively use --bare in a ~/repos/example.com dir and set the GIT_WORK_TREE environment variable to ~/www/example.com for the checkout.
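Roughly, with the paths from that example (the hook line is the usual sketch):
git init --bare ~/repos/example.com
# then hooks/post-receive in that repo contains something like:
GIT_WORK_TREE=~/www/example.com git checkout -f master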