Deploy a website using git in Ubuntu (kramerapps.com)



Don't use this approach; your http-accessible directory does not need a git repo living inside it.

Just have your script export the repo contents with `git archive` instead, which exports only the files you committed, without any git machinery.
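
A post-receive hook sketch along those lines (the deploy path and branch name are assumptions, adjust for your own layout):

    #!/bin/sh
    # hooks/post-receive in the bare repo: export the committed files,
    # so nothing git-related ever lands in the web root
    DEPLOY_DIR=/srv/www/helloworld
    git archive --format=tar master | tar -x -C "$DEPLOY_DIR"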


Be wary of having .git accessible to the outside world. If it is reachable, whether through default web server configuration or simple oversight, it can be rather easy to fetch the complete (or, in the case of packed refs, nearly complete) history of your application server by walking the git objects backwards from refs/heads/*. This could reveal database configuration details or other particularly interesting things.

The server need not even be set up for git over http (via git-update-server-info or what have you) for this to be an issue.
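
If you do keep .git in the document root, a web server rule along these lines (an nginx sketch; adapt to whatever server you run) at least keeps it from being served:

    # refuse to serve anything under .git (or .gitignore, etc.)
    location ~ /\.git {
        return 404;
    }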


Is git push atomic? I'm reaching the point with a PHP application where enough people are using it at any one time that every time I push an update (using lftp --mirror over sftp) someone gets caught in the crossfire.

My next step is deploying to a temporary folder and swapping symlinks around, but if git/hg can do an atomic update that would be much simpler.
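
A sketch of the symlink swap (paths are made up); the final rename is a single atomic operation as long as everything lives on the same filesystem:

    # upload the new release alongside the old ones
    rsync -a build/ /srv/www/releases/20130115/

    # point a scratch symlink at it, then rename it over the live one
    ln -sfn /srv/www/releases/20130115 /srv/www/current.tmp
    mv -T /srv/www/current.tmp /srv/www/current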


I do not believe that checking out all files in a repository is supposed to be a single atomic action in either git or hg.


I didn't think so. Thanks for the clarification.


If you have APC (which you should), you can turn off apc.stat to make PHP ignore changes to actual files and keep serving scripts from the (now stale) opcode cache. Depending on the structure of your app, this might be good enough to keep things running smoothly during the second or two that it takes for git to update the working tree. When you're done, atomically clear the opcode cache. Turning off apc.stat also helps your pages load faster.
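
Roughly (apc.stat and apc_clear_cache() are the real APC names; check them against your APC version):

    ; php.ini sketch: stop APC from stat()ing scripts on every request,
    ; so it keeps serving the cached opcodes even while files change on disk
    apc.stat = 0

Then have your deploy script call apc_clear_cache() once the working tree is fully updated.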


No. In fact, git doesn't even swap individual files atomically (which is what I had expected it to do): during updates the files momentarily disappear, which caused serious issues for my site and made me move back to rsync deployment recently (which has the other security advantage of not keeping the entire history of the website on the edge servers).


Using symlinks is the way Capistrano does it and has the benefit of being both atomic and easy/quick to revert if something goes wrong.

There's no way git checkout can be atomic.


Please do not use the suid binary approach; it is easily avoidable in various ways. Check the --touch-reload option of uWSGI: you can create an empty file (writable by the user making commits/pushes) and use it to signal reloading.
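
Something like this (paths are assumptions; --touch-reload is the uWSGI option mentioned above):

    # start uWSGI watching an empty file for modification
    uwsgi --ini /etc/uwsgi/helloworld.ini --touch-reload /srv/www/helloworld/reload

    # at the end of the deploy hook, trigger a graceful reload
    touch /srv/www/helloworld/reload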


Great article!

My apologies if this is too off-topic, but it's interesting to see everyone's virtualenv setup. I normally have my project/site/app in my environment folder, so it looks something like:

    projectenv/
      bin/
      include/
      lib/
      project/

but I've also seen (as is the case in the OP's example):

    project/
      env/
      __init__.py
      database.py
      ...

or even having a separate folder for all environments:

    envs/
      env1/
      env2/
      ...
    project1/
      ...
    project2/
      ...

Would anyone say there's a "preferred" setup?


I don't think there's a "right way", but I prefer the second format. In the first format, it's difficult to delete and recreate the entire virtualenv without also deleting your project files. And in the third, your environments are unnecessarily separated from their respective projects.


For the love of technology, stop with the titles that say do XYZ in Ubuntu. Please use Linux instead.

I think everyone knows how to use a package manager and find the equivalents for 'apt-get install'.


Different distros have different filesystem layouts, different package managers, even different package names. It's useful to have one worked example. Especially when Ubuntu works hard at being accessible to beginners, it makes sense for that example to be Ubuntu (since the users most likely to need hand-holding are on Ubuntu).


Having a bare repository on the same server as a "deployment" repository, where the repositories are configured to

1) automatically pull changes into the deployment repository from the bare repository

2) automatically push changes into the bare repository from the deployment repository

feels like a hack. Maybe git needs a concept somewhere between a normal and a bare repository, where the files are checked out but aren't treated like a normal work tree, i.e. you can't commit anything, and on a merge any local changes are thrown out?
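
For reference, the hack amounts to a post-update hook in the bare repo along these lines (paths are assumptions):

    #!/bin/sh
    # hooks/post-update in /srv/git/helloworld.git:
    # pull the freshly pushed changes into the checked-out deployment copy
    unset GIT_DIR               # the hook runs with GIT_DIR pointing at the bare repo
    cd /srv/www/helloworld || exit 1
    git pull hub master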


Use sudo rather than making a single-purpose setuid program. Also note well the .git, .gitignore accessibility as DHowett mentions.
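
e.g. a sudoers sketch (the user, command, and file name are made up):

    # /etc/sudoers.d/deploy: let the git user reload the app server, nothing else
    git ALL=(root) NOPASSWD: /usr/sbin/service uwsgi reload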


I do something like this and like the workflow quite a bit. I copied it from this guy: http://joemaller.com/990/a-web-focused-git-workflow/

In the comments there's also a nice setup for running a staging server from the same hub repo.


OK, here's a question.

What if the "bare" git repo (/srv/git/helloworld.git) belongs to a user "git" and the non-bare setup (/srv/www/helloworld) belongs to user "conradev"? This (different users) setup is what one would prefer when multiple people are working on the project.

git pull hub master would fail in that case, as it would be executed by the git user, so you'll probably have a hard time figuring out permissions. What are your suggestions now?


This could be solved reasonably well with ACLs. Give the git user selective write to the tree rooted at /srv/www/helloworld.
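
For instance (paths as in the article; the setfacl flags are standard):

    # give the git user write access to the deployment tree
    setfacl -R -m u:git:rwx /srv/www/helloworld
    # and make it the default ACL so files created later inherit it
    setfacl -R -d -m u:git:rwx /srv/www/helloworld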


But the plan/aim is to make it all very simple to set up (even for noobs :)). C'mon, ACL knowledge is not widespread.

So we need to keep things simpler!


Sometimes there's a limit to how simple something can be. And sometimes you don't want to/shouldn't hide the complications.

I think that introducing users to ACLs, permissions and security is appropriate at this step. You can have a script/process in place that takes care of this, but you should explain what's going on and why. Because if there's a security issue, they'll at least have some kind of idea of what happened.


A comment and a question:

1. Unless you are going to make commits or any changes in the deployment directory (not recommended), you don't need a git repository there; 'git checkout -f' with GIT_WORK_TREE set should work as a post-receive hook.

2. Would your post-receive hook deploy for a push to any branch, or only to a specific branch?
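
A minimal post-receive sketch of point 1 with the branch check from point 2 (paths and branch name are assumptions):

    #!/bin/sh
    # hooks/post-receive in the bare repo: deploy only pushes to master
    while read oldrev newrev refname; do
        if [ "$refname" = "refs/heads/master" ]; then
            GIT_WORK_TREE=/srv/www/helloworld git checkout -f master
        fi
    done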




