Really, it's the best way to deal with releases. People develop in their own branches of the code, and then when features are ready, we merge back into the trunk.
We have a machine (actually a VM) watch the repository for checkins; when one appears, it checks the code out from svn (CVS should be similar), then builds and uploads.
You could run JUnit tests, etc. at the same point, to ensure that everything is in sync.
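The watch-build-upload loop above can be sketched as a small cron-driven script. Everything concrete here is an assumption for illustration: the repository URL, the paths, and the use of ant and rsync as the build and upload steps.

```shell
#!/bin/sh
# Sketch of a poll-and-deploy loop. REPO_URL, BUILD_DIR, and the build/upload
# commands are hypothetical placeholders -- substitute your own.
REPO_URL="http://svn.example.com/project/trunk"
BUILD_DIR="/var/builds/project"
STATE_FILE="/var/builds/project.last-rev"

# Extract the HEAD revision number from standard `svn info` output.
head_revision() {
    svn info "$REPO_URL" | awk '/^Revision:/ { print $2 }'
}

# Last revision we built, or 0 if we have never built.
last_built() {
    cat "$STATE_FILE" 2>/dev/null || echo 0
}

poll_once() {
    head=$(head_revision)
    if [ "$head" -gt "$(last_built)" ]; then
        svn checkout -r "$head" "$REPO_URL" "$BUILD_DIR" &&
        (cd "$BUILD_DIR" && ant test && ant dist) &&    # run JUnit tests, then build
        rsync -az "$BUILD_DIR/dist/" deploy@www.example.com:/home/www/ &&
        echo "$head" > "$STATE_FILE"                    # remember what we shipped
    fi
}

# Run from cron, e.g.:  */5 * * * * /usr/local/bin/poll-build.sh
```

The state file is what makes this safe to run repeatedly: a checkin is only built once, and a failed build or upload leaves the state untouched so the next run retries.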
Thanks, and the article is great. With many dynamic/VM languages, though, you usually don't need a build server, only some sort of testing server.
And I think uploading to a production web server should be done by a human operator rather than automatically, once the team makes sure the update works.
I use Subversion. Way easier than CVS. It was created by the guys that created CVS, but they realized the limitations of CVS, so they built something better and easier to use.
(1) Your htdocs directory is a CVS directory and you sync your site (cvs up) right there, under htdocs. Development/testing can be done elsewhere.
(2) CVS+FTP: you upload your files to htdocs manually; your local copy of the site is in sync with CVS. How about concurrent development? There should be only one "uploader" operator then.
(3) You don't have a local copy, everything is on the server (is this possible? where is the testing site?).
+1 all the Subversion comments - I would never use CVS for new development nowadays.
Anyways, your question is really about directory layouts, so here's what I do:
I have everything in SVN, under a relatively standard webapp directory structure (e.g. /src for dynamic code, /static for static content, /i18n for localization files). My webroot is set to /home/www, with one directory per virtual host. Inside these directories are symlinks only. So I have /home/www/www.bootstrapacitor.com/static/images symlinked to /home/bootstrapacitor/svn/static/images, etc. For uploaded data, I have directories in /var/, so /home/www/www.bootstrapacitor.com/static/logos is symlinked to /var/bootstrapacitor/logos.
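That layout can be reproduced as a few shell commands. The paths are taken from the description above, but placed under a scratch root so the sketch is safe to run anywhere:

```shell
# Recreate the symlink layout described above under a scratch root
# ($ROOT stands in for "/"; the real paths are the same minus $ROOT).
ROOT=$(mktemp -d)

# SVN working copy, plus upload storage kept outside the checkout.
mkdir -p "$ROOT/home/bootstrapacitor/svn/static/images"
mkdir -p "$ROOT/var/bootstrapacitor/logos"

# Webroot: one directory per virtual host, containing only symlinks.
mkdir -p "$ROOT/home/www/www.bootstrapacitor.com/static"
ln -s "$ROOT/home/bootstrapacitor/svn/static/images" \
      "$ROOT/home/www/www.bootstrapacitor.com/static/images"
ln -s "$ROOT/var/bootstrapacitor/logos" \
      "$ROOT/home/www/www.bootstrapacitor.com/static/logos"
```

The point of the indirection is that a fresh checkout never clobbers uploaded data: svn only ever touches the working copy, while /var holds everything users contributed.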
When I bring new code over, I just do a Subversion checkout, and everything is automatically in its proper place. Eventually I'm planning to write a web script that'll auto-deploy to staging and production, ensure that any necessary symlinks are in place, and check for database compatibility, but I haven't gotten around to that yet. I guess in the real long term I'll be ditching all webserver file storage for a distributed filesystem like Mogile or Akamai, but that's way off in the future.
Thanks for the insight, I never thought of symlinks on the production server.
My question though was whether my entire webroot should be a CVS check-out directory or whether it should be uploaded from, say, a build machine. Both ways look OK, but there may be advantages to one of these methods that I'm not aware of. That was my question.
I always used the upload scheme, which worked fine when I was alone or with an ultra-small team. For the upload method though there should be only one "uploader", and with small teams it's usually the team leader.
There's no practical difference between checking out directly onto the CVS server vs. checking out locally and then uploading. You get the same files either way, it's just that they're sent via CVS one way and FTP the other. I suppose that you may not want your source code travelling in the clear over the web, but you should be able to use SFTP or tunnel CVS over SSH. I know you can do it with SVN - I connect to our SVN server via HTTPS.
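The encrypted transports mentioned here look roughly like the following. Hostnames, usernames, and repository paths are all hypothetical:

```shell
# CVS over SSH: point CVS_RSH at ssh and use an :ext: repository root.
export CVS_RSH=ssh
CVSROOT=":ext:user@cvs.example.com:/var/cvsroot"
# cvs -d "$CVSROOT" checkout mysite

# Subversion over SSH (needs only sshd on the server):
# svn checkout svn+ssh://user@svn.example.com/var/svnroot/mysite/trunk

# Subversion over HTTPS (mod_dav_svn behind Apache):
# svn checkout https://svn.example.com/repos/mysite/trunk
```

In all three cases the wire protocol is encrypted end to end, so source never travels in the clear even when the repository is reachable over the public internet.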
If you do make your web root a CVS directory, be sure to block access to CVS metadata. I basically whitelist a few known extensions (.php, .html, .css, .js, .png, .jpeg, .gif, and .swf) and block access to anything else. You don't want to give random websurfers access to your CVSROOT.
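A whitelist like that can be expressed in Apache 2.2-era configuration. This is a sketch, not a drop-in config; the extension list is copied from the comment above, and you'd adapt it to your own setup:

```apache
# Deny everything by default, then allow only known extensions, so CVS/
# and CVSROOT/ metadata directories are never served to the web.
<FilesMatch ".*">
    Order allow,deny
    Deny from all
</FilesMatch>
<FilesMatch "\.(php|html|css|js|png|jpeg|gif|swf)$">
    Order deny,allow
    Allow from all
</FilesMatch>
```

A default-deny whitelist is safer than blacklisting "CVS" by name, since it also protects editor backup files, .htpasswd, and anything else that leaks into the checkout.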
Sure, I can't remember the last time I used an unencrypted protocol with production servers. CVS works over SSH, and Dreamweaver now supports SFTP too.
One last question: does Subversion require a daemon on the server side, or can it also work at the shell level like CVS?
UI-wise, svn is a shell command like cvs. TortoiseSVN provides a Windows shell extension (as TortoiseCVS does for CVS), and I think it's supported out of the box in Eclipse and some other IDEs.
As for what's going on on the server - SVN has a bunch of possible access protocols. There's a native svnserve protocol that IIRC runs a daemon and accepts connections over the net. There's a local filesystem module. There's also a WebDAV module that lets you access it through Apache. That's what I use: I have a subdomain set up that virtual-hosts through Apache, with Subversion piped through SSL and all other content blocked.
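The Apache/WebDAV setup described here is configured with mod_dav_svn. A minimal sketch of such a virtual host follows; the `DAV svn` and `SVNPath` directives are real mod_dav_svn directives, while the hostname, paths, and auth file are hypothetical examples:

```apache
# Hypothetical SSL virtual host serving one repository via mod_dav_svn.
<VirtualHost *:443>
    ServerName svn.example.com
    SSLEngine on
    <Location /repos>
        DAV svn
        SVNPath /var/svnroot
        AuthType Basic
        AuthName "Subversion repository"
        AuthUserFile /etc/apache2/svn.htpasswd
        Require valid-user
    </Location>
</VirtualHost>
```

Clients then check out with `svn checkout https://svn.example.com/repos/trunk`; everything rides over HTTPS, so no extra daemon beyond Apache is needed.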
By "shell-level" I meant that a cvs client, be it a GUI or a shell command, can connect to a server that doesn't run any special daemons beyond sshd or telnetd. Essentially, the cvs client logs in via telnet or ssh as a user and works at the filesystem level. The fewer components you need to run something, the better.
I looked at their web site but still don't get it - does SVN require anything other than sshd/telnetd on the remote server? But never mind, I'll figure it out.
Just keep a repository as your production directory and update that to the preferred tag (say, your latest stable release). You can work off a branch of the trunk for development and testing.
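Pinning the production docroot to a tag looks roughly like this. The repository URL, tag names, and docroot path are hypothetical:

```shell
# Sketch: production docroot as a working copy of the stable release tag.
REPO="http://svn.example.com/repos/mysite"
DOCROOT="/home/www/htdocs"

# One-time setup:  svn checkout "$REPO/tags/1.2" "$DOCROOT"
# Each release is then a single switch on the server:
deploy_tag() {
    svn switch "$REPO/tags/$1" "$DOCROOT"
}
# e.g.  deploy_tag 1.3
```

`svn switch` only transfers the differences between the two tags, and rolling back a bad release is just switching to the previous tag.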
It's not exactly identical, but take a look at http://www.joelonsoftware.com/articles/fog0000000023.html
Joel points out some good reasons why build machines can really help.
-Colin