If the gauntlet of unit, integration, and acceptance tests (in the master branch, naturally) all pass, the code is deployed to a staging server. Then some tests run against staging. If those tests all pass, the artifacts for the new version of the site are copied into a new directory (side by side with the current one) on the server. Finally, a PowerShell script tells IIS to serve from the new directory.
End result? Deployments are zero-downtime non-events that happen multiple times daily. Rollbacks are trivial (and rare). Any code that's not yet ready for prime time can be checked into any other git branch.
Database changes? I have a tiny bit of code, called from Application_OnStart, that checks whether it needs to run any CREATE TABLE or ALTER TABLE statements.
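The pattern being described — an idempotent schema check at application start — might look something like this sketch. (Python with SQLite standing in for the C#/SQL Server original; the version table, table names, and migration list are all invented for illustration.)

```python
import sqlite3

def ensure_schema(conn):
    """Run at application start: apply any schema changes not yet applied.
    Returns the schema version found before any migrations ran."""
    conn.execute("CREATE TABLE IF NOT EXISTS schema_version (version INTEGER)")
    row = conn.execute("SELECT MAX(version) FROM schema_version").fetchone()
    current = row[0] or 0

    # Ordered migrations; only those newer than `current` run, so calling
    # this on every startup is safe.
    migrations = {
        1: "CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)",
        2: "ALTER TABLE users ADD COLUMN email TEXT",
    }
    for version in sorted(v for v in migrations if v > current):
        conn.execute(migrations[version])
        conn.execute("INSERT INTO schema_version (version) VALUES (?)", (version,))
    conn.commit()
    return current

conn = sqlite3.connect(":memory:")
ensure_schema(conn)  # first start: applies both migrations
ensure_schema(conn)  # every later start: no-op
```

The point of the design is that deployment never needs a separate migration step — the app upgrades its own schema the first time the new code runs.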
Sure, I had to build all of this myself, but it's all crazy simple, reliable, and does exactly what the project needs.
Maybe someone could make some product to handle all of this, but the flexibility of linking together the best tools for the job wins for now.
Also, it's just easy.
It's absolutely possible to do this, and do it well (the company I worked for previously did it for years before finally moving to Capistrano), but the point is that every dev team in every company is spending precious time figuring out how to do this, writing the code, testing it, debugging it, etc. When the developer who wrote it gets hit by a bus, someone has to go through his/her code and learn it so that it can be maintained.
The alternative would be to have a standardized tool (like Capistrano) that is commonly used.
Really, the only gap I had to fill between the high-quality parts I'm already using (NUnit, TeamCity, etc.) was a tiny script to copy files and reconfigure IIS:
1. Make a dated folder and copy the new build to the folder on all servers.
2. Point IIS to the folder on all servers.
Just in case, a backup PowerShell script is kept. It rolls back by simply pointing IIS to the previous dated folder of the build.
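Those two steps plus the rollback bookkeeping can be sketched roughly as follows — a Python stand-in, with a `current` symlink swap playing the role of the PowerShell script that repoints the IIS home directory; all paths are invented for the sketch:

```python
import os, shutil, tempfile
from datetime import datetime

# Assumed layout: every build lands in its own dated folder under the
# site root; "current" is what the web server serves from.
site = tempfile.mkdtemp()
build = tempfile.mkdtemp()
with open(os.path.join(build, "index.html"), "w") as f:
    f.write("v2")

# 1. Make a dated folder and copy the new build into it.
stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
release = os.path.join(site, stamp)
shutil.copytree(build, release)

# Remember the previous release so the backup script can roll back to it.
current = os.path.join(site, "current")
if os.path.islink(current):
    with open(os.path.join(site, "previous"), "w") as f:
        f.write(os.readlink(current))

# 2. Point the server at the new folder. The swap is atomic: build a new
#    symlink beside the old one, then rename it into place.
tmp = current + ".tmp"
os.symlink(release, tmp)
os.replace(tmp, current)
```

Because the old dated folders are never touched, a rollback is just repointing at the path recorded in `previous`.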
Not sure why this is such an issue. Solutions can get complicated, and I don't think any tool can work properly in all scenarios, especially when downtime is a concern. That's why scripts exist.
Currently my solution is on Amazon, and if there were a way, I'd integrate the load balancer with the scripts so that while a server is being updated the load balancer doesn't send requests to it. However, IIS is fast enough that this is just part of a wishlist.
For QA builds, I manually trigger a CC.NET job that builds and deploys, but with MSDeploy it overwrites what's on the QA server, providing no rollback capability.
For production builds, I trigger a CC.NET project that just builds, then I manually push the bits to production: copying files from the build server to the production servers, setting up side-by-side versioned folders, and updating the IIS home directory for the web site. I'd like to automate this, but transport is my biggest roadblock.
I'm curious: how does the new version get "deployed" to a staging server, and how is it "copied into a new directory" on the production server (UNC, FTP, ...)?
Where does it copy the files from? I set up a TeamCity artifact dependency, so the "push to production" script doesn't run until the latest known-good artifacts are downloaded from the TeamCity server.
* Subversion for source control
* Jenkins for building/testing/deployment coordination (each project has a Build, Push to Test Server, Integration Test Suite, and Push to Production set of jobs)
* NSIS to build installers
* Migrator.NET for database schema migration and rollback
* PsTools + NAnt + Robocopy + misc. utils to deploy to remote servers
1. Pull code from SVN
2. Build the code
3. Run unit tests
4. Build the database migrations
5. Build the installer
6. Copy the installer to the test server
7. Use PsExec to run the installer
8. Copy back the install log to the build server
9. Build the integration test suite
10. Run integration tests (usually Selenium + NUnit)
11. Copy the installer to the production server
Sure, it's not an "integrated" solution, but I don't really see how I'd have needed less control or granularity if I were building and deploying on Linux and didn't use Ruby.
But you're using that setup to deploy your code - this is the missing piece to me. Why would you push from your Build Server? It has nothing to do with deployment (in concept).
Moreover you say that "Rollbacks are trivial" - which is hardly the reality for most people. How do you rollback a push from your BuildServer? Manually?
How do you alter your DB Schema? What if your deploy screws up your DB Schema? I guess what I'm asking for is a bit more detail here - you're sort of waving off everything I posted about (and experienced over the last 12 years)...
Your build server has everything to do with deployment. Your build server should be the only place production builds ever come from. What you seem to be suggesting is replicating a build server for deployment. To me that seems like a violation of SoC and DRY -- but applied to infrastructure rather than code.
Once you break out staging from production, which you should do anyways, I don't think there's any issue left from the things you listed. And the additional benefit is I don't have to learn another framework like Capistrano.
With that said, I do think deploying with ASP.NET is a pain, but not for the reasons you mentioned -- rather because getting the configurations working in all the right places is painful, IMO.
It does work, it is convenient, but that's not a Build Server's function in concept.
Which things RE deployment did I leave out? I was considering the config as part of the coding...
You could cut the build machine out by building locally and pushing the binaries out via git, but that's just really an implementation detail.
Rob is nonetheless correct that deployment in ASP.NET is poor. I think you explained one of the reasons why: static languages. It seems like you've convinced yourself that something is convenient because you feel there's really no other solution. I think you're right and wrong... You're right because it isn't going to change, so deal with it... You're wrong because there are alternatives, which have changed.
I do tend to agree that deployment should center on interaction with the source control repository and doesn't require a build server; I just like the added protection of the build server being the one to do that interaction when it comes to releasing code.
Why wouldn't you?
Seriously. The role of what we call "build servers" has been dramatically expanding. When I started in this business, nightly automated builds were the state of the art. Then we went to automated build systems that would build whenever new code was checked in. Then the build systems would run tests as part of the build. Then the build systems would spit out reports of automated test coverage so you could know if you were missing something you thought you had covered.
Also, I wouldn't ever deploy production code that wasn't built by the build system. So I'm having trouble with the "nothing to do with deployment" argument. It has everything to do with deployment.
I don't consider my TeamCity install a "build system." I consider "building the code" a subset of the overall continuous deployment system that's powered by TeamCity (which is amazing, cross-platform, and free for small teams).
The end point of all of this is that instead of releasing every few months or weeks, I can release every few hours or minutes. Basically, every single change that isn't demonstrably broken gets released without us even thinking about it. It's transformative.
What's the safest change you can make to a stable production system? The smallest change possible.
When do your customers want to get their hands on a bug fix or a new feature? Right fucking now.
If something does go wrong, how many changesets do you want to go through to try to find the problem? As few as possible. One is ideal.
How many new features and fixes do you want to have to roll back if something goes wrong? As few as possible. One is ideal.
Some specific questions:
Rollbacks are done by just re-pointing IIS manually. I've only had to do it once. Alternatively, I could have backed out the change and pushed that through the continuous deployment system.
As far as DB schema changes go... Again, the safest change to make is the smallest change possible. Generally, I make DB changes that are backwards compatible (adding new tables, adding columns to existing tables), so if I need to roll back the code, everything still works. If you're tightly coupled to a lot of logic in stored procedures (I'm not), you're going to need to make their changes backwards compatible (generally by using default parameters) or embed a version number in the procedure name (e.g. "InsertEntityV2", etc.) to let the two versions of the proc live side by side.
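The backwards-compatible idea in miniature — a sketch in Python with SQLite (table and column names invented): after a purely additive change, queries written against the old schema keep working, so rolling back the code requires no schema rollback:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE entity (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO entity (name) VALUES ('widget')")

# The "old" code only knows about id and name.
def old_code_read():
    return conn.execute("SELECT id, name FROM entity").fetchall()

# Additive change: a new column with a default. Existing rows get the
# default, and old_code_read() is untouched -- the previous binaries can
# be rolled back to without touching the schema.
conn.execute("ALTER TABLE entity ADD COLUMN status TEXT DEFAULT 'active'")

print(old_code_read())  # still works: [(1, 'widget')]
```

A breaking change (renaming or dropping a column the old code selects) is exactly what forces the scheduled-downtime path described below.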
If I need to make some sort of breaking schema change, I would have to do a scheduled downtime and handle that particular deployment mostly manually. As far as I can tell, there's no real easy way around that problem (IMVU also does their schema-breaking changes outside of their continuous deployment scenario).
I do understand it's what you (and I, and others) need to use - but that's because there really isn't another choice.
Thus my post.
If I didn't require a "build" step, I would still want some sort of solution that could do things like pay attention to checkins, run automated tests, deploy to staging, run tests against staging, deploy to production, run tests against production, and let me know if/when anything doesn't work as expected.
Based on the limited set of tools I have experience with, I probably would still use TeamCity, because it's great at all of the non-building tasks it needs to do as well. But I'll check out Capistrano based on your post.
I will say this: If there's some kind of requirement where Dev1 will overwrite Dev2's changes on the target server, especially if we're talking about production, then you're probably doing it wrong. We never deploy from a developer's box or ad-hoc copying of files. Anything that needs to be deployed needs to be in source control, and needs to be deployed from the build box. I don't care if someone forgot an ASCX template and it takes a half hour to redo the whole thing.
I do wonder if some kind of multi-tenant app that holds application versions in assemblies and can roll up or down between them at will is the future though.
There are a couple gotchas that generally require writing an MSI helper DLL but it's no biggie. The only PITA is that if you precompile during the build stage you have to know the path of the application in advance.
Then you just use a little VBScript to let PsExec work its magic. Maybe I should put together an MSI that installs all the stuff you need to make it work.
Email me if you want some help setting up an MSI / PSExec based deploy system.
Edit: to expand on my sentiment, this doesn't really seem like the sort of thing an MSI was designed for; and having to potentially crank out some one-off VBScript seems scary at best.
Deploying can't get much easier than clicking a single button in VS.NET... Rolling back IS an issue, but it can be mitigated by versioning one's source code and testing the site locally before deploying.
Every single change had to be accompanied by a full and tested rollback script, and given that the deployment was often messy (lots of integration/hardware mixes/etc.), writing these was difficult and time-consuming... but they saved my arse on more than one occasion.
Moral of the story: always have a backout plan. Stuff goes wrong - not often, and if you're good, then rarely - but when it does you need a way out.
There was, however, a working version of the code sitting in the directory alongside this horrifically broken monster. I knew the configs were correct because they had been working not 5 minutes ago, and I hadn't changed them. So I updated a symlink (this was on Linux/Apache, but it would apply on IIS/Windows too), and everything was happy again, though using a slightly older codebase.
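The rescue described here is just a symlink swap between side-by-side versions. A minimal sketch (paths and contents invented):

```python
import os, tempfile

# Two side-by-side releases, as in the story: the horrifically broken
# new deploy and the still-working previous one.
app = tempfile.mkdtemp()
for name, content in [("old", "working"), ("new", "broken")]:
    os.makedirs(os.path.join(app, "releases", name))
    with open(os.path.join(app, "releases", name, "app.txt"), "w") as f:
        f.write(content)

# The server serves whatever "current" points at -- currently the bad deploy.
current = os.path.join(app, "current")
os.symlink(os.path.join(app, "releases", "new"), current)

# The fix: atomically repoint the symlink at the known-good release.
# No file copying, and the configs (already proven working) never change.
tmp = current + ".tmp"
os.symlink(os.path.join(app, "releases", "old"), tmp)
os.replace(tmp, current)
```

On IIS the equivalent move is repointing the site's home directory at the old folder, which is the same idea with a different switch.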
How hard can that be?
Evidently very hard. Despite spending the entire evening trying to get them to communicate and push files, it simply didn't happen. In the end I wrote my own deployment system based on source control, Samba, and rsync in Bash. It was easier, it worked, and I know why it works.
If that is easier to get working than a "one click" solution, the authors of said solution better get a bigger button. I can't seem to click this one.
Apparently there are some bugs which prevent parameterized deploys from working properly. Also, the documentation for Microsoft.Web.Deployment is very poor.
That said, Rob has a good point about MS owning the entire stack, and it's pretty odd that they can't seem to put together a working deployment process.
Whatever you do: Don't say MSDeploy. It's a joke and getting it working requires more voodoo than cooking together your own ad-hoc stuff, like I did.
One of the comments on the blog made me lighten up though:
Rails Developer: if only you had a good MVC framework!
Rails Developer: if only you had a good View Engine, none of that aspx crap!
Rails Developer: If only you had a good package manager like RubyGems!
Rails Developer: If only you had a good deployment tool like Capistrano!
Another 'button pushing' solution from Microsoft is the _last_ thing we need. People follow 'best practices' that are not actually pragmatic for the need at hand far too much already.
Rollbacks are easy because we just switch back to a previous 'release' snapshot branch.
Works well enough for us so far.
Maybe appharbor should open source their deployment routine because I sure don't have any issues with deploying my apps there and rolling back when I need to.
Just make something that works, doesn't require acres of XML, and does configuration properly (web.config transforms don't work on app.config, by the way, which sucks, and you can't MSDeploy anything related to ClickOnce).
Batch files + Robocopy are more useful at the moment than anything out of the box.