> Continuously deploy. Every commit should be instantly deployed to production.
ah... no. Maybe this is viable for a single developer, as a substitute for continuous integration. But with multiple developers checking in on a complex system, your site will be down. A lot.
If you're frequently having problems where new deployments are bringing down the site, then doing that more often isn't the answer. As un-fun as it may be, you should be looking at how you could have caught the problem before it went live, whether that entails more testing, more robust design or something else.
You're right. If everyone keeps behaving the way they do with longer deploy cycles even after moving to an extremely short one, then you will have lots of downtime.
The point wasn't that it solves downtime, but that it causes you to more readily understand sources of downtime. You then have to use that information to make your development and deploy process resilient to those classes of failures. Rinse. Repeat.
For example, at my company we would spin up a new engineer, and within about a month they would take out a database (or worse) with an extremely slow query. What did we do to fix the problem? We capped all queries at 20 seconds, after which we would fail the page request. In the best case, a developer pushes a page with a slow query in it, and the page fails often enough to cause the revision to roll back. In the worst case, the feature the developer was working on is broken, but the rest of the website keeps functioning.
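A minimal sketch of that query cap, assuming a Unix-style SIGALRM watchdog; the names here (run_query, QueryTimeout) are hypothetical placeholders, not IMVU's actual code:

```python
import signal

QUERY_TIMEOUT_SECONDS = 20

class QueryTimeout(Exception):
    """Raised when a query exceeds its wall-clock budget."""

def _on_alarm(signum, frame):
    raise QueryTimeout()

def run_query(cursor, sql, params=()):
    """Execute a query, but abort the page request if it runs too long."""
    signal.signal(signal.SIGALRM, _on_alarm)
    signal.alarm(QUERY_TIMEOUT_SECONDS)  # arm the watchdog
    try:
        cursor.execute(sql, params)
        return cursor.fetchall()
    finally:
        signal.alarm(0)                  # disarm whether we succeeded or timed out
```

A QueryTimeout that propagates up fails only that one page request, which is the isolation described above: the broken feature errors out while the rest of the site keeps serving.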
And yes. yes. yes! You should be catching your problems before they go live. The list of techniques is staggering: unit testing, regression testing, functional testing, fuzz testing, exploratory testing... but those methodologies only go so far. Continuous Deploy goes further.
Agreed. It seems like the author either has no customers or very understanding customers. Pushing every commit to production is just nonsense by any metric. What about commits that muck with the data model, where a bug might destroy production data? What about complex developments that consist of more than one commit?
This almost reads like a link-bait. I somehow doubt the author really believes what he's writing there.
Well, it's certainly not "no customers"... Timothy is referring to IMVU. Whether the customers are understanding or not is another issue. However, when we implemented this "cluster immune system", we had outages all of the time. But every single time, we took steps to prevent that class of failure from happening again, and now we deploy code to the cluster twenty times a day.
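A hedged sketch of what such a "cluster immune system" can look like: after each deploy, watch a handful of health metrics and roll the revision back automatically if any of them regress. The deploy, rollback, and fetch_metric hooks and the specific thresholds are assumptions for illustration, not IMVU's actual tooling:

```python
import time

WATCH_WINDOW_SECONDS = 300                     # how long a new revision is on probation
CHECKS = {
    "error_rate": lambda v: v < 0.01,          # < 1% of requests erroring
    "signups_vs_baseline": lambda v: v > 0.8,  # signups at >= 80% of baseline
}

def deploy_with_immune_system(revision, deploy, rollback, fetch_metric):
    """Deploy, then keep polling metrics; reject the revision on any regression."""
    deploy(revision)
    deadline = time.time() + WATCH_WINDOW_SECONDS
    while time.time() < deadline:
        for name, is_healthy in CHECKS.items():
            if not is_healthy(fetch_metric(name)):
                rollback(revision)
                return False                   # immune response triggered
        time.sleep(10)
    return True                                # survived probation; revision stays live
```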
Well, I'm not against frequent deploys. I just found the article a bit light-hearted in tone - as if no testing at all was going on.
No matter how many classes of failure you have ironed out, in any reasonably complex system you will still regularly have regressions that are not caught by a quick "two eyes" check.
"What about complex developments that consist of more than one commit?"
Obviously since you use git or another DVCS, you can do your mucking about locally in a branch, and when it's ready to go, squash it down into one atomic commit.
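For instance, with git the local branch-then-squash workflow can look roughly like this (the branch name and commit message are just placeholders):

```sh
# hack away on a local branch with as many messy commits as you like
git checkout -b feature-x
# ... edit, commit, repeat ...

# when it's ready, land it on master as a single atomic commit
git checkout master
git merge --squash feature-x
git commit -m "Add feature X"
```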
Well, he didn't mention that in the article. Such a model generally requires a staging server, which implies integration testing, though. Unless he suggests the release manager (or even the developer du jour?) just merges the stuff on his local machine and pushes it out as he sees fit...