
Why Continuous Deployment? - peter123
http://startuplessonslearned.blogspot.com/2009/06/why-continuous-deployment.html
======
russell
Continuous Deployment is a very intriguing idea. I haven't tried it, but in
the limit it makes sense. A few years ago I worked for a company that did
large batch releases, scheduled every 3 months. The first month was spent
negotiating with product management over product requirements and writing
functional specifications. The next month was writing code. The third month
was fairly rigorous QA, so much so that there was usually another month to fix
the problems that QA found. The breakdown was 25% design documentation, 25%
programming, and 50% fixing bugs that should have been caught in programming.
The quality of the code that made it to production was very good, except for
one time where a single bug brought the company to its knees for a week.

More recently, I did a consulting gig at a company that released every two weeks
with a one week QA cycle that allowed for a round of critical bug fixes. This
was much smoother. There were rarely any show stoppers and even those were so
fresh in everyone's minds that they were quickly fixed.

Neither place practiced TDD or even rigorous unit testing. Testing was pretty
much walking through manual scripts. It seems to me that continuous
integration to production requires TDD or something close to it. I also
question whether it is appropriate for an application with fiduciary
responsibility, but I would love to be convinced otherwise.

~~~
abstractbill
I don't practice TDD _or_ unit testing, but I deploy code on average more than
5 times every day.

Despite not doing TDD or unit testing, I have enough confidence in my code
that I spent a month in Greece on honeymoon last year with only a Blackberry,
while an average of about 50,000 concurrent users were connected to the chat
system I wrote for justin.tv. In that month the system didn't have a single
problem.

My "secret", if I have one, is that every change I release is very small
(maybe 5 or 10 lines of code - 20 lines at the very most). I read and re-read
my changes thoroughly before I deploy, and test the new behavior many times.
The evolution of the chat system I wrote has been a huge number of very small
steps.

~~~
biohacker42
_and test the new behavior many times_

Manually? You can save a lot of time if you automate that.

~~~
abstractbill
I do have _non-unit_ tests (but only for things that have actually failed more
than once). Specifically, I have a bunch of chat bots that periodically log in
to the production chat servers, talk to each other, ban each other, etc, etc,
and make sure they get the responses they expect.

Yes, new functionality I test manually. I believe the time taken to write
tests for things that have never failed is wasted.
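
Roughly, the shape of one of those bot tests looks like the sketch below. The host, port, and JOIN/MSG line protocol are made-up stand-ins, not the real justin.tv protocol: two bots connect to production, one talks, and the other asserts it saw the message.

    # Sketch of a production smoke-test bot pair. The endpoint and the
    # JOIN/MSG line protocol are hypothetical stand-ins.
    import socket

    HOST, PORT = "chat.example.com", 6667   # stand-in production endpoint
    TIMEOUT = 10                            # seconds to wait for a response

    def connect(nick):
        # Open a connection and join a test channel (assumed line protocol).
        s = socket.create_connection((HOST, PORT), timeout=TIMEOUT)
        f = s.makefile("rw", encoding="utf-8", newline="\n")
        f.write("JOIN #smoketest %s\n" % nick)
        f.flush()
        return f

    def expect(f, needle):
        # Read lines until the expected text shows up; the socket timeout
        # raises (and fails the run) if it never does.
        for line in f:
            if needle in line:
                return True
        return False

    if __name__ == "__main__":
        alice, bob = connect("smokebot_a"), connect("smokebot_b")
        alice.write("MSG #smoketest hello from the smoke test\n")
        alice.flush()
        assert expect(bob, "hello from the smoke test"), "message never arrived"
        print("chat smoke test passed")

Run it from cron every few minutes against production and page on failure.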

~~~
Afton

         I believe the time taken to write tests for things that have never failed is wasted.
    

Yes yes yes.

I've seen a lot of tests that, if they failed, would have meant that a
thousand other tests would have also failed, and that test would have been
simply more noise.

~~~
LargeWu
Then you're either a) not testing correctly, b) writing code with too many
dependencies, or c) both. If a thousand tests break from a single code change,
this should be a code smell that says your architecture needs work.

This is where testing gets its real value. Finding bugs is just a side effect
of testing. The real value is that testing helps you _design_ good code.

~~~
run4yourlives
I think he means the reverse though. It's not that a single line of code
causes thousands of failures, it's that you'll never get that particular test
to fail unless pretty much everything else is failing too, ergo it's a useless
test.

I don't necessarily disagree with your point, just clarifying.

------
mrduncan
A post from about 5 months ago going into some detail on IMVU's continuous
deployment process, which may be interesting to some:
<http://news.ycombinator.com/item?id=475017>

------
zacharypinter
I like continuous deployment and want to use it on a few websites I run.

However, it seems to depend on a large user set in order to "test deploy" the
application to a subset of the user base.

Any thoughts on using this for projects with a small (< 1000) user base?

~~~
eries
I think the principles apply, but the specific practice may have to change a
bit. If you look at the five-part "cluster immune system" I normally use, only
one part is dependent on a large user base, and that is incremental deploy.
You can still use all the automated testing, sandbox, continuous integration
and real-time alerting practices just as well.
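
For concreteness, incremental deploy means deterministically bucketing users and exposing only a slice of them to the new build, something like the sketch below. The names and numbers are illustrative, not the actual system; the point is that a 5% slice of a small user base is only a handful of people.

    # Illustrative percentage-rollout bucketing; names and thresholds are
    # made up, not the actual cluster immune system.
    import hashlib

    def in_rollout(user_id: str, feature: str, percent: float) -> bool:
        # Deterministically assign a user to the first `percent` of 100 buckets.
        digest = hashlib.sha256(("%s:%s" % (feature, user_id)).encode()).hexdigest()
        return int(digest[:8], 16) % 100 < percent

    if __name__ == "__main__":
        users = ["user-%d" % i for i in range(1000)]
        exposed = sum(in_rollout(u, "new-chat-backend", 5) for u in users)
        print("%d of %d users see the new build" % (exposed, len(users)))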

I would suggest using something like Five Whys to figure out how different
your situation really is, rather than assume you'll have a lot of problems due
to a small user base:
<http://startuplessonslearned.blogspot.com/2008/11/five-whys.html>

You might find that the smaller customer base is actually a big advantage,
since it allows you to personally get to know many more customers and
therefore makes it easier to find and debug problems in real time. Just a thought.

~~~
trapper
For this we thought about recording user input, then replaying it on the new
build as a pre-deploy step and looking for errors in the logs, etc. Obviously
this would only work without major UI/schema changes. Have you tried anything like
this?
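
The rough shape we had in mind was something like this (the log format and the staging host are placeholders, not a tested recipe):

    # Replay recorded production requests against a candidate build before
    # deploying, and flag anything that errors. Log format, URLs, and the
    # "no schema/UI change" caveat are assumptions.
    import json
    import urllib.request
    import urllib.error

    CANDIDATE = "http://staging.example.com"   # hypothetical pre-deploy host

    def replay(log_path):
        # Each log line is a JSON object, e.g.
        # {"method": "GET", "path": "/chat/history", "body": null}
        failures = []
        with open(log_path) as log:
            for line in log:
                rec = json.loads(line)
                data = rec["body"].encode() if rec.get("body") else None
                req = urllib.request.Request(CANDIDATE + rec["path"],
                                             data=data, method=rec["method"])
                try:
                    urllib.request.urlopen(req, timeout=10)
                except urllib.error.HTTPError as exc:
                    if exc.code >= 500:        # server errors are what we care about
                        failures.append((rec, exc.code))
                except urllib.error.URLError as exc:
                    failures.append((rec, str(exc)))
        return failures

    if __name__ == "__main__":
        bad = replay("recorded_requests.jsonl")
        print("%d replayed requests failed" % len(bad))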

------
richcollins
The interesting idea that Eric introduces here is the concept of deploying
plumbing for big new features continuously. This lets you work in small
batches while making big changes.
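
One way to picture it (the flag mechanism here is just an illustration, not something spelled out in the post): each small deploy ships another piece of the new code path dark, behind a flag, and the flag only flips once the plumbing is complete.

    # Illustrative feature flag: the new plumbing is deployed over many small
    # releases but stays dark until it's ready.
    FLAGS = {"new_message_store": False}   # flipped on when the plumbing is done

    def save_to_old_store(msg):
        print("old store:", msg)           # existing, known-good path

    def save_to_new_store(msg):
        print("new store:", msg)           # new plumbing, shipped incrementally

    def save_message(msg):
        if FLAGS["new_message_store"]:
            save_to_new_store(msg)
        else:
            save_to_old_store(msg)

    if __name__ == "__main__":
        save_message("hello")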

~~~
blader
Insightful observation.

