
Ask HN: Is TDD/BDD hampering your startup's agility? - bdclimber14
I'm a fairly big fan of agile software development, but at the risk of being deemed an agile heathen, I'm beginning to doubt the benefits of test-driven development (and BDD) for a lean startup. Developer friends drinking the agile kool-aid swear by BDD. In theory, it's great--saves you time by identifying bugs at lower levels and guarantees that everything works.

In practice though, I find myself spending a disproportionate amount of time writing and getting test cases to pass compared to just writing good code and testing in a browser.

- For functionality that I know works, I sometimes have a difficult time writing and passing tests, especially when the functionality is unique (e.g. Facebook login).

- When we pivot or iterate, it seems we always spend a lot of time finding and deleting test cases related to old functionality (a disincentive to iterating/pivoting; the software is less flexible).

- Test cases (namely RSpec) are just plain slow to run (seconds staring at a screen before getting positive or negative feedback).

- There always seems to be about 3-5x as much code in a feature's respective set of tests as in the actual feature (it just takes a lot of damn time to write that much code).

- Most of the code in a lean startup consists of hypotheses that will be tested by the market, and possibly thrown out (it's even harder to rewrite test cases with slightly different requirements than to write them from the beginning).

- Refactoring breaks a lot of test cases (mostly with renaming variables).

I do think TDD is great for client work, but for lean startups, I'm not so sure.

For a startup that is iterating very frequently and trying to reach product-market fit, I find TDD to be harmful; it actually impedes agility. Speed trumps reliability here.

Like security (budget vs. security), are speed and reliability two points on a continuum? Where is your slider as a lean startup?
======
DanielBMarkham
I'm an agile coach and startup junkie.

TDD/BDD doesn't fit the mold of startups. Here's why:

TDD/BDD assumes you know the problem and are coding to create a solution. In
startups, however, you do not know the problem. Sure, you can _imagine_ some
kind of customer that might want some kind of code to do something or another.
But that's all pipe dreams. You have no idea if method X is useful or not.
Sure, you might know if method X meets the requirements of your imagination,
or your architecture dreamed up in your imagination to deal with your
imaginary customers with their imaginary problems, but you really don't know.

So it's a waste of time. Startups are all about providing value -- not
flexibility, not bug-free code, not code you can hang on a wall and take
pictures of. Value. If you want to be the guy who can make business solutions
happen, the guy that customers can come to with any problem and you can make
it all happen, you need to bone up on this stuff. But in the business world,
you've already got the money, your job is to make the solution happen. In the
startup world, you don't have the money yet. Big difference. Big, big
difference.

Look at it this way: 19 out of 20 startups fail. That means that odds are that
you will never see this code again. You'd be a fool to spend any more time on
it than absolutely necessary. But the math works out completely differently in
the commercial world, where most things you write stay around forever.

What I found over and over again with Agile is teams and individuals buying
into the marketing/herd mentality of agile and forgetting about the
adaptive/iterative nature. Everybody wants to either use a recipe book or just
repeat their last project that they thought was really cool. "True" agile
means ditching whatever isn't working. Pronto. There are no sacred cows.
Everything is on the table.

~~~
petercooper
_TDD/BDD assumes you know the problem and are coding to create a solution. In
startups, however, you do not know the problem._

This seems like two ideas conflated into one: the business 'problems' (1) and
the problem you're solving in a particular piece of code (2). These aren't
usually directly related. For example, if I wanted to build a YouTube killer,
the TDD process isn't relevant to the high-level "problems" I want to solve
(which, as you say, one might not know of yet).

TDD comes into play (from a developer POV) when I actually start to write some
code that, say, transcodes a video or provides an authentication system. In
those areas, the problem is obvious and contained and TDD can work well.
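For a contained problem like that authentication example, the test-first loop is easy to picture. Below is a minimal, framework-free sketch in plain Ruby (the `valid_password?` rule and the `assert_equal` helper are hypothetical stand-ins for a real RSpec setup): assertions stating the expected behavior first, then just enough code to satisfy them.

```ruby
# A tiny hand-rolled assertion helper, standing in for a test framework.
def assert_equal(expected, actual)
  raise "expected #{expected.inspect}, got #{actual.inspect}" unless expected == actual
end

# Implementation written to make the assertions below pass.
def valid_password?(password)
  password.is_a?(String) && password.length >= 8
end

# The "tests": stated before the implementation existed.
assert_equal true,  valid_password?("correct horse battery")
assert_equal false, valid_password?("short")
assert_equal false, valid_password?(nil)
puts "all assertions passed"
```

The point is only that the problem here is obvious and contained, so stating the expected behavior up front costs almost nothing.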

 _That means that odds are that you will never see this code again. You'd be a
fool to spend any more time on it than absolutely necessary._

Which does not necessarily mean TDD is "a waste of time". If practicing TDD in
a particular situation results in _fewer hours_ spent developing a feature,
the upfront "cost" of those hours is not a waste; the waste is the extra hours
spent debugging your way through building a non-tested equivalent.

A developer should have a feel for which way works for them. In my case, I
know that the time I spend is ultimately lowered through using some TDD
principles (though not all) vs tiresome debugging of untested code. The reward
cycle of TDD is commonly overlooked. "Code->Yay!->Code->Yay!" beats
"Code->Code->Code->Code->2 hours of debugging->FFFFUU!!" any day.

 _"True" agile means ditching whatever isn't working. Pronto. There are no
sacred cows. Everything is on the table._

Agreed, but that runs counter to making absolute statements like "TDD doesn't
fit the mold of startups" and that "it's a waste of time." It _can_ fit and it
can _save_ time (just not _always_ , sure, or if you're 'doing it wrong' for
your situation).

~~~
ashr
Ditto. Couldn't agree more, in fact, I was going to write something very
similar.

At a high level, startups do _need_ a TDD philosophy. For startup business
ideas, the market is _the_ test suite (to take the software analogy further):
it is already there, and changing the execution plan to make sure you pass
those tests is vital.

~~~
wisty
I'm not sure I agree that the market makes a great test suite. It kinda hurts
when you fail.

But yeah, there are other things you can use, like A/B testing (for web apps).
You can automate "code smell" detectors and get a continuous graph of how
good your code seems to be.

Of course, things like converters and math functions (anything non-GUI) can
benefit from unit tests.

------
petercooper
Considering how many hours I've saved by not having to "debug" after drinking
the TDD kool-aid, my answer is no.

The problem I notice, though, is many people will either TDD the
"right/official" way or not at all and that's a false dichotomy. If a
particular type of testing is slowing you down or causing you to be less
productive, don't do it! But stick with the tests and processes that _do_
allow you to be quick but without abandoning testing in favor of the old "code
and pray" approach.

For example, on a recent project I built almost the entire app in the models
(using TDD) before even hitting controllers or views. It only took a few days
to tack those on at the end and I didn't bother doing any testing of them
beyond some cursory "make sure the site generally works" integration tests. (I
see the value in controller and view tests but.. well.. they'd have slowed me
down and the models were far more important.) In contrast to that, I have a
5+ year old project I _retroactively_ added lots of integration tests to. The
models are untested but at least I know if a change screws the app up in a big
way because so many different use cases are tried in the integration tests.

TLDR: With TDD, stick with the stuff that works and tone down the stuff that
doesn't. Don't feel you have to do things the official/"cool" way - come up
with your own processes.

~~~
jenrawson
God damn, dude. You broke it down. Very nicely said. A couple questions. Do
you think that you might do “Selenium”-like testing on older, legacy systems?
What type of testing would you do for brand-new systems? And what type of
testing would you do for the mature, but green and healthy, kind of system?

~~~
oomkiller
I think Selenium and other related tools are what he is talking about when he
says "integration" testing. I'm a Ruby/Rails guy, so I use
Cucumber/Steak+Capybara for my integration testing.

------
joshcrews
I'm doing BDD as the lead (of 2) Rails developers at our startup, and _it's the
reason_ we can go so fast.

Some differences we're doing from your situation:

We use Cucumber to cover the whole web app (but not flash or video processing),
and only have some small rspec model specs on important methods involving
billing.

Cucumber coverage is also very powerful per line of test code. We have 1000
lines of cucumber covering 5000 lines of code.

We aren't covering everything with tests. For example, I would have given up
on the Facebook login test coverage and just written some tests that mock a
Facebook-logged-in user, without covering the actual login functionality itself.
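A sketch of what that kind of mocking might look like, with entirely hypothetical names and a hand-rolled double in plain Ruby (rather than rspec-mocks): the code under test only depends on an object that answers the client's interface, so a fake can stand in for the real Facebook API.

```ruby
# The production code only needs something that responds to #authenticate.
class SessionsController
  def initialize(facebook_client)
    @facebook_client = facebook_client
  end

  def sign_in(token)
    user = @facebook_client.authenticate(token)
    user ? "Signed in as #{user[:name]}" : "Login failed"
  end
end

# A hand-rolled double standing in for the real Facebook API client.
# No network, no OAuth dance, so the test is fast and deterministic.
class FakeFacebookClient
  def authenticate(token)
    token == "valid-token" ? { name: "Alice" } : nil
  end
end

controller = SessionsController.new(FakeFacebookClient.new)
puts controller.sign_in("valid-token")  # => Signed in as Alice
puts controller.sign_in("bogus")        # => Login failed
```

The actual OAuth flow stays untested, as the comment suggests; the tests only cover what the app does with a logged-in (or rejected) user.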

If we were not doing BDD, my time estimates for each ticket would have to
double because the hair-pulling debugging time would skyrocket and kill
productivity.

I would also hate working on a team that didn't have test coverage, because
developer B might build something I don't understand or know about; I could
inadvertently break it and we'd find out 4 days later.

Another benefit: we can ruthlessly refactor and tear out code because the
tests immediately identify if something broke.

There's also more payoff from your tests over time. The longer your project
lasts, the more those tests pay dividends. Even if they seem painful now, they
are an investment in maintainable code in the future.

My advice: keep going with TDD/BDD and consider Cucumber for everything but
your most important business-logic methods.

~~~
Lewisham
 _There's also more payoff from your tests over time. The longer your project
lasts, the more those tests pay dividends. Even if they seem painful now, they
are an investment in maintainable code in the future._

I think this is the most important thing. Tests are an investment, but once
you've done them, every time you run them from that point on is free. The
amortized cost keeps getting better and better.

Once you pull in a Continuous Integration engine, or even turn it up to 11 and
implement continuous deployment, those tests really do pay for themselves.
It's just intimidating in the short-term.

Whenever I've gone rogue and thought "Sod it, not today", it's invariably
bitten me on the ass with twice the pain it would have taken to write the tests.

~~~
aquark
> Tests are an investment, but once you've done them, every time you run
> them from that point on is free.

True, but they have their own maintenance cost. When features evolve in the
future you have to pay to update the tests as well. Worthwhile, but something
to bear in mind.

~~~
chromatic
_When features evolve in the future you have to pay to update the tests as
well._

Very true, but when features evolve in the future, you may have to pay the
debugging costs if they evolve in ways you did not intend with regard to other
features.

------
kgo
I always say, somewhat tongue-in-cheek and somewhat intentionally
provocatively, that if you can use stuff like TDD and pair programming, then
you're probably working on a boring problem.

And I think there's some truth to that. On a macro-level, how would you even
begin to write tests for a search engine or some stock market bot or other
notoriously hard problem?

search_on("avatar").should_return("http://www.imdb.com")

best_stock_for(:percent_return, 200).should_return("cisco")

?

These problems are inherently non-deterministic. How do you even begin to
write a test for that?

On a micro-level, sure, maybe you're working on a single component. And TDD
would help you come up with the interface. But if you don't even know if the
answer is a genetic algo, or simulated annealing, or using mechanical turk, or
whatever, there's really no point in even trying to freeze the interface.
Which is what TDD really does, as much as or even more than verifying the
resulting code: it defines the interface ahead of time. It's a way to trick
developers into writing specifications without using that nasty, imprecise,
context-sensitive language known as English.

But then again, right now we're rewriting a pretty critical piece of code.
We've thought a lot about how it works. We had a few meetings about the new
approach. Wrote up a quick email with a basic API. And doing pairing and TDD
from there, well that's actually working out pretty well. And I'm confident
we're getting better code quicker because of the approach.

Ultimately it gets back to the statement that real developers ship. In some
cases, BDD and pairing will help you ship higher quality software quicker. In
other cases it won't, and it'll end up wasting money and time. And real
developers will then use their tools accordingly, and not dogmatically.

~~~
nostrademons
Google has tests that read almost exactly like your examples.

Of course, the whole search engine isn't specified by tests. However, if IMDB
isn't in the top 10 results for [avatar] or barackobama.com isn't in the top
10 for [obama], something is seriously wrong and a human should look into it.

The rest of your post is pretty good. Not sure why you've been downvoted.

~~~
bdclimber14
These tests would send a red flag if they failed, but they aren't good as TDD
tests.

First, they are tied to a current time's context. Maybe next year, IMDB is no
longer a good source for movie information, and it's so bad that it's on page
2. It could happen. In theory, your test cases should be consistent.

Secondly, the TDD process is always to write code to pass tests. Well, it's
pretty easy to write code to return IMDB, but it's really hard to write a test
suite that, when coded against, would produce Google. That test would look
like:

search_on("avatar").should_return("www.imdb.com") if
is_really_good_result(imdb)

Once you are Google, then you should have these tests to make sure things keep
working. However, I think it is really hard (in a bad way) to do TDD on hard
problems.

~~~
tooky
You can't test for a particular outcome given an unknown set of input data.

If you have a known snapshot of data, you can set expectations for how you
want your system to behave under those circumstances.

If you create a world where IMDB would rank highest for your indexing
algorithm for the term "avatar", then you can expect that when you run a
search it will be returned as the first result.
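As a toy illustration of that idea (entirely hypothetical names and a deliberately naive scoring rule): with a fixed snapshot of documents and a deterministic ranking, the "IMDB ranks first for avatar" expectation becomes an ordinary assertion instead of a bet on the live web.

```ruby
# A tiny in-memory "index" over a known snapshot of documents.
class TinyIndex
  def initialize(docs)
    @docs = docs  # { url => text }
  end

  # Naive term-frequency scoring: rank by how often the term appears.
  def search(term)
    @docs.sort_by { |_url, text| -text.downcase.scan(term.downcase).size }
         .map(&:first)
  end
end

# The fixed "world" the test controls: a snapshot, not the live web.
snapshot = {
  "imdb.com/avatar" => "Avatar Avatar Avatar film cast reviews",
  "example.com"     => "unrelated page mentioning avatar once",
}

index = TinyIndex.new(snapshot)
puts index.search("avatar").first  # => imdb.com/avatar
```

The assertion is stable because both the data and the algorithm are pinned down; only when the real world is an input does the expectation become time-dependent.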

------
jenrawson
I can't believe I came across this post. I have been fascinated by a podcast
interview with Kent Beck, the creator of TDD. I've been listening to it for
two days. The interviewer asks Kent Beck: Is there a time in a startup's life
when TDD is inappropriate? Kent Beck responds: Yes. There is a time when you
are trying to generate a lot of ideas. You need to think of a lot of ideas so
that you can find a good one. In order to do that, you have to work fast; many
of the things you build just won't work out. (Or rather, you lose interest in
them.) During this phase of a project, TDD can slow you down. Those are his
words, not mine. Although... keep listening. Kent has quite a bit more to say
on the topic.

Find it here (scroll down to the link for "Show #74").
[http://startuppodcast.wordpress.com/2010/07/10/show-74-kent-...](http://startuppodcast.wordpress.com/2010/07/10/show-74-kent-
beck-on-lean-startups-tdd-and-startups/)

Or, subscribe to the show here: [http://itunes.apple.com/podcast/the-startup-
success-podcast/...](http://itunes.apple.com/podcast/the-startup-success-
podcast/id293268482) Look for episode 74.

------
ekidd
I've used BDD in a startup, and it increased my development speed. Here's what
I did:

\- I wrote the specs before I wrote the code. Essentially, I used Cucumber to
define how users interacted with the site, and I used RSpec+Shoulda to define
how low-level APIs worked. This rarely took longer than testing by hand: I
just wrote something like "When I click on 'Sign in', Then I should see 'You
are signed in.'", and that was it.

\- I kept a watchful eye on the size of the tests. If the test-to-code ratio
ever drifted far from 1:1, I figured out why and fixed it. A 3:1 or 5:1 ratio
is a sign that your BDD/TDD process has gone _way_ off the rails, at least in
my experience. Common causes are (a) not using Shoulda to test models, and (b)
relying on controller specs when you should be using Cucumber (or Steak).

\- I used Cucumber for specifying user interactions, and RSpec for testing
models. I only wrote controller specs for weird edge-case behavior that was a
pain to test with Cucumber. _Edit:_ And I virtually never wrote view specs.

\- Refactoring was easy, because I could tear into the code and trust the
specs to report any breakage.

I agree, however, about the speed of Ruby test suites. I _hate hate hate_
waiting for specs to run. I get some mileage out of autotest and Spork, but
not enough for my tastes.

~~~
bdclimber14
I always thought cucumber was, well, cumbersome. I stick with Rspec, which you
could argue saves time. The 5:1 ratio mainly hits when testing models,
especially scopes. Maybe the problem is that I've written tests for all
validation, attributes and scopes. For a model, that can be a dozen lines of
code, but for the test suite, it could be hundreds.

I do use Shoulda, and I don't do controller/view specs (so no
mocking/stubbing). I do integration (i.e. request) specs.

I've always had a problem with testing view output like you mentioned e.g. I
should see "You are signed in." I'm always making last-minute copy changes,
and sometimes would make a change like this to "You are logged in". It's a
very simple change, but potentially could break some specs. I'd have to run
the suite, see what failed, look at the line numbers, figure out why, then
realize it wasn't that my app wasn't functioning properly, but that the
assertion was tied very closely to a transient message. Again, maybe this just
isn't a good way to test.

I'll agree on refactoring. Nothing is scarier than going in and changing
internals, hoping nothing breaks.

------
mcantor

        Refactoring breaks a lot of test cases (mostly with renaming variables).
    

This is a red flag to me. Your tests should not know or care what variable
names are used by your code.

It sounds in general like you might not be doing TDD correctly. Your test
cases shouldn't be slow to run, either. Are you actually hooking up with the
database in your tests, or are you isolating them properly?
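One common way to get that isolation (hypothetical example in plain Ruby): the object under test talks to a small repository interface, and the test hands it an in-memory stand-in instead of a database connection, so the suite runs in milliseconds.

```ruby
# The object under test depends only on a repository interface.
class AccountService
  def initialize(repo)
    @repo = repo
  end

  def total_balance(user_id)
    @repo.accounts_for(user_id).sum { |a| a[:balance] }
  end
end

# In-memory double: no database, no fixtures to load, nothing to clean up.
class InMemoryRepo
  def initialize(rows)
    @rows = rows
  end

  def accounts_for(user_id)
    @rows.fetch(user_id, [])
  end
end

repo = InMemoryRepo.new({ 1 => [{ balance: 10 }, { balance: 32 }] })
puts AccountService.new(repo).total_balance(1)  # => 42
```

In a real Rails app the production implementation would be backed by ActiveRecord; the test never needs to know that.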

~~~
MartinCron
The other red flag: This is 2011, there are good automated refactoring tools
out there that make renaming variables/methods/classes trivial.

~~~
mcantor
Yeah, there's one built into Vim!

    
    
        :bufdo %s/<C-r><C-w>/new_name/gc

~~~
billmcneale
This is exactly the kind of action that leads to build breaks.

Don't do it, use a real IDE.

~~~
dasil003
Unless you're using Smalltalk, then no, I'm not willing to use a watered down
language for the simple ability to make variable renames bulletproof.

~~~
joske2
There are other dynamic languages besides Smalltalk that have a good IDE.
JetBrains provides IDEs with refactoring support for JavaScript, Python, Ruby,
PHP, and others.

~~~
dasil003
not bulletproof refactoring though.

------
jlouis
I've never bitten the Apple of TDD/BDD but:

* To me, you should balance the amount of testing on several things. Not all code is created equal: some of it is a plain old quick experiment meant to be thrown away; other code is something you expect to be running for a long time.

* The most important balance is this: If you leap quickly over testing of a piece of code, it _may_ or _may not_ cost you more time in the longer run. In other words, not testing increases the risk variance of the code having a bug further down the road. You have to evaluate if that is going to be a problem or not. The problem may also occur because your code is too slow. With a good test-harness it will be easier to optimize and sometimes the tests can be used as a start for benchmarking.

* On the contrary, if you feel the grass is greener on the other side of the road, you may test too much and thus never move fast enough to get anything done. Skimping would cost you further down the road, but that hinges on the premise that you will not discard both the code _and_ the idea and rewrite (in which case the tests need to be rewritten anyway).

* Personally, I rarely use a TDD approach. I rather like property-based testing: I "fuzz" out errors. I've just written a protocol encoder and decoder, and there is an obvious test: (eq orig (decode (encode orig))). So I automatically _generate_ 1000 "origs" and test that the above property holds. To me, this is much more valuable than TDD/BDD - but I've never been a fan, as I said.

* Sometimes the idea of BDD is to shape your process and thinking pattern. In that case, it hardly looks like a waste of time: had you not BDD'ed, you might have been in the unlucky case where you implement a lot of code only to realize that you implemented the _wrong_ idea, because the API has to be different and serve you differently.
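The round-trip property in the fourth point can be sketched in a few lines. This uses a toy run-length encoder as a stand-in for any real codec (hypothetical code, not from the comment): generate many random inputs and check decode(encode(x)) == x for each.

```ruby
# Toy run-length codec: "aaab" => [[3, "a"], [1, "b"]] and back.
def encode(str)
  str.chars.chunk_while { |a, b| a == b }.map { |run| [run.size, run.first] }
end

def decode(pairs)
  pairs.map { |count, char| char * count }.join
end

srand(42)  # fixed seed so any failure is reproducible
1000.times do
  # Randomly generated input over a small alphabet, length 0..20.
  orig = Array.new(rand(0..20)) { %w[a b c].sample }.join
  raise "round-trip failed for #{orig.inspect}" unless decode(encode(orig)) == orig
end
puts "1000 random round-trips passed"
```

One property plus a generator replaces hundreds of hand-written example cases, which is exactly the appeal over example-based TDD for codec-like code.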

~~~
eengstrom
Much of the same sentiment jlouis sums up above, plus I always keep in mind:

Time, Quality, Cost, Scope

If what you're coding will have limited impact on use or functionality,
scalability, or performance later down the road... fine. However, it is my
experience that standard approaches are standardized for the greater good and
health of a "more mature company". As a manager, director, and head honcho, I
sure don't want a developer making that kind of evaluation. It will work for
you now, but for your own sake and others' later down the road: artfully
comment your code!

------
staunch
I rarely use TDD for prototyping or even the first version of a project. I
tend to only write tests on my second pass through a chunk of code (generally
when I'm refactoring it).

Works for me perfectly well, and I don't give a damn what the TDD True
Believers think.

~~~
bphogan
I think even the true TDD "purists" would say you're doing it exactly right...

Robert Martin usually says "You are not allowed to write any production code
unless it is to make a failing unit test pass."

When I'm writing code to solve a problem I've never solved before, I don't
write tests. But then I scrap it and write tests first, implementing code to
pass the tests. It may sound wasteful, but I have enough experience behind me
to know that those tests come in mighty handy about six months later when I
want to add in some crazy new feature I didn't think of before.

Heck, I even do that with new features - I'll branch, write it, see if it
works, then branch again off the master and implement it again with tests.
It's not much extra work really, and I often do catch little mistakes I made
in the "prototype" version.

But it's taken me a long time to get used to TDD, and I feel like I'm still
learning. I occasionally find myself over-testing. Like anything else, it's a
discipline, but I find it so worth it.

~~~
mcantor
Actually, I think when Martin says "production code", he means "non-test
code", as opposed to "code that will be released to production."

~~~
bphogan
Yes, and I take that to mean anything that's non-trivial or experimental. I've
heard him speak on the idea of experimentation before, and that's what led me
to the methodology I use today.

------
grandalf
A few things I've noticed:

Developers should not be paid to write tests, only code. If the tests are
worthwhile, then they'll get written anyway.

I've seen some developers who write lots and lots of pointless tests... hmm
does Model.find(:all) return all the items in the test db? Ok one passing
test, does :first return one? Ok, another passing test. I'm not exaggerating.

If your test codebase is full of stupid tests that are actually testing your
framework, and if your test suite takes 5 minutes to run, maybe that's why
your team has so much time to read HN.

Good, useful tests will test the most critical 10% of the codebase at most.
The "money" paths that are critical to your core business. Things like credit
card processing, account signups, password resets.

Many of the critical 10% of tests may very well be integration tests, not unit
tests. There is no reason to write unit tests if the big problems would be
caught by an integration test before a deploy.

If your testing ideology makes all this sound like hogwash, then you probably
work in a cubicle, where it does make sense to test your codebase more
broadly.

~~~
bdclimber14
I know you're not exaggerating because I know a lot of them. They live hard,
die hard by "if you code it, test it" -- which includes these trivially simple
statements. Maybe not Model.find(:all), because that isn't new code (still
framework), but definitely testing all the attributes of an AR model based on
the DB schema.

------
newobj
This is not a one-size-fits-all issue. Depending on the team, you might be
able to write code that's 90-99% correct with little test coverage. Depending
on the problem space, code that's 90-99% correct might be good enough. In
others, it might sink your company.

You might lose 1% of "customers" due to bugs, but you could also easily lose
1% of customers due to bad copy or UX. Is that tested as rigorously as the
code? Could the time you spent writing tests/specs have been used to implement
and analyze A/B tests?

Etc.

------
bryanlarsen
One symptom of the BDD kool-aid is Cucumber. Cucumber is very useful if you've
got a customer in the loop who doesn't speak Ruby. However, if everybody who
is viewing/writing the tests speaks Ruby, then maintaining the Gherkin
translations is a waste of time, and a "leaky abstraction". Webrat by itself
presents a very clean, concise, readable syntax, so just use it by itself for
integration tests, or use one of the other alternatives, like Steak.

[http://mrjaba.posterous.com/acceptance-testing-and-
cucumber-...](http://mrjaba.posterous.com/acceptance-testing-and-cucumber-
alternatives)

------
akronim

        I sometimes have a difficult time writing and passing tests...
    

That sounds like a design issue - if you don't design the code to be testable
you'll probably find it hard to test. Even programming _in the small_ e.g. at
the method level, you should be thinking "how will I be able to test this"?
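One concrete instance of designing for testability at the method level (hypothetical example): a method that reads the clock directly is awkward to assert against, while taking the time as a parameter makes the test trivial.

```ruby
# Hard to test: the result changes depending on when the suite runs.
def greeting_now
  Time.now.hour < 12 ? "Good morning" : "Good afternoon"
end

# Testable: the time is a parameter, so the test controls it.
def greeting(at)
  at.hour < 12 ? "Good morning" : "Good afternoon"
end

puts greeting(Time.new(2011, 1, 1, 9))   # => Good morning
puts greeting(Time.new(2011, 1, 1, 15))  # => Good afternoon
```

The same move (pass dependencies in rather than reaching out for them) is what makes databases, clocks, and external APIs easy to substitute in tests.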

------
wfaler
I find the thread starter and the highest-ranking comments to be seriously
deluded, and here's why: I find that TDD speeds me up almost 99% of the time
and lets me iterate and test my thesis much faster than I could without it,
and for this very reason, I almost always write my code with good test
coverage.

The times I have omitted tests, I have always come to regret it, having to re-
write the code from scratch for it to be up to par.

A few reasons for this:

\- Yes, TDD is delayed gratification: the first few cycles of write, deploy,
open browser and test are quicker than writing a test. But as your
functionality grows, instead of linearly incremental effort to write new test
code, your manual regression testing grows exponentially.

\- TDD actually HELPS with change: when you refactor functionality, you have
instant feedback as to what still works and what doesn't. Though features
change, the whole system and code base rarely do. See the previous point.

\- TDD helps you write minimal, flexible architectures that are adept at
change, as systems are decomposed into, well, testable units!

\- The "prototype" code almost always ends up being the production system.
Once that is the case, which is easier without tests: trying to write tests
for code that isn't very testable, rewriting the system, or just living with
testing costs that are much higher than the competition's?

I have actually seen startups slowly die due to the first two points I raise.
But if you think it's still a good idea to skimp on testing for the sake of
expedience, good luck to you; you're going to need it...

------
azm
I have realized that all kool-aid is quite useless for most situations and
thus have reduced my testing to two simple things:

\- write unit tests for units that actually are complex and do need it

\- focus on getting as much coverage as possible from functional and system
tests

------
billmcneale
When you're a startup, getting to market should trump everything else, and TDD
gets in the way of that.

Don't listen to agilists who tell you that untested code is unprofessional.

First of all, hundreds of thousands of untested lines of code go to production
every day and they work fine.

Second, agilists usually go by the fallacy that "either you're using TDD or
you're not testing". Which is obviously false: you could also be writing
tests last. Which works just fine.

------
famousactress
I've done TDD to a variety of degrees on different code bases, with a variety
of success. I think when you achieve the right rhythm and approach for your
particular code base and team, TDD can make you go faster. If it's not helping
you build quality software quicker than you could without it, don't do it.

A number of these points aren't familiar to me (trouble finding tests for
deleted code? harder to modify tests for changing requirements than to start
from scratch? renaming variables is hard?). These comments make me wonder if you've
been treating your tests the way you treat your code.

When TDD has worked best for me, it's because I've spent a lot of time
thoughtfully putting some organization into my tests, making sure they're
ridiculously fast to write and ridiculously fast to run. Your source code
becomes a slave to your tests; that's the whole point. The fact that your
tests are in your way suggests that you're doing it wrong. If you were doing
it correctly and TDD was failing you, I think the symptom would be your
operational code getting in the way instead.

------
candl
I am programming mostly for myself (so far) and thus haven't done anything
big, but some of the points you made remind me of how I feel about TDD (from a
different, but in some ways similar, perspective).

I rarely start with a concrete goal: to clarify, I make a general overview of
what I want, but not the path I should take to achieve it. When I am coding I
am often exploring; I want to try new things, and then after a while I settle
on code that I am pleased with. But before that happens I can go through
several iterations of changes. Writing tests before writing code is one thing,
but adjusting the tests afterwards to accommodate the changes (which may be
big) is a hurdle and slows down progress. In addition, in a case such as mine,
where I am doing all the work alone and thus know every corner of the code
written so far, testing gives little benefit.

I can imagine of course that TDD is a great tool when there's an assignment
for a client with specific tasks to accomplish, but in other cases 'get
something working first' is better I guess.

------
MartinCron
Here's my slider, which is working really well at my startup.

1\. _Speed trumps reliability_ Also, manual testing of stable features is
wasteful.

For things at the business logic layer, I have a suite of (many) unit tests
that verifies that all of the domain objects do the right thing. They are
small, fast, don't break when I refactor the code (automated refactoring tools
FTW) and easy to work with.

I have another set of integration tests which talk to some external services
(twitter, fb, etc.) these are slow and aren't as core to the business logic.

I have yet another set of tests that test the real database (I use simple test
doubles in the unit test layer).

Whenever we update code in our source control system, a TeamCity server
builds, runs the unit tests, integration tests, data tests, and then does a
zero-downtime deployment to our production server. Immediately after that, we
run a handful of tests against the server to make sure that the server isn't
totally broken.

This sounds like a lot of work, and it is, but it makes us able to deliver _so
much more quickly_ than any alternatives. Continuous deployment means that no
time is wasted on manually pushing code to servers. Also, it means that the
changes made are the smallest and safest changes possible.

And, most importantly, it means that you don't have to spend a lot of time
manually testing existing stable features just to get a baseline of
reliability.

Do I strive for 100% code coverage of all classes? No. There's a continuous
cost-benefit analysis going on. If something is tricky or would create brittle
tests, I don't have automated tests for it. If something is really core (e.g.
proper enforcement of game rules) I sure as hell am going to write tests for
it. Right now, I'm at 72% test coverage.

Trust your common sense here. It's not all-or-nothing. You can get meaningful
speed-and-stability-improving-value out of having some tests without having to
test every single line of code.

------
samratjp
It just depends on what you want to test for. I find balance in testing for
the most important security features such as authentication and stuff that'll
most likely change very little.

You can speed up RSpec considerably by offloading it to Spork, a test server
of sorts that preloads your environment.
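
A hedged sketch of what that setup typically looks like (file paths assume a
standard Rails app; this is illustrative, not a drop-in config):

```ruby
# spec/spec_helper.rb
require 'spork'

Spork.prefork do
  # Runs once, when the Spork server boots: put the slow Rails load here.
  ENV['RAILS_ENV'] ||= 'test'
  require File.expand_path('../../config/environment', __FILE__)
  require 'rspec/rails'
end

Spork.each_run do
  # Runs before every test invocation: keep this block cheap.
end
```

Start the server with `spork` in one terminal, then run `rspec --drb spec` so
RSpec talks to the preloaded environment over DRb.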

------
iuguy
I can't speak for speed and reliability but I can speak for budget vs security
and in all honesty, unless it's going to kill people otherwise, screw security
in your first iteration and get it out.

    
    
        If you don't ship you don't have a startup.
    

If TDD/BDD is getting in the way of shipping, then ditch it. Like security,
you can always absorb the debt and introduce it later. To put it another way,
if you spend all this time doing it right, ship (eventually) and it never
gains traction then what have you gained? On the other hand if you ship a
buggy (and presumably fairly insecure) product but it does gain traction then
you should pay down the debt because it's working.

~~~
tzs
You only get to make one first impression. If my first impression is that you
got the fundamental requirements down, and now just need to add features and
deal with growth, then I'll stick around.

If my first impression is that you ditched fundamental requirements, such as
security, then I don't care how many features you have at launch--I can't
trust your site. If your idea is good, I'll wait for someone to rip it off and
go with them.

~~~
iuguy
You are of course correct. However, there's a difference between enough
security to launch (i.e. what comes with your framework, like XSS and SQL
injection protection, plus basic common sense) and spending lots of time on
extra security work (like HTTPS everywhere, making sure cookies are properly
scoped, etc.) that could be spent getting your startup out the door.

There has to be a balance, which is something quite a few fellow security
nerds miss. The value security brings is in protecting data. If you have no
data, then there's not much value in security. Likewise, if you have sensitive
data then it's worth going the distance to secure it.

------
momoro
Skip unit tests. Only do acceptance/functional tests. Without any tests, your
stuff will break constantly. With unit tests, you will waste years of time.
Acceptance tests (e.g. Cucumber) fill the gap.

------
balakk
I have a question for the functional gurus out there:

Do you use unit testing in a functional programming context?

The reason I ask is that programming with a REPL fundamentally changes how you
typically write programs--the bottom-up mentality. You don't even write a test
first; you test first! The tested program is then assembled into a unit. I
feel there's much less incentive to write tests if you use a statically typed
language and use a REPL as it is meant to be used. Am I wrong in my thinking?

~~~
akkartik
I've been doing some programming in lisp/arc, which isn't exactly functional
but I try to stay as side-effect-free as possible. And I've been having a lot
of fun doing TDD. See, for example, <http://github.com/akkartik/wart> which
has about as many LoC in tests as in code.

------
pwim
It sounds like the issue is with how you are writing your tests. For instance,
you state "refactoring breaks a lot of test cases". As the normal TDD cycle is
test-code-refactor, refactoring shouldn't break your tests. Without seeing
your actual code, it is hard to give you advice, but it sounds like your test
cases are too coupled with the internal workings of your code, rather than
testing the interface.

~~~
bdclimber14
While refactoring, it's always the unit tests that break. Maybe instead of
having a (Rails) scope on a child class, I move that to a method on the parent
class. The integration tests don't know about the models, and the exact same
results are returned. But all my unit tests go haywire because they were
testing the models.
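
One way to make such unit tests survive that move is to pin them to observable
behavior rather than to where the query is defined. A self-contained
plain-Ruby sketch (no Rails; `Repository` and `Post` are invented names):

```ruby
# The "refactor" described above: the published filter has been moved
# from the child class up to the parent.
class Repository
  def self.records
    @records ||= []
  end

  # Lives on the parent now; before the refactor it was defined on Post.
  def self.published
    records.select { |r| r[:published] }
  end
end

class Post < Repository
end

Post.records << { title: "live",  published: true }
Post.records << { title: "draft", published: false }

# A behavior-level check survives the move: it asks only what
# Post.published returns, not where the method is defined.
Post.published.map { |r| r[:title] }  # => ["live"]
```

With ActiveRecord the same idea means asserting on the records a scope
returns, rather than on the scope's location or implementation.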

~~~
TillE
If your refactoring causes test code to break, then it also causes "real" code
to break, yes? All code using that interface needs to be fixed appropriately,
which can typically be done with automated tools.

If there's no corresponding production code using the same interface, you have
a problem: you're probably testing at the wrong level.

------
grumpycanuck
In the presentation shown at
[http://www.startuplessonslearned.com/2008/09/customer-
develo...](http://www.startuplessonslearned.com/2008/09/customer-development-
engineering.html) (specifically the last slide, where he talks about the Five
Whys system) it seems to me that the guru of Lean Startups is an advocate of
automated test suites, and therefore a believer in TDD.

~~~
kgo
Automated test suites don't automatically imply TDD. Many people use automated
test suites and CI without TDD.

------
listrophy
To put it in math terms, TDD/BDD gives you

    
    
      output = C1*(e^t-1)
    

while non-TDD/BDD gives you output

    
    
      output = C2*log(t+1)
    

That is, you get better speed at the beginning without TDD/BDD, at the cost of
slower output as the codebase grows. With TDD, you generally start slower but
increase output velocity over time. (And let's not push the semantics here...
the equations hold for a while, then flatten out.)

So, where's the intersection? I claim it's usually at about the minimum viable
product or before.
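
Plugging in illustrative constants (C1 = 1 and C2 = 2 are made up; only the
two equations above come from this comment), the crossover can be located
numerically:

```ruby
# The two output curves from the comment, with invented constants.
C1 = 1.0  # TDD: slow start, accelerating
C2 = 2.0  # non-TDD: fast start, flattening

tdd     = ->(t) { C1 * (Math.exp(t) - 1) }
non_tdd = ->(t) { C2 * Math.log(t + 1) }

# Bisect for the point t > 0 where TDD output overtakes non-TDD output.
lo, hi = 0.1, 5.0
30.times do
  mid = (lo + hi) / 2.0
  if tdd.(mid) < non_tdd.(mid)
    lo = mid
  else
    hi = mid
  end
end

lo  # crossover near t ≈ 0.75 for these constants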

And of course, this all exists on a continuum. So don't TDD things you don't
understand. Instead, spike on the new technology outside your app, then bring
it in with "gentle" TDD/BDD.

If you're sold on TDD/BDD like I am, the key is to work to increase C1. That
is, get better at these disciplines. You should be able to write tests quickly
and have them run quickly.

And frankly, during a pivot, I'd rather have obsolete tests (pointing me to
obsolete code) than obsolete—and hidden—code. Obsolete tests scream "Fail";
obsolete code does not.

------
stcredzero
_Refactoring breaks a lot of test cases (mostly with renaming variables)._

Please tell me about this. My experience is that refactoring tools can often
apply the same refactoring to the tests as to the code. For OO projects, the
only variables visible to a test should be temporary variables to hold test
objects. Renaming those shouldn't be a hard refactoring.

------
reid
If you have slow tests that are hard to write, you're already in trouble.

For example: my recent Node.js projects use Vows. More complicated test
details are encouraged to become small functions that are reused over and
over. (Vows calls these macros.) For testing HTTP servers, I wrote a set of
macros, called Pact, that make my tests very concise.

For other big important pieces, I isolate myself from upstream changes by
creating an interface into the dependency, then testing the interface instead.

Instead of changing lots of tests, you change a macro.
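
The same trick translates to the Ruby side of this thread. A framework-free
sketch of a test "macro" (the helper name and toy app are invented):

```ruby
# One helper owns the knowledge of how to exercise the HTTP layer.
# If the interface changes (new signature, sync to async), only this
# macro is edited -- not every test that uses it.
def assert_status(app, path, expected)
  status = app.call(path)
  raise "#{path}: got #{status}, want #{expected}" unless status == expected
  true
end

# Stand-in for a real HTTP app: maps a path to a status code.
toy_app = ->(path) { path == "/" ? 200 : 404 }

assert_status(toy_app, "/", 200)        # => true
assert_status(toy_app, "/missing", 404) # => true
```

In RSpec the equivalent mechanism is shared examples; in Vows it's the macros
described above.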

The result: very fast feedback from tests that are easy to add and change,
especially when refactoring or when your plans evolve. (Using a new
dependency, going sync to async, etc.)

I'd love to see these benefits in more places.

<http://vowsjs.org>

<https://github.com/reid/pact#readme>

------
KentBeck
In a startup, you need to engineer to minimize the latency of validating
features. Sometimes tests help with that (like when you're dealing with a
complicated algorithm), sometimes they don't. I wrote a poker engine recently
and I had many tests for the engine itself, a few tests for the tricky parts
of the UI, and no tests at all for the system as a whole.

The challenge is that when you get into scaling, you need to begin engineering
for throughput, which requires a completely different engineering style
focused on higher throughput and reduced variance. This style is well
supported by "test absolutely everything" TDD.

Oh, and then when you want to do a tangential experiment, you want to go back
to latency-oriented engineering, but without destabilizing existing code.

In short, I'd say yes, it's plausible that overuse of TDD could be hampering a
startup's agility.

~~~
jenrawson
So when you say "minimize the latency of validating features" do you mean that
you want to reduce the time between building something and getting feedback
from your users? Can you elaborate on this point? What is latency-oriented
engineering?

~~~
KentBeck
I think you have it right. Latency-oriented engineering is a style of
development that minimizes the time through the entire loop from idea to
learning to feedback from real users to learning based on that feedback to the
next idea. What you do to achieve this is very different when you have a bare
idea and no customers or when you have a million daily users. The goal is the
same--minimize the loop.

~~~
jenrawson
Hi Kent,

I wanted to reply to your comment...

How do I apply the theory of latency-oriented engineering to the following
real-world problem? I have an idea for a book that might be called "Simple
Code". It presents a decision language to help software engineers make good
design decisions. My idea is in the very beginning stages (4 days in), so it
is vague; but it would incorporate XP theories and practices, especially TDD.
The set of rules in the decision language might be similar to the rules in a
game.

I'm not sure if this "idea" of mine is a book at all. It might be a web-based
tool for searching "reliable" sources, or it might simply be a new language.
The project is going to be my first attempt at merging my art with my software
engineering skills. Whatever form it takes, it will be inspired by art,
nature, and minimalism. If it's a book, it will be small enough to read in
bed.

How would I use latency-oriented engineering to minimize the loop from idea to
learning to feedback from real users? How much of this project do I have to
imagine/articulate/build in order to get feedback from real users? How do I
get that feedback? Also, how do I justify the expense, i.e. the time needed to
complete one loop, to my investors (my husband)?

Thanks.

------
frobozz
I disagree with your point that "most of the code are hypotheses to be tested
by the market". Most "pieces of software" might be, but I contend that most of
the code in most software is under-the-hood stuff that objectively either
works or doesn't. If it works, it has nothing to do with the market's opinion.
On the other hand, if it doesn't work, then regardless of how good a fit the
idea of your software might be, it will colour the market's opinion against
you.

Using the market to test a hypothesis, and using them to test whether your
code works are two different things. The former is a great idea, the latter,
not so great. Mixing the two is a bad idea.

------
vlucas
Like anything else, there's a balance. Don't let yourself go all-out to ensure
you have 100% test coverage of every line of code.

There is always a cost-benefit argument that should be playing in your head
over and over. If it's a critical part of your application, then make sure it
has test coverage. If it's a simple part that's basically just doing CRUD
operations with almost no custom code, it's probably not worth worrying about
in the short term. Just make sure you cover the primary flex points and custom
algorithms or libraries, and you'll be fine.

------
edderly
"Refactoring breaks a lot of test cases (mostly with renaming variables)."

This seems like the easiest thing for a refactoring tool to handle. Why
wouldn't you refactor your tests at the same time as your code?

------
gnubardt
If RSpec is too slow to wait on, you could use autotest, which automatically
runs tests when a file is saved. There's also a plugin for it that uses
growl/libnotify to display the results.

~~~
petercooper
A common problem at the moment is RSpec being slow with Rails 3 and Ruby
1.9.2. If you do things the "normal" way, Rails ends up having to load twice,
and you're guaranteed to spend at least 10 seconds looking at nothing.

The solution is to use Spork to have a Rails test environment running
permanently (it also does caretaking between sessions) and then RSpec will
access it over DRb. Couple this with a good .watchr file to run only the
relevant specs when you update them and you get very quick RSpec tests. It's a
bit of a pain to set everything up but it's worked great for me.
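
A minimal .watchr file of the kind described (a sketch; the paths assume a
standard Rails layout, and `--drb` assumes a Spork server is already running):

```ruby
# .watchr -- re-run only the spec that corresponds to the saved file.
def run_spec(path)
  system("rspec --drb #{path}") if File.exist?(path)
end

# A spec file changed: run it directly.
watch('^spec/.*_spec\.rb') { |m| run_spec(m[0]) }

# An app file changed: run its matching spec.
watch('^app/(.*)\.rb') { |m| run_spec("spec/#{m[1]}_spec.rb") }
```

Run it with `watchr .watchr` alongside the Spork terminal.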

~~~
petercooper
I was inspired to write up how to do this:

[http://www.rubyinside.com/how-to-rails-3-and-
rspec-2-4336.ht...](http://www.rubyinside.com/how-to-rails-3-and-
rspec-2-4336.html)

------
silent1mezzo
I only start writing tests once my startup's profitable and stable (if that
ever happens). TDD is just not as important as getting a product out the door.

~~~
petercooper
_TDD is just not as important as getting a product out the door._

There's not always a "do TDD, develop slowly" vs "do no TDD, develop quickly"
dichotomy.

If you're doing it right, TDD should speed you up. It adds some guaranteed
extra time to your plans _up front_ but significantly reduces all of the
hidden costs of wasting hours debugging bugs that unit tests would have picked
up.

So I might know that developing a particular app will take 10 hours without
tests and 15 hours with tests; but if I'd waste more than 5 hours fixing a
myriad of bugs, the TDD approach still wins. (The constant reward cycle is an
important psychological factor too, even if the times are equivalent.)

------
IvanAcostaRubio
People knowledgeable in TDD code faster and better using the technique.
Practice takes you there.

In the meantime, test the parts of the application that do not change. How
much can the payment flow, sign-in, or sign-up change? Make sure you have
tests for those in order to catch regressions.

Know where you are. Are you building to last? Are you building to test an
idea? Balance.

Build better abstractions.

Practice. Practice. Practice.

------
bradleyjoyce
In my experience, yes, TDD/BDD slows me down (partially because I'm not 100%
on it yet), BUT it's sooooo much more painful to add it back to legacy code
than to start with it fresh in a new project. I have two relatively large
existing projects where I would kill to have solid test coverage, but
implementing it feels so daunting that I haven't taken it on yet.

------
sandeepshetty
Here is Kent Beck's take on the four phases of startups where he talks about
how these different phases require different development practices,
principles, and technologies:
<http://www.threeriversinstitute.org/blog/?p=252>

------
avih
TDD can be useful when complemented by an isolation framework, such as
Typemock Isolator. It'll save precious time and, most importantly for a lean
startup, scarce financial resources.

~~~
avih
For more information about Isolator, check out <http://www.typemock.com>. TDD
can save startups resources: performing unit tests will save them money and
lead to a better product, and thus a better reputation.

------
mkramlich
I do MDD. Market-Driven Development. It's the latest craze! But secretly I
think the cool kids have been doing it for hundreds of years, we just forget
about it from time to time.

~~~
bdclimber14
Nice. I'm going to throw this term out next time I'm with my agile junkies.

------
mkramlich
TDD/BDD isn't hampering my startups' agility because I'm not letting it. We
don't do them. I only write real code I actually need to do something real.
This is pretty useful when you're pre-revenue and your feature set or
implementation choices may need to change drastically and/or be abandoned
entirely. The less ballast, the better.

