
Ask HN: Help me see why everyone seems to love TDD? - thuddle88
I've been a developer working for large corporations for the past 4 years, corporations more concerned with "fast" delivery than "correct" delivery. As a result, the concept of unit testing is somewhat foreign in the environment (I think we have somewhere around 40% test coverage, our unit test suite takes ~40 minutes to run on a dev machine, and new unit tests rarely get written). Unit tests aren't "valuable" to the customer, so they are pushed out in favor of "value" code (features) and a reliance on manual QA testers to catch defects (which doesn't happen).

Imagine you were hired as a consultant to come in here and fix our process. How would you sell TDD to the bosses/managers who only see it as extra work for the same value? I'll admit I'm a little fuzzy on the benefits as well. I'm slightly ashamed to admit that even on my side projects I'm like "I'm totally gonna do TDD you guys!" but that rapidly falls by the wayside.
======
HenryTheHorse
#1 A change in your current process must be "sold" in the business context of
$$savings (faster dev cycle, lower number of bugs, lower cost of support) or
increase in developer productivity/utilization or increase in customer
satisfaction.

#2 Inadequate unit testing almost always correlates to greater defects in the
support/maintenance cycle. So I would recommend gathering some data around it:
e.g., number of open support tickets, hours spent fixing such defects, time
spent on regression testing, etc. This must translate to either $$ or hours "wasted"
or additional head-count (required to deal with tickets).

#3 Identify a small but meaningful project to pilot out the TDD process. You
will eventually have to develop your flavor of TDD within the constraints of
your organization (people, tools, process...)

#4 Run the process for a few months and then compile the "before and after"
data. Be prepared for disappointment. Process improvements are not always
quick or visible at the surface.

#5 Management avoids things they don't fully understand. Most managers hear
"better unit testing" and hear "more time and effort". Instead, they should
hear "lower downstream issues, lower support costs, better builds, happier
users".

So, can you help them understand the issue better? Just make sure YOU
understand why unit testing matters to your company.

~~~
mattferderer
I agree, this answer is pretty solid. I honestly think you could just start
with #5. You can use the term "Technical Debt" when describing the debt that
is created by not using unit tests. As in you are creating more work & cost
down the road.

Here is 1 of many good Uncle Bob articles on TDD -
[http://blog.cleancoder.com/uncle-bob/2016/03/19/GivingUpOnTDD.html](http://blog.cleancoder.com/uncle-bob/2016/03/19/GivingUpOnTDD.html)

This quote from the article is slightly off topic, but I think it is a great
one you could tweak & use with Management:

"Look. Suppose you ask me to write an app to control your grandmother's
pacemaker. I agree, and a week later I hand you a thumb-drive and tell you to
load it into her controller. Before you do you ask me: 'Did you test it?' And
my response is: 'No, I chose a design that was hard to test.'"

------
narrator
The deal with TDD is you write twice as much code and you have less than half
the bugs. As the project grows, it gets harder and harder to maintain and
change unless you have good tests. Most projects without good tests that I've
seen grow to a certain size and then become very difficult to make progress on
because people are too scared of breaking things and the releases inevitably
take a very long time due to all the QA.

One great thing about TDD is that I can pick up a project I haven't touched in
a very long time, or am new to, and work on it, and as long as all the tests
pass, I know I didn't break anything. Without tests, the intimate knowledge of
the requirements of a system tends to get lost, and the project quickly
becomes unmaintainable without a guru and active development.

I'd also say that having some tests from the start is much better than not
having tests. The main reason for this is that to have tests, the code has to
be developed with testing in mind. The second thing is that if you have a
build system and a framework for writing tests, you can easily add new tests
for tricky bugs or corner cases.

If you really want to go fast and loose, you should at least have a happy-path
integration test, a test for all the DAO methods, and at least a happy-path
test for the controllers. Any parts of the code that are tricky or
algorithmically clever should also have tests. As your QA team runs into
regressions, write tests for those regressions. This is a way I've found to
amortize the test writing somewhat. It's important though that the test
running is integrated into the build at these levels and is run before every
release.
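
The "write tests for the regressions QA runs into" step above can be sketched like this (a minimal Python sketch; the bug, function, and names are all hypothetical):

```python
# Hypothetical: QA reported that discounts over 100% produced negative totals.
def apply_discount(price, percent):
    # The fix: clamp the discount so totals can never go below zero.
    percent = min(percent, 100)
    return price * (100 - percent) / 100

def test_discount_regression():
    # Pins the fixed behaviour so this specific bug can't silently return.
    assert apply_discount(50, 120) == 0
    assert apply_discount(50, 10) == 45

test_discount_regression()
```

Each QA-found defect gets one small test like this, so the test suite grows exactly where the code has actually been shown to break.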

~~~
k__
Problem is, tests don't scale well: if your project grows, your test suite
grows too. The fear of breaking something becomes the fear of having to
rewrite not only the code but also an enormous amount of tests.

Also, having "more" things break other things in a big project is more of a
modularization/encapsulation kind of issue, that doesn't get better or worse
with tests.

At the moment I'm looking into static typing as an alternative. I suspect it
to be a "tests-in-code" kinda thing, which could eliminate many unit tests.
Also, with modern type inference and structural typing (I'm trying
TypeScript), this could scale better than "write twice as much code".

~~~
narrator
It seems like you're talking about the problem of having overly fragile tests.
Getting a test to break when something is actually broken is something that
comes with experience and studying good tests.

For example, a novice test writer would serialize a Java object and compare it
against a previously serialized object byte for byte. This would break as soon
as there was a trivial change to the object. A better procedure would be to
run pre-test and post-test assertions on the properties of the object that
were actually important.
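
As a sketch of that difference (in Python with JSON rather than Java serialization; the record and its fields are made up): the brittle test pins the entire serialized form, while the better one asserts only on the properties that matter.

```python
import json

def save_user(user):
    # Serialize a user record; field order and formatting are incidental details.
    return json.dumps(user, sort_keys=True)

user = {"name": "Ada", "email": "ada@example.com", "last_login": "2017-01-01"}

# Brittle: pins every byte of the output, so any trivial change breaks the test.
GOLDEN = '{"email": "ada@example.com", "last_login": "2017-01-01", "name": "Ada"}'
assert save_user(user) == GOLDEN

# Better: round-trip the data and assert only on the properties that matter.
restored = json.loads(save_user(user))
assert restored["name"] == "Ada"
assert restored["email"] == "ada@example.com"
```

The second form survives adding a new field or reordering the output; it only fails when a property the business actually cares about changes.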

If you have good tests and change something and a bunch of the tests break,
that simply means you broke a lot of stuff. At least you'll know what's broken
instead of having it thrown back and forth over the wall between Dev and QA a
hundred times to get it out the door.

------
mangeletti
TBH, I don't know anyone who does TDD. I only know a few handfuls of
developers, but so far the only people I've briefly met that do TDD were ones
selling it; the consulting company that came into the company I worked for.

This consulting company tried to "teach" us TDD at the same time as, I think,
they were learning it. Not a single developer out of the ~40 on our teams
bought into it, because in almost all cases the code for a given example
(during these 90-minute seminars) ended up more confusing and obfuscated, and
the unit tests didn't truly match the business case. TDD is a bit like
waterfall, in the sense that you're assuming you know what the code needs to
do before you write it (how many times, especially on a large project, does
your code change before you finish and are ready to write tests?).

After the 3 months of consulting, they left us and we continued on writing
code and unit tests (in that order) just as we had before.

I think that TDD changes your code for the worse, except in very small,
controlled cases. In, e.g., a large web app, I think it just creates a mess.

If you think of all unit tests as regression tests, you'll have a good time.

~~~
k__
I did a new "production" project where I did full TDD.

Ended up with about 200 tests.

As things go, specs changed and part of this application had to be rewritten.
Most of the tests started to fail, not because the code was wrong, but because
they tested for a different spec. So I needed to rewrite all those unit tests
again.

Sure, I caught "some" refactoring bugs, but most failed tests weren't bugs,
just tests that had become wrong. This made me question the whole TDD approach
in production.

------
mbrock
TDD came out of agile programming which is also more concerned with fast
delivery than NASA-style correctness. It's a tool supposed to help you go
faster.

If you want an agile unit test suite, it has to run in seconds, not minutes.
TDD will make this very clear. If you have problems with the test suite being
slow, you have to fix it!

A test suite that developers actually like to run is an enormous value, maybe
especially for new team members. It gives you confidence when making changes
and adding features.

A basic purpose of TDD is to make it obvious when code is hard to test, so
that you're always encouraged to write units that can be unit tested without
difficulty. That's a skill that takes some practice and learning, and you need
to pay real attention to it. When your team understands how to do it, that's
when the test suites start to make real sense, and it becomes pretty much a
pleasure to write the tests.

I don't know about being a consultant but I'd like to say that comprehensive
testing is a necessary standard part of professional programming just like
serious manufacturers verify the quality of components. It can also reduce the
stress and anxiety of developers which leads to a better work environment and
better quality.

As a coder in a team with half legacy and half well tested code, I know it was
a delight to work on the modules developed according to good TDD practice.

~~~
gbog
Test suite being slow: you shouldn't have to run all the tests all the time,
just the ones for the code where you are working. Run the full suite once and
take note of the failed tests, then constantly run just those and a few more
while coding. Then run the full suite again before going to lunch.

~~~
mbrock
That gets annoying when you're working on something that touches several
parts, and if your tests are slow altogether they'll probably be slow enough
to be tedious even when running just 10% of them.

------
brianwawok
Have you actually lived in a TDD world? Actually sat down and did the entire
thing? I don't think it is worth it.

Tests are good. Fire manual QA team and hire people to write automated
integration tests.

But actually designing your code around tests? It tends to lead to brittle
systems that do a lot of dumb things. Write the code that works best, then add
some tests around it where required. 40% coverage could be enough for some
apps.

~~~
dingaling
> Fire manual QA team and hire people to write automated integration tests.

From experience, this leads to coders coding-to-the-tests. It compiles, it
passes, ship it. Does it do the right thing under stress? Who knows.

Manual QA staff, _proper_ QA staff, do crazy non-linear things that stress
code like customers do. I'll bet they would have caught that bug in the other
front-page story about the animal feeders failing without an Internet
connection; I had a tester who pulled out the cat5 on many occasions.

I think I've said it before: automated tests are the entry test to the QA
process, not the exit diploma.

~~~
brianwawok
So you talk a lot about stress testing. There is zero reason developers can't
code stress tests and add them as a step in the release cycle.

You know why this never happens? Performance is never a requirement, almost
always an afterthought. How many specs list required throughput or 99.99
percentile latency?

No one cares about perf until it's bad. So if we fix that and put it in the
spec... we can automate perf testing and perf monitoring. Sure, have a perf
team code this. But I don't need a QA guy to run a perf test for me.

~~~
stestagg
It's not that, so much as manual testers will /notice/ that code is slow, and
ask questions about it, whereas automated tests will run the script no matter
what (unless someone thought about it in advance).

Likewise, you /can/ code an automated test to simulate sudden network loss,
but a human will just randomly decide to try it, without anyone
pre-programming them.

------
jamesroseman
In my (somewhat limited) experience, there's no such thing as writing code
without testing it. The question is: are you testing this manually every time
or are you automating it? I think that's a valuable way to phrase it to a
non-technical manager: phrase it in terms of a few limited situations:

A. I never test my code. When we go to ship the code, it breaks, and I have no
idea where it's broken. So now I'm forced to take pieces out to see which
piece is broken, then take functionality out of that piece until I find the
piece that's broken. This is a process that actually takes longer than writing
code, and it prevents me from working on new projects. If this is given to
someone else, it will take them twice as long because they don't have the
context of how the pieces work, and they'll wind up asking me about it
anyways.

B. I never write tests for my code. After every piece I write, I test the
project in entirety to see if that piece works and does what I want it to do.
After the project is done, suppose we want to change that piece. Because
there's nothing written that will test these pieces alone, someone now has to
do work I've already done, which will keep them from working on something
else.

C. I write tests before or as I write my code. I know when the pieces I write
are done when they pass those tests. It takes around as long to write tests as
to write code, so this process is faster. When we want to change a piece later
on, we just change the tests to express the new desirable functionality, then
rewrite the piece to pass those tests.

When I'm troubleshooting my code, I'm not adding value to the product or
company. Therefore, the shortest amount of time spent troubleshooting is in
turn the most profitable. C is clearly the shortest amount of time, because by
writing tests I ideally spend no time troubleshooting. In reality, things will
slip through the cracks, but those cracks are a lot smaller.

------
pmontra
Strictly speaking I don't do TDD because I don't write tests first. I write
code first, then I write the tests. If I went tests-first, sometimes I would
have to design the algorithm anyway, because I wouldn't know what to test.

However, I tend to write tests very early, as soon as some code runs. Why?
Because without tests I would feel like I was driving a car at night without
lights and with no brakes. Sooner or later I'll crash into something and
discover that I'm way off route.

The reason is the same if you do proper TDD.

I also use test coverage tools, to know what I'm not testing. Sometimes it's
not important, sometimes it is. Given constraints on time and budget, it's
important to have data to back the trade off you must make.

(Question to native English speakers: you do a trade off or you make one?)

About those 40 minutes: it's way too much. Even 4 minutes could be too much.
Is your application that big or are you testing its parts many times with
different tests? Maybe you could slim down the test suite. Unit tests for the
single components and an integration test for all the application. In the case
of a web application you would use one of the many tools that drive a real
browser through all the pages of the app, one test per use case. That could be
slow but the unit tests would be fast. You can run the unit tests on your
computer and the integration test on a continuous integration server.

Edit: -do +make trade off

~~~
michaelmior
This is generally how I approach writing unit tests as well. Perhaps I'm doing
TDD wrong, but I find myself running into the same problem as you that I often
need to do too much design up front to write a test. The alternative is to
write the test using the simplest possible interface and then go back and
continually change the test as I write the code. I've tried this, but I
haven't really found any benefit to it.

My approach is generally to write a passing test for a simple use case after I
know that case is working. Then I'll write some tests for some more complex
cases and fix any of those which fail.

~~~
IanCal
The risk with this is that your test is possibly not checking the thing you
want. I have often found a test that passed even when the code wasn't working.
Writing the test beforehand, when you know it should fail, helps get around
that. You expect it to fail, then write code, then expect it to pass.

I like to at least test my test by commenting out the bit I just added, or
somehow manually breaking the code somewhere to check it really does fail when
the code doesn't work.
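
A minimal sketch of that discipline in Python (function and test names are made up): expect the test to fail first, then implement, then expect the same test to pass.

```python
# Step 1: write the test first, against a deliberately stubbed-out function,
# and confirm that it fails.
def word_count(text):
    raise NotImplementedError  # stub: the test below must fail at this point

def test_word_count():
    assert word_count("one two three") == 3

try:
    test_word_count()
    raise AssertionError("test passed before any code existed - it proves nothing")
except NotImplementedError:
    pass  # red: the expected failure

# Step 2: now implement, and expect the very same test to go green.
def word_count(text):
    return len(text.split())

test_word_count()  # green
```

Seeing the test go red first is what proves it is actually exercising the code; a test that passes against a stub is checking nothing.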

------
amoerie
Well, I'm not entirely sold on TDD just yet, but after applying it for a few
years now, the benefits seem to be:

- Requires you to think about structure before you write the actual code.
Code that is unit testable is usually better code. (but not necessarily)

- Requires you to think about the scenarios that you will support beforehand.

- Ensures that every file is unit tested. Without TDD you can have untested
files. Why unit tests are great is another topic entirely.

- Ensures that tests cannot return false positives. TDD requires you to write
and run your tests before any real code is written, so all tests should be red
in the beginning. If you write your tests afterwards, your test could be green
when in fact the code is incorrect.

What I don't like about TDD is that it pushes your code towards very easy to
test code, which is not always better code. I'm more inclined nowadays to just
start with the actual code, refactor iteratively until I'm satisfied, and then
write the tests. This reduces the total amount of work: otherwise you also
need to refactor your tests every iteration, whereas with my method they only
get written at the end. The downsides are that unit tests can be forgotten, or
that you can write false positive tests. In my experience though, this rarely
happens.

The most important part - imho - is to have meaningful unit tests.

Truth be told, I value solid integration tests more than unit tests, because
they actually test the input/output of your system, which at the end of the
day is the most important thing. Unfortunately integration tests tend to be
more fragile and require more maintenance. On the other hand, they are more
resistant against refactoring, while unit tests essentially have to be
rewritten when any kind of refactoring happens.

Nowadays, I write both unit tests and integration tests, which is _expensive_
to write and maintain, but in return they provide you with peace of mind and
confidence, which is priceless imho.

~~~
collyw
Integration tests catch more bugs.

In fact unit tests are fairly low on the scale, according to this:
[https://kev.inburke.com/kevin/the-best-ways-to-find-bugs-in-your-code/](https://kev.inburke.com/kevin/the-best-ways-to-find-bugs-in-your-code/)

~~~
flukus
Bugs caught is only one axis though; the other is effort. Unit testing is much
lower effort, particularly for edge cases (what happens if dependency x throws
an exception?), which may be hard/impossible to cover with integration tests.

------
brudgers
Robert Martin runs a consultancy. Selling TDD was a differentiator in the
market. This doesn't mean that TDD doesn't stem from his
ethical/moral/philosophical values. It doesn't mean that using TDD doesn't
make him a happy programmer.

What it does mean is that there is a business case for TDD in Uncle Bob's
business. Making a convincing business case is how one convinces the
bosses/managers.

The metrics depicting the process as in need of fixing are not business
metrics. Customers/users don't care how long the tests take to run or how much
coverage the unit tests provide. In the context of a business Test Driven
Design, Agile methods, Quality Assurance are not ends in themselves.

Redesigning business process is expensive and hard work and prone to failure.
Saying "We should start using TDD/CI/Agile" isn't a design. It's too easy and
no where near enough work.

Good luck.

------
Singletoned
The problem is going to be that you have (almost definitely) built up quite a
bit of technical debt. If you start testing now, you will start to expose the
technical debt and take a productivity hit. Selling your boss on "TDD
increases developer speed" is going to backfire.

I would try to sell them on more automated QA testing. That's a much easier
sell, and the benefits should become obvious sooner (particularly when they
can start firing QA testers and use that money to hire more automators).

Then you can start TDDing the QA testing. Map out the automated QA test for
the feature in advance. Then developers can develop to the QA test, and will
know when they have finished.

It's a long process, but turning around the short-termist mentality takes a
long time.

------
venusiant
1. TDD can make delivery timelines more predictable, as there is less chance
of regressions being introduced as new features are added. This predictability
can allow a team to scale up to larger and more lucrative projects.

2. TDD enhances the capacity for necessary refactoring to take place which
reduces the complexity of the codebase. By reducing complexity we can increase
development productivity.

3. TDD itself can speed up development. A developer will necessarily have to
test the feature he is building. Manually testing this feature will be time
consuming and repetitive. An automated test allows the developer to quickly
test the specific element that is being built in a repeatable and verifiable
manner.

4. You are saying that the QA team is not catching all the defects. What else
is new? But if you are delivering a poor quality product to your customers,
this may have commercial consequences in the future, if not already.

5. TDD is best practice and is necessary for the professional development of
the team. If TDD is not supported it will negatively impact the ability of the
business to retain good developers.

But aside from that, if we are talking about software development inside large
corporations, we are probably talking about half-baked internal projects that
aren't going to go anywhere due to a variety of reasons. There are probably a
whole host of things to change within any large corporation, of which the
introduction of TDD is but a minor concern.

------
ebbv
We switched last year to start writing unit tests for all new code and
refactoring old code to be testable and write tests for it whenever we go back
and touch it. Here are the benefits we saw from doing this:

1) Development is not really slowed down by any meaningful amount. You might
invest more time up front, but further development is faster. The time
investment in unit tests at the beginning pays for itself very quickly. With
anything of a reasonable size, especially anything that involves multiple
developers, the unit tests will speed up development over all.

2) The obvious one; you find bugs faster. If you write thorough and good test
suites, you will find bugs before you release the code (obviously) and you
will find bugs when you go back and touch the code again. I'm not going to
claim you'll find every bug and never release a bug into the wild again,
because that's not true, we're human. But things will be vastly better.

3) Maybe most importantly, it forces you to think about your code more and
write better code. In order to make your code properly testable that will mean
that your code has to be properly divided up into testable units. That means
goodbye monolithic code, hello separation of concerns.

It's silly but if you're having a hard time convincing management, sometimes
they respond to the fact that any "serious" development team at a "real"
enterprise writes unit tests. Managers are sometimes very susceptible to
pointing out that they're doing things in an amateurish way, because they like
to think of themselves as professionals.

------
PaulKeeble
As you develop more on a software product it gets bigger: it gets more
features, and those features interact with each other. We can argue about
whether it's linear growth, sub-linear, or more, but it's going to keep
increasing.

Say every 2 weeks you get a new release which has all the features of the
previous releases plus 2 weeks more. Given a constant supply of QA testers,
how do you manually test an ever larger featureset? You don't; you skip the
regression testing.

Almost all software studios without TDD that work with short cycles do
risk-based testing: they test the new stuff and the things it might break.
They don't test the major features if they don't think they were touched. So
either you aren't refactoring, and hence your code is rotting, or you are, and
that risk-based testing is missing areas that did have changes in practice.

TDD changes that: it turns the ever increasing QA cost of regression and new
tests into development of tests, and the growth is in machine running time for
the tests. It flattens the human effort, meaning your software is better
tested over time, and all the old features are tested every release, reducing
the chances of regressions.

If you don't release fast (12 months between releases) then it's less of an
issue, because there is usually time to have a manual test team go at it and
potentially delay the release over bugs and regressions and such. But it's one
of the "G Forces": in order to develop at ever shorter cycles and be more
agile to change, you need to convert the human effort of testing into machine
time, and the only way we know of ensuring that happens is TDD.

------
spotman
Why does everyone 'seem' to 'love' it? The same reason functional programming
and postgres are darling technologies here on HN.

It's more culture than anything else. People have been testing code for years
and years, and people will keep making ways to organize teams, workflows, and
every half decade a new Bootcamp, Redmine, Asana and Jira will come out.

At the end of the day, you have to have something in place. The bigger your
team is the more you need it.

TDD is simply one school of thought for how to ship less bugs.

Some people do it, most people do not do it strictly.

Most of the teams I work with these days are no longer very strict on TDD,
because for the specific products they are shipping, it slowed down
development enough to notice.

But some people love it, and some people, I think, it may make faster.

Personally I look at it as a way to solve a problem, likely when assembling a
new team. If you don't have issues with speed and quality and coordination
then I wouldn't try to add TDD. But if you do, it's one of a handful of
potential solutions, and depending on your team and product it may or may not
be a solution.

------
ysavir
A lot of people love TDD. A lot of those people also misunderstand why TDD is
useful. TDD _is_ useful towards an end, but it is not the only means of
achieving that end. Let me explain what I mean:

As a personal rule, for every 1 minute I spend doing something, I make sure to
spend 5 minutes thinking about what I'm doing. Does my intended solution solve
the problem? Is it missing anything? Does it introduce any new issues? Does it
solve the problem, but lead the code down a bad or inflexible road?

Writing and shipping code fast can be tempting, but that speed often impairs
insights of what that code actually does, or what implications the code may
have down the road. Spending more time thinking than doing might seem less
productive, but that is only true in the context of immediate results; in the
long run, being considerate and deliberate with your code will have huge
payoffs.

The major benefit of TDD is that it forces developers to think about their
code before writing any of it, and later reap the fruits of that deliberation.
If you do that thinking anyway, without needing TDD, then the process (writing
tests first) might not be as practical for you.

As with all things, it comes down to knowing your own limits, and using tools
and processes to help soften those limits.

If you're trying to "sell" it to your manager, keep that same thing in mind.
What problems does the company experience regularly that would be solved or
minimized by TDD? If the company gets along fine without it, then TDD might
not be necessary. When pitching it, be sure to have actual, relevant facts
ready: How many bugs make it to production? How often is a feature requested,
but existing code makes it difficult to implement? Do people often find
themselves asking "I wish I had thought of that earlier"?

~~~
collyw
"The major benefit of TDD is that it forces developers to think about their
code before writing any of it"

So does 13+ years of experience for me. Farting about setting up test
frameworks doesn't usually help me think about the problem. I do sometimes do
TDD, but it depends on what I am writing.

------
ap22213
Most importantly, for a larger code-base, unit tests allow a team to make
structural and behavioral changes with much less risk of breaking existing
functionality. This is especially true when one has a lot of business and
domain rules to maintain. One small code change to core logic can have great
impact.

Also, I have a lot of customers to support. Unit tests allow me to have
confidence that I won't get 'the middle of the night call'. Largely because of
them, my small team is able to work 40 hours a week and focus on adding new
value rather than putting out fires.

Unit tests provide self-documentation. When starting to get familiar with new
code, I first look for unit tests. If written properly, they provide great
examples of primary use cases.

Finally, unit tests allow me to design better systems and interfaces. I can
take a class or library and work on it in isolation from other parts of the
system. Starting from consumer interface first, I can focus on what is really
important and to make that interface much more usable. I can add parts
incrementally without worry of breaking that core.

Like most things in software, TDD has trade offs. At first, implementation is
much slower, and it requires significant knowledge of unit test best
practices. I've seen a lot of bad unit tests: ones that don't test important
things, ones that rely on external systems or libraries. I've seen developers
get caught in 'unit test paralysis' - where they avoid tackling more
important things because they are too focused on testing edge-cases, etc.

There's an entire art to unit testing. It requires pragmatism. For small
projects, they're probably overkill. But, when your system grows to have many
dependencies, some several years old, and someone has to make a change - ahhh,
they're the best.

------
raverbashing
Taking 40 min for the unit tests to run points to a problem. However, I've
seen my fair share of "TDD geniuses/advocates" who love doing things that slow
everything down (like writing hundreds of minuscule tests where 2 or 3 things
could be tested in the same test, big setup methods, etc.)

~~~
lcarlson
Hmm, having a test do more than one thing sounds like a functional or
integration test.

~~~
raverbashing
Define "one thing"

Suppose you want to test a function that capitalizes the first letter of words
in a string: I would test

'' => ''

'a' => 'A'

'aa bb' => 'Aa Bb'

3 tests, which does not mean 3 functions (which would each incur the overhead
of setup and teardown)
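
In Python, for instance, that could look like one test function with three assertions (`capitalize_words` is a hypothetical implementation of the function described above):

```python
def capitalize_words(s):
    # Capitalize the first letter of each space-separated word.
    return ' '.join(w[:1].upper() + w[1:] for w in s.split(' '))

def test_capitalize_words():
    # Three cases, one test function: any shared setup/teardown runs once.
    assert capitalize_words('') == ''
    assert capitalize_words('a') == 'A'
    assert capitalize_words('aa bb') == 'Aa Bb'

test_capitalize_words()
```

A parameterized test runner would express the same cases as data, but the point stands: one function, three assertions, one setup/teardown.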

~~~
flukus
I would make this a parameterized test for those common use cases and a
specific test for others ("o'tool" => "O'Tool").

The overhead of setup/teardown should be practically zero. Even having them as
3 separate functions should take fractions of a millisecond.

The important thing is, it should be obvious why a test failed; you don't want
more than one or two asserts.

------
dalke
I've seen two main reasons why people love TDD.

First, testing is hard. It's a different mindset than development. TDD places
testing first, which forces people to think about testing while doing
development. From many accounts, this is an eye-opener which gets people to
understand more about the whole process.

Second, it's a negotiation tactic. Developers usually have much less
negotiation experience or influence in a project. It's all too easy to get
something "working", then buckle under pressure to ship, even if the code
isn't fully tested. Pressure is highest at the end of a project, so placing
tests first means they won't be cut back as much. Also, developers can point
to the methodology as a way to deflect the pressure from themselves.

Personally, I've tried TDD and found it to get in my way more than it's
helpful. I mostly write my tests after writing the code, in an iterative style
so the tests are usually written shortly after the development. (I have
another stage of testing when I write the documentation and cookbook - that's
where I usually find code which is implemented "as specified", but which has
sharp corners I didn't identify until I could use the code as part of a
system.)

I have no answers to your underlying goal. One observation: you wrote "Unit
tests aren't "valuable" to the customer". Have they considered value to the
company?

------
TruffleMuffin
To me, TDD is like a framework (Backbone, React, etc.). You can do it well and
get a lot of value, and you can do it poorly and it serves no purpose. I'll
provide an answer to your question, from a business perspective.

The value of TDD when done well is that you are ensuring (to a relatively
strong degree, but not perfect) that you don't regress on issues. This isn't
something that a human being can do when your system becomes sufficiently
large and your introduction of features/deliveries accelerates.

For example, if it takes 1 hour for your QA team to 'system test' your
application, and you introduce a new feature every day, that consumes a huge
volume of resource over the course of a year. You also run the risk of human
error, forgetting steps, etc. At the end of the year, your QA team now takes
4-5 hours per day to 'system test' your application. It's all they are doing,
all the time. This is pretty demoralising work.

If it takes 1 hour for one person to write tests that serve the same purpose,
those tests can run in much smaller time frames and use far fewer resources
than a manual 'system test'. That is a massive saving to the company: over the
course of a year you will save hundreds or thousands of man-hours and won't
degrade morale.

TDD should give you confidence that your application is working as you intend
it to. It's a tool, and it can be used poorly, but when used well you should
see the benefits.

------
anupshinde
The #1 benefit of TDD is identifying bugs earlier. Nobody wants to ship bugs
to a customer. This is much more beneficial when you have larger dev teams.
Add CI that runs tests and reports build/test errors, and the whole team
becomes more efficient too.

However, I have found that much of the TDD approach needs to be tweaked to the
requirements. TDD can become a huge overhead - and overheads don't get
maintained.

For example, when writing an API service, the endpoints are important and
expected to work correctly, so we need to test those specs. But the API
endpoint might call 10 different methods within the code, and TDD says each
method should be tested - this is where the overhead begins. And then people
are very opinionated about how to define a unit/method. When TDD focuses on
code-test-driven development instead of functionality-test-driven
development, the overheads add up significantly. (NOTE: I am not at all
discounting the benefits of code testing and code coverage. I'm just saying
these tend to be value-enabler overheads that many times yield very little.)
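
A hedged sketch of the difference in Python (the handler and helper names are
invented): one functionality-level test of the endpoint exercises the internal
methods without a separate test per method.

```python
# Hypothetical endpoint handler that internally calls several helpers.
def _validate(payload):
    if 'name' not in payload:
        raise ValueError('name is required')

def _normalize(payload):
    return {'name': payload['name'].strip().lower()}

def create_user(payload):
    # The endpoint behaviour actually promised to callers.
    _validate(payload)
    return {'status': 201, 'user': _normalize(payload)}

def test_create_user_returns_normalized_user():
    # Functionality-test-driven: one spec for the endpoint; the helpers
    # are covered as a side effect, with no per-method test overhead.
    response = create_user({'name': '  Alice '})
    assert response == {'status': 201, 'user': {'name': 'alice'}}
```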

The #2 benefit for developers is the ability to develop faster. However,
non-TDD devs will not immediately buy into this and will need quite some
practice before they reap the benefits. Also, I have noticed certain kinds of
projects are better off starting without TDD and introducing tests only after
the code grows to a certain size (and when refactoring makes sense). If it was
an application without unit tests, I wouldn't try to fix it. However, I would
write tests to make sure the application performs in a certain way in a
sandboxed environment - even if it meant testing the functionality via some
headless browser.

------
flukus
First of all, if your tests are taking 40 minutes then they are integration
tests, not unit tests. Not that there is anything wrong with integration
tests, but they have different use cases. Integration tests are much slower
and can't test edge cases like unit tests can, so they aren't suitable
for TDD.

Secondly, I wouldn't sell TDD as such, I'd sell unit tests. Sometimes you
write them before the code, sometimes after. There is no one size fits all.

Test coverage is a completely useless metric, so don't focus on it at all.

To sell unit testing, find the places with the hairiest logic: "if a and b, do
x, else do y; if y fails, return z". This is where unit testing shines.
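
That kind of branching is cheap to pin down with unit tests. A minimal Python
sketch (the function is a made-up stand-in for the "if a and b" logic above):

```python
def decide(a, b, y_fails=False):
    # "if a and b, do x, else do y; if y fails, return z"
    if a and b:
        return 'x'
    if y_fails:
        return 'z'
    return 'y'

def test_decide_covers_every_branch():
    # One small assert per branch: a failure points straight at the bug.
    assert decide(True, True) == 'x'
    assert decide(True, False) == 'y'
    assert decide(False, True, y_fails=True) == 'z'
```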

Next would be how to set up a test fixture. An individual test should take
minutes (or less) to write, but to do this you need to have the test fixture
set up properly. Create a happy path so that the tests can run without errors;
dummy data, mocks, etc. should be in the fixture setup, then the individual
tests can just change these values and add to them.
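
In Python's stdlib `unittest`, that pattern might look like this (the pricing
logic is invented for illustration):

```python
import unittest

def total_price(order):
    # Invented logic under test: loyal customers get a 10% discount.
    price = order['quantity'] * order['unit_price']
    if order['loyal']:
        price -= price // 10
    return price

class TestTotalPrice(unittest.TestCase):
    def setUp(self):
        # Happy-path fixture: dummy data with valid defaults (prices in cents).
        self.order = {'quantity': 2, 'unit_price': 1000, 'loyal': False}

    def test_happy_path(self):
        self.assertEqual(total_price(self.order), 2000)

    def test_loyal_customer_gets_discount(self):
        # Each test changes only the values it cares about.
        self.order['loyal'] = True
        self.assertEqual(total_price(self.order), 1800)
```

The setUp fixture is the happy path; each test states only what it changes,
which keeps individual tests quick to write.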

Finally, the tests themselves: how to break things down following the
single-assert principle, so that changes rarely break unrelated tests.

------
enord
I don't know if what I'm about to write will sell TDD to non-technical
managers, or even technical managers of a certain philosophy, but--

TDD is (among other things) a way to have developers write formal (as in
"machine readable") specifications for particular features of a computer
program, _ahead of time_. It does this without bothering the developer with
intangible (abstract, not-part-of-the-syntax-of-the-programming-language)
late-undergraduate CS concepts like "invariant", "pre-" and "postcondition",
etc.

This makes the test not only an affirmation of feature compliance, but a
_reasoning tool_ in and of itself. Since tests are specifications written in
the target application language and even run in the same runtime, tests can
serve the dual purpose of reasoning about the abstract logic of the
application as well as the implementation details of the chosen toolchain.

Now, bugs in either tests or toolchain will affect the test results, so while
you can kill two birds with one stone, you might break a window in the effort.
You now have to make tests for the tests and turtles all the way down.

------
claudiusd
TDD is a technique for accelerating delivery by focusing your coding efforts
on only what is necessary to deliver a feature (i.e. pass the tests). You get
high test coverage as a result, but having to maintain a large automated test
suite has its downsides.

It sounds like the problem has more to do with lowering defect rates, which
TDD can certainly help with, but so can pair programming, code reviews, and QA
testing.

If TDD is your goal and you know how to use it well, just use TDD to improve
your own process and throw away your tests before you deliver (or stash them
locally). Seems silly, but you don't want management breathing down your neck
for writing all of that "non-valuable code". Wait until they notice how
productive you are then show them how you do it. They'll be like, "why the
hell aren't you committing those tests?" Maybe you'll get a small slap-on-the-
wrist, but from then on you'll have leeway to spread the TDD practice.

------
mping
If the managers fail to see the advantages of testing approaches, that
normally means they are not mature enough to understand the cost of defects.

Most of our process/engineering effort goes into catching defects as early as
possible, because they are costly. Simply put, if a dev costs $$/hour and he
spends a day fixing bugs, the bugs cost at the very least 8x$$, plus the cost
of not doing more important stuff.

Mature organisations understand the cost of defects (the later you fix, the
bigger the effort) and the advantage of fixing them early. TDD is just a
technique (just like proper architecture, proper process, clear analysis, etc)
to prevent defects because they are costly.

Run through a couple of defects that you may have found lately and calculate
their cost. Explain that if you lower the rate of defects, you don't spend as
much. Finally, show that TDD can lower the rate of defects.

------
ebiester
So, I'm seeing a few different things here...

1. There is a difference between selling unit tests and selling a development
method that is guided by writing the unit test first. Unit tests are great for
providing self-documenting code and examples: a developer can look at the
tests and see how the code is expected to be used. Unit tests run quicker than
integration tests, but both are needed. (With the right unit tests, you need
only a fraction of the slow tests.)

You sell unit tests as extending the life of the product by years because the
developers in ten years will understand how the system works by its tests.

2. You sell test-driven development to developers by doing it with them, side
by side, and observing the output in the end. Test-driven development is also
called test-driven design. By writing the test, making the test pass, and then
cleaning up the code, you get a quick iteration loop that lets you make
changes as you go, knowing your tests have your back.
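
In miniature, the loop might look like this (a hypothetical `slugify` feature,
sketched in Python):

```python
# RED: write the test first; it fails because slugify doesn't exist yet.
def test_slugify_joins_words_with_dashes():
    assert slugify('Hello TDD World') == 'hello-tdd-world'

# GREEN: write just enough code to make the test pass.
def slugify(title):
    return '-'.join(title.lower().split())

# REFACTOR: with the test green, the implementation can be cleaned up
# freely - re-running the test after each change has your back.
```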

An experienced TDD developer develops at the same speed as a non-TDD
developer, because the TDD developer develops the toolkit to write tests
quickly. They get caught less in analysis paralysis because they can just make
a change, see what it looks like, and know if it works because their tests
pass or fail.

But just like when someone learns another programming paradigm, it takes time
to get there. TDD is slow for most people because they never get good at TDD.

The disadvantage in those code bases is that making a design change can
sometimes feel slower, because of the time needed to read the existing tests
and work out which breakages need to be addressed in code and which need the
test updated -- that can be drudgery. I've lived in 100% unit-tested code
bases, and it can be irritating to make a big change.

(Some TDD advocates say that your tests should never break, and you should
work in abstractions that ensure that your tests never break. This can lead to
designs I don't prefer and really weird intermediate states.)

My personal opinion? If you're in dynamic languages, you need more tests than
in a statically typed language. If you're in a functional language, you need
fewer tests still. The more your compiler will do for you, the fewer tests you
need.

------
radarsat1
I do sort of half-TDD, if you like. I guess it's not really TDD. When I'm
working on something, I always end up having to test it, so I often try to
save these short functionality tests in a structured way so that they can be
re-run. This often saves me from introducing problems in the future.

So it's not really test-driven, but one has to test, right? So why not just
keep the tests around for later verification.

It also forces you to write code that _can_ be tested unit-wise, which is
nice. I don't know if I want my development to be test-DRIVEN, but I do like
tests.

------
wmil
The biggest advantage of tests is that you can move (possibly new) devs on to
unfamiliar parts of the code base and they can be reasonably confident that
they aren't breaking things with their refactors.

------
Diederich
For certain kinds of tasks, where I can see pretty clearly going into it what
the outcome needs to look like, TDD actually speeds things up for me. It lets
me program more 'boldly' in some ways.

However, the really interesting tasks are those where the actual problem is
hard to understand well enough to write tests up front. In those cases, I
usually just iterate as fast as possible in various directions until I can see
the likely path forward, which allows tests to be written, and then proper
code written to make the tests pass.

------
bitdivine
When tests become too slow, I make the code and/or the tests faster until the
tests complete in under a minute. Running tests in parallel can help. Even so,
with a large set of tests that might be easier said than done, but perhaps you
can trim them down to the set that is most likely to be relevant and start off
by testing just those? There are fancy ways of minimising exploration -
working out which tests are most relevant for a given code change - but that
is probably for later. Better to get started and then expand!

------
Encaitar
My first thought is the maintainability of the code base. If you are building
an iPhone game that will be out of the charts in 3 months and will never be
updated, then go ahead with feature-code delivery and no tests. If you want
your system to still be working in 2 years' time, and to still be able to add
new features to it then, you need tests.

The more confidence you have in a code change, the faster you can add
features. Comprehensive, automated, fast-running testing gives you this in
spades.

------
lcarlson
I wouldn't do unit testing at all unless it was TDD. Effective unit testing
requires a quick feedback loop with red and green pass/fail alerts.

I'd say really invest time in creating a productive TDD environment for
yourself first, before attempting to convince business people. Buy into the
dogma for several months. This should give you a real idea of the pros and
cons of TDD.

------
rhapsodic
I don't believe TDD adds any value whatsoever to the development process. Nor
do I think pair programming is a good idea. I will not work for any employer
that mandates those practices.

I'm aware that many people see things differently than I do. That's fine with
me. I'm not going to try to change anyone's mind.

------
sharemywin
You need to look at the time it takes to fix and retest defects caught in
test, as well as defects found and fixed once in production. Then you need to
run a couple of projects with unit testing and see if the bug counts go down.
Or compare the areas with unit testing against those without, and see which
have higher defect rates.

------
lojack
First of all, if your unit tests take 40 minutes to run (unless it's an
insanely large codebase) then there's something wrong with your unit tests.

TDD isn't as much about being "correct" as it is about writing your API (and I
mean internal API -- the classes and methods your software uses internally)
first and then implementing the details. A side effect of TDD should be better
designed software, which should in turn speed up your development.

The reason I brought up slow unit tests first is multi-faceted. Most likely,
they aren't really unit tests: you're probably testing each class in addition
to the classes it touches (DB connections, etc.). This is partially because
unit tests are foreign to your developers, so they don't know how to write
them properly, but it is also (more important to your concern about delivering
software quickly) probably a result of your classes not being easily testable.
By making your classes more testable, you end up breaking things up, removing
dependencies, programming to interfaces instead of implementations, etc.

Additionally, it's easy to argue why delivering correct software helps you
deliver software quicker. If you deliver broken software which takes down
systems, developers need to spend time fixing it. This time is extremely
expensive: it disrupts the day, is a distraction, and, depending on the
severity of the bug, can affect entire teams.

Beyond all that, knowing earlier on that your software is correct helps you
move on to the next thing. When I say this, I don't mean verifying correctness
after pushing to production. As an example, you need to update a database
record based on some input from the user. Traditionally in order to do this
you need to write a method on a model to do the updating, and you need to
write a controller to call that method given query parameters, and you need to
write a view to take user input and post a form. You may start with the model
method, and it'll take you an hour to finally be able to test that your model
method is working properly. If this fails, you have three separate places
where it could have failed, so you need to look through the code to figure out
where exactly it fails. Was it a forgotten csrf token? Did you forget to save
in your model method? Did your controller simply not call the model method?
With TDD, you'd write a test first to verify your model is working. Then you'd
write a test to verify your controller is working (or not? some people don't
test controllers). Then you'd write a test that your view shows the form and
submits. As a result, you'll find out earlier and more specifically where
issues lie.
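
As a hedged sketch of the model-level test written first (names are invented;
the dict-based "repo" stands in for the database):

```python
# Hypothetical model method: update a record based on user input.
def update_email(repo, user_id, new_email):
    user = repo[user_id]
    user['email'] = new_email
    repo[user_id] = user  # "save" - the step that's easy to forget

# Written before the controller or view exist: if this fails, you know
# the problem is in the model, not a csrf token or a missing call.
def test_update_email_persists_change():
    repo = {1: {'email': 'old@example.com'}}
    update_email(repo, 1, 'new@example.com')
    assert repo[1]['email'] == 'new@example.com'
```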

All that said, it's good to learn TDD by very strictly writing tests first,
but in practice you need to be pragmatic about it. It often doesn't make sense
to write tests first, and you need to accept that it's unrealistic to test
every possible case.

------
stevesun21
My personal style when designing a system is to focus on designing the domain
model. I believe this is an intuitive way for people to design stuff (not just
software, but also in math and ML). This is the reason I am not sold on TDD,
which starts with writing test cases first - how can people tell what they are
going to test before they know what they are going to develop?

I tried some projects with TDD, but unfortunately it is too counter-intuitive
for me as a way to design a system.

~~~
joshka
The last D is design...

Write your test as if the perfect domain model existed already, and then make
it work. E.g., does it make more sense to write person.Age = 21, or
person.Birthday = someDate? Alternatively, taking your math example, your
tests are the constraints of an equation that your code solves.
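
A minimal Python sketch of that idea (the `Person` class is hypothetical): the
test is written against the domain model you wish existed, and the
implementation then has to provide it.

```python
from datetime import date

class Person:
    def __init__(self, birthday):
        self.birthday = birthday  # store the fact, not the derived value

    def age_on(self, day):
        # Age is derived from the birthday, so it can never go stale.
        had_birthday = (day.month, day.day) >= (self.birthday.month, self.birthday.day)
        return day.year - self.birthday.year - (0 if had_birthday else 1)

def test_age_is_derived_from_birthday():
    person = Person(birthday=date(2000, 6, 15))
    assert person.age_on(date(2021, 6, 15)) == 21
    assert person.age_on(date(2021, 6, 14)) == 20
```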

~~~
stevesun21
Based on what I've learnt, I don't really see that the last D is design.

[https://en.m.wikipedia.org/wiki/Test-driven_development](https://en.m.wikipedia.org/wiki/Test-driven_development)

[http://martinfowler.com/bliki/TestDrivenDevelopment.html](http://martinfowler.com/bliki/TestDrivenDevelopment.html)

------
Nemant
Before trying to sell TDD I would sell the idea of writing tests. That
shouldn't be hard, it's mostly common sense.

------
GFK_of_xmaspast
You need to walk before you can run, and a 40 minute test suite is weighing
you down too much.

------
timmyb
I've found that if you lead by example, people will follow.

------
Anchor
You don't sell unit tests to management. You don't even _tell_ management. The
same way you don't sell using reasonable variable names. You do it because
(if!) it makes you and your team more productive. Coverage metrics are
internal to the development team - they are not management metrics. Shipped
features, open support tickets, velocity, story points, etc. are management
metrics; cyclomatic complexity, test coverage, and length of variable names
are developer metrics.

That said, there is a fairly large difference in experience and skill required
to learn how to write variable names that make you more productive and how to
write unit tests that make you more productive.

It all comes down to the TCO of the system to its users. In many cases, high
test coverage does reduce TCO, but this is difficult to see during the first
year of the project. And note too that the risk of _increasing_ TCO due to
poorly written unit-test code is real. You can do more harm than good by
creating flaky, ad-hoc unit tests that are slow to run, difficult to change,
and make refactoring of production code a PITA.

The reason we do TDD is that it has some unique benefits, especially if you
aim to use "full TDD" in the sense that _no line of production code is written
before there is a failing test for that line_.

What would your life be like if, every time you interrupted a programmer on
your team, everything she was working on had compiled and passed all its tests
less than two minutes ago? If you had an always up-to-date spec written so
precisely that it executes? If deployment to production were a business
decision instead of a technical one, because you were ready to deploy
practically whenever you felt like it? And if you decided to swap MSSQL for
PostgreSQL in your production system, you could do it without breaking a
sweat?

What would your life be like if you did not have to fear breaking anything
when cleaning up code? If you could keep your system maintainable, and you
knew it?

If TDD is so great, why then is it not used more often? I can only speculate,
but I think this comes back to the question of developer skills. I have
mentioned some of this before here on HN, but I'll reiterate.

I have noticed that I have to make large refactorings to move things around
and arrange the whole system so that each part can be tested without too much
effort. To do this I have to view most things in terms of the interfaces they
provide. On the test side, I have to write the test code so that _what the
test does_ is strictly separated from _how the test does it_. This way,
changing the system causes only minor changes to ripple through to the
majority of the test code. This is by far the most common pain point of a
novice TDD'er.
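
A hedged Python sketch of that separation (the account domain is invented):
the test bodies state _what_ is being checked, while small helpers hide _how_
the system is driven, so interface changes ripple only into the helpers.

```python
# --- "how": the only place that knows the system's interface ---
def make_account(balance=100):
    return {'balance': balance}

def withdraw(account, amount):
    if amount > account['balance']:
        raise ValueError('insufficient funds')
    account['balance'] -= amount

def balance_of(account):
    return account['balance']

# --- "what": tests read as specifications and rarely need to change ---
def test_withdrawal_reduces_balance():
    account = make_account(balance=100)
    withdraw(account, 30)
    assert balance_of(account) == 70

def test_overdraft_is_rejected():
    account = make_account(balance=10)
    try:
        withdraw(account, 30)
        assert False, 'expected an error'
    except ValueError:
        pass
```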

Based on this, it seems that programming with TDD is a distinct skill-set that
requires significant effort to get reasonably good at, i.e., to be more
productive than without TDD. I have given it a try on medium-size projects and
it does pay off in terms of simplicity of the design (I have to manage
dependencies and decouple external systems and components quite heavily), low
defect rates in production/QA, way less time spent in the debugger, and
relatively high velocity (this is notoriously hard to measure across
projects/teams, but at least it was true based on customer and product owner
feedback).

However, the problem with TDD is that all of the above (tests decoupled from
interface, interface decoupled from implementation, system decoupled from
external systems, components decoupled from each other, design skills to
recognize this, and refactoring skills to do this fast enough to remain
productive) need to be done well enough at the same time. Otherwise the
approach falls into pieces at some point.

To paraphrase Uncle Bob from some years ago: "I have only been doing TDD for
five years, so I am fairly new to it..." Half of all programmers have that
much experience in programming in general, so the time required to hone TDD
and refactoring skills may simply not be there yet.

TDD'ing requires months or years of practice to get really productive with,
and has a fairly large set of prerequisites that one has to know in order to
remain sane. It took me several years of experimenting (especially with
different techniques of writing unit tests) before I found a way to be
productive with TDD. I also drew the connection between testability and
program architecture (aggressive decoupling) fairly recently (some four years
ago), and that was one of the last pieces of the puzzle that made everything
work. The system structure is especially crucial for writing fast unit tests.
You really want the dependencies on external systems (DB, UI, network, MQ, OS,
library, framework, or the like) injected and abstracted behind an interface.
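
As a hedged Python sketch of one such injected dependency (names are
illustrative): the system clock sits behind a tiny interface, so the unit test
substitutes a deterministic one and never touches the OS.

```python
import time

class SystemClock:
    # Production implementation: the real, volatile dependency.
    def now(self):
        return time.time()

class FixedClock:
    # Test double: same interface, fully deterministic.
    def __init__(self, t):
        self._t = t

    def now(self):
        return self._t

def is_expired(token, clock):
    # The unit depends on the clock interface, never on time.time() directly.
    return clock.now() >= token['expires_at']

def test_expiry_without_touching_the_real_clock():
    token = {'expires_at': 1000.0}
    assert is_expired(token, FixedClock(1000.0))
    assert not is_expired(token, FixedClock(999.9))
```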

If your system's design results in your unit tests depending on volatile
components, your system becomes unnecessarily complex. This is because
volatile components change often, and those changes ripple into your unit
tests, effectively rendering them volatile too. Avoiding this problem has been
captured, among others, by the Stable Dependencies Principle
([http://c2.com/cgi/wiki?StableDependenciesPrinciple](http://c2.com/cgi/wiki?StableDependenciesPrinciple)),
which states that the dependencies should be in the direction of the
stability. A related one is the Stable Abstractions Principle
([http://c2.com/cgi/wiki?StableAbstractionsPrinciple](http://c2.com/cgi/wiki?StableAbstractionsPrinciple)),
which states that components should be as abstract as they are stable.

When I started with TDD, my productivity plummeted initially, but the benefits
were too good to give up, and I slowly found the techniques needed to keep up
with my old self in terms of delivered features. I dread to think of the
pieces of code that would have sent me deep down into debugging sessions due
to non-existent test coverage.

Part of the problem is that there are not that many TDD codebases or TDD'ers
around. Also, this is probably not something you can pick up while doing toy
projects or school assignments. The benefits start to show in the 100 kloc and
above magnitudes, and as there are so many ways to paint yourself into a
corner with bad overall design, coupling, unmaintainable (or, my pet peeve,
slow) tests, chances are, you don't figure out all the necessary things
yourself. On top of that, there is no time to learn this much in most dev
jobs, so you are left to learn with hobby projects (which do not usually grow
big enough).

Most TDD experiments end in failure for the reasons listed. This is why you
read so many comments calling TDD useless, wrong, a religion, or an Uncle Bob
cult. However, it seems that people who keep practicing TDD have been
programming for more years than people who have not tried it, or who have
abandoned the practice. I have yet to meet a TDD practitioner who started
programming that way without ever considering the alternatives. The ideas were
born out of really bad - serious - experiences with the existing approaches.

------
lucaspottersky
because it makes you look better than you really are.

just like fashion.

------
dczmer
tldr; read kent beck's original book on the topic before you form your
opinion. TDD is not a silver bullet and it's not 100% required to write
quality software. but it is a useful tool, and you probably are doing it
wrong. if someone taught you at work or school, there is a good chance they
were doing it wrong. if you still don't see the point after kent's book and
the video that i've linked below, then at least you know the most common
arguments for both sides now.

i'm a proponent of TDD and unit testing, but i don't TDD all the things. there
is, as in all things, a balance to be achieved between test coverage and
delivery time. some parts of the system - primarily the business logic - need
thorough testing. other parts of the system, especially those that don't
change often, can require less testing. additionally, many people and
organizations misinterpret the basic concept behind what Kent Beck describes
in his book and end up implementing lots of complicated, brittle tests that
eat development time and slow down the entire process.

#1 read Kent's book. seriously, do this now. don't just assume you understand
TDD because it sounds so obviously simple.

#2 watch Ian Cooper's excellent talk, "TDD, Where Did It All Go Wrong?":
[https://vimeo.com/68375232](https://vimeo.com/68375232)

i will say that, even after nearly a decade of writing tests, i didn't really
understand why my tests were so brittle until i watched #2. turns out i didn't
realize what a 'unit' _really_ was, WRT writing tests.

now that you know how and what to test, a few tips:

#1 keep "unit tests" separate from "integration tests":

- unit tests do not do any actual I/O, run very fast, and test "units" in
isolation

- integration/system tests run over entire systems or sub-systems rather than
isolated units; they actually do I/O (db, disk, etc) and are slow, but you
don't need that many of them if you have good unit test coverage (this point
is made more clear in the video linked above)

#2 write tests to an interface, not to an implementation:

- when you write a test that depends on implementation details, it will be
brittle and cause trouble when that implementation changes

- when you write a test to an interface, the implementation details should be
irrelevant (to a degree). you care about the "what"
(inputs/outputs/exceptions), not the "how"

#3 inversion of control and dependency injection are your friends:

- the video linked above should give you an idea of what i'm talking about,
WRT hexagonal architecture

- if you create an object inside the function under test, it's hard to
manipulate that object to drive the test

- if the function under test accepts that object as a param, you can configure
it before calling the function, or replace it with a mock object

#4 be wary of overuse of mocks!

- when you end up with tests that are just lots of mock expectations
describing the interaction between two objects, you are testing
implementation! lose the mocks and write a system test instead

#5 tests are production code! you will need to put the same effort into making
them readable and maintainable. you might even have to write tests for some of
your test support code
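
as a hedged sketch of tip #2 in python (the store classes are invented, not
from the book): the same contract check runs against any implementation of the
interface, so changing the "how" never breaks it.

```python
# Two implementations of the same storage interface.
class MemoryStore:
    def __init__(self):
        self._data = {}

    def put(self, key, value):
        self._data[key] = value

    def get(self, key):
        return self._data.get(key)

class ListStore:
    # Deliberately different internals, same observable contract.
    def __init__(self):
        self._pairs = []

    def put(self, key, value):
        self._pairs = [(k, v) for k, v in self._pairs if k != key]
        self._pairs.append((key, value))

    def get(self, key):
        for k, v in self._pairs:
            if k == key:
                return v
        return None

def check_store_contract(store):
    # Tests the "what" (put then get), not the "how" (dict vs list).
    store.put('a', 1)
    store.put('a', 2)
    assert store.get('a') == 2
    assert store.get('missing') is None

def test_all_implementations_honor_the_contract():
    for store in (MemoryStore(), ListStore()):
        check_store_contract(store)
```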

~~~
dczmer
omg l2format

