
Giving up on test-first development - ingve
http://iansommerville.com/systems-software-and-technology/giving-up-on-test-first-development/
======
ekidd
From the article: _Because you want to ensure that you always pass the
majority of tests, you tend to think about this when you change and extend the
program. You therefore are more reluctant to make large-scale changes that
will lead to the failure of lots of tests. Psychologically, you become
conservative to avoid breaking lots of tests._

Interesting. I've often found that the _lack_ of tests leaves me absolutely
terrified of making changes to large Ruby and JavaScript applications. I mean,
if a test breaks, then I get a nice message right on the spot. But if I don't
have tests, then the application itself breaks, and I may not find out until
it's been put into production on a high-volume site or shipped to users.

Once an application crosses, say, 25,000 lines of code, it's hard to keep an
entire program in my head, especially in a dynamic language and with multiple
authors working on the code base. Under these conditions, large scale
refactorings or framework upgrades can cause massive test failures, but the
only alternative is to cause massive, unknown breakage.

~~~
BonsaiDen
One good way to limit breakage in such cases is to perform only black-box
tests at the API level. In the case of our Node.js-based backends we never
write a single classical unit test; instead we have a custom framework
built on top of Mocha which runs tests on the HTTP layer against all of
our endpoints.

This works remarkably well in practice and allows for large-scale refactorings
under the hood with little to no impact on the tests. We can also mock
databases, memcached, redis and graylog at their respective http/tcp/udp
level. This in turn means no custom-built mocks which could break when
refactoring. The tests themselves also contain no logic; they are pretty much
just chained method calls with the data that should go in and the expected
response that should come out, along with a specification of all external
resources our API fetches during the request and their responses, etc. Any
unexpected outgoing HTTP request from our server will actually result in a
test failure.

As for scaling this approach, from our experience it works quite well,
especially when you have lots of complicated interactions with customer APIs
during your requests since the flows are super quick to set up.

~~~
cousin_it
Yeah, I mostly prefer end-to-end tests as well. Though to be fair, they are
often slower than unit tests, because you need to start up the whole system.
And they are worse at pinpointing problems, though that doesn't seem to be a
big deal in practice.

~~~
schrodinger
I like to test the whole system via end-to-end tests, as they're the best bang
for the buck. Then I'll create unit tests for more algorithmic code: a parser,
a sort algorithm, a shortest-path calculator, financial calculations.
Those also tend to require the least amount of test context setup, making them
less painful to write.

~~~
hinkley
I have almost always worked for customers who enjoyed changing their minds
arbitrarily and often in ways they swore they would never do.

In the face of grossly changing requirements, I've never had much luck keeping
E2E tests up and functioning properly. And people have a bad habit of
investing more time and energy than a particular test is worth in trying to
keep it working or porting it to the new requirements.

Unit tests are cheap. If a requirements change invalidates twenty of them,
you just delete them and write new ones. Easy.

------
jon-wood
Over the years I've gone from writing no tests at all, to being a die hard TDD
purist, and then out the other side to writing some things with tests first,
others with tests after writing the code, and some without any tests at all.

In some situations I have a clear view of what I need to build, and how it
should work. TDD is great in that case - write a test for the expected
behaviour, make it pass, refactor, rinse and repeat. The element I think a lot
of people miss when doing this is higher-level integration tests that ensure
everything works together, because that's the hard bit, but it's also
essential.

In other situations you're still feeling out the problem space and don't
necessarily know exactly what the solution is. There's an argument that in
those cases you should find a solution with some exploratory development then
throw it all out and do it with TDD. If I've got the time that's probably
true, it'll result in better code simply through designing it the second time
with the insight provided by the first pass, but often that just isn't viable.
Deadlines loom, there's a bug that needs fixing elsewhere, and I've got two
hours until the end of the week and a meal date with my wife.

Finally, there are times when tests just aren't needed, or they don't offer a
decent return on the effort that will be required to make them work. I'm
thinking GUIs, integration testing HTTP requests to other services, and
intentionally non-deterministic code. Those cases certainly can be tested, but
it often results in a much more abstract design than would otherwise be called
for, and brittle tests. Brittle tests mean that eventually you stop paying
attention to the test suite failing, because it's probably just a GUI test
again, and that eventually leads to nasty bugs making it into production.

One thing I'll say directly about the article is that I found odd his opinion
that it's hard to write tests for handling bad data. That's almost the easiest
thing to test, especially if you're finding bugs due to bad data in the real
world - you take the bad data that caused the bug, reduce it down to a test
case, then make the test pass. That process has been a huge boon in developing
a data ingestion system for an e-commerce platform to import data from
partners' websites, as it's simply a case of dumping the HTTP response to a
file and then writing a test against it rather than having to hit the
partner's website constantly.
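
As a sketch of that capture-and-reduce workflow (the parser and the price formats here are invented for illustration), a production bug becomes a permanent regression test:

```python
# Hypothetical parser standing in for the real ingestion code.
def parse_price(raw):
    """Parse a price string scraped from a partner page into cents."""
    cleaned = raw.strip().lstrip("$£€").replace(",", "")
    if not cleaned:
        return None  # the fix: some partners ship empty price cells
    return int(round(float(cleaned) * 100))

def test_bad_price_data():
    # Bad data captured from production, reduced to minimal cases.
    # In practice these would come from a dumped HTTP response file.
    assert parse_price("") is None          # used to raise ValueError
    assert parse_price("  ") is None
    assert parse_price("$1,299.99") == 129999
    assert parse_price("£0.50") == 50

test_bad_price_data()
```

Once the fixture is checked in, the same bad data can never bite twice.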

~~~
stinos
_Over the years I 've gone from writing no tests at all, to being a die hard
TDD purist, and then out the other side to writing some things with tests
first, others with tests after writing the code, and some without any tests at
all._

Hear, hear! Exactly the same here, for the same reasons as you and the OP
mention.

And observed from a distance, it's always the same universal principle: purist
behaviour (in the sense of almost religious _beliefs_ that something is a
strict rule, sentences starting with _Always_, etc., you get the point) in
programming or, to a further extent, in life, is nearly always wrong, period.
No matter what the rule is, you can pretty much always find yourself in a
situation where the rule is not the best choice.

~~~
ctlby
I deeply agree when it comes to programming, but there's plenty of room for
absolutism in other areas of life. Why subordinate strong moral preferences to
hazy cost-benefit analysis?

~~~
grokys
This is becoming very off-topic, but if absolutism doesn't apply in the
highly-ordered world of programming, to me it's even less likely to apply in
messy real-life.

------
crusso
From the article:

 _I deliberately decided to experiment with test-first development a few
months ago._

and

 _the programs I am writing are my own personal projects_

If you have no real experience with TDD and you're hacking away at personal
projects, I can see where you might not find TDD to be useful.

If you're experienced with it and working on a project that is going to be
sizable and for use by others who might be paying to use it, TDD is
indispensable.

I've seen a lot of fads come and go over the years. I've tried many of them
out just to see how they fit my style of working. TDD is one of those
paradigms that has withstood the test of time.

Much more important than the code coverage and the ability to make changes
without breaking things mentioned in the article, TDD forces you to think
about your software components from the client's perspective first. It's that
discipline that I appreciate more than anything.

~~~
henrik_w
Good points, and very well said!

------
vinceguidry
I like the idea of TDD, but I rarely use it. The problem is that TDD works
well when you already know what you're going to build and how, that way
figuring out how to put new code under test isn't too onerous.

If the company is paying you for it, absolutely take the extra time to TDD,
and do your best to maintain the test base. That's what they're paying you
for.

If you're greenfielding a side project, TDD is only going to slow you down.
Time spent learning your domain _will_ get redirected to "figuring out how to
put X under test", significantly increasing time-to-market. Get your product
to market, find product-market fit, and get some resources to re-engineer your
product with, and don't do it yourself because you'll have more important
things on your plate.

If you're early in your career, spend some time to learn TDD. Don't do it on
your own projects, let someone else pay you to work it out. Don't actually use
it unless someone's paying you to, but learn how it works, what it buys you,
and what it costs.

~~~
pbreit
TDD feels completely unnatural to me. I, like I suspect most humans, want to
build and have a thirst for results.

Also, I think it's a solid point that TDD can overly influence your program
design. Program design should typically be mostly driven by end user
needs/desires.

~~~
mscman
While I agree that over-testing the wrong parts of your program can have an
impact on program design, your end user's needs/desires are usually testable
things. TDD helps make sure that experience is reproducible, even through
sweeping changes to the codebase.

------
jacques_chester
> _I won’t spend ridiculous amounts of time writing tests when I can read the
> code and clearly understand what it does._

Is he arguing that simply trusting yourself not to make mistakes is a
sufficient guarantor of quality? Ian Sommerville is the author of a famous
textbook on software engineering, so it would be surprising if he was.

TDD is actually much more difficult in practice than people realise. I read
Kent Beck's book and thought it sounded like utter horseshit. I tried doing it
myself and decided it was _definitely_ horseshit.

Then I came to work at Pivotal Labs. Now I am distrustful of code that was
written before tests.

As for the argument that TDD distracts from the big picture, this is like
saying that indicator signals and mirror checks distract from driving. Sure,
when you are learning to drive, you feel so overwhelmed by attending to all
the little details that you struggle to drive at all. You become unable to
focus on your navigation.

After a while you learn the skill and it becomes automatic. TDD is such a
skill.

~~~
Chris2048
What changed at Pivotal labs?

And what do you get with regards to specs? It seems to me the methodology fits
how requirements are derived.

~~~
jacques_chester
> _What changed at Pivotal labs?_

The bread and butter of Labs is teaching people pair programming and TDD. So I
was taught those things, and I'm still learning.

It takes a few passes through the whole cycle of story->feature
test->integration test->unit test for things to begin to click, for that way
of thinking to become habitual. It is uncomfortable and frustrating for some
time, as Ian Sommerville found. In fact, because most code is _not_ written
test-first, it is difficult to test. So all that keeps you to TDD is habit
and, to some degree, help from another engineer.

> _And what do you get with regards to specs? It seems to me the methodology
> fits how requirements are derived._

The most popular story-writing style I have seen so far is the As A/I Want/So
I; Given/When/Then format. Usually some acceptance criteria are supplied. The
feature tests will typically follow Given/When/Then, with varying degrees of
fidelity.

In our weekly Iteration Planning Meetings engineers are expected to pipe up if
there are useful ways to decompose stories into smaller stories, which makes
the overall situation more manageable.

------
barrkel
Test first makes loads of sense for:

* fixing bugs - reproducing the bug in a test case both confirms you're fixing the thing, and acts as a regression test

* defining a protocol - where you need to glue two things together, e.g. a front end UI and a back end controller, or a model shared between different modules

It's a lot weaker for design. Test-first tends to encourage abstractions that
are overly open to extension, because you need to make things visible to tests
and make components replaceable by mocks. In the early stages, the weight of
updating tests makes the design overly awkward to change. And early on in the
process is exactly the wrong time to be creating your general abstractions -
that's when you have the least amount of information about what will make for
the best abstraction.

You still need to back-fill tests after good abstractions have been chosen, of
course. Tests are great; test-first, specifically, isn't always best.

~~~
crdoconnor
It's still good, you just need to do it at a higher level.

------
koder2016
_> I’m sure that TDD purists would say that I’m not doing it right so I’m not
getting the real benefits of TDD._

"You are not doing it right! Take this 3-day course for a grand and buy these
books. Also hire a TDD/Scrum/Agile coach for your team for a few months. There
you go! (in Eric Cartman's voice)."

------
pmarreck
_Psychologically, you become conservative to avoid breaking lots of tests._

Psychologically, the FIRST problem is that there is a distinct separation in
his head between "working" code and "test" code. They are essentially married
together. "Breaking tests" is simply identifying now-broken functionality that
would NOT have been highlighted had he made the change WITHOUT that test
coverage.

Basically, I don't understand how one could come to this conclusion unless one
1) was terrible at writing tests, 2) did too many integration tests and not
enough unit tests, or 3) had the wrong frame of mind, considering test code as
"distinct and separate" from the code under test.

 _But as I started implementing a GUI, the tests got harder to write_

That is due to the architecture of the GUI, not a fault of TDD itself. As this
Stack Overflow answer says, [http://stackoverflow.com/questions/382946/how-to-apply-
test-...](http://stackoverflow.com/questions/382946/how-to-apply-test-driven-
development-for-gui-applicationvc-mfc), "you don't apply TDD to the GUI, you
design the GUI in such a way that there's a layer just underneath you can
develop with TDD. The GUI is reduced to a trivial mapping of controls to the
ViewModel, often with framework bindings, and so is ignored for TDD."
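
A minimal sketch of that layering in Python (all the names here are hypothetical): the ViewModel owns every decision, so TDD never has to touch a widget:

```python
# All the decisions the GUI needs, none of the widgets. The actual
# view is a trivial set of bindings onto these properties.
class LoginViewModel:
    def __init__(self, authenticate):
        self.authenticate = authenticate  # injected, so tests control it
        self.username = ""
        self.password = ""
        self.error = None

    @property
    def can_submit(self):
        # The "is the button enabled?" rule lives here, not in the view.
        return bool(self.username and self.password)

    def submit(self):
        if not self.can_submit:
            return False
        if self.authenticate(self.username, self.password):
            self.error = None
            return True
        self.error = "Invalid credentials"
        return False

# TDD happens entirely at this layer -- no window, no event loop.
vm = LoginViewModel(authenticate=lambda u, p: (u, p) == ("alice", "s3cret"))
assert not vm.can_submit                   # empty form can't submit
vm.username, vm.password = "alice", "wrong"
assert vm.submit() is False and vm.error   # bad login surfaces an error
vm.password = "s3cret"
assert vm.submit() is True and vm.error is None
```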

If the GUI is not architected in a way that makes that easy, then you're
going to have a bad time, admittedly. See:
[http://alistair.cockburn.us/Hexagonal+architecture](http://alistair.cockburn.us/Hexagonal+architecture)
and the Boundaries talk
[https://www.destroyallsoftware.com/talks/boundaries](https://www.destroyallsoftware.com/talks/boundaries)
for examples of ways you can reduce I/O to an extremely thin layer that can be
tested in isolation.

------
talles
_I think we 're in this world I'd like to call guardrail programming. It's
really sad. We're like "I can make change because I have tests". Who does
that? Who drives their car around banging against the guardrail saying, "Whoa!
I'm glad I've got these guardrails because I'd never make it to the show on
time"._

Gotta love Hickey.

~~~
jerf
Who drives their car through an n-dimensional manifold full of Turing chaos?
When programming is as easy and as safe as navigating an essentially 2-D
Euclidean space, let me know.

~~~
striking
Do I have the language for you.
[https://scratch.mit.edu/](https://scratch.mit.edu/)

You can make compelling interactive things probably without even making a
loop.

I think that's the closest we're going to get, at least.

------
nostrademons
TDD is nuts for code without a client or specification. The whole point of
tests is to ensure that the code does what it's supposed to do. When you have
neither client nor spec, _how are you supposed to know what the code is
supposed to do_? There is, IME, a >90% chance that any such code will be
ripped out and replaced as you develop a better understanding of the problem
domain.

I've found it's pretty useful to go back and add tests as you accumulate
users, though (or convince an exec that your project is Too Big To Fail in the
corporate world). You're capturing your accumulated knowledge of the problem
domain in executable form, so that when you try an alternate solution, you
won't suddenly find out - the hard way - that it doesn't consider some case
you fixed a couple years ago.

~~~
mbrock
If you don't know what the code is supposed to do, how are you writing it?

~~~
crdoconnor
Some (quite a lot, actually) pieces of code are essentially experimental -
e.g. trying out an API/library to see what it's capable of, trying an
approach to solving a particular kind of problem, or even trying to see if a
particular problem is solvable with a piece of code at all.

For this kind of coding, TDD makes no sense whatsoever. The 'specs' are as
fluid as the code and having confidence in the code isn't that important.

This is entirely different to creating production hardened systems with very
clear specs. If you don't do TDD on that, you're an idiot.

~~~
mbrock
I don't agree with "no sense whatsoever." Actually, TDD can be a very pleasant
way to do this kind of exploratory programming, precisely because it's
oriented around verifying expectations.

~~~
lfowles
Yep, any time I have an assumption about how code should work, that's a good
starting point for a test. Even if it's something vague like "should not throw
when given inputs X, Y, Z that I expect will be encountered".

~~~
crdoconnor
That's kind of a waste of time if you're only going to run it once and
verification with a REPL or otherwise by hand is easy enough.

~~~
lfowles
That's assuming it works correctly the first time (I haven't had that
experience often, even for "trivial" code :( ). Even for "run once" functions,
I still use a few tests to develop them and make sure my expectations are
correct. With a good framework, setting up a handful of unit tests takes just
about as much dev time as running the function in a REPL.

~~~
crdoconnor
Ok, so let's say you were experimenting with the selenium library to see if
you could scrape what comes out of skyscanner.

What steps would you take?

~~~
lfowles
I'm not clear on what the scope of selenium is or how it works, but for a
general web scraper, I'd identify some targets I want out of skyscanner.
Here's a quick googletest butchery for "I want to make sure my function
returns flights for a known good flight search."

    
    
        TEST(SkyScanner, ListFlights)
        {
            auto flights = MyFlightScrapingFunction(LAX, AMS, today + OneMonth(),
                                                    OneWay, /*adults=*/1);
            EXPECT_THAT(flights, SizeIs(Gt(0)));
    
            EXPECT_THAT(flights, Each(AllOf(
                                          Field(&Flight::from, Eq(LAX)),
                                          Field(&Flight::to, Eq(AMS)),
                                          Field(&Flight::seats, Eq(1)))));
        }
    

I feel that anything simpler ("is this possible with selenium?") is a
documentation moment.

------
Illniyar
Like all things in life TDD should be taken in moderation.

It's an excellent process to create stable and maintainable code, but it does
not fit every bill.

But abandoning it completely on the grounds that it sometimes makes you write
"bad software" is a bit weird to me; in fact, one of the main arguments for
TDD is that it makes you write better code.

I've found that it often does make you write better code. So like all things,
use it when appropriate.

I guess the real hard thing is to determine when using TDD is appropriate.

~~~
galaktor
I like to think that, like with any other technique, with experience comes the
ability to decide when _not_ to apply it. Any old tutorial will show you an
example of when it works, but only with experience will you learn when it
might not.

~~~
Chris2048
Surely a good tutorial could teach that too?

------
andrewfromx
C.S. Degree in 1996, 20 years "professional" programmer and I never once
thought TDD was helping my project. Every time I did it, it was because the
boss told me I had to. Litmus test: every side project I did just for me, I
never did TDD.

~~~
rapala
How many of those side projects ended up being 6 million lines of code
maintained over 5 years? Because those are the kind of code bases I have in
mind when I'm weighing the pros and cons of TDD or other testing practices.

When reading Uncle Bob and others I have always got the feeling that the
"goal" of the practices described is to have systems that can be maintained
and extended for years by different people and teams. It never crossed my mind
that Uncle Bob would recommend TDD for that tic-tac-toe I wrote to learn
Buzzscript.

~~~
andrewfromx
It's like I'm an artist. You can't tell an artist how to paint. I'm going to
paint my best work when I'm allowed to choose my own easel and palette and
brushes. Let me throw some green paint on the canvas and make the trees how I
wanna make the trees. Programming is more art than science.

~~~
mazerackham
True, if you're working alone. But 5, 50, 500, 5000 artists trying to paint on
the same canvas? Then the game is significantly changed.

~~~
andrewfromx
[https://en.wikipedia.org/wiki/The_Cathedral_and_the_Bazaar](https://en.wikipedia.org/wiki/The_Cathedral_and_the_Bazaar)
i'll take a Bazaar of 5000 artists any day over a forced Cathedral.

~~~
rapala
Testing practices are orthogonal to the cathedral vs bazaar idea. Linux is a
great example of this. Anyone can checkout the code and work on it, but
upstreaming the changes requires coordination and approval with the community.

The lessons by Raymond are also given in the context of OSS. Unfortunately it
would be extremely difficult to find a team of developers that are passionate
about the kind of stuff that SAP is written to solve, for example.

The lessons of Robert Martin and others are about how to be a professional
developer. When you are writing software for someone else to use and they pay
you money to do it, it is your responsibility to think about quality,
extensibility and maintainability. It is not your job to express yourself.

My take is that the point of TDD being so rigid is that a professional should
always follow best practices, not just when they feel like it or when it is
easy or convenient to do so. But TDD is an ideal and sometimes, or maybe even
most of the time, there are externalities that make you fall short of that
ideal.

~~~
andrewfromx
Really good point about SAP! It's soooooo boring. That's the million-dollar
question: how do you inspire passion in a team of programmers all working for
cash and not passion? I'd argue forcing TDD on your most brilliant artists is
not the way. Nor is telling them they're not "professional" if they don't
follow what you consider to be "best practices". That spark of true passion is
worth its weight in gold. Your best shot is to make an environment where those
sparks CAN happen, versus extinguishing each little spark before it can grow.

------
JulianMorrison
As a suggestion, QuickCheck type testing frameworks are good for finding bugs
relating to unexpected data.

Summary of how they work: you say "this program takes a 32-bit signed int and
a string" and the testing framework will throw it a heap of random ints and
strings, some of which match the sorts of classic curve balls it might
encounter (negative numbers, int max and min values, strings that are very
long, empty strings, strings with \0 in them, strings that don't parse as
valid unicode, "Robert');DROP TABLE Students; --", and so on).
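
A toy version of that idea, hand-rolled in Python's standard library (real property-testing tools like QuickCheck or Hypothesis also shrink counterexamples and generate far smarter inputs; all names here are made up):

```python
import random
import string

# Classic curve balls from above, mixed in with random values.
INT_EDGES = [0, -1, 2**31 - 1, -2**31]
STR_EDGES = ["", "a" * 10_000, "has\0null",
             "Robert');DROP TABLE Students; --"]

def random_int():
    return random.choice(INT_EDGES + [random.randint(-2**31, 2**31 - 1)])

def random_str():
    noise = "".join(random.choices(string.printable, k=random.randint(0, 50)))
    return random.choice(STR_EDGES + [noise])

def check_property(prop, trials=200):
    """Toy QuickCheck: hurl random (int, str) pairs at a property."""
    for _ in range(trials):
        i, s = random_int(), random_str()
        if not prop(i, s):
            return (i, s)  # a counterexample (real tools also shrink it)
    return None  # no counterexample found

# Example invariant: tacking an id after a payload with a '|' separator
# always lets us recover the id, because str(i) never contains '|'.
def encode_ok(i, s):
    return f"{s}|{i}".rsplit("|", 1)[-1] == str(i)

assert check_property(encode_ok) is None
```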

~~~
aliakhtar
Any suggestions on any such frameworks for Ruby / Java?

~~~
JulianMorrison
There are a lot of external links for implementations here
[https://en.wikipedia.org/wiki/QuickCheck](https://en.wikipedia.org/wiki/QuickCheck)

------
neverminder
"...because I think it encourages conservatism in programming, it encourages
you to make design decisions that can be tested rather than the right
decisions for the program users..." - Couldn't agree more with this.

~~~
collyw
Reminds me of an article written by the creator of Ruby on Rails.

[http://david.heinemeierhansson.com/2014/tdd-is-dead-long-
liv...](http://david.heinemeierhansson.com/2014/tdd-is-dead-long-live-
testing.html)

------
atemerev
TDD only works if the tests are written correctly.

Tests are not about "code coverage", nor about establishing the exact sequence
of things in stone. Tests are about fixing invariants.

When a new project starts, I only know about 10-15% of things for sure, and
those are exactly what will go into tests, before writing any new code. I
don't worry that some things in my code are not yet covered by tests; I don't
know yet how they will turn out, so I can't write any meaningful invariants.

In my experience, useful tests are much higher-level than the TDD guys prefer.
They routinely fix invariants for the entire system / subproject, rather than
assuring coverage of every method in a class (some crazy folks are even
testing getters and setters — why?)

~~~
tragic
> some crazy folks are even testing getters and setters — why?

Well, I can see the logic if your getters and setters are hiding more activity
than simply retrieving/setting the value of a private field, which is the
point of having separate getters/setters at all.

If you imagine a getFullName() / setFullName(name) pair, for example, that
actually reads from/writes to two different private fields for first and last
name (leaving aside middle names, internationalisation, etc), then there's
some minimal logic there that you might want to test.

In a duck-typed language, when you're trying to ensure a class obeys an
implicit interface, it may also have value.

Apart from that, for vanilla getters/setters, it's a little pointless.

~~~
atemerev
In such a case, a correct test should focus on functional invariants, like
getFirstName() + " " \+ getLastName() equalling getFullName() (btw, never do
that — names are much more complicated than that. In some countries, there are
no first and last names at all; others use multiple name designations; some
have meaningful patronyms, etc.)
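
For illustration only (and with the same caveat that real names don't split this way), such an invariant test might look like:

```python
# Hypothetical class from the discussion: the accessors hide two fields.
class Person:
    def __init__(self, first, last):
        self._first, self._last = first, last

    def get_full_name(self):
        return f"{self._first} {self._last}"

    def set_full_name(self, name):
        # Naive split; see the caveat above about real-world names.
        self._first, self._last = name.split(" ", 1)

# Test the invariant (set/get round-trips), not the accessor itself.
p = Person("Ada", "Lovelace")
assert p.get_full_name() == "Ada Lovelace"
p.set_full_name("Grace Hopper")
assert p.get_full_name() == "Grace Hopper"
```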

~~~
tragic
I agree (and implied) that it was not a great design for actually dealing with
names; I was simply saying that sometimes there is more in getFoo() than
"return foo", which is behaviour you may want to verify with a test.

EDIT: typo

------
penguat
One of the things that people aim for in writing tests is orthogonality -
different tests should not break for the same reason. This promotes the
ability to refactor and change your code. I have also seen massive codebases,
with masses of tests which were rarely run, and which effectively concreted
the code and stopped it from changing.

~~~
Chris2048
But doesn't test redundancy reduce the risk that a test isn't testing the
thing you thought it was?

If the same situation is tested in two different ways, a bug in one test might
cause it to fail to correctly handle all cases, but the second test might
still catch the problem.

Maybe auto-generation of test cases would be a better technology?

~~~
jacques_chester
Typically tests can provide fault _detection_ and fault _isolation_, but to
different degrees.

A feature/e2e test typically provides the most effective kind of fault
_detection_ , because it combines all the components of the real system with
no stubs or mocks. But if it shows a fault, then typically that fault is hard
to identify, because it wasn't previously driven out by a more detailed
integration or unit test.

Contrariwise, integration or unit tests will typically be best suited to
_isolating_ faults, but typically those faults are limited to the class of
things you included in your tests.

This is why we have "test pyramids": a handful of slow, brittle feature tests
at the top, then an increasing volume of faster, less-brittle integration and
unit tests.

TDD is almost orthogonal to the testing pyramid, with one difference from non-
TDD tests. In TDD each line of code was, ideally, driven out by a test or
tests. Fault detection is increased, because each line was written in response
to a manufactured (test-first) fault.

------
mbrock
When writing tests is boring, difficult, and tedious, that's a really good
time to think hard about the way the program is structured, if you have time
for this.

The way to make testing pleasant is to extract more and more behavior into
units with clear boundaries... realizing how to do this was a major event in
my programming career, and I attribute the insight partly to doing TDD.

I don't agree with some posters who say that TDD encourages overly generalized
design. Sure, it encourages some form of dependency injection... but mostly,
it just encourages the creation of coherent and loosely coupled units, which
is a universally lauded best practice.

~~~
jamestenglish
Finally deciding to really decouple and mock dependencies and create "pure"
unit tests was the major "aha!" moment of TDD for me. I had been trying it out
here and there but never really saw the benefit, because I wasn't really
creating nice testable units.

It is hard to say for sure, because the author doesn't give any specifics, but
wording like:

"You therefore are more reluctant to make large-scale changes that will lead
to the failure of lots of tests. Psychologically, you become conservative to
avoid breaking lots of tests."

makes me think the author wasn't actually writing unit tests; instead they
were likely end-to-end tests, or partial E2E tests running inside a unit test
framework. I have had many disagreements with other developers: just because
your test runs inside a unit testing framework doesn't actually make it a
"unit" test.

~~~
Chris2048
If you move functionality around between methods, or combine or break up
methods, won't that mean fixing all those unit tests?

~~~
mbrock
Sure, and that fixing is the process of making sure that the code still works,
and that you still have the test suite as a second client of the code.

As with everything there are costs and benefits!

A lot also depends on the quality of your tests. If the team doesn't care
about test quality or doesn't know how to write good tests, you can easily get
these big blobs of incomprehensible and tedious tests...

------
jmathes
TDD is best when you're writing code that talks to other code. So APIs,
database models, etc. Pure functions, and code that has dependencies you can
inject and mock. You should never abandon TDD in situations like this.
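
A minimal sketch of that inject-and-mock style using Python's standard library (the function and its database client are invented for illustration):

```python
from unittest import mock

# The database client is injected as a parameter, not imported,
# so a test can swap in a mock.
def total_owed(db, user_id):
    invoices = db.fetch_invoices(user_id)
    return sum(inv["amount"] for inv in invoices if not inv["paid"])

# Test first: specify the behaviour without any real database.
db = mock.Mock()
db.fetch_invoices.return_value = [
    {"amount": 100, "paid": True},
    {"amount": 250, "paid": False},
    {"amount": 75, "paid": False},
]
assert total_owed(db, user_id=42) == 325
db.fetch_invoices.assert_called_once_with(42)
```

Because the dependency is injected, the test pins down exactly what the code asks of it and what it does with the answer.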

It's true that it's harder to apply TDD to code with side effects or code that
draws UI. It doesn't really make sense to use TDD for that.

You shouldn't conflate the two. Also, "always pass the majority of tests" is a
trap. You should always pass all the tests.

Source: I've been managing and working in automated testing and continuous
integration systems for 8 years, dating back to before the term was coined. I
was the manager of the system, at IMVU, that coined the term "continuous
integration". I've also worked on testing at Sauce Labs and Google.

~~~
kyberias
You state that "TDD is best" in certain scenarios but fail to provide
explanation. Why do you think TDD is best in the situations you enumerated? In
fact, how does "database model" talk to other code?

I'm pretty sure unit tests, automated tests and continuous integration existed
8 years ago in 2008. According to wikipedia, CI was named by Grady Booch in
1991.

~~~
extrapickles
I think they were referring to the back-end part of an application, or
complete applications that end up as a back-end piece (SQL, HTTP server,
etc.) of a more user-facing application. This is generally where the amount of
state is easily manageable and well defined.

Front-end testing is much harder, as you have quite a bit of state to manage,
and questions like "is this button visible to a user?" are hard for a computer
to answer: you need to render the entire page, then use machine vision to look
for the button and verify the text is a readable size (not a cheap operation).
On the front end, you can't get away with rendering only part of the page,
since anything could trigger a modal/overlay or cause some
z-order/clipping/scaling issue.

------
burkestar
Just like the chicken and egg, it doesn't matter which comes first, code or
test. The key is that both are written, ideally around the same time and part
of same changeset. Refactoring posthoc for testability is tricky and often
brings to the surface poor software designs in the original implementation -
bad coupling, module dependencies, leaky abstractions, etc.

~~~
foohooblue
I think there's something to be said for testing first. I tend to find
writing tests second a little harder. If it's a big, complicated feature,
it's much harder to go back and try to think of all the test scenarios needed
once it's finished; an edge case or a semi-obscure case that was apparent
while writing the code may get missed when looking back and writing tests.
But it's a preference.

------
nerdy
Most people probably felt the same way after only a "few months" (best-case,
perhaps less) of practicing TDD.

And certainly TDD is harder as you approach the GUI; you want to test in vague
ways which don't break with every change. If you thoroughly test all of the
underlying behavior, implementing a GUI is typically trivial because
everything beneath it is known (and proven) to work. Most of the article is
not related to the GUI.

 _> ...it doesn’t work very well for a major class of program problems – those
caused by unexpected data._

This is a hollow argument. Regardless of development methodology, if
unexpected data isn't considered at all it could have all kinds of side
effects.

Regarding conservatism with breaking tests (many tests failing for one
change), it's likely the result of a structural problem within the application
if it's an intimidating number of failures.

 _> It is easier to test some program designs than others. Sometimes, the best
design is one that’s hard to test so you are more reluctant to take this
approach because you know that you’ll spend a lot more time designing and
writing tests (which I, for one, find quite a boring thing to do)_

Not sure how this applies to TDD: if you're writing tests first, you aren't
deeply concerned with designing tests, because you're imagining what the
interface for well-designed code _would be_ , and then you write it. It
frequently sounds like the author jumps into writing tests without any
forethought.

 _> In my experience, lots of program failures arise because the data being
processed is not what’s expected by the programmer. It’s really hard to write
‘bad data’ tests that accurately reflect the real bad data you will have to
process because you have to be a domain expert to understand the data._

If you don't understand the variety of inputs, how can you possibly validate
them? Programmers _should_ have some domain understanding, certainly program
inputs fall within that realm.

 _> Think-first rather than test-first is the way to go._

I agree; but step 2 should be testing in my opinion. Test first is just the
first tangible work product, it isn't a ban on thinking.

~~~
nerdy
Also check out Bob Martin's response: [http://blog.cleancoder.com/uncle-
bob/2016/03/19/GivingUpOnTD...](http://blog.cleancoder.com/uncle-
bob/2016/03/19/GivingUpOnTDD.html)

------
vannevar
The author makes four criticisms of TDD:

1) That having a test suite tends to make a programmer conservative, to avoid
breaking tests. But in a team environment, this kind of conservatism is a
feature, not a bug.

2) That there are cases where the code that is easiest to test is not the best
code. In my experience this is the exception rather than the rule.

3) That TDD causes the programmer to focus on the details rather than the
overall design. This is a valid criticism, but the way to remedy it is to
build prototypes and toy code before diving into the actual implementation.
Here again, if you're a lone programmer working on a pet project, you're not
going to see much distinction. But on a larger project with a team of
programmers, such rapid prototyping is very useful. As they say, "build one to
throw away".

4) The author has trouble designing tests for bad input. I'll take him at his
word, but I've never found those kinds of tests to be that difficult; if
anything, they tend to be boilerplate. Certainly when you're talking about
validating complex input like JSON objects, it's not trivial, but there are
libraries to handle most of those kinds of real-world situations.

------
tjbiddle
TDD, for me, is great at the very start. I get the nice high-level bits done,
they're clean, and they get things working perfectly from the start.

But then I want to start moving fast, trying new things; I don't know exactly
how I want to go about implementing things. I end up passing on tests for a
while, until I hit the next threshold -- then I can go back, write tests for
what I wrote to catch up, and repeat.

------
Walkman
> But as I started implementing a GUI, the tests got harder to write and I
> didn’t think that the time spent on writing these tests was worthwhile.

I agree. TDD has its places. Testing GUIs is often not one of them.

> You therefore are more reluctant to make large-scale changes that will lead
> to the failure of lots of tests.

Don't test private APIs. That makes no sense. I find quite the opposite. If I
have more tests, I'm more comfortable making bigger changes, because of the
safety net the tests give me.

> Think-first rather than test-first is the way to go.

Absolutely agree.

I think the "solution" to the TDD-or-not problem in the Kent Beck-Martin
Fowler-DHH [0] conversation was that TDD has its places: sometimes it's really
better than the alternatives and helps a lot, and sometimes it just gets in
your way.

[0]: [http://martinfowler.com/articles/is-tdd-
dead/](http://martinfowler.com/articles/is-tdd-dead/)

------
donatj
I have found very little value in testing things with little or no logic in
them: getters, setters, those kinds of things. When we need to pivot, such
tests just stand in the way. I'm sure there are cases of critical development
where that kind of thing is very important, but in most cases I think it's
just unneeded.

------
Anchor
_[TDD] encourages a focus on sorting out detail to pass tests rather than
looking at the program as a whole._

I have actually found the opposite to be true.

I have to make large refactorings to move things around and arrange the whole
system so that each part can be tested without too much effort. To do this I
have to view most things in terms of the interfaces they provide. On the test
side, I have to write the test code so that _what the test does_ is strictly
separated from _how the test does it_ , so that changing the system causes
only minor changes to ripple to the majority of the test code.

Based on this, it seems that programming with TDD is a distinct skill-set that
requires significant effort to get reasonably good at, i.e., to be more
productive than without TDD. I also have given it a try on medium-size
projects and it does pay off in terms of simplicity of the design (I have to
manage dependencies and decouple external systems and components quite
heavily), low defect rates in production/qa, waaay less time spent in
debugger, and high velocity (based on customer and product owner feedback at
least).

However, the problem with TDD is that all of the above (tests decoupled from
interface, interface decoupled from implementation, system decoupled from
external systems, components decoupled from each other, design skills to
recognize this, and refactoring skills to do this fast enough to remain
productive) need to be done well enough _at the same time_. Otherwise the
approach falls into pieces at some point.

To paraphrase Uncle Bob from some years ago: _I have only been doing TDD for
five years, so I am fairly new to it._ Half of all programmers have that much
experience in programming in general, so the amount of time required to hone
TDD and refactoring skills may not be there yet.

So maybe I am saying that you are not doing it right, but I don't really know.
Maybe I am wasting my time writing tests, but anecdotally, I seem to enjoy
extending and maintaining the TDD-based pieces of code more than the non-TDD
pieces in our codebase.

~~~
lgunsch
I agree.

I've been doing TDD for 4 years now, and I would totally agree that it takes a
lot of time to hone the skills. My experience of TDD on projects is quite
similar to yours. However, it took me a year simply to really understand how
to do TDD in any kind of sane way.

TDD is not something you can easily pick up in 6 months without a lot of
mentoring and training from experienced TDD'ers.

~~~
Anchor
Part of the problem is that there are not that many TDD codebases or TDD'ers
around. Also, this is probably not something you can pick up while doing toy
projects or school assignments. The benefits start to show in the 100 kloc and
above magnitudes, and as there are so many ways to paint yourself into a
corner with bad overall design, coupling, unmaintainable (or slow!) tests,
chances are, you don't figure out all the necessary things yourself. On top of
that, there is no time to learn this much in most dev jobs, so you are left to
learn with hobby projects (which do not usually grow big enough).

------
elcapitan
A lot of discussion around that topic seems to happen without mentioning in
which context the concrete development happens. The article doesn't mention
which language the author works in.

In particular, the "don't want to restructure the codebase because the tests
would fail, so I don't write tests anymore" stance is probably something you
can get away with more easily the more expressive your type system is. You can
make lots of heavy structural changes and refactorings, and if it compiles,
you're mostly fine. If you try that in a Rails project, you can easily spend
five times as long testing to ensure you catch all the subtle cases where
dynamic typing in the new code structure leads to new errors.

------
mattiemass
I've experienced very similar problems when experimenting with TDD. Really
appreciate you sharing!

------
programminggeek
Instead of test first, I often find myself doing "make it work, then write
tests". It's all in the context of a single feature branch, so I have pretty
good tests and coverage most of the time.

There is no perfect system or test suite. This is a reasonable 80/20.

------
ZenoArrow
In my limited experience, tests should be written against a specification, not
against code. Writing tests focused on code seems to often lead to tests that
are too close to the implementation of the program, even with test-first
approaches (you end up thinking about how you'll write the code when you're
designing the tests).

By focusing on the specification, you can ensure that the design decisions
made by the coder are fit for purpose.

I also think many tests would be better written as code contracts: you mainly
want to ensure the inputs and outputs are valid, and code contracts focus on
exactly that.
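As an illustration of that idea, a toy contract decorator (entirely hypothetical, not any particular contracts library) that checks inputs before the call and outputs after it:

```python
def contract(pre, post):
    """Tiny illustrative contract decorator: check inputs, then outputs."""
    def wrap(fn):
        def inner(*args):
            assert pre(*args), "precondition failed"
            result = fn(*args)
            assert post(result), "postcondition failed"
            return result
        return inner
    return wrap

@contract(pre=lambda xs: all(x >= 0 for x in xs),
          post=lambda r: r >= 0)
def total(xs):
    return sum(xs)

assert total([1, 2, 3]) == 6
try:
    total([-1])
except AssertionError:
    pass  # the precondition caught the bad input
else:
    raise RuntimeError("contract should have fired")
```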

------
grovr
I think the first and fourth points are applicable to unit tests in general
rather than just TDD.

For the first point, I'm reluctant to make large-scale changes to code which
doesn't have lots of unit tests because the unit tests give me confidence in
what the behaviour of the system should be.

Certainly the third point is something I try to bear in mind when doing TDD.
I've found that having someone do a code review after a feature is complete
gives them the opportunity to come in at a high level, look at the program as
a whole, and check that your design makes sense.

------
ebbv
> Sometimes, the best design is one that’s hard to test

I strongly disagree with this statement. The best design for your program is
always the one that's easiest to test; the one that's modular, the one that
separates out dependencies, the one where methods are as atomic as possible.

If you find yourself wanting to write code that is hard to test, then you are
approaching the problem (and/or the solution) the wrong way.

> The ‘purist’ approach here, of course, is that you design data validation
> checks so that you never have to process bad data. But the reality is that
> it’s often hard to specify what ‘correct data’ means

Again I disagree. It should be really clear what correct data means at a low
level. If it isn't, then you haven't fully fleshed out your design, so yeah,
it's going to be impossible for you to test it.

I'm far from a TDD zealot. But on my team we adopted writing unit tests for
everything in the last 6 months and it has been night and day. It is
incredibly useful and the resulting code is so much better.

I think a lot of the author's problems stem from attempting to do this alone
and not having someone else to guide him through how to tackle things that
he's having trouble with. Plus, frankly, he seems to just have a defeatist
attitude about the whole thing.

~~~
gazrogers
The author's hardly a novice when it comes to software design though:
[http://www.amazon.co.uk/Software-Engineering-Ian-
Sommerville...](http://www.amazon.co.uk/Software-Engineering-Ian-
Sommerville/dp/0137035152)

~~~
ebbv
I realize that, but he's still wrong on those points I addressed.

------
insulanian
> ... it encourages you to make design decisions that can be tested rather
> than the right decisions for the program users...

This is exactly why I don't do TDD. I prefer to think about the design and
implement features in a non-distracting way. After I have it shaped, I add
tests to avoid regressions.

Where I do write tests first is when I need to reproduce a bug, so I have the
test, again, to prevent regressions.
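The bug-reproduction workflow sketched in Python (the bug and function are invented for illustration): write the failing test first, fix the code, and keep the test as a regression guard:

```python
# Hypothetical bug report: parse_amount("1,000") used to crash with
# ValueError because int() can't handle thousands separators.
def parse_amount(text):
    # The fix: strip separators before converting.
    return int(text.replace(",", ""))

# The test that reproduced the bug stays in the suite as a regression test:
assert parse_amount("1,000") == 1000
assert parse_amount("42") == 42
```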

~~~
Bahamut
I'm in the same boat, but I add tests almost always not too long after
implementing - I think the more important thing is to write the tests.

------
Ace17
_" Because you want to ensure that you always pass the majority of tests, you
tend to think about this when you change and extend the program. You therefore
are more reluctant to make large-scale changes that will lead to the failure
of lots of tests. Psychologically, you become conservative to avoid breaking
lots of tests."_

In what way would it be different when doing "test-after", instead of "test-
first"?

There's an inherent difficulty in writing robust, isolated and fast tests;
test-first development is not the cause here. Writing good tests is hard, and
it's even harder when your code wasn't designed with testability in mind.

To TDD or not to TDD, that's not the question. You should be able, as a test
writer, to see and isolate the things that prevent testability: I/O, timers,
threads, rand(), minimal computation time, clumsy interfaces, etc. TDD just
puts these things in front of your eyes, early.
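For instance, rand() stops being a testability problem the moment the random source is passed in rather than reached for globally (a sketch with hypothetical names):

```python
import random

# Hard to test: a function that calls the global RNG directly.
# Testable: the source of nondeterminism is passed in.
def pick_winner(entries, rng):
    """`rng` is anything with a `choice(seq)` method."""
    if not entries:
        raise ValueError("no entries")
    return rng.choice(entries)

# Production code would pass the `random` module itself.
# A test passes a seeded Random so the outcome is reproducible.
rng = random.Random(0)
winner = pick_winner(["a", "b", "c"], rng)
assert winner in {"a", "b", "c"}
assert pick_winner(["only"], rng) == "only"
```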

------
leftdevel
Many people here are missing the point. This is not about not writing tests at
all, just about not focusing on them first.

I usually write tests first when I already know what I want, or when testing
manually is really time consuming. Otherwise I don't care when it happens as
long as they are there before pushing to remote.

------
brightball
I've often found it interesting that as the push for TDD has come to the
industry, the rule that I was always taught to enforce backwards compatibility
in your code seems to have fallen out of fashion.

For example, when Golang guarantees that new versions of the language won't
break code that worked on old versions, that's backwards compatibility. These
days that has become a revolutionary feature, while I always considered it to
be a basic expectation.

Because I make a point of maintaining backwards compatibility, I tend to see
very little benefit from TDD; it slows me down significantly. If I'm working
on an MVC monolith, however, eventually NOT having that test suite gets really
scary.

Working with smaller pieces and enforcing backwards compatibility is, IMHO,
more desirable than working with huge code bases that need a huge test suite.

------
dmitrifedorov
TDD is not going to be a good fit for GUI development unless the application
design specifically avoids the "smart UI". Because the article's author states
"as I started implementing a GUI, the tests got harder to write", I'd assume
his UI is indeed "smart".

~~~
Ace17
Exactly. TDD is about unit testing, and unit tests shouldn't test boundaries
(there are other means to test them).

------
Glyptodon
I don't entirely concur, but I do think there's a real problem with areas that
are hard to test. (As a web developer, I particularly run into this with
client-side/browser-side testing -- and yes, there are solutions, but they
often have imperfections and annoying trade-offs.)

I also think the benefits of testing have much more of a relationship to scale
(in terms of system usage/users, system complexity/size, development team
size, etc.) and worst-case failure risk than is often acknowledged. (I don't
suggest _not_ having tests -- you'd regret that real quick. But I do think the
appropriate level of test coverage can vary from project to project and
component to component. Outer layers of the onion probably matter more, etc.)

------
etamponi
The author keeps talking about a "better design" achieved when not using TDD,
and goes on to say that sometimes a good design is a design that is hard to
test. These are subjective, psychological feelings that, in my experience,
lead only to maintenance nightmares and the impossibility of doing any kind of
refactoring without the fear of breaking something. TDD might be difficult to
do at times, but it has a very interesting, objective advantage: it takes the
"good design" buzzword out of psychological and subjective interpretation.
With TDD, good design is testable design. Period. Is it easy to check? Yes. Is
it objective? Yes.

~~~
barrkel
Testable design is often a poor design, mainly because of language
limitations; abstraction and scoping for tests are not necessarily the best
choice for abstraction and scoping for design.

~~~
icebraining
Can you give an example? I'm struggling to think of a language where that
can't be worked around without bending the original code.

~~~
barrkel

      public class Foo {
          private static class Bar {
              public void someMethod() {
              }
          }
      }
    

Unit test someMethod.

More realistic, concrete examples are harder to describe because they involve
an interaction over time between a somewhat vague problem description and
exploration of a solution state space, where the abstractions chosen are fluid
and slide around before they get into a good shape.

I, for one, tend to write code from the bottom up, i.e. creating hypothetical
abstractions, small tools etc. and start composing them to solve the next
problem up the abstraction stack. Then, when I find it doesn't fit quite
right, I adjust, freely throwing away abstractions, rewriting them, reshaping,
until the level of my tools fits the problem better. I gradually build up my
abstraction level until I can move problem domain level mountains with little
effort. Doing this in the form of tests just doesn't work (for me).

Writing code from the top down isn't the right answer either; not all people
think that way, and besides, it can lead to solutions with very ugly
implementations - the grain of the wood should influence the design of the
house, if you will.

~~~
lgunsch
As Uncle Bob describes in his book, you never even make abstractions until you
have proven duplication. You make your abstractions based off of evidence, not
hypothetically planned out.

Kent Beck in his book doesn't recommend top-down, or bottom-up, but rather
from known-to-unknown. Start with what you know, and work toward what you
don't know.

Edit: The books I refer to are Test Driven Development: By Example - Kent
Beck, and Clean Code - Robert Martin

~~~
barrkel
Abstractions aren't merely for duplication removal. Abstractions are for
symbolic chunking; for thinking about things at different levels.

We don't use e.g. units-of-measure types [1] because they remove duplication
from our code; if anything, they add duplication. They do however clarify our
thinking with help from the type checker.

I think a lot in terms of flows / pipes / functional transforms. So I tend to
try and express problems in that shape, because I have a lot of mental tools
that I can apply, and I know they're extremely easy to test in isolation.
Creating a pipe-like thing means reducing it to a simple common push or pull
stream pattern. But it's not duplication I'm removing here; I'm actively
introducing an abstraction because it has proven power for creating good
software and solving problems.
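A sketch of that pipe-shaped style in Python (all stage names hypothetical): each generator stage can be unit tested in isolation by feeding it a plain list, yet the stages compose into a stream:

```python
# Each stage is a generator transform over a stream of values.
def numbers(lines):
    """Parse non-empty lines into integers."""
    for line in lines:
        line = line.strip()
        if line:
            yield int(line)

def evens(xs):
    """Keep only even numbers."""
    for x in xs:
        if x % 2 == 0:
            yield x

def running_total(xs):
    """Emit the cumulative sum so far."""
    total = 0
    for x in xs:
        total += x
        yield total

# One stage tested in complete isolation:
assert list(evens([1, 2, 3, 4])) == [2, 4]

# The composed pipeline:
pipeline = running_total(evens(numbers(["1", "2", "", "3", "4"])))
assert list(pipeline) == [2, 6]
```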

I try and create few, minimal abstractions that can be applied widely. Take a
cue from functional languages and split out types from algorithms; if you make
your types more general, you increase the reusability of your algorithms.
Classical OO design tends to create a lot of types that are specific to the
domain model. I happen to think that OO designs are usually poor; they tend to
have a high code complexity to implemented logic ratio, and require awkward
composition that leaves details hanging out.

I spent 20 years writing OO software and was a big fan, especially in the
early 2000s. I still work in OO languages, but most OO code I read makes me
retch now.

I don't recognize either Kent Beck or Bob Martin as particularly noteworthy
for good architectural design (Beck is on the right path with agile though).
In fact, I blame TDD for Java-itis: proliferation of single-implementation
interfaces (has there ever been a worse idea more often propagated by cargo
culters?), poorly abstracted object graphs, leaky implementation abstractions,
and more.

> Start with what you know, and work toward what you don't know

I've been programming for more than 25 years. A lot of stuff has changed in
that time; what hasn't changed is patterns of abstraction. So I start out with
abstractions, chosen from experience.

Read Norvig on TDD [2] - his experience matches mine. If you know a lot of
software tools (i.e. abstractions, algorithms, approaches), you can apply them
to a problem. TDD is a poor tool for creating new tools, though. And if you
rely on code duplication for creating a tool, well, you're going to have a bad
time with hard problems.

[1]
[https://blogs.msdn.microsoft.com/andrewkennedy/2008/08/29/un...](https://blogs.msdn.microsoft.com/andrewkennedy/2008/08/29/units-
of-measure-in-f-part-one-introducing-units/)

[2] [http://pindancing.blogspot.co.uk/2009/09/sudoku-in-coders-
at...](http://pindancing.blogspot.co.uk/2009/09/sudoku-in-coders-at-work.html)

------
PankajGhosh
One of the points being discussed here is the impact on velocity when tests
are _not_ functional tests and instead validate implementation details. This
slows down refactoring or re-design exercises, which are quite inevitable in
any customer-driven project.

I suggest looking at BDD
([http://guide.agilealliance.org/guide/bdd.html](http://guide.agilealliance.org/guide/bdd.html))
which results in tests that validate scenarios/specification/behavior and are
not coupled too closely with the implementation details.

------
ChemicalWarfare
The TDD/BDD approach can definitely work and result in higher cohesion between
the requirements, design and implementation phases.

The trick is to make sure all phases of the project are factoring this in -
requirements with clearly defined user stories and acceptance criteria; design
docs where each acceptance criteria is covered, etc.

Another thing to point out is "TDD" is a balancing act between integration and
unit tests. There's also a balancing act between the external tests driven by
the dedicated test tool and internal tests included into the app.

------
sitruc
Isn't the point of TDD to create cleaner code that is robust enough for its
purpose? When the author says he is "more conservative" when using TDD, that
seems to confirm that he is doing what works best rather than some kludge.
Isn't that the point? Many comments have the theme of "I loved TDD, and then I
stopped doing it when I didn't have to." That may have to do with experience
in the domain. TDD helps newbs (like me) cut down on mistakes, while seasoned
vets are able to work with more freedom.

------
guzmanovich
I had the same journey as the writer, except I have now come full circle and
am back on TDD. The problem is not TDD; the problem is the combination of unit
testing, mocking and TDD.

If you are able to write tests at the user-story level, i.e. scenario or
functional tests, where you are testing from the top down, then TDD is
actually very helpful, also in the sense of program design and of making you
think very closely about what problem you want to solve.

------
kazinator
TDD:

Write the test case representing "this USB host controller driver has no race
conditions", and then just fill in the code, and out pops a race-free USB host
controller driver!

(Of course, it does nothing so far other than demonstrating freedom from
races, but that's just a small matter of writing more tests for actual USB
requirements and fulfilling those.)

~~~
lgunsch
TDD is not about reduction of bugs, or race conditions. Robert Martin has said
that reduction of bugs is not by itself sufficient to warrant using TDD.

TDD is about achieving better-designed code that is maintainable and readable
for many years. TDD uses unit tests, not integration tests, so the behaviour
of each function is asserted independently. Maybe you have a function that
sets up a data structure at a particular memory location; your unit test then
just asserts that the data structure at that memory location was set up
correctly. You should even be able to unit test functions used in a bootloader
like GRUB.
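A rough sketch of that kind of test in Python using ctypes (the structure, field names and magic value are all made up for illustration):

```python
import ctypes

# A C-style structure that some hypothetical low-level code must initialize.
class BootInfo(ctypes.Structure):
    _fields_ = [("magic", ctypes.c_uint32),
                ("version", ctypes.c_uint16),
                ("flags", ctypes.c_uint16)]

def init_boot_info(buf):
    """Write a BootInfo structure into the given writable buffer."""
    info = BootInfo.from_buffer(buf)
    info.magic = 0x1BADB002  # illustrative magic number
    info.version = 1
    info.flags = 0

# Unit test: run the setup, then assert on the structure at that location.
buf = bytearray(ctypes.sizeof(BootInfo))
init_boot_info(buf)
info = BootInfo.from_buffer(buf)
assert info.magic == 0x1BADB002
assert info.version == 1
```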

~~~
catnaroek
> TDD is about achieving better designed code that is maintainable and
> readable for many years.

It isn't entirely clear to me why this is true. Clearly, TDD forces you to
think beforehand about what you want your program to do, since that's
ultimately what a test suite is: an executable description of what you want
your program to do. However, it doesn't necessarily follow from this that your
code will be well-designed or readable. Tests are about evaluating whether an
implementation conforms to a specification, not whether the design is actually
good. To evaluate a design, you need performance metrics. In other words, you
need an answer to the question: “Is this design actually helping me achieve my
goals?”

Engineering design is somewhat of a black art. A designer has to be both
organized (to formulate and carry out plans) and flexible (to reconcile goals
that may conflict with each other and/or evolve over time). Having a large
toolbox of methodologies and problem-solving heuristics is a good thing, but
it's also important to avoid the kind of mindset where your favorite tool is
the One True Tool, or your favorite problem-solving approach is the One True
Approach.

~~~
lgunsch
Tests in TDD are focused on behaviours required by the problem being solved.
One rule in TDD, as Kent Beck lays it out, is that if you can't design and
complete a test (of a behaviour) in under 10 minutes, you haven't broken the
original behaviour down enough yet. He uses the simple reminder "start small
or not at all". TDD is the very method of evaluating a design. However, keep
in mind
that this refers to the mechanics and details of a more global overarching
design. You still start off with a plan and "bigger design".

TDD in this manner breaks down problems into small manageable chunks of unique
behaviour very specific to the problem they are solving. Combine this with the
refactoring step, and the code becomes simple to understand and readable. It's
very important to note though, that a complex problem will always be complex.
TDD does not remove the complexity of the problem domain.

------
vickychijwani
Justin Searls' "The Failures of Intro to TDD" is relevant to this discussion:
[http://blog.testdouble.com/posts/2014-01-25-the-failures-
of-...](http://blog.testdouble.com/posts/2014-01-25-the-failures-of-intro-to-
tdd.html)

------
vannevar
_It is easier to test some program designs than others. Sometimes, the best
design is one that’s hard to test..._

Sometimes, but in my experience this is rare. Much more often, the easiest to
code solution is both the hardest to test and to maintain, which is why having
some TDD discipline more often results in better design, not worse.

------
andrewclunn
I recall a situation where a manager was pushing for TFD in an Agile
environment. What a nightmare; the two just don't mix. The biggest issue isn't
the approach itself, but what happens when it hits buzzword level with
managers who have no idea how development really works.

~~~
firepoet
This is true. These kinds of changes should come from the professionals -- it
is, after all, their reputation on the line. As a software developer and
executive simultaneously, I walk a fine line -- usually I adopt a practice
myself and show people what it has done for me and the maintainability of my
code. Then if people are curious I help. If not, that's fine. They'll come
around or go somewhere else where engineering is less rigorous.

It's extremely tricky to avoid being threatening as a leader. Your position
gives you power nobody else believes you deserve. Even when, sometimes, you
do. Not saying I deserve it, but there are those that I've worked with that
did, and I only noticed in hindsight. F'ing limbic system.

~~~
abawany
In my scenario, not only was the management mandating TDD, they were also
mandating a code coverage minimum, leading to joyless soul-crushing unit tests
such as those for getters and setters to boost coverage. The projects
ultimately were not successful, and this same swarm, I suspect, is now making
some other developers' lives very miserable with the same broken-record
absolutist ideologies.

------
tamana
TDD gets in the way of writing code that doesn't work. Experimental, toy, and
research code doesn't need to work. Everyone prefers to believe they are
writing research code and doesn't want to be bothered about whether it solves
a problem correctly.

------
justifier
i used to keep track of bugs i found from creating or updating tests as a
preemptive response to those who claim active testing is a waste

now i just wonder why people are so against it

i enjoy the time i spend trying to break my own code

------
whatnotests
TDD works well when tests can express the intent of the code in question,
rather than simply its implementation.

Intent-focused tests tend to be easier to fix during a major refactor.
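A tiny contrast in Python (the function is hypothetical): the intent-focused assertion survives any refactor that preserves behavior, while a test pinned to the implementation trick would not:

```python
def dedupe(items):
    # Current implementation detail: dict.fromkeys preserves insertion order.
    return list(dict.fromkeys(items))

# Intent-focused tests: they state what dedupe is FOR, not how it works.
# Swapping in, say, a set-plus-loop implementation would still pass them.
assert dedupe([3, 1, 3, 2, 1]) == [3, 1, 2]
assert dedupe([]) == []
```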

------
a_imho
[https://news.ycombinator.com/item?id=3033446](https://news.ycombinator.com/item?id=3033446)

------
prophetjohn
I'd describe myself as a "TDD guy." Probably the type that most of the TDD-
negative comments are about. I've got quite a bit of experience doing TDD in
large applications, mostly web or services for consumption by a web-app and
mostly Ruby and JavaScript (a bit of Java as well). Here are some of my
thoughts.

\- The best test that you can write is a complete system integration test.
These are usually driving a browser and integrating with a database for web
apps, or making HTTP requests if it's an API / headless service. They're the
best because they guarantee that the system works as a whole given whatever
initial setup you do.

You can totally write these tests first. And you should. It requires you to
stop and think about how the system should behave at the boundaries
(interaction with the user and external systems such as a database or API).
Then as you start writing the code, you don't have to do a bunch of clicking
in the browser; you run a test that takes 2 seconds to tell you whether your
change worked.
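To make the idea concrete, here's a minimal sketch of a black-box test at the boundary, in plain Ruby. The endpoint (`CREATE_USER`) and its validation rule are hypothetical stand-ins for a real Rack-style app behind a router; the point is that the test only sees the request in and the response out, so the internals can be rewritten without touching the assertions.

```ruby
require "json"
require "stringio"

# Hypothetical Rack-style endpoint: request body in, JSON response out.
# A real app would sit behind a framework; this stands in for one
# endpoint so the test can stay at the HTTP-shaped boundary.
CREATE_USER = lambda do |env|
  params = JSON.parse(env["rack.input"].read)
  if params["email"].to_s.include?("@")
    [201, { "Content-Type" => "application/json" },
     [JSON.generate("email" => params["email"])]]
  else
    [422, { "Content-Type" => "application/json" },
     [JSON.generate("error" => "invalid email")]]
  end
end

# Helper that plays the role of an HTTP client for the sketch.
def post(app, payload)
  app.call("rack.input" => StringIO.new(JSON.generate(payload)))
end

# Black-box assertions: data in, expected response out. No knowledge of
# how CREATE_USER validates or stores anything.
status, _headers, body = post(CREATE_USER, "email" => "a@example.com")
raise "expected 201" unless status == 201
raise "unexpected body" unless JSON.parse(body.first)["email"] == "a@example.com"

status, = post(CREATE_USER, "email" => "not-an-email")
raise "expected 422" unless status == 422
puts "boundary tests passed"
```

In a real suite you'd drive an actual server over `Net::HTTP` or a browser driver instead of calling the app directly, but the shape of the test — request in, response out, zero internal knowledge — is the same.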

\- The best test that you can write is also the slowest. Got some complicated
flow that behaves differently in 10 different contexts? Full end-to-end tests
are too slow for this. Six months of writing tests like this and you're
looking at a 10+ minute test suite, _at best_.

\- If the best test you can write is also the slowest, you need more tests
somewhere. If you don't have those tests somewhere, then code you write today
is going to break at some point and you won't know until it hits production.
This sucks. This is where unit tests come in.

\- Unit tests should test exactly that: the unit. That means the behavior of
one class or function. The behavior of an object I depend on is _not_ my
behavior. Therefore, if a unit has dependencies, you should be mocking them
out. It's true that this in a way tests implementation, not behavior. But
thought about another way, the behavior of one unit might be to call a method
on one object and pass the result to another.

Thinking about unit tests in this way and mocking collaborators prevents the
issue where you make one change and break a _ton_ of tests. It also prevents
you from creating a bunch of duplicate setup when your code is broken up into
lots of small objects / functions. If you don't mock your dependencies, you're
setting up data for something that isn't relevant until multiple levels down
the dependency chain and it's not obvious why that setup is necessary.
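A small sketch of what that looks like in plain Ruby. The names (`PaymentProcessor`, `FakeGateway`) are hypothetical; the point is the shape: the unit under test delegates to a collaborator, and the test swaps in a hand-rolled mock so only _this_ object's behavior — "call the gateway, wrap the result" — is verified.

```ruby
# The unit under test. Its behavior is to delegate the charge to a
# collaborator and wrap whatever comes back.
class PaymentProcessor
  def initialize(gateway)
    @gateway = gateway
  end

  def charge(cents)
    result = @gateway.charge(cents)
    { ok: result == :approved, amount: cents }
  end
end

# Hand-rolled mock collaborator: records the interaction instead of
# doing real work, so the test never depends on the real gateway's
# setup, network access, or internal logic.
class FakeGateway
  attr_reader :charged

  def charge(cents)
    @charged = cents
    :approved
  end
end

gateway   = FakeGateway.new
processor = PaymentProcessor.new(gateway)
result    = processor.charge(500)

raise "gateway never called" unless gateway.charged == 500
raise "wrong wrapped result" unless result == { ok: true, amount: 500 }
puts "unit test passed"
```

A mocking library (rspec-mocks, Mocha, etc.) would replace the hand-written fake, but the principle is identical: the test asserts on the interaction with the collaborator, not on the collaborator's own behavior.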

\- Sometimes things are just too complicated and mocking all the collaborators
of an object just isn't worth it. These situations should be pushed as far
toward the bottom of your abstraction hierarchy as possible, and then you should
do an integration test from that class / function down to the bottom with no
mocking.

\- TDD isn't the only way to have great test coverage or the only way to write
well-decoupled components, but it's hard to have bad test coverage and write
highly coupled components when test-driving correctly. And when you're a year
or more into building an application that's a cornerstone of your business,
it's going to be super valuable to have the flexibility of a well-tested,
loosely-coupled application. You won't have to spend hours in manual testing
to confirm that you haven't broken anything and changes will be easier to
implement.

------
brucehauman
The general rule being that it's completely dependent on the situation.

------
whatnotests
Yes, please give up on testing your code. When you get fired, I'll take over
your job.

------
am185
think-driven development =)

------
sauronlord
From the article: "I deliberately decided to experiment with test-first
development a few months ago. As a programming pensioner, the programs I am
writing are my own personal projects rather than projects for a client with a
specification so what I have to say here may only apply in situations where
there’s no hard and fast program specification"

All my side projects have unit AND automated feature/UI tests...and it is one
of my favorite parts of software development: having confidence and clarity in
how my creation works.

Soft.

