
Is TDD Dead? (2014) - cik
https://martinfowler.com/articles/is-tdd-dead/
======
VHRanger
The best talk on this topic, IMO, is Ian Cooper's "TDD, Where did it all go
wrong?"
[https://www.youtube.com/watch?v=EZ05e7EMOLM](https://www.youtube.com/watch?v=EZ05e7EMOLM)

Couple of notes:

\- TDD, much like scrum, got corrupted by the "Agile Consulting Industry".
Sticking to the original principles as laid out by Kent Beck results in fairly
sane practices.

\- When people talk about "unit tests", a unit doesn't refer to the common
pattern of "a single class". A unit is a piece of the software with a clear
boundary. It might be a whole microservice, or a chunk of a monolith that is
internally consistent.

\- What triggers writing a test is what matters. Overzealous testers write a
test for each new public method in a class. This leads to testing the
implementation details of the actual unit, because most classes are only
consumed within a single unit.

\- Behavior-driven testing makes the most sense for deciding what needs tests.
If it's behavior required by the people across the boundary, it needs a test.
Otherwise, tests may be extraneous or even harmful.

\- As such, a good trigger rule is "one test per desired external behavior of
the unit, plus one test per bug fixed". The tests for bugfixes come from
experience -- they delineate tricky parts of your unit and enforce working
code around them.
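As a sketch of that trigger rule in Python (all names hypothetical): one test per desired external behavior of the unit, plus one regression test per bug fixed.

```python
# Hypothetical unit: a price calculator consumed across a module boundary.
def total_price(items, discount_code=None):
    """Sum item prices, applying a 10% discount for code 'SAVE10'."""
    total = sum(items)
    if discount_code == "SAVE10":
        total *= 0.9
    return round(total, 2)

# One test per desired external behavior of the unit:
def test_totals_items():
    assert total_price([10.0, 5.0]) == 15.0

def test_applies_discount_code():
    assert total_price([100.0], discount_code="SAVE10") == 90.0

# Plus one test per bug fixed -- it delineates a tricky part of the unit:
def test_empty_cart_is_zero():  # regression for a hypothetical crash on []
    assert total_price([]) == 0
```

Note the tests target what callers across the boundary observe, not how `total_price` computes its result internally.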

~~~
mumblemumble
If I recall correctly, another very important point he makes in that talk is
that it's fine to delete tests. TDD tends to result in a lot of tests being
created as a sort of scaffolding to support initial development. The thing
about scaffolding is, when you're done using it, you tear it down.

I don't think he mentions it during the talk, but the next step after deleting
all those tests is a little bit more refactoring for maintainability. Now that
you've deleted all the redundant tests, you can then guard against future
developers unwittingly becoming tightly coupled to your implementation
details, by taking all the members that used to only be exposed for testing
purposes, and either deleting them or making them private.

~~~
VHRanger
Yes, you should delete tests for everything that isn't a required external
behavior, or a bugfix IMO.

Otherwise you're implicitly testing the implementation, which makes
refactoring impossible.

A big smell here is if the large majority of your tests are mocked. This might
mean you're testing at too fine-grained a level.

~~~
tonyedgecombe
So today I have been writing a lexer and parser. The public interface is the
parser, the lexer isn't exposed.

The problem is if I delete all the tests for the lexer then any bugs in the
lexer will only get exposed through the parser's tests.

This makes no sense to me.

~~~
VHRanger
The lexer is a unit then.

The lexer has a clear boundary from the parser.

The issue that takes experience here is how to determine what's a unit. "The
whole program" is obviously too big. "every public method or function" is
obviously too small.

Just be pragmatic.
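In that framing, the lexer's token stream is its external behavior, even if only the parser consumes it, so it earns its own tests. A minimal Python sketch (all names hypothetical):

```python
import re

# Hypothetical lexer with a clear boundary: source text in, token list out.
# (Characters matching nothing are silently skipped -- fine for a sketch.)
def lex(source):
    tokens = []
    for match in re.finditer(r"\d+|[+*()]|\s+", source):
        text = match.group()
        if text.isspace():
            continue  # whitespace carries no meaning here
        kind = "NUM" if text.isdigit() else "OP"
        tokens.append((kind, text))
    return tokens

# The test exercises the lexer at its own boundary, not through the parser:
def test_lexes_numbers_and_operators():
    assert lex("1 + 23") == [("NUM", "1"), ("OP", "+"), ("NUM", "23")]
```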

~~~
AnimalMuppet
> "The whole program" is obviously too big.

Of course.

> "every public method or function" is obviously too small.

Why "obviously"? If it's public, someone outside the class can call it. That's
an external behavior.

~~~
VHRanger
If the class is only consumed in the context of one code unit (module,
service, whatever) then the class itself is an implementation detail.

------
dragontamer
I've been reading "Large Scale C++" by Lakos. There are two kinds of code
Lakos writes about:

1\. Application code -- Fast changing, poorly specified code. You need to have
a rapid development cycle, "discovering" what the customer wants. Your #1 job
is pleasing the customer, as quickly and as reliably as possible.

2\. Library code -- Slow changing, highly specified code. You have a long,
conservative development cycle. Your #1 job is supporting application
programmers.

TDD probably works for #2, less so for #1. Furthermore, we all dream of being
library developers (that our code and specifications are stable, and that our
code can last decades). Alas, most of us are Application developers, and the
lifespan of our code isn't really that long.

Recognizing the lifespan of your code, as well as the innate goals of your
team (quick and dirty application style, or slow and careful library style) is
important.

\------------

Mixing up the styles causes issues. If you write library-style code for an
application, the specification will change under your feet and everything
needs to be rewritten to the new whims of your new customers.

If you write application-style code for a library, you yourself won't be
"stable enough" to support your peers, and no one will want to use your
library.

~~~
slavik81
I find this application/library distinction hard to reconcile with the many
long-lived, stable applications that exist. AutoCAD, for example, is older
than I am.

~~~
dragontamer
Any large scale program will have both "application" parts and "library"
parts.

Let's take AutoCAD as a long-running example. Its first release was in 1982,
and it had an update this year, in 2020. What's a good example of "application"
vs "library" code here?

Let's take the user interface, which is almost always "application" style, with
short lifespans. Back in 1982, you probably had to write a custom serial-
communicator to interface with the mouse (or trackball). Eventually, Windows
is released and a standard mouse-interface becomes common practice.

\-------

A few decades later, the 3-button mouse with scrollwheel (with the wheel being
the 3rd button) becomes popular. New features get added to the application, and
the overall design of the UI has to change for modern sensibilities.

Then Ribbon happens. Hate it or love it, Windows programs are now Ribbon-
based. Gotta overhaul the UI AGAIN to match the changing times.

This progression from 2-button mouse -> Windows driven mouse -> 3-button mouse
with scrollwheel -> Ribbon -> touch-enabled (??) is "Application code",
requirements that change with the times. Every few years, the code interfacing
with the user was largely rewritten, to match the (then modern) expectations
of its userbase.

\-------

Of course, there's the "library chunk", which probably solves geometric
constraints or something like that. That part may have never changed
throughout the life of AutoCAD.

\-------

Application code CHANGES. That's the important bit. You cannot expect
applications to look the same over decades. I'd be very surprised if AutoCAD
still had their DOS GUI lying around ([https://www.scan2cad.com/wp-
content/uploads/2016/06/autocad_...](https://www.scan2cad.com/wp-
content/uploads/2016/06/autocad_1982.jpg)). That sort of code is thrown away
when it is no longer fashionable.

------
DigitalSea
Sadly, in this day and age of development and the need to constantly ship
things, TDD has been dead for a long time. In my 12 year career, I have heard
lots of people talk about test-driven development, but I've never seen it in a
workplace (at least none I've worked at).

On my own personal projects I have dabbled with TDD and I've seen the benefits
it can provide, but it does make even simple programming tasks take a lot
longer. Sadly, companies these days (especially during the pandemic) can no
longer afford the luxury of development taking longer, even if it does mean
the end result will most likely be cleaner and have fewer bugs. The company I
work for sees shipping potentially buggy code and fixing it as bugs are
reported as an acceptable development practice.

With the advent of automated builds and deployment processes, it is way too
easy to quickly ship code and roll back bad releases or push out emergency
patches. Things don't have to be perfect the first or second time around. To
non-technical executives, the optics of seeing code go out and features
released are a lot better than the optics of things taking longer to develop.

~~~
rzwitserloot
That should itself be rather telling. TDD is extremely well known, in my
experience. I bet if I stuck a microphone under the nose of random passersby
at any development convention [1], 90%+ would be able to tell me what 'TDD' is
short for, and even if they couldn't, they'd have heard of the concept at
least. Hell, I bet most would tell me they 'aspire to do it'.

And yet, more or less nobody does.

So either it is next to impossible to begin doing it (seems like a bizarre
conclusion), or, perhaps more likely, nobody wants to do it, and the few dev
teams that do manage to do this have not managed to turn that into a
competitive advantage. Which makes the value of TDD rather questionable based
on simple evidence.

To explain this observed behaviour that TDD is clearly not a competitive
advantage[2], I can name a million pet theories. But without going into any of
those, the sheer fact that it's __this__ rare in practice says a lot, no?

[1] I mostly go to java related ones, maybe it's less well known amongst other
communities.

[2] What other explanation is there? Clearly not 'ah, but, TDD is brand new
and you have to give it some time for teams to get familiar with it, and for
the concept to percolate through, maybe wait for tooling support to catch up'
\- TDD's quite an old concept!

~~~
DigitalSea
Exactly. I think most experienced developers know about TDD, maybe they have
even tried it on a personal project. But selling it to a company that has
commitments to investors, paying customers, and an executive team who might
not all have technical backgrounds makes TDD a hard sell. How do you explain to
people who don't get it that things will take longer?

One of the biggest problems with TDD is that it kind of relies on having
clearly defined specifications. I don't know about you or other people here,
but I've worked in a lot of places (many even called themselves Agile) where
the work was not properly scoped at all. If you start doing TDD and the scope
isn't clear, the goal posts keep on moving and things just perpetually take
longer.

I think it's all about cost in the end. It's cheaper for companies to ship
buggy code and then iteratively patch the bugs. Unless they're massive
showstopper bugs (which normal tests should be catching anyway), it probably
still works out comparatively cheaper to fix bugs as you find them.

~~~
catwind7
> One of the biggest problems with TDD is that it kind of relies on having
> clearly defined specifications

That's very true. I think it's an easier sell when you have a relatively
stable set of specifications (accounting software core logic) where the rate
of change is low but the cost of regressions is high. But I think you can
tackle the same problem space with good unit tests instead of enforcing a
test-first mindset.

In situations where work is vague, spending more upfront time thinking about
architecture is much more useful (design docs?) imo. Having a set of unit
tests for a crappy, entangled system is pretty costly.

------
jpz
TDD for a complex domain generally means (at least in an OO design) you cannot
do exploratory coding to discover what objects you need and what design
decisions you will make.

After all, you have to write the tests for the classes, before the classes
exist.

When you do so, then start to fill in the code, and then realise you need to
refactor, you have a heap of tests you need to refactor. It always appeared to
me highly inefficient and predicated on an assumption that the author can
express his class hierarchies well ahead of the coding.

In my experience, the class design is highly iterative, even the names of
methods, what methods you have, etc - to have to write the tests before you've
written the code just creates a huge amount of impediments to the flow of
expressing a solution.

This is not a criticism of unit testing - but of TDD.

~~~
mayneack
In the version of TDD I've been exposed to, you only write the minimum number
of tests to cover your next feature. It's still iterative.

I do more TDD on exploratory work than something fully formed because I can
work in a narrow scope without having to grasp the whole project.

~~~
closeparen
If you only know what the _feature_ is, you're prepared to write functional
and/or integration tests, but not unit tests. Unit tests closely wrap the
details of the implementation.

------
tel
Frankly, good type systems displaced TDD for me. They do a better job of
getting at the goals of TDD than TDD does. Types in good systems (OCaml, Rust,
TypeScript, Scala, Haskell, others) are 100% laser-focused on depicting good
public interfaces and laying out their behavior.

Sometimes you need a test or two to really nail down this behavior. That'll
give you a TDD-like flair. But it's not the same because you've already
invested so much productive thinking time into the interface driven purely by
the types. At that point, TDD is good hygiene, but not a transformative
practice.

~~~
terrortrain
For js->ts, this is right on the money.

When I started writing unit tests way back when, half the tests were just
checking what happens if you give a function strange arguments. TS now does
that job for me.

Another thing that has cut down on the number of tests is switching to pure
functions for most things, and trying to isolate side effects as much as
possible, so you really don't need to write too many tests around them. If a
function only has one or two lines, typically types are enough to catch
potential issues.
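The pure-function point is language-agnostic; a small Python sketch of the same split (names hypothetical):

```python
# Pure core: deterministic, trivial to test, no mocks needed.
def apply_discount(total, rate):
    return round(total * (1 - rate), 2)

# Side effects pushed to a thin edge; only this wrapper touches I/O,
# so very few tests are needed around it.
def charge_customer(gateway, total, rate):
    gateway.charge(apply_discount(total, rate))

def test_apply_discount():
    assert apply_discount(100.0, 0.15) == 85.0
```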

~~~
mattmanser
It's one of the things that really frustrates me with the ASP.net team.
They're obsessed with unit testing, and have made really stupid decisions that
mean tons of stuff now gets injected instead of just passed, all in the name
of unit testing.

But C# is a typed language and generally doesn't need reams of unit tests, so
all it does is make the code unnecessarily complicated.

My big bugbear was when they idiotically made the config injected. Of all the
things that should be super simple to use, and definitely not injected, it's
config. It only changes between environments.

------
didip
Over the years I have some notes about TDD:

* Writing tests first is simply unnatural. Most developers need an exploratory session to even discover what they are supposed to do.

* That's because our spec is always vague and no amount of Agile process can help it.

* If you have a bunch of idempotent functions, they are the easiest to write tests on. So I would start there.

* Some languages are easier to do unit tests in, for example Go. It gives you everything you need: test runner, benchmarker, data race checker, a testing package in the standard library, etc.

* This DSL: `describe(){ it "should work" { result.should eq true } }`, is nuts and counterproductive. People are already not writing tests and you are adding more friction?

* To write or not to write mocks? This is a controversial one, but in my opinion it is simply practical to test directly against the database you are using. Your CI/CD should just prep the database you need instead of mocking. Once again, people are already not writing tests; don't add more friction.

~~~
jniedrauer
> Your CI/CD should just prep the database you need instead of mocking.

This is a hard one. A lot of tests require setup steps even if you're using a
real database, so it's not even that different from a mock. And if you can't
give each test its own database to work with, then you have to run your tests
serially. I'm working on a codebase that spins up 4 separate database engines
in docker, and each test has quite a few setup steps. The result is brittle
tests that take hours to run, so they're not very useful as a release gate.
I've been bitten by mocks before too though, so I don't have good answers.

~~~
sedatk
We did that with transactions. Every test gets its own view of the database
and rolls its changes back when it's done. These tests take much longer than
unit tests, but it's worth it in the end: writing them is easy, there is no
need to create mocks, and you also test whether your DB queries work.

------
mothsonasloth
Controversial Opinion

TDD for me has always been something to show off at an interview: performing a
beautifully choreographed dance of "Red, Green, Refactor" to demonstrate your
skills.

TDD is only useful for me when I know the structure of my code, i.e. I am
fixing a bug or adding a feature to an existing application.

However, when you are "in the dark" in the early days of developing a new
service, TDD can definitely slow you down if you don't know what your
architecture is going to look like (maybe that's my fault for not doing enough
whiteboarding?)

~~~
digitalsushi
Your last sentence is beautiful. You're on the journey.

If TDD is slowing a developer down because they don't know what the
architecture is, it's a strong indicator that the plan isn't complete.

By the time TDD is being used, it should be as obvious what to do as unpacking
a U-Haul truck full of boxes parked in front of an empty house with the door
propped open.

~~~
mattlondon
> By the time TDD is being used, it should be as obvious what to do as
> unpacking a U-Haul truck full of boxes parked in front of an empty house
> with the door propped open.

<sarcasm>By the time TDD is being used, you should already have known way
ahead of time which house you were going to move to, and simply had your
Amazon orders shipped to that address in the first place.

If you have to move your boxes at all, your design was clearly incomplete
originally and you should not have started buying belongings until you knew
the final house you were going to live in first.</sarcasm>

Real life is too messy for TDD 99% of the time, I find. Unexpected things
happen (e.g. you move house) that mean you can't know everything ahead of
time.

~~~
digitalsushi
I hear you. But I feel that saying real life is too messy for TDD 99 times
out of 100 is an appeal to fatalism.

I believe, in good faith, that more often than one percent of the time, the
existence of a good plan is what enables TDD to be the successful
demonstration it claims to be.

The trust that we grow as practitioners of good software architecture is what
enables us to have these talking points. I believe that we should feel enabled
to discuss how to succeed, and less so how to fail.

------
aazaa
If so, what replaced it?

What methods do teams use instead to ensure that:

1\. software works as advertised

2\. software can be refactored

If there's "no time" to write tests before code, it seems likely that there
will also be "no time" to write them after.

If there are no tests, or test coverage is spotty, refactoring is going to be
well-nigh impossible. If refactoring isn't done, the implementation is likely
to be brittle. If the implementation is brittle and no tests exist, few developers
will want to touch the code for fear of it breaking. If no developers touch
the code to keep it well-factored, the code will rot.

Now maybe this is the plan all along: code is written to decay and eventually
disappear. But it doesn't sound like a recipe for long-term success.

So whenever these discussions about the use of TDD come up, I'm very curious
about the specific ways teams address the two points I raised above.

~~~
jdmoreira
Believe it or not, good typed languages with a lot of compile-time checks are
probably one of the main reasons why TDD is not that big. Sure, these won't
catch logic errors, but most developers don't write that much heavy logic code
anyway. And senior developers are capable of writing mostly straightforward
code that is easy to debug.

~~~
JMTQp8lwXL
Type-checking, while helpful, can't always catch logical programming errors
that result in user-facing bugs. If it could, we would have observed a marked
increase in software quality with the advent of typed languages, but we
haven't. Quality depends on more than just a typed language, whether that be
unit tests, integration tests, etc.

~~~
jdmoreira
I strongly disagree, having years of experience in both Objective-C and Swift
codebases/products.

~~~
JMTQp8lwXL
Disagree on general software quality? I've been an iOS user for about a decade
now. Seen a couple of 'bad' (extremely buggy) apps here and there, but the
quality seems relatively constant over time (possible sample bias).

~~~
jdmoreira
Great product teams will build great products independently of Objective-C or
Swift, but you will need far fewer resources to deliver the same quality
using Swift and modern frontend architectures.

I would also doubt the majority of apps in the App Store today are using
Swift, at least in the sense that I mean it. It's not enough to use Swift; you
actually need to know how to model your problem well using its type system,
and that takes experience. A lot of devs simply focus on solving narrow
problems without giving much thought to how much more strict they can be with
the language.

------
Bokanovsky
Over the years I've worked with many developers who don't write ANY unit
tests, relying only on integration tests, and this has caused severe bugs that
could easily have been caught by unit tests. This has cost the companies
they're working for a fair bit of money.

I've called these developers out, and they often seem to be against unit tests
because they think unit tests slow them down, when in reality the cost of
cleaning up afterwards is higher.

When you start writing code with unit tests in mind, you generally follow best
practices, and start to realise when a "unit" is too big and needs to be split
up into smaller units. I've also found that the anti-unit-test developers I've
worked with commonly aren't keen on mocking stuff out either (but again,
that's all anecdotal).

I find Fowler's guidance on the test pyramid is always worth considering.

[https://martinfowler.com/articles/practical-test-
pyramid.htm...](https://martinfowler.com/articles/practical-test-pyramid.html)

~~~
aroundtown
As one of those developers, the problem I face is learning how and why to
write tests. Most of the tutorials I've read on the matter test things that
are too trivial, and test too often. It also doesn't help that the people I've
worked with don't have a clue how to do it either, or that those who do think
everybody should just get it and can't be bothered to explain it.

~~~
codethief
I recommend reading The Art of Unit Testing ->
[https://www.manning.com/books/the-art-of-unit-
testing](https://www.manning.com/books/the-art-of-unit-testing) . At least
this is the book I learned TDD from more than 10 years ago and the knowledge I
gained from the book has proven to be timeless.

------
gfiorav
I think like 80% of people in the comments are trying to find refuge in the
idea that "TDD is dead" to justify not writing tests.

Not going to argue, I just hope you realize this will come back and bite you.
This profession requires discipline, like any other.

~~~
rement
It's okay. I'll be working somewhere else in two years and someone else can
figure out how to fix the bug-ridden code I wrote /s.

~~~
disgruntledphd2
And that person, if they have any sense, will write tests.

Source: I am that person (in general, obviously not in specific).

------
arvidkahl
I've always had trouble with TDD. I'm a mostly self-taught dev, and I had my
first jobs in high-velocity startups that were themselves made up of engineers
who preferred a quick-and-dirty approach.

Later in my career, I've worked for more traditional software businesses, but
they didn't encourage testing either. It was always an afterthought, and time
was better spent on building flashy and fancy new features to pacify
customers.

When I founded my first bootstrapped businesses, it was the same thing: things
had to be built quickly, almost always as easily scrapped experiments. A SaaS
that is well-tested but missed its market opportunity window was something I
didn't want to risk.

Even with the SaaS FeedbackPanda that finally worked out (and which I sold
with my co-founder last year), testing was more or less non-existent. We truly
tested in production, and trusted the infrastructure and framework choices we
made to bear most of the burden. It worked out for us, and even our
acquirer didn't expect much in terms of TDD in the properties they were
looking for.

I have the feeling that the concept of rapid prototyping has developed into a
cultural phenomenon that expanded into software engineering, and people have
adapted to it.

~~~
bdavis__
s/expanded into software engineering/replaced software engineering/

------
guru4consulting
In my experience:

libraries and frameworks - TDD is required and highly effective

for web apps, business CRUD apps in loosely typed languages like
Javascript/Python - TDD can marginally help, especially with checking types,
validations, etc.

for web apps, business CRUD apps in strongly typed languages like Java - TDD /
Unit Testing does not add much value. skip it.

I get more confidence when I run tests for the deployed web page (or API) with
actual test data in real time.

~~~
gizmo385
> loosely typed languages like Javascript/Python

Isn't Python strongly dynamically typed?

~~~
guru4consulting
Yes, Python is strongly typed, which is much better than JavaScript's weak
typing. But the dynamic types still mean that potential bugs are caught not by
a compiler but at runtime (that means the live production environment!).

Static vs dynamic typing is based on WHEN the types are checked, i.e., during
the compilation phase or the runtime phase.

Strong vs weak typing is based on HOW strict or loose the type check is, i.e.,
weak types make many implicit assumptions and are very liberal in allowing
different types to be referenced interchangeably or added together.
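A two-line Python illustration of both axes:

```python
# Strong: no implicit coercion -- "1" + 1 raises instead of yielding "11".
# Dynamic: the check happens only when this line executes, not at compile
# time, so an untested code path can hide the bug until production.
try:
    "1" + 1
except TypeError:
    print("caught at runtime, not by a compiler")
```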

~~~
randompwd
> apps in loosely typed languages like Javascript/Python

> Python is strongly typed

Another of Schrödinger's animals.

------
AcerbicZero
It seems pretty obvious that TDD was a good idea for some folks, but it became
a meme long before most of us were ever exposed to it. By that point you
couldn't just write tests and do dev work; you had to join the church of TDD
if you wanted to play. Then some lazy ruby scripter like me gets told to write
tests, not code, and we end up with another layer of glorified middle
management to make sure "everything was written with a test".

Eventually those of us who have work to do throw away the junk, get things
done, or move on. It's a pretty natural progression for most bureaucratic
organizations.

------
brainless
I am surprised this article exists, but then I noticed it is from 2014. TDD is
not a magic bullet, but whenever I have ventured into codifying a critical
part of business logic, I have written the test first. Not always unit tests;
many times an integration test. But the confidence I get when adding/modifying
things around that business logic is just mind-blowing.

~~~
bdcravens
Wouldn't you get that same confidence if you wrote verification tests instead
of tests first?

~~~
brainless
Actually writing tests beforehand means I'm asking myself what I want to
achieve, while a verification test is more like saying that whatever the
output is, that's the right thing.

------
tobyhinloopen
The number of people admitting to not doing TDD surprises me. I wouldn’t dare
to write any production code without a comprehensive suite of tests.

Most projects I work on have about 50% production code and 50% test code,
especially if I have a say in it. I simply won’t take any responsibility for
my code if I cannot test it.

Funniest thing is that the projects that are well tested are also the projects
that just work, rarely trigger the error notifier, and rarely get bug reports.
The errors that are present are usually due to 3rd party API outages, database
outages, or abuse.

~~~
manmal
TDD != writing tests; it means that you write the tests before you code. Many
people write tests for their production code, but they mostly add them after
at least some of the code has been written.

The idea is that you'll understand the problem space better by writing the
test first. Which is IMO a valid assumption. But, at least for me, the problem
space (interfaces etc.) is often a bit too fuzzy initially, such that it's
easier to just write the first draft, and THEN test that.

~~~
tobyhinloopen
I know what TDD means. I write tests before I write my code.

Usually I write out a set of tests (this should do X, Y and Z but not H, I,
J) and then I implement whatever thing I need to add.

Same with bug fixes or changes. Bug fix: reproduce in tests, then fix bug.
Change: change or create new tests, then update code.
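That bug-fix loop in miniature, with a hypothetical `slugify` function (Python):

```python
# Step 1 (red): capture the reported bug as a failing test first.
def test_slug_is_lowercase():
    assert slugify("Hello World") == "hello-world"

# The buggy draft the report came in against -- the test above fails on it:
def slugify(title):
    return title.replace(" ", "-")

# Step 2 (green): fix the code until the reproduction passes.
def slugify(title):
    return title.lower().replace(" ", "-")
```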

~~~
regulation_d
> The amount of people admitting not doing TDD surprises me. I wouldn’t dare
> to write any production code without a comprehensive suite of tests.

This suggests that you don't understand that people can achieve the stated
goal of "a comprehensive suite of tests" without TDD.

Just to be clear, people can be passionate about testing without being
passionate about TDD.

~~~
Chris2048
Or they can do testing, not do TDD, and be passionate about neither.

------
m3kw9
With the speed at which management expect deliveries, TDD has been replaced
with Tech Debt Development

~~~
AgloeDreams
ooo I like this.

------
jbob2000
The design damage from TDD is very real; I'm unsure if the complexity it adds
is of any real value. But once you turn your mind to it, it becomes second
nature. I wonder if people's woes about it are because it's a hard habit to
change?

Regardless, TDD doesn't work for my org, it's too expensive. Our business
logic is very unstructured, requirements are built up over years of projects
being layered on top of each other. Decisions about business logic are made on
a whim and need to be implemented quickly to support the rest of the
organization. Given how quickly and randomly the work changes, I'm not tempted
to implement anything further than automated smoke tests. Besides, I have a QA
team that is triple the size of my development team, it's their responsibility
to test the end to end solution.

~~~
jrochkind1
> Our business logic is very unstructured, requirements are built up over
> years of projects being layered on top of each other.

That sounds to me like a codebase I'd be terrified to make changes in
_without_ extensive test coverage. (Whether the code was written with "TDD" or
not is a slightly different question). But I guess it doesn't work out that
way for you?

Ah:

> Besides, I have a QA team that is triple the size of my development team

Sure, I guess that is an alternate approach to tests. I have never worked with
a formal QA team that extensive, but I'd guess they have to have various
scripts and explicitly written out acceptance criteria and such? That's
basically a form of 'tests', just in human language and human testers, not
code. (I also wonder if the QA team is actually using some forms of automation
that look a lot like tests, just they write/maintain them instead of
"developers"?)

With a team three times the size of the dev team, it's definitely not a
_cheap_ alternative, but I guess I could believe it's cheaper/more effective
than trying to have automated test coverage for your code (or a combo of much
smaller QA team with some test coverage), like I could believe there is _some_
context where that's true, it seems unlikely to me it will be widely true. But
whatever works.

The number of software development projects that lack both sophisticated QA
operations like that and good test coverage is probably bigger than the number
doing either, though.

~~~
sumtechguy
It is a fairly traditional way of doing it. TDD usually means you kinda know
what is going on up front. If you work in an org where the sales guy can make
up a feature and you need it done yesterday, needing time to write tests can
be a tough sell: 'that's what we got QA for'. A refactor at this point is a
daunting task, and would be a decently expensive one. And what comes out the
other end is, from the end user's point of view, effectively the same.

I personally like working with a decent QA team. They challenge you to do
better. They take a special glee in breaking your code in ways you did not
think of. You can also use their test plans to write automated tests, so they
can go think of more devious ways to break your code. Also, sometimes
developers can go off the deep end and overdo things. It is nice to have a
semi-neutral third party saying what is important to test or not. One thing to
keep in mind is that many of these integration testing frameworks are
basically tedious coding exercises, especially if you are external-API heavy.

> it's definitely not a cheap alternative

You may have hit on why many orgs like the idea of TDD. It pushes the idea
that if I have someone who can write code well, they can write the tests too.
Skipping over the fact that it takes time and energy away from other things.

~~~
jbob2000
> They take a special glee in breaking your code in ways you did not think of.

God, this is so true! It's made me a better developer though; _"You know this
will break, make it better so Carly doesn't yell at you"_.

------
nybblesio
It's a shame Kent used the words "test" and "development". Test Driven
_Design_ would have been better, but people would still misinterpret what is
under "test". Yes, there's a side effect of asserting behavior in Kent's
vision of TDD but it's a happy accident.

What's under test is the _design_. Way before TDD was a thing, when I worked
at IBM, we used to call this "inverted design": write the calling code first
to see what the API might look like and then make it work. In the late 80s it
would have been considered a massive waste to assert behavior though; we'd
just implement it.

Automated _functional_ tests (from the outside in) are where the bulk of does-
it-do-what-it-says-on-the-tin testing should happen.

~~~
ragnese
> Way before TDD was a thing, when I worked at IBM, we used to call this
> "inverted design": write the calling code first to see what the API might
> look like and then make it work.

I really like the idea of this, and I very occasionally have the foresight and
wherewithal to do this kind of "top-down" programming.

Maybe not surprisingly, this is how I sometimes end up with the much-
criticized "Interface with only one implementer" design smell. I write the
interfaces that I would like, right next to where I'm writing the calling
code. The interface(s) evolve as the calling code is unfolding. Then, later, I
make an impl for the Interface.

At that point I could just delete the interface and only use the concrete
implementation, but... I don't. _shrug_.

------
appleflaxen
Mods: (2014) would be a useful tag to have on the submission, especially given
the topic and provocative title.

------
grumple
No.

If you have a large, critical system, you need tests. Whether you write the
tests first or not is largely immaterial; the tests and functionality get
merged and deployed together.

Once your system is of sufficient complexity, you need those tests to prevent
regressions unless you’re working on something very isolated.

~~~
AgloeDreams
TDD is mostly about writing tests first. I don't think this is stating that
tests shouldn't be written, just that the TDD approach of writing tests then
basing your code around fulfilling those tests is dying.

------
catwind7
I've found that the strictness imposed by TDD (write test first) has been
valuable in learning how to write good tests and understanding the concept of
"testability".

If you've never tried TDD for a real project, I still highly recommend it for
those reasons alone. It may not be your cup of tea and it may end up being
extremely unproductive for you, but hey at least it'll give you some
experience-backed opinions for this never-ending debate :)

------
gitgud
The main benefit of TDD is that it forces you to code _with testing in mind_.
This produces a decoupled architecture of easily testable system components...

However, once you learn the skill of making things _testable_ , you can find
that you don't really need to write tests first anymore... which makes TDD
less useful

This is not necessarily rational, but it can explain why its popularity has
diminished, even among people who know its benefits.

~~~
ravenstine
> However, once you learn the skill of making things testable, you can find
> that you don't really need to write tests first anymore... which makes TDD
> less useful

Exactly! Plus, I find the idea that TDD produces decoupled architecture to not
be true. In fact, it's usually the exact opposite unless someone has a
"testability" mindset, in which case they don't need to TDD in the first
place.

------
antoineMoPa
Developers are too polarized about tests. I wish more projects had just enough
tests.

Tests can be a nice entry path for new developers trying to understand how to
use the code. Tests can be useful when solving tricky bugs. It can also be
useful to make sure parsers and financial code work as expected under
different scenarios. On the other hand, too much testing means you are wasting
precious time not adding value.

~~~
ibejoeb
Right, but the discussion is whether or not creating a test is _the_ essential
starting point for all development. It's important to not conflate testing
with TDD. I don't think testing is polarizing. TDD, though, certainly has its
advocates and detractors.

Plenty of others have noted the consultancy influence on the matter, so I'll
skip that and just say that I've encountered this phenomenon several times.
When the consultant or new VPE or DirE or whatever comes in and says it's TDD
time, and when there is dissent, you see this "So you're against testing?"
play. That's not useful.

------
tantaman
Haven't watched yet but this should be a gem: "Test-induced design damage"

In my own company we saw TDD create tons of unnecessary indirection by
introducing dependency injection all over the place. The only reason for that
dependency injection was so the component could be sufficiently isolated for
unit testing.

Although it could be argued that the components _should_ be highly
compositional anyway. ¯\\_(ツ)_/¯
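To make the shape of that indirection concrete, here's a minimal Rust sketch (all names here are hypothetical): a trait exists solely so the test can swap in a stub for the real dependency.

```rust
// Hypothetical example: a trait introduced solely so the unit can be
// isolated in tests. In production there is exactly one implementation.
trait Mailer {
    fn send(&self, to: &str, body: &str) -> bool;
}

// The "real" implementation (stubbed out here; it would talk to SMTP).
struct SmtpMailer;
impl Mailer for SmtpMailer {
    fn send(&self, _to: &str, _body: &str) -> bool {
        true
    }
}

// The component under test depends on the trait, not the concrete type —
// this is the extra indirection the comment above is describing.
fn notify_user(mailer: &dyn Mailer, user: &str) -> bool {
    mailer.send(user, "Your order shipped")
}

// In tests, a recording stub replaces the real mailer.
struct StubMailer;
impl Mailer for StubMailer {
    fn send(&self, _to: &str, _body: &str) -> bool {
        true
    }
}

fn main() {
    // The unit runs in isolation, without any network dependency.
    assert!(notify_user(&StubMailer, "alice@example.com"));
    assert!(notify_user(&SmtpMailer, "bob@example.com"));
}
```

Whether that trait is "test-induced damage" or healthy decoupling is exactly the judgment call being argued about.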

------
njharman
TDD has nothing to do with Testing and everything to do with Development. It
is a development methodology (test driven), not a testing methodology.

It is a means to focus development and structure code into isolated "units".
And it facilitates wholesale, brutal refactoring and deletion of code, because
it provides confidence: tests pass (the external interface remains
consistent); tests fail (you need to fix your refactor or update the external
interface); or your tests are poor/incomplete.

By writing tests first-ish, you make it really hard/noticeable/laborious to
write YAGNI-violating, sprawling, or over-engineered code. If your tests are
hard to write, it means your code is interconnected, doing too much, not
layered properly, etc. Too many people don't realize this and struggle with
tests / spend way too much time writing equally complex tests, when they
should realize all this pain is the same pain that will occur when you try to
fix/expand/maintain your code. Fix your code, not the test!

------
nikivi
Quite liked this viewpoint on testing: [https://eng.rekki.com/unit-testing-at-rekki/t.txt](https://eng.rekki.com/unit-testing-at-rekki/t.txt)

------
Plasmoid2000ad
In Microsoft, at least in Office division, TDD got some big pushes and serious
traction within many of the teams working on Services, funnily enough back
around the time of this article.

But... maybe relatively uniquely? We had a firm division between Dev and Test
Engineers at the time. The person writing the tests was not the person
developing the code.

The best success story I saw for this was a well-defined feature: the Test
Engineer wrote the tests in the time he had before a vacation, and the Dev,
who didn't get time until that vacation, then developed against them.

Shortly afterwards Microsoft did away with the split between Test and Dev, and
laid off many of the Test Engineers (while keeping QA in the case of Windows).

I haven't seen many Engineers these days, former Test or not, be enthusiastic
about TDD. It might be a motivation issue: with the constant pressure to make
progress and a shift to smaller check-ins, developing tests for TDD is maybe
not observable progress.

------
zoomablemind
I consider TDD as a tool rather than a methodology. Its utility is in
exploration, when the problem at hand does not have a precedent or an already
well-fitting solution.

Writing tests first serves more as guidance in fleshing out the approach. At
the conclusion, seeing the 'emerged' approach, I always get an itchy feeling
that it could be done more smoothly, now that I know it's doing what I need.

Sure, no one is going to rewrite it, so it goes out as done. The net result is
perhaps more domain knowledge, and the tests flesh out the expectations of the
behavior.

That's why writing meaningful tests (even in their names) makes TDD worthwhile.

But all in all, TDD is kinda a prototyping tool first. Ideally, someone has to
analyze the bulk of the tests to better formulate the product and its
resulting 'spec' such that this prototype could be further maintained,
hopefully morphing into a better one at some future time.

------
polotics
For me personally, it's the story of the TDD luminary trying and giving up on
implementing a Sudoku solver with TDD, and the equivalent elegance of Peter
Norvig's solver, that led me to conclude test-driven may be good, but cargo-
culting it is particularly bad...

------
eggsnbacon1
I never "bought" TDD. Just like "microservices" it adds a ton of complexity
and cost for something you probably don't need.

The TDD cult doesn't consider that testing everything is the last thing you
should try. If you need high reliability you should move to a safer language
first. Dynamic typing to static. Unsafe memory to safe, like Rust or Go. Turn
on a bunch of linters too. Move to a language that doesn't allow nulls.

Once you've got a bunch of linting and a static language that doesn't allow
memory corruption, do you really need to test everything? Probably not. The
Linux kernel is a good example as usual. Virtually no tests, and it's the
backbone of the internet.

------
pixelmonkey
Relevant post I wrote back in 2012 --

XDDs: stay healthily skeptical and don’t drink the kool-aid

[https://amontalenti.com/2012/02/12/xdds](https://amontalenti.com/2012/02/12/xdds)

------
ronanyeah
Tests are often a substitute for compile-time guarantees.

~~~
sateesh
True that. Without a decent amount of testing with good coverage I don't feel
at all confident in the code (in Python) I write. Without a decent amount of
tests, any migration from Python 2 to 3 is a cumbersome exercise.

------
cringepirate
I personally make sure I have some form of automated testing (normally unit
tests and e2e tests), as I like to know my code works. But I can't do the
writing-tests-first part; it just doesn't work with how I approach problems.

The approach I have to many projects is:

1) Sketch out a rough idea of how I will build it.

2) Try to get the fundamentals working.

3) Start building and fixing any obvious defects in the design. Add tests as I
go along to catch obvious defects.

4) Iterate from there to completion.

For most of the projects I am working on this seems to be fine.

------
afarrell
Working in a codebase without tests is like taking a job at the top of a
4-story building that lacks an elevator.

For me, the biggest benefit of TDD is that it increases my individual
velocity. When I forget what my task is in the middle of executing it, running
a test enables me to return to the task within seconds. So personally, I
refuse to work professionally in a codebase if I cannot write (even hacky
shell-script-based) tests.

Also, tests enable me to better understand how someone else's code is supposed
to be used.

~~~
Chris2048
surely "development with tests" is different from TDD, a very specific subset
of the former.

~~~
afarrell
Correct. But a codebase without tests is one where it is unreasonably hard to
practice TDD.

------
GuB-42
TDD is one of these buzzword methodologies that is good to know exists but is
inapplicable by itself.

In fact, TDD-like techniques have produced some of the worst code I've seen.

The trap with test driven development is that you write code to pass the
tests, not code that solves a problem. It is easy to write code you don't
understand. Ex: "is it +1 or -1? +1 passes, -1 fails, it must be +1". In the
end you don't know why you wrote +1, and maybe it makes no sense but since it
passed your tests...

------
cryptica
I use TDD with integration tests. At first, my test case verifies a whole
feature at a high level, then later when I have the main test case passing, I
add more detailed integration test cases. Once I'm satisfied that the feature
works under all the tricky possible edge cases, I merge the feature. I just
run my test case to verify that the feature is working. It's way easier than
manually testing using a browser or HTTP client to make requests; also it's
nice that once I finished testing, I get to keep the test case and it serves
to avoid regressions on that feature later.

Sometimes I do TDD with unit tests, but only if the specific component is
complex enough to warrant a unit test. I don't write unit tests for a simple
component where the correctness of the component can easily be inferred from
the passing of integration test cases which rely on that component.

Trying to cover everything with unit tests from the beginning is a terrible
idea and will lead to sub-par architecture. You have to start with tests that
cover functionality at a high level first and work your way to the details
later; top-down, not bottom-up.

The beauty of the integration test is that it forces you to modularize high
level logic to make it testable. It forces you to think about the input and
output boundaries of the code being tested under integration. Integration
testing doesn't mean you need to have a database running during the tests;
often you can mock out the database client with a dummy adapter (or an in-
memory adapter which works in the same way as the real database for the
purpose of testing).
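As a rough illustration of that in-memory adapter idea, a minimal Rust sketch (all names hypothetical): the feature logic only sees a storage trait, and the test supplies a HashMap-backed adapter instead of a real database client.

```rust
use std::collections::HashMap;

// Hypothetical sketch: the code under integration test talks to storage
// through a small trait, so tests can swap in an in-memory adapter.
trait UserStore {
    fn save(&mut self, id: u32, name: &str);
    fn find(&self, id: u32) -> Option<String>;
}

// In-memory adapter that behaves like the real database for the
// purposes of the test.
struct InMemoryStore {
    rows: HashMap<u32, String>,
}

impl InMemoryStore {
    fn new() -> Self {
        InMemoryStore { rows: HashMap::new() }
    }
}

impl UserStore for InMemoryStore {
    fn save(&mut self, id: u32, name: &str) {
        self.rows.insert(id, name.to_string());
    }
    fn find(&self, id: u32) -> Option<String> {
        self.rows.get(&id).cloned()
    }
}

// High-level feature logic exercised by the integration test.
fn register_user(store: &mut dyn UserStore, id: u32, name: &str) -> bool {
    if store.find(id).is_some() {
        return false; // duplicate registration is rejected
    }
    store.save(id, name);
    true
}

fn main() {
    let mut store = InMemoryStore::new();
    assert!(register_user(&mut store, 1, "alice"));
    assert!(!register_user(&mut store, 1, "alice")); // duplicate rejected
}
```

The test exercises the whole feature path (the "what") while the adapter keeps the environment cheap to stand up.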

------
galkk
I really like a distinction that I first saw mentioned at Google:

Is your test actually testing functionality, or is it merely a change
detector?

For example, a unit test has a copy-pasted SQL query and breaks if you
reformat it, etc. If it's a change detector, it should be deleted, because
it's just a copy/different representation of the same information.

This is a very clear definition that helps one argue and think about tests,
especially about mythical numbers like "90% unit test coverage".
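A tiny Rust sketch of the distinction (hypothetical function, made-up query):

```rust
// Hypothetical function under test: it builds a SQL string.
fn user_query(min_age: u32) -> String {
    format!("SELECT name FROM users WHERE age >= {}", min_age)
}

// Change detector: a byte-for-byte copy of the implementation's output.
// It breaks on any reformatting of the query, even when behavior is
// unchanged — it's just the same information stated twice.
fn change_detector_test() {
    assert_eq!(user_query(18), "SELECT name FROM users WHERE age >= 18");
}

// Behavior-oriented test: asserts only the properties the caller relies
// on, so it survives cosmetic rewrites of the query.
fn behavior_test() {
    let q = user_query(18);
    assert!(q.contains("users"));
    assert!(q.contains(">= 18"));
}

fn main() {
    change_detector_test();
    behavior_test();
}
```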

------
shados
These discussions always surface something: The problem with the terminology.

If you look at the arguments (including my own posts in this thread), people
are basically talking about different kind of tests, with different costs and
value propositions, and dumping them all under the same umbrella. The term
"unit test" has been badly overloaded to mean a whole lot of stuff.

Without first categorizing type of tests, and defining what we mean by unit
test, we can't even have the discussion about if it's dead, useful, whatever.

Even after defining that, there are a couple of sub-groups that strictly
define the value of testing based on specific criteria while ignoring others
(e.g.: "the goal of tests is to know if you're breaking something when you
refactor").

Tests have a LOT of benefits, and different type of tests get a different
subset of these benefits, at varying cost. Without those definitions and
associated tradeoffs, the discussion is basically a waste of time. You can see
it in this thread: people are going "TDD is XYZ and the purpose is ABC". And
you have about 6 variations of XYZ and ABC. Everyone is sure their version is
the right one. Myself included.

~~~
neillyons
Agree. Also it seems that some people think TDD is about "do you write tests
at all" while others think "do you write tests before the implementation".

------
maps7
I just consider TDD as Test During Development. 'During' can be before or
after the actual code is written. When I say development is done I mean that
the code is written and has been tested.

I have seen mocking and code coverage work really well for a large codebase so
I am in favour of those things too. I am not dogmatic about it though. I
choose when to apply those things.

------
specialist
The "work outside in" strategy is ideal. Do this as much as possible.

Alas, TDD requires precognition.

Or such prior familiarity with the problem domain as to make the effort
redundant.

Or performative. Which is ok if you're managing upwards, trying to impress
those bozos who read the foreword of some Agile Methodology book and now
believe they are experts.

------
ragnese
I'm going to go on a bit of a tangent.

I don't follow any particular testing religion, and I definitely find myself
struggling to figure out what to test, how to test it, and have inflicted
design damage in the name of testability...

BUT. I just want to give a shout out to the Rust language for allowing us to
test private functions. Sometimes the "tricky" part of a "unit" is not
directly in the public API. Maybe you have a fancy regex, string formatter, or
sorting algorithm that's part of a larger API. The fact that you can test that
part directly without the ceremony of constructing the rest of the unit and
being forced to parse out subtle differences in the public API's output is
REALLY refreshing when it's needed.

That's definitely not TDD. But it's related to unit testing, so I just felt
the need to give a virtual high-five.

~~~
jayd16
This is considered an anti-pattern for a lot of reasons. It bakes private
implementation into the specification, making it harder to refactor. You
can't reuse tests across different implementations. It breaks abstractions.

I wonder what the reasoning was for the Rust devs.

~~~
ragnese
I'm not sure I follow. What do you mean by "bakes private implementation into
the specification"?

In Rust, these tests live inside of the module where the functionality is
defined. Nothing "leaks". You write your private function, then right next to
it, you write some tests for it. Nobody ever has to know about the private
code (unless your tests fail).

> Can't reuse tests across different implementations.

Huh? It's a private function. You aren't going to _have_ multiple
implementations...

~~~
jayd16
When you test private methods they essentially become part of your API because
you cannot remove that method without changing the tests. You increase the API
surface because any observable behavior becomes part of the API, no matter
what the docs say.

A test against an interface would be more reusable (and more clearly defined)
than tests against a concrete class's private implementation. If you do make a
new implementation, you have to pick apart what can be used and what can't.

Maybe YAGNI, but the idea is that it's a code smell that you can't easily test
core functionality from the public API. Why is the API so subtle? Is this
class doing too much, should you break it up? These are the questions I ask
when a junior dev makes a method public just to test it.

But I don't know Rust so perhaps some of these concerns are not relevant to
Rust.

~~~
ragnese
Well, the way unit tests work in Rust is that you write a test module inside
your module. So, in a case like I'm describing, the unit test is usually
(always?) right under the private function in question.

So, technically, yes, that's "observable behavior" in the sense that a test
will pass or fail. But IMO it's not really the same as running e.g. JUnit,
where all of your tests are in the same area, away from the code they're
testing.

The reason I say that is because if you refactor and change the function in
question, the test for it is RIGHT next to it. It's just going to be part of
the refactor. It's almost like saying that private methods shouldn't depend on
each other because taking one away will break the other. I believe your IDE
will even show you squiggles when your test gets messed up while you're
editing.

Your suggestion to use an interface is exactly the kind of "test-induced
design damage" that's referred to in the OP. Which is what made me think to
leave my comment. If you have a one-off pure function that is not used
anywhere but inside one module, why in the world would we want to create an
interface, then make an impl, then inject that interface into our module? Rust
code is also not as class-driven as some other languages, so often times a
module is just a collection of free functions. You'd have to inject this
interface into every function, when the thing is never going to be different.

> Maybe YAGNI, but the idea is that it's a code smell that you can't easily
> test core functionality from the public API. Why is the API so subtle? Is
> this class doing too much, should you break it up? These are the questions I
> ask when a junior dev makes a method public just to test it.

I don't disagree with that. It very well could/should be considered a smell.
But sometimes a piece of smelly code checks out. And, while it should be used
sparingly, testing private functions can sometimes save us from making our
"actual" code more complicated than it needs to be, just for the sake of
testability. Sometimes we can get good testability without adding extra
ceremony.

EDIT: Also, one of the things that I sometimes struggle with when writing
tests is: which public function's tests are responsible for testing the
private functionality?

In other words, let's say I have a private function that formats a date a
specific way. Two public functions depend on that functionality, plus do other
things.

How do I verify that my date formatter is correctly implemented? I can do the
whole interface+impl+injection dance so that I can test my formatter (which
was supposed to be just a private implementation detail, but is now public so
I can test it), or I can add some asserts to my tests for one of the public
functions that depends on it. But which one? Do I test it in both places? Do I
pick a favorite? Do I leave a note in one test explaining why it seems more
involved than the sister function?
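For what it's worth, in Rust the date-formatter case looks roughly like this: the private helper's test lives in a child module right beside it, so nothing has to be made public (a sketch, all names hypothetical):

```rust
mod report {
    // Private: not visible outside this module.
    fn format_date(year: u32, month: u32, day: u32) -> String {
        format!("{:04}-{:02}-{:02}", year, month, day)
    }

    // Public API that uses the private helper.
    pub fn header(year: u32, month: u32, day: u32) -> String {
        format!("Report for {}", format_date(year, month, day))
    }

    #[cfg(test)]
    mod tests {
        use super::*;

        // Child modules can see the parent's private items, so the
        // helper is testable without widening its visibility.
        #[test]
        fn formats_with_zero_padding() {
            assert_eq!(format_date(2014, 5, 9), "2014-05-09");
        }
    }
}

fn main() {
    // Callers only ever see the public function.
    assert_eq!(report::header(2014, 5, 9), "Report for 2014-05-09");
}
```

The `#[cfg(test)]` module is compiled out of release builds entirely, which is part of why the pattern doesn't feel like leaking implementation.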

------
teknopurge
Was thinking about this the other day. Working with various teams, especially
ones made up of developers early in their careers, I've recently seen a mashup
of TDD disguised as reliance on CI/CD pipeline tooling.

I appreciate wanting coverage to see if something gets broken. However, when
things break anyway and the root cause shows there are too many moving parts
(updating test cases, dependency trees not getting bumped, an overly-
complicated build stage, etc.), it makes me question where the sweet spot is.
Working code and a broken pipeline? You may spend 2 days figuring out why,
only to punt the issue as irrelevant.

I can't help but feel tooling and frameworks are becoming crutches and
guardrails that developers lean on, contributing to fragmentation and
efficiency waste in the industry.

~~~
AgloeDreams
The most obvious case I have seen of this is the addiction to over-mocking
everything and not writing any integration tests.

We got 90% coverage! Sure, all the mocks use 'any' and are not really relevant
to real-world returns, we don't test any part of the front end, and the tests
have never actually caught one of the many failures we have every quarter, but
90%!

------
BiteCode_dev
To be dead, it would have needed to be alive.

Sure, on HN, you will sometimes find some people who report using it on real-
life projects.

But in the last 10 years, I did missions for about 50 or so companies, and
none of their team members, NONE, used TDD.

Few even had tests at all.

I've seen companies try many things: agile, remote work, one team per
microservice, etc.

I've seen many videos of people saying they applied TDD in their companies.
I've even seen a few of them IRL at conferences or meetups.

But I've never worked with people using TDD in real life.

Not saying it doesn't happen, or that TDD is bad, just saying that I don't
think it ever became a popular approach. Like Haskell or Nix, it's famous only
in our bubble.

------
fallat
"Is X Dead?" -> No. Usually this means the tech has finally left the hype zone.

------
johnyzee
The next big thing is ATDD - Acceptance Test Driven Development.

This is implemented using frameworks like Cucumber[1], in which test scenarios
are:

1\. Described at the feature level

2\. Described together with, and signed off by, the business stakeholders

3\. Described in a structured format, which can be implemented in code ('given
... when ... then' format)

The advantages this gives are huge. It is essentially the business
requirements described as test scenarios, which can be executed in an
automated fashion, (co)authored and owned by the business.

[1] [https://cucumber.io/](https://cucumber.io/)

------
jmartrican
Three points about TDD.

1). For some people, TDD helps them gather their thoughts and design. It gives
them a place to start their coding. Other people, like me, would rather just
start writing the code.

2). I think it is important to have failing tests, and TDD is one way to
arrive at this. One thing I like to do is, after writing my code, write the
test, then comment out the code to create a failing test. I want to make sure
that the test is passing because of the new code.

3). For fixing bugs, TDD is a must... IMHO. You want to make sure you are
fixing the right thing, so create a failing test first.
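A sketch of point 3 in Rust (with a hypothetical `median` bug): the regression test captures the even-length case that was broken, fails against the old code, and stays green after the fix.

```rust
// Buggy original, kept as a comment for contrast — it ignores even-length
// inputs and unsorted data:
// fn median(v: &[f64]) -> f64 { v[v.len() / 2] }

// Fixed implementation, written after the failing test below was added.
fn median(v: &[f64]) -> f64 {
    let mut s = v.to_vec();
    s.sort_by(|a, b| a.partial_cmp(b).unwrap());
    let n = s.len();
    if n % 2 == 1 {
        s[n / 2]
    } else {
        // Average the two middle elements for even-length input.
        (s[n / 2 - 1] + s[n / 2]) / 2.0
    }
}

fn main() {
    // The regression test that captured the bug (even-length input).
    assert_eq!(median(&[1.0, 2.0, 3.0, 4.0]), 2.5);
    // Existing behavior still holds.
    assert_eq!(median(&[3.0, 1.0, 2.0]), 2.0);
}
```

Writing the failing assertion first proves the test can actually detect the bug before the fix goes in.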

------
m12k
I think one of the main benefits of writing tests before implementation rather
than after, is that it forces you to think about making a testable design. But
if you are already in the habit of making testable designs, then IMO TDD isn't
worth it, because front-loaded tests can slow down your iteration speed, and
become a drag if you are trying out a couple different designs before settling
on one. Once you know how to write testable code, you're just as well off
writing the tests afterward - works fine for me at least.

------
desmap
Just skimmed the comments and missed any mention of the rise of typed
languages, especially TS with its excellent toolchain. Sometimes I just code
with vim and coc.vim (which brings VSCode's LSP to vim) for hours without ever
running tsc. The live type-checking and other checks in the editor are that
good. Same with Rust. I guess C# has had this for decades in Visual Studio
(same with Java), but somehow I and many other people (coming from dynamic
langs like Ruby) missed this and needed TDD all the more.

------
henrik_w
TDD/unit tests are great in some cases:

\- when testing algorithmic logic

\- for rapid feedback

\- for setting up good context (e.g. a pool that is full)

\- helps get well-tested parts for when using integration tests

\- to make sure your design is decoupled

More here: [https://henrikwarne.com/2014/09/04/a-response-to-why-most-
un...](https://henrikwarne.com/2014/09/04/a-response-to-why-most-unit-testing-
is-waste/)

~~~
wry_discontent
TDD is most useful to me for algorithmic logic. I tend to pull a lot of that
stuff out of my code, because it's not related to my domain.

TDD is a lifesaver for random one off algorithmic problems. You can either
write it in 5 minutes, and spend the next 2 weeks fixing random bugs in your
`includeRange` implementation, or you can spend 20 minutes to TDD the function
and be done forever.
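As a sketch of what those 20 minutes buy: an inclusive-range helper in Rust with the edge cases pinned down up front (the `includeRange` name comes from the comment above; this implementation is purely illustrative).

```rust
// Hypothetical one-off helper where off-by-one bugs love to hide:
// an inclusive range from start to end.
fn include_range(start: i64, end: i64) -> Vec<i64> {
    (start..=end).collect()
}

fn main() {
    // Tests written first pin down the edge cases before implementing.
    assert_eq!(include_range(1, 3), vec![1, 2, 3]);
    assert_eq!(include_range(5, 5), vec![5]);           // single element
    assert_eq!(include_range(3, 1), Vec::<i64>::new()); // empty when reversed
    assert_eq!(include_range(-1, 1), vec![-1, 0, 1]);   // crosses zero
}
```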

------
danmg
One of the things I got out of learning Haskell was learning about QuickCheck.

Now, my preferred testing method is to write pure functions attach generators
for the given input types, assert invariant properties about them, and then
let QuickCheck fuzz my functions where it will try to find a minimal example
that breaks the given invariants.

The remaining stateful portions of code are tested through integration tests.

------
hinkley
Some of the literature I've read on Property Based Testing 'speaks to me' but
not really in an epiphany sort of way, so I'm still trying to decide if I want
to dive into that or not.

So far it's the only thing I've seen that really feels like it could carve out
a big chunk of territory from TDD. But that would mean _less_ TDD for me, not
removing it.

------
AnonymousRider
I remember arranging a talk for my college’s AITP Club in 2012 which was sold
to me as a trendy “How to Xcode for iOS Blah Blah Blah.” After setting up the
projector and giving a brief introduction, the guy went headfirst into an
evangelical tirade about TDD. No Xcode, no iOS, no iPhone app. From that day
forward I knew TDD was bullshit.

------
rco8786
Was TDD ever really alive?

------
sergiotapia
In my experience it never was a thing. 12 years writing code, and I've never
seen people do TDD even once. You write tests, sure, but you don't TDD. All it
ever was was an opportunity for shysters to squeeze money from training
unsuspecting companies.

There's another gimmick out there these days, but I won't name and shame,
people will get mad.

------
platz
who is actually watching and discussing the linked videos in this thread
rather than braindumping all their anecdotes?

------
pjmlp
It was never a thing to start with.

My fun question to TDD advocates was always how to do desktop applications the
TDD way.

~~~
insertnickname
There's a book on TDD with an extended case study about developing a Swing
application: [http://www.growing-object-oriented-
software.com/](http://www.growing-object-oriented-software.com/)

------
wilgertvelinga
If people want to learn how to do TDD well, check out the Codecraft playlist
for your favorite language by Jason Gorman:
[https://www.youtube.com/c/Codemanship/playlists](https://www.youtube.com/c/Codemanship/playlists)

------
wolfgang000
I believe TDD is still a valid tool to develop well-tested software. Of
course, don't take it to the extreme, where you mock everything and in the end
test nothing, or create tests that have very little value. I find TDD works a
lot better when it is applied as guidelines instead of rules.

------
andrewingram
I use TDD for unit tests, less so when I have to start mocking things, and not
at all for E2E tests.

------
mucholove
Yesterday I made a glue program that imports my model code and http code and
makes requests to my running server.

I’ve loaded it with assertions and anytime my server code is wrong—it’s just
so easy to find out.

The biggest issue for me is trying to get XCode to launch my server first and
my test program second every time I make a change.

Besides that—having the test program is a joy. Took me an hour to setup.

All this is to say—testing is very much alive and well in me.

The biggest mistake made is that these IDEs or programs usually make the
testing command something you need to run besides the program. They really
should run together. One click. Test everything.

PS — my favorite testing framework is MPWTest because it brings my code to
life as a living document.

With HTTP or server code it hiccups of course—hence my need to roll my own.

[https://blog.metaobject.com/2020/05/mpwtest-reducing-test-
fr...](https://blog.metaobject.com/2020/05/mpwtest-reducing-test-friction-by-
going.html)

------
ryanbrunner
I definitely see a lot less of "TDD as religion", where all code is written in
TDD, without exception, and no code exists or is added to a project without a
failing test to demonstrate its need.

Most developers I work with (as well as myself) will use TDD as a technique
that can be useful in certain circumstances.

When it's easy to use and reason about code as an isolated unit without any
dependencies outside of straightforward libraries, TDD is usually useful and
yields good results.

Things get uglier when there are more dependencies involved, which tends to
lead to excessive mocking that results in your tests just testing a very
specific implementation (expect method A to call method B on object X), or to
altering your code in a way that's purely in service of the tests (the "test
induced damage" that DHH talks about).

To take a Rails example - I think there's little to no value in doing TDD
style testing for controllers. The tests will almost by definition test a
specific implementation (since well-written controllers will often just
connect various other models and service objects), and any efforts to get
around this will just introduce unnecessary abstractions that make your
codebase much more difficult to comprehend.

~~~
aequitas
> I definitely see a lot less of "TDD as religion", where all code is written
> in TDD

In my experience it helps to drive a new technique or paradigm to its extreme
for a while (for example by employing it religiously in a pet project). After
that you can look back and see where a sensible boundary lies for applying
said technique. It often allows you to grasp the true essence, whereas trying
it just a little bit in your current job project doesn't allow it to bear its
fruits, and it will be quickly forgotten or turn into an anti-pattern.

------
gregdoesit
IMO fast software iterations, coupled with excellent observability tools - and
the ability to catch and fix issues close to realtime - killed TDD.

TDD became popular at a time when waterfall and slow shipping cycles were
common. In that setting, writing well-tested code upfront had a large upside.
The process also helped clarify the spec as you went.

However, as teams and companies move to deploying multiple times per day and
adopt things like monitoring, alerting, and canarying, the value of having
code that satisfies a spec upfront is lower than that of code that works as
expected in the prod environment.

I see almost all tech companies use unit and integration tests extensively,
often with high coverage - but sometimes retrofitting them, after validating
that the code does what is expected, in a complex environment.

~~~
troynabed
Visibility (logging, monitoring, etc) trumps testing IMO in terms of coding
productivity, especially when you don’t understand the domain fully yet.
Testing utility increases when code matures and becomes more stable.

------
mattlondon
TDD only works for extremely well-specified requirements.

E.g. you are writing a server for a very well-understood protocol, or writing
some code to perform some very well-understood algorithm.

These are easy to test because there are very clear & totally unambiguous
answers that the software has to produce. You can very easily write
test-cases then because you already know _precisely_ what the software should
do.
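As an illustration of that point (a toy sketch, not from the comment): ROT13
is completely specified, so its tests can be written first, before a line of
implementation exists.

```python
# Tests written first: possible only because the spec is unambiguous.
def test_rot13():
    assert rot13("hello") == "uryyb"
    assert rot13(rot13("attack at dawn")) == "attack at dawn"  # self-inverse
    assert rot13("a1!") == "n1!"  # non-letters pass through unchanged

# Implementation written afterwards to make the tests pass.
def rot13(text):
    out = []
    for ch in text:
        if "a" <= ch <= "z":
            out.append(chr((ord(ch) - ord("a") + 13) % 26 + ord("a")))
        elif "A" <= ch <= "Z":
            out.append(chr((ord(ch) - ord("A") + 13) % 26 + ord("A")))
        else:
            out.append(ch)
    return "".join(out)

test_rot13()
```

Contrast that with "the user should be able to update their preferences",
where no such table of known inputs and outputs exists to test against.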

In reality, for a lot of "enterprise" software I have been involved with, I
have found that there is _very rarely_ a specification well thought out enough
that all of the answers (...or even the _questions_ that need to be asked) are
known before any code starts to get written. The specs are usually so
high-level (e.g. "the user should be able to update their preferences") that
most of the details are left as an exercise for the developer implementing it.

I'd wager that if you _are_ in the situation where you have such a clear and
concise specification before any code is written, then you're not actually
agile at all and instead you are in some lethargic ossified waterfall where it
has taken 18 months for The Committee to sign-off the specs for exactly what
the widget will do when the user specifies that their preference is for home
delivery but they have not entered a postal address yet but they do have a
grandfathered-in address from the pre-acquisition database that can be used
when the terms-of-service-acceptance bit has been flipped, but not if they are
in EU and have not flipped the GDPR-acceptance bit yet etc etc etc.

And if you are in this scenario, then some TDD ain't going to help your
velocity. Real life is messy.

Don't get me wrong, I am 110% in favor of unit tests. I just don't think that
for the vast majority of projects that there is enough detail in the specs to
write the tests _first_ then stop when all the tests pass.

------
Bokanovsky
I've found what really has helped me with TDD is a continuous test runner.
Different platforms have different takes on them, but in my case it's NCrunch.

The fact that it's running the tests as I'm typing is incredibly powerful. It
helps me keep to a rhythm, as I don't have to stop and get the test runner to
run the unit tests.

When I'm refactoring and a test goes red when I'm not expecting it, that has
saved me time. It also helps you question everything: why was this test
needed, do we still need it?

That and (with NCrunch at least) the coverage dots. If I'm diving into some
code without any test coverage, or code with gappy coverage, it lets me know I
need to be cautious.

------
rement
TDD is a lot like microservice adoption.

If you start your project planning on a microservice architecture it is
relatively simple to design it properly. If you are migrating your monolithic
application to microservices, you are probably going to have a hard time.

TDD is similar. If you start the project planning to write your application
TDD style, it will be simpler. Converting your large, business-critical,
not-written-to-be-tested application over to TDD is not impossible, but it
often makes more business sense to "ship it" and hope for the best. Hiring a
QA engineer would probably be less expensive than having your developers
rewrite your application to be "testable".

------
dpeck
It’s not dead, but it requires the developers to know what they’re building
before it’s being built.

“Walking on water and developing software from a specification are easy if
both are frozen”

------
exabrial
Depends on your org and its goals.

For orgs that are engineering oriented with a strong business plan, strong
requirements gathering, case studying, and minimal feature selection, it's a
boon.

For orgs that aren't engineering oriented and use development as a tool to
find holes in the business plan, testing anything is usually a waste of time.
You're trying to see what the market responds to, rather than actually build
something of quality.

It comes down to each org's goals, and usually the best answer will lie
somewhere between the two extremes.

~~~
afarrell
It also depends on the individual engineer.

Some engineers (myself included) need to be able to write tests to make hour-
by-hour progress towards the minimum needed to learn about the market.

------
treespace89
Any developer worth their wage tests.

After writing any code I run the program to test the change I have made. I
make sure the code is exercised either by logging, debugger, or clear UI
change. If it's a browser app I use multiple browsers, if it's a rest service
I make the rest call.

But I have known some developers that write a unit test, but never test the
actual change! And without fail serious bugs appear. Like the application
fails to start, or crashes when the new feature is invoked for the first time.

------
lazyant
"Every great cause begins as a movement, becomes a business, and eventually
degenerates into a racket." Eric Hoffer

------
xellisx
To do TDD correctly, things have to be defined in great detail. 'Given X, I
should get Y'.

~~~
mason55
That’s actually a benefit of TDD, imo. One of the most difficult things about
software development is specifying how it should work. Once you have that, the
rest is pretty easy.

So if you go to write a test case and don’t know what it should do, that’s an
early sign you need more time on the spec.

------
lmayliffe
This is six years old. What is the value in rehashing the same arguments again
in 2020?

~~~
geebee
Perhaps as a reminder of the fundamentalism and bullying that can happen in
our field. Some methodologies almost seem to take on the quality of a moral
panic. It's fascinating to me to see how they are eventually punctured and
deflated.

------
bdcravens
Should have (2014) in title.

------
proverbialbunny
I worked on the proxy and DNS software that almost every ISP in the world uses
today to transfer HTTP and TLS traffic. If a bug made it into the code, it
could take months during which subtle parts of the internet would be failing.
This wasn't an option, so we had strict testing. I think for every line of
code we had 2 to 3 lines of equivalent test code. From this experience I've
found:

1) I found a natural way to write pseudo-TDD that I prefer. When I'm mocking
up some code in a project, I write interface code to run what I'm writing. I
run the code from time to time to make sure it's working. If I'm not doing
that, how do I know my code is working? You get a sort of dopamine high when
running the code and seeing everything come together.

What I do is take that interface code I wrote while creating the code and,
instead of deleting it, I create a test function (or multiple) and copy-paste
it there. If I'm in a hurry I'll come back and write the assert_eq or
equivalent later.

Writing a test is that quick. You're already doing 95% of it. Just copy-paste
it into a function. I know it's not always that simple, but oftentimes it is.
When approaching testing this way there is no reason not to write tests.
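A sketch of that workflow in Python (`parse_record` is a made-up stand-in for
whatever code is being mocked up): the throwaway driver you were already
running gets pasted into a test function, with the eyeballed print swapped for
an assertion.

```python
def parse_record(line):
    """The code being developed."""
    name, value = line.split("=", 1)
    return name.strip(), value.strip()

# Step 1: while developing, a throwaway driver to eyeball the output:
#     print(parse_record("retries = 3"))
#
# Step 2: instead of deleting the driver, paste it into a test function
# and replace the print with an assert (or add the assert later):
def test_parse_record():
    assert parse_record("retries = 3") == ("retries", "3")
    assert parse_record("host=localhost") == ("host", "localhost")

test_parse_record()
```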

2) Systems tests catch far more than integration and unit tests, so if you're
serious about testing, these should be considered. Systems tests go by many
different names; by this I mean a testing ecosystem where the system is spun
up and a mock client and mock servers are run. This simulates a real-world
client connecting to the software and tests the real-world results they would
get.
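A minimal sketch of such a harness using only the Python standard library (the
echo handler is a stand-in for the real system, which would normally be
launched as its own process): the whole server is spun up, a client drives it
over a real socket, and the assertion only touches externally visible
behavior.

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class EchoHandler(BaseHTTPRequestHandler):
    """Stand-in for the system under test."""
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"pong")

    def log_message(self, *args):
        pass  # keep test output quiet

def test_system_responds():
    # Spin the system up on an ephemeral port.
    server = HTTPServer(("127.0.0.1", 0), EchoHandler)
    port = server.server_address[1]
    threading.Thread(target=server.serve_forever, daemon=True).start()
    try:
        # Drive it the way a real client would; assert only on what the
        # client can observe.
        body = urllib.request.urlopen(f"http://127.0.0.1:{port}/ping").read()
        assert body == b"pong"
    finally:
        server.shutdown()

test_system_responds()
```

Because nothing asserts on internals, the same test keeps passing across
refactors while still catching regressions in any layer the request flows
through.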

The reason system tests catch so much more than unit tests is in a large
system not everyone knows every functionality and how it should work. It's
easy to create a regression where a new feature is added but it changes the
behavior of this old previously unknown feature. This often results in going
back to the drawing board as it's a business issue more than just a software
issue. These kinds of bugs can get pretty nasty in the enterprise space if
left unchecked, so it's a good idea to catch them.

Furthermore, systems tests catch nearly 100% of what unit tests and
integration tests catch as well. This is because if there is a bug in a unit,
it will propagate out to the client, affecting behavior. If it is caught in a
unit test and not a systems test, then there might be a hole in the systems
tests where some scenario isn't covered, or you might have dead/unused code in
the code base, or something similar.

Systems tests I've found are better at catching race conditions as well.

Systems tests act as a great source of documentation, because they document
how every interface in the program is intended to act. You get a quick high
level view and can learn the ecosystem quickly from it, where unit tests are
down in the weeds.

And finally, you don't need to write anywhere as many systems tests as you do
integration and unit tests. You can test a lot more with a lot less work.

------
rerx
(2014)

------
TheRealPomax
What's the tl;dr of this article? It's framed as a question so I assume the
answer is "no" but that's not particularly useful information on its own.

------
hit8run
Sorry, but I'm not a fan of DHH anymore... In my opinion he is too focused on
self-marketing and always tries to show how differently he thinks. Dude is not
Steve Jobs.

------
admp
(2014)

------
lmilcin
Let me give my take on TDD. (I work with legacy corporate systems mostly, so
that's how I'm biased.)

TLDR: it never had a chance of working

The basic problem of TDD is that it has a huge cost to it and therefore you
have to do it right to not only get any benefits but to just get even.

In a perfect TDD utopia, all developers would start with a set of clear,
complete requirements, implement each requirement with a clean and concise
test, and then ensure their code passes the test. Then later, with not much
additional effort, these tests would be used to verify that whatever
shenanigans you are doing to your project still cause functionality X to do Y
when Z happens.

Unfortunately, this fails to capture the whole diversity of development
reality.

\- It is rare that you get clear and concise requirements. Most corporate code
is just something that a developer decided should happen in a situation, not a
result of meticulous discussion and planning. As such, preserving these
requirements in stone has much less value than one might think, because
instead of preserving objective truths about how the application should work,
you are spending time preserving what intern Joe decided this should be doing,
probably with little regard for the entire system.

\- There isn't a way to tell that your tests are complete and of good quality.
Whereas an application must implement functionality and must be doing
(something) right, because otherwise your users would notice missing or faulty
functionality (users are the ultimate verification of your app), tests can be
as incomplete as you want and as pointless as you like and will still "pass".
Thus, the completeness and quality of tests is entirely at the whim of the
developer. Guess what: if they can't get the app right, will they be able to
at least get the tests right? And if they can get the app right, the lack of
tests is the least of their problems. By having feedback from their users on
how the application functions, developers are "forced" to deliver something at
some level of quality. By having no feedback on test quality, developers are
not forced to deliver anything (other than to satisfy tests), and if they are
not forced to do something no user or manager will ever look at, they will
most likely do only what is absolutely necessary (to pass automated gates).

\- Most developers want to be done with a task, and there is this bunch of
stuff they have to do that is just annoying because it has no immediate
bearing on how the application functions. That bunch consists of refactoring,
documentation, code reviews and, guess what, tests. Do you know why
documentation is typically poor, if it exists at all? Do you know why code
reviews mostly focus on little stuff and rarely attack big problems? Because
the overwhelming majority of developers don't put their heart into stuff that
does not bring them closer to achieving the goal (except reading HN, that is,
and other forms of procrastination).

\- Management. While they will usually "require" tests, I have never in my
life seen a development schedule extended to get more time to "get tests
right". Usually tests are something you have to do as quickly as possible to
meet the requirements, rather than something that is seen as core to the
project functioning.

\- Developers. Most developers are already overwhelmed with the stuff they
need to learn to be able to function in an increasingly complex technological
jungle. Microservices? Devops? Full stack? Given the choice between spending
time learning the skill of doing tests right and learning the above, not many
developers will choose tests. Making quality tests is a skill that, like any
other skill, requires you to apply yourself.

\- Refactoring functionality in a legacy system (which is theoretically
exactly the reason you might want tests) requires that you know exactly what
you are doing. This means understanding what changes are "safe". When I
refactor a piece of code I try to form a hypothesis that the change is not
going to break functionality and then prove it by following code leads (how
the function is used, etc.) until I am satisfied. Doing hundreds or thousands
of refactorings one by one requires from me that I am 100% certain I know I am
not breaking anything. If I were to use tests to help me do refactorings I
would also have to have close to 100% trust in that these are correct.
Unfortunately, there are no tests to test that tests are correct or complete.
Therefore, I cannot use tests to replace my process of following code leads
and this makes tests useless for my needs.

~~~
wenc
This. It's not that tests have no benefit -- they do, especially for complex
refactoring (though even for that, as you say, tests are rarely exhaustive
enough to engender full trust).

But outside of the narrow confines of production programming, it's rare that
the requirements can be specified completely up-front.

In the initial stages of development, coding is very much a discovery process.
Code is often evolved rapidly and drastically, such that any test written is a
throwaway. And some of these tests/mocks aren't easy to generate. In
data-intensive applications, it's especially a chore to have to generate new
mock database objects each time the code changes, and with completely
different data characteristics to boot (in order to test different facets of
the code).

The psychological tedium of doing this will put off most developers. The
potential drag on productivity extends to more than just the time required to
write test + code: the whole pace will feel sluggish and the dev is liable to
feel demotivated.

I see tests being more useful as a retrospective addition, once the code has
stabilized and the use-patterns established.

~~~
lmilcin
That is a good point. In the initial stages I like to explore the problem, and
tests just get in the way of being able to refactor everything very quickly.

Maybe I want to put in a dirty implementation first and then, once I think I
understand the problem better, quickly fix it?

Maybe I want to start refactoring it without knowing where it exactly will
lead me?

The problem is tests do not help with these initial stages, they only help if
you know exactly what you are doing.

And once I am done with the initial stage I want to move to another problem. I
don't like sticking with the same module to now do the documentation, tests,
etc. Once something works and I am satisfied I move on.

------
calebm
yes

------
jpswade
Betteridge's law of headlines applies.

------
PaulHoule
When I hear "Kent Beck" I reach for my gun.

I don't have a problem with "Kent Beck" as an individual but more than anyone
else in software "Kent Beck" is a brand like "Anthony Robbins". I care what
you think but not what you think "Kent Beck" thinks.

For that matter the same thing is going on with Martin Fowler, who is using
his software craftsmanship cred to legitimize a shop that ships work to the
third world and turns programming from a white collar to a blue collar
profession. There is no topic you could pick better if you wanted to have a
clickbait discussion war over software.

~~~
ullondo
I laughed at this, and I agree that almost any sort of appeal to authority
should be derided as the cheap trick it probably is. That said, Kent Beck says
a lot of smart stuff in his books, and I don't think you should stop listening
just because his name comes up.

~~~
PaulHoule
It is the level of indirection about the problem.

Kent Beck said something good about problem A.

Now somebody else is talking about "Kent Beck said something good about
problem A" and there is the risk that "Kent Beck said so" outweighs the
"something good about problem A".

Baudrillard writes about this phenomenon about the "precession of simulacra",
which gets you roughly to the place Girard warns about -- Baudrillard reminds
that there was something else in the past, Girard wouldn't care.

Stephen Hawking said some absurd things about black holes in the 1970s that
essentially postulated no quantum gravity (e.g. that the propagator is not
unitary, when operators being unitary is almost the only thing you need to do
quantum mechanics). I think the "cult of personality" held back work in QG for
at least two decades; it wasn't until some brave people tried to calculate
things with radically different methods, and realized they were getting the
same results and not by accident, that it became clear the "information loss"
concept is absurd, as is the classical picture of a black hole interior.

