
Why TDD isn't crap - mzl
https://hillelwayne.com/post/why-tdd-isnt-crap/
======
latch
Like the author, I subscribe to the less strict view that TDD isn't
necessarily about writing the test first, but rather about having tests play
some part in how the code takes shape.

Unlike the author, I absolutely believe that tests are about design. More
specifically, they're about identifying coupling so that you can reduce it.
The function being "awkward" to use is part of it, but code which is hard to
test is almost always going to be hard to maintain.

If you do this for enough time, you should naturally start to write code that
is less and less tightly coupled. At that point, the value of tests as a
design canary decreases, but never completely goes away.

These are all broad generalizations. Individual vigilance and giving-a-fuck
matter more than anything else. But if you show me some random code, and it's
spaghetti, I'll bet every single time that the author doesn't test.

~~~
k__
"having test play some part in how the code takes shape"

That is exactly why I don't like TDD.

I mean, there are enough constraints that shape the code; why should something
artificial like tests shape it too?

With mocking and everything you end up writing code for tests and not code for
your problems.

~~~
humanrebar
Well, you write code for design flexibility. Testing just forces the issue and
helps you explore possible design needs. For example, what if the database
query you're using isn't suitable any more? Instead of designing around a User
table, you start passing around User records. Then your design is more
future-proof in case you start getting Users from a service or an in-memory
cache instead of a database.
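
A minimal sketch of that kind of decoupling (all names here are invented for
illustration): the business logic takes a User value rather than reaching into
a table itself.

    // Plain User value; callers may load it from a database, a service, or a cache.
    record User(long id, String email) {}

    // Hypothetical source abstraction; swap the backing store without touching callers.
    interface UserSource {
        User findById(long id);
    }

    class GreetingService {
        // Depends only on the User value, not on where it came from.
        String greet(User user) {
            return "Welcome back, " + user.email();
        }
    }

Moving from a database to a service or cache then only means supplying a
different UserSource; GreetingService stays untouched.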

> ...not code for your problems...

Right. It's not your problem _now_. It may be your (or your successors')
problem in the future.

Is it always worth the effort? Nope. But experience, communication, and good
teamwork will help with balancing short-term and long-term goals.

~~~
discreteevent
This is exactly the problem. In general you should not be writing code for
design flexibility. Your code should contain the minimum number of
abstractions to satisfy its requirements. If flexibility is a requirement
right now then that's ok. Otherwise refactor in the flexibility later. But
don't make the code flexible for the sake of tests. It vastly over-complicates
it for no benefit. Instead write tests as close to the requirements level as
possible (ideally most of your tests should be just below the ui)

~~~
humanrebar
> Otherwise refactor in the flexibility later.

I'm saying, with experience, teamwork, communication (including through
tests), you'll know when this proposition makes sense or not. It's not
universally true that "we'll worry about it later" makes sense.

------
hacker_9
I expect the reason TDD is so controversial on here is people can't see the
long term benefits of tests, and instead only think in the short term. But in
the commercial world, code you write can potentially have a lifespan of 30+
years. In this case, making a choice to write tests is the difference between
writing a maintainable component in the future vs writing a soul-destroying
'legacy system'.

If you agree tests are a good idea, but think TDD is too extreme, consider
that TDD simply makes sure you write testable code from the outset. When you
have a test wrapping a method, and need to add a dependency, you actually
decide to use DI (Dependency Injection) because otherwise your current tests
will break / become integration tests. TDD makes you think upfront about
things like mocking, separation of concerns, etc.
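
A rough sketch of that decision in Java (the invoice and exchange-rate names
are made up, not from the comment): the new dependency arrives through the
constructor as an interface, so the existing test stays a unit test by
substituting a fake.

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertEquals;

    interface ExchangeRates {
        double rateFor(String currency);
    }

    class InvoiceCalculator {
        private final ExchangeRates rates;

        InvoiceCalculator(ExchangeRates rates) {   // injected, not constructed inside
            this.rates = rates;
        }

        double totalInUsd(double amount, String currency) {
            return amount * rates.rateFor(currency);
        }
    }

    class InvoiceCalculatorTest {
        @Test
        void totalUsesTheInjectedRate() {
            ExchangeRates fixedRate = currency -> 2.0;   // in-memory fake, no live service
            InvoiceCalculator calc = new InvoiceCalculator(fixedRate);
            assertEquals(20.0, calc.totalInUsd(10.0, "EUR"), 1e-9);
        }
    }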

When you have the mindset that you will absolutely 100% write tests at some
point anyway, TDD is actually a faster and more fun way to develop than
bolting on tests afterwards.

~~~
mattmanser
This whole thread is a response to:
[https://news.ycombinator.com/item?id=15591190](https://news.ycombinator.com/item?id=15591190)

Your points are directly addressed in the pdf. One of his general points
being, in practice, the tests become the legacy system instead. And I'd add to
that. Given that you've at minimum doubled the code (and doubled the bugs), it
seems like a really bad long-term trade-off.

Also, DI does not reduce coupling. I've seen plenty of code with DI that's
just injecting like 30 things, which is obviously still coupled. DI just
makes the coupling really obvious, and it has massive downsides of its own.

If you've ever worked with bad programmers and seen it in the wild, I'm
sure we can agree DI and TDD don't stop bad programmers writing bad code. In
fact, all they seem to do is make even more of a mess.

Not only do you have to pick apart the bad code, you have to start dealing
with carefully moving methods to the right places because DI can make it hard
to figure out what's being used where, and then on top of that tests break all
over the place because they're entirely dependent on the implementation
instead of the functionality.

~~~
hacker_9
What exactly is your experience working in a commercial environment? When the
updates you ship can affect tens of thousands of customers? In these
situations, 'Sod's Law' often comes to mind - "what can happen, will happen".
If you have a defect in your untested code, you can absolutely bet it will
come out at the most painful time possible in front of all your clients. Being
blamed for that kind of stuff is a stressful way to live your life, much nicer
to have tests shout at you instead.

* and to reply to your edit: of course tests break when you change the code
they are testing! But after reviewing the broken tests, you see the intent and
re-adjust the test. But what often happens is you realise you didn't fully
understand the code previously, and actually after reading the test you need
to undo your refactoring as it didn't make sense in the first place.

~~~
ozim
Pick the tool for the job; if your experience is with things that last 5+
years, then ok. I work on a system which is 5 years old, and all the things
from 5 years ago are not relevant. Actually I have been working on it for only
3 years now, but we pretty much rebuild the whole thing each year, with minor
things going in and out. Of course we spent a lot of money on automated tests,
but those tests had to be thrown away because of the amount of change in the
system.

~~~
hacker_9
So you disagree with TDD, but you need to rebuild your system from scratch
every 5 years? I feel like this is an argument for TDD.

As for throwing away tests: unit tests are meant to be pretty simple - a rule
of thumb is you can run a thousand tests in ten seconds. Arrange, Act, Assert -
they don't need to be complex, they just need to imprint the intent into the
codebase. If the intent changes, by all means remove the test.
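
For readers unfamiliar with the pattern, a minimal JUnit 5 sketch of Arrange,
Act, Assert (ShoppingCart is a made-up class, included only so the example is
self-contained):

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertEquals;

    // Hypothetical class under test, kept trivial for the example.
    class ShoppingCart {
        private int totalInCents = 0;
        void add(String item, int priceInCents) { totalInCents += priceInCents; }
        int totalInCents() { return totalInCents; }
    }

    class ShoppingCartTest {
        @Test
        void totalIsTheSumOfItemPrices() {
            // Arrange: set up the object under test
            ShoppingCart cart = new ShoppingCart();
            cart.add("book", 1000);
            cart.add("pen", 250);

            // Act: perform the single behaviour being exercised
            int total = cart.totalInCents();

            // Assert: check the observable outcome
            assertEquals(1250, total);
        }
    }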

~~~
nilkn
> So you disagree with TDD, but you need to rebuild your system from scratch
> every 5 years? I feel like this is an argument for TDD.

Not necessarily. If the rewrite is only addressing issues that would have been
prevented with tests, then sure, this is clearly an argument for TDD. However,
if the rewrite is going beyond what could have been provided by tests, then
having a large testing system could actually make the rewrites harder, which
means it's an argument _against_ TDD.

It's hard to say which is the case without knowing the details, and I'd
certainly err on the side of saying a yearly rewrite is not a good sign. But
if a company is undergoing rapid growth, it's not that unusual for certain
systems to be rewritten frequently as fundamental new insights are gathered
about how to tackle problems that are hard to scale. And if the system isn't
that large to begin with, periodic rewrites could be easier than writing a
single version that's supposed to last 10+ years as business requirements
dramatically change and expand.

~~~
Fifer82
I agree. My experience is:

1. Wow, it made 10k! Let's rewrite.

2. Wow, it made 100k! Let's rewrite.

3. Wow, it made 1M! Let's rewrite, but let's properly think hard about the
future of this product (introduce tests and more people).

The advice to ALWAYS DO TESTING should always come with context (if the budget
allows).

------
weego
TDD isn't bad at all for a mature product where you have clear requirements
for additional feature development and granular developer tasks.

I've just seen a lot of it where a product is still in broad-strokes
development and developers get stuck between treating tests written on early
assumptions as correct, so the code should conform to them, and accepting that
new ways of thinking have invalidated those early assumptions, so the tests
should be changed.

From the article "We don’t actually know that much about what good software
engineering looks like." sums the issue up nicely. There is no definitive
playbook on whether a strategy like this is good or bad. It's a tool that is
good if you use it right.

~~~
latch
There's some truth to this, but it's also possible that the team just wasn't
very good at writing tests. It's not easy to do well. Changes will break
tests, but given that changes are applied one at a time (you're not erasing
your ./src folder, dumping in new code and expecting all your tests to pass),
then it should be possible to keep the tests up to date. If a few changes
break many tests, then the tests might be over-specifying, not making use of
factories, testing too much, or going wrong in any of the other ways that are
easy to fall into.

I've seen and heard this a lot, and it's often the result of code that does
too much (not cohesive, too much coupling) and tests that correspondingly do
too much.

~~~
FLUX-YOU
Getting good at TDD requires being bad at TDD for a while, which requires
someone to write bad test code at some point.

But if the population as a whole is still having problems with TDD for as long
as TDD has been around, then it needs to remain a niche methodology.

Our decisions to adopt methodologies/technologies can't revolve around the
perfect case of a crack A-team of developers who are good at everything they
do.

Think of the mediocre developers!

------
vog
This submission appears to be a reaction to the following submission:

"Why Most Unit Testing Is Waste"

[https://news.ycombinator.com/item?id=15591190](https://news.ycombinator.com/item?id=15591190)

(just for the record, i.e. for future readers)

------
apo
The problem with many TDD critiques is that they offer no alternative. The
original presentation:

[https://www.youtube.com/watch?v=DQBf6li1hww](https://www.youtube.com/watch?v=DQBf6li1hww)

is a case in point. The presenter takes what he believes to be TDD's four main
points, some of them strawmen, and mocks them. He does make some good points,
but here's the problem: he offers no alternative.

If you're not writing tests as you go, that you run before every commit (or
before you move on to the next thing), then what is your standard for putting
code into a production repository?

~~~
k__
You're right, I also don't know good alternatives.

I wrote my first big API with many tests, hundreds of them. Then the
requirements changed and all of them failed. People worked for weeks to get
the tests passing again. So just writing many tests up front in a new project
doesn't seem to help anyone.

Also, I went from feature to bugfix sprints. First I implemented some
features, then they got tested by non-devs, then I fixed all the bugs.

Often this was faster than the whole tests up front stuff with refinement of
tests afterwards.

I also saw that >90% of my bugs came from the dynamic nature of my language of
choice (JS). So I could imagine that this feature->bugfix->feature->bugfix
cycle could be greatly shortened (especially the bugfix parts) by using a
typed language instead.

~~~
humanrebar
In most statically typed languages, the bugfixing (usually) takes longer
because you need to do many more things each time you refactor. An enterprise
Java application doesn't typically pass around the equivalent of a JS object.
It passes around Users, Admins, and Guests. But then some features in Admins
need to be added to Users without affecting Guests. But Guests inherit from
Users, so you have to go back and restructure the type hierarchy. But it turns
out that User is a Hibernate entity, so you have to make sure you don't change
your DB schema by accident.

Using a static language _does_ help with some bugs (where it effectively
serves as a compile-time test suite), but it can also increase coupling a lot.
It's not easy to quantify that tradeoff in the abstract. I'd be wary of people
overselling aggressive compiler checks leading to productivity boosts.

~~~
bunderbunder
I'm beginning to suspect that, in the context of these debates, "most static
languages" is a roundabout way of saying, "Java", or other static languages
with a similarly anemic type system. Also it's a roundabout way of describing
the particular way that Java code tends to be structured.

In one that has better support for generics (e.g., reification, contravariance
and covariance) and some form of mixin, you generally shouldn't have that much
trouble adding behaviors to an entire class of types without having to modify
any of them. This is the fabled open/closed principle that is oft lauded and
rarely practiced.

To take it a step further, if you're using interface polymorphism and
decorators to build your types instead of relying on subclasses, you won't be
able to paint yourself into the corner you describe in the first place. The
problem is, of course, that a language like Java that doesn't let you add new
behaviors for types without either modifying the original source file or
resorting to some Gang of Four awfulness, will tend to punish people for
writing cleanly-structured code like that.
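
A small sketch of the interface-plus-decorator approach being described (all
names invented): new behaviour is added by wrapping an existing implementation
rather than by modifying it or growing the subclass tree.

    interface Notifier {
        void send(String message);
    }

    class EmailNotifier implements Notifier {
        public void send(String message) {
            // deliver the message by email (omitted)
        }
    }

    // Decorator: adds rate limiting without touching EmailNotifier or its hierarchy.
    class RateLimitedNotifier implements Notifier {
        private final Notifier inner;
        private long lastSentMillis = 0;

        RateLimitedNotifier(Notifier inner) { this.inner = inner; }

        public void send(String message) {
            long now = System.currentTimeMillis();
            if (now - lastSentMillis >= 60_000) {   // at most one message per minute
                inner.send(message);
                lastSentMillis = now;
            }
        }
    }

Callers compose the behaviour they need, e.g. wrapping an EmailNotifier in a
RateLimitedNotifier, so the original type stays closed to modification but
open to extension.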

~~~
humanrebar
> ...is a roundabout way of saying, "Java", or other static languages with a
> similarly anemic type system...

I agree. In my experience, it's how the median developer writes code, though.
Even in more flexible languages they'll reach for type hierarchies and
abstract base classes. I've seen people _create_ these things in languages
like Lua and Javascript that don't really need them.

I think TDD-like approaches make the case for other approaches more clear, for
what it's worth.

~~~
bunderbunder
Further agreement.

I haven't looked at a textbook on programming recently, but I'm worried that
the standard is still to actively teach new developers to program this way,
even though we've _known_ for decades that towering piles of subclasses
invariably collapse under their own weight.

That said, I still wouldn't lay this tendency for damaged design at the feet
of static typing in general. Not when some of the most vigorous arguments for
static typing tend to come out of language communities that don't have
subclassing in the first place (e.g., Haskell), and when (as you point out)
similar mistakes are just as often made in dynamic languages. Dynamic
languages are certainly more forgiving about poor design, but whether that's a
good thing is yet another fun debate.

~~~
humanrebar
I don't blame statically typed languages. But, yeah, they are a bit more
unforgiving if you need to refactor your way out of that mess is all.

~~~
echlebek
Static languages make refactoring easier, not harder. The more information you
have at compile time, the more automated tooling can do for you.

~~~
humanrebar
The more compile-time information you have, the greater the coupling you have
as well. Make User subclass Entity for polymorphism reasons, and now taking a
User parameter makes your business logic depend explicitly on your ORM.

~~~
bunderbunder
But now the conversation's going in a circle. In a decent static language, you
should never have to run into that particular problem.

Assuming it's a modern static OO language, your business logic should depend
on a User _interface_, so that it never has to take a dependency on
implementation details like that. Even if User was a class beforehand, you can
easily extract an interface at a later date, when you find that you need to
avoid some tight coupling.
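
Roughly, the separation being described might look like this (sketch only;
BillingService and UserEntity are illustrative names, and the Hibernate
mapping annotations are omitted):

    // Business logic sees only this interface.
    interface User {
        String email();
        boolean isActive();
    }

    class BillingService {
        // No dependency on the ORM here; any User implementation will do.
        String invoiceAddressFor(User user) {
            return user.isActive() ? user.email() : "accounts@example.com";
        }
    }

    // The ORM entity is just one implementation, kept at the edge of the system.
    class UserEntity implements User {
        private String email;
        private boolean active;

        public String email()     { return email; }
        public boolean isActive() { return active; }
    }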

Don't blame the gun for what happens when you point it toward your foot and
pull the trigger.

------
macawfish
You know what's awesome about TDD? It means I, a nobody, can contribute to a
huge open source project with lots of moving pieces and be pretty confident
that I'm not catastrophically breaking anything and that my feature works as
intended.

That's awesome!

~~~
humanrebar
Right. Good tests are a passive communication tool. They communicate
expectations. Most of the discussion here revolves around different people
having different expectations. I think the "please test" people have a better
argument mostly because they're advocating for tools for good communication.

~~~
macawfish
Totally. I used to hate testing. It felt like someone making me do homework I
didn't want to do.

Then it dawned on me that I was already testing, the hard way, by opening up a
console and manually setting up test conditions over and over and over again,
and that I could do this much faster and in a reusable way by writing tests
and running them. What an epiphany that was!

I still open up the console all the time. It's a really useful thing. I think
ideally, my testing environment would dump me into a full-fledged console if
something went wrong, but this is not something I've taken the time to set up.

------
DanielBMarkham
My journey with TDD started at hater, moved to skeptic, and is currently at
cautious supporter.

It's a design methodology, not some new way of unit testing. In fact, I think
the more you think of TDD as being testing, the more you're probably missing
the point.

Modern OO languages are full of hidden dependencies and perverse side effects.
The only sane way to write clear and maintainable code is to write the spec
first, that is, you code by writing tests, then writing the code to make the
tests pass. In this manner your code is always up-to-date with your spec.

Where is it a bad idea? Exploratory or academic code, for one. Startup code
where there's no clear benefit to maintainability or even knowledge of what
the app is supposed to do.

Pure functional code is another case entirely. Lately, I've switched to
writing small pure FP microservices, usually with less than 200 lines of
code. Writing code like this creates very simple and small pieces of
functionality with little hidden state or adverse side effects and a limited
cyclomatic complexity factor. I don't see any reason to use TDD here, because
there's nothing happening that isn't obvious. (It's a horse of a different
color with larger pure FP projects, however. Having said that, one should pay
careful attention to whether or not you need to build out huge pure FP
execution units in the first place)

~~~
anon1385
> It's a design methodology, not some new way of unit testing. In fact, I think
the more you think of TDD as being testing, the more you're probably missing
the point.

Weirdly it seems to be the loudest advocates of TDD who are the most confused
about this.

~~~
DanielBMarkham
Yes. Every time I read an essay where in one paragraph the author uses "TDD"
and the next paragraph "unit testing" to mean the same thing, I want to
cringe. I know we're off the rails.

I've seen many outsourced teams say they're doing TDD and when you look at the
code it's obvious it's just the same old unit testing as before. I have no
idea how vendors get away with this. It's no less than fraud, really.

ADD: I think the danger here is that even hardcore TDD boosters often don't
understand why they're doing it. It's a discipline, not an engineering skill
(_choosing the tests_ is the engineering skill). Over time they tend to get
lax. After all, the code always does mostly what I wanted it to do, right?
So I can look at it and by inspection reason through the execution.

At this point, when you don't understand the rationale behind it and you've
started to slip up in your application, TDD has become nothing but some weird
way of writing unit tests. Then, sure, you can use the terms interchangeably.
But then you've missed the entire point of what you're doing, so might as well
just call it "unit test ahead of coding" or something.

------
unnawut
I have been attempting TDD on and off for about 3 years. It never really
took off for me. I would spend most of the time writing the tests, then
implementing the code, only to find out that a lot of my tests don't make
sense, or are overthought and don't really add business or technical value.

However, I think I reached my sweet spot just weeks ago. Here is my optimum
workflow now:

1) Write the test cases (the one-sentence descriptions, in plain English, of
what is expected)

2) Implement the code

3) Implement the test code
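
One way to capture step 1 directly in the test file is as disabled stubs whose
names are the plain-English sentences; a minimal JUnit 5 sketch with invented
case names:

    import org.junit.jupiter.api.Disabled;
    import org.junit.jupiter.api.Test;

    class DiscountCalculatorTest {
        @Test @Disabled("step 3: fill in after the code exists")
        void ordersUnderTenDollarsGetNoDiscount() { }

        @Test @Disabled("step 3: fill in after the code exists")
        void loyalCustomersGetTenPercentOff() { }

        @Test @Disabled("step 3: fill in after the code exists")
        void discountsNeverMakeTheTotalNegative() { }
    }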

The test cases become neatly arranged in a bullet-like layout in a test case
file. I'm able to read through it and be confident that I'm probably covering
most if not all of the cases that need to be covered. I'm always able to switch
between the code and this file to make sure my code covers all that the tests
need.

Then once the code is done, I come back to the test cases to implement them
one by one, catching error after error and seeing my code come to life. As I
code now, I know I have an implementation I'm quite confident of, and the
tests that have business value are covered one by one. It's been a pleasure
since then, and I'm more confident of my code and that my time is spent
efficiently.

~~~
xellisx
I've been trying to get into TDD myself for years. I've read the 2 most
suggested books and another book that had "testing" in it, but I still came
away with issues. I don't program Java, so I don't understand some of the
code, and I can't just convert it over to the languages I know. The examples
were of basic stuff like "Let's make something that adds two numbers", "Here's
a basic mock, let's not worry about DB queries", etc. It really didn't teach
me how to think in TDD, it wasn't specific to the languages I program in, nor
did it touch on things like "real time data" coming from a stream source,
which I think is "how to think in TDD and how to program to fit it."

------
JoeNr76
I never was a fan of TDD, until I saw this talk by Ian Cooper:
[https://www.infoq.com/presentations/tdd-
original](https://www.infoq.com/presentations/tdd-original)

The whole idea of testing functions and/or classes separately means tightly
coupling your test code to the implementation of the real code, while you
should only care about testing the functionality.

Nowadays, I try to write tests that test a unit of functionality. And the
tests should only change when the functionality changes, not after every
refactoring.

That said, code coverage is a metric with no inherent value (well, unless it's
0, of course).

~~~
marsrover
Not a word-for-word quote, but I heard this:

> How many of you have a code base where, if you refactored, tests would break?

> Most people raise their hands

and it made me realize maybe I'm not as stupid as I think I am.

I've been trying to understand how to create unit tests that allow me to
refactor for years, and have been completely unsuccessful. The only way I can
achieve this is using the "classicist" viewpoint of creating many "unit tests"
and only isolating the architecturally significant boundaries. That doesn't
make me happy either, though.

But, yeah, good to know (or bad to know) that I'm not the only one that
struggles with this.

Back to the video I go.

------
S_A_P
I don't think that any of the common development methodologies are crap. I see
no problem with writing tests as a way to work out ideas. I see no problem
with not doing so. I think the main flaw in these methodologies is that they
assume people problem-solve in the same way. It only becomes a problem when
you have a methodology zealot telling you to work in that way only. I am happy
to write unit tests to cover my code. Unit tests can and do prevent
bugs/outages in many cases.

What I do think is kind of crap is the holy war of methodologies that exists.
In my experience, it usually happens that some new CTO or other manager comes
in and says, "We are doing it wrong! From this day forward, WE ALL MUST USE
TDD/AGILE/WATERFALL/YOURFAVORITEWAYHERE".

There is no latitude given, and folks who are not used to the methodology now
take 1.5 to 3x longer to complete things, and deadlines slip. Then the push to
complete work means that things get written sloppily and tests are not well
thought out or people aren't fooing their bar with the baz properly. Quality
suffers in the short term, but eventually everyone catches on, and life goes
back to normal.

Then the new CTO shows up...

------
jondubois
I'm glad that developers are finally having honest discussions about this. Not
long ago, it seemed like 100% unit test coverage with TDD was the only valid
point of view to have.

I'm not a fan of TDD (in the sense of writing the test first) but I do think
that some sort of testing-as-you-go is important for back end API work in
particular. I don't think that 100% test coverage is a good idea for 99% of
business use cases.

------
chewbacha
I’ve been viewing tests as writing the logic twice. That might sound like a
waste but it’s kinda like a form asking you to type your new password twice;
you may make a mistake the first time, but making the _same_ mistake twice has
much lower odds.

And really, we’re playing a probability game with ourselves, trying to reduce
the probability of typing the wrong thing. Writing it twice is one way to do
that.

------
staticelf
I think DHH sums TDD up best in this talk (which is absolutely fantastic btw):
[https://youtu.be/9LfmrkyP81M?t=24m](https://youtu.be/9LfmrkyP81M?t=24m)

------
acqq
For those who don't use that acronym:

TDD means here

Test Driven Development

[https://en.wikipedia.org/wiki/Test-
driven_development](https://en.wikipedia.org/wiki/Test-driven_development)

------
LoSboccacc
the most important part to me is to write testable code. and you cannot be
sure that something is testable unless you write at least some tests to check
that objects do in fact work in a mocked environment and that you can
replicate user interaction at any level of your application.

writing a full test battery after that is probably overkill, but making sure
everything can be taken out of the running app and tested in isolation is
essential to be able at a later date to get a user bug report and convert it
into a testable case to narrow down the root cause.

a good strategy for that goal is to make use of dependency injection at each
layer separation and to make sure that every user-generated event can also be
triggered programmatically - that's especially useful as, while relying on
something like selenium does work, it's exceptionally costly and aggravating
in the long run.
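
As a rough sketch of that last point (names invented): the UI only forwards
the event to a handler method, which a test can call directly instead of
driving the browser.

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertEquals;

    interface OrderService {
        String placeOrder(String cartId);
    }

    // The behaviour lives in the handler; the UI layer only forwards events to it.
    class CheckoutHandler {
        private final OrderService orders;

        CheckoutHandler(OrderService orders) { this.orders = orders; }   // injected at the layer boundary

        // Triggered by a button click in production, or called directly from a test.
        String onSubmit(String cartId) {
            return orders.placeOrder(cartId);
        }
    }

    class CheckoutHandlerTest {
        @Test
        void emptyCartIsRejected() {
            OrderService fake = cartId -> cartId.isEmpty() ? "error: empty cart" : "ok";
            CheckoutHandler handler = new CheckoutHandler(fake);
            assertEquals("error: empty cart", handler.onSubmit(""));
        }
    }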

being able to isolate the buggy behavior and the responsible component is the
major advantage that comes with a full TDD implementation, but that doesn't
mean you can't have enough of that with a lighter approach.

------
chewbacha
> you’ll need a bunch of servers and a hug if you have microservices

I laughed out loud

------
dustingetz
Hi Hillel Wayne (Author), I love your writing style and that you are so
carefully articulate about what people say, and reading their arguments
charitably! The world needs a lot more of that. This article has an
inspirational writing style that I will try to leverage in my own writings. PS
maybe put your name at the top of your posts :)

~~~
hwayne
Thank you! I was heavily inspired by Dan Luu's blog
([https://danluu.com/](https://danluu.com/)), so I'd recommend checking that
out too.

------
grandalf
My view is very pragmatic: Some code is far easier to write quickly and
correctly using a TDD approach.

In other cases the cost of refactoring with TDD is very high if any
significant design or architecture changes occur, so in those cases I find it
is useful to stabilize the broad strokes/patterns a bit before investing to
much in tests.

------
logicallee
I haven't shared this on HN but I saw this on another forum and thought this
was a really fantastic example of what you can end up with, _in the real
world_, when you are _driven_ by tests. Real-world example:

[https://i.redd.it/lwin56fisdsz.png](https://i.redd.it/lwin56fisdsz.png)

It doesn't look like a joke to me. It only works over integers so the code is
absolutely correct.

It also strikes me as the kind of convoluted logic that someone took a really,
really long time to come up with, before it finally worked. (As indicated in
the comment.)

A test can hardly capture what's wrong with this code. But any human can see
it instantly. (And it's kind of weird that the programmer didn't.) I think
most people can think of braindead decisions that are not really captured by
testing.

~~~
UncleMeat
Are there any proponents of serious testing or TDD that don't also promote
code review? Why would tests make this code more likely? If anything, I am
more confident that I can change it to be better if I have a test suite.

~~~
ngoede
Yeah the way to write better code is to write better code. I happen to find
TDD a useful tool in that but doing things badly is still possible.

Someone mentioned DI not getting rid of coupling and I agree. DI is a tool you
might use, but the way to get rid of coupling is not one simple thing; it's a
process involving a bunch of different tools and techniques. You can't just
slavishly follow some process and expect it to fix all your issues. You have
to think and do the work yourself.

------
codemac
This is a very poorly researched article, and the previous was as well.

With people like Capers Jones and others doing piles of studies 30+ years ago,
I'm confused why someone says there are _no_ studies on TDD.

My bet is the author doesn't have access to the relevant historical papers,
and doesn't know they exist.

~~~
davedx
Could you provide some citations please?

~~~
codemac
Look up the talk "what we know about software engineering" on YouTube. Also,
search for Mills, Cleanroom, and obviously Capers Jones as I mentioned.

I'm on my phone so I can't link a lot, but I maintain that the author should
do a lit review of software quality in engineering, and they'll get better
conclusions.

~~~
hwayne
TDD is only ~15 years old. Cleanroom and correct-by-construction both have
some solid studies supporting them, but take very different approaches than
TDD does. They tend to be more rigorous and also take longer so again, it's a
case-by-case basis thing.

~~~
codemac
> They tend to be more rigorous and also take longer so again, it's a case-by-
> case basis thing

I'm not sure if we're taking away the same things from these studies, as their
whole conclusion is it actually takes less time and costs less, due to early
wins in quality. The papers claim it's not a case-by-case thing.

------
jorgec
I created a document explaining TDD (unit testing in general) in a real-case
scenario:

[https://medium.com/southprojects/tdd-a-business-crud-is-
it-w...](https://medium.com/southprojects/tdd-a-business-crud-is-it-
worth-1e6d03dd6b84)

tl;dr: In theory TDD sounds great, but in a real example TDD is not magic and
its coverage is quite limited.

------
jlebrech
it's not crap if there's tight coupling between the testing frameworks, the
programming language, and the frameworks used.

------
agentultra
The only thing I disagree with in this article is the leeway given to
supposedly _legendary_ programmers who can somehow write bug-free code without
tests, specifications, or an inkling of communication with others.

First, it's probably not true. Linus Torvalds is not a legendary human being
who can write critical systems without a single flaw. He relies on legions of
human beings to carefully check and review every line of code before he even
looks at it. There are discussions on mailing lists. There are arguments and
disagreements. There's a process there. He doesn't just flit his fingers
across the keyboard and output amazing, error free code. It probably has
tonnes of errors.

Linus' philosophy is that errors aren't the end of the world and someone will
patch them when they are uncovered.

For some use cases that's fine. However there are plenty of applications where
a more proactive approach to correctness is necessary: real-time systems,
safety critical systems, and yes... even security.

Maybe TDD is a misnomer. I think we should call it _specification driven
development_. Unit tests and integration tests are just a weak form of
specification. They provide theorems in the form of examples that we try to
prove with an implementation. Property based tests give us more examples to
quantify assertions over. Model checking can test liveness as well as safety
in our high-level designs... how much you need to specify and how thoroughly
really should be a factor of the risk and complexity present in the
requirements of the system.
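
As a toy illustration of a property quantified over generated inputs
(hand-rolled here rather than using a property-testing library; the
reverse-twice property is just an example, not from the comment):

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;
    import java.util.Random;

    class ReverseTwiceProperty {
        public static void main(String[] args) {
            Random rng = new Random(42);
            for (int trial = 0; trial < 1000; trial++) {
                // Generate a random list instead of hand-picking one example.
                List<Integer> original = new ArrayList<>();
                int size = rng.nextInt(20);
                for (int i = 0; i < size; i++) original.add(rng.nextInt());

                // Property: reversing a list twice yields the original list.
                List<Integer> copy = new ArrayList<>(original);
                Collections.reverse(copy);
                Collections.reverse(copy);

                if (!copy.equals(original)) {
                    throw new AssertionError("property failed for input: " + original);
                }
            }
            System.out.println("property held for 1000 random inputs");
        }
    }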

To use an analogy: blueprints. If you're just building a shed or a small
footpath then it's enough to sketch your idea on a napkin. If you're building
a house you need to have a more specific and detailed plan that passes review
by a civil engineer. And if you're building a skyscraper then you need to be
thorough and able to convince others of the validity of your designs.

(credit for the analogy should go to Leslie Lamport).

I think most software projects are at the house level in terms of risk. You
could get by with using a dynamic language and a few unit tests if you value
productivity more than correctness. That just means you're willing to accept
that you will have higher reported error rates and are comfortable with
potentially losing customer data or a higher risk for security
vulnerabilities. You can lower your risk if the project requires more
sensitivity to data consistency or security by using a sound type system and
encoding your assertions at the type level, add some property-based tests, and
more integration and unit tests. It's a spectrum one should consider.

I know we all like to write code and sometimes we even hear ourselves saying,
"Well if you wrote the perfect specification you might as well have written
the software," but don't be fooled.

_"Software engineering is the part of computer science which is too
difficult for the computer scientist." -- Friedrich Bauer_

~~~
pwm
Not disagreeing with your overall message; I just want to point out something
that bugs me, and you're not the first person I've heard say something like
this: "They provide theorems in the form of examples". This is incorrect.
Theorems are deductive while tests are experimental, and this is a crucial
difference in my opinion. To give an example: say we have our function
isDAG :: Graph -> Bool, which determines whether a graph is a DAG, and we want
to _prove_ that it works; then no amount of experimental test cases will be
sufficient. In real life, however, most of us (including me) settle for having
_some_ confidence in the correctness of our code by utilising tests.

~~~
agentultra
You're absolutely right. Thanks for pointing that out.

I was using _theorem_ and _proof_ as an analogy to illustrate the separation
of _specification_ and _implementation_.

A useful distinction as you get further along in writing formal
specifications.

_Update_: typo.

------
signa11
TDD always reminds me of Ron Jeffries' attempt at solving Sudoku, in contrast
with Peter Norvig's approach...

~~~
hwayne
Ron Jeffries' mistake was thinking that he could just use TDD to solve sudoku,
as opposed to research and up-front design. Testing does not substitute for
thinking.

------
Siecje
What is TLA+?

~~~
pc86
It is a formal specification language.

[0]
[https://lamport.azurewebsites.net/tla/tla.html](https://lamport.azurewebsites.net/tla/tla.html)

[1]
[https://en.wikipedia.org/wiki/TLA%2B](https://en.wikipedia.org/wiki/TLA%2B)

------
p0nce
I guess TDD can be great for parsers.

~~~
jononor
In the classical way of setting up the parser, passing some example input and
then asserting some things about the output? It's good for covering basic
features/use cases. Since setup is always the same, it should be written as a
list of input/expectation pairs (data driven). But coverage is typically
limited by the effort needed to write the assertions - which grows in parallel
with the input complexity.

If one also writes a serializer, then one can additionally test for any input
the property: `serialize(parse(input)) === input`. This means adding a new
test is just dropping in more example inputs (say from bug reports).
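
A minimal sketch of that round-trip check (parse and serialize below are
placeholders standing in for the real parser and serializer under test):

    import java.util.List;

    class ParserRoundTrip {
        // Placeholder stand-ins so the sketch compiles; replace with the real parser/serializer.
        static Object parse(String input)    { return input; }
        static String serialize(Object tree) { return (String) tree; }

        public static void main(String[] args) {
            // Each new bug report just adds another example input to this list.
            List<String> examples = List.of(
                "{}",
                "[1,2,3]",
                "{\"name\":\"ada\",\"age\":36}"
            );
            for (String input : examples) {
                String roundTripped = serialize(parse(input));
                if (!roundTripped.equals(input)) {
                    throw new AssertionError("round-trip failed for: " + input);
                }
            }
            System.out.println("serialize(parse(input)) == input held for all examples");
        }
    }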

Now one can go further, and define a set of mutation operators that can act on
input to produce a new valid input. Reorder tokens, change data at leaves,
delete and insert new data, etc. Now one can generate arbitrary amounts of new
test cases based on existing input examples.

Other mutation operations can be designed to generate invalid inputs, which
should always give an error (never crash, halt, or throw an unexpected
exception).

------
jrochkind1
This is great.

