

Why Developers don’t TDD: a podcast series - lancerkind
https://agilenoir.biz/series/agile-thoughts/
Starting at episode 14: https://agilenoir.biz/series/agile-thoughts/
======
todd8
I've been a developer for many years, so most of the time I kind of know where
I am going and have a map in my head guiding my development. I like test cases
to help me with refactoring, but full TDD seems to slow me down when I try it.

I have found that TDD helps quite a bit when coaching a new developer. It sets
up intermediate goals that are not as daunting to get to. Sometimes thinking
about a big project can just seem overwhelming to someone that hasn't done it
before.

~~~
lancerkind
I like your points about when working with a new developer and how it creates
intermediate goals. Are you Pair programming when doing this?

~~~
todd8
Sometimes, but more often just looking over other people's work or progress
when I'm asked for help or advice.

------
marcinzm
Because I find that I do not fully understand a problem and it’s potential
solutions until I am working through it in reality.

~~~
elemeno
I used to have that view of the world as well, that a lot of the programming I
was doing was simultaneously exploring the problem space as well as working
towards a solution and thus writing tests (let alone full blown TDD) wouldn't
work.

I think that what changed my mind, other than the dubious joy of maintaining
my own code a couple of years down the line, was the realisation that while I
don't know the solution to the problem yet I do know what each function I'm
writing is supposed to do and that's what I should be testing. As a side effect
it also meant that the functions I was writing became smaller (easier to test)
and it became easier to see what the end solution might look like because I
could understand the intermediary steps more easily.

Or to put it another way, I might not know how to write the compiler, say, but
I do know that I'll need to start off by reading a line of input and I can
test that I'm doing that properly, etc.
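
As a sketch of that first step (Python, with hypothetical names -- not code from the comment), the test for "read a line of input" can be written before the reader exists:

```python
import io

# Hypothetical first slice of a compiler: read one logical line of source.
# In TDD the test below is written first, then read_source_line is
# implemented just far enough to make it pass.
def read_source_line(stream):
    """Return the next non-empty line without its trailing newline, or None at EOF."""
    for line in stream:
        stripped = line.rstrip("\n")
        if stripped:
            return stripped
    return None

def test_read_source_line():
    assert read_source_line(io.StringIO("\nlet x = 1\n")) == "let x = 1"
    assert read_source_line(io.StringIO("")) is None

test_read_source_line()
```

The point is not the reader itself but that this one small step has a precise, testable contract even while the overall compiler design is still unknown.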

~~~
unimpressive
> As a side effect it also meant that the functions I was writing became
smaller (easier to test) and it became easier to see what the end solution
might look like because I could understand the intermediary steps more easily.

I definitely use "it's hard to test this code" as a smell indicating
refactoring is in order.

I've never tried TDD, but I suspect most of the value is that it forces people
into practicing separation of concerns and other basic anti-ball-of-mud
practices.

~~~
jacques_chester
This is true, but you still go through a painful process of learning what that
actually means.

My early sins included happily relying on field injection in Spring (don't),
happily blasting stuff into Ruby objects (don't), mocking my software's world
into an unrecognisable comic book universe of gravity-defying nonsense (don't)
and so on.

A common criticism of TDD is that poor tests make things harder to change, not
easier, and I believe it, because I have absolutely done that.

------
dkarl
The ideologies that developers publicly claim are often described as
religions. One way they resemble religions is that people who espouse them
don't live the way you would expect them to if they really believed in them.

~~~
voodootrucker
XP (and by inference TDD) is often described this way.

But after one has practiced it for a while, it's hard to go back.

The way I describe it is this: When most people start as programmers, they
write a big 1000 line file that should completely solve the problem. Then they
run it, realize it was very wrong, and spend hours debugging it.

As programmers learn, they tend to practice "error driven development": they
get an idea what they want to happen, they run the program (usually "hello
world" to start), and slowly run, edit, run, edit until it evolves into what
they want to see.

If you practice the second process above, you are one step away from TDD. You
are still testing, you are just using manual tests. Once you learn to automate
those tests easily, why wouldn't you TDD?
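
The "error driven" loop above, with the manual eyeball check turned into an automated one (a toy example of mine, not from the comment):

```python
# Hypothetical toy function developed run-edit-run style.
def fizzbuzz(n):
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

# Manual testing: run, print, eyeball the output, edit, repeat.
# Automated: the exact same check, written down once and kept forever.
def test_fizzbuzz():
    assert [fizzbuzz(i) for i in (3, 5, 15, 7)] == ["Fizz", "Buzz", "FizzBuzz", "7"]

test_fizzbuzz()
```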

~~~
PavlovsCat
> When most people start as programmers, they write a big 1000 line file that
> should completely solve the problem. Then they run it, realize it was very
> wrong, and spend hours debugging it.

No, I start line by line, and tend to be thorough. When I start out there is
zero complexity; nothing I add is added blindly, with only a "rough idea" of
what it does. I know exactly what it does. I know where user input and other
code ends and my code begins, at the least, and how I normalize things that
cross that threshold.

Yes, I'm talking about a solo dev making little things for their own needs,
sure. I have the luxury of being as thorough and slow as I want, too.

It's not laziness; the idea of tests actually excited me when I first heard
about it. But for example, when I write a particle system, how would I test
that? Do I test all combinations of parameters and then compare them with
images, which takes a million years? How do I generate the images, with
another particle system? How do I test that, in turn? It really does strike me
as a "now you have two problems" kind of deal in that situation. In others,
the code is just too trivial and unchanging to write a test.

What's a "real" application that has good and complete tests? Something like
the GIMP, or Blender, or an audio editor, or a complex game? I wouldn't know
where to look, and the tests I saw when browsing random github repos often
seemed too simple to be useful, often just covering some "token input".

But what if your "escape string" function has a bug that kicks in at exactly
15 characters? What if your "get user ID from username" has a bug that ONLY
happens when the string "whoops" is in the username? Fuzz everything? Who
really does that for applications without serious security implications?
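
A cheap middle ground between "token input" and full fuzzing is a small random-input property check. As a sketch (the `escape`/`unescape` pair here is hypothetical, not from the comment), sweeping every length from 0 to 32 means a bug that kicks in at exactly 15 characters has nowhere to hide:

```python
import random
import string

# Hypothetical function under suspicion: escape backslashes and quotes.
def escape(s):
    return s.replace("\\", "\\\\").replace('"', '\\"')

def unescape(s):
    return s.replace('\\"', '"').replace("\\\\", "\\")

# Poor man's fuzzing: random inputs at every length, checking the
# round-trip property rather than hand-picked examples.
alphabet = string.ascii_letters + '\\"'
random.seed(0)
for length in range(33):
    for _ in range(50):
        s = "".join(random.choice(alphabet) for _ in range(length))
        assert unescape(escape(s)) == s, f"round-trip failed for {s!r}"
```

This doesn't prove the function correct, but it catches whole classes of length- and character-dependent bugs for a few lines of test code.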

~~~
voodootrucker
I'd love to know how to test graphics and audio programs, frankly.

TDD works very well for things with a defined input and output, including:

1. compilers
2. databases
3. all web apps (output is the DOM)

In regards to the escape string, if there was a bug at exactly 15 characters,
there must be a branch somewhere that isn't being hit, and it should be pretty
easy using code coverage reports to find it, and generate the correct input
sequence to exercise it.

When you get to fuzzing, you're hitting the edge of the state of the art. There is
good research going on at automating some of that:
[http://lcamtuf.coredump.cx/afl/](http://lcamtuf.coredump.cx/afl/)

~~~
mdpopescu
What I do for graphics (never had to test audio programs) is to make the
actual graphics code as simple as possible - no logic, very few lines: just
take this thing and put it on screen. This way, I can inspect it visually,
test it manually once and be reasonably certain that it will work. (Which
_can_ be incorrect, of course, but it's relatively rare. Lost a contract
because of it once, so it can also be painful :D)

Put all the logic in a class that can be tested and that returns something
that can be used in the "get this, put it on screen" part mentioned above.
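
A minimal sketch of that split (hypothetical names, not mdpopescu's actual code): all the decisions live in a pure, easily tested function, and the draw call is a thin shim that is checked visually:

```python
# Pure logic: how many pixels of a health bar should be filled.
# This is the part that gets unit tests.
def health_bar_width(hp, max_hp, bar_width=100):
    if max_hp <= 0:
        return 0
    return max(0, min(bar_width, round(bar_width * hp / max_hp)))

# No logic here: just take the computed thing and put it on screen.
# Inspected visually once; `screen` is whatever the framework provides.
def draw_health_bar(screen, hp, max_hp):
    screen.fill_rect(0, 0, health_bar_width(hp, max_hp), 8)
```

The clamping and division-by-zero cases, which are where the bugs actually live, are now trivially testable without any graphics stack.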

------
jacques_chester
TDD (I lump together lots of extended practices here) is hard.

It's hard to learn from a book or blog post.

It's often hard to solve the many problems of "how do we test <behaviour X>?".

It's hard to know when to stop spiking and start test-driving.

It's hard to stick to it when you solo and you think "oh I can just skip a few
steps here and come back afterwards".

It's hard to learn mockist style and statist style. Hard to go all-in on one
style, then all-in on the other style, and then afterwards often hard to
strike the right balance between them.

It's hard to deal with the vast universe of technologies which, having not
been developed in a TDD fashion, are hard to test.

It's also hard being told your years of practicing amongst practitioners
aren't real experiences. That you didn't see the remarkable speed and
confidence it granted on massive codebases with hundreds of engineers in
dozens of teams working for multiple companies on several continents for years
on end.

------
LandR
TDD won't take off until developers realise they need to start thinking more.

If you don't understand the problem you're trying to solve well enough to
write tests, you shouldn't start coding.

But too many developers just want to dive in and hack code together and try to
figure it out as they go. The resultant code from this is almost always a tire
fire.

Someone on HN a while ago posted the quote "coding should be to software
development what moving the pieces is to playing chess".

I couldn't agree more. I wish more developers would get this.

~~~
quanticle
_If you don't understand the problem you're trying to solve well enough to
write tests, you shouldn't start coding._

Sometimes there are problems that are only really possible to appreciate after
one starts coding. In many instances, I've found that attempting to build a
prototype is the best way to understand the problem. In those instances, TDD
is a huge mistake. It ties you to architectural decisions that you know you're
going to change when you better understand the problem.

Contrast Ron Jeffries' attempt to build a sudoku solver using TDD with Peter
Norvig's attempt [1].

[1]: [http://ravimohan.blogspot.com/2007/04/learning-from-sudoku-solvers.html](http://ravimohan.blogspot.com/2007/04/learning-from-sudoku-solvers.html)

~~~
voodootrucker
Peter Norvig's approach seems like the textbook way to TDD:

1. Find a large body of known good input/output pairs
2. Create tests around that known good data
3. Create an implementation that produces the desired output from the given input

There's no reason Peter couldn't or shouldn't have started with the tests.

~~~
quanticle
What you're describing is closer to what Ron Jeffries attempted than to what
Peter Norvig did. It's Jeffries who started with input and output and then
faffed about for five blog posts while attempting to figure out a way to get
input to match output. What Norvig did was start by thinking about the
representation of the problem and the constraints that the problem imposed. He
then found a way to elegantly represent those constraints in code. And, only
then did he write tests.

Moreover, the process you describe is quite similar to the parody of "how to
draw an owl" [1]. It completely glosses over the fact that "create an
implementation that produces the desired output from the given input" is where
99.5% of the complexity lies.

[1]: [https://i.kym-cdn.com/photos/images/newsfeed/000/572/078/d6d.jpg](https://i.kym-cdn.com/photos/images/newsfeed/000/572/078/d6d.jpg)

------
rickdg
Because tests have to match requirements and those are usually ambiguous or
close to non-existent.

------
rongenre
Usually it's because we let TDD lapse, and suddenly rebuilding the
scaffolding to properly do TDD for a feature is more work than the feature
itself.

------
salawat
I know when I'm coding for me I don't because to me, tests don't solve the
problem I'm out to solve at the time. I want _this_ thing, over _there_. A
test does not get it there.

Now, once I'm done with getting that over there, _then_ I look at it and
decide what level of testing I'm going to commit to.

Some code, I can keep straightforward and narrative enough where even coming
back a year later, I can pick it up and quickly move with it. Part of my
secret to doing that is forcing myself to sit down with a saintly non-
programmer, and talk them through it.

If I can get them to follow, I've got it nailed down. If I can't, I haven't
whittled it down to its simplest incarnation.

For all the other code... Let the test suites be written!

------
SomeHacker44
Because I test everything thoroughly at the REPL as I am developing, and
record my REPL tests.

Because I have been doing this for 35+ years and generally have acceptably low
error rates without it.

Because I would rather invest test resources in runners that actually exercise
the application at the UI layers and the API layers directly in a fully
configured and deployed environment rather than trust mocked up unit tests.
(Which CI can also do.)

Because tests can have as many bugs as the underlying code.

But, for teams of junior developers, I am all for it.

(I also will write some test harnesses/unit tests for aggressive or major
refactorings.)

------
schneiderscode
Using a directory watcher to rerun ruby unit tests on file changes was
something that helped me do more TDD. Having a terminal open and seeing it
turn from red to green as tests start to pass on every save is a very
satisfying feeling. Saving and manually re-running your tests after changing
what should make the next test pass is not.

Edit: It was guard for those curious: [https://github.com/guard/guard-rspec](https://github.com/guard/guard-rspec)

------
segmondy
TDD wasn't a thing when I started coding, and no software that I use or that
has been influential to me was developed using TDD. For my personal projects
I'll never use TDD; it's a crutch. I don't even jump into tests: I add
assertions, and then only add a test if I find a bug that wasn't obvious.
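
The assertion-first style described here can be sketched like this (hypothetical function, not segmondy's code): invariants live in the code itself, and a separate test is added only once a non-obvious bug shows up.

```python
def mean(xs):
    assert xs, "mean of an empty sequence"  # precondition, checked on every call
    result = sum(xs) / len(xs)
    assert min(xs) <= result <= max(xs)     # postcondition sanity check
    return result

print(mean([1, 2, 3]))  # prints 2.0
```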

------
foxyv
TDD is great when you have a solid set of requirements. But when you are doing
"Agile" and the requirements change every day, your tests can be invalidated
almost instantly. If I was doing a reliability focused project TDD would be my
first step. But that isn't the kind of project I work on.

------
pjmlp
My fun test for TDD advocates is picking a random GUI framework X and asking
the presenter how to TDD a native GUI.

After all I am not supposed to show anything on the screen, send a shader to
the GPU, or change any widget property without an existing test.

~~~
rgoulter
AFAIU TDD, it's beneficial to have an automated test that can check what I'd
check manually if I didn't have automated tests (since automated checking is
quicker and more consistent). One benefit of writing a test before fixing code
is that it avoids the problem of writing a test which passes but doesn't
actually check the broken thing. ("You have to write the test first or else
you're doing it wrong" seems overzealous and cargo-cultish to me.)
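
That benefit can be sketched concretely (hypothetical bug and function, not from the comment): the regression test is written first and seen to fail against the broken code, which proves it actually exercises the broken thing.

```python
import re

# Hypothetical bug report: "slugs keep runs of separators". The broken
# version used re.sub(r"[^a-z0-9]", "-", ...), turning each bad character
# into its own hyphen. The test below failed against it before the fix.
def slugify(title):
    # Fixed version: the + quantifier collapses each run into one hyphen.
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

def test_runs_of_separators_collapse():
    assert slugify("Hello   World!") == "hello-world"

test_runs_of_separators_collapse()
```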

Of course, as with anything, "it depends" and so if you're confident that
adding more tests would hinder your development more than benefit it, sure.

Some programming tasks are difficult to get the system setup for test input,
or for checking test output. But it's not like you're not going to check
whether the program works, and so it seems beneficial to make an automated
test if you can.

~~~
pjmlp
I fully support automatic testing as far as possible.

Now, the TDD cargo cult of writing tests before working code, or even before
the actual design of data structures, just seems like nonsense to me.

It only works for CLI demos of simple tools, or data processing pipelines.

Anything else seems convoluted, without a sound architecture design, and just
impossible in some scenarios, e.g. GUI code, UI/UX.

Tests should be a mix of unit, module, and integration tests, written after
the architecture design, the overall UI/UX design process, and performance
analysis of whether the chosen data structures are the best ones for the case
at hand.

~~~
Huggernaut
Given that there are plenty of test harnesses that allow for testing of GUIs,
what about GUIs do you think makes the TDD process impossible?

~~~
pjmlp
The large majority only work for browser-based GUIs.

Beyond that, they are generally only able to validate UI positions or widget
properties, which is maybe 25% of the overall UI/UX of a GUI and its
HIG-compliance behaviors.

~~~
lancerkind
There are test frameworks for every tier of any application on any platform.
People are getting it done.

Testing pixel presentation is a problem. Today, automated macro tests can't
describe in a program how the UI should look before the UI is built. The UI is
simply too dynamic and subject to the whims of fashion. But writing tests
first for the UI, and smoke-testing that the UI elements are there and
basically function, is doable.

Most devs struggle to keep their micro-testable code separate from
presentation code. TDD is a great way to enforce that separation. Otherwise
the logic has to be tested in a slow macro UI test. If you're not familiar
with the three tiers of the Test Automation pyramid, give Agile Thoughts
episodes 1 through 4 a listen. They take you through each stage of the pyramid
and describe how macro (cutaneous and subcutaneous) and micro tests complement
each other: [https://agilenoir.biz/series/agile-thoughts/](https://agilenoir.biz/series/agile-thoughts/)

------
wolco
TDD is writing tests and getting production code as a side effect.

TDD feels like you are always after the next hit. Turn that red into green by
hardcoding. All that matters is green.
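
For readers who haven't seen it, the "hardcode it green" move is a real, named step ("fake it") in textbook TDD, and the standard answer to this complaint is triangulation: a second test makes the hardcoded value untenable. A toy sketch (hypothetical example, Python):

```python
# Step 1, red: test_add fails because add doesn't exist.
# Step 2, green by hardcoding ("fake it"):
#
#     def add(a, b):
#         return 4
#
# Step 3, triangulate: a second assertion forces the general implementation.
def add(a, b):
    return a + b

def test_add():
    assert add(2, 2) == 4   # the test the hardcoded version satisfied
    assert add(1, 5) == 6   # the test that forced the real one

test_add()
```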

------
Crazyontap
This is a very good article on the same topic by DHH. A great read if you
haven't read it before:

> Test-first fundamentalism is like abstinence-only sex ed: An unrealistic,
> ineffective morality campaign for self-loathing and shaming.

[https://dhh.dk//2014/tdd-is-dead-long-live-testing.html](https://dhh.dk//2014/tdd-is-dead-long-live-testing.html)

~~~
Huggernaut
To me this is conflating the level of abstraction tests sit at with when the
tests are written. You can write system tests first and you can write unit
tests last; they are orthogonal.

I practice double-loop TDD, which involves writing a system level test to
drive out some behaviour, then writing other integration or unit tests down
through the layers until the system test passes.

~~~
voodootrucker
I've heard this called "outside-in" testing.

~~~
walligatorrr
I think this is more like ATDD (acceptance test driven development), which
consists of capturing specifications in acceptance tests and using them in a
second loop to drive traditional TDD. Outside-in or mockist TDD, following
Martin Fowler's vocabulary, is just a TDD technique that makes extensive use
of test doubles to define how actors collaborate or interact with each other
to achieve the specifications.

~~~
lancerkind
I played a bit with mock-driven development (I think this is the same as
mockist). So far I haven't found it valuable other than as a kata for learning
a new mocking framework.

@walligatorrr Have you found mockist TDD useful?

------
haolez
What about languages with strong typing? Is it as useful as with, let’s say,
Ruby?

~~~
voodootrucker
It's 100% as useful, but much more difficult to mock and inject.

~~~
paulddraper
> much more difficult to mock and inject

Isn't it just the opposite?

The whole reason for Mockito ([https://site.mockito.org/](https://site.mockito.org/))
is to make Java mocks easier by adding dynamic typing.

~~~
jacques_chester
What you write in your tests, how you test, how you design the software and so
on are altered by the language you work in. This is as it should be.

If I have a type system that prevents certain classes of defects, I gleefully
accept that bounty.

If I have a type system that simplifies certain classes of tests, I gleefully
accept that bounty.

I will happily rant and complain about all of them, but I will also try to
program to the strengths of what I'm using. Wishing one language was a
different language is a waste of everyone's time.

------
pooya72
This is a nice series, but the background music is too loud.

------
theptip
There was a back-and-forth between DHH and some of the original TDD advocates
a while ago, which I thought was pretty interesting.

Test-Induced Design Damage: [https://dhh.dk//2014/test-induced-design-damage.html](https://dhh.dk//2014/test-induced-design-damage.html)

And a resulting conversation between Kent Beck, Martin Fowler, and David
Heinemeier Hansson: [https://martinfowler.com/articles/is-tdd-dead/](https://martinfowler.com/articles/is-tdd-dead/)

I think a lot can be learned from the exchange.

------
Vanderson
Can someone recommend a podcast on TDD that is to the point?

I couldn't get into these because they seem more entertainment focused than
technically focused.

~~~
lancerkind
This is tough to do in a listening medium. Episode 12 covers a simple example
that expresses the workflow and gives you the general idea:
[https://agilenoir.biz/podcast/012-an-example-of-doing-tdd/](https://agilenoir.biz/podcast/012-an-example-of-doing-tdd/)

Getting a copy of “TDD by Example” or buying a video course will get you
further.

------
st3fan
Hey. I do.

~~~
mikekchar
Here's a question I find interesting, though: Do you TDD or do you write
tests? _Lots_ of people write tests for their code. I don't know many people
who actively embrace the idea of TDD -- where the testing activity is driving
the other aspects of their coding. Even among the people who do, I know of
very few people who agree on what the word "driven" means. I suppose I should
be more inclusive and say that "driven" might just mean, "I write tests". I've
had numerous people tell me that it is improper when doing TDD to modify
production code to suit your tests (which, for me is a really weird definition
of the word "driven" ;-) ).

If you do TDD, how do your tests drive your development and do you find that
it is a concept that you are able to communicate to others easily?

Disclaimer: I have not listened to the podcast, but the combination of the
title and this comment was too thought provoking to let slip by :-) Now I'm
quite excited to find some time to listen to the podcast!

~~~
voodootrucker
What I've found most people do wrong is start with unit tests, which test the
implementation rather than the deliverable, and then realize after writing a
bunch of code that they did it wrong.

I tend to write the acceptance test first.

e.g. #1 If I'm writing a web app I write a selenium test "as a user, when I
click the button, I see X".

e.g. #2 If I'm writing a distributed database, I write a test "state should
converge".

These acceptance tests (aka end-to-end, UAT, UI tests) ensure the desired end
state is tested for; then I work outside-in to create the implementation,
writing unit tests as needed along the way.
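
As a runnable stand-in for e.g. #1 (in a real project the outer test would drive a browser via Selenium; the app and helper below are hypothetical), the outside-in shape looks like this -- the acceptance test asserts only on what the user sees:

```python
# Minimal WSGI-style app: the thing under test.
def app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/html")])
    return [b"<button id='greet'>Greet</button><p id='msg'>Hello!</p>"]

# Drive the app the way a browser would and capture status + HTML.
def get(app, path="/"):
    captured = {}
    def start_response(status, headers):
        captured["status"] = status
    body = b"".join(app({"REQUEST_METHOD": "GET", "PATH_INFO": path}, start_response))
    return captured["status"], body.decode()

# Acceptance test: "as a user, when I load the page, I see the greeting."
status, html = get(app)
assert status == "200 OK"
assert "Hello!" in html
```

Note the test knows nothing about how `app` is built internally; unit tests get written underneath it as the implementation grows.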

When I find people fail at TDD, it's usually due to the lack of knowledge,
ability or infrastructure to approach the problem this way.

Small stories in the correct order are also critical. I agree with the "risk
first" approach an article on HN described yesterday.

Source: I taught a course on TDD to some big enterprise clients.

~~~
alkonaut
The problem I find is that it takes quite a long time _after_ having a sketch
solution to realize that it’s a dead end.

You start with “as a user, when I press the X button...” but you realize there
is no way a button will do. The premise is flawed. It has to be a multiselect
list. And “this can’t be a user thing, it has to be a config thing,” and so on.
That is: you can’t (no one could) even _specify_ what is to be implemented
without implementing one or more sketch solutions. You can specify the _user
problem_ but not the solution. “as a user I need to do X”. But a problem
specification alone doesn’t allow easily creating a test on any level.

If I were to do “TDD” I’d perhaps accept tests as being not the first step but
the step before the final solution. Sketch solution(s). Throw away sketch.
Tests. Final solution.

~~~
mikekchar
> Sketch solution(s). Throw away sketch. Tests. Final solution.

I think it's worth pointing out that this has been my interpretation of the
general approach that XP originally took many years ago. If you know the
general design that your story should take, you do a test first approach using
your knowledge of what you are doing as a guide. However, if you _don't_ have
a good idea of the best approach to take, you "spike" a solution, throw away
the code and then use test first to develop the "final" solution (hard to say
"final solution" in XP, so I hope people understand what I'm saying here --
the solution that will be refactored over time as opposed to the solution that
will be rewritten ;-) ). As the sibling thread pointed out, though, "test
first" in the TDD sense is an iterative process and usually you write a test,
satisfy it, write another test, etc, etc. You wouldn't want to write all your
tests up front because it would constrain your design too much -- and then it
wouldn't be "test driven", even if it is good ;-).

I find it especially interesting that you use the word "sketch", which is
exactly the same word I use. For me the activity is based around trying to get
my head around the basic composition in the design. But like you say, if you
are unsure about the UX workflow, just banging out some code and looking at it
can make you realise, "That work flow is just not going to cut it because it
doesn't use/generate the information I need".

------
programmingyes
Because it's tedious and not as fun as just going for it.

~~~
lancerkind
:-)

I actually enjoy writing well-crafted code, both test and production. Still, I
admire your spirit. Just don’t join my team. ;-)

~~~
programmingyes
Good idea. I'd take your code coverage down to single digits in one afternoon,
bam!

------
gaius
TDD requires you to manually predict every possible failure mode and edge case
before starting work. It is obviously complete nonsense. Pure snake oil
designed to sell training and consulting, not to produce working software.

~~~
tashoecraft
While I don't do TDD, I know this isn't correct. The idea isn't to predict
every single possible way the code can go wrong. It's supposed to be an
iterative process: this code should do that small thing, write a test to prove
it, then refactor, updating your tests.

Saying it's just designed to sell training and consulting is ridiculous.

~~~
lancerkind
Nicely said, @voodootrucker, and I agree with @Tashoecraft. Lots of people get
value from TDD. Unfortunately, some don’t and assume it’s the TDD process
that’s broken. It’s not the tool’s fault. It’s also understandable (though not
rational) that groups of people have trouble adopting new practices like TDD.
The Agile Thoughts podcast explores some of the social blockages starting with
episode 14: [https://agilenoir.biz/series/agile-thoughts/](https://agilenoir.biz/series/agile-thoughts/)

