
Test-Driven Development Bypasses Your Brain - ddfreyne
http://stoneship.org/essays/tdd-bypasses-your-brain/
======
MattBearman
Personally, I completely disagree with this; I've never found myself randomly
changing code in a desperate attempt to get a test to pass.

Maybe it's because I'd been coding for years before I ever tried TDD, but when
a test fails, I logically debug the code the same way I would if I wasn't
using TDD.

As far as I'm concerned, having tests just flags possible errors much quicker,
and also gives me more peace of mind that my code isn't gonna be riddled with
hidden bugs.

~~~
Erwin
An often touted "benefit" of TDD is that "addictive" feeling when you write
tests and see them pass. "you feel like you have done a lot because you have a
lot of code"; "you feel a great deal of accomplishment". Quite a few pages
talk about it when you search for "tdd addictive".

The canonical example is the master of XP solving Sudoku in the TDD way:
<http://xprogramming.com/articles/oksudoku/> (part 1 out of 5) -vs- Peter
Norvig: <http://norvig.com/sudoku.html>

------
onemorepassword
The author made one slight mistake: he wrote "there is a tendency to
mindlessly modify code" instead of "_I have_ a tendency to mindlessly modify
code".

Also, it's not like we haven't seen this kind of behavior for decades before
the invention of TDD.

This is just another example of a craftsman blaming his tools. TDD is not a
silver bullet, but no method or tool can serve as an excuse for mindlessly
poking around until it works. This isn't limited to programming either.

~~~
bhaak
If only it were in the past. I've seen this behavior in coworkers: changing
random bits of the code without any coherent system to speak of, rerunning the
application from scratch, and manually testing whether it works now.

I can't describe how shocked I was.

------
seguer
I don't recall ever reading that just because you have tests, you no longer
need to understand how your code functions. Is this something they've seen
happen, or experienced personally?

~~~
liw
I suspect the linked article is a straw-man built to provoke responses, and
thereby create page views for the blog.

~~~
ddfreyne
I (sadly) average 1 article written per 3 years, so there wouldn’t be much of
a point in creating page views.

~~~
jasonlotito
But you still try. =)

The title contains that "bold statement" to incite a response.

There are numerous fallacies in your article. I believe they stem from a
misunderstanding of certain aspects of TDD.

> writing code in a test-driven way bypasses your brain and makes you not
> think properly about what you are doing.

You should not just start writing code blindly. You should have a clear
understanding of the problem up front. When you start coding, it should be
done after you have a plan.

> Furthermore, true 100% code coverage by tests is a myth: no matter how many
> good tests you write, not all cases will be covered.

Code coverage measures the code you've written tests for. It in no way
promises to cover all cases. This is not a deficiency in code coverage, merely
a misunderstanding of it.

> Therefore, mindlessly modifying code until all tests pass is likely to
> introduce new bugs for which no tests exist.

Ignoring the other parts of this that make no sense, I propose that mindlessly
modifying code without tests _will_ introduce bugs.

> Algorithms must be understood before being modified, and modifications must
> be done as if no tests exist at all.

I don't understand this. Of course they must be understood. TDD does not
remove this requirement. I'm also not sure how modifications must be done as
if no tests exist? Maybe you mean to suggest that optimizations in algorithms
must be applied all at once, and cannot be made in small, incremental changes?

> You apply the optimisation, and some tests start failing.

Whereas if you did not have tests, you might not know this.

> But how can you be sure that the algorithm still works? How can you be sure
> that the mindless modifications did not introduce edge cases that were not
> tested before?

How can you be sure that your algorithm worked before in all cases? How can
you be sure, without testing, that your changes still work?

You really are making a straw-man. You are effectively arguing that TDD
doesn't prove something that TDD doesn't promise. In fact, your premise - "no
matter which software development methods you use, do not forget to use your
brain." - and your title imply clearly that TDD doesn't encourage using your
brain.

That's most assuredly not true.

P.S. I hope I don't sound harsh. I'm not trying to belittle or insult you. =)

------
pyre
The "bold statement" is a little too bold. It goes from:

> writing code in a test-driven way bypasses your brain and makes you not
> think properly about what you are doing.

(Test Driven Development _makes_ you not think properly and _bypasses_ your
brain) to:

> no matter which software development methods you use, do not forget to use
> your brain

"Just don't mindlessly program."

------
tehwalrus
TDD is good for verifying that your code handles the set of requirements given
by the customer - including any edge cases that matter to them. I probably
agree that 100% test passes doesn't equal no bugs.

Nonetheless, it's still useful! You _can_ still write TD code and use your
brain - it is only slightly easier to be lazy (and specifically, lazy in a way
you're not supposed to care about, yet.)

In the end, production use crash reports will reveal any bugs that matter in
the system (if any), and you can write new tests for those extra cases and
make the code pass again. Combined with the rest of Agile (sorry), i.e. fast
release cycles and so on, this isn't a roadblock.

~~~
jasonlotito
> I probably agree that 100% test passes doesn't equal no bugs.

TDD never promised that, and practitioners of TDD understand that 100%
coverage doesn't mean you won't have bugs. This doesn't invalidate TDD or
testing (as you are obviously aware =)).

~~~
taligent
Sure you will still have bugs. The question is whether the reduction in bugs
due to TDD outweighs the increased investment in developer and tester time.

Because for those of us who do TDD every day the blowout in time is at minimum
2-3x longer than without it. Not to mention the detrimental impact on build
times.

All of that aside. Have you noticed how there are no decent metrics available
for TDD's effectiveness ?

~~~
jasonlotito
> Because for those of us who do TDD every day the blowout in time is at
> minimum 2-3x longer than without it.

I do not know your environment, but TDD does not add 2-3x to development time
for most everyone I know who practices it. This is especially true when you
factor in total development time. Most estimates I see place TDD at making a
project take 15-30% longer.

> All of that aside. Have you noticed how there are no decent metrics
> available for TDD's effectiveness ?

Not sure what you mean by that. There are metrics you can use (how else could
they do studies on this?). It's been proven time and time again in studies
(some are linked in these threads here).

~~~
Silhouette
_It's been proven time and time again in studies (some are linked in these
threads here)._

I see that claim a lot, but when I look at the "studies" being cited, they
rarely stand up to even cursory scrutiny about their methodology and the
choice of test subjects.

These studies (or those making a case for TDD based on them) tend to do things
like generalising results based on a few undergraduate students writing toy
programs to draw conclusions about professional software developers working on
industrial-scale applications, or using no unit testing at all as a control
rather than say doing either test-first or test-last unit testing but not
following the TDD process.

If you have better sources, please do share them. Developing a non-trivial
application using two different techniques is rarely cost-effective, even
without controlling for having different developers or experience in the two
cases, so getting good quality data about the effectiveness of software
engineering processes is hard.

------
jfim
I think the problem mostly stems from the "do the simplest thing that could
possibly work"[1] methodology that some practitioners of TDD advocate over
thinking about the problem and solving it properly.

[1][http://c2.com/xp/DoTheSimplestThingThatCouldPossiblyWork.htm...](http://c2.com/xp/DoTheSimplestThingThatCouldPossiblyWork.html)

~~~
jasonlotito
The problem isn't the advice, it's the misunderstanding of that advice.
Thinking about a problem should happen, and when you sit down to code, you
should already know what needs to happen. TDD doesn't propose to replace
planning and thought.

~~~
jfim
Fair enough. I admit my experience with TDD is pretty much limited to writing
the Game of Life several times at a code retreat (where thinking too far ahead
was somewhat verboten) and to talking with TDD practitioners who suggest that
the best way to solve a problem is to write some tests, then take "baby steps"
until the problem is solved. I always get the impression that this leads to
somewhat absurd situations, such as the one described in [1].

What do you think would be a good reference with regards to TDD practices, as
opposed to "I saw some people do it and it looked seriously wrong?"

[1] [http://programmers.stackexchange.com/questions/109990/how-
ba...](http://programmers.stackexchange.com/questions/109990/how-baby-are-
your-baby-steps-in-tdd)

------
duey
I've always viewed TDD as a process that works for _some_ people. It's always
important to remember that people learn, develop and think differently. If TDD
works for you, great. But do not force it upon other people, as it may not
work for them.

(This isn't to say that unit tests are bad, but rather writing tests first may
not benefit all people)

------
DanielBMarkham
This sounds a bit like "we don't need no stinking testing", but I know the
author is trying to hit at a deeper point. I only wish he had done better.

One of the problems here is language: TDD as a general concept can cover
everything from high-level behavioral testing to a method-by-method way to
design your program. There's a big difference between those two!

In general, of course, programming is balancing what the program is supposed
to do with how the program is constructed. That's true whether you have TDD in
the mix or not.

~~~
anthonyb
Good luck doing TDD with behavioral tests. Running (e.g.) Selenium tests
repeatedly is only going to slow you down.

~~~
xtracto
Tell me about it! I am on a project right now where we are _required_ to run
BDD tests (Cucumber) which hit _real_ servers (no, mocks won't do). The worst
thing is, the third-party "ESB" we are using takes forever to shut down and
start up... and it gets restarted a lot in the tests (someone else is writing
the BDD tests as "acceptance criteria").

The result? Running the complete BDD test suite takes about 5 hours, and it
must be run for every commit.

~~~
Kequc
This doesn't sound very brain or productivity healthy.

------
rdfi
I'm inclined to agree that it is hard to create an algorithm using TDD (for
example, Dijkstra's algorithm). But "the example" mentioned in the post is not
grounded. It would be nice if someone had a real-world example to back up this
claim; otherwise it is very easy to argue that the author is simply not
applying TDD correctly.

------
karterk
I find TDD to be useful in two cases:

1. When I already know what I'm doing and it's just a matter of coding what's
already in my mind.

2. When I'm writing in a dynamically typed language, it forces me not to be
lazy and to have adequate test coverage, since I don't have compile-time type
safety.
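
The second point can be sketched like this (a hypothetical example, names are
mine): in a dynamic language, a trivial test is the earliest place a type slip
can surface, where a compiler would have flagged it at build time.

```python
# With no compile-time types, only a test catches this kind of slip early.
def total_price(prices):
    # Passing strings (e.g. ["1.50", "2.25"]) would raise TypeError at runtime.
    return sum(prices)

def test_total_price():
    assert total_price([1.50, 2.25]) == 3.75
```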

I do less of TDD when dealing with a statically typed language and/or when I'm
working in an exploratory mode. TDD doesn't help me when I'm just trying out
different things to get going.

The thing that pisses me off is when people don't realize that EVERY technique
has caveats and try to promote it as a golden rule - a lot of "agile"
consultants preach TDD as the golden grail for writing code without any bugs.

EDIT: grammar

~~~
ajanuary

> 1. When I already know what I'm doing and it's just a matter of coding
> what's already in my mind

A concept often used in TDD is spiking. If you _don't_ know what you're doing,
do a quick and dirty untested version until you do know what you're doing.
Throw that code away and TDD it with your new found knowledge.

------
anon1385
[http://www.dalkescientific.com/writings/diary/archive/2009/1...](http://www.dalkescientific.com/writings/diary/archive/2009/12/29/problems_with_tdd.html)
is a much better article about the problems with TDD.

------
kevingadd
Hacking code to fix problems isn't unique to TDD. I see people do it all the
time to codebases that don't have tests.

If your goal is to fix this behavior, go for the root causes. TDD isn't a root
cause for this particular problem.

------
tytyty
I've been mixing in TDD and BDD for the last 1.5 years of my 11-year coding
career. I can't think of any reason not to test except laziness and an
unwillingness to truly use your brain to evaluate its value.

Contrary to this article, one great reason is that TDD/BDD allows me to make
refactors and major changes and know whether or not I broke something. I find
it passe to have the opinion of this article.

A perfect example for TDD/BDD is a complex REST API with dozens of endpoints,
where you're refactoring a piece of the authentication system. How do I know
whether I broke something or introduced a bug?
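
As a sketch of what I mean (hypothetical names, stdlib only): a couple of
tests pinning the auth layer's observable contract turn a silent breakage
during a refactor into an immediate red test.

```python
import hashlib
import hmac

SECRET = b"demo-secret"  # placeholder key, not a real credential

def sign_token(user_id):
    # The piece being refactored; the tests pin its observable contract.
    return hmac.new(SECRET, user_id.encode(), hashlib.sha256).hexdigest()

def verify_token(user_id, token):
    return hmac.compare_digest(sign_token(user_id), token)

def test_auth_round_trip():
    token = sign_token("alice")
    assert verify_token("alice", token)    # valid token is accepted
    assert not verify_token("bob", token)  # token is bound to its user
```

Any rewrite of `sign_token` that changes its behaviour fails these tests
instead of quietly breaking a dozen endpoints.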

My experience is that most developers do not test, and this is exactly how
complex bugs get introduced. You actually make the job more difficult for
yourself because, instead of knowing YOU broke something, a bug gets
introduced and you spend more time tracing the cause. I have worked at many
places that have this obnoxious cycle of deploying, breaking, deploying,
breaking.

It is irritating to see articles like this pop up, because it's not like it's
a school of thought or a religion. It's a purposeful tool that can and will
save you time and effort, and probably impose a few good design practices
along the way. I'm not saying shoot for 100% coverage - fuck, I'm happy just
knowing a few complex pieces are working. And I don't always think it's a good
idea to design APIs from the tests, especially when you are experimenting and
researching.

~~~
bjeanes
Your "perfect example for TDD/BDD" is actually about testing in general, not
TDD. You are stating the value of having a test suite when making a large
change, not the value of writing tests first.

~~~
tytyty
Sure. I guess I forgot to also make the point that the best way to write tests
is to do it as you write the code you are testing. Otherwise the task becomes
somewhat tedious and intolerable.

------
reader_1000
I think this is a more general problem in programming, namely "Programming by
Coincidence" [1]. Some people just try to solve the problem without actually
thinking about it; they just try to match the output specification.

[1] [http://pragprog.com/the-pragmatic-
programmer/extracts/coinci...](http://pragprog.com/the-pragmatic-
programmer/extracts/coincidence)

------
rmoriz
There are papers out there that show better results with TDD. Here is one:

<http://www.infoq.com/news/2009/03/TDD-Improves-Quality>

[http://research.microsoft.com/en-
us/groups/ese/nagappan_tdd....](http://research.microsoft.com/en-
us/groups/ese/nagappan_tdd.pdf)

------
sklivvz1971
This article misunderstands TDD completely. In TDD, _the tests are your
specifications_. Therefore, _any code_ that passes the tests is formally
correct - even though it should always be minimal (YAGNI).

In fact, TDD is not simply "tests first". It is: write ONE test, make it pass
with the MINIMUM amount of code, refactor, loop.
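
A minimal sketch of that loop (my own hypothetical example): each test is
written first and fails, then the smallest change makes it pass, then the
code is refactored before the next test.

```python
# Loop 1, red:   assert roman(1) == "I" fails (roman does not exist yet).
# Loop 1, green: def roman(n): return "I" is the minimum that passes.
# Loop 2, red:   assert roman(2) == "II" fails against the hard-coded version.
# Loop 2, green + refactor: generalise just enough, and no further (YAGNI).
def roman(n):
    # Deliberately handles only 1-3; no test has demanded more yet.
    return "I" * n

def test_roman():
    assert roman(1) == "I"
    assert roman(2) == "II"
    assert roman(3) == "III"
```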

~~~
pjmlp
Usually this makes people go for very simple solutions without thinking
properly about what the right data structures and algorithms are for the
problem at hand.

I'd rather write properly designed code and write the tests afterwards,
before delivering the code.

~~~
sklivvz1971
True, however the solution is ok since passing the test is the only quality
you need.

If not, write a new test, make it pass. The naive implementation can be
substituted with a different one easily since the tests guarantee correctness.

Generally, though, since the "third leg" of TDD is refactoring, this ensures
that the proper structures are put in place as soon as they are actually
needed.

~~~
pjmlp
Have you ever tried to apply that in a big enterprise?

------
vannevar
FTA: _Algorithms must be understood before being modified..._

I would add to this that algorithms must be understood before being tested,
something with which I suspect most TDD proponents would agree, and which
would dispense with the need for the rest of the article.

------
damncabbage
Could we please stop arguing? This back-and-forth with absolutes is akin to
useless political campaigning. [http://blog.8thlight.com/uncle-
bob/2013/03/06/ThePragmaticsO...](http://blog.8thlight.com/uncle-
bob/2013/03/06/ThePragmaticsOfTDD.html)

(More specifically, read everything from "The Pragmatics: So when do I _not_
practice TDD?" onwards.)

------
iansinke
I agree -- I've found myself in the exact case he described (mindlessly adding
and subtracting one on various loop indices until it worked) more than once.
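
For instance (a hypothetical sketch of the trap): with a failing test it is
tempting to toggle the bound until the bar goes green, rather than reason it
out.

```python
# It is easy to flip between len(xs) - size and len(xs) - size + 1 until the
# test passes; reasoning shows the last window starts at index len(xs) - size,
# so the range needs len(xs) - size + 1 values.
def sliding_windows(xs, size):
    return [xs[i:i + size] for i in range(len(xs) - size + 1)]

def test_windows():
    assert sliding_windows([1, 2, 3, 4], 2) == [[1, 2], [2, 3], [3, 4]]
```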

------
wwarner
The same argument would apply to a good compiler. And that is exactly how I
think about tests -- kind of a way of extending the compiler.

------
ginko
Dijkstra's quote reminds me of Knuth's "Beware of bugs in the above code; I
have only proved it correct, not tried it."

------
davesims
If you're coding _mindlessly_ doesn't that _by definition_ mean you've
bypassed your brain?

------
taligent
TDD in theory is a great idea. In practice it is dreadful.

Because what has happened is that the obsession with code coverage has meant
that developers create a whole raft of tests that serve no real purpose,
which, due to TDD, then gets translated into an unworkable, unwieldy,
spaghetti-like mess of code. Throw in IoC and UI testing (e.g. Cucumber) and
very quickly the simplest feature takes 5x as long to develop and is
borderline unmaintainable.

It just seems like there needs to be a better way to do this.

~~~
anthonyb
You're supposed to refactor your code. Layering tests on top of tests means
that you end up spending more time maintaining tests than writing code.

Also, don't test stuff that isn't going to break, and avoid writing system and
UI tests unless you absolutely have to.

~~~
taligent
The problem is that right now in the software industry:

TDD, Agile, Scrum, XP etc are a religion.

And a lot of people have managed to make their lives easier by making the
teachings of this religion mandatory. So what I've been witnessing the last
few years is that saying "no, I don't think we need a test for this" is a
position that will get you nowhere. So instead everyone just puts up with
longer and longer build times and spends more time each day fixing broken
tests.

~~~
anthonyb
That's an overly cynical take - I've seen TDD, Agile, and Scrum all work
really well. In any case, refactoring is part of the religion too, so it
shouldn't be too hard a sell for you.

And it'll fix build times and broken tests! :)

------
largesse
Moral of the story: Coding mindlessly can be almost as bad as blogging
mindlessly.

------
pjmlp
Fully agree!

