
Getting Stuck While Doing TDD, Part 3: Triangulation to the Rescue - waterlink
http://www.tddfellow.com/blog/2016/08/31/getting-stuck-while-doing-tdd-part-3-triangulation-to-the-rescue/
======
ereyes01
I'm pretty fastidious about following TDD practices, and I am very careful to
cover as much program state as the language I'm using reasonably allows.

However, I think my mental model for approaching TDD is much simpler than the
approach presented here. It is basically:

Do I _really_ understand the design implications of the unit I'm about to
develop?

\- No: Play around and prototype, and learn about my design, don't worry about
tests

\- Yes: Throw away any prototype and then really do the unit test-first.

I found that I gravitated towards this process after I learned (partly thanks
to blogs from Uncle Bob Martin) that unit tests are really formal design
specs. They are so formal and specific that they are expressed in code: a
program describing what your program does. Before you can write one of those
(or as many as your unit needs), you need to have a very precise concept of
how you are about to proceed, and what exactly the product code is about to
do.

The benefit of this process is that I prefetch a really deep understanding of
the system I'm building. This leads to fewer bugs, and it leads to sensible,
well-documented, and meaningful tests. It just takes a bit of discipline to
follow at first, when you're not used to working this way.

Some argue that it takes longer to write code this way, especially when I
throw away my hard-earned working prototype! It does take a little longer at
first, but:

\- Once I know the answer/outcome I seek, the tested version comes out way
faster than the prototype.

\- I spend much less time later on fixing my own bugs, which I learned are
mostly due to my own lack of understanding the full dimensions of my system's
design.

~~~
waterlink
This is a pretty interesting process you have there. I usually use throw-away
Prototypes/Spikes, when I do not know how to write my first test or I feel
uncomfortable test-driving the problem. Once I reach enough confidence, I
remove the spike's code and start test-driving it.

I let the design of the system emerge on its own, listening to how hard it is
to write tests and steering the wheel with Red-Green-Refactor (the 2nd cycle
of TDD), while following the Specific/Generic rule, applying Triangulation
together with the Transformation Priority Premise (the 3rd cycle of TDD).

Red-Green-Refactor on its own is indeed well known for quickly reaching a
local maximum of the solution and its design. The 3rd cycle of TDD breaks out
of the local maximum of the solution to the problem. The 4th cycle of TDD
breaks out of the design maximum: hourly, we step back and look at the overall
design and architecture of the system, taking a careful look at the
architectural boundaries and checking whether we are about to cross any of
them with our efforts; then we proceed with informed decisions, steering
toward the design we want for the next hour.

Here is Robert C. Martin's blog post on the cycles of TDD:
[http://blog.cleancoder.com/uncle-bob/2014/12/17/TheCyclesOfTDD.html](http://blog.cleancoder.com/uncle-bob/2014/12/17/TheCyclesOfTDD.html)
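
The Specific/Generic rule (Triangulation with the Transformation Priority
Premise) can be sketched roughly like this; an illustrative Python example of
my own, not code from the article:

```python
# Triangulation sketch: each new failing test forces the
# implementation from specific toward generic.

# Test 1 alone permits the most specific transformation: a constant.
def total(numbers):
    return 6

assert total([1, 2, 3]) == 6

# Test 2 would fail against the constant, forcing a more generic
# transformation (constant -> accumulation over the input):
def total(numbers):
    result = 0
    for n in numbers:
        result += n
    return result

assert total([1, 2, 3]) == 6   # the old test still passes
assert total([4, 5]) == 9      # the new test drove the generalization
```

The point of the second test is that it cannot be satisfied by another
constant, so it pushes the code one transformation toward the general
solution.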

------
keithnz
I started with XP in '99 after seeing the wars on OTUG, and then made great
use of the TDD approach: I was busy using TDD to build real systems, first
getting legacy systems under test and then progressively refactoring and
introducing new functionality. At the same time, on the mailing lists, a lot
of the XP consultants started advocating quite "pure" approaches. At first it
was "as a learning / thought experiment", but a curious thing happened in the
2000s: "Agile" developed a doctrine and religion around it, and the free-
flowing thoughts that were pushing how to design software better started to
dry up. A lot of the TDD advice out there at the moment is from the religious
stage of Agile.

Here's my take-aways on TDD...

\- It's about designing software through feedback

\- Code should compose together easily, refactor till it does, be mindful of
the tiny details

\- Test as little as possible (or I think Kent said "as is Responsible"). The
tests are there to validate your intent and provide feedback. Tests are
liabilities that need to be maintained, so only maintain good ones.

Learning TDD? Do it on throw-away problems, and play around with how small
the steps you take can be. Play around with refactoring; don't have
preconceived ideas about the solution. Do TDD learning problems with other
people. Pay attention to how it changes how you design something. Do some
bigger things and start noticing the problems of TDD, especially when dealing
with outside frameworks and systems.

Production TDD? It's a tool and may help provide design ideas or help stitch
together preconceived design ideas, use everything in your arsenal to design
and create good software, there are no rules or doctrine. But understand as
many tools and "rules" that the software community offers and the reasoning
behind it. Collaborate with others to create and design, and make sure you
have quick feedback cycles. Be quick to recognize new ideas, tools, and
techniques that either provide feedback or designs.

~~~
keithnz
Oh, one other thought: if you are a TDD advocate who TDDs all the time and is
quite productive, try this: code something without tests. A curious thing
occurs: your software mostly works fine; it's often correct straight away.
Basically, you learnt how to design and code. The tests are just feedback. The
code itself should also be surprisingly testable if you wanted to test it (if
not, there are still lessons to learn about design and coding). Now, since you
are coding and designing pretty well without tests, pay attention to what
actually breaks. That gives you a good idea of what kinds of tests are
important. Then think about the minimal test that would have stopped you from
making the mistake.

~~~
DanielBMarkham
I think what led to the current religious zeal is that OO/mutable-state
programming is a house of cards. On a team, everybody can be a great coder,
but just one bug can bring the whole thing down in a non-trivial manner. I'm
guessing it was easier simply to say "TDD All The Things!" than try to split
hairs and work with the nuance.

~~~
dasmoth
This seems to be anathema in some circles, but perhaps the team (in the
abstract) is the problem here. Rather than bending over backwards to make
software development team-friendly, might it make more sense to have
individual module owners and well-defined interfaces between modules?

(Trying to compartmentalise the mutable state wouldn't hurt, either...)

------
ebbv
I missed Part 1, so I went back to the start of this series, and I have to
say I don't agree with this approach at all. By only writing the simplest/most naive
solution to get your tests to pass maybe you are saving unnecessary work, but
I feel like most of the time, and certainly in this example, you're just
deferring work you know you're going to have to do. So instead of doing the
work as you go through the tests you're pushing it all off to the end and then
having to write the whole class in mostly one go.

If instead you wrote what's necessary to pass each test properly as you went,
you'd be building the class incrementally, and importantly you'd have a series
of commits which each have worthwhile code in them instead of a bunch of
useless slop.

~~~
lgunsch
Kent Beck talks about Triangulation in Test Driven Development by Example as
just a tool to pull out when you are stuck. He doesn't recommend it for every
situation, but rather for when you hit a roadblock and need to back off and
work around it.

~~~
waterlink
While this is true, using this technique all the time makes me damn effective
with it and I very rarely get stuck (only when I actually violate one of the
rules).

And I like the BabySteps feel that it produces; it is so nice to evolve your
code bit by bit, one test at a time, without harming Semantic Stability (you
can read about it here:
[http://blog.cleancoder.com/uncle-bob/2016/06/10/MutationTesting.html](http://blog.cleancoder.com/uncle-bob/2016/06/10/MutationTesting.html) )

------
mempko
Programming by evolutionary process. You may get to the solution eventually,
but damn if it's the slowest and most brain dead way to get there. Potential
for falling into local minimum is so great it strikes me as almost a fraud.

Do people really pay for this kind of work?

~~~
waterlink
Yes, they do, and they pay pretty well.

Because it is actually faster and less error-prone once you get familiar with
Evolutionary Design.

A good read from M. Fowler:
[http://martinfowler.com/articles/designDead.html](http://martinfowler.com/articles/designDead.html)
\- this explains why a lot of Extreme Programming practices come together in
synergy to make Evolutionary Design effective (among other things). Drop
enough of these practices and, indeed, it will stop being effective.

------
taeric
Building a sum method.... I get that these techniques should scale to real
problems. I hate that I have never really seen them do so.

~~~
pzh
Yet real system design is usually so complex that no single technique can
cover it. I shudder when I hear TDD proponents claiming that this technique
can even be used for emergent design. An amusing example is Ron Jeffries's
Sudoku solver attempt:

[http://ravimohan.blogspot.com/2007/04/learning-from-sudoku-solvers.html?m=1](http://ravimohan.blogspot.com/2007/04/learning-from-sudoku-solvers.html?m=1)

~~~
taeric
Yes, that is a particularly poignant set of posts. I am still fond of dancing
links to solve Sudoku, and I can't easily see how I would have found that
through TDD.

And yet, one of the most widely agreed-with things you can say in popular
programming circles is that you are fond of TDD...

~~~
kybernetikos
In my experience TDD is useless for discovering algorithms. It is, however,
very useful for implementing them. I've implemented dancing links, and I
absolutely used TDD for significant parts of the design.

------
superasn
A bit OT, but TDD works great when you have the specs for your project down to
a T. Sometimes, though, you're working on a website or project where you have
an end goal and some fair idea of what you're making, but not everything is
concrete; in such scenarios doing TDD can waste a lot of time (at least it did
for me).

For such projects, at least in my own case, I have found that Excel-driven
development (yes, I just made it up) is a really good idea. Instead of writing
tests, creating mocks, and doing a lot of legwork, I just write the function
name in an Excel file, with the constraints and output in columns. It hardly
takes me a minute, since I always have the .xls file open. Then I go into a
free-flying mode, creating my function and code, and never think about it
until the project is finally complete. I am, of course, mindful that I will
have to write tests for it later, so I do use DI and stay away from
Singletons, etc.

Then, when everything is ready (think v0.01), I go back to each function in
the XL file and write a test for it, creating assertions using the
constraints and output I jotted down earlier. This way I'm not writing or
re-writing my tests because something I hoped to use in my project changed
during the course of development.
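
The spreadsheet-to-test step described above could be sketched as a
table-driven test; this is a hypothetical Python illustration (the `slugify`
function and the inline CSV standing in for the .xls file are my own
invention, not superasn's):

```python
import csv
import io

# Stand-in for the spreadsheet: one row per recorded case with the
# function name, the input, and the expected output.
SPEC = io.StringIO(
    "function,input,expected\n"
    "slugify,Hello World,hello-world\n"
    "slugify,  Trim Me ,trim-me\n"
)

def slugify(text):
    # Hypothetical function under test.
    return "-".join(text.strip().lower().split())

FUNCTIONS = {"slugify": slugify}

# Turn each spreadsheet row into an assertion after the fact:
for row in csv.DictReader(SPEC):
    result = FUNCTIONS[row["function"]](row["input"])
    assert result == row["expected"], (row, result)
```

A real version would read the .xls file with a spreadsheet library and feed
the rows to a parametrized test runner, but the row-to-assertion mapping is
the same.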

After that it is TDD only, because v0.01 is out and now I have to be careful
that any changes I make don't break anything. I don't think this would work in
a corporate environment, but for solopreneurs and not-so-serious projects it
can really be helpful. It does seem to work for me, at least.

~~~
lgunsch
TDD is designed for exactly the opposite. If you have a really good
understanding of the problem and solution from experience, then just go ahead
and do it. Kent Beck talks about this in his book "Test Driven Development by
Example".

TDD does best when you need to respond quickly to new knowledge as you dive
into the problem domain, or to changes driven by shifting business
requirements. TDD lets you create your abstractions from concrete evidence,
instead of trying to guess.

However, I have found TDD to be a difficult skill to master. You cannot just
try it out for 6 months and expect to be quicker using TDD than you are
without it. Until you have a bunch of experience built up in TDD, it will slow
you down a great deal - which is what you may have experienced. It took me
about 18 months of continuously using TDD daily to really get a solid grasp of
TDD.

~~~
ereyes01
I second this... TDD is a craft and is hard to master. IMO, that's a direct
consequence of software being difficult to write correctly in general.

It also took me about 18 months to build up a good level of confidence and
speed with TDD. I found it to be a really rewarding experience.

------
vinceguidry
I like TDD if the code is already clean and well-tested, and also if the
existing test code is clean. If the domain code is clean but the test code
isn't, I'll clean up the test code before TDDing the new feature. Shouldn't
take long at all.

If it is not clean, but well-tested, then I need to clean the code before
extending it further. Also the tests will probably be messy, I'll spend time
cleaning those up if I have time. But in all likelihood I'll only be able to
clean up the code this time around.

If it is clean, but not well-tested, then a test-after approach is better.
Take your feature-implementing extension, and write tests for both the
extension and what it's extending. Then the next time I'll have clean, well-
tested code to TDD off of.

If it is neither clean nor well-tested, then I wouldn't bother writing tests
at all, _for this iteration_. Instead I'll clean up the code, make my
extension, and call it a day. The next time I extend that part of the
codebase, I'll use the aforementioned test-after approach.

Tests take time to refine and get right. They are an added layer of security,
but a much better line of defense is nice clean code whose behavior you can
see at a glance. Tests will not save you from bugs, but clean code will make
bugs way easier to iron out.

The prevailing wisdom seems to be: don't refactor unless you have test code
to fall back on, and if you don't have that, write smoke tests.

I do not like this approach. I think that if you have to deal with unclean,
untested legacy code, then you need to do the work to understand it, and the
best way to come to an understanding of code is to clean it up. As you clean
it up, domain concepts become clearer.

Tests are a form of documentation; their intent is to help you understand the
codebase. The code itself tells you _how_ it's accomplishing business
objectives; tests tell you what those business objectives actually are. Unless
you are very unlucky, you should have alternative sources for that knowledge.
So tests aren't strictly necessary, but they are very nice to have.

------
jgalt212
> RED is as important as the other steps in the Red-Green-Refactor cycle. If
> the next test does not fail, it is either already implemented, or it has to
> wait until a later time (until it will fail).

I strenuously disagree with this point in the following scenario: your new
test could validate an edge case that your current implementation passes only
by happenstance. I.e., it's important for whoever refactors the code to pass
your new test as well as all the old tests.
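
A hypothetical illustration of such a happenstance pass (the leap-year
example is mine, not from the comment):

```python
def is_leap_year(year):
    # Naive rule that earlier tests drove out; it ignores the
    # century (100/400) exceptions entirely.
    return year % 4 == 0

# New edge-case test: 2000 is a leap year because it is divisible
# by 400. This test is green immediately -- but only by happenstance,
# since the code never checks the 400 rule. Keeping the test matters:
# a refactoring that adds "year % 100 != 0" without the 400 exception
# would break it.
assert is_leap_year(2000)
```

So a test that does not go red first can still pin down behavior that a later
refactoring must preserve.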

~~~
waterlink
If the production code does something by happenstance, I consider it a bug
and fix it (unless it's Legacy Code, where a bug could be a hidden,
super-valuable-for-customers feature). I.e.: I will make sure that this test
fails and all other tests pass.

Production code doing something that is not defined by tests is generally not
a good idea, because that is clearly untested functionality. It either has to
be eradicated, if nobody (business or customers) needs it, or it should be
covered with a test, and I want to see that test fail (by applying Mutation
Testing).
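
The idea of "seeing the test fail" via mutation can be done by hand; a
minimal sketch of my own (not code from the linked post), where a deliberate
mutant must be killed by at least one test:

```python
# The real production code:
def add(a, b):
    return a + b

# A hand-made mutant: the operator is deliberately flipped. If every
# test still passes against it, the behavior is effectively untested.
def mutated_add(a, b):
    return a - b

# Tests expressed as predicates over the function under test:
tests = [
    lambda f: f(2, 3) == 5,
    lambda f: f(0, 0) == 0,
]

assert all(t(add) for t in tests)              # all green on the real code
assert not all(t(mutated_add) for t in tests)  # some test kills the mutant
```

Real mutation-testing tools generate and run such mutants automatically, but
the pass/kill criterion is exactly this.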

