
Show HN: TDD Screencast #3: Implementing Kind Sort - waterlink
http://waterlink.github.io/blog/2016/02/04/tdd-number-3-kind-sort/
======
dalke
Watching these videos continues to confirm my belief that TDD doesn't
really lead to good design, or to good tests.

In video #2 we saw the evolution of a bubble sort. In video #3 we saw a
quicksort.

However, nothing in the tests led to one design over the other. The choice
existed only in the mind of the developer, and the only requirement
distinguishing the two was the statement that #3 must return a new list.
That can be satisfied (as was said) by making a copy, so it didn't really
drive the development or the design.

In fact, no simple tests can drive one implementation over the other. Only
by counting the number of swaps, or by measuring performance against stated
performance requirements, does one version come out better than the other.
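To make that concrete, here is a sketch with two illustrative implementations (my own, not the screencast's code); the same black-box suite is green for either, so the tests alone can't drive the choice:

```python
def bubble_sort(xs):
    # O(n^2) comparison sort, working on a copy
    result = list(xs)
    for i in range(len(result)):
        for j in range(len(result) - 1 - i):
            if result[j] > result[j + 1]:
                result[j], result[j + 1] = result[j + 1], result[j]
    return result

def quick_sort(xs):
    # first-element-pivot quicksort, returning a new list
    if len(xs) <= 1:
        return list(xs)
    pivot, rest = xs[0], xs[1:]
    return (quick_sort([x for x in rest if x < pivot])
            + [pivot]
            + quick_sort([x for x in rest if x >= pivot]))

# The same black-box test suite passes for either implementation.
for sort in (bubble_sort, quick_sort):
    assert sort([]) == []
    assert sort([1]) == [1]
    assert sort([3, 1, 2]) == [1, 2, 3]
```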

One of my issues with TDD is the 'refactor' step, which is supposed to take
place during green. One of the refactor transformations is "Substitute
Algorithm." I've always thought that was a huge loophole as the appropriate
tests for a new algorithm can be quite different than the tests for the
existing algorithm, but TDD says not to worry about that.

That issue applies here, since #3 really is a "Substitute Algorithm"
refactor of #2. It could have started from the previous sort's test cases
and shown how to refactor the algorithm within the TDD framework, rather
than redoing the code from scratch. So far I haven't seen any of the TDD
example walkthroughs do this, at least for a non-trivial example like
sorting, so showing it might be enlightening, or at least novel.

A test suite's goal should be to convince people that the code works. The
test cases here aren't enough. For example, there is no test for equal
values, and I don't recall the spec saying duplicate values were
disallowed. This matters because some broken sort implementations can loop
forever on that case.
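The kind of test I mean, sketched in Python (`kind_sort` is a stand-in name, with `sorted` as a placeholder for the implementation under test):

```python
def kind_sort(xs):
    # placeholder implementation, only to make this sketch runnable
    return sorted(xs)

def test_handles_equal_values():
    # duplicates must be kept and ordered, not dropped or looped on
    assert kind_sort([2, 1, 2, 1]) == [1, 1, 2, 2]
    assert kind_sort([7, 7, 7]) == [7, 7, 7]

test_handles_equal_values()
```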

If this were implemented in Python, there's a maximum recursion depth to
worry about. I think the worst case for this quicksort implementation is
already-sorted input, so I would want to try sorting 100,000 sorted values
to feel more comfortable. This sort of error analysis must be part of TDD,
but it doesn't really fit the red/green test cycle, because it's a test you
write when you expect the code to pass, not fail.
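A sketch of the failure mode, using an illustrative first-element-pivot quicksort (again my own, not the screencast's code). On sorted input it recurses once per element, so even a few thousand elements exceed CPython's default recursion limit of about 1000:

```python
def quick_sort(xs):
    if len(xs) <= 1:
        return list(xs)
    pivot, rest = xs[0], xs[1:]
    return (quick_sort([x for x in rest if x < pivot])
            + [pivot]
            + quick_sort([x for x in rest if x >= pivot]))

def try_sorted_input(n):
    try:
        quick_sort(list(range(n)))
        return "ok"
    except RecursionError:
        return "RecursionError"

print(try_sorted_input(100))    # small sorted input is fine
print(try_sorted_input(5_000))  # sorted input blows the stack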

Also, from an API perspective, this code returns the original list when the
input is the empty list, and a new list otherwise. I would prefer to always
get a new list, so I have a better idea of ownership.
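A hypothetical test pinning that contract down (`kind_sort` is a stand-in name; `sorted` serves as a placeholder that happens to always copy, which is the behavior I'd want):

```python
def kind_sort(xs):
    # `sorted` returns a new list even for empty input
    return sorted(xs)

original = []
result = kind_sort(original)
assert result == original        # same (empty) contents...
assert result is not original    # ...but a distinct object, so ownership is clear
```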

~~~
waterlink
By the way, I am currently researching whether it is possible to combine
TDD with property-based testing. It might be a pretty interesting mix, and
it should cover some of your points too.
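For a sort function, a property test might look like this, a stdlib-only sketch; real tools like Hypothesis or QuickCheck generate and shrink the inputs for you (`kind_sort` is a stand-in name, with `sorted` as a placeholder implementation):

```python
import random
from collections import Counter

def kind_sort(xs):
    # placeholder for the implementation under test
    return sorted(xs)

for _ in range(200):
    # random lists of random lengths, duplicates and negatives included
    xs = [random.randint(-50, 50) for _ in range(random.randint(0, 30))]
    result = kind_sort(xs)
    # property 1: the output is ordered
    assert all(a <= b for a, b in zip(result, result[1:]))
    # property 2: the output is a permutation of the input (duplicates kept)
    assert Counter(result) == Counter(xs)
```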

Have you tried something like that?

~~~
dalke
I have no experience with property-based testing.

