

Today I wrote some code - jacquesm
http://www.jacquesmattheij.com/today+I+wrote+some+code

======
petenixey
It's funny. Just as the author reached this realisation I've reached the
realisation that tests are killing my productivity and are exactly the wrong
strategy for my nascent product.

I've been testing obsessively for the past 18 months but just realised that
I've got a stack of perfectly tested code that's perfectly wrong for what I
need it for. I'm going to have to tear it apart and rebuild.

I've been thinking about what the right balance is between the two. As code
matures, the value of tests gets higher and higher, but early on, when you
need to shape and reshape, they really kill momentum.

~~~
jacquesm
Funny you should say that :)

This is one of the reasons why I don't make testing into a religion but pick a
time and a place to add the tests. The 'write your tests first' school is
right in that it can create another boost, but it comes with the problem that
it puts a damper on any exploratory programming. I typically take three tries
to get it 'right', and only after the third try, when the interface is
reasonably stable, will I add tests.

Maybe I'm a lousy designer in that I can't nail down an interface on the first
try, but on non-trivial code this is unfortunately my experience so far.

First try: very quick, rough and dirty. Second try: a bit slower, mostly right
but still needs major rework. Third try: minor tweaks in the longer term but
mostly finalized.

The elapsed time is usually something under a day for the first try, a few
days to a week for the second and after the third try it goes into
'maintenance' pretty quickly.

~~~
peteretep
> why I don't make testing into a religion but pick a time and a place to add
> the tests

Quite. Testing is great. "Test Driven Development" where expensive Agile
Consultants who haven't written code in ten years tell teams they always have
to write tests first for everything is bullshit: <http://xrl.us/bmebj3>

~~~
Drbble
Many successful devs also preach TDD. TDD is great when your client has a
high-quality spec, or when you are being paid for meeting that spec. TDD is
less great when your project goal is "build something neat" with a very loose
spec and wide tolerances.

~~~
philwelch
Well it really depends. You can make a religious Process out of TDD where you
start with something like Cucumber and work your way down through the design.
You can TDD your glue code. You can TDD trivial stuff, like when you put a
link in your web app to your new feature, you could write a view test that
verifies that the link appears. You can, if you _really_ want to, go through
an obnoxious iterative process of writing the same function in a progressively
less and less broken state. There might even be value to this approach. I
wouldn't know.

Or you could do something like this: whenever you have a function that seems
to do an actual algorithm, or process some data, or provide a predictable
result given an input, you write up a few asserts and _programmatically_ see
whether it seems to work immediately after writing it, instead of wasting your
time futzing about with it by hand, or tracking down a bug in it later after
noticing strange results coming out of the whole program.
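
A minimal sketch of what I mean, in C, with an invented median3() standing in
for whatever you just wrote:

    #include <assert.h>

    /* the function under scrutiny: median of three ints */
    static int median3(int a, int b, int c)
    {
        if (a > b) { int t = a; a = b; b = t; }
        if (b > c) { int t = b; b = c; c = t; }
        if (a > b) { int t = a; a = b; b = t; }
        return b;
    }

    int main(void)
    {
        /* a few asserts, run the moment the function exists */
        assert(median3(1, 2, 3) == 2);
        assert(median3(3, 1, 2) == 2);
        assert(median3(2, 2, 5) == 2);
        assert(median3(-1, -1, -1) == -1);
        return 0;
    }

Ten seconds to run, and it keeps paying off every time you touch the function.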

Or you could do something like _this_: whenever you get a bug report, create a
test case that reproduces the bug first.
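
Same mechanics, just driven by the report instead (bug and function invented
for illustration):

    #include <assert.h>
    #include <limits.h>

    /* hypothetical function a bug report came in against */
    static int midpoint(int lo, int hi)
    {
        return lo + (hi - lo) / 2;   /* the fix; (lo + hi) / 2 overflowed */
    }

    int main(void)
    {
        /* written first, straight from the bug report: it failed against
           the old code, and now it keeps the bug from quietly returning */
        assert(midpoint(0, INT_MAX) == INT_MAX / 2);
        assert(midpoint(INT_MAX - 1, INT_MAX) == INT_MAX - 1);
        return 0;
    }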

Agile Consultants are like anyone else who tries to sell you enlightenment:
they wouldn't eat unless there was actually a kernel of truth behind what they
were saying. Writing your tests first is a _fantastic_ idea sometimes.

------
daleharvey
I forget where I first read it, but I loved this story:

A boy turns up to school half an hour late, out of breath, having run into the
schoolyard pushing his bike. "Sorry miss, it took me 45 minutes to run here,"
he explains to the schoolteacher. "Why are you pushing your bike? Why didn't
you cycle?" the schoolteacher asks. "I was already late when I left the house;
I didn't have time to get on my bike."

As mentioned elsewhere, turning testing and code coverage into a dogmatic
religion is obviously a bad idea, but when I talk to people about testing it
seems that we err hugely on the side of not testing. When you don't think
something is testable, it is usually because you didn't design it to be
testable from the outset. There is definitely no easier programming guide I
have found than a little light that goes green when I have done the right
thing. If I have tested code properly, it is an order of magnitude less likely
to take on technical debt, and huge sweeping refactorings are no longer big
scary tasks.

~~~
jonstjohn
Completely agree. People tend to dismiss testing rather than balance the depth
of testing that they do. 100% code coverage doesn't mean you've tested every
conceivable combination of parameters to a method.

One of the biggest benefits of testing in my mind is improving the design of
code. If you have code that is very difficult to test, there is likely
something wrong in your design.

Any way you cut it, you need to become knowledgeable about testing to be able
to apply it effectively.

~~~
spc476
At work, for one small subsection of the project I'm on, I wrote the
regression test. I'm testing a "program" that consists of 56 processes (what
I'm testing) across three machines (and requires around four other processes
across two machines to stub out some services we rely on but that aren't
technically part of what I'm testing). It can take up to half an hour to set
up (one of the reasons it's not fully automated is that the third party
network stack we rely upon will shut down if there are too many errors) and it
takes around four hours to run (except for two test cases that require manual
intervention to run properly).

And that's just for the back-end processing (nearly 300k lines of C/C++ code).
Unit testing? Okay, for large values of "unit", and most of the "units" being
tested require almost as much set up as the entire "program".

Is something wrong with the design? Given the constraints and how the project
evolved, I can't see it being any simpler. And I'm somewhat overwhelmed with
the thought of testing the frontend (which requires Android phones).

~~~
wpietri
My metric here is always finding the most valuable way to use my time long
term. In the short term, test automation always seems wasteful. But in the
long term, it's great. Solid product, little debugging, and minimal manual QA.

You're in a situation with a lot of legacy code. Testing shapes design, but it
sounds like you're trying to retrofit testability onto an existing mess.
People cut corners for years, and now it's your problem. That sucks.

In your shoes I'd either start improving it or find a new job. I think life's
too short to spend my time doing something a computer could and should be
doing.

~~~
spc476
Heh ... it's actually a new project. Yes, the majority of the code is third
party software. And while we do have a bit of "legacy code" in the project (in
the form of the third party proprietary network stack that literally _is_ in
pure maintenance mode) we're mostly working in a legacy system (telephony
network) that requires very high degrees of redundancy (hence the number of
processes on the number of machines).

And for the most part, I was able to get the regression test for the backend
process(es) to run unattended once started (thankfully---I (along with two
others) did it _once_ manually and it was horrible). I have no idea of how to
do that for the frontend Android cell phone client. Sure, we can run tests on
an emulator, but there are issues with the Android emulator (it exhibits
different buggy behavior than the physical hardware), so that only gets you
so far. It's an interesting (if somewhat overwhelming) problem.

~~~
jonstjohn
Would you consider this testing 'unit testing', though? Sounds more like
higher-level integration tests with a lot of dependencies. Honest question.

~~~
spc476
I'm not sure. Yes, we _can_ (to some degree) independently test parts, but
like I said, each part requires a significant portion of the environment to be
up (or simulated). And "unit" testing (that is, testing an individual routine
or module) doesn't really make sense given how the code is written (receive a
message via SS7---the network stack I mentioned---and convert it to an
IP-based message). To test the portion that talks to the telephony network
requires a telephony network (very hard to mock out---lord knows I would love
to) and another major unit we wrote (which is another part I test) to even be
testable.

And to test that other part? Well, it requires I mock out the previous unit
(or run it), plus three other parts (one including a cell phone---which is
really a simple script at this point). And again, it doesn't really make sense
to test individual routines, because this part takes the translated IP packets
from the SS7 module and makes several queries to other IP-based services. So a
lot of what's going on is just simple translations (in a
multithreaded/multiprocessor environment---more fun!).

~~~
jes5199
I went to an Agile class where the lecturer compared unit tests to double-
entry bookkeeping. An accountant doesn't say "oh, I don't need to add up both
columns here, I know it's just trivial addition".

Once I got in the habit of writing tests of even the most simple
transformations, the code complexity and my test complexity grew at the same
rate, so it's much harder to end up with a giant untestable mass.

~~~
spc476
I once spent over a month tracking down a bug (in a different project than the
one I mentioned above) that I have a hard time seeing how unit testing would
have caught. The program: a simple process (no threads, no multiprocessing)
that, depending on which system it ran on, would crash with a seg fault. The
resulting core files were useless, as each crash was in a different location.

It turned out I was calling a non-re-entrant function (indirectly) in a signal
handler (so _technically_ it was multithreaded) and the crash really depended
on one function being interrupted at just the right location by the right
signal. That's why it took a month of staring at the code before I found the
issue. Individually, every function worked exactly as designed. Together, they
all worked _except_ in one odd-ball edge case that varied from system to
system (on my development system, the program could run for days before a
crash; on the production system it would crash after a few hours). The fix was
straightforward once the bug was found, but finding it was a pain.
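
For anyone who hasn't been bitten by this class of bug, the shape of it is
roughly this (a contrived sketch, nothing like the actual code):

    #include <signal.h>
    #include <stdlib.h>
    #include <unistd.h>

    static void on_alarm(int sig)
    {
        (void)sig;
        /* malloc()/free() are not async-signal-safe: if the signal lands
           while main() is inside malloc(), the heap is mid-update and this
           call can corrupt it.  The crash surfaces anywhere, much later. */
        char *p = malloc(64);
        free(p);
        alarm(1);                    /* re-arm so it keeps happening */
    }

    int main(void)
    {
        signal(SIGALRM, on_alarm);
        alarm(1);
        for (;;) {
            char *p = malloc(128);   /* interrupted at just the wrong spot */
            free(p);
        }
    }

Every routine here is individually correct; only the interleaving is broken,
which is why testing one function at a time never sees it.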

So please, I would love to know how unit tests would have helped find _that_
bug. Yes, it is possible to write code to hopefully trigger that situation
(run the process---run another process that continuously sends signals the
program handles) but how long do I run the test for? How do I know it passed?

~~~
jes5199
No, unit testing doesn't tell you whether your constructs are safely
composable. So: it will pretty much never find a threading bug, a concurrency
bug, a reentrancy bug, etc.

I only know three ways to detect this sort of bug, and they all suck: 1) get
smart people to stare at all of your code, 2) brute-force as many combinations
as possible, or 3) move the problem into the type system of your language so
you can do static analysis of the code.

------
rlander
Let me just state a fact: every programmer tests code. Whether you're checking
a command line output, experimenting in the REPL or reloading a browser,
you're testing your code.

What rubs me the wrong way is that, instead of a simple "you know all that ad
hoc testing that you do? There's a way to automate it that'll probably save
you some time and let you test the same things with the press of a key...",
_non-testers_ usually get a condescending "oooh my sweet summer child, what do
you know of code?"

~~~
rickmb
I'm afraid the statement that "every programmer tests code" is _not a fact_.
Not by a long shot.

In fact, the vast majority of bugs are the result of not testing changes _at
all_, in any way, shape or form. Committing code changes without even running
them (or only running them through the most simple and predictable of
scenarios) is not an exception.

Many programmers, especially those that don't like writing tests, simply
assume it works "because it was simple". It's this utterly unrealistic and
unprofessional hubris that gets condescending reactions.

Given the damage it does to both the product at hand and our profession in
general, I would say condescension is a rather mild response to this
behaviour.

~~~
tonyarkles
At one of my past jobs, the threshold for "should I commit this?" was "does it
compile?". The rationale was that if there were problems with it, they'd be
caught during "acceptance testing" (which was probably 6 months down the
road). I didn't stick around for very long...

~~~
prodigal_erik
I've seen PHP committed and deployed to production with syntax errors, which
means nobody ever tried the offending pages _even once_. I also left that shop
pretty quickly, because I don't think options keep vesting if I garrote
somebody with a network cable.

------
marvin
Hey, I have a question for all you TDD fans. In my (still short) programming
career, I have only stumbled across situations where automatic testing looks
impossible to do in a sensible way - deploying a patchwork of code against
huge platforms like SharePoint, or working with APIs like COM that don't lend
themselves very well to testing, and where the code is "interface" heavy
rather than "logic" heavy. Recently I've been looking into iOS development.

I also get the impression that most IT work today mainly involves working with
huge libraries and APIs, and that automatic testing therefore is hard to
implement in a sensible manner. I very much get the idea of automatic tests
(and I've wished for them quite a few times, where they've been very hard to
implement because the API seems to get in the way). But are there really so
many applications for them unless you are building a huge, monolithic
application where everything is defined in advance? Seems to me, like a lot of
people have pointed out, that it will slow you down when prototyping.

This is not a criticism of TDD/automatic testing, but I just don't see how to
create good tests when most of your code is just glue between different
libraries and most of your time is spent reading documentation and chasing
bugs in your library. Would be really cool if someone could point me to an
overview of these things. Am I just in the wrong organization?

~~~
davesims
You're asking about integration tests, and yes, those are hard to do,
especially when the integration is against a large deployment of third-party
or existing API code like you describe.

What you want to do in that case is isolate the "glue code" if you can and
test its assumptions in isolation. Wrap the API dependencies in an interface
and inject mock objects to play the role of that API. This is really what
mocks do well.
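
In plain C terms (since not every platform has a mocking framework), the
wrap-and-inject idea looks something like this sketch, with invented names:

    #include <assert.h>
    #include <string.h>

    /* the interface your glue code depends on, instead of the real API */
    struct directory_api {
        int (*lookup_user)(const char *name, char *email, int len);
    };

    /* glue code under test: it only ever talks to the interface */
    static int notify_user(const struct directory_api *api, const char *name)
    {
        char email[128];
        if (api->lookup_user(name, email, sizeof email) != 0)
            return -1;                 /* unknown user: nothing to send */
        /* ... queue mail to email ... */
        return 0;
    }

    /* mock playing the role of the third-party directory service */
    static int mock_lookup(const char *name, char *email, int len)
    {
        if (strcmp(name, "alice") != 0)
            return -1;
        strncpy(email, "alice@example.com", len);
        return 0;
    }

    int main(void)
    {
        struct directory_api mock = { mock_lookup };
        assert(notify_user(&mock, "alice") == 0);
        assert(notify_user(&mock, "nobody") == -1);
        return 0;
    }

In production you'd fill the struct with thin wrappers around the real API
calls; the glue logic never knows the difference.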

If your code is also bootstrapped by some special plug-in hook that is hard to
emulate in a test environment, like a MS SharePoint or Dynamics thing, then
you should isolate the code in question from the Class that implements that
hook, so that a test can boot up that code just like the plugin would.
Interfaces are probably a good option on this end as well.

So, you often can't test your production code in an integration environment
exhaustively, but that's OK in most cases because a) a truly exhaustive
integration test is probably a combinatorial problem and not realistic anyway,
and b) you'd just be proving that your third-party API works as guaranteed,
which is probably not your highest risk and not worth the trouble.

Your real concern is to test the assumptions of new code and also create the
TDD discipline around that code which tends to make for better code.

------
TamDenholm
I think going back into some production code you wrote years ago that doesn't
have tests, and putting them in, is a really nice way to self-reflect. It
provides at least two benefits: 1) improving the code, and 2) showing you just
how far you've come since writing the code, which in turn shows you how far
you can keep going.

------
jader201
> For a lark I'm going to take an old project that has been running for years
> without a hitch and I'm going to retro-actively add tests to the code.

Be prepared to possibly rewrite most, if not all, of your code. I have found,
at least in my old code -- which admittedly was written well before I became
familiar with the concept and discipline of testing -- that my code is just
not testable. At all.

It is very tightly coupled, and it is almost impossible to write any sort of
meaningful tests against it.

But maybe you were a better designer than I was, and if so, you may be able to
fairly easily add tests to your code.

But if not, you're in for a challenge. :)

~~~
__abc
If your code has been "running without a hitch" and is "an old project" (which
I assume means you are no longer actively updating it), why are you adding
tests?

Out of curiosity, what's the ROI that you see in adding tests to that project?

~~~
ttt_
For some reason clients tend to get upset when things that were working
suddenly break (they don't even care that it's a legacy code base!).

Joking aside, in order to effectively prevent that, a test suite must be in
place to prevent regressions when you change or add code. If there isn't one,
you need to decide between cranky clients and the nastiness of adding tests to
legacy code.

~~~
falcolas
> For some reason clients tend to get upset when things that were working
> suddenly break (they don't even care that it's a legacy code base!).

Another sad truth to consider is that just the process of refactoring your
code to add tests can just as easily break functionality. I've learned this
the hard way.

------
DanWaterworth
The cool kids write invariants that they prove correct via equational
reasoning.

~~~
ericbb
There's also the Woz method of thinking about the problem so intently that you
can simulate the whole system in your head without even looking at any code.
Being unable to write your program bug-free would be like not being able to
recognize your mother.

------
edw519
_For a lark I'm going to take an old project that has been running for years
without a hitch and I'm going to retro-actively add tests to the code. I'm
really curious how many bugs and unexpected behaviors will turn up._

Last week I got an enhancement request from a customer that basically said,
"Add B capability and make it work exactly like A capability."

"Cool," I thought, "This should be easy."

So I examined all the "A" stuff, which had been in production since 2008. Then
I cut, pasted, modified, and added a whole bunch of stuff. (I know, I know,
every once in a while, a programmer's just gotta take the lazy way out.)

When I started unit testing my B stuff, I broke everything on almost every
try. This was before I even assembled a test plan; it was just a hacker
beating up his own work.

How could this be? So I went back and beat up the A stuff and broke it in all
the same places. Stuff thousands of people have used thousands of times. Sigh.

~~~
dos1
How is the A stuff wrong when thousands of people have used it successfully
(presumably for its intended purpose) thousands of times? I understand where
you're coming from. But the longer I do this, the less dogmatic about testing
I get. If the code works for its intended purpose then it's probably all
right. Now, adding features and having confidence in refactoring is another
story with untested code.

~~~
DanBC
Because the people using it for the intended purpose are not fuzzing it for
potential security flaws.

~~~
dos1
Do you write lots of security fuzz unit tests? Cause I don't. I write tests
that hit the edge cases I can think of, but invariably, I can't think of
everything. I really don't think the security argument has much to do with
unit testing.

------
joeyh
In one project, I refactor my code late, late at night. I do it in almost a
dream state; it's a process of nearly pure symbolic manipulation, involving
none of the complex mental model that we're used to needing to maintain while
programming. I've been doing this a few times a month for a year, and have
introduced _one_ known bug. I have a very modest test suite.

I'm no programming god, I'm just writing in Haskell. Referential transparency,
purity, and strong type checking for the win.

(That one bug? Made during this hlint run
<http://source.git-annex.branchable.com/?p=source.git;a=commitdiff;h=7e17151e69fcd86fd5cb90dd61ff55d2d017fee7>;
and fixed here
<http://source.git-annex.branchable.com/?p=source.git;a=commitdiff;h=7e17151e69fcd86fd5cb90dd61ff55d2d017fee7>)

------
glennericksen
I've taken a long, meandering road to appreciating TDD/BDD. When I started
programming, I looked up to _why and his hacking approach to coding. Sadly, I
could not express his brilliance and my code was not just untested and sloppy,
but fragile and inundated with smells. As the scale of the projects I develop
increases, I've learned to use testing to decrease the potential breakage and
to better understand the libraries and features I'm working on. Of course
there is an exploratory spike here and there, with tests coming in later to
glue it all together, but those are now exceptions to my normal practice. When
debugging legacy applications, simply creating test coverage for problem areas
goes a long way in solidifying the patches. Testing is not a fail-proof
elixir, but
it certainly improves my workflow and my product, and those results are hard
to argue with.

------
MatthewPhillips
Related: I am a big fan of testing in my personal projects, but have never
written a test for a client because they don't want to pay the additional
hours. I have quoted about 25%-30% of the time for writing tests, if they want
them. Am I wrong? Ditto on documentation: no biters.

~~~
jader201
If you read _Pragmatic Unit Testing_ [1], it talks about how writing tests
actually takes less time than building a project without tests, in the long
run.

First of all, it's possible to ship a product that was built with tests
quicker than one that was not. This may not always be the case, but adding
tests doesn't necessarily add time overall. It may feel like it's quicker to
build an app without tests, but the testing and bug fixing that happens at the
end often exceeds the time it would have taken to build tests and eliminate
most of that testing/bug fixing.

Second, bypassing testing rarely saves time in the long run, especially for
apps that continually require maintenance to existing code. Regression almost
always occurs, and sometimes this isn't caught until production.

Unfortunately, it's a hard sell to clients, and depending on how good you are
at covering this up, it often goes unrealized.

[1] <http://pragprog.com/book/utj/pragmatic-unit-testing-in-java-with-junit>

~~~
MatthewPhillips
While I agree with you, it's a very tough sale if you're consulting for a
company without a strong programming department. I find that companies plan
for best-case scenario and deal with the consequences thereafter. Bugs are
usually considered a programming mistake, even while acknowledging that poor
planning plays a part.

------
trimbo
Someone needs to spell out the difference between "tests" and "test _ing_";
might as well be me. Tests are something you, a developer, or a test engineer
writes. Really important. Test _ing_ is something that the dev, a test
engineer, or someone off the street can do. That is the _most_ important part
of making working software. Unit tests are important, but they exist in a
vacuum. The only way you can look at your code in context is to have someone
use it.

Lots of games (including ones I worked on) ship with hundreds of thousands or
millions of lines of code and practically no unit tests (if any). According to
this article, they shouldn't be working at all, let alone making billions of
dollars. Do I think it's a best practice? No. But on those projects we had an
army of QA to test builds, and producers obsessed with their features who were
always in there making sure that things worked. The most effective testing was
"everyone play the game day" (or weekend). Only then can you find the edge
cases that unit tests can't. Dogfooding is another important take on this
concept.

tl;dr: unit tests are important in their own right, but even 100% code
coverage can't tell you if the thing as a whole works as the user expects.
That's "testing" as opposed to "tests".

~~~
wonderzombie
Someone in the test industry here. I had a stint in the games industry, as
well.

An army of QA is... problematic. Inevitably it turns into a death march, which
is a huge waste. Worse, developers feel much less inclined to own the quality
of their code (consciously or unconsciously) because "QA will catch it." I
suspect that even if you _wanted_ to be more rigorous, the incentives are
against you.

Fixing bugs filed by QA is also expensive, relative to fixing the code before
it's checked in. By the time a bug makes it to QA, it's a ton of patches
later, the developer is working on another feature, and it's not at all
obvious which patch introduced the bug. The most expedient strategy of "revert
the culprit" is difficult if not impossible, and checking in new code for a
fix introduces further risk.

There's certainly something to be said for expert/exploratory testing. Hell,
it puts food on my table. :) But as a tester, I'm far less inclined to work on
a product where developers aren't concerned about code correctness or quality
even at a micro level. It says to me that they don't value my time.

Then again, I suppose in the games industry it matters less what QA thinks,
given that the people actually doing the testing are often temporary/contract
employees.

------
diwank
Completely agree. Even though I am quite new to coding, I was shocked to see
how much more time I spend on debugging a piece of code than actually writing
it. The ratio is closer to 3:1 and sometimes even higher. Writing tests has
brought that down to roughly 1:1 (varies from project to project).

I still don't do as extensive testing as I want to (primarily because I am
lazy) but I have seen the shift in the way I think about solving a problem.
Writing tests forces you to assign structure to your code (in my case I put it
down on paper). It helps you think in terms of "pipes" as in what is going in
and what comes out.

But, I think a lot also depends on the nature of the project. Parsers and
frameworks may need thorough tests while simple apps may do without many. And
for some people, it may be too much of an overhead at times.

------
siavosh
As a long-time skeptic, but a new convert to TDD, it occurred to me that I was
also reaping the benefits of immediate feedback after each build, analogous to
Bret Victor's theme in his talk and demo at CUSEC. It definitely increased my
iteration speed.

------
JoeAltmaier
Anecdote: I rewrote an operating system (back when people did things like
that) to repackage it as a library of modules (kernel, drivers, services) that
self-configured on each boot.

It took 19(!) tries to get past the 1st line of code in the entry point
module.

There is no such thing as a trivial change (tho that was sure not trivial). My
mantra is: "If you haven't tried it, it doesn't work." We all know that, deep
down. We tell stories over dinner of the time something worked on the first
try. Why? Because that hardly ever happens.

------
commiebob
As a mostly self-taught programmer who has been ignoring tests for too long,
can anyone provide links to any books/primers/resources that can introduce me
to writing tests and doing so effectively?

~~~
mmc
I'd suggest making a pot of coffee and diving in to the wiki:

<http://c2.com/cgi/wiki?CategoryTesting>

------
r00k
Congrats on discovering testing! Getting "test-infected" yielded easily the
biggest productivity gain in my career.

However, you've got another big win waiting for you if you'll try something
subtly different: write the tests first. Moving to TDD was another large
improvement in my overall productivity because it drastically improved the
quality of the code I write.

It seemed silly when I first heard of it, but now I won't write code any other
way (except for short, exploratory programs). Give it a try!

------
starfox
Wow, this guy guarantees that if it isn't tested, it doesn't work. Important
projects that people depend on like the linux kernel must have really good
test coverage.

------
gbog
Jacques is stating the obvious here. I see testing code and running code in an
organic relationship, like the flesh and the shell of a lobster. I wrote more
about it here: <http://www.douban.com/note/205412385/>

------
sixothree
My local university now requires tests to be submitted with homework.
Sometimes, though, the tests are provided.

------
ylem
What do people use for testing web front ends? Selenium?

------
ChristianMarks
I find that my tests often need tests, and sometimes even these second-order
tests need to be checked and verified by third-order tests. In experimental
programming though, just running the code is a test...

------
thu
I don't think it was a good read because I already thoroughly agree with you.

In fact, as with a lot of things about programming, even when programmers are
widely accustomed to that line of thought, the real problem will be to
convince management.

In my mind, testing (or lack of it) is part of the technical debt concept
(<http://en.wikipedia.org/wiki/Technical_debt>).

