

Testing is waste of time, I know that my code works - Anon84
http://progfu.com/post/384151811/testing-is-waste-of-time-i-know-that-my-code-works

======
gfodor
Not all code is as simple as an e-mail validator. Unit testing a pure
function like that is so trivial that I don't think anyone would argue
against doing it.
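
To see just how trivial, here is roughly all it takes (a minimal sketch in
Python; is_valid_email is a hypothetical validator, not the article's code):

    import re
    import unittest

    def is_valid_email(address):
        # Deliberately naive: one "@", a non-empty local part, a dotted domain.
        return re.match(r"^[^@\s]+@[^@\s]+\.[^@\s]+$", address) is not None

    class TestEmailValidator(unittest.TestCase):
        def test_accepts_plain_address(self):
            self.assertTrue(is_valid_email("user@example.com"))

        def test_rejects_missing_domain(self):
            self.assertFalse(is_valid_email("user@"))

    if __name__ == "__main__":
        unittest.main()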

However, there is more to the world than pure functions, and there is more to
the world than writing simple CRUD applications.

First, there is code that is meant as "glue" between disparate systems: for
example, a piece of code that pops off a live queue, degrades gracefully
when the queue is slow or lagged, and then performs some action that has an
additional side effect.

The retort here is that you should mock out the queue and whatever other
systems. This can only get you so far though. At the end of the day you need
integration tests that simulate the entire environment. Often it's not
possible to build up and tear down an environment that simulates reality, and
additionally it is not always possible to effectively simulate failures or
load. In this case, I prefer to spin up EC2 instances with a replica of the
real environment and manually test the code with real services under real
load. This can be automated fairly well nowadays.
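
For the part that mocking does cover, the mechanics look something like this
(a sketch in Python; drain_one is a hypothetical consumer, and the point
stands that the interesting failures live outside what this test can reach):

    from unittest import mock

    def drain_one(queue, handler):
        # Pop one message; a slow or lagged queue yields nothing this round.
        msg = queue.pop(timeout=1.0)
        if msg is None:
            return False
        handler(msg)  # the action with the additional side effect
        return True

    def test_drain_one_tolerates_lagged_queue():
        queue = mock.Mock()
        queue.pop.return_value = None  # simulate the lagged queue
        handler = mock.Mock()
        assert drain_one(queue, handler) is False
        handler.assert_not_called()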

Second, there are classes of algorithms that are highly data-driven, require
large amounts of data, and are qualitative in nature. Machine learning
algorithms, search algorithms, etc. Building these algorithms usually requires
a) a large dataset and b) subjective relevance feedback from a human. TDD is
not going to help you here. Sure, you can unit test the pure functions within
your algorithm (for example, it's common to compute tf*idf in search
algorithms, so test your math there) but the "full stack" test and iteration
process for this code generally requires you to manually look at results and
make judgements. This isn't something you can just automate in a unit test,
because regressions against the input data will be false positives or false
negatives, depending on which approach you take to compute a diff :) You can
get some headway by looking for large differences in RMSE or other metrics
between commits, but at the end of the day your tests will still remain
brittle under big improvements to the core algorithms.
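
To make the tf*idf point concrete, the math really is easy to pin down even
when relevance isn't (a minimal sketch; the formula is the standard one, the
function names are mine):

    import math

    def tf_idf(term_count, doc_length, num_docs, docs_with_term):
        # Term frequency times log inverse document frequency.
        tf = term_count / doc_length
        idf = math.log(num_docs / docs_with_term)
        return tf * idf

    def test_tf_idf_known_value():
        # 3 hits in a 100-word doc; the term appears in 10 of 1000 docs:
        # tf = 0.03, idf = ln(100) ~= 4.60517, product ~= 0.1381551
        assert abs(tf_idf(3, 100, 1000, 10) - 0.1381551) < 1e-6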

Finally, another class of algorithms that isn't amenable to TDD is computer
graphics algorithms. The reason is the same: judging correctness generally
requires a human. Again, unit test your vector math and so on. But don't
expect TDD and unit testing to provide an exhaustive safety net to protect you
from introducing subtle bugs in your rendering code. Looking at output is the
best way to do that.
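
The same split applies here: the vector math under the renderer is exactly
the kind of thing worth pinning down (a tiny sketch, nothing
renderer-specific):

    def dot(a, b):
        # Component-wise product, summed.
        return sum(x * y for x, y in zip(a, b))

    def test_orthogonal_vectors_have_zero_dot_product():
        assert dot((1.0, 0.0, 0.0), (0.0, 1.0, 0.0)) == 0.0

    def test_dot_with_self_is_squared_length():
        assert dot((3.0, 4.0), (3.0, 4.0)) == 25.0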

~~~
IgorPartola
Agreed. Code that retains state does not lend itself well to unit testing. One
of my latest projects at work was a MySQL node monitor with automatic
failover. I did add unit tests to it, but I can tell you that I found more
bugs -- or rather, quirks of MySQL's replication behavior and of the MySQLdb
driver -- through my manual testing. I also spent quite a bit of time fixing
the tests rather than the code, since the quirks meant I needed different
behavior.
As this piece of code is quite important to us, the ROI on it will be high.
Then again the investment was sizable as well.

I think unit testing should be viewed in terms of ROI because, just like
premature optimization, a lot of effort often goes into it where the return
is negligible. Pure functions can be easily tested, so test your
critical calculations every time. On the other end of the spectrum you have
things that interact with external systems. Do you unit test an email
notification function? How about a CSS layout? Can you easily test for an out
of memory condition or a failed hard disk? Sometimes it is just cheaper to use
human judgement.
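
For the email example, about all a unit test can do is hand the function a
mock and assert that the mock was called (a sketch, with a hypothetical
notify function), which is exactly the negligible-return case I mean:

    from unittest import mock

    def notify(smtp, address, subject):
        smtp.sendmail("noreply@example.com", [address],
                      "Subject: %s\r\n\r\nSee your dashboard." % subject)

    def test_notify_sends_mail():
        smtp = mock.Mock()
        notify(smtp, "user@example.com", "Node failover")
        smtp.sendmail.assert_called_once()
        # This only proves we called the mock, not that any mail arrives.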

Lastly, as many have pointed out, unit testing is not a silver bullet. Just
because you did not break the unit tests does not mean the code still
functions properly.

------
richcollins
90% of the code you write at a startup is exploratory. It's written to learn
something about your customers. If you break it, you typically don't incur
much of a penalty in terms of cost / time.

Testing every bit of code does cause overhead. I find it takes more than
twice as long to write code when I test it. This overhead puts a big drag on
the speed at which you can move, which can kill a startup.

For these reasons, I only test code that's proven its long term value.

~~~
danielharan
If it's taking you twice as long, you're doing it wrong(TM).

Or more likely, you're not measuring the time properly and are relying on
memory to estimate the difference. It's easy to discount the time spent
re-running the program by hand to check each result, and to overestimate the
time spent writing tests because that part feels painful to you.

So: measure it rigorously, and pair-program with someone who knows how to do
it properly.

~~~
subbu
He is talking as a business owner and not as a programmer. You should also
consider his background in the Lean Startup Circle. The fundamental rule of
an MVP (minimum viable product) is to get your product out to the market as
soon as you can, and writing tests pulls in the opposite direction. The goal
of an MVP is to validate your idea with real users, to see whether it works
in the first place. There is often very little time to write test cases or
even look at edge cases. In that light, writing tests doesn't make much
sense.

~~~
icefox
Why can't they write a test that makes sure the basic case works?

------
HeyLaughingBoy
I love this title because this was _exactly_ the response I got from a
contractor who was hired as a Technical Lead of a team I had to work with.

I found a bug in a module they released to my team, created a series of
about 5 steps to make it perfectly reproducible, and contacted the team
lead. He adamantly REFUSED to believe me. I told him how to reproduce it,
and his response was "I don't need to do that because I know it works fine."
When I pointed out that until the bug was fixed I couldn't make progress,
because my code depended on theirs working, he suggested it was somehow my
fault. 'K then. I found a way to reproduce the bug _without any of my code
running_ (they had written a simple external dialog to demo the module), and
the guy still maintained that "there are no bugs in that code, I've been
running it here for weeks. You must not know what you're doing."

If he wasn't 1,000 miles away I swear I would have gone over there and beaten
him over the head with a stack of Knuth!

Finally I said screw it and went over his head. I knew his manager (who used
to be a programmer) pretty well and explained the problem and how I was being
stonewalled. He figured it out in less than a minute on the phone, told me
how to set up their configuration file so it wouldn't trigger the bug, and
promised the bug would be fixed. And apologized as a bonus.

Not surprisingly, we had tons of problems later with them not properly testing
a bunch of common error paths. Finally the TL pissed off the wrong person and
got himself fired.

OK, this is a bit OT, but I needed that rant :-)

------
cheald
I am a reformed non-tester. A couple of projects ago, I started writing some
tests for some particularly gnarly pieces of code that I wasn't confident were
going to stay working throughout the product's development cycle. Once I got
those written, I found that I was over a hump - I was already set up with my
test suite, object factories, net communications stubs, etc - all the hard
work was done.

At that point, I found that it became _faster_ to write my tests and then
code to make them pass than it was to develop "traditionally". In cases
where you do a lot of setup or teardown for a piece of code (what happens
when a user adds X to his account? Manual testing involves adding, then
testing, then clearing, then adding again...), automated testing really
shines. What used to take 30 seconds of manual testing time per iteration
now takes a ctrl-S and 3 seconds of tests. Multiply that by 20 iterations
and the time savings are significant.
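
The setup/teardown half of that, for anyone who hasn't seen it, is just
fixture methods the runner calls around every test (a generic sketch;
Account here is a stand-in, not real code):

    import unittest

    class Account(object):
        # Hypothetical stand-in for the real model under test.
        def __init__(self, owner):
            self.owner = owner
            self._items = []

        def add(self, item):
            self._items.append(item)

        def items(self):
            return list(self._items)

        def clear(self):
            self._items = []

    class TestAccountItems(unittest.TestCase):
        def setUp(self):
            # The manual "add, test, clear, add..." cycle, automated.
            self.account = Account("test-user")

        def tearDown(self):
            self.account.clear()

        def test_added_item_is_listed(self):
            self.account.add("X")
            self.assertIn("X", self.account.items())

    if __name__ == "__main__":
        unittest.main()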

I'm at the point now that I write tests _because_ it saves me time, not
because it's the "right thing to do", and I'm much more confident in my code
as a result. I don't test everything, but the stuff that's easily testable,
highly volatile, or mission-critical? You betcha, that's getting tested
thoroughly.

The problem is that if you write, manually test, and commit a piece of code
once, you're only guaranteed that it is working at the time that you commit.
Two weeks down the road when you change something only mildly related, you
either a) retest that "known good" code, or b) make the potentially faulty
assumption that it still works. Once an app reaches a certain level of
complexity, any particular change is going to require a QA department to
ensure that it didn't break something else. Automated tests are your first-
line QA department. Your test suite can exercise all the important bits of
your code in one fell swoop, so if you break anything, it'll let you know, and
quickly. The value of this cannot be overstated, and once you've tasted it,
you'll never want to go back.

------
srean
My beef with testing is that I often see it adopted (and often subtly
pushed) as an alternative to thinking through the logic carefully -- why
mess with the if statements when my tests will catch the errors, if any?
Should I index the array with i or i+1? I'll just test what works. Should I
loop till n-1 or n-2? Let me just stick with n-2 and add a test...

The problem is that to write tests that really ensure the logic is correct,
you need to think through the logic very hard. Just spraying the code with a
few tests that happen to come to mind is not enough.

I have yet to see a convincing argument that coming up with a sufficient
number of (unit) tests from a specification is any easier than getting the
logic right.

Testing is well intentioned but often abused as an excuse to be sloppy.

------
geebee
This post makes the strongest argument I can think of in favor of testing:
you're doing it anyway, so why not keep the tests? I don't know anyone who
doesn't write a little main or side program to try out the code they have
created. Unit testing can amount to nothing more than keeping those tests in
a file that runs periodically and raises a notice when they no longer pass.
Usually the "problem" is that the code has changed and the test needs to be
modified, but sometimes the problem is an error introduced into the code.
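
In Python terms, keeping them is as small a step as turning the
print-and-eyeball script into an assert (a sketch; parse_price is a made-up
example):

    def parse_price(text):
        # The function being poked at during development.
        return int(text.replace("$", "").replace(",", ""))

    # The throwaway check most people write anyway:
    #   print(parse_price("$1,200"))
    # Kept as a test that runs periodically instead:
    def test_parse_price():
        assert parse_price("$1,200") == 1200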

TDD is a slightly harder sell, because it isn't how most people normally code.
TDD folks strongly believe that once you train your mind to think this way,
it's the right way to go. I don't write code this way, though. I do what I
described above.

The problem arises when it starts to become very difficult to write tests
and the scenario I described above no longer applies. If you're writing
something that can be tested through a simple main-style program, no
problem. But what if you need to test a UI element, or a service, or
something heavily database-oriented (which would require writing extensive
mocks just to run the test)? A good framework _should_ make this easily
testable, but they don't always. There's a limit to how far I'll bend over
backwards to unit test something. In such cases, maybe integration tests are
the best way to still get some coverage.

~~~
thesz
>I don't know anyone who doesn't write a little main or side program to test
out the code they have created.

We haven't met in person, but here I am! As are some of my colleagues.

Most of my tests never leave the REPL. Those that do end up outside the REPL
are "functional tests" -- tests for a whole chunk of the system.

~~~
mononcqc
Why don't you just automate the REPL tests so they can be done for you?
Whatever you type in the REPL, add it to the test suite. Whatever you check as
output, that's your set of assertions. It will save you a whole lot of time in
the future.
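
In Python, for instance, doctest does literally this: paste the REPL session
into a docstring and its output becomes the assertions (a minimal sketch):

    def word_count(text):
        """Count whitespace-separated words.

        >>> word_count("a b c")
        3
        >>> word_count("")
        0
        """
        return len(text.split())

    if __name__ == "__main__":
        import doctest
        doctest.testmod()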

~~~
thesz
Why should I bother converting tests from the REPL? I use the REPL for
experiments along with testing. When I'm done experimenting, I'm also done
testing. When I return, it will be for another experiment, and a significant
part of my previous tests won't apply anymore.

Also, I work mostly with pure functions and strong type systems. Those
functions won't change their behavior if I change something elsewhere in the
system, and the types won't let something bad slip through that would be
hard to find.

------
hxa7241
Testing is a bit odd. It has practical value, but 'philosophically' it does
not seem to make any sense.

When you are about to write some code and a test for it, you are starting with
one piece of information: what you want the code to do. So why don't you just
translate that into code? What do you gain by translating it into _two_ pieces
of code, and comparing one with the other? How can one have authority over the
other?

So testing must be about ensuring consistency: if you change the code and
the tests fail, you have made an invalid change. But that raises the
question: why do we allow changes to be invalid? Why don't we constrain code
modification to only the kinds that maintain consistency?

Maybe testing is just one of the best things that are practically
possible... but there is a nagging feeling that it does not make sense!

~~~
jeffdavis
Languages with powerful type systems do allow the programmer to ensure
consistency without duplicating code, to a degree. And I suspect that it does
reduce the testing burden substantially when used correctly.

There will always be some need for all of the following: static checking (e.g.
compiler checking the types), tests, and code review. There's a simple
economic reason for this: if you omit any one of those strategies, then the
cheapest way to find the next bug will almost certainly be the one that you
omitted.

You're right that tests are redundant (you could say the same thing about type
annotations, perhaps), but redundancy is underrated. Redundancy aids
readability, and it also helps catch mistakes when there's an inconsistency.
If you put code and tests near each other, it might be helpful to think of it
like: "<code>. In other words, <test>." Similar things could be said for type
annotations, declarations, and constructors; or code comments.

~~~
hxa7241
I suppose testing could be understood as a special error-correcting code for a
particular noisy information processor -- humans writing software.

But then one must begin to wonder: are they doing that job very well?
Testing does not seem to be as carefully designed as Hamming codes or other
error-correcting codes...

~~~
jeffdavis
It's not as much about clarity of communication as it is clarity of thought.

Also, a program is not a single message being sent to the computer. A program
is revised over time, and testing helps ensure the integrity of the program
through revision.

~~~
hxa7241
> ensure the integrity of the program through revision

Yes, that is what I am thinking: each step is like sending the signal through
a noisy channel (that also does some transforming -- it is not an exact
analogy). But testing doesn't seem to be carefully designed to address the
particular kinds of 'noise'/mistakes that humans make.

------
scsmith
I'm amazed at the number of people who don't test at all, but also at those
who blindly believe that because they're told to test, they should always do
it. It's important to pick and choose when it makes sense for you to test:
when you are prototyping you might not need to, though in some cases it's
actually quicker to write something to test an output than it is to keep
trying it another way.

The real skill in testing is knowing when it should be done and how. It's a
good article, and ultimately, until you have tested, you can never know when
it's right to make use of testing and TDD.

------
kylebragger
Amen. I'm going to pass this article around to doubting colleagues of mine.

TDD saves my (and Forrst's) ass on probably a daily basis. The most recent
debacle with our rather complicated post formatting library (which involves
Markdown, autolinking emails, usernames, URLs [but ignoring all content in pre
or code blocks, and not double-linking URLs in href or src attributes], XSS
cleaning/sanitization, and tag rebalancing), would have likely been 1000x
worse without a comprehensive test suite. There's just no way to test every
possible case manually.
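
The payoff with that kind of library is that every bug report becomes one
more row in a table-driven suite (a rough sketch; format_post is a stand-in,
not Forrst's actual pipeline):

    def format_post(text):
        # Hypothetical stand-in for the real pipeline (Markdown, autolinking,
        # XSS sanitization, tag rebalancing).
        return (text.replace("<script>", "&lt;script&gt;")
                    .replace("</script>", "&lt;/script&gt;"))

    # Each past debacle gets a row; the suite replays all of them forever.
    CASES = [
        ("<script>alert(1)</script>", "&lt;script&gt;alert(1)&lt;/script&gt;"),
        ("plain text", "plain text"),
    ]

    def test_format_post_cases():
        for source, expected in CASES:
            assert format_post(source) == expected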

------
JonnieCache
Delicious link-bait title there.

The best articulation of the benefits of TDD/BDD that I've heard is that it
shifts the pain to the start of the project, rather than the end, where it
otherwise tends to reside.

------
jimfl
Automated testing gives you the confidence to make very invasive changes to
code at any stage during the development process. This is especially important
in environments with short iterations, where you are not necessarily designing
for features that haven't even been conceived yet.

Note that I didn't say unit testing. The author gives the example of writing
a test to verify that a bug exists, watching the fix make the test pass, and
then, as you grow the code, having confidence that the bug doesn't reemerge.

The same is true for features. You can write a test that verifies a feature
exists, and that the feature continues to exist as you radically refactor the
code to make new features fit better, as well as across such dangerous
operations as branch merges.
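
A bug-pinning test of that kind can be tiny (a sketch; the function and the
bug here are made up):

    def normalize_name(name):
        # The fix: strip whitespace before lowercasing.
        return name.strip().lower()

    def test_trailing_whitespace_regression():
        # Written to fail before the fix and pass after it; from now on,
        # no refactor or branch merge can quietly reintroduce the bug.
        assert normalize_name("Alice ") == "alice"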

------
lutorm
I'd _like_ to do testing. The problem is that all the examples of these tests
are of modules that work on simple data and produce deterministic outputs. In
those cases, I can see how it's easy to set up an automated test.

But a large portion of my code is Monte Carlo, so the only way to see if it
works correctly is to evaluate the distribution of the output, which will be
correct only in a statistical sense. Moreover, the modules operate on classes
with fairly complicated data, so it's difficult to mock up input data without
running _that_ part, too.
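
About the best I can see doing is fixing the seed and asserting that summary
statistics land within statistical tolerance (a rough sketch, assuming
NumPy, with a toy sampler in place of the real thing):

    import numpy as np

    def sample_exponential(rate, n, rng):
        # Stand-in for the Monte Carlo piece under test: draws from Exp(rate).
        return rng.exponential(scale=1.0 / rate, size=n)

    def test_exponential_mean_within_tolerance():
        rng = np.random.default_rng(12345)  # fixed seed, reproducible draws
        draws = sample_exponential(rate=2.0, n=100000, rng=rng)
        # Exp(2) has mean 0.5 and std 0.5, so the standard error of the
        # sample mean is 0.5/sqrt(n); a 5-sigma band keeps false alarms rare.
        assert abs(draws.mean() - 0.5) < 5 * 0.5 / np.sqrt(100000)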

Even so, why doesn't anyone show a _realistic_ example of setting up an
automated test on a real-life piece of software?

~~~
darthdeus
There's a big difference between integration and unit tests. If you're having
trouble writing unit tests, it's probably because your code is too tightly
coupled, making it hard to separate one part from another.

When you try to write a project with tests, not necessarily test-first, it
will enforce a certain design. This is almost always a good thing, since
you're forced to write modular code.
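
The usual decoupling move is to pass collaborators in instead of reaching
out for them (a generic sketch):

    # Hard to unit test: the function conjures its own database connection.
    #
    #     def signup(email):
    #         db = Database.connect()
    #         db.insert("users", email)

    # Easy to unit test: the collaborator is injected, so a fake will do.
    def signup(db, email):
        db.insert("users", email)

    class FakeDB(object):
        def __init__(self):
            self.rows = []

        def insert(self, table, value):
            self.rows.append((table, value))

    def test_signup_inserts_user():
        db = FakeDB()
        signup(db, "user@example.com")
        assert ("users", "user@example.com") in db.rows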

------
bradleyjoyce
I can understand how TDD can feel like an extra burden if you're a single
developer working on a project. However, having tests is so critically
important if the project is passed on to another dev. I recently inherited a
project that I had worked on at a previous gig. There were no tests, but I
was familiar with the code then, so it didn't matter too much to me. Now the
code has changed significantly, there are still no tests, and I'm always a
bit nervous when I have to push to production, wondering if somehow I missed
something that's going to break the app.

Even if you _hate_ testing... do it for the sanity of the next dev on the
project.

~~~
StavrosK
Where "next dev" includes "yourself, three months from now".

~~~
bradleyjoyce
100% correct!

~~~
StavrosK
I absolutely adore unit tests because they mean I can change something down
the line and be relatively sure that I haven't broken something else. It's
worth writing the simple, 80%-coverage stuff for the peace of mind alone.

------
sammyo
7) Just can't start that diet. I know it's a good idea, and I certainly do
it informally to some degree, but it's not part of the culture, building up
the infrastructure is a bother, and basically the inertia of day-to-day
stuff holds me back. (Sigh. Maybe a New Year's resolution?)

~~~
jeffdavis
To build a culture of quality, the best way to start is a mandatory code
review system. Code doesn't go in until it's peer-reviewed by at least one
person.

Code review is critically important: it instills a different attitude in the
programmer (someone is going to read this, so I won't get away with
sloppiness); and it puts the focus on readability. Tests provide at least two
benefits to readability: the reviewer knows what the code is supposed to do
(provides better context), and the reviewer also has greater confidence that
you didn't break existing basic functionality.

Even without mandating tests, reviewers will soon start to return patches with
comments like "Broken when X,Y,Z happen. Add a few tests around that tricky
code path." Then, it will eventually escalate to general comments like "where
are the tests?", because reviewers will get tired of testing basic
functionality.

------
richardburton
I'm a rank amateur coder working on a minimum viable product, so I've had to
can testing to get something out there quicker. I understand its purpose and
would love to know my code is squeaky clean, but under the circumstances
testing had to go.

~~~
rue
You'd rather release a buggy, possibly completely dysfunctional application
than spend time on tests? How much overhead do you imagine testing would
bring?

------
ssp
Test suites can make you less careful because you start relying on the tests
flagging any bugs. And then when the test suite does find a bug, you just hack
the code until the test suite passes instead of fixing the underlying problem.
This is the same problem as using microbenchmarks for performance work: you
end up tuning to the benchmark instead of to the real world.

~~~
jerf
By "you", I think you may mean "I". If ssp is hacking the test suite to bypass
a failing test, ssp has the problem, not the test suite. jerf does not have
much trouble with that, jerf has experienced "the one failing test that turns
out to reveal a major underlying problem followed by a real fix that it would
have taken him multiple customer-losing hard-to-reproduce bugs in field to
learn about" multiple times.

Not that this is perfect; I've got just such a bug out there right now that
simply refuses to be reproduced once a developer is looking at it, but I
sure have _far fewer_ of those than I would without the tests.

~~~
ssp
_If ssp is hacking the test suite to bypass a failing test_

I'm talking about hacking the application to pass the test suite, not hacking
the test suite. You can often "fix" a failing test by doing

    
    
         if (condition that failed)
               whack the application state so that the test suite will pass.
    

without understanding what the actual bug was. And it's not always obvious
to you that this is what you are doing.

 _ssp has the problem, not the test suite_

Tuning to benchmarks is not some unique character flaw of mine. When you
measure some aspect of people's behavior, they will optimize to that
measurement. If the measurement is a boolean PASS/FAIL from the test suite,
then they will optimize their behavior to get a PASS.

But the actual _desired_ outcome is not PASS, it is "bug free program".

~~~
jerf
Hack the app, hack the test suite -- I meant either equally. I think my post
makes it pretty clear that the main dichotomy is between "hack" and "real
fix", and where the hack goes hardly matters.

Nevertheless, you are arguing that because people sometimes program to
benchmarks, you are apparently better off without the benchmarks. I say that's
nonsense. The solution is to use the benchmarks better. Are we programmers or
automatons? (Or managers?) If you're going to be that defeatist about
programming, you're not going to be a successful programmer under _any_
circumstances; the entire field is a minefield of superficially appealing
optimization opportunities!

~~~
ssp
_Nevertheless, you are arguing that because people sometimes program to
benchmarks, you are apparently better off without the benchmarks._

No, I am not.

------
petercooper
I used to be in the test-hater club, but testing has saved my bacon and made
my life a lot easier since I got over my pride and sucked it up.

I'm writing quite a few libraries lately and doing refactoring as I go. Having
a nice stockpile of tests built up along the way means that if I make some
changes, I can know within seconds whether I inadvertently broke anything.
This makes me braver when it comes to adding new features or reworking
algorithms I've already written and "know" work.

Even if you're doing highly exploratory work, or if you suck at testing,
even bad or cursory tests can turn up the oddest of bugs. I've seen it first
hand, and it's what convinced me to really get into the practice. If even
bad tests could save my bacon, imagine what I could do with better ones...

------
shaddi
I've always wanted to try TDD, but for the stuff that I am working on these
days -- wireless kernel drivers -- it isn't clear to me exactly how to do
this. Does anyone have experience doing TDD at the kernel level?

------
callmeed
My biggest hang-up is the ever-changing landscape of TDD tools (for Rails)
and getting them to work with more complex components like authentication,
authorization, emails, background processes, OAuth connections, and API
calls.

It's frustrating at times getting all the proper tools lined up and working.

I like actual TDD, but getting it set up makes me want to skip/minimize it.

------
dgreensp
Regression tests for a parser or compiler are by FAR the best application of
test suites; they don't make a good defense of testing in general. I wish the
tests I was forced to write at MIT and Google -- I think I even wrote unit
tests for the Pair.getFirst and Pair.getSecond Java methods at one point --
were so defensible.

------
caustic
_"Testing is waste of time"_ \-- yes, we already know this.

Unit testing through skeptic's eye -- _"It's OK Not to Write Unit Tests"_ , as
discussed here previously: <http://news.ycombinator.com/item?id=1376417>

------
jat850
I'm not sure if anyone else encountered it, but I got a 404 when clicking the
submission link.

I was able to find the article here:
<http://progfu.com/testing/testing-is-waste-of-time/>

~~~
gnosis
It's also mirrored here:

[http://progfu.com.nyud.net/post/384151811/testing-is-waste-o...](http://progfu.com.nyud.net/post/384151811/testing-is-waste-of-time-i-know-that-my-code-works)

------
narkuok
Is anyone else annoyed that there is no information about the author? I would
like to have some idea of the author's credentials and work experience.

~~~
darthdeus
So you're judging an idea not based on the idea itself, but based on who
said it?

~~~
zb
The author is to some extent making an argument from experience; it's not
entirely unreasonable to ask what that experience is.

For example, if the author is still a student (I have no idea if that is the
case) that would go some way to explaining the apparent belief that the job of
a software engineer is only to solve already well-defined problems—which is to
say, homework problems. We could then discount the advice accordingly.

~~~
darthdeus
That is true only to some extent. Information can be valid or invalid
independently of its source.

If you have an idea in a dream, does that mean it won't apply in the real
world? Even though the dream is nothing like reality, that doesn't
inherently mean the idea is wrong; it just means it isn't guaranteed to be
right.

