
Ask HN: Do you write tests before the implementation? - MichaelMoser123
I mean how many of you stick with this test driven development practice consistently? Can you describe the practical benefit? Do you happen to rewrite the tests completely while doing the implementation? When does this approach work for you and when did it fail you?
======
christophilus
No. Tests are like any other code. They incur technical debt and bugs at the
same rate as other code. They also introduce friction to the development
process. As your test suite grows, your dev process often begins to slow down
unless you apply additional work to grease the wheels, which is yet another
often unmeasured cost of testing in this fashion.

So, in short, I view tests as a super useful, but over-applied tool. I want my
tests to deliver high enough value to warrant their ongoing maintenance and
costs. That means I don't write nearly as many tests as I used to (in my own
projects), and far fewer than my peers.

Where I work, tests are practically mandated for everything, and a full CI run
takes hours, even when distributed across 20 machines. Anecdotally, I've
worked for companies that test super heavily, and I've worked for companies
that had no automated tests at all. (They tested manually before releases.)
The rate of production issues across all of my jobs has been roughly the same.

This issue tends to trigger people. It's like religion or global warming or
any other hot-button issue. It would be interesting to try to come up with
some statistical analysis of the costs and benefits of automated tests.

~~~
swat535
Maybe you can help me understand this.

Since you don't write as many tests, that means you're not actually testing
all your code branches, because tests incur technical debt after all. So does
this mean you test every single branch manually? Just don't bother with it at
all? Do you just have a few integration tests, and when they break you spend a
good chunk of time figuring out which logical branch broke?

What happens if you make a typo, or comment out a piece of code and forget to
uncomment it, etc.?

I'd love to write fewer tests but don't know how to do it.

~~~
sdenton4
Yeah, as soon as code has more than two "real" branches, I don't trust myself
to manually test them all. One of them will be broken quickly if I keep
hacking in a particular branch. (This is, secretly, also an argument for
writing code in sufficient generality to avoid this phenomenon in the first
place.)

I also never trust a test that passes the first time I run it. I am both
terrible at writing correct code, and completely normal in that regard.

~~~
allannienhuis
I believe he's using the term 'branch' differently than you: 'alternate code
path'.

~~~
sdenton4
No, alternate code path is exactly what I mean. What's the functionality I'm
adding support for today, in addition to the stuff I was supporting
yesterday? How do I know the stuff from yesterday still works? TESTS!

------
IneffablePigeon
The majority of the time, no.

There are a couple of circumstances I often do, though.

The first is when fixing a bug - writing the (red) regression test first
forces me to pin down the exact issue and adds confidence that my test works.
Committing the red test and the fix in two separate commits makes the bug and
its fix easy to review.

The second is when I'm writing something high risk (particularly from a
security standpoint). In this case I want to have a good idea of what I'm
building before I start to make sure I've thought through all the cases, so
there's less risk of rewriting all the tests later. There's also more benefit
to having a thorough test suite, and I find doing that up front forces me to
pin down all of the edge cases and think through all the implications before I
get bogged down in the "how" too much.

~~~
Deradon
> Committing the red test and the fix in two separate commits makes the bug
> and its fix easy to review.

I've done this in the past. Then I started to use `git bisect`, and having a
red test somewhere in your commit history is a killer for bisect. So now I
tend to include both the test and the bug fix within one commit.

~~~
jackweirdy
A tip I learned is to commit the failing test but mark it as an expected
failure, if your test framework supports that.

That way you can commit the test, bisect works, and the test begins "failing"
when the bug is really fixed, and you can commit the fix as well as a one-line
change to amend the test from being failure-expected to just a normal test.
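
With pytest, for example, the idea might look like the sketch below (an assumption on my part: any framework with an expected-failure marker works; divide() is a hypothetical function with the bug still in place, and strict=True makes the run fail once the bug is fixed, which is the cue to drop the marker):

    import pytest

    def divide(a, b):
        # Known bug: raises ZeroDivisionError instead of returning None.
        return a / b

    @pytest.mark.xfail(strict=True, reason="divide-by-zero bug not fixed yet")
    def test_divide_by_zero_returns_none():
        assert divide(1, 0) is None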

~~~
shhsshs
I see a test as a declaration of intended outcome. By writing a test that
expects a failure (say you have a bug in a divide “int -> int -> Maybe
int” function that causes it to return 0 instead of “None” when you divide by
0), you are declaring that the buggy behaviour is actually intentional. So I
would never write a test like this - I think I would prefer committing the fix
and the new test at once. I don’t see the value in reviewing them separately,
because they are related and dependent changes.

Obviously if you view tests differently (eg. as a declaration of _current
behavior_ rather than _intended behavior_ ) then my argument dies.

~~~
jackweirdy
Keep in mind the test is written with the correct behaviour and annotated to
be failing — in a hypothetical language and framework your test would be

@failing testDivZero() { assertEquals(None, div(1, 0)) }

This expresses both the intent and the reality

------
seanwilson
I think there's this myth that TDD is one of the best ways to write software
and if you admit you don't do it, you'll be seen as a cowboy and will look
stupid. I think the truth is TDD has its pros and cons, and the weight of each
pro and con is highly dependent on the project you're doing.

\- The uncomfortable truth for some is that not doing any testing at all can
be a perfectly fine trade-off, and there are plenty of successful projects
that do this.

\- Sometimes the statically checked assertions from a strongly typed language
are enough.

\- Sometimes just integration tests are enough and unit tests aren't likely to
catch many bugs.

\- For others, going all the way to formal verification makes sense. This has
several orders of magnitude higher correctness guarantees along with enormous
time costs compared to TDD.

For example, the Linux kernel doesn't use exhaustive unit tests (as far as I
know) let alone TDD, and the seL4 kernel has been formally verified, both
having been successful in doing what they set out to do.

I notice nobody ever gets looked down on for not going the formal verification
route - people need to acknowledge that automated testing takes time and that
time could be spent on something else, so you have to weigh up the benefits.
Exhaustive tests aren't free especially when you know for your specific
project you're unlikely to reap much in the way of benefits long term and you
have limited resources.

For example, you're probably (depending on the project) not going to benefit
from exhaustive tests for an MVP when you're a solo developer, can keep most
of the codebase in your head, the impact of live bugs isn't high, the chance
of you building on the code later isn't high and you're likely to drastically
change the architecture later.

Are there any statistics on how many developers use TDD? There's a lot of "no"
answers in this thread but obviously that's anecdotal.

~~~
Jimpulse
Can you give a run down on formal verification?

~~~
seanwilson
So say you were writing a sorting algorithm and with unit tests (perhaps with
TDD) you wrote tests like:

\- sort([]) should produce []

\- sort([1]) should produce [1]

\- sort([1,3,2]) should produce [1,2,3]

\- sort([1,5,6,2,3,4]) should produce [1,2,3,4,5,6]

You would test a few values and edge cases until you were confident it works
for all lists. However, you can't be 100% sure that there isn't some list out
there like [5,5,5,5,1] that doesn't get sorted properly.

With formal verification, you can actually test it sorts for all possible
lists with a mathematical proof. You write a maths proof that shows a property
like the following holds:

\- For all lists X, the result of sort(X) will be a permutation of X that is
sorted.

For example, the proof could take the form of proof by induction where every
step in the proof is confirmed correct by the machine (see Coq, Isabelle for
more info).

When you were doing maths at school, you probably had exercises where you
tried a few examples to see if an equation you came up with might hold in
general, and then you would write a proof to show it worked for all possible
cases (e.g. with induction, by case analysis). The former is similar to unit
testing and the latter is similar to formal verification.

My point was there's a spectrum of how rigorous your tests are. People talk
about TDD like it's the holy grail sometimes but it's nowhere close to how
rigorous you can be. If you've tried some formal verification though, you'll
realise it's far too expensive for most projects. Likewise, TDD doesn't make
sense for all projects.

You have to pick your tradeoffs e.g. between time to market vs cost vs ease of
refactoring later vs how rigorous the testing is.

~~~
stingraycharles
Isn’t this formal verification more for algorithms than implementations? E.g.
if I have to use Coq to prove my code works, what use is that for my C
application? Porting the code to Coq seems to defeat the point of formal
verification; I'd be better off using some property-based testing method.

~~~
seanwilson
There are lots of options. You can write an implementation in Coq (it has its
own functional language you code in), prove it correct in Coq and then
"extract" (like transpiling) it to another language like OCaml for executing.
There are ways to map C code into Coq to prove it's correct as well. All of
this is machine checked. See the seL4 kernel to get more of a feel for it.

Property based testing sits somewhere between regular software testing with
unit tests and theorem proving on the spectrum. It's much less time intensive
to do but much less rigorous.
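
To make that concrete, a property-based test for the sorting example above might look roughly like this (a sketch assuming Python's hypothesis library; my_sort is a stand-in for your own implementation):

    from collections import Counter
    from hypothesis import given, strategies as st

    def my_sort(xs):
        return sorted(xs)  # stand-in for the implementation under test

    @given(st.lists(st.integers()))
    def test_sort_returns_sorted_permutation(xs):
        result = my_sort(xs)
        # Property 1: the output is ordered.
        assert all(a <= b for a, b in zip(result, result[1:]))
        # Property 2: the output is a permutation of the input.
        assert Counter(result) == Counter(xs)

The framework generates many random lists per run, which buys more coverage than hand-picked examples, but it still falls short of a proof.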

My point isn't that formal verification is better than everything. It has its
trade-offs, just like TDD.

------
kstenerud
For maintenance & extension, yes.

For new development, no.

I've found that unless I have a solid architecture already (such as in a
mature product), I end up massively modifying, or even entirely rewriting most
of my tests as development goes on, which is a waste of time. Or even worse, I
end up avoiding modifications to the architecture because I dread the amount
of test rewrites I'll have to do.

~~~
joshschreuder
At what point do you consider swapping the definition from new development to
extension?

------
C0d3r
I do. TDD gives me such a sense of confidence that, now that I'm used to it,
it's hard not to use.

> Can you describe the practical benefit?

Confidence that the code I'm writing does what it's supposed to. With the
added benefit that I can easily add more tests if I'm not confident about some
behaviors of the feature or easily add a test when a bug shows up.

> Do you happen to rewrite the tests completely while doing the
> implementation?

Not completely. It depends on how you write your tests. I'm not testing each
function individually, I'm testing behaviour, so unless there's a big
architectural change or we need to change something drastic, the tests need
minimal changes.

> When does this approach work for you and when did it fail you?

It works better on layered architectures, when you can easily just test the
business logic independently of the framework/glue code. It has failed me for
exploratory work, that's the one scenario where I just prefer to write code
and manually test it, since I don't know what I want it to do...yet

~~~
bdcravens
> Confidence that the code I'm writing does what it's supposed to. With the
> added benefit that I can easily add more tests if I'm not confident about
> some behaviors of the feature or easily add a test when a bug shows up.

Isn't this just the benefit of tests, not necessarily TDD?

~~~
hackerm0nkey
Yes, it's a welcome side effect of TDD'ing; TDD is more of a design tool. But
I have also experimented with writing tests before and after the
implementation. Code with tests written first always seemed to be just to the
point, and the practice gets you in the mindset of thinking ahead about your
edge cases and pinning them down.

------
sloopy543
Nope. I pretty much always find it to be counterproductive.

Most of programming happens in the exploration phase. That's the real problem
solving. You're just trying things and seeing if some api gives you what you
want or works as you might expect. You have no idea which functions to call or
what classes to use, etc.

If you write the tests before you do the exploration, you're saying you know
what you're going to find in that exploration.

Nobody knows the future. You can waste a crazy amount of time pretending you
do.

~~~
kragen
> You're just trying things and seeing if some api gives you what you want or
> works as you might expect.

I don't do most of my programming this way, because mostly I'm writing new
things, not gluing together existing APIs with a tiny amount of simple glue
code. But when I do need to characterize existing APIs, I find that unit tests
are a really helpful way to do it — especially in languages without REPLs, but
even in languages that do have REPLs, because the tests allow me to change
things (parameters, auth keys, versions of Somebody Else's Software) and
verify that the beliefs I based my code on are still valid.
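
A tiny characterization test along these lines might look like the following (a sketch; urljoin from the Python standard library is just an illustration of "Somebody Else's Software" whose behaviour I want to pin down):

    from urllib.parse import urljoin

    def test_urljoin_replaces_last_path_segment():
        # If a dependency or runtime upgrade changes this behaviour, the test
        # flags it before my code that relies on the belief breaks.
        assert urljoin("https://example.com/a/b", "c") == "https://example.com/a/c"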

~~~
Izkata
You appear to be talking about unit tests in general, while GP was talking
about test-driven-development (what the original question is about).

~~~
kragen
Good point, thanks.

------
codeulike
No. And also 'do you write a test for everything?'. Also No.

Tried it, ended up with too many tests. Quelle surprise. There is a
time/money/cognitive cost to writing all those tests, they bring some benefit
but usually not enough to cover the costs.

I'm also going off the 'architect everything into a million pieces to make
unit testing "easier"' approach.

I heard someone saying that if you write a test and it never fails, you've
wasted your time. I think that's quite an interesting viewpoint.

Reminded of:

"Do programmers have any specific superstitions?"

"Yeah, but we call them best practices."

[https://twitter.com/dbgrandi/status/508329463990734848](https://twitter.com/dbgrandi/status/508329463990734848)

~~~
shantly
Everyone's in the confessional booth here admitting dogmatic
test-first-test-everything's not so hot in practice, which is nice, but how
long until it becomes safe to answer with anything other than some variation
of "love testing, it's always great, I love tests, more is better" when asked
how you feel about testing in interviews?

~~~
codeulike
Oh of course you have to be enthusiastic about testing in interviews. Same as
with agile!

~~~
tombert
Actually, for the interview for my current job (a pretty big corporation) they
asked me about testing, and I flat-out said that I think testing has some
benefit in some cases, but I think the "100% CODE COVERAGE OMG TDD!!!"
mentality is actually counterproductive and makes code much harder to adapt.

I think they appreciated my honesty.

------
kelnos
Never done this, and don't consider it practical. Code and interfaces (even
internal ones) change rapidly for me when I'm starting a new project or adding
new major functionality to the point that the tests I'd write at the beginning
would become useless pretty quickly.

I also believe that 100% test coverage (or numbers close to that) just isn't a
useful goal, and is counterproductive from a maintenance perspective: test
code is still code that has to be maintained in and of itself, and if it tests
code that has a low risk of errors (or code where, if there are errors, those
errors will bubble up to be caught by other kinds of testing), the ROI is too
low for me.

After I've settled on interfaces and module boundaries, with a plausibly-
working implementation, I'll start writing tests for the code with the highest
risk of errors, and then work my way down as time permits. If I need to make
large changes in code that doesn't yet have test coverage, and I'm worried
about those changes causing regressions, I'll write some tests before making
the changes.

~~~
tluyben2
That is how I used to work; then I got into finance, and two things are
different from the work I did before (web/desktop/app - or, long enough ago,
there was no 'testing' in the 80s): the software I write now has to be
certified/audited to some extent, and I cannot change/repair production
software on the fly. That could cost a _lot_ of money for certain bugs. So now
I tend to write tests for everything and that helps a lot.

~~~
Hamuko
_> So now I tend to write tests for everything and that helps a lot._

Isn't that a separate issue from writing the tests before the implementation?

~~~
tluyben2
Yes, and I do both.

------
ollysb
99% of the code I write is test first. It makes my life easier - I always know
what to do next and it reduces the amount I need to keep in my head.

TDD done the way many developers do is a PITA though. When I write a test it
will start off life with zero mocking. I'll hit the db and live APIs. From
here I'm iterating on making it work. I only introduce mocking/factories
because it's harder work not to. I'll gradually add assertions as I get an
idea about what behaviour I want to pin down.

Done this way, using tests is just making life easier: you can start off
testing huge chunks of code if that's what you're sketching out, then add more
focused tests if that's a faster way to iterate on a particular piece. For me
the process is all about faster feedback and getting the computer to automate
as much of my workflow as possible.

edit: Kent Beck had a fantastic video series about working this way, I can
only find the first 10 mins now unfortunately but it gives you a taste,
[https://www.youtube.com/watch?v=VVSSga1Olt8](https://www.youtube.com/watch?v=VVSSga1Olt8).

------
chynkm
_I mean how many of you stick with this test driven development practice
consistently?_ I have been doing this for a while now. Practically, it saves
me a tonne of time and I am able to ship software confidently.

 _Can you describe the practical benefit?_ Say a change is executed on one
section of the (enterprise-level) application. You missed addressing an
associated section. This is easily identified, as your test will FAIL. When
the number of features increases, the complexity of the application increases.
Tests guide you. They help you to ship faster, as you don't need to manually
test the whole application again. In manual testing, there are chances of
missing out on a few cases. If it's automated, such cases are all executed.
Moreover, in TDD you only write code which is necessary to complete the
feature. Personally, tests act as a (guided) document for the application.

 _Do you happen to rewrite the tests completely while doing the
implementation?_ Yes, if the current tests don't align with the
requirements.

 _When does this approach work for you and when did it fail you?_ WORK - I
wouldn't call it a silver bullet, but I am really grateful/happy to be a
developer following TDD. As the codebase grows and new developers are
brought in, TESTS are one of the things which help me ship software. NOT
WORK - for a simple contact-only form (i.e. a fixed requirement having a
name, email, textarea field and an upload file option), I'd rather test it
manually than spend time writing tests.

~~~
Nursie
The benefits you describe seem to be achievable with tests written after code
as well.

We write extensive unit tests, but mostly after development work. The re-write
work you mention is then avoided.

~~~
Double_a_92
The benefit of TDD is that the code you end up with will actually be testable.
Just keeping in mind that you have to write a test for your code, changes how
you write it. As a bad example, imagine having a 1000 line function that just
does everything you needed for the new feature... Good luck testing that
afterwards.

~~~
lowercased
> Just keeping in mind that you have to write a test for your code, changes
> how you write it.

Which is often enough to ensure the code is testable.

Generally, I'll write some tests sort of alongside, or soon after (like, a
couple hours or a day) to not lose the initial thought process. Going back to
code days/weeks later and trying to 'test' it when it wasn't conceived of as
testable is tough.

------
jeremyjh
I've been writing software professionally for 20 years and for much of that time
I was very skeptical of testing. Even after I started writing tests it was
several more years before I saw the value of writing tests first. I've moved
to doing this more and more, especially when doing maintenance or bug fixes on
the back-end. I still struggle with writing valuable tests on the front-end,
apart from unit tests of easily extracted logic functions, or very basic
render tests that ensure a component doesn't blow up when mounted with valid
data.

If you write your test after making the code changes, it's easier to have a
bug _in your test_ that makes it pass for the wrong reasons. By writing the
test first, and progressively, you can be sure that each thing it asserts
fails properly if the new code you write doesn't do what is expected.
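
A small sketch of that failure mode (apply_discount is a hypothetical function; the stub is deliberately broken so the second test starts out red, which is the point of writing it first):

    def apply_discount(total, code):
        # Deliberately broken stub: the discount is never applied.
        return total

    def test_written_after_the_fact():
        # Bug in the test itself: the expected value is re-derived from the
        # code under test, so this passes even though the discount is missing.
        assert apply_discount(100, "SAVE10") == apply_discount(100, "SAVE10")

    def test_written_first():
        # Written before the implementation, this starts out red and only
        # goes green once the discount logic actually exists.
        assert apply_discount(100, "SAVE10") == 90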

Sometimes I do write the code first, and then I just stash it and run the new
tests to be sure the test fails correctly. Writing the test first is simply a
quicker way to accomplish this.

Like others have said, when there is a lot of new code (new architectural
concerns, etc.), it's not really worth it to write tests until you've sketched
things out well enough to know you aren't so likely to have major API changes.
Still, there is another benefit to writing the tests - or at least defining
the specs early on - which is that you are not as likely to forget testing a
particular invariant. If you've at least got a test file open and can write a
description of what the test will be, that can save you from missing an
invariant.

Think of tests as insurance that someone working on the code later (including
yourself, in the future) doesn't break an invariant because they do not _know
what they all are_. Your tests both state that the invariant is intentional
and necessary, and ensure it is not broken.

~~~
napsterbr
> If you write your test after making the code changes, it's easier to have a
> bug in your test that makes it pass for the wrong reasons.

I see this a lot. I don't write tests first, but I always make sure my changes
are properly covered by my assertions. For instance, when fixing a bug, I
comment/undo my fix and make sure my test fails.

One could say I'm doing twice the work (fix, write test, comment out fix), but
I find it easier than just writing the test first.

------
chrisguitarguy
I tend to write test cases that reproduce bugs first, then fix the bug. Other
than that, I don't stick too hard to test driven development. I did for a
while, but you start to get a sense of the sort of design pressure tests
create and end up building more modular, testable code from the get-go anyway.

> Can you describe the practical benefit?

For a test case that reproduces a bug, you might find the bug manually. Getting
that manual process into a test case is often a chore, but in doing so you'll
better understand how the system with the bug failed. Did it call
collaborators wrong? Did something unexpected get returned? Etc. In those
cases, I think the benefit really is a better understanding of the system.

> Do you happen to rewrite the tests completely while doing the
> implementation?

A TDD practitioner will probably tell you that you're doing it wrong if you do
this. You write the minimum viable test that fails first. It might be
something simple like "does the function/method exist". You add to your tests
just in time to make the change in the real code.

------
tchaffee
It's a tool like any other and I reach for it when tests will help me write
code faster and at a higher level of quality. Which is pretty often with new
code.

Also always before a refactor. Document all the existing states and input and
output and I can refactor ruthlessly, seeing as soon as I break something.

Tests are also great documentation for how I intend my api to be used. A bunch
of examples with input, output, and all the possible exceptions. The first
thing I look for when trying to understand a code base are the tests.

When do I not write tests? When I'm in the flow and want to continue cranking
out code, especially code that is rapidly changing because as I write I'm re-
thinking the solution. Tests will come shortly after I am happy with a first
prototype in this case. And they will often inform me what I got wrong in
terms of how I would like my api consumed.

When did it fail me? There are cases when it's really difficult to write
tests. For example, Jest uses jsdom, which as an _emulator_ has limitations.
Sometimes it is worth it to work around these limitations, sometimes not.

Sometimes a dependency is very difficult to mock. And so it's not worth the
effort to write the test.

Tests add value, but like anything that adds value, there is a cost and you
have to sometimes take a step back and decide how much value you'll get and
when the costs have exceeded the value and it's time to abandon that tool.

------
AYBABTME
In new code, I'll usually write high level black box tests once enough code is
in place to start doing something useful. I rarely write unit tests except for
behavior that is prone to be badly implemented/refactored, or for stuff that's
pretty well isolated and that I know I won't touch for a while.

Then as the project evolves, I start adding more high level tests to avoid
regressions.

I prefer high level testing of products, they're more useful since you can use
them for monitoring as well, if you do it right. I work with typed languages
so there's little value in unit tests in most cases.

Sometimes I'll write a test suite "first", but then again only once I have at
least written up a client to exercise the system. Which implies I probably
decided to stabilize the API at that point.

Like others have said, tests often turn into a huge burden when you're trying
to iterate on designs, so early tests tend to cause worse designs in my
opinion, since they discourage architectural iterations.

------
quantified
Almost never. I’m roughing things out first, or iterating the APIs. When the
functions, data and interactions seem to stabilize, then I’ll start to put
tests in.

Once, I started with tests, but I had to rip up a lot along the way.

It is helpful to ensure testability early on. It might be easier for some devs
to figure it out by actually coding up some tests early.

I won’t argue against anyone who is actually productive using hard-core TDD.

~~~
MichaelMoser123
I have a similar experience, but now I am forced to use Ginkgo, and the tool
doesn't make sense without TDD and BDD - behavior driven development, so some
people must be using it.

~~~
Huggernaut
Hi, I sort of maintain Ginkgo and Gomega when I have time (not much these
days), having picked it up during my years at Pivotal, where it was originally
developed. BDD/TDD is practiced extensively (as in, 100% of the time) at
Pivotal. I'd be happy to talk to you more about the process or tools if you
would like. Good luck!

------
nscalf
I’ve always been highly skeptical of this approach. Often what you’re doing is
so clear cut that tests are entirely unneeded. In fact, outside of the most
complicated cases, I don’t even use unit tests. I have black box testing that
I use to check for regression. My biggest reasoning for this is that test code
is effectively another code base to maintain, and as soon as you start
changing something it’s legacy code to maintain.

All that being said, I haven’t spent much time on teams with a particularly
large group of people working in one project. I think the most has been 4 in
one service. The more people working in a code base, the more utility you get
from TDD, I believe. It’s just tough to have a solid grasp on everything when
it changes rapidly.

------
notadoctor_ssh
I recently started doing this. My project involved using three different
services, where one of them was internal. I only had API documentation for
these services, and because of many reasons there was a delay in obtaining the
API keys required, so I was stuck on testing my code. That's when I decided to
write unit tests and mock these services wherever I was using them, and
started testing my code. There were zero bugs in these integrations later.

While doing this I also found one more benefit, at least for my use case. The
backend for user login was simple when I started, but it started growing in a
few weeks. Writing test cases saved me from manually logging in with each use
case, testing some functionality, then logging out and repeating with other
use cases.

Not sure if it is a practical benefit or not, but writing test cases initially
also helped me rewrite the way I was configuring Redis for a custom module so
that the module can be tested better.

My only issue is that it takes time, and selling this to higher-ups was kind
of difficult.

~~~
MichaelMoser123
Thanks, an interesting perspective. Do you plan to go with this approach for
your other projects as well?

~~~
de_watcher
More fun is when you get an API documentation and no access to the actual
system. You develop the whole thing and then fly out to their site, you've got
3 days to get your software and hardware certified by them, and the
certification costs a fortune.

------
gorgoiler
TDD works best at the interface where there is the lowest likelihood of API
churn.

Writing a test for something like an MP3 ID tag parser is a good case for TDD
with unit tests. It's pretty clear what the interface is, you just need to get
the right answer, and you end up with a true unit test.
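
For instance, a test-first sketch for such a parser could start like this (parse_id3v1_title is a hypothetical function; the stub exists only so the file runs, and the test stays red until the parsing is written):

    def parse_id3v1_title(tag: bytes) -> str:
        raise NotImplementedError  # red: fails until the parser is written

    def test_parse_id3v1_title():
        # ID3v1 is a fixed 128-byte trailer: b"TAG", then a 30-byte title.
        tag = b"TAG" + b"My Song".ljust(30, b"\x00") + b"\x00" * 95
        assert parse_id3v1_title(tag) == "My Song"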

Doing TDD with a large new greenfield project is harder. Unless you have a
track record of getting architecture right first time, individual tests will
have to be rewritten as you rethink your model, which wastes a lot of energy.
Far better is to test right at the outermost boundary of your code that isn’t
in-question: for example a command line invocation of your tool doing some
real world example. These typically turn into integration or end to end tests.

I tend to then let unit tests appear in stable (stable as in the design has
settled) code as they are needed. For example, a bug report would result in a
unit test to exhibit the bug and to put a fixed point on neighboring code, and
then in the same commit you can fix the bug. Now you have a unit test too.

One important point to add is that while I reserve the right to claim to be
quite good at some parts of my career, I’m kind of a mediocre software
engineer, and I think I’m ok with that. The times in my career when I’ve
really gotten myself in a bind have been where I’ve assumed my initial design
was the right one and built my architecture up piece by piece — with included
unit tests — only to find that once I’d built and climbed my monumental
construction, I realized all I really needed was a quick wooden ladder to get
up to the next level which itself is loaded with all kinds of new problems I
hadn’t even thought of.

If you solve each level of a problem by building a beautiful polished work of
art at each stage you risk having to throw it away if you made a wrong
assumption, and at best, waste a lot of time.

Don’t overthink things. Get something working first. If you need a test to
drive that process so be it, but that doesn’t mean it needs to be anything
fancy or commit worthy.

------
koliber
I do sometimes. It depends. I want to do more of it.

Here are cases where I've genuinely found it valuable and enjoyable to write
tests ahead of time:

Some things are difficult to test. I've had things that involve a ton of
setup, or a configuration with an external system. With tests you can automate
that setup and run through a scenario. You can mock external systems. This
gives you a way of setting up a scaffold into which your implementation will
fall.

Things that involve time are also great for setting up test cases. Imagine
some functionality where you do something, and need 3 weeks to pass before
something else happens. Testing that by hand is effectively impossible. With
test tools, you can fake the passing of time and have confirmation that your
code is working well.
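
As a rough sketch of faking time in Python (assuming the freezegun library; is_expired is a hypothetical function with a three-week rule):

    import datetime
    from freezegun import freeze_time

    def is_expired(created_at, now=None):
        now = now or datetime.datetime.utcnow()
        return now - created_at > datetime.timedelta(weeks=3)

    def test_expires_after_three_weeks():
        with freeze_time("2024-01-01") as frozen:
            created = datetime.datetime.utcnow()
            assert not is_expired(created)
            frozen.move_to("2024-01-23")  # jump three weeks and a day ahead
            assert is_expired(created)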

Think about when you are writing some functionality that requires some
involved logic, and UIs. It makes sense to implement the logic first. But how
do you even invoke it without a UI? Write a test case! You can debug it
through test runs without needing to invest time in writing a UI.

Bugs! Something esoteric breaks. I often write a test case named
test_this_and_that__jira2987 where 2987 is the ticket number where the issue
came up. I write up a test case replicating the bug with only the essential
conditions. Fixing it is a lot more enjoyable than trying to walk through the
replication script by hand. Additionally, it results in a good regression test
that makes sure my team does not reintroduce the bug again.
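
A minimal sketch of the naming idea (all names hypothetical; the ticket number is just the placeholder from the paragraph above):

    def normalize_email(raw):
        # The fix: strip whitespace before lowercasing.
        return raw.strip().lower()

    def test_trailing_whitespace_creates_duplicate_user__jira2987():
        # Replicates only the essential condition from the original report:
        # an address pasted with a trailing space was treated as a new user.
        assert normalize_email("Alice@Example.com ") == "alice@example.com"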

------
e12e
I don't write as many tests as I'd like in general (adding tests to a legacy
project that has none is a struggle - often worth it, but needs to be
prioritized against other tasks).

I once had to write an integration for a "soap" web service that was...
Special. Apparently it was implemented in php (judging by the url), by hand
(judging by the... "special" features) - and likely born as a refactor of a
back-end for a flash app (judging by the fact that they had a flash app).

By trial and error (and with help of the extensive, if not entirely accurate,
documentation) via soapui and curl - I discovered that it expected the soap
xml message inside a comment inside an xml soap message (which is interesting
as there are some characters that are illegal inside xml comments... And
apparently they _did parse_ these nested messages with a real xml library, I'm
guessing libxml.) I also discovered that the API was sensitive to the _order_
of elements in the inner xml message.

Thankfully I managed to conjure up some valid post bodies (along with the
crazy replies the service provided, needed to test an entire "dialog") - and
could test against these - as I had to implement half of a broken soap library
on top of an xml library and raw post/get due to the quirks.

At any rate, I don't think I'd ever have got that done/working if I couldn't
do tests first.

Obviously the proper fix would've been to send a tactical team to hunt down
the original developers and just say no to the client...

------
jandrewrogers
I write approximately as much test code as application code, but it never
makes sense to write tests first.

I frequently redesign/rewrite an implementation a few times before committing
it, often changing observable behaviors, all of which will change what the
tests need to look like to ensure proper coverage. Some code is intrinsically
and unavoidably non-modular. Tests are dependent code that need to be scoped
to the implementation details. Unless you are writing simple CRUD apps, the
design of the implementation is unlikely to be sufficiently well specified
upfront to write tests before the code itself. Writing detailed tests first
would be making assumptions that aren't actually true in many cases.

I also write thorough tests for private interfaces, not just public
interfaces. This is often the only practical way to get proper test coverage
of all observable behaviors, and requires far less test code for equivalent
coverage. I don't grok the mindset that only public interfaces should be
tested if the objective is code quality.

When practical, I also write fuzzers or exhaustive tests for code components
as part of the test writing process. You don't run these all the time since
they are very slow, but they are useful for qualifying a release.

------
devgoth
One of my teammates has an AWESOME response to testing and here it is:

"The point of writing tests is to know when you are done. You don't have to
write failing tests first if you are just trying to figure out how to
implement something or even fix something. You must write a failing test
before you change prod code. How do you square this seeming circle?

\- Figure out what you need to do

\- Write tests

\- Take your code out and add back in in chunks until your tests pass

\- Got code left over? You need to write more tests or you have code you don't
need

Without the tests, you cannot know when you are done. The point of the failing
test is that it is the proof that your code does not do what you need it to
do.

Writing tests doesn't have to slow down the software development process.
There are patterns for various domains of code (e.g., controller layer,
service layer, DAO layer). To do testing efficiently, you need to learn the
patterns. Then when you need to write a new test, you identify and follow the
pattern.

You also need to use the proper tools. If you're using Java or Kotlin, then
you MUST use PIT ([http://pitest.org](http://pitest.org)). It is a game
changer for showing you what parts of your code are untested."

\- Steven, Senior Software Engineer on our team

~~~
extra_rice
When people say writing tests first slows you down, they are usually only
looking at the upfront costs. They do not factor in the costs of maintenance,
and having to fix and/or extend previously written code.

Send my best regards to Steven. I share the same views as he does.

------
Timberwolf
I tend to swap between two modes.

If I'm working with well-known tools and a problem I understand reasonably
well, I'll approach it in ultra-strict test-first style, where my "red" is,
"code won't compile because I haven't even defined the thing I'm trying to
test yet". It might sound a step too far but I find starting by thinking about
how consumers will call and interact with this thing results in code that's
easier to integrate.

However, if I'm using tools I don't know well, or a problem I'm not sure
about, I much prefer the "iterate and stabilise" approach. For me this
involves diving in, writing something scrappy to figure out how things work,
deciding what I don't like about what I did, then starting again 2 or 3 times
until I feel like I understand the tools and the problem. The first version
will often be a mess of printf debugging and hard-coded everything, but after
a couple of iterations I'm usually getting to a clean and testable core
approach. At that point I'll get a sensible set of tests together for what
I've created and flip back to the first mode.

------
Diederich
I will usually write one or a few tests that exercise ideal 'happy paths'
before starting proper implementation, assuming it can be done fairly quickly.
I don't hold on to these very tightly; they will often change as things go
forward.

Once I have those basic tests passing, I will often write a couple more tests
for less common but still important execution paths. It's ok if these take a
little longer, but only a little.

Beyond the obvious 'test driven' benefits, I find that, especially for the
first round, writing those tests helps me solidify what I'm trying to
accomplish.

This is often useful even in cases where I go in feeling quite confident about
the approach, but there are some blind spots that are revealed even with the
first level, most simple tests.

I find the basic complaints that others have posted here about pre-writing
tests largely valid. "Over-writing" tests too early on is, for me, often a
waste of time. It works best when the very early tests can be written quickly
and simply.

And if they can't be, then I'll frequently take a step back and see if I'm
coming at the problem from a poor direction.

------
ellimilial
Yes, most of the time for non-exploratory code that is not deeply
ingrained with an external framework.

When starting a new module / class I put a skeleton first, to establish an
initial interface. Then I change it as I find, while writing tests, how it can
be improved.

When dealing with bugs - red / green is incredibly helpful with pinpointing
the conditions and pointing exactly where the fault lies.

When introducing new functionality I do most of the development as tests, only
double-checking that it integrates once before committing.

Going test first pushes your code towards statelessness and immutability,
nudging towards small, static blocks. As most of my work is with data, I find
it to be a considerable advantage.

It provides little advantage if you already rely heavily on a well established
framework that you need to hook into (e.g. testing if your repos provide the right
data in Spring or if Spark MRs your data correctly).

I tend to change/refactor a lot to minimise the maintenance effort in the long
run. I would spend most of the time testing by hand after each iteration if
not for the suite I could trust at least to some extent.

------
dannypgh
Sometimes.

If I'm writing something where I know what the API to use it should be and the
requirements are understood, yes, I'll start with tests first. This is often
the case for things like utility classes: my motivation in writing the class
is to scratch an itch of "wouldn't it be nice if I had X" while working on
something unrelated. I know what X would look like in use because my ability
to imagine and describe it is why I want it.

There are times, however, where I'm not quite sure what I want or how I want
to do it, and I start by reading existing code (code that I'm either going to
modify or integrate with) and if something jumps out at me as a plausible
design I may jump in to evaluate how I'd feel about the approach first-hand.

In short, the more doubts or uncertainty I have about the approach to a
problem (in the case of libraries, this means the API), the longer I'll defer
writing tests.

------
thorwasdfasdf
I know this will probably get downvoted. It's impossible to predict how your
app will fail ahead of time. So test driven development (for the most part) is
a waste of time. Every test you write will either continue to pass forever,
not providing useful information to you, or will need to be updated when new
features are added to the software, making them costly to have in place.
Meanwhile they reveal very few defects that wouldn't easily be caught with a
basic smoke test you need to do anyway.

Of course, there are always exceptions. If you have software that is highly
complex but the outputs are very simple and easy to measure then it might
actually be a good idea.

------
Elrac
Almost never.

With the kind of software I mostly write these days, I'm fortunate to be able
to incrementally develop my code and test it under real-world conditions or a
subset thereof.

So my approach is exploratory coding -- I start with minimum workable
implementations, make sure they work as needed, and then add more
functionality, with further testing at each step.

The upside is that I don't have to write "strange" code to accommodate
testing. The downside is that I'm forced to plan code growth with steps that
take me from one testable partial-product to the next. A more serious
downside, one I'm very aware of, is that not every project is amenable to this
approach.

~~~
hackerm0nkey
> With the kind of software I mostly write these days, I'm fortunate to be
> able to incrementally develop my code and test it under real-world
> conditions or a subset thereof.

What kind of software do you write, if you don't mind me asking? And are your
"real-world conditions" tests automated?

> The upside is that I don't have to write "strange" code to accommodate
> testing.

Can you elaborate more on what you mean by "strange"?

~~~
Elrac
For the past 2 years, most of my work has been in porting some fairly simple
legacy message forwarding and conversion programs from C to Java. So on our
test servers I can swap out the C programs for drop-in replacements in Java
and watch them (via log files) working -- or not. If my programs fail I can
either observe crashes and stack traces, or the message-receiving programs will
crash or loudly object to bad data from me. Usually one day's worth of traffic
will exercise enough of my program's logic that failure to fail for a day
constitutes a successful end-to-end test.

Yes, this is kid stuff. My current work is about as sophisticated as typical
undergrad Computer Science projects. We can't all be doing rocket science!

I used to write automated test setups for my programs, providing streams of
pre-canned messages and such. That worked out OK. I suppose it's great to have
test suites to avoid regression and such, but I ended up regretting all the
effort I sunk into testing. So far it's been my experience that I would have
to sink a lot of time into creating a test suite before it could exercise my
programs as thoroughly as simple exposure to real-world message traffic does.

I hope my attempt to be brief didn't come across as derogatory when I wrote
"strange." Here's an example: I like to make a lot of my fields and methods
private. It's handy that my IDE warns me when fields and methods aren't used,
or when final fields aren't initialized. Obviously, for "classic" unit tests
I'd have to at least expose my methods at the package level to call them from
out of class. Another example: my apps rely on a fair bit of configuration
data and some embarrassingly tight coupling between my classes. A JUnit-
friendly program would call for a lot of mockups, as well as a lot more coding
to interfaces rather than concrete classes, probably a lot more reliance on
design patterns. My coding style for these projects yields a small number of
compact classes but is very hostile to unit testing.

To be clear: For many other projects, your mileage may vary dramatically. I've
successfully done TDD in other projects where that made a lot more sense.

------
ellius
My take on tests is that they serve two purposes:

1\. As a security system for your code

2\. As a tool for thought, prompting the application of inverse problem
solving

Both of these have costs and benefits. If you consider the metaphor of the
security system, you could secure your house at every entry point, purchase
motion sensors, heat detectors, a body guard, etc. etc. If you're Jeff Bezos
maybe all of that makes sense. If you're a normal person it's prohibitively
expensive and probably provides value that is nowhere near proportional to its
cost. You also have to be aware that there is no such thing as perfect
security. You could buy the most expensive system on earth and something still
might get through. So security is about risk, probability, tradeoffs, and
intelligent investment. It's never going to be perfect.

Inverse thinking is an incredibly powerful tool for problem solving, but it's
not always necessary or useful. I do think if you haven't practiced something
like TDD, it's great to start by over applying it so that you can get in the
habit, see the benefit, and then slowly scale back as you better understand
the method's pros and cons.

At the end of the day, any practice or discipline should be tied to values. If
you don't know WHY you're doing it and what you're getting out of it, then why
are you doing it at all? Maybe as an exploratory exercise, but beyond the
learning phase you should only do it if you understand why you're doing it.

------
fergie
The answer to this question depends on the type of programming you do.

1)

Working as a part of an enterprise team on a big lump of TypeScript and React?
Then you probably don't write tests before you code because a) TypeScript
catches all the bugs amirite? and b) Your test runner is probably too hairy to
craft any tests by hand, and c) You are probably autogenerating tests _after_
writing your code, based on the output of the code you just wrote, code which
may or may not actually work as intended.

2)

Working on an npm module that has pretty tightly defined behaviour and the
potential to attract random updates from random people? Then you _need_ to
write at least some tests ahead of time because it is the only practical way
to enshrine that module's behavior. You need a way to ensure that small
changes/improvements under the hood don't alter the module's API. This means
less work for you in the long run, and since you are a sensible human being
and therefore lazy, you will write the tests before you write the code.

------
chvid
In general I write unit tests as I implement. I would not write them unless I
was convinced they were immediately beneficial to my productivity. Some parts
of my code are not covered. In the beginning the test cases just output the
result to standard out; in the end I use assertions, or disable them if they
are dependent on external systems.

------
gtyras2mrs
I don't. My approach probably isn't ideal. But I find it really hard to start
with tests for new solutions.

With a basic understanding of the problem and the expected solution, I start
off directly with prototype code - basically creating a barely working
prototype to explore possible solutions.

When I'm convinced that I'm on the right track (design-wise), I start adding
more functionality.

When I'm at a stage where the solution is passable - I then start writing
tests for it. I spend some time working through tests and identifying issues
with my solution.

Then I fix the solution. And clean it up.

At this point my test cases should cover most (if not all) of my problem
statement, edge cases and expected failures.

When it comes to maintaining the solution, I do start with test cases though.
Usually just to ensure that my understanding of the bug or issues is correct.
With the expected failure tests, I then work on the fix. And write up any
other test cases needed to cover the work.

------
Leace
> I mean how many of you stick with this test driven development practice
> consistently?

I do for some projects. For example currently I'm working on a project that
has a high test coverage and most bugs and enhancements start first as a test
and then they're implemented in code. TDD makes sense when the test code is
simpler to write than the implementation code.

> Can you describe the practical benefit?

It may take some time to write the initial tests, but as I'm working with some
legacy enterprise tech, serializing all inputs and testing on that is a lot
faster than testing and re-testing everything on real integration servers
every commit.

Tests provide you with a safety net when you do refactors or new features so
that the existing stuff is not broken.

> Do you happen to rewrite the tests completely while doing the
> implementation?

Yeah, I do. There are two forces at play. One of them pushes towards tests
that cover more stuff in a black-box manner - they won't be broken as often
when you're switching code inside your black box. On the other hand, if you've
got finer-grained tests, when they break it's obvious which part of the code
is failing.

> When does this approach work for you and when did it fail you?

It works for projects that are hard to test any other way (we've got QA but I want
to give them stuff that's unlikely to have bugs) and for keeping regressions
at bay. It did fail me if I didn't have necessary coverage (not all cases were
tested and the bug was in the untested branch).

I also wouldn't bother to test (TDD) scratch work or stuff that's clearly not
on the critical path (helper tools, etc.), but for enterprise projects I tend
to cover as much as possible (that sometimes involves writing elaborate test
suites), as working on the same bugs over and over is just too much for my
business.

------
strangattractor
Never - I always attempt to make an end-to-end implementation. Once the code
works I go back and examine what I did and try to make it simpler. This
often requires refactoring and changing APIs etc. Writing a bunch of tests
would simply add inertia to that process. After I am satisfied with the code
or get bug reports I will add tests. Test code is code and often has bugs.
Every line of code I write is a liability so I try to limit it to necessary
things. I have never seen the tests-first approach work. Once all the test
code is written, people become reluctant to change things because they have to
change both the code and the test code, doubling the work/time. One just ends
up with well-tested but not so good code. If you are M$ and can afford to pair
program it might be more feasible.

------
svavs
Nope. Our individual performance is judged by how many points (tasks) we
complete. Since this has been a metric, unit testing is always an
afterthought. During code checks / reviews, some developers will ask others to
add unit tests, which will be hastily written.

------
roland35
I mainly write embedded software and I tend to write my tests using Robot
Framework. I generally start by writing the new feature since I need to probe
around how it will work on the hardware, but generally write the test before
the feature is actually finished. This is because the test itself will help me
recreate the conditions I need the hardware to be in during debugging! One
example is sending a specific sequence of serial commands over the CAN bus, or
hitting a sequence of buttons on the user interface.

I am still trying to figure out the best way to do unit testing with embedded
C (working with Unity right now), but with Python development I try to write
unit tests only for more tricky code.

------
jlangr
Sure, still, 20 years on. Not dogmatically so to reach some coverage goal, but
anything with real logic, yes. I don't test-drive React components, for
example (instead the goal is to get all real logic out of them).

Benefits--not pushing logic defects gives me more time to invest in other
important stuff; I end up with tests that document all the intended behaviors
of the stuff I'm working on (saves gobs of time otherwise blown trying to
understand what code does so I can change it safely); I'm able to give a lot
of attention to ensuring the design stays clean. Plus, it's enjoyable most of
the time.

"They incur technical debt and bugs at the same rate as other code." Not at
all true.

------
karmajunkie
There are two basic kinds of code I write: the kind where I know what I'm doing
before I start, and the kind where I don't.

For the latter, it's when I'm exploring a codebase or an API, writing a spike
script just to see how things work and what kinds of values get returned, for
example. Many times I'll turn the spike into a test of some sort, but a lot of
times I just toss it when I'm through.

For the former, yes, I generally write tests before implementation, though I'm
not religious about it. I'm just lazy. I'm going to have to test the code I
write somehow, whether that's by checking the output in a repl or looking at
(for example) its output on the command line. Why you wouldn't want to capture
that effort into a reproducible form is beyond me. (And if you're one of those
people who just writes up something and throws it into production, I really
hope we don't end up on a team together!) I generally just write the test with
the code I wish I had, make it pass somehow, rinse, repeat. It's not rocket
science. It's just a good scaffold to write my implementation against.

That said, I don't usually keep every test I write. As others have noted, that
code becomes a fixed point that you have to deal with in order to evolve your
code, and over time it can become counterproductive to keep fixing it when you
change your implementation slightly. So the stuff I keep generally has one of
three qualities:

\- it documents and tests a public interface for the API, a contract it makes
with client code

\- it tests edge cases and/or bugs that represent regressions

\- it tests particularly pathological implementations or algorithms that are
highly sensitive to change.

Honestly, I feel like people who get religious about TDD are doing it wrong,
but people who _never_ do TDD (i.e. writing a test first) are also doing it
wrong in a different way. There's nothing wrong with test-soon per se, but if
you're never dipping into documenting your intended use with a test before you
start working on the implementation itself, you're really just coding
reactively instead of planning it, and it would not surprise me to hear lots
of complaints in your office about hard-to-use APIs.

------
thrower123
No. Most of the time I don't really have any spec to base hypothetical tests
on, and I have to be exploring what is even possible as I go. When I'm
throwing things at a third-party API or service to see what sticks, writing
tests first is wasteful.

If I'm doing something that is pretty well defined and essentially functional,
where I know the inputs and outputs, I'll sometimes do the TDD loop. It can be
good for smoking out edge cases; although unless you start drifting into brute
force fuzzing or property-based testing you still have to have the intuitions
about what kind of tests would highlight bugs.

------
conradfr
I can't even code like that except maybe for simple tasks in a mature project.

I'm more a "Make It Work, Make It Beautiful, Make It Fast" person and don't
see it working by writing unit test first.

~~~
ptx
I think the value of TDD's red–green–refactor cycle is in making sure that
after you "Make It Beautiful" it still works, and again after you "Make It
Fast" it still works. Otherwise, if you don't automate the test first, you end
up testing manually three times.

~~~
hackerm0nkey
exactly, lots of people are missing the point.

------
YorickPeterse
I usually start with a REPL, then play around with that for a while. For
languages without a REPL I usually just create a one time file in the project
and play around with that.

Once I am a bit more comfortable with the code and have a better understanding
of what I need, I will start writing some tests. I usually don't write too
many early on, this way I don't have to go back halfway through development
and change all my tests. Only when I'm confident enough with the code do I
start writing extensive tests and try to cover all cases.

------
chynkm
I look up to many people in the industry who have achieved big milestones. One
of them is Chris. His talk and post -
[https://quii.dev/The_Tests_Talk](https://quii.dev/The_Tests_Talk) - will
provide you with a tonne of information regarding TDD. If you are into Go, you
should definitely check out Chris's book - [https://github.com/quii/learn-go-
with-tests](https://github.com/quii/learn-go-with-tests).

------
m00dy
This is a great hoax in the software industry. I generally use it as a
conversation starter at company lunches.

------
why-el
Depends. If it's a bug or a new feature built on top of code that is already
tested, then I will write the tests because they fit nicely into already
existing testing infrastructure. The test therefore "proves" the bug, and the
fix eliminates it.

For outright new code (think new objects, new API, and so on), I tend not to
write the tests first because they become a cognitive load that affects my
early design choices. In other words, I am now writing code to make the tests
pass, and have to exert effort not to do that.

------
he0001
I resisted TDD at first and then became a firm believer and then dropped it to
then turn around again to practice it again.

The major thing is that the tests become a boundary of sorts which enables
you to do a lot more than if you didn't have them. It can also be done horribly
wrong, which was the reason I stopped using it.

I see it as a tool to see how good your code and abstractions are. Large tests
=> leaky abstraction. Many details (Mocks/stubs) in the tests => leaky
abstractions.

Also it reminded me that sometimes I'm trying to satisfy the language instead
of just solving the problem. As soon as you are trying to satisfy your language,
code style/principles or architecture, you are trying to solve something that
has nothing to do with the problem, which just causes the code to be designed
wrong, or tells me I should move it somewhere else. Though if I need to tweak
the code to make it more testable, I always do that.

I also have a rule: never test data, only test functionality. This has worked
very well over the years, creating pretty clean code and clean tests and, I
believe, fewer bugs, though it's hard to be sure. My perception is that during
the periods when I switched between the practices, the TDD code had fewer bugs
and I could confirm them faster than the code which had no tests. Also, the
code produced with TDD was a lot easier to write new tests for, whereas the
non-TDD code was really hard to write tests for, if I wanted to confirm a bug
or a feature for example.
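
For illustration, one way to read "test functionality, not data" (a sketch, my gloss only): instead of pinning an exact output blob to a fixture, assert a behaviour you actually rely on, such as a round trip. A minimal Python example:

    import json

    # Testing data would pin the exact serialized string, which breaks whenever
    # the format gains a field. Testing functionality asserts the behaviour we
    # actually rely on: a value survives a round trip through serialization.
    def test_profile_survives_a_round_trip():
        profile = {"name": "Ada", "plan": "pro"}
        assert json.loads(json.dumps(profile)) == profile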

------
SkyPuncher
I follow a practice I like to call "test minded development".

I write tests at the earliest point I feel appropriate - but rarely before I
actually write code. I tend to work on greenfield projects, so writing tests
before I write code rarely makes sense.

IMO, TDD only makes sense if you already know what you're going to write. This
makes a lot of sense if you're working on a brownfield project or following
predictable patterns (for example, adding a method to a Rails controller).

If I'm doing actual new development, as I code, I tend to write a lot of
pending tests describing the situations I need to test. However, I don't
typically implement those tests until after.
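
For what it's worth, a minimal sketch of what those pending tests can look like with pytest (the scenarios are invented for the example); the skip markers act as a to-do list until the code settles:

    import pytest

    @pytest.mark.skip(reason="pending: decide how malformed rows are reported")
    def test_import_rejects_malformed_rows():
        ...

    @pytest.mark.skip(reason="pending: waiting on the final field list")
    def test_import_maps_legacy_column_names():
        ...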

One of the biggest factors for me is so much of my code deals with handling
some degree of unknown - what the client will need, exactly how an API works,
how errors/invalidations are handled, unexpected refactoring, etc.

In this case, it doesn't make sense to create tests before I write the
underlying code. Most tests will have mocks/stubs/simulations that make
assumptions about how the code works. At that point, a pre-written test is no
better than code, since it's just as likely to contain errors.

I'd much rather do real-time debugging/interacting while developing, then
capture the exact interactions of outside systems.

------
hackerm0nkey
> Do you write tests before the implementation?

Absolutely, day in day out. New code and bug fixing alike. It's the proof that
I need to know that whatever code I am doing is an exact fit to the problem
it's trying to solve.

> Can you describe the practical benefit?

Testing first helps me clarify my intentions, then implement a realisation of
those intentions through code. Testable code has the side effect of being well
modularized, free from hidden dependencies, and SOLID.

And it's also about making sure that whatever code you write, there's a
justification for it and a proof that it works, could be seen more like a
harness protecting you from writing things that you don't need, YAGNI.

> Do you happen to rewrite the tests completely while doing the implementation?

I follow the classic TDD cycle, RED/GREEN/REFACTOR, and I could not be any
happier.

> When does this approach work for you and when did it fail you?

The only exception to the above is exploratory code, i.e. the times where I
don't know how to solve a given problem. I like to hack a few things together,
poke the application and see what happens due to what I have changed.

Having verified and learned more about how to solve that problem, I delete all
my code and start afresh but this time TDD the problem/solution equipped with
what I have learned from my exploratory cycle.

If you are in doubt or need further information to help you make your own
decision about the matter, I cannot recommend enough the classic TDD by
Example by Kent Beck as a starting point.

For a more real-world view with an eye on the benefits of adopting TDD, have a
look at Growing Object Oriented Software Guided by Tests, aka the Goose book.

------
mikekchar
Test Driven Development is not actually synonymous with Test First
Development. Test First is a method that you can use to do TDD. It's quite a
good method, but it's not the only one.

To answer your question properly, you need to back up a bit. What is the
benefit of TDD? If your answer is "To have a series of regression tests for my
code", then I think the conclusion you will come to is that Test First is
almost never the right way to go. The reason is that it's very, very hard to
imagine the tests that you need to have for your black box code when you
haven't already written it.

You might be wondering why on earth you would want to do TDD if not so that
you can have a series of regression tests for your code. Remember that in XP
there are _two_ kinds of testing: "unit testing" and "acceptance testing". An
acceptance test is a test that the code meets your requirements. In other
words, it's a black-box regression test. You are very likely to do
acceptance testing after the fact, because it is easier (caveat: if you are
doing "outside-in", usually you will write an acceptance test to get you
started, but after you have fleshed in your requirements, you normally go back
and write more acceptance tests).

If acceptance tests are regression tests, why do we need unit tests? A common
view of "unit tests" is to say that you want to test a "unit" in isolation.
Often you take a class (or the equivalent) and test the interface making sure
it works. Frequently you will fake/mock the collaborators. It makes sense that
this is what you should do because of the words "unit" and "test".

However, _originally_ this was not really the case as far as I can tell (I was
around at the time, though not directly interacting with the principal XP guys
-- mostly trying to replicate what they were doing in my own XP projects. This
all to say that I feel confident about what I'm saying, but you shouldn't take
it as gospel). Really right from the beginning there were a lot of people who
disliked both the words "unit" and "test" because it didn't match what they
were doing.

Let's start with "test". Instead of testing that functionality worked, what
you were actually doing is running the code and documenting what it did --
without any regard for whether or not it fit the overall requirements. One of
the reasons for this is that you don't want to start with all of the data that
you need and then start to write code that produces that data. Instead you
start with a very small piece of that data and write code that produces that
data. Then you modify the data and update the code to match that data. It is
less about "test first" as it is about decomposing the problem into small
pieces and observing the results of your development. It does not matter if
you write the test first or second, but it's convenient to write the test
first because before you can write the code, you need to know what change you
want the code to enact.
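
As a rough illustration of that decomposition (a sketch with made-up names, not anything prescribed above): start from the smallest piece of data and let each new expectation force the code to grow.

    # Step 1: document the smallest behaviour first.
    def test_total_of_empty_cart_is_zero():
        assert cart_total([]) == 0

    # Step 2: the simplest code that produces that data would be `return 0`.
    # Step 3: a slightly larger expectation then forces the code to generalize.
    def test_total_sums_item_prices():
        assert cart_total([3, 4]) == 7

    def cart_total(prices):
        return sum(prices)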

One of the reasons why the term "BDD" was invented was because many people
(myself included) thought that the phrase "Test Driven Development" was
misleading. We weren't writing tests. We were demonstrating _behaviour_ of the
code. The "tests" we were writing were not "testing" anything. They were
simply expectations of the behaviour of the code. You can see this terminology
in tools like RSpec. For people like me, it was incredibly disheartening that
the Cucumber-like developers adopted the term BDD and used it to describe
something _completely different_. Even more disheartening was that they were
so successful in getting people to adopt that terminology ;-)

Getting back to the term "unit", it was never meant to refer to isolation of a
piece of code. It was meant to simply describe the code you happened to be
working with. If we wanted to write tests for a class we would have called it
"class tests". If we wanted to write tests for an API we would have called it
"API tests". The reason it was called "unit test" (again, as far as I can
tell) is because we wanted to indicate that you could be testing at _any_
level of abstraction. It's just intended to be a placeholder name to indicate
"the piece of code I'm interested in".

I think Michael Feathers best described the situation by comparing a unit to a
part in a woodworking project. When you are working on a piece, you don't want
any of the other pieces to move. You put a clamp on the other pieces and then
you go to work on the piece that you want to develop. The tests are like an
alarm that sounds whenever a piece that is clamped moves. It's not so much
that you are "testing" what it should do as you are documenting its behaviour
in a situation. When you touch a different part of the code, you want to be
alerted when it ends up moving something that is "clamped" (i.e. something you
aren't currently working on). That's all. The "unit" you want to clamp depends
a _lot_ on how you want to describe the movement. It might be a big chunk, or
it might be something incredibly tiny. You decide based on the utility of
being alerted when it moves.

So having said all that, what is the benefit of TDD? Not to test the code, but
rather to document the behaviour. I've thought long and hard about what that
means in practical terms and I've come to the conclusion that it means
_exposing state_. In order to document the behaviour, we need to observe it.
We have "tests", but they are actually more like "probes". Instead of "black
box" interactions (which are fantastic for _acceptance tests_ ) we want to
_open up_ our code so that we can inspect the state in various situations. By
doing that we can sound the alarm when the state moves outside of the bounds
that are expected. The reason to do that is so that we can _modify code in
other places_ safe in the knowledge that we did not move something on the
other end of our project.

Anything you do to expose state and to document it in various situations is,
in my definition anyway, TDD. Test First is extremely useful because it allows
you to do this in an iterated fashion. It's not so much that you wrote the
test first (that's irrelevant). It's that you have broken down the task into
tiny pieces that are easy to implement and that expose state. It just happens
to be the case that it's extremely convenient to write the test first because
you have to know what you want before you can write it. If you are breaking it
down in that kind of detail, then you might as well write the test first. And,
let's face it, it kind of forces you to break it down into that detail to
begin with. That's the whole point of the exercise.

There are times when I don't do test first and there are times when I don't do
TDD. I'll delve into both separately. First, I frequently don't do Test First
even when I'm doing TDD if I'm working with code that has already got a good
TDD shape (exposed state with documented expectations). That's because the
"test" code and the production code are 2 sides of the same coin. I can make a
change in the production behaviour, witness that it breaks some tests and then
update the tests. I often do this to stress test my tests. Have I really
documented the behaviours? If so, changing the behaviour should cause a test
to fail. If it doesn't, maybe I need to take a closer look at those tests.

Additionally, I don't always do TDD. First, there are classes of problems
which don't suit a TDD breakdown (insert infamous Sudoku solver failure here
-- google it). Essentially anything that is a system of constraints or
anything that represents an infinite series is just exceptionally difficult
to break down in this fashion (woe be unto those who have to do Fizz Buzz
using TDD). You need to use different techniques.

Jonathan Blow also recently made an excellent Twitter post about the other
main place where you should avoid TDD: when you don't know how to solve your
problem. It is often the case that you need to experiment with your code to
figure out how to do what you need to do. You don't want to TDD that code
necessarily because it can become too entrenched. Once you figure out what you
want to do, you can come back and rewrite it TDD style. This is the original
intent for XP "spikes"... but then some people said, "Hey we should TDD the
spikes because then we don't need to rewrite the code"... and much hilarity
ensued.

I hope you found this mountain of text entertaining. I've spent 20 years or
more thinking about this and I feel quite comfortable with my style these
days. Other people will do things differently and will be similarly
comfortable. If my style illuminates some concepts, I will be very happy.

~~~
MichaelMoser123
Thanks for your perspective. One issue is that it is often hard to tell in
advance whether you know what you are doing or not. Also in my experience some
implementation details can lead to a revision of the interface as well.

~~~
mikekchar
My best advice is to try it both ways (assuming you are interested in Test
First/TDD). You'll find a sweet spot that works well for you. This is an area
where I think there are lots of things that can work well. For me, my TDD is
probably the sharpest knife in my kit, so I rely on it. For others, maybe
there are other things. Don't let anyone tell you that there is only one way
to do it. Of course you have to find a way to collaborate effectively (and
that's the real difficult part), but in terms of growing as a developer I
think you've got a lot of viable paths.

~~~
MichaelMoser123
Thanks for your advice. I think it is an advantage to learn about different
ways to look at a problem. Thankfully there are a lot of ways to look at
problems when working in the software business.

------
jaimex2
Nope. Can't stand TDD.

Write the code, secure it from refactoring stuff ups with your tests.

------
ChrisMarshallNY
Here's what I tend to do: [https://medium.com/chrismarshallny/testing-harness-
vs-unit-4...](https://medium.com/chrismarshallny/testing-harness-vs-
unit-498766c499aa)

Basically, sometimes, it makes sense to write tests beforehand, but most of
the time, I use test harnesses, and "simultaneous test development."

Works for me. YMMV.

------
s188
I don't do TDD on the first version. For me, the first version is a throwaway
version. If it turns out to be commercially viable, that's when I start with a
brand new codebase incorporating all the lessons from the first version, but
this time using TDD. TDD has its place. I just don't think it's cost-effective
on the first version.

------
cjfd
Yes, I am pretty consistent in it. It has the great benefit of leading to
highly reliable software even in the face of complex requirements and many
requests for changes.

Rewriting the tests completely does not really happen. Sometimes I am not
entirely sure of all the things that the production code should do so then I
go back and forth between test and executable code. In that case one needs to
be very aware of whether a failure is a problem in the executable code or the
test code.

It pretty much works all the time. Occasionally there are the exceptions. If a
thing is only visual, e.g., in a web interface, it may be best to first write
the production code because the browser in my head may not be good enough to
write a correct test for it. Also, in the case of code that is more on the
scientific/numeric side of things one may start out with more executable code
per test than one usually would. I still write the test first in that case,
though.

------
lowercased
mostly tests while doing implementation, but not 100%.

however, I've started working on a project with others, and am becoming a bit
more adamant on "this needs tests". Codebase had none after a year, and the
other dev(s) are far more focused on code appearance than functionality.
Fluent interfaces, "cool" language features, "conventional commit" message
structure, etc are all prized. Sample data and tests? None so far (up until I
started last week).

I've had push back on my initial contributions, and I keep saying "fine - I
don't care how we actually do the code - change whatever I've done - just make
sure the tests still run". All I've had is criticism of the code appearance,
because it's not in keeping with the 'same style' as before. But... the
'style' before was not testable, so... there's this infinite loop thing going
on.

~~~
hackerm0nkey
yeah, that's the problem. If they don't get the value of what a test gives
them, and then the focus shifts to aesthetics and conventions, etc...

Personally, when I review code, I look for the test; I need something to tell
me why that code exists and a proof that it works.

~~~
lowercased
I had this issue with a different client about 6 months ago. I understand
there are 'coding styles' that some companies stick with, and I'm not strictly
opposed to them. I do bristle when I'm presented with the 'one true way' from
devs who spend all their time on one project, or one tech stack, or one
company. I jump around a lot, and multiple companies have "the one true way",
and the conflict. Processes around commits, flow, commenting, etc - these vary
more than some people would care to admit.

In response to that, I've started to care and focus more on tests and sample
data to illustrate the core issues, changes and value for an issue. You want
to change the code from 4 lines into one 4-line chained fluent interface to
match other bits of the code, or to try out your new builder syntax? I really
really have grown to not care too much - as long as I have some tests to
demonstrate when something stopped working (or when our understanding of the
project changed).

------
cjg
In my experience TDD is great when you know what you want a particular piece
of code to do.

The other place it works well is code written as a pair - with one member
writing tests and the other writing the implementation - the challenge is on
to pass the buck back to the other pair member - i.e. find the obvious test
that will cause the code to fail / find a simple implementation that will
cause the tests to pass. This is great fun and leads to some high-quality code
with lots of fast tests.

The benefit of TDD is that your coverage is pretty high - and you aren't
writing anything unnecessary (YAGNI).

I don't think I have ever rewritten tests that I have written (TDD or
otherwise). They might get refactored.

TDD doesn't work so well when you only vaguely understand what you are trying
to do. This is not a coding / testing problem - get a better vision -
prototype something perhaps - i.e. no tests and very deliberately throw it
away.

------
aazaa
This is a big topic, and you're asking some good questions. Rather than tackle
it all, I can recommend expanding your questions with the following.

When someone tells you they don't write tests first, ask them how they
refactor. How do they _know_ the changes they made didn't break anything?

You can fool yourself with test-first, but it's quite difficult to do if
you're rigorously following the practice. First write a failing test. Next,
write only enough production code to fix the failing test. Optionally
refactor. Rinse and repeat.

Code created this way can prove that every line of production code resulted
from a failing test. Nothing is untested, by definition. The code may be
incomplete due to cases not considered, but everything present has been
tested. Note that it's possible to break this guarantee by writing production
code unnecessary to get the test to pass.
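
For illustration, one turn of that loop in Python under pytest (the function is made up for the example):

    # 1. Write a failing test -- slugify() does not exist yet, so this is "red".
    def test_slugify_replaces_spaces_with_dashes():
        assert slugify("hello world") == "hello-world"

    # 2. Write only enough production code to make it pass ("green").
    def slugify(text):
        return text.replace(" ", "-")

    # 3. Optionally refactor, then write the next failing test (punctuation,
    #    upper-case input, ...) and repeat.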

------
kraftman
I have had one excellent experience with TDD. I re-wrote the stubbing library
Sinon in Lua, and as I wrote a feature I wrote the test first, then made the
test pass. Since I wanted it to match Sinon as much as possible, the
requirements were exact, meaning the tests I wrote never had to be refactored.
The whole thing was really smooth and worked really well.

The issue I find is that generally we aren't writing code we know the exact
requirements for, so doing TDD means that not only are you refactoring your
code as you understand the problem better, but you're also refactoring your
tests, which increases the workload.

Maybe that's a sign that we need to spend a lot more time designing before
implementing, but I've never worked anywhere that happens enough to use TDD as
nicely as my experience with my Sinon clone.

------
jammygit
My understanding is that tests are mostly supposed to increase iteration speed
- the opposite of what most comments here suggest.

> The change curve says that as the project runs, it becomes exponentially
> more expensive to make changes.

> The fundamental assumption underlying XP is that it is possible to flatten
> the change curve enough to make evolutionary design work.

> At the core are the practices of Testing, and Continuous Integration.
> Without the safety provided by testing the rest of XP would be impossible.
> Continuous Integration is necessary to keep the team in sync, so that you
> can make a change and not be worried about integrating it with other people.

> Refactoring also has a big effect

\- Martin Fowler

[https://www.martinfowler.com/articles/designDead.html](https://www.martinfowler.com/articles/designDead.html)

~~~
jdlshore
Lots of people are writing tests as an end goal rather than a means to the
“malleability” end goal. They end up writing tests that satisfy code coverage
metrics, but are too closely tied to implementation.

------
steve_adams_86
I realized recently that only 10% or so of the code I tend to write truly
benefits from thorough testing. The rest can be handled more broadly through
integration testing which is less about the code specifically and more about
expected end use cases, like user workflows. I find those tests very useful. I
only write those after the flow is established and more or less finalized.

I used to write a lot of tests and discovered over summer that it costs too
much in terms of time spent writing, changing, and debugging tests for what
you tend to get out of it.

I do think writing a lot of tests for a legacy or relatively old system is a
great way to uncover existing bugs and document expected behaviours. With that
done, refactoring or rebuilding is possible and you gain a great understanding
of the software.

------
Razengan
I have never been able to "grok" the idea of writing tests before writing the
implementation. My brain just doesn't work that way. It's like a speed bump
that makes me lose the idea or inspiration if I try to think of it in terms of
"tests" first.

However, when I need to overhaul something that already exists, e.g. the core
of a game engine, I've gotten into the habit of writing tests for current
behavior, so that when I rip it out its replacement works the same way, or at
least retains the same interface, so I don't have to replace the whole pyramid
on top before I can compile again. :)

This has also helped me realize the value of tests, but later on in the
development cycle, not as the base before actually writing anything.

------
adamzapasnik
No, my clients wouldn't work with me then...

On a serious note, I find it hard to write tests at the beginning for code
when I'm not sure what it's gonna do or how. What do I mean by that? Well, as
all you probably have experienced, requirements change during development,
sometimes 3rd party/microservices/db constraints don't let us achieve what we
want. We have to come up with hacky/silly solutions that would require us to
rewrite most of the tests that we wrote.

A lot of times I don't even know how to code the stuff I'm required to build.
How am I supposed to write tests in that kind of situation? I think it would
be like building abstractions for problems that I don't know very well yet.

------
buildbot
I've never had success doing this in languages like Java or Python, but where
it's been very very helpful is in hardware description languages. Since you
are often implementing a piece of hardware with known input and output specs,
writing tests first can work and show you how much of your design meets the
spec as you go forward.

Plus writing HDL without tests is basically guaranteed to create something
nonfunctional.

I hate unit testing in, for example, Java though; individual functions are
typically very basic and don't do much. A service? Integration tests? Sign me
up. But unit testing to 100% coverage a ten-line function that reads a
bytestream into an object and sets some fields is boring, and fairly difficult
to mock.

------
ravenstine
I usually practice Test Along The Way rather than Test Driven Development. A
lot of problems aren't well understood in the beginning, in which case it
doesn't make sense to me to write tests in a way that will end up forcing a
stupid design upon my code.

------
golergka
I write both client-side and server-side code.

In server-side, almost all code is atomic, very functional and is very easy to
cover with tests, on different levels. I start with several unit tests before
implementation and then add a new test for any bug.

The client, however, is a completely different story. It's a thick game
client, and through my career I honestly tried adopting TDD for it - but the
domain itself is very unwelcoming to this approach. The tests were very
brittle, with a ton of time spent setting them up, and didn't catch any
significant bugs. In the end, I abandoned trying to write tests for it
altogether - at least I'll be able to write my own, functional and test-driven
game engine, to begin with.

------
sudhirj
Depends entirely on the code and the context - if it's something like a
library, mostly yes. There I already know what I want the library to do, and
writing the test first is a great way to get a feel for what the ergonomics
are like in actual use. Also a great way to spec out what the library will and
will not do. Then I write to make it work, and once the tests are passing I
keep refactoring to make it neater and easier to understand, and then maybe
add benchmarks and move on to optimization.

If it's an application or framework, I usually drive it from the UI, so tests
are more an afterthought or a way to check / ensure something.

I find the best balance is to have thick libraries and thin applications, but
YMMV.

------
jakespracher
Not always before, but always eventually and usually once I’m done with
“exploration”. Testing done right should be a time saver in the long run! I
think many people are turned off by testing and especially unit testing
because it ends up being difficult and maintenance is more of a pain than it’s
worth. There are many good strategies to make it easier that in my experience
have yet to be well adopted:

[https://m.youtube.com/watch?v=URSWYvyc42M](https://m.youtube.com/watch?v=URSWYvyc42M)
[https://www.destroyallsoftware.com/talks/boundaries](https://www.destroyallsoftware.com/talks/boundaries)

------
52-6F-62
Extremely rarely. Probably the thing I do the most is write the implementation
as I think it should function, then write the test to meet the result I want,
and correct from there. Sometimes the test requires a correction and sometimes
the implementation does.

------
uptownJimmy
I am quite curious to know what percentage of working devs ever write tests at
all. Because I don't think it's the majority, based on my own professional
anecdata.

But I've not yet been convinced that any of the various polls are very
authoritative. So I dunno.

~~~
wyldfire
I write tests for each bugfix and feature with rare exception.

It's a great practice to have a regression test suite that you can use to run
your code in a simple context. The unit test suite can catch all kinds of low-
hanging fruit instead of waiting until you deploy the code to your target
device or production service (or even the release cycle for those).

------
cygned
No, in fact, we don’t use coded tests at all, deliverables are tested by the
product owner(s). On the code level, we hardly see bugs, not even when
refactoring. I often wonder if it is luck or expertise, and whether we would
benefit from writing tests.

~~~
mattmanser
I used to wonder the same thing, until I inherited a project with tests.

The tests never caught a bug for the first 3 years. Got in the way a lot
though.

Then they finally caught 1.

Not worth it.

~~~
jlangr
I don't see unit tests "catch bugs" often, either, in the sense that the CI
build fails due to defective code pushed up.

And even with TDD, I don't often find myself breaking a lot of things that
were already working, though it does happen. In those infrequent cases, it's
extremely valuable to know I broke stuff that was working. I.e., it's pretty
sad to ship changes that broke other behaviors, things you hadn't the faintest
clue you were impacting.

What I do see gobs of, when doing TDD, is the tests preventing crap code from
getting integrated in the first place, i.e. when I or others first write the
code (or change the code of others). From the testing perspective, that's the
real thing they do--gate the defects from ever leaving your desktop, and in a
far faster manner than most other routes.

Unless, of course, one is a perfect coder.

In any case, TDD has more important benefits that I've also gotten. Easily
worth it for me.

------
clarry
No, although sometimes I do wish I were working on problems that are so simple
to test.

------
wmu
I found TDD impractical. I can recall one situation when I used TDD. I was
writing a printf-like formatter in C++ and prepared a lot of test cases in
advance. That approach worked well at an early stage of development. However,
further development revealed quirks I hadn't predicted. As a result, the
number and complexity of the tests increased.

My typical practice is to work out an API, write early sketches of the
implementation and test only simple cases. Then I can inspect two things: how
the API works in real code and what else should be tested. In other words,
tests help to establish an API, then to stabilize the implementation.

------
jeremija
I've always found it easier to write the skeleton of a module first (or an
interface, depending on the language), and then write the tests to cover the
main functionality, then the tests for the edge cases, and then finish the
implementation.

I usually finish by checking the test coverage and trying to make it reach
100% branch coverage if I have the time. The coverage part is important
because it usually makes me realize things could be made simpler, with fever
if/else cases.

I could never get used to writing only the tests first simply because all the
compilation errors get in the way (because the module doesn't exist yet).

------
BerislavLopac
Definitely, whenever I have a clearly defined design to implement. If I don't,
I generally try to design the API/interface first, then write at least the
basic tests before implementing.

That being said, I never write all the possible tests before starting with the
implementation. They're called unit tests for a reason -- I generally write at
least a few tests for a particular unit (say, a function or method) and then
write the implementation before starting work on another. And I often go
back and add extra tests for an already implemented unit to cover some edge
cases and error conditions.

------
rm_-rf_slash
Acceptance tests, yes. Nothing puts my mind at ease like knowing that when I
feel I am done coding, I can follow a list of actions on a spreadsheet and
call it done when every row is green.

Integration tests, sometimes. Depending on the complexity of the system, I
might skip this part. If it is a collaborative work then integration tests are
(in my view) mandatory for ensuring that everyone’s code plays nice with
everyone else’s.

Unit tests, almost never. Unless it’s something absolutely production
critical, pull-your-hair-out-at-5-on-a-Friday kind of feature, it’s usually
not worth the extra time putting unit tests together.

------
champagnepapi
I mean I would "like" to; however, working in a startup there are some
limitations. Our primary focus is building software that will improve metrics
for our business, and often that software will only last a few months or so.
As a result, it's just too costly for us to write tests for code that will
only last a couple months most likely. Again, we would like to, and there are
certainly some parts (small-parts) that have lasted for more than a few months
and those don't have tests either, in that case, we "just never got around to
it".

------
jononor
Never did TDD as prescribed. But I try to write one test together with the
implementation, to have basic coverage to cover the most silly things, and to
ensure testability. This way it is easy to add more tests later, _if and when
I find that they are needed_.

Also I try to test at subsystem / API boundaries whenever possible. Small
units like a function rarely get their own tests, they are covered implicitly
by being used. This avoids tests of arbitrary internals (that should be free to
change) becoming a maintenance burden. External APIs should be stable.

------
allworknoplay
I often list out and sometimes actually write many of the tests that’ll need
to pass before I write much code. Definitely not real TDD, but just listing
out the cases helps me stay focused, keep from missing things, pick back up
after a distraction, etc. Before doing this I usually have gotten to the point
where I've got classes and database migrations at least scaffolded out, but
mostly empty.

I find literal TDD distracting and unhelpful, but having a list of things I
need to handle that doubles as tests I can't forget to write is a really nice
balance.

------
jacobush
Almost never when writing new code. It feels like editing a manuscript before
it even exists. If I "write" the code in pseudocode in my head or on paper
first, I might write some tests first.

~~~
hackerm0nkey
Tests can be used to clarify your intentions and what you are attempting to
build. They work best when you are actually starting from a clean slate.

Just write enough code to make the test pass, no more no less. Refactor and
repeat.

In recent years I've never written any new piece of code without a test first,
and I could not be any happier. Besides the confidence a test gives you, it's
really a great way to pin down your thoughts and write what materializes them.

~~~
jacobush
Yeah... I have read that and told that to others _many_ times. But the thing
is some of my best work (rare, though) has not been to spec. To _any_ spec.
More like, when you are doodling on a paper. _" This might be a lake with a
duck. Nope scratch that, it's actually a dragon and those are its scales.
Yep."_

~~~
hackerm0nkey
Yes, I understand what you mean. When I face uncertainty, I explore with not
so much emphasis on testing. But once I am over with that exploratory phase, I
funnel the learning into the TDD process to solidify my implementation and
guide my design.

BTW, I rarely work from a well-defined spec these days.

------
h0h0h0h0111
If the problem is complex, or I don't understand the requirements well, I
write tests first.

For complex tasks, breaking down the problem into a single requirement per
test helps me understand it better, and ensure I don't introduce regressions
while refactoring or adding new requirements.

However, a lot of modern code is hooking up certain libraries or doing common
tasks that don't get a lot of value from unit tests (mapping big models to
other models, defining REST endpoints, etc), so I don't generally write unit
tests for those (but I do write integration tests).

------
littlestymaar
I almost never write tests beforehand, unless I'm working on a really complex
subject, where I need to write all tests first to grasp the problem entirely.
Otherwise, I write tests before submitting the pull request.

I almost never write tests for personal projects, but 100% of the time when
working in a team. IMO, tests are not here to prevent bugs, but they are part
of the developer documentation: a coworker must be able to make any change
they want to my code without asking me anything, and tests are the biggest
part of that.

------
claudiug
For new development, no. Over the last 10 years I've realised that stuff will
change fast. When it is solid, and the feature is clear and will remain the
same for at least a few months, yes, then I do it :)

------
soylentgraham
No. Every time I try, I quickly fall out of the habit. I do write high-level
first and then implement code after establishing API/what I need these days (I
end up re-writing that a few times) which seems ideal for TDD but just doesn't
seem to work that way.

What I do do is add a unit test for (almost) every formal bug I come across
(to prove the bug, and fix it), so that bug never happens again. Over the
years that seems to have given the best results for backwards compatibility,
stability, etc.

------
aliswe
I heard that Martin Fowler once said, only test what subjectively "can fail".
If you make tests for more than that, you are by your own definition wasting
your own time ...

------
40acres
Like all things in life "it depends".

I rarely practiced TDD until I started working on a piece of software that
could take anywhere from under a minute to an hour to finish running. This
means I must be able to isolate the specific portion of code effectively to
save time and focus on the problem at hand. The APIs for this model are well
written so I can recreate a bug or test out new code effectively by stitching
some APIs together in a unit test. It's incredibly helpful in that sense.

------
meddlin
No.

I've had Sr. Soft. Engs. ask me why I thought we needed unit tests at all.
I've had managers not know what they were. I've worked on projects where I was
the only developer who wasn't afraid of the technology, but management
couldn't give proper requirements. I've also worked in code bases where no
testing framework (of any kind) existed.

I don't mean to sound combative. Looking back those places would benefit
immensely from structured testing. Life got in the way though.

------
Xelbair
I usually write an implementation first, but when testing it I try to
re-evaluate the requirements - I do not look at the implementation at all.
Tests should handle all edge cases.

And I do modify tests, because sometimes assumptions are wrong (or domain
experts change their mind or got something wrong).

Sadly, functionality requirements are very soft in my industry (once I was
asked to do a perfect fuzzy match...).

Most of the time I am required to modify untested legacy code; that starts
with a test (if it is even possible).

------
jermaustin1
When I build a new web app, I start with a REST API. The input and output are
known, so bugs are less likely. As I roll out new API methods, I add them to a
Postman collection, and add a couple of tests to it. Then with each deploy, I
point my postman to that environment and hit run.

When building the client application, I typically just manually test as I go
along. Bugs happen, but because the foundation code is all REST API, the bugs
are usually easily fixed.
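
The same kind of per-environment check can also be scripted outside Postman; a hedged sketch in Python with the requests library, against a made-up base URL and endpoints:

    import requests

    BASE_URL = "https://staging.example.com/api"  # hypothetical environment

    def test_health_endpoint_responds():
        response = requests.get(f"{BASE_URL}/health", timeout=5)
        assert response.status_code == 200

    def test_create_then_fetch_widget():
        created = requests.post(f"{BASE_URL}/widgets", json={"name": "demo"}, timeout=5)
        assert created.status_code == 201
        widget_id = created.json()["id"]
        fetched = requests.get(f"{BASE_URL}/widgets/{widget_id}", timeout=5)
        assert fetched.json()["name"] == "demo"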

------
davidbanham
Whenever I can and it makes sense, yes. When I write a test first the
resulting code is always better.

Sometimes the change is so trivial or type safe that it's not worth it, so I
don't.

Sometimes I don't understand the problem well enough, so I learn more about it
by doing some exploratory coding and prototyping. I usually come back and
write a test after the fact.

Sometimes the project is on fire and I'm just throwing mud at the wall to see
what sticks.

------
choiway
No at the start of a project. Yes for refactoring.

I used to write tests for all pure functions because they were the easiest
tests to set up. They are also the easiest to debug so the test didn't really
help unless you're checking for type signatures.

I think that implementation tests are important but I found that I suck at
figuring out how to set up a test before the actual implementation. So I do
them after the fact and judiciously.

------
unixsheikh
No. I have always felt that TDD gives a false feeling of safety and
satisfaction, and that it is mostly a waste of time that could be better spent
optimizing and refactoring.

Testing simple code is simple and therefore pretty much useless. Testing
complicated code is complicated and therefore more likely to fail by making too
few or too many assumptions in the test, or completely screwing up the test
code itself.

~~~
rocgf
A couple of takeaways from your post:

\- knowing your code does what you expect is a _false_ sense of safety

\- if something is simple, it is useless

\- if something is complicated, it is useless

~~~
unixsheikh
That's the problem. You assume that because you have a couple of passed tests
that you know your code. Tests are like any other code, they incur bugs at the
same rate as other code, and as such can very much give a false sense of
safety.

------
Double_a_92
Only for functions that have a clear input and output. You can nicely think of
all the edge cases, and then get going with the actual code.

But most of the time, fixing some bug or implementing a feature is more of
experimenting and prototyping at first. Writing tests for every futile attempt
would be a waste of time.

At best we design some small architecture first with interfaces, and then
create the tests off that.
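
For illustration, that "clear input and output" case with pytest (clamp is a made-up example); parametrizing makes it cheap to enumerate the edge cases before writing the real code:

    import pytest

    def clamp(value, low, high):
        """Example pure function with an obvious contract."""
        return max(low, min(value, high))

    @pytest.mark.parametrize("value,expected", [
        (5, 5),     # inside the range
        (-1, 0),    # below the lower bound
        (99, 10),   # above the upper bound
        (0, 0),     # exactly on the lower bound
        (10, 10),   # exactly on the upper bound
    ])
    def test_clamp_edge_cases(value, expected):
        assert clamp(value, 0, 10) == expected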

------
gfiorav
Most of the time. It helps me design the code that I'm about to write.
Sometimes I go further and even write the docs ahead of the tests.

------
muzani
Manual tests, yes. More along the lines of "user should be able to do this".

The developer does a round of these tests as best they can. Then they toss it
to QA who tries to break it, but must also do the same tests.

It prevents a lot of bad design bugs, but adds almost no overhead.

Automated testing should be applied only where this manual testing becomes
tedious or where we often make mistakes in testing.

------
1337shadow
If I'm changing a software that's in production: yes. If I'm just messing
around with a prototype I'm the only user of and don't yet know if it's going
to be useful or go in production: no. I do like to write tests that actually
write tests though, see such patterns in django-dbdiff, django-responsediff,
or cli2 (autotest function).

------
Nursie
Personally, nope.

I could write a test plan first, but I haven't always fully designed the
interfaces until I've ploughed into the code and figured out what needs to be
passed where, so there would be a lot of repeat effort in fixing up the tests
afterwards.

Effectively, in writing tests first you make assumptions about the code. These
don't always turn out to be true.

------
FpUser
Normally no. I do not do TDD either. If, however, I am writing some complex
algorithm where I know that I'll make a few bugs here and there, I will write
tests.

Being too proper and doing everything by the book does not always translate to
better code or good ROI.

Also being an older fart and programming for so many years I am usually pretty
good at not making too many bugs anyways.

------
dexterbt1
I follow the Functional Core, Imperative Shell pattern.

I do TDD on Core, especially on mission critical code.

The Shell, however, has almost zero automated tests.
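
For illustration, a minimal sketch of that split (names invented for the example): the decision logic lives in a pure function that is easy to test-drive, while the shell only performs I/O and stays thin.

    # Functional core: pure, deterministic, easy to TDD.
    def price_with_discount(subtotal, is_returning_customer):
        return subtotal * 0.9 if is_returning_customer else subtotal

    def test_returning_customers_get_ten_percent_off():
        assert price_with_discount(100.0, True) == 90.0

    # Imperative shell: talks to the outside world, holds no logic worth unit testing.
    def checkout(order_id, db, payment_gateway):
        order = db.load_order(order_id)
        total = price_with_discount(order.subtotal, order.customer.is_returning)
        payment_gateway.charge(order.customer, total)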

~~~
jononor
Thanks for that term, it sounds related to the style I prefer. Do you have any
recommended references?

~~~
dexterbt1
Try to watch Gary Bernhardt's "Boundaries" screencast (2012), if you haven't
yet.

There are also some collection of links in github such as
[https://gist.github.com/kbilsted/abdc017858cad68c3e7926b0364...](https://gist.github.com/kbilsted/abdc017858cad68c3e7926b03646554e).

The pattern articulated by Gary also resonated best with my thoughts and
experience in building software.

------
okaleniuk
No. Doesn't agree with exploration.

I once had a job that didn't require any. I was given a specification and my
job was to write some DSL code from it. This would have been an excellent setup
to practice TDD! Unfortunately, I wrote a script that basically translates the
specification from word documents into DSL snippets and quit soon after.

------
ryanthedev
I use tests when I need to debug certain sections of my code. It's faster in
the overall development process.

Also most of my tests revolve around business logic, where I need to test
multiple versions of data.

The best advice I could give would be to write test cases around errors.

That's usually where most bugs are found, when something doesn't return what
you expect.

------
thrownaway954
I fall under the category of: if I'm getting paid to do it, then it's up to the
client, and in that case most want things done fast and cheap, so testing isn't
going to get done. When I'm doing open source stuff, that's for me and others
to benefit and learn from, so I test EVERYTHING I can, so I learn.

------
onion2k
No, because I'm always under pressure to show something on a screen as early
as possible, but I really wish I did.

------
JoeMayoBot
As an independent consultant, I do what the customer is doing. If they like
TDD, then that's what I do. If they dislike it, then I do what they do. Most
of my customers don't write tests first. I have one open source project where
I always write tests first to make sure I don't get out of practice.

------
Snetry
Most of the time: no. But for things where certainty is a requirement, I'll
do it without a second thought.

------
Smithalicious
No, because even if I write the test before the code, the _real_ test will
usually be written after the _real_ code. That is to say, I write a test, then
I write code, then not much later I change the code to different code, and
then I have to change the test to a different test anyways.

------
Jugurtha
No. I suppose only really smart thinkers and high level architects do that. In
a book or a blog post, not in a real thing. It's similar to Aikido or a
kata-based martial dance where the opponents are either imaginary or aren't
allowed to hit back/must follow a script if they exist.

------
forgottenpass
Not often. But when you're implementing something that needs a lot of
iteration, a test case that you never commit can be a good alternative to
copy-pasting statements into REPL. Then as it comes together you can just
clean up that garbage test and turn it into something worth committing.

------
lalaithion
I write tests NOT for correctness, but in order to refine the API for public
methods/functions. If I'm going to have some fairly complicated behavior, I
want to write an interface that is easy to use first, because otherwise I end
up writing an interface that's easy to implement.
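
For illustration, that can mean writing the call site you would like to have before the implementation exists; a sketch with invented names:

    import time

    # Written first: the interface I'd like to call.
    def test_retry_reads_naturally_at_the_call_site():
        assert retry(flaky_fetch, attempts=3, backoff_seconds=0.1) == "ok"

    # Only then implement whatever makes that call shape possible.
    def flaky_fetch():
        return "ok"

    def retry(func, attempts, backoff_seconds):
        for attempt in range(attempts):
            try:
                return func()
            except Exception:
                if attempt == attempts - 1:
                    raise
                time.sleep(backoff_seconds)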

------
de_watcher
When I contemplate how I'm going to implement it and it starts to look
overwhelming and tedious then I go and write some tests.

Other situations come just from experience: you know that some part will
have a lot of special cases, so you implement a test as soon as you think of
the next special case.

------
apocalyptic0n3
I attempt to, but it often doesn't work out that way. I'd say maybe 40% of my
tests are written before the implementation; for 75% of the remaining
implementations I start by writing tests, then forget about them, complete the
implementation, and write tests for it after.

------
senderista
Writing a simple test suite as a sanity check during development can be
helpful, but TDD is idiotic. If you don’t believe me, read this:
[https://news.ycombinator.com/item?id=3033446](https://news.ycombinator.com/item?id=3033446).

------
donatj
Others have said similar, but for better or worse the three cases where I
usually write tests are:

        - when it's really easy
        - when it's really important
        - before a refactor

The last one is arguably the most important and has saved me a lot of
headaches over the years.

------
LeftHandPath
I’ve actually found it helpful to start by writing tests, and then in the
actual code, documentation (just about what the code does or should do),
especially when I’m not entirely sure how to solve a given problem.

But for code that’s trivial to implement, it seems unnecessary...

------
Bnshsysjab
One thing I have got into the habit of is live updates - saving a file
triggers a rebuild, which then gets reflected by live.js in a browser.
Automating as much as I can during testing to reduce manual actions saves me a
bunch of time.

------
mathgladiator
Kind of, I write my code then test it via unit tests. Then I focus on
achieving 100% code coverage by tuning the code and beat the devil out of it.

I have found that beating code is a great way to preserve my sleep and save
the next person a headache.

------
iamsb
I definitely think about the tests. I will sometimes write a boundary value
analysis in my notebook. But very rarely would I write tests before I write
code.

I just realized that a "Never have I ever" version for programmers would be
quite interesting.

------
Callmenorm
I do sometimes. I do it when I really know what the outcome should be, and I
know that this will speed up my implementation. But when I'm just throwing
ideas around, the tests are never written before the implementation.

------
truth_seeker
I prefer BDD with parameterized test templates over TDD.

I give priority to end to end working of the software stack.

I always make sure that the test suite can be executed in parallel threads.

I make sure that tests are written before I merge my code in the master
branch.

------
jefftk
Kind of. For a lot of my work I first elicit the bad behavior, then fix the
code to no longer be broken, then write a test that automates what I did
manually initially. The manual stage is the first step in writing the test.

------
thih9
Yes.

I like BDD, it helps me focus on the goal.

I feel that it lets me find the right approach faster.

Also, helps avoid distractions and optimizing things too early.

Related: „Write tests. Not too many. Mostly integration.”,
httsps://kentcdodds.com/blog/write-tests/

~~~
thih9
Typo in url: [https://kentcdodds.com/blog/write-
tests/](https://kentcdodds.com/blog/write-tests/)

------
jammygit
I write my manual acceptance tests before I write a big fix. It forces me to
straighten out in my head what the intended behaviour is and how to prove it.
Imho, it has saved me a lot of time from confused starts

------
dver
I've written integration test rigs prior to development, not unit tests.

------
WrtCdEvrydy
Yes. For bug tickets, I will regularly write a test that fully encapsulates
the issue, then try the suggested fix with the test being used for validation.

At the new feature level, I have found not a lot of use for TDD.

------
OJFord
Only when fixing a bug (job done when tests pass) or reworking something that
I believe isn't well tested (job done when I'm happy with the refactor & tests
still pass).

------
theshrike79
No, because I very rarely do algorithmic stuff that would benefit from unit
tests.

When I start to build something, I don't exactly know how it will work and
what the output will be.

~~~
hackerm0nkey
What code won't benefit from unit testing?

If you don't know what to test, then you don't know what you are implementing.
Find what that is, clarify it and pin it down with a test then move on to the
implementation to make it happen.

If you don't know what the output is like, then do some exploratory throw-away
work to know a little more. Then write the test that you would've written if
you knew what the output is like.

Tackle your problems one at a time; not knowing what to test is not a good
enough excuse to not test first.

Make it Work - Make it Right - Make it Fast (while still under the protection
of your first-written test)

------
Quarrelsome
Depends if it's a critical component. For stuff that needs to be atomic and
handles critical data, then yeah. Otherwise less so.

TDD is really good for stuff that doesn't have a native test workflow
(headless invisible stuff) as it can double up as a test harness, so for stuff
like message queues it's great. For user interfaces it's pretty crap though
because you already have a test harness and your eyes are much better at
testing.

------
aloer
sometimes I do TDD for modules that are simultaneously developed for both
frontend and backend (nodeJS)

This way I can mostly stay in the frontend where the tooling is better and the
results are more visual, but still be confident that I don't violate the
proper abstraction that the backend requires.

Kinda like predefining the shared module contract between front- and backend
and forcing myself not to forget about it when in the (frontend) flow

------
exabrial
Only when the requirements are tremendously clear.

Most projects combine "discovery" with "development", making up-front test
writing a poor use of time.

------
axilmar
I write the tests before the implementation only when I need to design an API,
in order to get a feeling of what I want the API to be like. Otherwise, no.

------
stdoutrap
A hilarious parody rap song about writing tests

[https://youtu.be/WkODp99eQ4M](https://youtu.be/WkODp99eQ4M)

------
jimnotgym
I prefer TDD. I like the way it breaks down the coding process. I have been a
total failure at convincing my team to do the same, however.

------
konart
Only when I'm sure the requirements won't change while I'm writing the
implementation. Which is 5%-10% of the time at best.

------
Scoundreller
In healthcare we do before big upgrades. We're convinced that some of our
vendors don't and we're their beta testers...

------
typedef_struct
I write tests whenever something breaks. That way I have a record of what has
broken in the past and nothing breaks twice.

Don't tell anyone.

------
btbuildem
In the entirety of my career, I've heard this talked about, extolled as a
virtue, and never ever seen it done in practice.

------
watwut
No. There are situations where I do, though: when I am adding a new option to
an existing library where the API is already set and clear.

------
petr25102018
No, almost never, with no intention of doing so in the future. I don't think
TDD is necessary or better for anything.

------
mperham
I write the impl, then tests and then refactor based on the tests. Tests are
an excellent data point on impl ease of use.

------
vitomd
Knowing when to use TDD (or any other tool/method) is the difference between a
good and a great developer.

------
errantmind
No, and often not after implementation either, for the same reason I don't
write tests for my tests.

------
bobowzki
No. I tend to develop them in parallel.

------
janpot
No, I consider my tests to be as much part of my code as my code itself, so I
write them together.

------
BiteCode_dev
It depends on how much control I have over the topic. If:

\- I know the topic well.
\- I understand the domain well.
\- I can picture the technology, expectations and API well.
\- I have mastery of my time and deadline.

Then yes, I do.

E.g., I'm currently making a proposal for a new API in an open source Python
lib called aiohttp.

I have time. I have leeway. I have a good overview of the entire problem. So I
can afford to think about the API first:

[https://github.com/aio-libs/aiohttp/issues/4346](https://github.com/aio-
libs/aiohttp/issues/4346)

Then I will write unit tests. Then I will write the code.

But that's a rare case.

Many times coding involves fiddling, trying things out, writing drafts and
snippets until something starts to solve what you want.

Other times, you want a quick and dirty script, or you have a big system but
you are OK with it failing from time to time. The cost of testing is then not
worth it. You'd be surprised how well a company can do, and how satisfied
customers say they are, despite the website showing a 500 once in a while.

And of course, you have limited resources and deadlines. Unit tests are an
upfront payment, and you may not be able to afford it. As is often the case,
this means the total cost of the project will likely be higher, but the
initial cost will be within your price range. And you may very well need that.

One additional thing few people talk about is how organizations can make this
hard for you. You may be working in orgs where the tooling you need (CI,
version control, a specific testing lib...) is refused to you. You may even be
working in companies where you cannot get clear information about the domain,
only vague specs, and the only way to design the software is to ship, see it
break, and wait for customers to report problems, because otherwise the
marketing people won't let you talk to them. I'd say change jobs, but that's
not the point.

Lastly, you have the problem of less experienced devs. TDD is hard. It
requires years of practice in the field to do properly, because you build a
system in a completely abstract way. Dependency injection, inversion of
control, mocking and all the other stuff you need to make a properly testable
system are not intuitive: you learn them along the way. Even harder is the
fact that you have to use the system in your head first, since you are not
coding it but what wraps it. And even more terrible is the fact that badly
implemented, overused and overengineered design patterns make the problem
worse, not better.
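
As a tiny illustration of the dependency-injection point (a generic Python
sketch, not code from any project mentioned here):

    # Hard to test: the collaborator is created inside the function.
    #   def send_welcome_email(user):
    #       SmtpClient("smtp.example.com").send(user.email, "Welcome!")

    # Easier to test: the collaborator is injected, so a fake can stand in.
    def send_welcome_email(user, mailer):
        mailer.send(user.email, "Welcome!")

    class FakeMailer:
        def __init__(self):
            self.sent = []

        def send(self, to, body):
            self.sent.append((to, body))

    def test_welcome_email_goes_to_the_user():
        class User:
            email = "ada@example.com"

        mailer = FakeMailer()
        send_welcome_email(User(), mailer)
        assert mailer.sent == [("ada@example.com", "Welcome!")]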

------
woodrowbarlow
i work on embedded development (lately, bare-metal firmware), and tooling is
often lackluster. i have never worked on an embedded codebase with a unit-
testing framework that's good enough that people actually use it. yes, i know
some projects have managed it, but i have not had the pleasure of working on
such a project.

with that said, if i'm about to tackle something that i _know_ is likely to
have bugs, especially parser implementations, i will always start by isolating
it into a separate file, mocking the dependencies, building test cases, and
compiling and running natively on my workstation. i write test cases before i
start the implementation, and continue adding to them throughout the process.
when i'm satisfied, i copy-paste back into the real codebase and do light
integration testing.

these tests ultimately get thrown away, but i genuinely feel that they help me
arrive at a correct implementation more quickly than integration testing
alone. honestly, it just helps me feel more confident that i'm not going to
embarrass myself when the code hits the field. this technique doesn't really
help me with business logic, unfortunately, because accurately mocking the
dependencies is insurmountable.

tl;dr: i use TDD when i think it will save me time, but i don't keep the tests
around because tooling sucks.

i'm posting this partially in the hopes that people have tooling advice for
me.

------
vkaku
No. I write tests after I write code.

------
hyperpallium
TDD hardens interfaces (tests rely on an interface, and must be changed if the
interface evolves). If the interfaces are already set, this is OK.

------
sam0x17
According to git history I do ;)

------
danShumway
It depends entirely on the project.

For most (but not all) software projects, writing tests before you write code
is wrong.

For many (but not all) API-driven projects, I will write tests _alongside_ my
code. So if I would write a few dummy lines of code at the bottom of a file to
confirm something is working, I'll write that up as a test instead.
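
For example (a Python sketch with made-up names, just to show the idea rather
than any particular test library's API):

    # Instead of a throwaway check at the bottom of the module...
    #   print(normalize_tags(["Python", " python ", "PYTHON"]))  # eyeball it

    # ...keep the same check around as a test.
    def normalize_tags(tags):
        """Lowercase, strip and de-duplicate tags, keeping first-seen order."""
        return list(dict.fromkeys(tag.strip().lower() for tag in tags))

    def test_normalize_tags_deduplicates_case_insensitively():
        assert normalize_tags(["Python", " python ", "PYTHON"]) == ["python"]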

In order for that process to work, writing tests needs to be extremely easy --
you need to be able to add a test anywhere without thinking much about it or
wasting time pre-organizing everything.

On that note, shameless self-plug for Distilled
([https://distilledjs.com](https://distilledjs.com)), a testing library I
wrote that I use in all of my projects, and that I like a lot.

The reason Distilled prioritizes flexibility is that I strongly believe there
is no single, right way to do testing that can be applied to every project.

\- For Distilled, I do TDD development where I write tests before code. This
is because Distilled has a rigid API and behaviors, and because I use my tests
as documentation. Distilled aims to have 100% coverage:
[https://gitlab.com/distilled/distilled/blob/stable/src/tests...](https://gitlab.com/distilled/distilled/blob/stable/src/tests/distilled/overview.js)

\- For Serverboy, I only do integration tests based on test ROMs and comparing
screenshots. Those screenshots get written out to my README and show the
current emulator compatibility status. With Serverboy, I only care about the
final accuracy of the emulator: [https://gitlab.com/piglet-
plays/serverboy.js/blob/master/tes...](https://gitlab.com/piglet-
plays/serverboy.js/blob/master/tests/runner.js)

\- For projects like Loop Thesis ([https://loop-thesis.com](https://loop-
thesis.com)), I do a combination of unit tests and integration tests. I don't
aim for 100% code coverage, and I think of my tests as a form of technical
debt. For Loop Thesis, though, I'm also adding performance tests that let me
know when the game is getting more or less efficient.

\- And with exercises or experiments, I add tests haphazardly on the fly
alongside my implementation code, putting very little thought into
organization: [https://gitlab.com/danShumway/javascript-
exercises/blob/mast...](https://gitlab.com/danShumway/javascript-
exercises/blob/master/eventor.js)

So every project has slightly different requirements and goals, and those
goals drive my testing practices, not the other way around.

------
pgcj_poster
Yeah. I also floss every day, clean up the kitchen as I cook, keep off-site
backups of my personal data, call my mother regularly to thank her for raising
me, read the terms and conditions to online services, keep an up-to-date to-do
list, and change all my passwords once a month.

~~~
corodra
Suddenly I feel very lonely for the fact I do actually clean the kitchen
during and right after cooking...

~~~
hombre_fatal
It's a weird quirk of the brain how easy it is to wait until the food hardens
onto the cooking utensils before cleaning it (sometimes needing a hammer and
chisel to remove it) rather than just rinsing it off immediately with water.
Yet, despite knowing this, I pick the former route every damn time anyway.

If you're like this, try putting on some small bluetooth headphones. Now
cleaning the dishes just becomes a way to keep yourself busy while listening
to an interesting podcast.

~~~
corodra
It's weird... I'm a really lazy person. I do the dishes immediately for
exactly this reason: it's faster and easier. I absolutely hate doing crusty
dishes. So I'm just confused in general why people wait. It's straight up less
effort to do it sooner rather than later.

~~~
na85
Do you have kids? I always had the same opinion until I had my daughter, and
then all of a sudden there is a 2 year old who needs supervision and it
becomes easier to clean up after bed time.

~~~
corodra
Procrastinating and having your attention pulled in multiple places are two
different problems. You acknowledge it's easier and want to do the dishes
immediately. But when you realize your two year old is painting the walls with
poop or filled the toilet with all the toilet paper and tried flushing,
resulting in a flood... Everyone can agree, that takes slightly more
precedence than dishes.

------
kpU8efre7r
Unrelated, but my last employer wanted a full design document and then a full
test plan, down to each individual unit test, before we wrote any code. But
then we would constantly be called out in review for not having enough detail,
and if you put in enough detail you'd get called out for skipping ahead in the
process. They also had no coding standard or style guide to speak of. I'm glad
I left that job.

------
kd3
It depends on the specific situation whether I write tests before, during or
after feature development. But having experienced the benefits of a reliable
test suite, I can never go back to not having one. It gives so much more
confidence that things work and, especially, that they don't break.

