
Kent Beck: “I get paid for code that works, not for tests” (2013) - fagnerbrack
https://istacee.wordpress.com/2013/09/18/kent-beck-i-get-paid-for-code-that-works-not-for-tests/
======
kornakiewicz
But we don't write tests to check if our code works. We write tests to be able
to change it in the future with a certain degree of confidence that we don't
break anything, and, if we do, to know what exactly broke.

There are other techniques which can give similar confidence, but tests are
the easiest one.

~~~
raldi
I agree with your assessment that the primary reason we write unit tests is to
be able to quickly make changes in the future without fear of breaking
something: You do some extra work today so that you can save a tiny bit of
work (and worry) tomorrow, and over time that accumulated future benefit
outweighs the penalty paid today.

However, many in the industry forget that this is the underlying reasoning, as
seen just two days ago here on this site. Read through the top-rated comments
on this post:
[https://news.ycombinator.com/item?id=13119138](https://news.ycombinator.com/item?id=13119138)

They describe an emergency situation where a single "3" needed to be changed
to "4" ASAP or people would lose their jobs, and everyone's applauding the
gatekeepers who insisted on significant refactoring and the creation of
additional tests before the change could be approved.

I agree with those who say those improvements should maybe have been demanded
immediately _after_ the fire was out, but those who would have delayed the
firefighting out of blind allegiance to the rules seemed, to me, to have
forgotten that the rules are there to serve the programmers (particularly,
their ability to quickly ship working code), and not the other way around.

A rule that's failing to do that should be changed or ignored.

~~~
wnevets
Everything in moderation.

~~~
nurettin
That's rather extreme.

------
lordnacho
I'll put the question to the other readers:

How often do you find, despite having written tests, that there is some bug in
your software? And in how many of those cases did you feel you should have
foreseen it beforehand, rather than that it was impossible to foresee?

In my experience the most useful tests are the ones that came from some
unforeseen bug, which was then fixed and a test case built around it, so that
it wouldn't get "unfixed".

The least useful tests are the ones for cases you know not to invoke, because
they are obvious. Like how, when you divide by a variable, you know it can't be
zero, so you make sure it can't be zero, making the test case a bit moot.
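
For illustration, a minimal sketch of that pattern in Python (pytest style);
`parse_price` is a hypothetical function that once crashed on blank input, not
something from the thread:

    # Hypothetical: parse_price("") used to raise ValueError in production.
    def parse_price(text):
        if not text.strip():  # the fix for the unforeseen bug
            return None
        return round(float(text), 2)

    # Regression test built around the bug, so it can't get "unfixed".
    def test_blank_input_stays_fixed():
        assert parse_price("   ") is None

    def test_normal_input():
        assert parse_price("19.999") == 20.0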

~~~
andrewstuart2
Perhaps I'm just an outlier, but I think every time I've gone in to write
tests for my code (usually post-testing rather than red/green testing), it's
rather quickly invalidated some assumptions I made when writing the code.

"Oh yeah, that's going to need to be thread safe."

"Oh yeah, that might be nil. Rather frequently."

In my experience, testing has been much more beneficial as an exercise of my
mental model of the code and its interaction than for refactoring. But to that
end, I can think of quite a few times that I've been _very_ grateful for a
unit test suite while I did a large refactoring.

~~~
bluesign
I agree; tread safety and race conditions have hit me a lot. Other than that,
test coverage has provided a lot, I guess.

~~~
Sean1708

      > tread safety and race conditions
    

I feel like this comment wouldn't be out of place in an F1 thread.

------
rqebmm
People get too hung up on the question "to test or not to test" instead of
asking the question "where and when should I test".

I started my career writing iOS clients, and the obsession with TDD was
baffling. 80% of my code was usually either UI or simple Core Data
manipulations, while the last 20% was mostly API parsing and a touch of
business logic. I wrote a few tests for parsing corner cases or business
logic, but they never really gave me any confidence or helped with
refactoring, instead taking up time and adding overhead whenever I made
changes. I suppose I didn't have enough coverage to get the benefits, but
what tests would I write for my UI? What tests would I write for simple Core
Data queries (which is assuredly unit tested already)? What tests would I
write for my parsing libraries (which are already unit tested)?

Then I started working on the (Python + Flask) API backend, and tests were
self-evidently necessary. Python is dynamically typed, which can result in
lots of corner cases when doing simple data manipulations. Python is
interpreted, which means the compiler/IDE won't warn you about syntax issues
without running the code, and you can't catch even the simplest logic errors
without running the function. Most importantly, the API's entire job is
translating data, inputs are in the API parameters or database, and output is
the JSON. It's a perfect function, and tests were obvious. I wrote something
like 600 in a week, then used them to make some major refactors with
confidence.

What I learned from those juxtapositions was that unit tests and automation
are invaluable in certain circumstances. Specifically _any system that creates
machine-readable output_ like JSON, populating a database, or even a non-
trivial object factory should be unit tested like crazy. Any system that
creates human-readable output, like views or changes in an unreachable
database (something like an external API or a bank account), needs to be
human-tested; there's just no way around it.
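
To make the "perfect function" point concrete, here is a minimal sketch of
that kind of test; `translate_user` is a hypothetical translator, not the
actual backend code:

    # Hypothetical translator: database row in, JSON-shaped dict out.
    def translate_user(row):
        return {
            "id": row["user_id"],
            "name": row.get("display_name") or row["login"],
            "active": bool(row.get("active", False)),
        }

    def test_translate_user_falls_back_to_login():
        row = {"user_id": 7, "login": "kent", "display_name": None}
        assert translate_user(row) == {"id": 7, "name": "kent", "active": False}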

~~~
lotyrin
Mostly anything a human can test, a machine can test even if it requires end-
to-end UI-automation integration testing. If there's a test that's part of the
suite that is worth running every time something changes, it's simply a matter
of cost: How much time will automation take, how long will the automation
last, what is the cost of maintenance, is that less than the cost of a person
doing it manually?

People tend to be bad at forecasting these kinds of costs; it's easy for
prejudice for or against test automation to cloud one's ability to be
impartial in the forecast.

------
giis
I'm someone who worked as a dev (4 yrs), later moved to testing (4 yrs), and
finally returned to dev again.

Here are my personal thoughts/experiences:

\- The testing job is underestimated.

\- In general, developers are considered superior to testers.

\- What makes the tester position difficult is repetitive tasks. Yes, you can
automate tasks, but you still need to do some tasks that can't be automated.
These are manual, repeated tasks, often boring.

\- Some developers are so lazy. For example, while testing we once found a
Python syntax error!

\- Management thinks testers can be replaced once they've automated
everything. Obviously they push for this.

\- I know for sure there are projects with passionate developers but no one
who can really take care of the testing side.

\- Devs underestimate/avoid unit tests and rely on the testing team to find
basic issues.

~~~
humanrebar
Don't blame the devs completely. Not many bosses give out raises or bonuses
for good testing. Or even for not shipping bugs. Being a hero and fixing the
bug you shipped gets much more visibility and accolades.

~~~
giis
I'm not blaming all devs, but most devs :) But I guess there is a widespread
belief among managers/bosses that testing is secondary to development.

I've seen testers who find bugs and also fix them, but they don't get the
credit they deserve.

------
woliveirajr
And that's a fantastic observation.

When you step a bit outside the IT world, you'll find that people who
commission software want to receive one thing: software. They bought an app,
they want the app. Simple as that.

If you are good enough to have your code working without tests, good. If you
don't need documentation, good. If you paint your walls with use cases, good.
All that doesn't matter, if you deliver the app you were hired for.

And if your app doesn't work... well, everything you've done doesn't matter
either. Because you were hired to deliver an app.

Of course tests are good, documentation is good, self-documenting code is
good. But only for IT people. The non-IT person who's contracting you (it can
be your own company, too) just wants the app. The software. Working.

~~~
joncrocks
Sure, but they also expect that the 'non-software' bits work OK, and that
you've thought about not only whether the software works now, but whether it
will keep working at some point in the future as well.

If you tell people "You told me you wanted software, not MAINTAINABLE software
:-)" they'll say "Aren't you a professional? Shouldn't you just be doing this
stuff? Isn't it just implied?"

So yes, they're paying for 'the software', but the tests are part of it.
Maintainability, along with things like security and scalability, should be
considered as well as just whether the software 'works'.

~~~
woliveirajr
Yes, I do agree that it's expected. But I also bet that very few non-IT
people have the slightest idea of what it requires. So, let's say that six
months after the contract ended, they need some update.

They'll go to the market and ask how much it costs for someone to include a
new functionality. They'll get bids of $100, $90... and yours. You know you
have all those tests and docs properly done, so you can charge just $10 and
win. Or you can charge $70 and also win. It's up to you, because you know how
hard or easy it'll be.

And if the software was made by someone else, who charged less in the
beginning? You don't know how much work it'll be, so you have to charge a bit
higher, like that $90.

So, from the client's perspective, the difference between well-tested
software and a barely-working one is just the initial price. After all the
contracts are over, they (remember, non-IT) can't know whether the new
functionality has a fair price or not. And whoever didn't develop it can't be
sure about the maintainability, either.

Having tests (and everything else) was good for you, for your future. But, in
general, the client didn't know all this. He just paid for the software...

------
mikegerwitz
> "If I don’t typically make a kind of mistake (like setting the wrong
> variables in a constructor), I don’t test for it."

But for those of us who work on a team, it's far more complicated than that,
and you have no idea who might be touching your code in the future.

~~~
jkire
"When coding on a team, I modify my strategy to carefully test code that we,
collectively, tend to get wrong."

I think that addresses the point quite nicely?

~~~
camelNotation
Not necessarily. In many firms the turnover rate is significant enough that
you have no idea who will be working on your code in six months or so. Unless
you are on the sort of team that never changes, you can't really use your past
experience as a guide for a future team's strengths/weaknesses.

~~~
emodendroket
If your goal is to test every possible scenario, now or in the future, I
think you'll eventually find it's not realistic.

------
kartan
I once asked my teammates why we had tests. They just didn't answer. For
them it was just a dogmatic approach.

That doesn't mean that you should not have them. But you should at least be
able to answer that question, to be able to evaluate the value that they
bring and how many tests you need and where.

------
dekimir
I like Beck's vision for the future, and I agree that we should keep
experimenting in order to learn which tests tend to work and why. But we don't
need to do it all manually -- we can use computers to automate and speed up
such experiments. To that end, I've started a project to automatically
generate unit tests from C++ source:
[https://github.com/dekimir/RamFuzz](https://github.com/dekimir/RamFuzz)

Right now the generated tests are pretty superficial and silly, but the key is
that they are randomized. Because of this, we can run millions of variations,
some of which will prove to be good tests. Right now I'm hacking an AI that
will pick those good instances from many random test runs. If it works, we'll
be able to simply take source code and produce (after burning enough CPU
cycles) good unit tests for it. This will be a huge help in letting the human
programmer only do "enough" test writing -- the AI will take care of the rest.
Additionally, the solution can be unleashed on cruft code that no-one dares
touch because of a lack of tests and understanding.

(Yes, there will be a business built around this, but that's for next year. :)

------
makecheck
It doesn’t make sense to write no tests at all, but I _understand_ this
sentiment based on problems with testing that I have seen before:

\- 1. Test infrastructure is too complex. If I have to create a bunch of
config files, obey a questionable directory structure, etc. before I can even
write my test case, there is a problem. There should be very little magic
between you and your test front-end.

\- 2. Test infrastructure is too lacking. It is also a problem to have too
_little_ support. There should be at least enough consistency between tests
that you can take a look at another test and emulate it. There should be
clearly-identified tools for common operations such as a pattern-excluding
"diff", or a "diff" that ignores small numerical differences (see the sketch
at the end of this comment), etc., depending on the purpose.

\- 3. Existing tests should not be overly-brittle. Do NOT just "diff" a giant
log file (or worse, several files), and call it a day; that means damn near
_any_ code change will cause the test to “fail” and create more work.
Similarly, make absolutely certain when you develop the test that it _can_
fail properly: temporarily force your failure condition so you know your
error-detection logic is sound.

\- 4. Tests should not be overly-large. Do not just take some entire product
and throw it at your test, creating a half hour of run time and 40,000
possible failure points just because it _happens_ to cover your function under
test. It is vital to have a small, focused example.

If your test environment has problems like these, I fully understand the
desire to balance time constraints against the hell of dealing with new or
existing test cases, and wanting to avoid it completely.

And if you’re in charge of such an environment, you owe it to yourself to
devote serious time to fixing the test infrastructure itself.
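
As a concrete illustration of the numerically tolerant "diff" from point 2, a
minimal sketch in Python; the tolerance and token-splitting rules here are
assumptions, not a standard tool:

    import math

    def lines_match(a, b, rel_tol=1e-6):
        """Compare two lines token by token, treating numbers as close-enough."""
        ta, tb = a.split(), b.split()
        if len(ta) != len(tb):
            return False
        for x, y in zip(ta, tb):
            try:
                if not math.isclose(float(x), float(y), rel_tol=rel_tol):
                    return False
            except ValueError:  # not numbers: fall back to exact comparison
                if x != y:
                    return False
        return True

    assert lines_match("result 1.0000001", "result 1.0")
    assert not lines_match("result 1.5", "result 1.0")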

------
mbreedlove
I think the biggest problem with TDD is that there are two types of code,
trivial and non-trivial.

I think testing trivial code is a waste of time and does nothing but improve
coverage numbers.

When you sit down to write tests for a non-trivial problem, you don't always
know what the final code will look like. Maybe you forget an edge case or some
small detail in the requirements that will cause you to restructure the code
and approach the problem in a different way. In that case, you now need to
rewrite your tests. You might as well just write tests around the final
version.

~~~
awinder
What happens when trivial code later needs to grow? What happens when trivial
code is invoked by non-trivial code and you need to make a change?

If it's truly trivial code, you should be able to test it trivially, so I'm a
little unsure of why this becomes a make-or-break issue for some people.
Pretty sure more time and energy is wasted determining if code is trivial and
needs to be tested versus just testing the damned thing ;-).

~~~
marcosdumay
Trivial code does not grow, it gets replaced. If the replacement is non-
trivial, test it. If it's trivial, don't. Either way, you've saved some
useless tests that would need to be rewritten on the first change.

Trivial tests may be trivial, but they are numerous: their need grows
exponentially with code size. And they generate almost all the false positives
you will get.

------
faragon
I'm glad to read that. In my opinion, the problem starts when tests become a
religion, e.g. forcing unit tests everywhere, whether it makes sense or not:
just add tests, so that if anything goes wrong, you can use the excuse of "it
fails, but at least it is covered by tests".

In some cases unit testing is necessary, e.g. for ensuring that a hash
function works exactly as defined. However, there are other cases where unit
testing is absurd, and black-box API tests or automated tests could do a
better job of error coverage. As an example, imagine the Linux kernel filled
with unit testing everywhere: plenty of unit-testing religion fun, but no
guarantee of getting anything better, and a risk of new bugs because of the
changes and increased code complexity.
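
For the hash-function case, a known-answer test is enough; a minimal sketch
using SHA-256's standard "abc" test vector (illustrative, not faragon's code):

    import hashlib

    def test_sha256_known_answer():
        # Standard test vector: SHA-256("abc").
        assert hashlib.sha256(b"abc").hexdigest() == (
            "ba7816bf8f01cfea414140de5dae2223"
            "b00361a396177a9cb410ff61f20015ad"
        )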

~~~
EugeneOZ
You don't understand what tests are for. Tests don't give a shit whether your
code works today; you write them to freeze today's state of the code (it
doesn't matter whether your code is correct or not).

~~~
faragon
I do. My point is that not all code is of the same kind. E.g., for the case
you mention, I do extensive unit testing of synchronous code with inputs and
outputs that does significant stuff, in order to avoid breaking past behavior
with new changes. You can check that I try to honor what I say, here:
[https://github.com/faragon/libsrt/blob/master/examples/stest...](https://github.com/faragon/libsrt/blob/master/examples/stest.c)

However, there are cases where unit testing is not suitable, or it is not a
guarantee, or it is an additional risk, e.g. event-driven or low level stuff,
multithreaded code, etc.

~~~
EugeneOZ
Multithreaded and async code needs tests even more! It's much more difficult
to test, but that doesn't mean such code shouldn't be tested. Where code is
complicated, the chances of creating unintended changes are higher.

~~~
faragon
Sure. I was arguing against "religion" ("put unit tests everywhere, for every
single thing"), not arguing "anti-test". There are many kinds of tests.

------
paulddraper
"I get paid for code that works, not for maintainable code."

Ah, I get it. _That_ explains the piece of s--- I'm looking at right now.

That said, the title might be sensationalist, but I agree with the holistic
sentiment of the text.

~~~
daxfohl
[https://xkcd.com/1513/](https://xkcd.com/1513/)

------
fiatjaf
There are so many unneeded tests being written I can't even begin to point
them out. Here's an example: [http://entulho.fiatjaf.alhur.es/notes/the-unit-test-bubble/](http://entulho.fiatjaf.alhur.es/notes/the-unit-test-bubble/)

I've seen dozens of GitHub repos with a "tests/" directory that only contains
tests for the constructor and ignores all the parts that should be tested. You
don't need to test a constructor; this is stupid. If your constructor is not
working, none of the other tests will pass -- BUT HEY, your constructor is
working, it is not hard to see it.

------
halayli
One benefit of testing is that it can highlight whether your abstractions make
sense. If you need to pull in the world to test a small module then probably
your dependencies are not right and what you thought was a unit turns out to
be more than that.

When I am writing a module/function, I tend to continuously think of how this
can be tested, which helps me design better abstractions.

For example, if you're writing a class that uses socket read/write, you'll
probably need to mock them when testing. If you weren't planning on writing
tests, you'd probably have ended up embedding the methods in the class itself
as read/write/close, when those methods don't belong to the class and should
be in another module called Socket that implements a Socket interface. Now
that you have a Socket interface, it becomes easier to test your class by
passing in a mock Socket.
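
A minimal sketch of that design in Python; the `Socket` interface and
`LineEchoer` class are illustrative stand-ins, not halayli's code:

    from abc import ABC, abstractmethod

    class Socket(ABC):
        @abstractmethod
        def read(self, n: int) -> bytes: ...
        @abstractmethod
        def write(self, data: bytes) -> int: ...

    class LineEchoer:
        """Depends on the Socket interface, not on a real network socket."""
        def __init__(self, sock: Socket):
            self.sock = sock

        def echo_upper(self):
            self.sock.write(self.sock.read(1024).upper())

    class FakeSocket(Socket):
        """Mock passed in by the test instead of a real socket."""
        def __init__(self, incoming: bytes):
            self.incoming, self.sent = incoming, b""
        def read(self, n: int) -> bytes:
            return self.incoming[:n]
        def write(self, data: bytes) -> int:
            self.sent += data
            return len(data)

    def test_echo_upper():
        fake = FakeSocket(b"hello")
        LineEchoer(fake).echo_upper()
        assert fake.sent == b"HELLO"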

------
Cpoll
The title quote is a bit out of context...

> I get paid for code that works, not for tests, so my philosophy is to test
> as little as possible to reach a given level of confidence (I suspect this
> level of confidence is high compared to industry standards, but that could
> just be hubris). If I don’t typically make a kind of mistake (like setting
> the wrong variables in a constructor), I don’t test for it. I do tend to
> make sense of test errors, so I’m extra careful when I have logic with
> complicated conditionals. When coding on a team, I modify my strategy to
> carefully test code that we, collectively, tend to get wrong.

------
DanielBMarkham
Mixing up business and tech.

On the business side, you don't get paid for code at all. You get paid to make
something people want. The fact that you're using programming to do that is
inconsequential.

On the tech side, you're not delivering anything unless somebody, somewhere
can test it, even if only one time.

So yes, you are getting paid for tests. In fact, that's the only thing you are
getting paid for. The nub of the question is what the tests look like and how
many you should have.

~~~
mbrock
It's like a carpenter saying "I don't get paid for load testing table tops, I
get paid for durable furniture."

~~~
DanielBMarkham
Absolutely. There's some subtle wordplay going on here -- frankly it's
probably done on purpose to draw out a lot of public discussion.

------
w8rbt
The point is that there is a limit to testing. And some people go way
overboard with it. You'll never get 100% coverage. It's simply not possible.

Now, that doesn't mean you should not test. It means you should understand the
limits of unit testing and test what is important as best as you can. Most
every software engineering class at universities will cover this in-depth.

~~~
adrianN
Of course you can get 100% coverage if that's your goal. SQLite, for example,
is a big project with 100% coverage:

[https://www.sqlite.org/testing.html](https://www.sqlite.org/testing.html)

If you want rock solid software you need to spend the time to properly test
it.

~~~
cleeus
well, not at all times. [http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2015-3416](http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2015-3416)

~~~
adrianN
Great, you've proved that 100% test coverage doesn't mean 0% bugs. The actual
question is whether there is a significant difference in reliability in
software with 80% test coverage compared to software with 100% coverage.

~~~
cleeus
At which point we first need to talk about the type of coverage... branch
coverage or line coverage (or something else)?

~~~
icebraining
In the case of SQLite, it's 100% branch coverage and 100% MC/DC coverage:
[https://www.sqlite.org/th3.html](https://www.sqlite.org/th3.html)

------
at612
For me, the takeaway from that article is this:

> Different people will have different testing strategies based on this
> philosophy, but that seems reasonable to me given the immature state of
> understanding of how tests can best fit into the inner loop of coding. Ten
> or twenty years from now we’ll likely have a more universal theory of which
> tests to write, which tests not to write, and how to tell the difference. In
> the meantime, experimentation seems in order.

Indeed, we still "don't know" how to test—more generally, and given the
abundance of methodologies and their tendency to go through a hype and dump
cycle, I would say we still "don't know" how to write code in the first place.

We'll get there eventually, but for now I would take whichever approach,
methodology, tools, and language that I use as having a "best before" date,
and invest in it accordingly.

------
qwertyuiop924
I don't write tests for all my code. But if I'm not writing an automated test
(frequently when I'm writing a single-use script, which is something I do a
lot, as I'm merely a hobbyist), I still "test" my code, function by function,
at a REPL.

I've long since learned the hard way that if you don't test the functions as
you write them, the bugs get buried in the system, and become very hard to
find. When you test your code as you write it and modify it (formally on
larger projects, informally on smaller ones) this doesn't happen.

That's the advantage of tests, so I can get Beck's point: if the function is
so painfully simple that you already know it's right just by looking (say, an
accessor), then it's not worth writing a test for it.

------
kefka
Seems pretty simple, honestly. I've written enough unit tests to throw in my
2 bits.

Put in a sane, normal-value test. This will pass unless shit's broken.

Then test edge cases. Test min, max.

Then test some impossible values. If they correctly fail, you pass.
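
A minimal sketch of that recipe with pytest; `clamp` is a hypothetical
stand-in for the code under test:

    import pytest

    def clamp(x, lo, hi):
        if lo > hi:
            raise ValueError("lo must not exceed hi")
        return max(lo, min(x, hi))

    def test_sane_value():
        assert clamp(5, 0, 10) == 5  # passes unless shit's broken

    def test_edge_cases():
        assert clamp(0, 0, 10) == 0    # min
        assert clamp(10, 0, 10) == 10  # max

    def test_impossible_value():
        with pytest.raises(ValueError):  # impossible input should fail loudly
            clamp(5, 10, 0)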

~~~
nrinaudo
Or, you know, write property-based tests instead, so you only need to worry
about the logic and not the test values.

I've always found that if you let the author of a piece of code decide on what
value that code should be tested with, he'll test for the edge cases that he's
thought of (and dealt with), not the ones that actually crash production.

~~~
couchand
On the other hand, if you ask the author to think of the edge cases first,
they're more likely to list them and then write code that handles them. Still
no guarantee, but better than writing tests for code you just "finished".

~~~
nrinaudo
But still, you end up with tests for what the author feels his code should
handle, not necessarily what the real world is actually like.

Don't get me wrong, that's still valuable - if only for the non-regression
aspect - but I feel property-based testing is a superior approach.

Write your code ("this is a function that sorts lists of ints"), write a
property ("when given a list of ints, after sorting it, elements should be
sorted and the list should have the same size"), let the framework generate
test cases for you. Whenever something breaks ("lists of one element come back
empty"), extract the test case and put it in a unit test to make sure the same
issue doesn't crop up again in the future.
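
A sketch of that workflow using the Hypothesis library for Python; `my_sort`
stands in for the function under test:

    from hypothesis import given
    from hypothesis import strategies as st

    def my_sort(xs):
        return sorted(xs)  # stand-in implementation

    @given(st.lists(st.integers()))
    def test_sort_properties(xs):
        out = my_sort(xs)
        assert len(out) == len(xs)                        # same size
        assert all(a <= b for a, b in zip(out, out[1:]))  # in order

    # When the framework finds a failing case, pin it as a plain unit test:
    def test_single_element_regression():
        assert my_sort([1]) == [1]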

------
phkahler
The key to success with that attitude:

>>I get paid for code that works, not for tests, so my philosophy is to test
as little as possible to reach a given level of confidence (I suspect this
level of confidence is high compared to industry standards, but that could
just be hubris).

Is the humility in the parenthetical. There is a difference between arrogance
and confidence.

~~~
Humdeee
Sounds like he'd be a team player and a great all-around guy to work with...

In reality, there's a considerable amount of respect given to people who are
great at what they do, but are also humble and without the bloated ego.

------
vesak
In other words "be smart, don't be stupid". Do you really need to write a test
for that single expression setter?

But then again, it may be easier to just set a single round goal like 100% for
test coverage. Writing that test for the single expression setter won't cost
you a lot.

~~~
repomies691
> Writing that test for the single expression setter won't cost you a lot.

Yeah, usually it won't cost anything to the guy who writes the test, but he
is actually getting paid. What it costs the business/customer/etc., and
whether the value added is worth that cost, is a totally different matter.

I think there is a clear incentive for certain developers to write
unnecessary tests: it is non-risky general work where it is difficult to fuck
up anyway. If you are able to sell test-writing hours, what's the downside?

~~~
vesak
Difficult to even imagine a project that could fail business-wise because some
developer wrote too many easy unit tests. Do you know of such cases?

~~~
bryanrasmussen
It probably won't fail business-wise, but it will cost incrementally more for
each test, business-wise.

~~~
vesak
Significantly?

~~~
repomies69
Depends on the business case. I don't think this answer can be generalized
either way.

You can lose a business because of heavy costs that add little value;
however, it is not easy to argue that one specific cost was the deal-breaker.

------
everyone
If someone posted that question on SO now it would be insta-downvoted and then
removed for being vague.

------
acqq
There are enough people in the industry who are actually paid for writing the
tests and discovering the potential failures of the mission critical code,
where the tests are fundamentally important.

I've had a small team nicely paid for months only to prove and document that
the product my company was to deliver wouldn't fail in some specific
scenarios specified by the contract.

Those who don't produce mission-critical code (or believe what they produce
is not on the critical path) unsurprisingly see the investment in tests as
questionable. Of course, there is always a real danger of doing something
"just because it is done" even if there's no real need.

~~~
keithnz
I think possibly you are not quite understanding what Kent meant. Those people
you mention are not getting paid for tests, they are being paid to give
confidence that some software system works, they use tests to do this, just
like Kent does. His point is about delivering working code with a certain
level of confidence.

Meaning if tests don't increase your confidence, but are simply put in to
tick a box for having a test, you aren't getting paid for that (or if you are
getting paid for that, someone's lost sight of what they're trying to
achieve).

~~~
acqq
In the specific case I mentioned, the tests surely increased confidence, as
they were part of a whole process in which the test results were used to
modify the product in question until it was able to pass all the tests. The
tests allowed us to proactively "solve the bugs" that would otherwise have
produced many more problems if the product had been used in production
without them.

------
z5h
I happen to work in a team. So I get paid to write code that works, that
other developers can make sense of and hack on as well. That's why I write
lots of tests.

If Kent Beck is coding in a private bubble, he can do whatever he wants that
makes code work.

------
lawpoop
This is like saying "I'm paid for code that works, not proper syntax."

But proper syntax is what gets you code that works. You aren't paid directly
for it, though. Tests are not a direct path to code that works, but they can
be a big help.

------
efsavage
The full quote is a bit more nuanced and captures this, but the key to this
mantra is properly defining what "works" means.

Code that "works" doesn't just mean it runs/compiles/passes CI/etc. It has to
_continuously add value_. It can do this by running properly and efficiently
across a wide variety of likely or infrequent conditions, as well as some
exceptional scenarios. It can do this by being written clearly and not adding
technical debt. It can do this by being as simple and/or as replaceable as
possible. And ultimately, it can even add a final gasp of value by being
easily deletable.

------
digi_owl
And yet programmers want to be held in the same regard as engineers.

[https://www.youtube.com/watch?v=0WMWUP5ZHSY](https://www.youtube.com/watch?v=0WMWUP5ZHSY)

------
kafkaesq
_I get paid for code that works, not for tests, so my philosophy is to test as
little as possible to reach a given level of confidence (I suspect this level
of confidence is high compared to industry standards, but that could just be
hubris). If I don’t typically make a kind of mistake (like setting the wrong
variables in a constructor), I don’t test for it._

Which is unfortunately the complete opposite of how TDD was interpreted,
especially in its glory days (and in some corners, up until the present day).

------
sporkenfang
This is why I advocate end-to-end testing of a whole system's expected
behavior to augment a small set of unit tests (the unit tests are for edge
cases).

Nobody needs division by zero tests if there are already guards in place so
that can't happen, but it's quite helpful to have a "A goes in, B should come
out" view from a client/user perspective. As long as behavior appears correct
to the client and is not exploitable you're good to go.
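
A minimal "A goes in, B should come out" sketch from the client's
perspective, using Flask's test client (Flask came up elsewhere in the
thread); the `/double` endpoint is hypothetical:

    from flask import Flask, jsonify, request

    app = Flask(__name__)

    @app.route("/double")
    def double():
        n = int(request.args.get("n", "0"))
        return jsonify(result=n * 2)

    def test_double_from_client_perspective():
        resp = app.test_client().get("/double?n=21")
        assert resp.status_code == 200
        assert resp.get_json() == {"result": 42}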

------
aryehof
I contend that tests need to cover functional and non-functional
_requirements_. Everything else is to some degree optional.

Of course given no formal requirements, all that is possible are tests of the
technical implementation. Regression errors will still be inevitable for
customers and stakeholders, despite the "programmer" being able to claim his
or her tests passed.

We need to find a way to stop kidding ourselves and find a way to test the
_right_ thing.

------
davewritescode
I'm not a big fan of this mentality. Writing enough tests to be bug-free just
isn't enough. Sure, it's bug-free today and that's great, but will it be
bug-free tomorrow after a junior dev modifies it?

I'm not advocating testing getters/setters but not testing because "I don't
write those kinds of bugs" can burn the next junior dev who might.

Testing is as much about finding bugs early as it is about making you more
agile in the future.

~~~
emodendroket
I submit that testing a bunch of trivial things isn't actually going to make
it easier to catch bugs. I recommend this article: [http://rbcs-us.com/documents/Why-Most-Unit-Testing-is-Waste.pdf](http://rbcs-us.com/documents/Why-Most-Unit-Testing-is-Waste.pdf)

------
alexjray
You get paid to make the best technical decision for the
company/project/whoever it is that's paying you. You get paid to communicate
and understand when tests are needed and when they are not. Startups will
probably have a lot fewer tests than bigger companies, unless you're a
security startup or something that needs a solid foundation you can trust.

It's all a trade-off that needs to be communicated to whoever is paying you.

------
rubicon33
"Different people will have different testing strategies based on this
philosophy, but that seems reasonable to me given the immature state of
understanding of how tests can best fit into the inner loop of coding. "

The problem with comments like this is that they're too ambiguous. Someone
who doesn't want to spend the time to write unit tests will use this
ambiguity as a mechanism for justifying their laziness.

------
____nope
You write tests to automate checks you would otherwise have to perform
manually. That is the only reason tests exist: to automate the boring task of
testing.

That's one problem with the TDD mindset. If you start by looking for things to
test, you might come up with unlikely scenarios or cases that don't matter
much for your user.

~~~
sidlls
Or even worse you start writing code for "testability" and it becomes a
bloated mess of one- or few-liner functions that are only called in one
location by some other function.

------
bvinc
"You get paid to write code, not tests" -My boss after telling me I should
quit writing tests

I agree with Kent Beck mostly. I would add that tests can also be used to
maintain invariants for future changes to increase maintainability. I just
hope this quote isn't taken out of context.

------
beders
It is a stupid, broad statement without proper context.

It really depends on what your project is, what your goals for maintainability
are and what programming language you use.

Two things about testing:

\- test to confirm your spec

\- if you have trouble writing tests, your design is probably flawed

------
rubicon33
I first write good code. I then write unit tests to protect my code from:

a) Bad teammates. b) Future developers.

I've been burned one too many times with junior devs making cavalier changes
in code they don't understand. Unit tests were THE solution for catching these
changes.

------
jasode
Just an fyi about some nuances of TDD that are overlooked based on the 60+
comments I see so far.

Most comments seem to equate:

    
    
      "regression tests"=="TDD"
    

... but it's really...

    
    
      "regression tests" is subset of "TDD"
    

I'm not a practitioner of TDD, but my understanding of its components is:

1) the _ergonomics & design_ of the API you're building by way of writing the
tests first. In this sense, the buzzword acronym could have been EDD
(Ergonomics Driven Development). Writing the _usage_ of the API first to see
how the interface _feels_ to subsequent programmers. Arguably, a lot of
incoherent/inconsistent APIs out there could have benefitted from a little TDD
(e.g. func1(src, dst) doesn't match func2(dst, src))

2) a sort of _specification of behavior by usage examples_... again by
writing the tests first. Consider the case of programmers trying to figure out
how an unfamiliar function actually works. Let's say a newbie Javascript
programmer wants to know how to use .IndexOf()[1] What do many programmers do?
They skip all the intro paragraphs and just hit PageDown repeatedly until they
get to the section subtitled "EXAMPLES". With TDD, instead of examples being
relegated to code comments (_"sqrt(64) // should print 8"_), it formally
encodes the "should print 8" into real syntax that's understood by the
automated test tools. (Test unit frameworks typically use the keyword
_"Expect()"_ as the syntax; see the one-liner after the footnote.)

3) an IDE that's "TDD aware" because it creates a quick visual feedback loop
(the code that's "red" turns to "green") during initial editing. The TDD
"artifacts" can also act as a "dashboard" for subsequent automated builds
alerting you that something broke.

So TDD is a "workflow" and from that, you address 3 areas: (1) design (2)
documentation (3) quality assurance via regression tests. With that
background, the original Stackoverflow question makes more sense: how many
"test cases" do I write because it looks like I can get bogged down in the
test case phase?!?

[1][https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/String/indexOf](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/String/indexOf)
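
And point 2 fits in a couple of lines of pytest: the "should print 8" comment
becomes an executable expectation (illustrative only):

    import math

    def test_sqrt_example():
        # "sqrt(64) // should print 8" encoded as a checkable assertion.
        assert math.sqrt(64) == 8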

------
mmanfrin
Oh: my manager is the person who asked the StackOverflow question that
prompted this.

------
Nomentatus
Funny thing is, the most frequent positive result of tests for me hasn't even
been mentioned, I don't believe. The biggest benefit I got was an ongoing
education about how the program I was writing ACTUALLY functioned - enabling
me to correct my assumptions before the shit hit the fan. This isn't quite the
same as catching errors, since often you still want the algorithm you wrote as
you wrote it, but knowing more about what's really going on gives you a heads
up to avoid future problems, conflicts, etc. Of course, you may program
differently. I was always big on asserts back in the day, and two-thirds of my
debugging (by instance, not hours) was spent fixing asserts, and thereby
learning that some assumption I was making about the program was wrong at
least some of the time. Always good to know.

------
inputcoffee
Tests might help you write better code. To the extent they do, you should use
them.

That's like saying "I get paid for functioning software, not writing code."

Yes, you get paid for the output of the act, not the act itself.

------
andrewbinstock
Beck elaborates on this point of view in the current issue of Java Magazine:
[http://bit.ly/2g6YEo2](http://bit.ly/2g6YEo2) (loads slowly)

------
Zelmor
>Indeed, since this answer, 5years ago, some big improvements have been made,
but it’s still a great view from a inspiring person

Such as?

------
dnprock
Writing tests is like investing. You have to pick the tests that return the
most reward. Simply firing shots is wasteful.

------
ninjakeyboard
I've referenced this stack post a few times too. I feel like it might be easy
to take this out of context.

------
z3t4
You _will know_ when and why to write tests... the same bug keeps coming up,
you spend most of your time manually testing, you are not sure whether this
change will break anything, or you are too scared to touch the code.
------
emodendroket
Interesting comment although I don't feel like the article adds much to it.

------
amelius
That's why you need to pay another guy to find code that breaks :)

~~~
crpatino
And then, because the tester gets paid to write tests, not to understand the
product, you will operate with huge blind spots.

[Edit] I erased a cheap shot I took at Kent Beck. I still think the title of
the article is stupid, but what the guy actually said is much more nuanced
than that.

------
benyarb
Why not get paid for both?

------
xiphias
The start of the comment is really out of context here. What he wrote about
the team case (what most of us are paid for) is this:

When coding on a team, I modify my strategy to carefully test code that we,
collectively, tend to get wrong.

------
sauronlord
It is a question-begging comment, and it's not clear what is being said
beyond "create some automated tests until you feel good".

Allow me to explain:

Production use of code IS testing (manual, etc.), because it is an
observation of the system state.

All of the world is testing. Every system is inherently a quantum mechanical
one (ie: the observer is constantly testing the state of various systems to
ascertain some level of confidence)

If you are going to test anything... then you should test the Use Cases (ie:
Interactor objects). Don't have Use Case/Interactor objects that encapsulate
intent? Well, you'd better understand it, since the world, and therefore
software, is all about intent.

