
Testing is a separate skill and that’s why it can be frustrating - kiyanwang
https://medium.com/planet-arkency/testing-is-a-separate-skill-and-thats-why-you-are-frustrated-7239b1500a6c#.s23ovtlo5
======
koonsolo
The best testers that I know really enjoy breaking stuff. They have this
special mission to break anything they get their hands on. They take great
pride in finding the stuff that the developer probably didn't think of. They
smile when it behaves weird, they laugh when it crashes. Their ego rises when
they tell the developer that they were smarter.

And therefore, the one who built it is not the person who makes it his life's
mission to break it. Breaking it means more work, more searching for bugs, and
then testing again. And nobody likes doing that. Except for awesome testers.
They love looking at the face of a desperate developer getting frustrated by a
nasty bug.

~~~
a3n
> Their ego rises when they tell the developer that they were smarter.

I've been in QA for a few years, after an initial career in development.

I do find great pride and joy when I break things, and I do laugh sometimes
when I see silly behavior. Testing can be thankless, and even anti-
appreciated, so the validation of finding something is welcome.

But I never "tell the developer that I'm smarter," and I've never thought that
way. They are merely two different roles, with different focuses and different
allocations of time. The tester role exists in part because we are not
perfect.

The relationship between the two roles would be much better without "I'm
better than you" or "you can't do what I do, so you test" attitudes. Both
sides, and the organization, profit when we work together with respect and
appreciation.

~~~
RandomOpinion
> _The relationship between the two roles would be much better without "I'm
> better than you" or "you can't do what I do, so you test" attitudes. Both
> sides, and the organization, profit when we work together with respect and
> appreciation._

The relative pay scale for a developer and tester with equivalent education
and coding experience already speaks volumes about how deep that respect goes.
Also, note who gets laid off first when the budget gets tight.

~~~
a3n
In a way that makes sense, when things are bad. You can sell bugs, but you
can't test nothing ("first write a test that fails" notwithstanding).

------
svantana
No doubt testing is a skill, and a very useful one, though I doubt this is the
main frustration. Most people (at least myself) got into coding because they
enjoy creating something from nothing. Maintaining and testing code is boring
because it doesn't really do anything new, it's mostly fixing edge cases.
Personally I try to make testing more creative by making ambitious testing
tools (fuzzing, self-testing base classes, etc), which makes it more fun and
therefore more likely to get done.
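
As a minimal sketch of that kind of tooling, here is what a hand-rolled fuzzer
can look like: random inputs thrown at a function while a few invariants are
checked on every call (`slugify` is a made-up function standing in for real
code under test):

```python
import random
import string

def slugify(text):
    """Made-up function under test: lowercase, whitespace to dashes."""
    return "-".join(text.lower().split())

def fuzz(fn, trials=1000):
    """Throw random strings at fn; check that simple invariants always hold."""
    for _ in range(trials):
        n = random.randint(0, 30)
        s = "".join(random.choice(string.printable) for _ in range(n))
        out = fn(s)
        assert isinstance(out, str)   # never crashes, always returns a string
        assert " " not in out         # invariant: no whitespace survives
        assert out == out.lower()     # invariant: fully lowercased

fuzz(slugify)
```

The point is that the harness, not the person, does the tedious part: the
creative work is picking invariants that should never break.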

~~~
logfromblammo
A programmer walks into a bar, checks his bank balance, checks his blood-
alcohol level, checks his stomach capacity, checks the menu price, checks for
bartender availability, checks for open seating, and orders a beer.

A tester walks into a bar and orders a beer. Then orders 0 beers. Then orders
-1 beers. Then orders pi beers. Then orders 2.2 billion beers. Then orders 1
cerveza. Then orders an aircraft carrier. Then changes the programmer's beer
order to 2 beers. Then cancels an order for a beer. Then flips the calendar
back to 1980 and orders a beer. Then flips the calendar to 1925 and orders a
beer. Then sets the clock to 23:59:45 and orders a beer....

Those bar jokes represent not only different skills, but different
_stereotypical personalities_. The builder-programmer has to be cautious,
methodical, and well structured, whereas the tester has to channel the spirit
of the chaos monkey to find something that no one else ever expected.

------
michaelfeathers
I don't think this is a good way of framing things. In fact it can be
counter-productive.

Automated testing is the process of writing code to understand your code. We
already have to understand our code. Testing is being deliberate about that
understanding and writing it down. The same design/testing thoughts that lead
us to edge cases can lead us to their elimination without testing at all. It's
an integrated process, not something separate.

~~~
sharemywin
QA is a separate set of eyes. No one writes a news article without an editor.
QA brings a different interpretation of specifications. Developers writing
their own unit tests is a single point of failure: the developer's
understanding of the specification.

~~~
jschwartzi
The problem with having a separate test group is that developers who don't
test don't write anything to be testable. They don't add features that allow
fault injection, and they don't modularize code to be independently tested. As
a result, testing any of their code is highly tedious.

Developers should also write their own tests. QA should have tests too. It
shouldn't be a matter of throwing software over the fence to a separate group.

~~~
alkonaut
Hopefully no company has a QA department or QA person ("Tester") that _writes_
unit tests for code already written by a developer. Unit tests can't be
separated from the code, and have to be written in conjunction with the code.

That said, it's often possible to write higher level tests in advance as a
specification, or afterwards as a regression/integration suite. Testing code
can be used for many things such as UI regression testing, performance testing
etc. That kind of test is black box and not tied to a specific piece of code.

So I think it's hard to debate "who tests" or "who writes tests" without
discussing exactly what kind of test.

------
TheAceOfHearts
As it relates to the topic of testing, I'd invite readers to check out "An
introduction to property based testing" [0]. It's available in both video
format with slides, and in the form of two blog posts. I found it insightful
when I first stumbled on it.

I'm not frustrated with testing in the slightest. I consider it a fundamental
requirement for any serious production application.

I get the impression that people tend to be far too dogmatic about testing
methodologies. Write lots of unit tests, as long as they add value or help
improve stability. Not everything needs unit tests. It depends on what the
module does, and how it relates to the application.

[0]
[https://fsharpforfunandprofit.com/pbt/](https://fsharpforfunandprofit.com/pbt/)
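
The core idea in that series can be sketched without a framework: instead of
hand-picking examples, state a property that must hold for every input and
check it against many generated ones (FsCheck, QuickCheck, and Python's
Hypothesis automate the generation and shrink failing cases; the round-trip
example below is my own):

```python
import random

def run_length_encode(s):
    """Function under test: 'aaab' -> [('a', 3), ('b', 1)]."""
    out = []
    for ch in s:
        if out and out[-1][0] == ch:
            out[-1] = (ch, out[-1][1] + 1)
        else:
            out.append((ch, 1))
    return out

def run_length_decode(pairs):
    return "".join(ch * n for ch, n in pairs)

# The property: decoding an encoding returns the original, for *any* input.
for _ in range(500):
    s = "".join(random.choice("ab") for _ in range(random.randint(0, 20)))
    assert run_length_decode(run_length_encode(s)) == s
```

One property like this replaces dozens of hand-written example cases, and a
failing input points straight at the edge case you didn't think of.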

~~~
gowan
This technique is also called pairwise testing [1]. There are some interesting
optimization problems around this approach. It can significantly reduce the
number of tests while providing good coverage.

[1] [https://en.wikipedia.org/wiki/All-pairs_testing](https://en.wikipedia.org/wiki/All-pairs_testing)
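
A sketch of the optimization the comment alludes to: a greedy covering-array
construction that keeps every 2-way combination while using far fewer cases
than the full cross product (the parameter names and values are invented for
illustration):

```python
from itertools import combinations, product

# Hypothetical configuration space.
params = {
    "os": ["linux", "mac", "win"],
    "browser": ["ff", "chrome"],
    "locale": ["en", "de", "ja"],
}
keys = sorted(params)

# Every 2-way value combination that must appear in some test case.
needed = set()
for k1, k2 in combinations(keys, 2):
    for v1, v2 in product(params[k1], params[k2]):
        needed.add(((k1, v1), (k2, v2)))

def pairs_of(case):
    return set(combinations(sorted(case.items()), 2))

# Greedy: repeatedly pick the exhaustive case covering the most uncovered pairs.
all_cases = [dict(zip(keys, vs)) for vs in product(*(params[k] for k in keys))]
suite = []
while needed:
    best = max(all_cases, key=lambda c: len(pairs_of(c) & needed))
    needed -= pairs_of(best)
    suite.append(best)

print(len(all_cases), "exhaustive cases vs", len(suite), "pairwise cases")
```

Real tools (PICT, AllPairs) use smarter constructions, but even this greedy
pass covers all 21 required pairs with roughly half the 18 exhaustive cases.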

------
BurningFrog
The same is true of Pair Programming. It's a lot of fun and very productive
_when you know what you're doing_.

Ideally, you need several weeks of pairing with experienced pairers to have
learned the skill. Even then, you usually need 1-3 days pairing with a new
person to get to know each other's styles.

Having two programmers who have never done pairing try it is a bit like having
people who have never danced try to tango based on YouTube videos. I suspect
many people who hate pairing encountered it that way.

------
PaulHoule
One problem with testing is that you're not just testing your code, but you
are testing your assumptions. If you make a mistake in your code, you might
make the same mistake while testing.

------
ntrepid8
I'm not so sure they are separate skills. It's possible to write functions
that are easy to test and functions that are hard to test.

I find that the two tasks of writing functions and tests for those functions
are closely intertwined. I like for developers to write their own unit tests,
and then for Q/A to develop the functional/integration tests from the
perspective of the client (machine or human).

It's very difficult to come in after the fact and write unit tests for someone
else's code, especially if they weren't thinking about writing testable code.

------
jlg23
Yes, testing is a skill, no, developers (at least none I know) are not
frustrated because they lack the skill.

Testing is (usually) debugging without understanding the code behind the
application one is testing. Software developers really, really understand that
working on a black box is much less productive for someone knowledgeable than
code reviews.

What frustrates me about "testing" is the TDD-idiocy of writing a million unit
tests - tests that verify only that a specific code unit works. Real "testing"
is executing functional tests: User clicks "Signup", we have a new record in
the users table, the subscription total changes, and the
admin/moderator/community manager can immediately see the change in the admin
UI. A good test catches as many violations of the expectations as possible
while keeping the effort and code required to perform it minimal.
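
The signup flow above can be sketched as a single functional test; everything
here (the `App` class and its fields) is a hypothetical in-memory stand-in for
the real application and database:

```python
# Minimal in-memory stand-in for the real application.
class App:
    def __init__(self):
        self.users = []              # the "users table"
        self.subscription_total = 0  # what the admin UI would display

    def signup(self, email):         # what clicking "Signup" triggers
        self.users.append({"email": email})
        self.subscription_total += 1

def test_signup_flow():
    app = App()
    before = app.subscription_total
    app.signup("alice@example.com")
    # One functional test asserts every observable consequence at once:
    assert any(u["email"] == "alice@example.com" for u in app.users)
    assert app.subscription_total == before + 1

test_signup_flow()
```

One test, several expectations checked: that is the effort-to-output ratio the
paragraph above is arguing for.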

But what am I talking about... the whole post is just a prelude to the post
scriptum: "Watch our video!!!!!"

------
klodolph
That, and correct code is often not very testable, at first.

My process is write code -> repeat until it works -> refactor code until the
tests make sense -> break up code into individual tested commits -> submit for
review.

After enough years of this you end up with a grab bag of techniques both for
writing testable code and for writing tests.

------
msimpson
For me, the frustration regarding testing comes not from it necessarily being
a separate skill set but that writing tests can be daunting considering the
tools at hand. Consider, for instance, that only recently has JSDOM even
gained a way to log issues inside its own runtime environment, making all
usage of it prior to this fix amazingly difficult to debug:

[https://github.com/tmpvar/jsdom/pull/1108](https://github.com/tmpvar/jsdom/pull/1108)

Now this is only one in a myriad of nuanced struggles which I can potentially
face as a developer when deciding what and how to test. When these tools
facilitate this need without creating such an undue burden then I, and I'd bet
many others, will naturally gravitate toward automated testing.

------
rb808
Is this about testing or unit testing? Personally, one of the problems is that
unit testing has such a hype train that it leads people to use it even when it
isn't necessary. And unit testing brings all the baggage: the DI, the IoC
containers, factories, mocks, and all the stale test cases that were written a
few years ago that no one ever looks at but that take 10 minutes to run.

My current project has objects that are only ever used once in one place but
still have abstract interfaces, lots of injected parts, and test cases. It's
only a simple class; it doesn't need all this extra complexity. For me it's
frustrating that lots of people think this is "good design". Sure, you need to
be able to test your application, but unit testing everything is rarely the
right way.

~~~
draw_down
If you think that's bad, try inheriting a code base that has too few tests.

~~~
morbidhawk
I wish I could get to the bottom of this. I've seen all kinds of arguments for
and against whether or not to unit test and how much / when unit testing
should happen. I haven't been able to figure it out yet.

\- I've heard that in one experiment code reviews found more bugs than unit
testing did (I heard this study was mentioned in Code Complete, but I haven't
checked the source). If reading code catches more bugs than writing tests
does, wouldn't this be an argument for spending more time reading our code
rather than writing code that tries to exercise it?

\- Unit testing has the added benefit that when you change or refactor
existing code, the behavioral unit tests guarantee those same behaviors hold
after the refactoring. My first point doesn't give the same benefit, so this
may be worth a lot more than finding bugs by reviewing new code, since it
isn't reasonable to read all the existing code over again.

\- I've also heard that code coverage is a terrible metric and can encourage
bad unit-test practices: trying to cover all lines of code instead of testing
for specific behaviors.

\- Some unit tests may become irrelevant as the design changes, and they also
have maintenance costs.

My team does unit testing while striving for 100% test coverage of our
back-end web service code, so we put a lot of time into it (the quality of the
unit tests might be questionable sometimes, as we aren't testing for all
behaviors, only ensuring all code is covered). I wonder: if we instead spent
that time reading over all the code areas affected by the new code we and our
teammates are writing, to ensure we understand how it works, would that be
more beneficial than unit testing? I'm not convinced either way myself.

~~~
marvin
I'm convinced, after working on a (micro)service-heavy 2+ million line code
base for a couple of years now.

Our thousands of unit tests are invaluable, primarily for providing a strict
definition of what the code should do, but also for catching bugs when
refactoring. We don't often introduce bugs that break tests when refactoring
simple parts of the system. But for the complex parts of the system, the tests
both maintain a ground truth for how the code should behave, and make it
trivial to ensure correct behavior after refactoring.

If we hadn't had the tests, we would be almost guaranteed to change the
intended behavior of the code when modifying complex parts of the system.
_Perhaps_ no big deal if you're writing an Instagram clone and crashes will
show up in your logs and prompt you to investigate, but critical when your
code processes millions in financial transactions and subtle logging errors
could cause a nightmare.

As always, it depends on your problem domain and priorities. There's an
expensive overhead to having good test coverage. Many failed tests will be
false positives.

------
smnplk
The article is about TDD. You can see on the page that they are offering a
TDD course using Ruby/Rails. TDD in the Ruby community is overhyped. The
infection has spread to management too: if you apply to a Ruby job today, you
had better have TDD or BDD on your CV. If you don't, you are garbage to them.
I have experience with people infected with TDD in Ruby land; they often look
down on people who don't write tests for every single line of code they
write. I am all for unit testing, but can we at least acknowledge that
example-based testing is no panacea for bad software.

\- No, easily testable code does not always imply good design.

\- People that don't do TDD are actually doing TDD too; they test code in the
REPL/console/browser/whatever, they just don't save those "tests".

\- You cannot write perfect example-based tests that cover all possible inputs
and states of your system. You need generative testing solutions for this. Let
the program write tests for you. [0] [1]

See what Leslie Lamport [2] has to say about TDD -->
[https://www.youtube.com/watch?v=-4Yp3j_jk8Q&t=3158s](https://www.youtube.com/watch?v=-4Yp3j_jk8Q&t=3158s)
He nearly flips out on the guy :)) Surely Leslie knows what he is talking
about.

I am ok with unit testing as a communication/documentation tool, so developers
know how to use some piece of library. And to prevent regressions with new
releases, but I would argue that you can never completely prevent those
anyway, requirements change all the time. Maintenance of a large test suite
can quickly get out of hand.

[0]
[https://wiki.haskell.org/Introduction_to_QuickCheck1](https://wiki.haskell.org/Introduction_to_QuickCheck1)

[1] [https://clojure.org/about/spec](https://clojure.org/about/spec)

[2]
[https://en.wikipedia.org/wiki/Leslie_Lamport](https://en.wikipedia.org/wiki/Leslie_Lamport)

------
Glyptodon
The thing that frustrates me is running into situations where creating the
infrastructure and code to set up and run a test is orders of magnitude more
complicated than actually building the application. This is typically not at
the unit test level (which mostly isn't that bad), but rather at the "wow, I
basically need to implement a local version of this 3rd party API that doesn't
have a test environment" sort of level. And that's not even getting into the
times I've had to work with "APIs" that are so bad they aren't even testable
(wishing wells).

------
NumberSix
Someone who is genuinely good at writing code should be able to produce code
that has no or very few bugs. Similarly, they should be pretty good at
inferring what is wrong in the rare case of bugs by simply running the
program. Therefore, they would find writing unit tests and similar rituals
unnecessary and time consuming.

They would most probably write specialized tests where appropriate to catch
subtle bugs that elude even their superior skills. This would be a matter of
personal judgment based on the problem at hand.

~~~
RandomOpinion
Donald Knuth did say once that he's not much of a fan of unit tests. See his
answer to the second question of his interview at:
[http://www.informit.com/articles/article.aspx?p=1193856](http://www.informit.com/articles/article.aspx?p=1193856)

Then again, there are probably fewer than a dozen people at his level of
ability on the entire planet, so unit tests will be part of our lives for the
foreseeable future.

~~~
kwhitefoot
Surely no one is a fan of unit tests. I'm not even though I am responsible for
a few hundred in the program that I work on. They are valuable for several
reasons, including: writing them often exposes an unconscionable degree of
coupling between the various components of the code and they often trap
incautious changes when someone fixes a supposedly unrelated bug. I would need
fewer of them if I could use a functional language with a good type system,
like F#, instead of VB and C#.

------
tombh
I'm surprised there's so little comment here about testing as a facilitator.
As in something that refines your coding ability at the coalface, rather than
merely something reassuring you after the fact. For me, testing is like the
entire team and paraphernalia of surgery, setting me up to undertake the
specifics of a critical act, unencumbered by the weight of "I'm saving
someone's life" or "I'm solving the problem of searching the entire internet
in a few milliseconds".

Testing lets me create far more _complicated_ things, not just merely
_sturdier_ things.

------
hermitdev
A good tester is invaluable to the developer who is willing to embrace them.
Not all developers have a good relationship with their testers, but they
should do their best to communicate as well as possible. Some developers view
testers as enemies, but they shouldn't; testers are actually your best
friends. Toss your ego aside and listen to your tester. Developers/engineers
test what they were asked to do, i.e. the sunny-day scenarios. These never
cover the "what if the user did this..." scenarios, which are also important.

------
jlebrech
I don't understand how as a developer I have to write code to test my code,
surely there should be a non-coding solution, especially end-to-end.

and if you leave end-to-end to a tester they'll have to learn to code.

------
mannykannot
Good testers have an ability to anticipate where bugs are likely to be found.
It is not intuition, it comes from understanding of the purpose of the system
and many aspects of its implementation, such as its architecture, as well as
general systems knowledge.

This ability to foresee problems before they are manifest is also an important
skill for developers. Those who do not have so much of it will spend more time
finding and fixing problems they did not anticipate.

------
draw_down
I mostly agree with the truism that good code is highly testable, so in a way
it's not separate. If you have lots of trouble testing your code you maybe
just don't design things very well yet. Tests have a way of immediately
highlighting structural issues with what you're building.

------
edibleEnergy
I think the skill difference comes down to learning to write code vs. learning
to run code. Testing, manual or automated, involves executing your software,
whereas while coding you've often only run the code enough times to make sure
it does what you expect of it most of the time.

------
psyc
Separate skill from what? Programming is already an agglomeration of many
different skills.

------
davidgrenier
I always felt that building (on Linux et al.) was a separate skill and that's
why it can be frustrating.

------
matt_wulfeck
The engineering compression has removed the opportunity to specialize.

------
nickpsecurity
This is a long article that essentially repeats the claim without evidence or
solutions. The claim is false for basic testing but true for sophisticated
testing. Let me illustrate by comparing the coding and testing step:

Coding: There’s a spec in their head of what the code is supposed to do. This
usually takes input, may produce side effects, and may produce output. They
write a series of steps in some functions to do that. They then run it on some
input to see what it does. They’re already testing.

Testing. Typing both correct and incorrect inputs into the same function to
see if it behaves according to the mental spec. This can be as simple as
tweaking the variables in a range or just feeding random data in (i.e.
fuzzing). Takes less thought and effort than coding above.

So, the claim starts out false. The mechanics of testing are already built
into the runtime part of the coding phase. The others are slight tweaks. The
basic testing that will knock out tons of problems in FOSS or proprietary apps
takes no special skill beyond willingness to do the testing.
Now let’s look at a simple example of where testing might take extra knowledge
or effort.

[https://casd.wordpress.ncsu.edu/files/2016/10/kuhn-casd-161026-final.pdf](https://casd.wordpress.ncsu.edu/files/2016/10/kuhn-casd-161026-final.pdf)

So, the government did some research. They re-discovered a rule that Hamilton
wrote about for the Apollo program: most failures are interface errors between
two or more interacting components, where the interfaces are used in ways they
weren’t intended. The new discovery, combinatorial testing, was that you can
treat these combinations of interfaces as sets to test directly in various
random or exhaustive combinations. Just testing all 2-way interactions can
knock out 96% of faults per the empirical data. Virtually all faults drop off
before the 6-way point.
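
A rough sketch of what that looks like in practice: given an existing suite
over a few interacting parameters (all names here are invented), you can
enumerate every 2-way interaction and report which ones the suite never
exercises:

```python
from itertools import combinations

# Hypothetical existing test cases over three interacting components.
suite = [
    {"payment": "card", "currency": "usd", "region": "eu"},
    {"payment": "card", "currency": "eur", "region": "us"},
    {"payment": "wire", "currency": "usd", "region": "us"},
]
domains = {
    "payment": ["card", "wire"],
    "currency": ["usd", "eur"],
    "region": ["eu", "us"],
}

# Every 2-way interaction that could occur between two parameters.
required = set()
for k1, k2 in combinations(sorted(domains), 2):
    for v1 in domains[k1]:
        for v2 in domains[k2]:
            required.add(((k1, v1), (k2, v2)))

# Which of those interactions does the existing suite actually exercise?
covered = set()
for case in suite:
    covered.update(combinations(sorted(case.items()), 2))

missing = required - covered
print(f"2-way coverage: {len(required) - len(missing)}/{len(required)}")
for pair in sorted(missing):
    print("never tested together:", pair)
```

The uncovered pairs are exactly the "interface errors" the research is talking
about: combinations no single-parameter test will ever hit.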

Why is this sophisticated enough to deserve a claim like in OP’s post? First,
people don’t learn the concepts or mechanics during the course of programming.
You have to run into someone that’s heard of it, be convinced of its value,
think in terms of combinations, and so on. Once you know the method, you might
have to build some testing infrastructure to identify the interfaces & test
them. There’s also probably esoteric knowledge about what heuristics to use to
save time when combinations go past 3-way toward combinatorial explosion. So,
combinatorial testing is certainly a separate skill whose application could
frustrate the hell out of developers. Until they learn it and it easily knocks
out boatloads of bugs. :)

Regular testing of making inputs act outside of their range? Nope. Vanilla
stuff same as the coding you’re doing. Easier than a lot of the coding
actually since the concepts are so simple. Basic arithmetic and conditionals
on functions you already wrote. What stops basic testing from happening is
just apathy. Incidentally, that also stops them from learning the
sophisticated stuff for quite a while.

