
Ask HN: Resources for learning manual software testing? - tasdev
My partner is going to be testing software I've written. He handles the business side of things and isn't a programmer.

Can anyone suggest some resources for him to read on how best to test our software?
======
dankohn1
I agree that it's largely a mindset. From
[https://twitter.com/sempf/status/514473420277694465](https://twitter.com/sempf/status/514473420277694465)
:

"QA Engineer walks into a bar. Orders a beer. Orders 0 beers. Orders 999999999
beers. Orders a lizard. Orders -1 beers. Orders a sfdeljknesv."
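
The joke doubles as a test matrix: a typical value, zero, a huge value, a wrong type, a negative, and garbage. A minimal sketch in Python, assuming a hypothetical `order_beers` function of my own invention (not from any real system):

```python
# Boundary/invalid-value checks in the spirit of the joke.
# `order_beers` is a hypothetical function that accepts a quantity
# and returns the number of beers served, rejecting bad input.

def order_beers(quantity):
    if not isinstance(quantity, int):
        raise TypeError("quantity must be an integer")
    if quantity < 0:
        raise ValueError("cannot order a negative number of beers")
    if quantity > 100:
        raise ValueError("that's too many beers for one bar")
    return quantity

# Typical, zero, huge, wrong-type, negative, garbage -- the joke's cases.
assert order_beers(1) == 1
assert order_beers(0) == 0
for bad in (999999999, "a lizard", -1, "sfdeljknesv"):
    try:
        order_beers(bad)
        raise AssertionError(f"{bad!r} should have been rejected")
    except (TypeError, ValueError):
        pass
```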

~~~
curiousGambler
Don't forget, he came in on February 29th.

~~~
mathgeek
Two years in a row.

~~~
marpstar
In two different time zones.

~~~
avinoth
Via front and back entrance at the same time

~~~
trumbitta2
Walking like a Fremen in the desert.

------
el_benhameen
While other commenters are correct that manual QA is a mindset, there are
readings that can help develop that mindset.

I have new QA engineers read the first five or six chapters of "Testing
Computer Software":

[http://www.amazon.com/Testing-Computer-Software-2nd-Edition/...](http://www.amazon.com/Testing-Computer-Software-2nd-Edition/dp/0471358460)

to get a feel for the mindset and methodologies and to help them understand
what testing can and can't accomplish.

"Lessons Learned in Software Testing", mentioned by another commenter, is
another good resource. Lots of good anecdotes:

[http://www.amazon.com/Lessons-Learned-Software-Testing-Conte...](http://www.amazon.com/Lessons-Learned-Software-Testing-Context-Driven/dp/0471081124)

Both are a bit dated in some ways ("Testing" has a section on filing paper bug
reports), but the lessons and thinking are still highly relevant.

------
stingraycharles
Hmmm. I think you're looking at this the wrong way: it is not he who should be
learning more about manual testing, it is you who needs to learn about how to
write manual tests.

Manual testing is not at all that different from, say, integration testing:
you write a specification of a task that needs to be performed, you write down
the expected output, and you compare it with the actual output.

What you end up with is a document containing dozens of pages full of small
tables with test specifications, somewhat like [1].

So, to sum it up, it is you who should be doing the hard work of finding out
what to test. You make a document full of tests which are as specific as
possible, and let your partner walk through it. He doesn't understand what to
do? Then you failed at being specific. He cannot find the functionality you
ask for? Either a usability issue, or once again, not specific enough.

Hope this helps you somewhat!

[1] [http://www.polarion.com/products/screenshots2011/test-specif...](http://www.polarion.com/products/screenshots2011/test-specification.png)

~~~
crdoconnor
What you are describing is _exactly_ the type of test that should be
automated.

Manual testing should be exploratory, it shouldn't be following a script.
Computers are there to follow scripts.

~~~
a3n
I do software QA on a physical device, that has a computer in it. We set up
scenarios that exercise the software in specific ways. It is very much manual,
following written tests driven by software requirements. This is specifically
software testing, although we use the hardware to exercise the software.

Even exploratory has written tests that basically say "explore," and they are
often assigned with a particular focus.

~~~
crdoconnor
For something like what you do, I find that there's often a cost/benefit trade-off
to be made:

#1 Create a mock system that you can run automated tests against.

#2 Only do the manual tests.

Which one is the 'right' decision depends largely on the expense of creating
that mock system, the complexity of the system under test, the nature of the
bugs you're getting from customers and the frequency with which your software
changes.

Simple, infrequently changing system? Expensive to set up a mock system? #2.

Complex, frequently changing system? #1 will help more than you realize.
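
A minimal sketch of what option #1 can look like, with a fake device standing in for the hardware (the class and method names here are illustrative, not a real driver API):

```python
# A mock device: same interface as the real hardware driver,
# but it records commands instead of touching hardware.
class MockDevice:
    def __init__(self):
        self.commands = []
        self.powered = False

    def power_on(self):
        self.powered = True

    def send(self, command):
        if not self.powered:
            raise RuntimeError("device is off")
        self.commands.append(command)
        return "OK"

# The software under test talks to the mock exactly as it would talk
# to the real device, so scripted checks become automatable.
def run_startup_sequence(device):
    device.power_on()
    return device.send("SELF_TEST")

dev = MockDevice()
assert run_startup_sequence(dev) == "OK"
assert dev.commands == ["SELF_TEST"]
```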

>Even exploratory has written tests that basically say "explore," and they are
often assigned with a particular focus.

Of course. However, exploratory shouldn't mean following a script and it
shouldn't mean doing repetitive work.

------
henrik_w
"Explore It" by Elisabeth Hendrickson [1] is a short, easy-to-read
introduction to exploratory testing ("manual testing") that has many concrete
ideas for what and how to test SW.

[1] [http://www.amazon.com/Explore-Increase-Confidence-Explorator...](http://www.amazon.com/Explore-Increase-Confidence-Exploratory-Testing/dp/1937785025/)

~~~
jacques_chester
Related is the "Test Heuristics Cheat Sheet" [1] that she and others put
together.

[1] [http://testobsessed.com/wp-content/uploads/2011/04/testheuri...](http://testobsessed.com/wp-content/uploads/2011/04/testheuristicscheatsheetv1.pdf)

------
SotA89
A good start would be the ISTQB foundation level syllabus. While the ISTQB
seems a little outdated in its views on the software development process (a
focus on sequential, waterfall-like models), it is a good resource for
learning the vocabulary of software testing. Furthermore, it explains the
different types and stages of software testing:
[http://www.istqb.org/downloads/viewdownload/16/15.html](http://www.istqb.org/downloads/viewdownload/16/15.html)

~~~
klunger
Yes, this is a real problem! I learned testing in a waterfall environment
(basically followed IEEE standards, ISQTB processes) and now work at a company
that is more Agile. So many of the skills/techniques are fundamentally
incompatible.

~~~
UweSchmidt
Predictably, the concept of testing in an agile environment has also been
explored, even if not in the core syllabus:

[http://www.istqb.org/certification-path-root/agile-tester-ex...](http://www.istqb.org/certification-path-root/agile-tester-extension/agile-tester-extension-in-a-nutshell.html)

------
rodent54
Testing (effective testing, I should perhaps say) is linked to the domain it
operates in. Understanding the nature of "what" the software does is often
more important than "how" to test.

"How" you test will be affected by other things as well. Some environments
(companies) need to formally record testing. Others use 'non-IT people' to run
the testing. Some have expert users who know the app inside out as 'testers',
etc. How much detail goes into the test scripts, and indeed whether you
document manual test scripts at all, will depend on the nature of your
company.

You will find a couple of schools of thought on "how" to test. ISTQB is formal
and has a good bag of techniques; the other school of thought has some good
ideas (like session-based testing) but IMHO tends to throw the baby out with
the bath water. The ISTQB techniques can be applied in an agile environment
even if you would not use the documents they describe.

What I have personally found is that a good tester picks up ideas and
techniques (BVA, EP, etc.) and applies them where they will return the best
value.
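
For readers unfamiliar with the abbreviations: BVA (boundary value analysis) picks values at and around each boundary, and EP (equivalence partitioning) picks one representative per input class. A minimal sketch for a hypothetical field accepting ages 18-65:

```python
# A hypothetical validator: accepts ages in the range 18..65 inclusive.
def is_valid_age(age, low=18, high=65):
    return low <= age <= high

# Equivalence partitioning: one representative per class
# (below range, in range, above range).
ep_cases = [(10, False), (40, True), (70, False)]

# Boundary value analysis: values at and adjacent to each boundary.
bva_cases = [(17, False), (18, True), (19, True),
             (64, True), (65, True), (66, False)]

for age, expected in ep_cases + bva_cases:
    assert is_valid_age(age) == expected, f"age {age}"
```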

I see the arguments in the testing world as a bit akin to devs fighting over
strongly typed vs. loosely typed languages.

Automation is good, BUT if you don't know what you need or want to test, it is
really just a means to get into a mess very quickly.

------
nchelluri
Something that drove it home for me was an actual written test script at my
first part-time job (before university). I was testing a tool called Internet
Call Manager (if you used dialup and received a call while on the internet,
this software would pop up a notification on your screen and let you decide
whether to ignore the call or take it).

Basically it was a table with the left hand columns being the instructions to
perform, in point form, and the definitions of the expected/correct behavior,
and the right hand columns being checkboxes and blank spaces to write in,
indicating whether the software performed correctly.

It was super clear and to the point, and it was just a document that could be
easily updated (and was, I believe I later made some modifications to the
script when new versions of the software came out, but it was so long ago that
maybe someone else was the one to do it).

Maybe you could write one of those up and he'd get a better idea for what his
job was, and you could run through it with him a few times. After he gets the
hang of it, I think it will have some value outside of just testing the code:
he may come to understand how changes in one part of the code bring up issues
in unexpected places (and get an intuitive grasp for, say, code reuse); he
will be a true expert on the product (I've always noticed that QA people are
often better versed in software than the assigned Product Manager, come demo
time); and perhaps he'll start to grasp at a more physical level what your
work actually entails, and it'll help give him context for software
development as a process.

--

"The programmer, like the poet, works only slightly removed from pure thought-
stuff. He builds his castles in the air, from air, creating by exertion of the
imagination. Few media of creation are so flexible, so easy to polish and
rework, so readily capable of realizing grand conceptual structures.... Yet
the program construct, unlike the poet's words, is real in the sense that it
moves and works, producing visible outputs separate from the construct itself.
[…] The magic of myth and legend has come true in our time. One types the
correct incantation on a keyboard, and a display screen comes to life, showing
things that never were nor could be." - Fred Brooks

Let him learn some of the magic behind the poetry :) To your whole idea
(biz/product guy getting hands dirty with product work), hear hear, bravo,
etc.

------
ume
The Ministry of Testing is a good starting point,
[http://www.ministryoftesting.com](http://www.ministryoftesting.com)

It doesn't sound like you are providing an API, but if you are, feel free to
mail me directly (email is in my profile) for some resources; my company works
in that area of testing.

~~~
DCoder
> _(email is in my profile)_

The "Email" field in your profile is private, others can't see it. You need to
put your email in the "About" field for it to be publicly visible.

~~~
a3n
Some manual testing would have uncovered that.

~~~
ume
Got me!

------
cheriot
To be completely unhelpful, I've found it's largely instinct. Some people can
look at a thing and find a way to break it. Only those people benefit from
formal QA processes.

------
V-2
First and foremost, one has to know the domain and the things that tend to go
wrong, and this is platform-specific knowledge.

I mean, you're testing a web app? Disable JavaScript in the browser.

Testing an Android app? Rotate the phone to change screen orientation,
especially when there's a background operation going on - that's a typical
spot for bugs, but no amount of general manual-testing know-how will tell you
that. And so on.

------
lhoward
I am assuming that, as the programmer, you have completed the unit/system
testing, so your partner should focus on acceptance testing, such as
usability. I have found creating personas and putting yourself into the
position of those personas a good starting point.

------
mieciu
Give uTest a try, you can participate in crowd-sourced test cycles and gain
both experience and money.

And they have really cool resources over here:
[https://university.utest.com](https://university.utest.com)

------
Tharkun
TMAP Next is a good read. It's a little heavy, but there are some very
insightful chapters on the what and the how of testing.

------
nodelessness
I found "The Art of Software Testing" to be a good guide on the subject. I
recommend reading it.

------
unoti
Regardless of what else you read, try this one technique for manual testing.
You're probably interested in getting more serious about testing because of a
couple of major defects that you have seen in the software after your last
release. Create a test plan document that walks through the procedure of
verifying those defects are not present in the software. On each release, add
to the test plan to make sure new features you've added work properly. As you
work on the product, the test plan will grow. But it won't grow as much as you
might expect, because often several new features can be tested just by making
a couple of edits to the test plan.

The goal of testing is to prevent defects from surfacing in production. So
track every defect that surfaces in production, so that you can watch that go
to zero over time.

Whenever a defect comes up in production, edit the test plan such that you
would have caught that defect. Now you won't be bitten by that class of defect
in production again.
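
That feedback loop can be as lightweight as a checklist kept under version control; a minimal sketch, with the format entirely my own invention:

```python
# A test plan as a plain list of checks, grown one entry at a time.
# Each regression entry records the production defect that motivated it.
test_plan = [
    {"id": 1, "check": "Login works with valid credentials",
     "added_because": "initial feature"},
    {"id": 2, "check": "Invoice totals round to two decimals",
     "added_because": "prod defect: rounding error on invoices"},
]

def add_regression_check(plan, check, defect_note):
    """Whenever a defect escapes to production, grow the plan so the
    manual walkthrough would have caught it."""
    plan.append({"id": len(plan) + 1, "check": check,
                 "added_because": f"prod defect: {defect_note}"})

add_regression_check(test_plan, "Upload handles files over 10 MB",
                     "timeout on large uploads")
assert len(test_plan) == 3
```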

If you keep updating the test plan in this way you will see a dramatic drop in
defects released to production. Once you've done this for a while, you will
probably discover that your biggest source of defects released into production
have to do with how different your test environment is from your production
environment. So you will then start attacking that issue by setting up a
proper staging environment, where the staging environment mirrors production
as closely as practical.

Then you will start to discover that your biggest source of defects released
into production becomes other things, such as little problems with your
release methodology, which you can then address.

But the key concept here is: document what your test plan is, and continuously
improve it. It's important to note that you must actually follow the
documented procedure for this to work. If you write a document so big that you
won't actually do it, you're doing it wrong, make a smaller document. If you
feel like you only need to do 2 minutes' worth of testing, document what you will
will do during those 2 minutes. You can start with an empty test plan and that
will work, as long as you continuously improve your test plan. The same goes
for the procedures that you use to deploy. Always follow the same procedure
exactly as documented, because you will need to improve that procedure.

I have followed these procedures at a number of companies and in a variety of
environments, and seen it turn chaotic messes around many times.

Once you have this process down solid, you can automate some or all of it. But
the important thing is the overall set of processes around testing and
deploying software, and the process for improving those processes. How much of
it is automatic versus manual matters a lot less.

As for resources, I'd recommend books on continuous improvement, because as
you get better at testing, you'll discover that general process improvement is
what you really need in order to cover the range of things that cause defects
in production.

------
Morendil
Point him to "Lessons Learned in Software Testing" by Bach, Kaner and
Pettichord:
[http://www.amazon.com/dp/0471081124](http://www.amazon.com/dp/0471081124)

Also, "manual testing" is a slightly unfortunate moniker for the activity we
are discussing. It is bound to generate some degree of incomprehension or even
hostility on the part of some people, for no foreseeable benefit. "Testing"
will do. It is something you do with your head primarily, your hands being
involved to pretty much the same degree that they are in programming (and we
don't usually call _that_ "manual programming").

~~~
regularjack
I believe the OP used the term "manual" to differentiate from "automated"
testing, e.g. unit tests.

