
Blackbox testers versus Whitebox testers? - perseus323
I have a question. I work as a Software Development manager. I proposed to my VP of engineering that we get a few QA testers to test our product in a `blackbox` fashion: running it, stress testing it, trying to break it, but without looking at the code. These QA testers could be high-school students or undergrads, as long as they have a basic understanding of Linux commands and can prepare good test reports.
Now my VP came back and said this is not such a good idea. We should get seasoned developers for our QA department (of which I know we will only be able to afford a few), hand them the full source code, and have them test the product as well as look into the source code.

Is my VP's suggestion a good idea? I don't think so. I think it will only complicate things: there will be a lot of arguments over code style/quality between the developers and QA, and my fear is that so much attention will be paid to coding style that the product won't be tested properly.

Your thoughts?
======
grumps
QA is a funny thing and is completely dependent on your industry, the
mission of the program, and your clients/customers. I don't think you've
provided enough info to give a solid answer. I would say that if you're a
startup trying to ship code often and fast, then a black box approach will
be better. If you're trying to supply an enterprise with some sort of
maintainable, working product with support for the next 10 years, then
you'd better be doing "white box."

Are you eventually handing over source code to your client/customer? If so, an
audit of code is probably a good idea, and some sort of "white box" testing
might be a good idea.

If you're providing it as binary or compiled program then I'm not sure it's
essential to have a code audit by an independent team. However, for your own
needs the code should be reviewed by developers outside of the team for
maintainability.

RE: "arguments over code style" -- do you have a company standard for code
style? If yes, then the only comments should be about violations of that
standard.

The cost of "white box" testing will be significantly higher than the cost of
"black box". In addition "white box" could be slower to implement.

~~~
pasbesoin
"Real" and competent QA is darned hard to find. I've worked with people
practicing QA as a profession (in stable, big-corp environments) whom I
wouldn't trust to sign off on a shipping product.

First, what you're describing is really more in the nature of QC (Quality
Control). Quality Assurance, when practiced well, has oversight over much more
of the development process than simply testing what is "thrown over the wall".
This may include, where people are clueful and budget allows, sitting in on
requirements meetings. It can also include the handoff to the customer --
whether internal or external. QA doesn't manage the project, and it doesn't
manage the product. It pays attention, makes sure people are understanding
each other, and makes sure that the result does what everyone needs it to.

Testing can be a significant portion of this.

Too often, with black box testers, I've seen them run through a set of fairly
superficial tests and call things done. It doesn't help when those tests are
written beforehand -- perhaps due to a lack of communication earlier in the
development process -- with an incomplete, outdated, and/or simply wrong
understanding of the product _and its environment_.

Many black box testers also pay little or no attention to what they
consider -- if they observe them at all -- to be superfluous anomalies.
Something outside the written test parameters and description glitches, and
they shrug their shoulders and move on.

Better testers look into those glitches -- when they are given the time,
permission, and perhaps the resources to do so. Which is more important:
that the text in your dialog is spelled correctly, or that the application
is borking your data one time out of thirty?

White box testing... Well, there can be the predilection to write the tests
too close to the code. This can be somewhat akin to what can happen to
testing, overall. The testers have a (too) limited understanding of the
application and environment. When they sit down with the developers, the
developers essentially tell them what to test and how. The result is that the
testing ends up very much carrying the perspective -- and limitations -- of
the developers. Developers are too important/expensive to do testing, so they
"dictate" to the testers what and how to test.

A good deal of the common perception of testers as "second tier staff" may
come from this.

In _good_ white box testing, testers use their understanding of the code to
better test edge cases, analyse potential bottlenecks, etc. They may also
be able to edit and recompile. This can -- with sufficient understanding
and caution -- let them execute test cases more quickly, more thoroughly,
or at all, where those cases would otherwise take much more effort, time,
and the involvement of other staff. They may also tweak the test
environment toward this goal -- for example, being given some traditional
devops control over the environment, in a manner often not otherwise
permitted.
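As a sketch of that point: suppose (hypothetically) a white box tester has
read the source and knows the implementation buffers data in 4096-byte
chunks. A black box tester has no particular reason to try inputs of
exactly that size; the white box tester aims straight at the seam. The
`store` function and the chunk size here are invented for illustration:

```python
# Hypothetical system under test: the tester knows from reading the code
# that data is processed in fixed-size chunks, so the interesting edge
# cases cluster around the chunk boundary.
CHUNK = 4096  # internal buffer size, learned from the source (assumed)

def store(data: bytes) -> bytes:
    """Toy stand-in for the real system: copies `data` chunk by chunk."""
    chunks = [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]
    return b"".join(chunks)

# A black box test might use b"hello"; the white box test probes the seam,
# where off-by-one bugs in chunked code typically live.
for size in (0, CHUNK - 1, CHUNK, CHUNK + 1, 2 * CHUNK):
    payload = bytes(size)
    assert store(payload) == payload, f"round-trip failed at {size} bytes"
```

The boundary values themselves (N-1, N, N+1, 2N) are the whole trick: they
come straight from the code, not from the spec.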

White box testing isn't just about understanding the code. It's about being
able to control the environment to get into all the nooks and crannies that
otherwise might be outside the bounds of patience and/or the time and
resources available to the project.

It should _not_ be a... "code review", per se. A style review and criticism.
QA is not _writing_ the code. They are analyzing its performance -- its
resulting correctness.

Finding people who can really, competently do the latter is not easy. And in
addition to a smaller number of candidates and higher salary, it requires more
in other resources from the organization. Time for these people to become
thoroughly familiar with the environment, and the project. Respect from the
rest of the development staff, and real and ongoing dialog.

It requires QA who are smart enough and knowledgeable enough to get up to
speed without being a drain on other resources -- and who can understand
the changes they make, correctly analyze whether those changes represent
real concerns, and, particularly, understand the real application and
environment as they do, will, or could exist.

(Could: both presently and in the future. What may be acceptable this month
or this year may cause problems down the line. A project focus may not
catch this, and a product focus may not catch this, but an environmental
focus may.)

I've cleaned up enough garbage overlooked by the all too prevalent
superficial, and/or just too poorly informed and under-resourced, testers,
that I have little patience left for that kind of approach.

------
speeder
I never had whitebox testers...

But I know that blackbox testers have one advantage: they can do things that
are really unexpected.

Since they don't know how the thing works, they don't know what to expect;
they can do whatever they want, and sometimes whatever they want is not
what you had in mind.
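That unpredictability is roughly what automated fuzzing tries to reproduce:
throw random inputs at the program and record anything that blows up. A
minimal sketch (the `parse_quantity` function and its bug are invented for
illustration):

```python
import random

def parse_quantity(text):
    """Toy parser with a hidden bug: it assumes the input is non-empty."""
    sign = -1 if text[0] == "-" else 1  # IndexError on ""
    return sign * int(text.lstrip("-"))

def fuzz(fn, trials=1000, seed=0):
    """Feed fn random, 'unexpected' inputs and collect what crashes it."""
    rng = random.Random(seed)  # seeded, so failures are reproducible
    alphabet = "0123456789- "
    failures = []
    for _ in range(trials):
        text = "".join(rng.choice(alphabet)
                       for _ in range(rng.randint(0, 5)))
        try:
            fn(text)
        except Exception as exc:
            failures.append((text, type(exc).__name__))
    return failures

failures = fuzz(parse_quantity)
print(f"{len(failures)} crashing inputs found")
```

The developer's own tests cover the inputs the developer imagined; the
random tester stumbles onto the empty string and the stray spaces that
nobody imagined.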

As a programmer, I sometimes test every possible code path. Yet some people
seemingly find "impossible" code paths, sometimes hitting bugs that are not
even a fault of my logic (i.e., sometimes a bizarre combination of factors
breaks things for no clear reason, like the legendary tale of the CEO who
crashed the system every time he entered the lab... or the 500-mile e-mail).

~~~
speeder
I cannot find the link for the CEO tale...

It was a company that made security cameras, and the software crashed every
single time they invited the CEO to see it, making the CEO start to believe
they were full of shit.

Until one day, someone noticed that the CEO was the only person who entered
the room every single day wearing a checkered shirt, and that they used a
version of JPEG for compression that disliked checkered patterns.

The 500 mile e-mail: <http://www.ibiblio.org/harris/500milemail.html>

------
perseus323
Any comments or thoughts?

