
QA? We don't need no stinking QA - tariq
http://thecodist.com/article/qa_qa_we_don_39_t_need_no_stinking_qa
======
absconditus
I have been working in QA for a few years now and have made several
observations.

* The people who are good at QA usually have the skills to work as a developer. QA is generally not as desirable as development to a young developer, so why do it?

* Many QA people are almost completely useless and exist just to generate paperwork so that the company can cover its ass. Warning signs for bad QA people seem to be certifications and fear of anything that does not strictly adhere to classic waterfall development. The bulk of all defects are found by a small number of people on my team.

* Most developers are horrible at testing and writing unit tests. We have thousands of unit tests and the main thing that they catch is that someone did not update the test after refactoring code.

~~~
imperialWicket
It is unfortunate how true these statements are. I followed the standard path
of starting in QA and moving to development. While I do not think it's a bad
path in itself, I think it is unfortunate that a lot of really great QA
analysts hang up their QA title in order to develop. I think one of the core
problems is that QA is treated as entry level, which is odd because, as you
point out, most devs are practically incapable of testing effectively.

Even though most orgs do not officially treat the positions as entry level
(QA) and mid-level (junior dev), I think the general unspoken understanding is
that devs are higher on the food chain. The two positions need to be
established as more distinct and less like rungs on the same ladder.

~~~
gamble
If it's hard to find good QA guys when there's a career path, it's even harder
when there's a wall between dev and QA. QA is considered entry level in most
companies because the jobs are entry level. Most companies barely value the
monkeys-on-keyboards style of QA, much less the more skilled and better-paying
QA roles that would tempt smart people to stick with a career in QA.

------
kabdib
On the other hand, I've seen QA groups who do no QA at all. Instead, they do
frameworks.

"We need a test framework" are terrifying words to me. It means that I (the
dev) will have no one at my back. It means that tests will be written by
junior QA engineers (the senior ones are off writing and maintaining the
framework, right?). It means that I am responsible for my own tests, because
/no one/ else besides the customer will be testing my code.

That's generally all right; I think I do a good job. But not everyone does;
some devs think they can toss code over the fence to a group that will tell
them if it worked or not.

Everyone is responsible for quality. Devs must do unit tests, but they have
blind spots, and that is what QA is for: higher-level assurance that things
are working as intended.

~~~
MartinCron
_It means that I (the dev) will have no one at my back._

Everyone should always code and test their code like that, right?

~~~
pavel_lishin
Eh, there's a difference between testing code you just wrote, and actual QA. I
know my code, and I'll be testing for specific things, and I'll know what not
to do. Our QA guy has no idea what I just did - so he's likely to uncover
things I didn't think to check for.

~~~
MartinCron
_Eh, there's a difference between testing code you just wrote, and actual QA_

That's what I'm challenging. I think the whole "developers can't test their
own code" business is a myth.

There really is something to the idea of the "curse of knowledge" and there
really is a different mindset when you're trying to make something work and
trying to make something break, but isn't switching between contexts and
mindsets and levels of abstraction a core part of being a developer?

Developers can get lazy and not bother testing their own code. Developers can
get sloppy and not bother thinking through corner cases. Developers can be
short-sighted and write poorly encapsulated code that is brittle and creates
side effects.

~~~
pavel_lishin
> isn't switching between contexts and mindsets and levels of abstraction a
> core part of being a developer?

Sure, but you can only do that so rapidly. Why not just let me stay in the
flow, and test the changes I expect, and pump out more code and bugfixes, and
let the QA look at things I might not catch?

I'd rather do a quick test of my code and send it off to QA than spend two
hours second-guessing myself as to whether I've tested everything in-depth
enough.

~~~
MartinCron
I think the throughput you get of pumping out more code (without really
testing it) may be a micro-optimization.

I only bring this up because I once worked at a place with a really strong,
borderline omniscient QA department, and I felt myself getting sloppy,
checking in things that I knew were probably wrong because I knew QA would
find them and tell me how they were wrong. It appeared optimal for me as a
developer, since I got lots of code pumped out, but for the whole organization
and the product it was a net loss.

Subsequently, I've worked on a few projects with no dedicated QA people, and
my overall "getting good software shipped to users in the real world"
effectiveness hasn't decreased.

------
imperialWicket
I could not agree more. And not only should all projects include QA, but they
should include QA by an actual quality assurance analyst or team. QA performed
by minimum-wage (or low-paid) temps or short-term services tends to be next to
worthless. Also, don't even think about using a spreadsheet to track issues.

QA needs to be done, and it needs to be funded. Everything you put into it
will come back as less development time (overall, if not necessarily in the
short term) and more satisfied users.

------
wccrawford
I mostly agree.

That MMO that is getting away with using customers as QA, and only fixing bugs
that NEED fixing? I bet they're making a nice profit. I'm sure the customers
are annoyed, but in my experience, gamers won't quit playing over bugs. Bad
customer service, price hikes, etc... Sure. But not bugs. Even if they stick
around for years.

However, business applications are a different story. Businesses can't afford
to be stopped from doing what they need to. And a QA person makes all the
difference in the world there.

For those that don't understand why, a programmer can't find their own bugs
because they've already done their utmost to find bugs in the code. They've
already stretched themselves to the limit before they sent the code to QA. You
can train, coddle, beat, or otherwise influence them and only get a few more
bugs found by programmers. It's much better for everyone's sanity and wallets
if you just pay for proper QA.

~~~
bradleyland
"However, business applications are a different story."

You've sure got that right. Our application conducts real-time purchasing
events (reverse auctions) where bidders are ranked based on a couple of
different configurable algorithms. An error in calculation means incorrect
ranking, which means the entire outcome is invalidated. Given that it can take
weeks to put together a purchasing event of this type, there is a _lot_ riding
on that single series of calculations.

Simply releasing and finding bugs in production would be financial suicide. QA
isn't even a question in our business. Everything is tested thoroughly when it
goes out the door.

~~~
bmj
I write software for clinical trial data collection, so I'm in the same boat.
Not only is every product tested, but QA must thoroughly document their tests.
At the end of the day, yeah, it slows the process down, but the software is
significantly more robust when it heads into the field.

Having worked at many companies that didn't believe in QA at all, I think
having a team of testers is beneficial not only for software quality, but also
for filtering potential bug reports from the field. When a client reports a
potential issue, QA does the initial legwork of duplicating it, then passes it
off to engineering.

------
Yhippa
One financial services company I worked at a long time ago was getting caught
up in the Lean hype. We did process maps of nearly everything including our
dev process. They put a big red circle around the QA piece noting that it was
a candidate for elimination since it was not value-add.

That is, parts like coding and deployment were deemed necessary because they
actually were important and helped move business ideas to the customer. QA to
them was a redundant step. This from a shop that avoided automated testing
because it was too expensive.

I was disappointed. QA to me is insurance against being human (which we all
are). They ended up not getting rid of QA, but the fact that they seriously
considered it was shocking.

~~~
lojack
If the team considers automated testing waste, then they are truly missing the
point of Lean. I do think, though, that having a QA _step_ could be considered
waste. Note that this isn't the same as calling QA itself waste.

IMO, QA is something that should be included throughout the task lifecycle.
Sure, after a task is marked as complete there should be some sort of
usability testing, but that isn't the sole job of QA. They should be working
before tasks are even started, defining the exit criteria and writing
integration tests. Giving QA its own step is wasteful because it's effectively
pigeonholing the job.

------
dustingetz
quoting Linus: "There really are only two acceptable models of development:
'think and analyze' or 'years and years of testing on thousands of
machines'"[1]

i would never say that "we don't need no stinking QA", but i do believe that
teams who live by "think and analyze" need QA a lot less than teams who don't.

[1] [http://www.dustingetz.com/linus-think-and-analyze-
motherfuck...](http://www.dustingetz.com/linus-think-and-analyze-motherfucker)

~~~
dgabriel
Actually, Linus seems to be saying: we have tested the crap out of this, and
the patch broke something, and instead of playing guessing games, revert to
the thoroughly tested version and think more closely about the problem before
you submit stuff that _hasn't_ been tested on thousands of machines, and will
likely break one of them (because you didn't understand the whole problem).
This exchange doesn't really cast any light on whether you need _less_ QA.
Instead, it seems to say: we rely on past tests to benchmark quality.

I have had my own problems with incompetent QA "engineers," but I do think an
excellent QA team helps create an excellent product.

------
DanielRibeiro
For startups, you usually don't have enough people to have a QA team.
Therefore you have to take a more lean approach: Build Quality In.

On the other hand, when you are building your MVP, progress is not built
features but validated learning. So QA takes more the form of building
measurement tools (which have to be built by the people building the MVP).

Even at a later-stage startup, QA takes on a whole new role, especially when
you start doing continuous deployment and start developing immune systems.
These can get quite complicated (think of Netflix's Chaos Monkey[1]), and
having a dedicated team can be beneficial.

What we don't need is 20th-century waterfall-style QA. But ensuring the
quality of a product[2] has never been more important.

[1] [http://www.codinghorror.com/blog/2011/04/working-with-the-
ch...](http://www.codinghorror.com/blog/2011/04/working-with-the-chaos-
monkey.html)

[2] [http://www.ashmaurya.com/2011/06/your-product-is-not-the-
pro...](http://www.ashmaurya.com/2011/06/your-product-is-not-the-product/)

~~~
MartinCron
_Build Quality In_ isn't just something you do when you don't have the luxury
of a QA department. It's something that you can do to prevent having to have a
distinct QA department at all.

~~~
DanielRibeiro
Agree. Having a separate department can be helpful for large organizations, so
that you can have one team supporting many products. However, if you prefer to
hire generalists[1][2], then you can do away with it, which is also a great
way to mitigate Conway's Law[3].

[1] <http://jessicamah.com/the-ceos-job-part-2>

[2] [http://blog.eladgil.com/2010/12/5-myths-to-building-
awesome-...](http://blog.eladgil.com/2010/12/5-myths-to-building-awesome-
mobile-team.html)

[3] <http://en.wikipedia.org/wiki/Conway%27s_Law>

------
steve8918
One thing that also needs to be mentioned is that the supportability of
features has to be addressed early on. The unintended consequence of
continuing to support every single combination of every single option is that
your support matrix, as well as your QA effort, becomes enormous, and your
code incredibly unmaintainable.

I work at a company with a 5-year-old product, and the mentality at the time
was, "Yes, I know it doesn't make sense, but if the user does X, it could
cause the server to crash, so let's do Y." This grew into a series of matrices
that has now turned the act of adding one single feature into three pages'
worth of "If A then B, otherwise C". This has made the QA effort in terms of
testing incredibly complex, and even developers get confused as to what is
supported and what isn't.

For example, there is an instance where copying a directory takes a different
code path than copying a single file. Why? Because they wanted to "special
case" this situation. It makes the functionality and behavior utterly
unpredictable.

The takeaway that comes with experience is don't spend too much time trying to
prevent your users from doing stupid things. If they want to do stupid things
and the system crashes, it's okay to say "Don't do it, otherwise it will
crash." This keeps your feature set, your code, and your QA effort much
cleaner and more maintainable over the long haul.

------
ecaradec
I worked with the Windows QA of Xbox for a while, and it was totally awesome
to have super detailed information on bugs, where they occurred and how often,
with core dumps and stacks in case of crashes.

Users and bosses think they have done more than their duty when they tell you
"it doesn't work" or "nothing works" without any more detail. You have to run
after them to get more context and detailed information.

I love QA.

------
bobbles
I'm really glad to be working for a company that has the QA team involved from
requirements to production (& maintenance thereafter).

One of the major problems can be keeping the newer people & graduates
interested in the testing & QA process. There can be a lot of 'grinding'
through test cases and I guess it takes a certain mentality to get through it
on some days.

------
hello_moto
Sometimes one must experience both sides of the fence in the software
development process.

I've been a QA, QA developer, developer, and integrator during my internships.

QA is often the least honoured role of all. This creates a stigma that QA is
some sort of crappy role to be in.

Being a second-class citizen, you need more support than ever to perform at
your best day to day. Unfortunately, most people are not built that way, so
quality degrades because QA was treated like crap.

This leads to less advancement in the QA community because it's not a good
role to be in. You only see a few books and a few noted experts in the QA
community that continue to push better practices.

At the end of the day, it's a loss for all of us.

I wish there were no (or minimal) differences between QA and developer, as I
agree with one of the comments here that if you do that, more jobs will be
created and IT professionals can experience different roles within our world.

My opinion is that QA should have a wide range of skills: load/performance
testing (which means knowing the tools and the statistics), test automation
(which means that if dev has created a hard-dependency issue that is not
easily testable, QA should step in and teach dev the patterns and best
practices to avoid that situation), and enough understanding of business
practices to beat up stupid business analysts.

More importantly, QA should keep track of deficiencies and ineffectiveness
during development and potentially offer solutions (which can be done through
group discussions).

Unfortunately, that's not how the world works. Telling someone that he's not
efficient, or pointing out where the most bugs occurred when the team lead is
the one who's responsible for the feature, isn't desirable.

Not to mention that certain types of companies don't necessarily get much ROI
from having QA (if your typical Web 2.0 social-networking tagging machine has
a few issues, users will probably complain, but everyone will move on).

------
bitmage
Ugh. I worked QA years ago and swore to never do it again. Management hates
you because you're telling them the product is not ready to ship. Developers
hate you because you're pointing out their errors. No one has your back, and
everyone wants to blame you.

The company had literally fired the whole testing department previously for
saying the product wasn't ready. With those annoying testers out of the way,
out the door it went. And promptly had multiple emergency point releases the
following week to fix showstoppers cropping up at the customer sites.

That got them to create a testing department again - but it was still without
honor in the company. And firing the whole group again was often discussed, as
they were _still_ delaying releases by finding bugs!

I departed. They got eaten in a merger several months later.

~~~
stan_rogers
Ugh indeed. And it's not like this issue hasn't been mentioned a time or two
before now. Joel Spolsky in 2000:
<http://www.joelonsoftware.com/articles/fog0000000067.html>

...and again in 2010: <http://www.joelonsoftware.com/items/2010/01/26.html>

...(with more than a couple of other mentions in between). I've worked at ISO
900x shops that worked to the letter of the law, so to speak, without actually
understanding what software QA was all about. A really good tester is "chaotic
neutral", and the best we can come up with as testers of our own work is
"lawful evil".

------
alain94040
Relevant: why QA jobs are more fun than developer jobs
[http://blog.foundrs.com/2011/09/19/why-i-prefer-testing-
over...](http://blog.foundrs.com/2011/09/19/why-i-prefer-testing-over-coding/)

------
jrockway
QA is the wrong solution to this particular problem. The problem is that in
Java you can call methods on uninitialized objects. If Foo can "foo" and Bar
can "bar", Foo can't "bar". But if you have "null", then suddenly it can "foo"
and "bar" even though it's not a subtype of anything that can do that, and it
doesn't implement those methods itself. Does that sound fucked up? Well,
that's because it is.

QA is nice, but it's a finite resource that you should save for things that
actually matter, not stupid things that a computer can fix for you in 30
seconds.

~~~
tkahn6
> If Foo can "foo" and Bar can "bar", Foo can't "bar". But if you have "null",
> then suddenly it can "foo" and "bar" even though it's not a subtype of
> anything that can do that, and it doesn't implement those methods itself.

Can't believe I'm defending Java's type system but I'm pretty sure you can
only call bar on a null Foo (Foo f = null) if you cast it to a Bar first.
Otherwise it'll fail to type check. What Java does allow you to do is call foo
on a null Foo and in that case you get a null pointer exception at runtime.
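A minimal sketch of the distinction being made here, with hypothetical `Foo` and `Bar` classes standing in for the example: calling `bar` through a `Foo` reference is rejected at compile time, while calling `foo` on a null `Foo` type-checks fine and only fails at runtime.

```java
// Hypothetical types from the example above.
class Foo {
    String foo() { return "foo"; }
}

class Bar {
    String bar() { return "bar"; }
}

public class NullDemo {
    public static void main(String[] args) {
        Foo f = null;

        // f.bar();  // rejected at compile time: Foo has no bar() method

        // This compiles without complaint but blows up when it runs:
        try {
            f.foo();
            System.out.println("unreachable");
        } catch (NullPointerException e) {
            System.out.println("NullPointerException at runtime");
        }
    }
}
```

So the null reference defeats the runtime behavior, not the static type check, which is roughly the point tkahn6 is making.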

------
mrgoldenbrown
I would have liked to see a more realistic discussion of when QA helps the
bottom line and when it doesn't. Plenty of software is sold and tolerated by
users despite having bugs. If Microsoft waited until the bug count in Windows
was 0, they would never ship, and would go out of business. In other words,
this blog post is telling us that the correct amount of QA is higher than "no
QA", but it is not helping us determine what the right amount of QA is.

~~~
olegious
As a former QA person myself: we never insisted on the bug count being 0.
Instead we focused on the "critical" bug count being 0 and the "major" bug
count being as close to 0 as possible before shipping the product. Often it
comes down to a negotiation between QA, dev, and product, but a good company
will give QA a great deal of input into when a product is ready to ship.

~~~
absconditus
This strategy backfires when managers want the release out so they fight to
lower the severity for every defect that is entered. Then they generate a
report showing no high severity defects and release the software. Another
common problem is a high number of low severity defects that make the software
annoying to use.

I am not claiming that a bug count of zero is possible with most software, but
testing quality in at the end is the wrong approach. Nothing affects quality
as much as the skill level of the developers.

------
pavelkaroukin
While I agree we need QA (today), I would like to see different expectations:
expectations that over time we will need less and less QA work.

------
YetAnotherAlias
I don't have much to add to the discussion; I just want to vent a bit of
frustration. I came in this morning and found out that 7 of my co-workers had
just been laid off. We are a 30-person company, so that's quite a big deal.
And some of my good co-workers in QA were axed. I guess some management guy
thought QA was unnecessary. This is starting to look like 2008 all over again!

~~~
jfricker
Sorry to hear that! Sounds like it's time to look for a new job, as the best
days there are behind you now.

------
aidenn0
I think the big thing that confuses people is that at a lot of companies with
QA, QA is really "testing and QA". Testing can (and IMO should) be done by the
developers, and when people have only ever seen QA as primarily testing, they
think they can do without it if the developers take on testing.

------
Shenglong
Question:

When you're doing QA on a product, do you worry about border cases? I found
quite a few bugs related to time syncing and inadequate safeguards in a
project I worked on - but they were difficult to reproduce without a macro.

I'm still not sure whether people were happy or angry with me, for reporting
all those bugs.

~~~
wnight
The good devs would be happy because those rare bugs you found could take them
months (literally) to track down from user reports.

One company I worked at had both buffer overflow and timing problems and
didn't dedicate the time to fixing either properly so easily half the bugs we
found were the same giant ones (and thus always ignored even if crash-level).
This slowed us incredibly as you can imagine. The timing bugs could cause
really anything and the buffer overflows prevented us from stuffing commands
properly which kept us from ever reliably reproducing the timing bugs.

Eventually one of the lead devs fought through the backlog and implemented
scripting into the product itself and we were able to get a much better handle
on things through more intensive and repeatable testing.

These days I expect the developer to have written specs for their code at the
unit level. How else do they know what they claim to be delivering? So the job
of QA (or, as I'd prefer, just another developer playing QA) focuses more on
testing the weird stuff, edge cases, etc.

Automate everything. Writing a system to parse screenshots to decide whether
your windows drew properly is cheaper than manual testing, and sooner than you
think, especially if you count the cost of the bugs you miss through
inattention.
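As a rough sketch of the kind of screenshot check described above (class names and the tolerance are invented for illustration), a pixel diff against a known-good baseline is only a few lines with `java.awt.image.BufferedImage`; in a real harness the images would come from `ImageIO.read()` or a screen capture rather than being fabricated in memory:

```java
import java.awt.image.BufferedImage;

public class ScreenshotDiff {
    // Count pixels that differ between two same-sized images.
    static long diffPixels(BufferedImage a, BufferedImage b) {
        long diffs = 0;
        for (int y = 0; y < a.getHeight(); y++) {
            for (int x = 0; x < a.getWidth(); x++) {
                if (a.getRGB(x, y) != b.getRGB(x, y)) diffs++;
            }
        }
        return diffs;
    }

    public static void main(String[] args) {
        // Fabricate a "baseline" and a "current" frame for the sketch.
        BufferedImage baseline = new BufferedImage(100, 100, BufferedImage.TYPE_INT_RGB);
        BufferedImage current  = new BufferedImage(100, 100, BufferedImage.TYPE_INT_RGB);
        current.setRGB(10, 10, 0xFF0000); // simulate one mis-drawn pixel

        long diffs = diffPixels(baseline, current);
        long tolerance = 5; // allow a handful of anti-aliasing differences
        System.out.println((diffs <= tolerance ? "PASS" : "FAIL")
                + ": " + diffs + " differing pixels");
    }
}
```

A real system would also need to mask out volatile regions (clocks, cursors) before diffing, but even this crude check is repeatable in a way manual eyeballing is not.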

------
jfricker
I used to run a QA department for a large e-commerce company and love what
this article has to say. Nothing gets executive attention like putting a
dollar value on downtime (like $1k per minute on a slow day).

------
TwoBit
I question how much good QA did for his project (DeltaGraph), because it was
so buggy it was nearly unusable.

------
greenfield
A quibble: although I have never doubted the importance of QA in the web
development process (it can save your bacon), in most cases I find it rather
absurd, and a bit of a mockery, when those who practice QA in a web
environment are allowed to have the word 'engineer' in their job title. Does
anyone else agree?

~~~
dredmorbius
Ask a _real_ engineer.

And by "real engineer", I mean one whose "engineer" is preceded by one of:
agricultural, chemical, civil, electrical, environmental, industrial,
mechanical, nuclear, or petroleum, and who is in possession of a state-issued
license.

------
pointyhat
I worked at a QA-heavy financial organisation.

The only reason they needed QA is that the software was a giant
toilet-clogging turd. There were no formal unit or integration tests, so QA
was used to catch regressions against massive test plans knocked up in Excel,
Word, and god-knows-what else.

