
Testers vs. TDD - ohjeez
https://www.functionize.com/blog/testers-vs-tdd
======
mumblemumble
This title makes my eye twitch.

The times when I've seen testers and TDD come into a "vs" situation, the whole
thing was invariably just senselessly tragic. The testers saw developers
writing tests as an immediate threat, and responded to it in a very self-
defeating fashion. They would refuse to acknowledge the automated tests and
insist on manually testing everything. Perhaps even, paradoxically enough,
_increasing_ their manual testing load by creating manual versions of as many
automated tests as possible. This would, in turn, reduce the time available
for digging deep and doing the kinds of testing that aren't amenable to
automation (or to automating with a typical unit testing framework), while
also increasing the need for painful things like code freezes. Ultimately,
ironically enough, the work they were doing became increasingly redundant, and
the automated testing did, in the end, turn out to be a serious threat. Not to
the interesting and highly specialized work they wanted to be doing, but to
the senseless busywork they unnecessarily roped themselves into.

But not by necessity. It's so much nicer when QA can collaborate with and
guide the developers' automated testing efforts. It's such an easy win-win
situation: Developers get close guidance from software quality experts on how
to ship higher-quality software, and testers get to spend less time stuck in a
quagmire of tracking down and filing reports for a mess of silly little bugs
that never should have made it onto their desks in the first place. Which
should hopefully kick off a virtuous cycle of everyone helping everyone else
be more productive and happier at work.

~~~
woodpanel
In my experience, such "tragicness" came from the fact that the company had
already made its decision: automate the tests, fire those other guys. So once
me and my colleagues showed up, the final nail was already in the coffin.

The mindset of improving the testers' work, enabling those humans to do better
work, is rare in big corps. I've witnessed many QA departments shut down or
shipped overseas.

In the latter cases, you'd be inclined to root for team "manual work" (i.e. at
least some person gets to keep a job somewhere), but the corps are just
prolonging their cultural malpractice. Just by dumping labour costs you're not
improving productivity.

------
gbacon
> Test-driven development was supposed to eliminate the need for independent
> testing.

Who seriously makes this claim?

~~~
geoelectric
People without a professional background in test. The problem is, everyone
else thinks we have a conflict of interest.

------
necovek
It seems the author misses the point of TDD: TDD is a methodology that allows
you to write unit-testable code.

Writing unit tests after you write code is a very hard, almost impossible
problem (you either refactor everything, or you end up with some "lower level"
of integration testing everything, or you end up mocking everything; unless
you refactor, your code is unlikely to be unit-testable). With practice, one
can learn to write mostly side-effect-free, unit-testable code, but to a
non-TDD-enlightened person, that code will look "too convoluted", full of
simplistic "unneeded" functions. TDD is generally very "functional" in spirit
(so for pure unit testing, no mocks).

Production-level applications do get by with integration and system tests, and
frequently that's good enough (other than slow test suite run times). But unit
tests add a speedy test suite on top! Unit tests, by definition, do not cover
the stuff that proper QA would cover: they only ensure that the smallest units
of work (functions/methods/classes) do what they are supposed to.

One of the common pitfalls with TDD and unit-testing is that you end up having
a function that's sufficiently well covered with unit tests, but as it evolves
into something more complex, you are lazy or lack the time to properly split
unit tests as you split your original function. Then you end up with what I
called "lower-level integration testing" above: a bunch of tests testing a
single complex function instead of individual units of work.

The biggest advantage of unit tests is that you can cover every integration
point with only a single integration test (eg. between two classes
interacting), and all the edge cases on either side with unit tests. This
leads to fast test runs that do not exercise every combination of arguments
end to end.
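
A minimal pytest-style sketch of that split (the `Parser`/`Formatter` classes
here are invented purely for illustration):

    class Parser:
        """Toy unit: splits text into tokens; empty input is an edge case."""
        def parse(self, text):
            return text.split() if text else None

    class Formatter:
        """Toy unit: joins tokens, escaping HTML along the way."""
        def format(self, tokens):
            return " ".join(
                t.replace("<", "&lt;").replace(">", "&gt;") for t in tokens
            )

    # Unit tests: every edge case of each unit, tested in isolation.
    def test_parser_rejects_empty_input():
        assert Parser().parse("") is None

    def test_formatter_escapes_html():
        assert Formatter().format(["<b>"]) == "&lt;b&gt;"

    # Integration test: a single test proving the two units compose
    # correctly, without re-running every edge case across the boundary.
    def test_parser_output_feeds_formatter():
        assert Formatter().format(Parser().parse("a <b>")) == "a &lt;b&gt;"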

The replacement for QA is instead system tests. These should not test all the
combinations of system conditions either, but should ensure that a couple of
critical, common paths through the _system_ are working.

QA teams today, when present, are in charge of managing and writing system
tests, but if they are not part of the development team, there's a lot of
tension and wasted hours in keeping things up to date.

~~~
lowbloodsugar
With some experience, I find it impossible to write code using TDD that is
difficult to refactor. Good code, that scores well on SOLID metrics, is easy
to refactor, because individual pieces are already split out into multiple
classes, rather than living as a series of statements in a single method in a
single class.

>One of the common pitfalls with TDD and unit-testing is that you end up
having a function that's sufficiently well covered with unit tests, but as it
evolves into something more complex,

A function should not evolve into something more complex. It should be split
out into multiple functions, each of which is simple. In addition to being far
easier to test individually, you will find that when you need to change it,
only some parts will need to be discarded.

~~~
necovek
I was referring exactly to a function that's split up into multiple simple
functions while the tests aren't. I agree it's not a pitfall of TDD itself,
but rather of doing it and then stopping.

------
agentultra
I don't think unit tests are even the only form of testing prescribed by TDD,
although it might've been what Mr. Beck had in mind back then. It's test
_driven_ development: invalidate implementations that don't implement your
specification!

Unit tests are, unfortunately, not sufficient at expressing complex invariant
properties and assumptions.

So try property based tests.
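
For the unfamiliar, a toy sketch with Python's Hypothesis library (the
run-length encoder is just an invented example of an invariant):

    from hypothesis import given, strategies as st

    def encode(xs):
        # Toy run-length encoder: [1, 1, 2] -> [(1, 2), (2, 1)]
        out = []
        for x in xs:
            if out and out[-1][0] == x:
                out[-1] = (x, out[-1][1] + 1)
            else:
                out.append((x, 1))
        return out

    def decode(pairs):
        return [x for x, n in pairs for _ in range(n)]

    # The property: decode(encode(xs)) == xs for *any* list Hypothesis
    # generates, not just a handful of hand-picked unit-test cases.
    @given(st.lists(st.integers()))
    def test_roundtrip(xs):
        assert decode(encode(xs)) == xs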

You can't even have _Continuous Integration_ without _integration tests_ and
you shouldn't be practicing _Continuous Delivery_ without an effective testing
strategy. How else can you ship dozens of delta changes to production on a
heavily trafficked site without taking down the service?

Not sure what to test? System hard to describe? Try formal methods. Tools like
TLA+ or Alloy are great at helping us understand systems with global or
temporal properties and complex behaviors.

GUIs? There are nascent projects to add property testing to the browser [0].
There are tools to model GUIs [1]. There are books on applying property-based
testing to GUI-driven applications [2]. It can be done. It requires effort
from a _test driven development_ team.

I've seen QA teams ruin businesses. When they act as gatekeepers to the
production environment, releases slow to a crawl, issues run unchecked for
weeks as high-level e2e test results are inefficiently communicated, and the
source of a problem is rarely obvious or known. Developers then have to
dedicate time to helping testers investigate poorly understood test failures,
which requires more developers to pick up the slack. More features require
more testers. And the cycle slows releases down to the point where your
horizon for even a basic new feature is a couple of months at best.

[0] https://webcheck.tools/
[1] https://sketch.systems/
[2] https://leanpub.com/property-based-testing-in-a-screencast-editor

~~~
pydry
>Unit tests are, unfortunately, not sufficient at expressing complex invariant
properties and assumptions.

They're also insufficient at clearly articulating most specifications. Unit
tests let you clearly express specifications in the form of code APIs that
don't have complex side effects. They suck at _everything_ else (that's about
90% of all real code).

Good luck converting that user mock up into a unit test.

TDD is technically _possible_ with UAT - it just requires sophisticated
tooling that is, most of the time, more expensive to build and maintain than
your actual app. It also requires love/investment to make it _fast_ enough.

It's not worth it most of the time, and TDD with unit tests is an insufficient
salve. Hence QA teams, and developers whose feedback loop involves running the
code and clicking stuff.

------
davnicwil
I think the most important point, which I'm not sure the article emphasises
prominently enough, is that the purpose of each is completely different, which
is of course why one doesn't replace the need for the other.

Testers' work is focused around the user. User input in, user outcome out.
Tests pass to the extent the user has a satisfactory outcome. How that outcome
was achieved with code is of no concern whatsoever.

TDD is a tool for engineers to have confidence in code. Function calls in,
function return values out. Tests pass when code behaves as expected. How that
code is then used as part of a wider system, and the outcomes of that, is of
no concern whatsoever.

------
Yhippa
As a former dedicated tester and now developer, I feel that dedicated QA may
not be as useful in the way software is developed these days. I like to view
TDD as "requirement-driven development". If you think of it that way, if
you're not testing your logic completely, then you have incomplete
requirements.

Having other developers peer review seems to be an improvement on the legacy
process of dedicated QA. It means you can't just throw commodity developers at
a project; you do need people with expertise. To round it out, you do need UAT
and acceptance testing, but those are still variants of requirement-driven
development.

~~~
dionian
The only requirements driving my unit tests are things that I'd like to be
independently testable, and there could be hundreds of those when implementing
a feature... or, if I know the code will be thrown away, maybe zero.

------
pqdbr
I checked OP's product page
([https://www.functionize.com/](https://www.functionize.com/)) and it got me
very interested.

However, unfortunately, there's no pricing information in sight.

If SpaceX can display its pricing upfront for launching an orbital rocket, why
do I have to talk to sales to find out how much a product like this costs?

Anyway, I took the time to come here and complain about this, but didn't
bother to fill out their "contact sales" form because it would be too much of
a hassle. Go figure.

------
kissgyorgy
The article is wrong on so many levels. First, there is no such thing as "TDD
developers" :D and yes, developers should write end-to-end tests. Developers
should also test code manually at the time of writing, to make sure they have
completed the job... And knowing the product you are working on is a very
sensible thing to do (although I hate doing it, because I find it boring), so
developers should also do that kind of "exploratory testing" occasionally.

There are a lot of problems with separating these roles (blame games,
unnecessary work, conflicts, cost); when you bring those tasks closer to
developers, there is no need for testers. They should develop software and
write the tests too!

Security? You are in big trouble when security is just an afterthought and
only testers think about it... Security is a mindset, constant vigilance and
learning. Sorry, but some process or tool won't solve that for you.

~~~
Frost1x
>First, there is no such thing as "TDD developers" :D and yes, developers
should write end-to-end tests. Also, developers should test code manually!

In principle I'm a fan of TDD. In practice, most environments are "agile" and
demand such frequent fundamental design changes that even well-developed tests
become irrelevant or incompatible and only hinder development, with the
overhead of interacting with the testing infrastructure on top (as if
developers didn't have enough to do already, they're often also covering
devops, QA, product owner, and project management).

Quick and dirty manual testing of whatever you're developing is often the best
you can get away with.

------
k__
Funny thing is, the less a tester knows about testing, the better they are. At
least this was my experience.

This is both good and bad.

Good because you can give basically anyone a testing protocol and let them
work through it. The new ones always find new bugs or inconsistencies.

Bad because testers have a half-life. If they do the job for too long, they
will miss regressions.

~~~
geoelectric
You have to rotate manual testers aggressively between areas to avoid that.

For exploratory, my take is rotating every release is usually OK. You need the
area expertise for good exploratory but don't want them to get jaded. Since
they roll their own experiments it doesn't happen as fast.

But scripted is far different. Using manual testers for scripted regression is
a bad idea for all kinds of reasons, but you honestly should never use them
more than once in a row in the same area. We're just too hardwired to
autopilot or cut corners when doing scripts once they're learned. But once is
OK, since the script is still novel, and once they've forgotten it later it
can be OK again until they relearn it. That's why crowdsourcing has actually
worked pretty well in that area.

But really, just don't use scripted manual. It doesn't work. Put people on
exploratory, put scripts in automation, don't pretend either can do it alone.

I don't think it's a function of career length, though, and that threatens to
be a bit ageist, to be frank. I'm quite sure I found more and better bugs over
the length of my primary-QA career, so long as I cared about the job and
didn't have to recite a script. Burnout times were different, but those aren't
age-bound.

~~~
k__
Your suggestions sound reasonable, but they wouldn't have worked for us, for
various reasons.

There was just one UI, so we couldn't rotate.

Automated UI tests were too brittle, so we had to use testers for the scripts.

~~~
geoelectric
I'm going to assume you're not talking about one simple single-page utility;
otherwise what I'm saying here is absolute overkill and it's a little weird
you'd have testing problems big enough to matter. But assuming you mean
something close to a modern production system with a UI, i.e. multiple screens
or tabs or panes or functions:

Re: UI, you typically decompose it into parts (the main menu is useful here,
or a use case/requirements tree) so that it can be tested in parallel by
multiple people. You'd rotate those assignments. If you weren't doing
functional decomposition of your UI, you were probably already a long way off
best practice.

Re: scripted manual UI tests, that's pretty much describing hosing your QA.
That's no reason at all, it's Antipattern #1 to avoid.

Brittle UI scripts happen when they're built by people who don't understand
the Page Object Model and other techniques for architecting and selecting UI
tests. UI scripts are naturally more _fragile_, but if you're finding them
_brittle_ (i.e., the scripts actually break often enough to get a reputation
and kill confidence), you're either trying to E2E stuff that you shouldn't be,
or you're writing them wrong and haven't mitigated their fragility according
to best practice around abstracting the UI under test.
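
To make the Page Object Model concrete, a hedged sketch in Python with
Selenium (the URL, selectors, and page names are all hypothetical):

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    class LoginPage:
        """The only place that knows the login screen's selectors; when
        the UI changes, you fix this one class instead of every script."""
        def __init__(self, driver):
            self.driver = driver

        def open(self):
            self.driver.get("https://app.example.com/login")  # hypothetical
            return self

        def log_in(self, user, password):
            self.driver.find_element(By.ID, "username").send_keys(user)
            self.driver.find_element(By.ID, "password").send_keys(password)
            self.driver.find_element(
                By.CSS_SELECTOR, "button[type=submit]"
            ).click()
            return DashboardPage(self.driver)

    class DashboardPage:
        def __init__(self, driver):
            self.driver = driver

        def greeting(self):
            return self.driver.find_element(By.CSS_SELECTOR, ".greeting").text

    # The test now reads as intent rather than as a pile of selectors:
    def test_login_shows_greeting():
        driver = webdriver.Chrome()
        try:
            dash = LoginPage(driver).open().log_in("alice", "not-a-secret")
            assert "Welcome" in dash.greeting()
        finally:
            driver.quit()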

There is no excuse in the world for leaving a brittle UI script in place, and
the mitigation for not covering with a UI script isn't manual scripted
regression testing, it's unscripted exploratory testing, preferably guided.
All that brittleness in the automation script is also confidence-killing in
your manual script because it represents environmental unknowns or non-
deterministic operations that were critical enough to break the script. The
human has those too, so now they're not on the script you thought you wrote
anymore. They make less noise and do their best but it's like an unconditional
exception handler. Nothing likely got fixed, just covered up.

With exploratory, you acknowledge up front that testers can use discretion, so
you're not tempted to pretend it's a day-over-day regression test and build
false confidence. It's not a valid regression strategy because people aren't
that consistent, but it's an excellent new-bug-finding strategy because you
look at new features immediately, and approach old features in new ways to
break your own boredom. You're probably doing "unintended exploratory" right
now if you're attempting manual scripted testing. That's a bad thing; it has
all the "not valid regression" and "false confidence" issues.

The best thing about moving away from manual scripted testing: you save a
_ton_ of money by not trying to maintain a library of Step/Verify scripts that
break Single Point of Truth by repeating the same flows all over the place,
and that therefore don't maintain worth a damn. Instead you test against the
docs, so you get to dogfood those too.

My strong suggestion, were you to consult with me for QA/automation, would
very likely be to take 20% of your time and build better scripts to dig
yourself out, assuming this is a process you're keeping around. I can't say
that for sure because I don't know your company or product, of course, but
around the time you have testing pain is around the time to do this.

Were I to use my typical strategy for resetting UI testing, it'd be:

1. Start by cutting all the non-P0 cases. Most systems end up with basically
one P0 case per major advertised feature, almost always happy path, so that's
usually less than ten things where, if they don't work, you can't ship the app
at all, no matter how much time it takes to debug. Automate those scenarios.
That's now your core UI acceptance suite. If you make it very small and limit
it only to things that can never, ever break, you may even be able to use it
in CI.

On another note, while they aren't at all common anymore, record/replay UI
automation systems are actually useful for this. With so few test cases you
can scrap and re-record throughout the cycle if you need to. Just don't try to
keep them past initial bring-up; they're unmaintainable. At some point, write
them programmatically for long-term ownership.

2. Only introduce P1 UI automation if you can justify the extra cost over P0
after letting it ride for a release cycle--P0 should be near-zero cost if
you're past alpha, or you did them wrong and need to revise those first before
multiplying the design errors. If you do introduce UI P1s, tag or suite them
differently (see the sketch after this list). They're your extended UI
acceptance suite. Don't block builds with it; you can't fix the tests fast
enough. Run them nightly (and no record/replay here).

3. Don't even bother UI-automating (or manual-scripting) P2 and P3 stuff
unless you have nothing else to do and you want more test maintenance. For
most teams it's a simple decision: you do, and you don't, respectively, so
don't test those E2E.

4. Don't script negative tests outside simple partitions like
boundaries/numbers/character types/etc. Domain-specific negative tests usually
only test one narrow scenario. Chances are overwhelming that even if you have
a defect, they'll miss it, and if it's not in a documented flow it doesn't
tend to get fixed fast. Those are naturally P2/P3 unless they're
app-market-killers.

5. Only script people for one-offs and short-term crutches; otherwise let them
use their brains. Look up and implement guided exploratory testing, and write
up mission targets to cover the other areas manually rather than with scripts.
Rotate those testers between different missions at least once per release, if
not more often.
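
As mentioned in item 2, one lightweight way to get that P0/P1 split is test
markers. A sketch with pytest (the marker names and tests are invented):

    # pytest.ini
    #   [pytest]
    #   markers =
    #       p0: core acceptance -- tiny, stable, allowed to block the build
    #       p1: extended acceptance -- run nightly, never blocks the build

    import pytest

    @pytest.mark.p0
    def test_user_can_log_in():
        ...  # one happy-path scenario per major advertised feature

    @pytest.mark.p1
    def test_profile_photo_upload():
        ...  # extra coverage that earns its keep after a release cycle

    # CI gate:  pytest -m p0
    # Nightly:  pytest -m p1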

Edit: I will say that I have a soft spot for crowdsourced UI testing services
if you _have_ to script, because of the novelty guaranteed by a gig system and
because they present to your process basically as meat automation. But
generally, no, you're not better off with manual scripted testing. You'd be
better off skipping it entirely so you can be properly nervous about your lack
of effective testing and weigh the need accurately.

TL;DR: You should review your processes; you're hinting at the exact mistakes
that have gotten QA relegated to an expensive cost center in most companies.
Yours sounds a little broken, and you may have normalized that.

~~~
k__
Thanks for that extensive explanation. I'm not working there anymore.

First we had no tests and everything was crap.

Then we did scripted tests and things got pretty good.

Then things got worse because of the regressions, once one person had been
doing the tests for too long.

In the end we settled on students for a semester each, because they were
really cheap and didn't want to do that job for long anyway.

~~~
geoelectric
It was interesting to put words down at any rate, so thanks for the
opportunity.

Yeah, it really doesn’t take manual scripted testers long to get jaded and cut
corners.

------
exdsq
Testing is so much more than helping write perfect functions. It’s
benchmarking, it’s ensuring the right software is written, it’s helping
developers feel confident with their releases when working under tight
deadlines, it’s exploratory testing across entire systems when developers
might only focus on a subsystem.

~~~
necovek
All of that can be done with automated tests too, though agreed, they would
not be (called) unit tests (which are only used to prove we've got our
"perfect functions").

Of course, some of them could be unit tests as well (I've written small
performance integration tests before, doing things like ensuring a function
only ever emits a single SQL query, timing execution to ensure it scales in an
O(1) manner, or checking that data is passed by reference instead of being
copied... where performance was a critical property of a "perfect function").
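
For instance, the single-SQL-query check can be a plain assertion in the test
suite. A sketch with Django's test tools (`build_invoice_summary` and its
module are hypothetical stand-ins for the function under test):

    from django.test import TestCase

    from billing.services import build_invoice_summary  # hypothetical

    class InvoiceSummaryQueryCountTest(TestCase):
        def test_summary_emits_a_single_query(self):
            # assertNumQueries fails if the block executes more (or fewer)
            # SQL queries than declared -- an N+1 regression breaks the build.
            with self.assertNumQueries(1):
                build_invoice_summary(customer_id=42)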

------
timwaagh
Testers are very useful for political purposes, so I would not scrap them even
if they are kind of pointless after dev testing, unit testing, automated
functional testing, reviews, automated coverage checks, etc. Most testers
aren't really automation specialists. Nor are they hackers (sorry, penetration
testers). Most of them have pretty light technical backgrounds. They just
check and sign off on a particular piece of functionality, acting as a
lightning rod for the boss's anger when stuff goes south down the line.

------
afarrell
> TDD developers rarely write end-to-end integration tests.

The only coherent definition I can find for a "TDD developer" is someone whose
productivity is so closely tied to starting any development by writing an
automated test that it might be wise to make it part of their identity.

I'm projecting: due mostly to inattentive-subtype ADHD, this describes me.

...but why wouldn't I want to _start_ by writing integration test scripts? And
why wouldn't I want to pair with a test engineer to do it?

------
greencar
> The biggest benefit of TDD is that it removes the fear of breaking your
> code.

This isn't true at all. TDD forces you to write unit-testable code, which will
have lower cyclomatic complexity and looser coupling, and will generally be
easier to maintain.

Writing tests second always leads to a suite that costs more to maintain than
the value it provides.

And unit tests suck at catching bugs anyway; it's really about writing better
code up front, which will have fewer bugs and inspire higher confidence by
definition.
------
julius_set
Can someone address this argument:

Writing tests for everything (or almost everything), as TDD proposes, results
in... more code to maintain.

This is the argument I've observed most often in teams, and it resulted in
little to no tests being added.

I've seen the above in both small teams (startups) and large orgs.
~~~
ThrowawayR2
Yes, more automated tests (TDD or not) do imply more code. The question is
whether building and maintaining that code is cheaper than the fallout from
the bugs that get through to the customer. And the answer to that is, after
the team or product grows past a certain size, that equation always tilts in
favor of having the tests.

------
lowbloodsugar
> Test-driven development was supposed to eliminate the need for independent
testing.

Citation needed.

On the contrary, it frees up QA humans from writing "tests for code bugs" and
gives them more time to evaluate "but does it do what customers want?"

------
11235813213455
What about a third option: lots of automated e2e tests (with Puppeteer or
such)?

~~~
geoelectric
You can find a lot of bugs with e2e tests, but they're frequently not
actionable because you can't figure out which part of your stack is causing
them. Being non-actionable is one of the least-understood reasons to skip a
test, and ironically the most important one for keeping focus. E2e tests also
require very high maintenance if they're going to be sensitive enough to be
useful, since any stack change potentially means re-validating them.

Read up on the test pyramid for more. E2E tests are the least useful during
the dev stage, and more useful for exercising canary P0 use cases that can
never be compromised by any stack bug without it being a stop-ship until you
find it. They aren't diagnostic or robust enough for dev, but for a canary
role their fragility is a plus.

~~~
pydry
>you can't figure out which part of your stack is causing them

If your debugging tools are halfway decent you certainly can.

~~~
geoelectric
On a multi-collaborator large system with half the behavior off in AWS?

You're either dreaming, or you haven't worked on anything complex enough to
see it yet. E2E means system test, whether or not you think it does, including
the back end. In almost any real software in 2020 that means multiple
debuggers, monitors, logs, god knows what.

But if you're testing a monolith, sure. Your system is one unit, so unit-test
your monolith and enjoy your debugger.

If you're testing a monolith outside of something like embedded or basic app
testing, quit, because the company you're working for built a monolith.

Inactionable doesn't mean "not possible", it means "more trouble than it's
worth." That's why it's advisable to do E2E on P0 cases and not really
advisable to do it for P3 cases. You can do tooling work to move the bar for
how much trouble you have to put in, but "try harder" misses the point.

~~~
pydry
In such a system there are typically two kinds of E2E: E2E for the team's
service, and end-to-end for the entire system.

If you can't write the former with acceptable debugging tooling behind it,
your tests suck and you should rewrite them.

If you are a team writing a microservice in an ecosystem of microservices, you
need to write E2E tests that mock the systems external to yours and have very
clear agreements/contracts about how they interact, as _well_ as a QA team to
test the system as a whole, because otherwise each bug that crosses a layer
degenerates into an orgy of finger-pointing. Ideally, debugging at that scale
should be rare and done with members from multiple teams.
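
A sketch of what "E2E for the team's service with externals mocked" can look
like in Python, using the `requests` and `responses` libraries (the rates API
and the function under test are invented):

    import requests
    import responses

    def fetch_exchange_rate(currency):
        # Our service's code, calling a collaborator we don't own.
        r = requests.get(f"https://rates.example.com/v1/{currency}")
        r.raise_for_status()
        return r.json()["rate"]

    @responses.activate
    def test_fetch_exchange_rate_follows_the_contract():
        # The external system is mocked to the agreed contract, so a failure
        # here points at *our* code, not at a collaborator or the network.
        responses.add(
            responses.GET,
            "https://rates.example.com/v1/EUR",
            json={"rate": 1.08},
            status=200,
        )
        assert fetch_exchange_rate("EUR") == 1.08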

~~~
geoelectric
In almost every org I've been in, E2E meant full-thickness
UI-to-datastore-and-back tests. That's where I was coming from.

I've never considered the single-service tests E2E; they're more a component
test suite or a public-API unit test suite (same thing).

I agree you'd want those with collaborators mocked, and it's where I'd lean
the bulk of my test strategy for a system like that from a QA POV (as well as
promoting internal API unit testing from a dev POV).

------
blackrock
TDD kills creativity.

~~~
necovek
Please elaborate!

I'd really love to learn how a method for creating a particular code structure
leads to less creativity! Sure, it doesn't really allow you to be creative
with e.g. global variables, but I assume you mean the end result (product and
product ideas), so I am genuinely curious.

~~~
blackrock
Software development is a very creative endeavor.

It’s the interplay of data structures and algorithms that brings a program to
life.

Oftentimes, you know what needs to be solved: you have to get from point A to
B. But you don't really know how to get there. So you break your problem down
further, and A to B gets split up into 5 more pieces. And each piece gets
split up into another 5 pieces. So in the end, you may have 25 or so
individual pieces.

Now, how do you TDD this?

There is no way that you can know what all those little pieces are ahead of
time. And if you do, then maybe your program is not really that difficult. Or
maybe it was already a solved problem, and you can use an existing example to
learn from.

However, you can do functional unit testing to ensure that those individual
pieces do exactly what they were expected to do.

TDD, or Test-Driven Development, puts the cart before the horse. It requires
you to build all this testing scaffolding for something that might not work.
Granted, it may work, but it adds a tremendous amount of overhead to solving
something.

You still need Functional Unit Testing of course. This is the magic that keeps
software reliable, and humming along.

~~~
necovek
That's exactly how TDD does not work.

You work against requirements (A to B), and you solve the problem
incrementally.

The main mantra of TDD is: refactor constantly. That means you are never
writing tests for things you do not yet know you need; you are building up
your solution from small parts that get turned into more complex parts (and
get refactored) as you go.

And with just a little bit of practice, that refactoring effort becomes second
nature and adds no overhead at all: you are just expressing your iterative,
creative process through code.
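
A compressed illustration of that loop (the shopping-cart problem is
invented): each test states one small requirement, the implementation grows
just enough to pass it, and refactoring happens with the earlier tests as a
safety net.

    # Requirement 1: an empty cart costs nothing.
    def test_empty_cart_costs_nothing():
        assert total([]) == 0

    # Requirement 2, added later, forces the implementation to grow.
    def test_items_are_summed():
        assert total([10, 25]) == 35

    # Requirement 3: a discount rule arrives; the earlier tests guard
    # the refactoring it triggers.
    def test_orders_over_100_get_ten_percent_off():
        assert total([60, 60]) == 108

    # The implementation after a few red-green-refactor rounds; cruder
    # versions were rewritten while every test above kept passing.
    def total(prices):
        subtotal = sum(prices)
        return subtotal * 0.9 if subtotal > 100 else subtotal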

------
marcosdumay
Wait, what?!

Writing tests before you code was supposed to require less testing than
writing the tests after you code?

Also a large red flag: are testers supposed to make your program secure? How
would that even work?

I ought to give up on clicking on TDD articles at some point. People pushing
it seem to come from some parallel universe completely different from here.

~~~
muricula
Security analysis is usually framed as finding bugs. There are a few main ways
of doing this when searching for bugs in C/C++ code:

* Design consultations early in development to address insecure or risky designs.

* Reading risky parts of the code line by line hunting for buffer overflows and many other things. If you don't have access to the source, you can read the disassembly too, and people find a lot of bugs doing that!

* Fuzzing. Basically hooking up the program to a harness which throws random data at its attack surface until the program crashes, although it can get a lot more nuanced than that.

* Running static analysis tools to search for problematic code patterns.

The C/C++ code which powers all major operating systems and web browsers is
rife with security issues. There are also whole other sub-disciplines auditing
websites, corporate networks, and cryptography implementations, which
generally use analogous techniques.
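
To give a flavor of the fuzzing bullet: the classic tools for C/C++ are
libFuzzer and AFL, but the same idea is sketched here in Python with Google's
Atheris fuzzer (the `parse_record` function is a stand-in for real
input-handling code):

    import sys
    import atheris

    def parse_record(data: bytes):
        # Stand-in for code that handles untrusted input; the fuzzer's job
        # is to find inputs that make code like this crash.
        text = data.decode("utf-8", errors="ignore")
        key, _, value = text.partition("=")
        return {key: value}

    def TestOneInput(data):
        parse_record(data)  # any uncaught exception is reported as a finding

    atheris.Setup(sys.argv, TestOneInput)
    atheris.Fuzz()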

------
ping_pong
This person is giving way too much credit to QA. At least in enterprise
software, which I worked on for 20 years, QA was mostly useless. In SaaS,
having developers own their entire code, from unit tests all the way through
to deployment, has in my experience yielded the best code quality I've seen.
Also, I don't consider security to be QA; that's a specialty in my opinion,
much as the article says.

The normal development cycle in enterprise shrinkwrap software was that you
would take a 12-18 month release cycle and plan out X features. QA would come
back and say "we can only test 6 of those features", so the feature list gets
cut. Then, as you develop and finish the features, you don't hear back from QA
because their cycles differ from the developers', except maybe a few weeks
before the end of the release cycle, when you get hit with a flurry of bugs,
because that's when QA is testing most of the code. Then more features get cut
because of the high bug count, and then even more features get cut, and you
end up with 2 out of 10 features. This is how most development occurred all
the way up until I left enterprise software about 10 years ago.

The best way is to cut QA out entirely and kill it as a career path. Give
developers the entire burden of coding and testing, including end-to-end. I
personally dislike TDD and would never join a company that developed using
TDD, but that's just my opinion. Obviously SaaS vs. shrinkwrapped software is
different, but overall, the best code quality I've seen is at my latest
company, where the developers owned everything. The buck stopped with you, and
it behooved you to write good-quality unit tests and integration tests.
Cutting out QA forces developers to own their code, and removes the wall where
developers would code, write some small tests, then throw it over to QA and
not look at their feature for weeks or months.

~~~
JohnBooty
That sounds like a particularly toxic relationship with QA. I've had quite a
few positions where QA was very helpful.

In those cases, QA interfaced with users/customers and had a great level of
knowledge of how people actually _use_ the product, and often had a more
holistic view of how the entire suite of features works together, compared to
developers who had more specialized knowledge of various specific bits.

Now, I would agree that _ideally_ developers should get lots of hands-on
experience with how customers actually use the software.

And I would also agree that _ideally_ developers should have a holistic view
of the entire suite of software features, rather than being silo'd.

I definitely aim for those things as much as possible, but it's not always
practical. Both of the things I listed above add up to a fulltime job, or
multiple fulltime jobs if the product is large enough. Usually engineers are
in short supply and working mad hours already.

I would also like to point out that QA engineers are really vital when it
comes to developing games. I do not work in the game industry, but I have a
friend who is a QA engineer lead. Games are complex realtime systems and it
can take a LOT of work for those folks to find and come up with reproducible
cases for bugs.

