QA needs to be a role in a team, not a separate team. When there's a "QA team", they inevitably end up in an adversarial relationship with development teams. The devs will lie to QA, or withhold the full truth, for a number of reasons: they don't want to be nagged about small things; they don't work for QA, so QA has no direct authority; they only interact when things are broken, so the entire relationship is based on negativity; QA comes after the fact and it's too late to change things (or at least too late to change them the right way); QA lacks the engineering background to understand what it is that they're testing (so they test the superficial things and don't test the showstoppers); etc. There are tons of reasons a separate QA team creates an us-vs-them situation where the path of least resistance is deception and hoping no one finds out what's really broken.
It's ok to have a QA 'team' that represents a group of QA specialists and focuses on their unique interests and areas of responsibility, but the members of that team should be embedded in the development teams, working alongside them as they develop: helping to write test plans, identify problem areas before the code is written, clarify acceptance criteria, etc. If there's a "QA manager", that person should be more like a mentor, or the kind of good PM who runs interference and supports the development effort, but isn't actively directing the development effort.
If your QA people aren't in the room (covid aside) with the developers, and are instead all gathered together in a separate "QA unit", you've already lost. By the time they identify problems, it's too late to fix them well, and they'll never change the development culture into one that learns to produce fewer bugs in the first place.
I disagree, and partly because of the reasons you describe. Sometimes the devs are so used to the way the software works (they built it, after all) that it doesn't even occur to them that it sucks for people who aren't starting with a mental model of the system's internal state. Likewise, I've met devs who are apparently happy to ship... poor... software - sometimes an adversarial relationship is proof that the system is working.
I agree, this model seems to work the best for our team. I've worked with different "agile" teams, both with a QA member embedded in the team and with separate QA teams. With both models, either the relationship between dev and QA becomes toxic or the QA person is just a rubber stamp, depending on the team dynamics. I found that taking away the safety net of QA from developers actually results in better ownership of product quality by the development teams, because now dev teams don't have someone else to blame for not testing enough. This has the added benefit that developers will automate most of the testing, because they generally don't want to spend all their time doing manual tests of their code.
I don't think it's inevitable that the QA/dev team relationship be adversarial. I've been at places where it was, sure. But, at other places, QA and dev worked together blamelessly to improve the quality of the software. Like most things, I think this comes down to establishing a culture of collaboration vs competition, and recognizing that the QA manager isn't just "the person who is always delaying the releases."
I'm sure mileage may vary, but I'm lucky to be at a company where the dev team has a great relationship with QA. I think you're right, having a good culture around it is important. I'm more impressed than I am annoyed when QA finds an obscure bug before things hit production.
I agree with all of this, except I will add that having a "QA Infrastructure" team does make sense. This team should not be responsible for product testing, but for building the cross-team infra and tooling needed to make it easier for the product engineers to write integration/functional tests. Of course, this is only needed once a company is above a certain size.
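To make the "infrastructure, not product testing" distinction concrete, here's a minimal sketch of the kind of shared tooling such a team might own (assuming Python/pytest; the fixture name, environment variables, and sqlite stand-in are all hypothetical), so product engineers can write integration tests without rebuilding the environment plumbing every time:

```python
# conftest.py -- hypothetical shared test infrastructure, illustrative only.
# Owned by the QA-infrastructure team; product teams just depend on the fixture.
import os
import uuid

import pytest


@pytest.fixture
def service_env(tmp_path):
    """Provide an isolated, throwaway environment for one integration test."""
    env = {
        # Assumption: a sqlite file stands in for the real database in this sketch.
        "DATABASE_URL": f"sqlite:///{tmp_path / 'test.db'}",
        "RUN_ID": uuid.uuid4().hex,  # unique per test run, useful for log correlation
    }
    previous = {key: os.environ.get(key) for key in env}
    os.environ.update(env)
    try:
        yield env
    finally:
        # Restore whatever was there before, so tests don't leak state into each other.
        for key, value in previous.items():
            if value is None:
                os.environ.pop(key, None)
            else:
                os.environ[key] = value
```

A product team would then just write `def test_checkout(service_env): ...` and get isolation for free: the infra team owns the fixture, the product teams own the assertions.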
Sadly, I have seen too many instances where QA integrated into development teams becomes a rubber stamp, if not just a replay of the unit tests the developers already ran themselves. While the team may not feel that way, the support staff will certainly make it known through the water cooler.
Some projects are sensitive enough that a separate team which can do full regression testing, along with proper testing of the deliverable, is required. Too many projects shortchange what is actually tested, and this leads to stress on the support teams, which in turn feeds back to the development teams.
Another big problem is outsourcing QA. It's a role that requires good communication, which has a high chance of breaking down due to cultural differences and use/understanding of terminology.
As for the 'adversarial' relationship, I'd say it depends on how both teams are managed and how well they communicate. Until recently I was in exactly that situation (a dev team and a dedicated QA team) and I never felt this, partly because the bugs found by QA would otherwise have been found by the clients, and no one at work wants that, and partly because the two managers worked very well together.
> The devs will lie to QA, or withhold the full truth, for a number of reasons
Avoiding a QA silo doesn't automatically fix the root issue. They'll do this with their managers or teammates too if things are sufficiently FUBAR.
As a developer, a good QA team is worth their weight in gold to me. Reach out proactively and engage them about new features you've built that you want 'em to hammer on - they'll be better at finding all the edge cases you didn't think about than you will. By engaging them early, you're more likely to get bug reports for code you still remember, and less likely to have them filing bugs at the 11th hour right before launch, where your manager might plead for yet more crunch and overtime. If they're thoroughly testing my code for me, that's less time I need to spend carefully hammering my own code to avoid future blame. They'll help me hammer out good repro cases for strange bugs, and test on more varied hardware with different timings and usage patterns.
> they don't work for QA so QA has no direct authority
If devs aren't held to account for shipping broken shit, embedded QA won't necessarily fare much better.
> QA comes after the fact and it's too late to change things (or at least too late to change them the right way)
You can have quick turnarounds with QA teams. They definitely need to be able to get their hands on builds quickly enough to provide devs useful feedback before shipping, though, and can't be siloed to the point of blocking direct communication. Even external QA teams employed by another company can succeed here.
> QA lacks the engineering background to understand what it is that they're testing (so they test the superficial things and don't test the showstoppers);
Hire better QA and/or educate your existing QA better. They can chase superficial things and checklists, but if you can't explain enough of the basics for them to help chase down a crash or misbehavior you might be worried about, you've got a problem. Poor documentation, poor communication, poor guidance, poor understanding, too much bureaucracy... something fixable.
Perhaps the structure of game development has some tips here - we've often got multiple QA teams:
1. Internal studio-wide QA teams, which help chase down bugs internally so you don't have to - probably in the same building, at least. Rarely gatekeepers in and of themselves, but they do have the ear of your production staff.
2. External second party QA teams - possibly contractors, possibly employed by your publisher, maybe in another city entirely. Slower turnaround, but more vast and a fresh set of eyes not ruined by being too close to the project for too long. Can also include not-quite-"QA" specialty stuff like usability testing. Brought on later into the project.
3. External third party QA teams. Employed by the console vendors, not by you or your publisher. These guys can throw a wrench into your release schedule, and then charge your company for the privilege of having another go if you've failed the certification process a few too many times (or at least, they could at one point.)
Even if you have an adversarial relationship with the console vendor's QA teams, you'll buddy up to your second party and internal QA teams to find enough of the bugs to ship on time with minimal crunch if you have the slightest lick of sense, even if you haven't the slightest bit of shame at shipping broken things. They're the ones who will help you avoid failing certification, after all.