
Yahoo’s Engineers Move to Coding Without a QA Team - teklaperry
http://spectrum.ieee.org/view-from-the-valley/computing/software/yahoos-engineers-move-to-coding-without-a-net
======
ef4
Surprised to see the negativity here. I have worked in environments with
traditional manual QA, and environments where all development is test-driven
and nobody is allowed to merge a feature that lacks automated test coverage.

Both the productivity and the quality were higher in the places with fully
automated testing. Which is not shocking at all: does anybody really think a
human can run through 800 test cases better than a computer can?

It's not a magic way to save money -- the developers obviously end up spending
time writing tests. But the long-term value of those tests is cumulative,
whereas the effort spent on manual testing is spent anew every release.
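The cumulative-value point is easy to see in a small sketch. These are hypothetical pytest-style tests around an invented discount function: they cost effort once, then re-run for free on every release, whereas the equivalent manual check is re-executed by hand each time.

```python
def apply_discount(price, percent):
    """Apply a percentage discount, never going below zero."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return max(0.0, price * (1 - percent / 100))

# Written once; every future release gets these checks for free.
def test_discount_basic():
    assert apply_discount(100.0, 25) == 75.0

def test_discount_never_negative():
    assert apply_discount(10.0, 100) == 0.0

def test_discount_rejects_bad_input():
    try:
        apply_discount(10.0, 150)
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError")
```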

Manual review is still good for noticing things that "feel wrong" or for
helping think up new corner cases. But those bleed into product owner & design
concerns, and aren't really a separate function.

~~~
arielweisberg
Moving from an environment with a 10:1 dev:QA ratio to one with 2:1 showed me
what happens when dev is not responsible for shipping working software.

No thanks. It's a bunch of deflection and diffusion of responsibility, coupled
with high-latency, flaky interactions between different teams. Everything that
can slip through the cracks does slip through the cracks.

I'm sure QA can be done well, but I am convinced that giving your devs a pass
to not finish their work is a dead end in several dimensions.

~~~
jboy55
I'm in full agreement.

In practice, a manual QA team encourages devs to throw shit over the wall and
expect someone else to do the basic sanity checks they should have already
done. By the time those checks are done, whatever the QA team theoretically
could have found has already shipped.

Then, when it's discovered in prod, the QA team gets in the way of a speedy
fix.

~~~
DougWebb
That's a management problem, not a problem with having a QA team. The top-
level QA manager and the top-level Dev manager should both report to the CTO,
and the Dev manager should be judged by how many issues the QA team finds. To
keep things fair, the QA manager should be _neither_ rewarded nor penalized
based on the number of issues found pre-release, and everyone should be
rewarded or penalized based on the number of issues found post-release.

This keeps the devs incentivized to make sure everything works before the code
goes to QA, and it keeps everyone incentivized to eliminate as many bugs as
possible before release.

~~~
vkou
What will happen is that devs will ask their QA counterparts to report issues
through an undocumented side-channel.

If you give an engineer a career incentive to optimize something, you'd be
surprised how seriously some will take it.

~~~
jboy55
Incentives often cause really nasty political wars within orgs.

When a site gives a 500 because a database went down and the web app couldn't
connect to it... is that a bug, a missed test case, or should ops take the hit
for the downtime? Furthermore, if you argue that the dev team should have
reasonable failsafes in the code for connecting to a db, then in a 10-year-old
organization, should the current dev team pay for something that could have
been in the code for years?
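To make the db example concrete, a reasonable failsafe is a retry wrapper with backoff. This is a generic sketch (the names are invented), not a claim about what any particular codebase did:

```python
import time

def connect_with_retry(connect, attempts=3, base_delay=0.5):
    """Try to open a connection, backing off between failures.

    `connect` is any zero-argument callable that returns a connection
    or raises on failure (e.g. a wrapper around a driver's connect()).
    """
    last_error = None
    for attempt in range(attempts):
        try:
            return connect()
        except OSError as err:
            last_error = err
            time.sleep(base_delay * (2 ** attempt))  # exponential backoff
    # All attempts failed: surface the error so the caller can return
    # a clean 503 instead of an unhandled 500.
    raise last_error
```

The point of the sketch is that the failsafe is a few lines of shared infrastructure; the organizational question of who pays for its decade-long absence is the hard part.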

If you set up a system where devs hand off to QA, who then hand off to ops to
deploy, every step is incentivised somewhat against the others. Even if all
teams are equal, it ends up, in my opinion, on the path to CYA and Waterfall.
Everyone is more concerned with problems not being 'their' fault than with
shipping good code.

~~~
jrumbut
You're absolutely right, assuming we're talking about business/consumer apps
and not pacemakers or rocket ships.

An incentive/penalty system for bugs without an incentive/penalty system for
features completed leads to paralysis. And a complicated incentive system
leads to game playing over productivity.

It's an old saying: be careful what you measure because you'll get a lot more
of it.

~~~
jboy55
I'm beginning to think that the best incentive system is just a good base pay,
with an emphasis on personal learning and growth. Have engineers strive to
become better engineers by taking courses, attending meetings and publishing
content. Quality will fall out of that as a positive side-effect.

------
chojeen
I worked at Yahoo before and during this period, first as a QA contractor and
then as a full-time developer.

Before the switch, our team (advertising pipeline on Hadoop) used the
waterfall method with these gigantic, monolithic releases; we probably
released a handful of times a year. Almost without exception, QA was done
manually and was painfully slow. I started to automate a lot of the testing
after I arrived, but believe you me when I say that it was a tall order.

Soon after I moved into development, QA engineers without coding chops were
let go, while the others were integrated into the development teams. The team
switched over to agile, and a lot of effort was made to automate testing
wherever possible. Despite some initial setbacks, we got down to a bi-weekly
release cycle with better quality control than before.

Around the time I left, the company was mandating continuous delivery for all
teams, as well as moving from internal tools to industry-standard ones like
Chef. I left before it was completed, but at least as far as the data pipeline
teams were concerned, the whole endeavor made the job a lot more fun,
increased release quality, and weeded out a lot of the "that's not my job"
types that made life hell for everyone else.

~~~
stevoski
> the whole endeavor made the job a lot more fun

That is an important part of producing quality output in any job, I believe.
The more employees actually enjoy what they are doing (or at least, don't
actively hate it), the better their output is likely to be.

------
mstade
I don't think it's that cut and dried. I've worked in some places where the QA
team was useless, meaningless red tape to get your stuff deployed. They
wouldn't do much but sign off on deployment _at some point_, yet bore no
responsibility if shit hit the fan. In these cases, they really were just an
unnecessary cost and you learned pretty quickly to make sure your tests were
in place, that you were testing for the right things, and so on.

But then there were the other QA teams. The people who would just reject your
stuff outright if it didn't have tests (no matter whether it worked), and who,
once the tests passed, would look at things truly from a customer perspective.
They would ask really uncomfortable questions, not just of developers, but of
designers and business alike. They had a mindset different from that of those
creating things; they were the devil's advocate. These people did much, much
more good than harm, and they are few and far between. Unfortunately, while I
believe they were incredibly valuable, business thought otherwise when cuts
came around.

~~~
csours
"When testing, you are the headlights of the project. ... Testing is done to
find information. Critical decisions about the project or the product are made
on the basis of that information." [1]

I recently became a software tester, and I really didn't understand the role
for quite a while. Is my primary responsibility finding bugs? Logging defects?
Analysing requirements documents? Writing test scripts? Writing status
reports?

Answer: Do enough of each to fulfill your goal of gathering and sharing
information with your management and dev groups.

If the software tester has problems testing, then the customer will have
problems using it, and the company will have problems supporting it.

1. Kaner, Cem; James Bach; and Bret Pettichord. 2001. _Lessons Learned in
Software Testing_. Wiley.

~~~
bboreham
The goal of a QA person should be "shipping great software".

~~~
csours
I agree that's the goal, but what is the role? What are the contributions that
QA provides? That's what I had trouble understanding.

------
minimaxir
The stereotype in Silicon Valley that QA is pointless is persistent and
actually annoying. There will _always_ be issues that even the most
comprehensive test suite will miss.

Startups still glorify Facebook's "Move Fast and Break Things" without noting
that Facebook has backpedaled from that. After all, people expect startup
software to have issues, so what's the harm? Technical debt? Pfft.

Engineers are not the best QA for their own code, since they may be averse to
admitting its errors. QA engineers are not as emotionally invested.

Disclosure: I am a Software QA Engineer in Silicon Valley.

~~~
dragonwriter
The QA function is not pointless.

Whether a distinct QA team is the best means of performing the QA function is,
however, a separate question.

~~~
tostitos1979
I was on a large, fast-moving project with dozens of components. Some genius
decided that devs would do all QA, automated testing, blah blah. It was a
disaster.

The insane deadlines required devs to write lots of poor-to-average quality
code (we tried code reviews and pair programming... there was no time for
that, so it fell by the wayside). The automated testing done by devs was
terrible, but understandably so. If you are up until 2am hacking out code
(without any precise requirements), then why bother with testing? We ended up
having one (somewhat) central component that had "gating tests". Everyone
stuck their tests there. That made things worse, since that one component was
the one that kept seeing failed tests. The PMs did frantic "user-like" testing
before demos. You can imagine how fun that was.

In my opinion, the decision not to have a dedicated team of testers was the
big mistake in all of this. When you have many teams, many components, and no
precise requirements, you need an independent QA team to coordinate and
prevent people from passing the buck. In a time-critical project (what
projects are not time-critical today?), you don't have precise requirements
and devs have to "sling" code. I accept this reality. But I don't accept the
"no QA will make you mature devs" stupidity. If I were being a mature dev, I
would refuse to code until the requirements were clear. None of this 2-week
agile-scrum nonsense.

Oh, and one other big thing. The project was a cloud project that needed to be
up 24/7 while we were developing it (for beta users). It was like going from
one outage to the next. What a disaster!

So what I learned is this: not only have a QA team, but have a 24/7 QA team
for the kind of project I was on. Note: not all projects are the same!

~~~
dragonwriter
Actually, it sounds like the main problem for the kind of project you were on
was unrealistic timelines, not the lack of a QA team.

And I don't think an independent QA team helps to coordinate people and avoid
buck-passing. IME, the more teams are essential to delivering a piece of
software to the customer, the _more_ opportunities there are for buck-passing,
and the higher the level of management at which the buck-passing occurs.

------
smithkl42
Our approach - in a _much_ smaller company - is that all stories should have
automated tests before they head off to QA. QA's job is to make sure that the
_story_ in question works correctly, not to find regression bugs. If QA finds
a bug in the story, we write a test to catch it before we resubmit.
Over time, we have enough tests at all levels of the system that QA doesn't
generally need to worry about regressions: just making sure that the latest
story works as advertised.

This approach allows us to stay agile, with small, regular releases, while
also making good use of what QA folks are actually good at.

------
diivio
This isn't surprising.

Microsoft switched to this model a few months after Satya took over.

For the majority of Microsoft teams it worked really well and showed the kinds
of results mentioned in this Yahoo article. Look at many of our iOS apps as an
example.

But for some parts of the Windows OS team apparently it didn't work well
(according to anonymous reports leaked online to major news outlets by some
Windows team folks) and they say it caused bugs.

First of all, I think that argument is semi-BS and a cover-up for those
complainers' lack of competence in testing their code, which makes them bad
engineers, because a good engineer knows how to design, implement, and test
their product, imo. But I digress.

I in no way want to sound like a d**k, but as an engineer it is your
responsibility to practice test-driven development. But that's not enough.

As with reading an essay, you usually can't catch all of your own bugs, and
thus peer editing, or in this case cross-testing, is very useful.

You should write the unit tests and integration tests for your feature

BUT

There should always be an additional level of end-to-end tests for your
feature, written by someone else who is not you.

Everyone should have a feature and design and implement it well, including its
unit tests and integration tests, BUT they should also be responsible for the
E2E tests for someone else's feature.

That way everyone has feature tasks and test tasks, and no one feels like they
are only doing one thing or stuck in a dead-end career.

~~~
jeswin
> I in no way want to sound like a d**k but as an engineer it is your
> responsibility to practice test-driven development but that's not enough.

One of the things that put me off when it comes to TDD is that it has always
been a bit like religion.

What matters is whether the tests exist, not when they were written. I'd even
argue that writing a test first and then being constrained by that box is a
bad idea. Write the most elegant code first, and then write tests to cover all
paths. You're likely to understand the problem better after the code is
written.
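A tiny illustration of that tests-after approach (the function and tests here are invented for the example): write the code first, then add one test per path, so coverage ends up complete even though no test existed before the code did.

```python
def classify_age(age):
    """Toy function with three paths, written first."""
    if age < 0:
        raise ValueError("age cannot be negative")
    if age < 18:
        return "minor"
    return "adult"

# Tests written afterwards, one per path, so all branches are covered.
def test_negative_raises():
    try:
        classify_age(-1)
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError")

def test_minor():
    assert classify_age(17) == "minor"

def test_adult():
    assert classify_age(18) == "adult"
```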

~~~
darkr
> What matters is whether the tests exist, not when they were written

Technically true, but for me at least: when I do TDD, I tend to write more,
and better, tests. When I write tests after the code, especially when working
on tight deadlines, substantially fewer tests get written, just lots of TODOs
that never get done.

------
ergothus
I'm curious to find out if my expectations of QA are unrealistic.

I'd _expect_:

* Devs write automated unit tests galore, plus a smattering of integration tests

* QAs write some acceptance tests

* QAs maintain a higher level of broad understanding of where the org is going, trying to anticipate when a change in Team A will impact Team B _before_ it happens. They also do manual testing of obscure/unrepeated scenarios, basically using their broader knowledge to look for pain before it is felt.

The above hasn't happened anywhere I've been (though each point HAS happened
somewhere, just not all together).
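For concreteness, the dev-test/QA-test split in that list might look something like this hedged sketch (the cart functions and scenario are invented, not from any org discussed here):

```python
# Hypothetical shopping-cart logic; names invented for illustration.
def add_to_cart(cart, item, qty):
    """Return a new cart with `qty` more of `item`."""
    updated = dict(cart)
    updated[item] = updated.get(item, 0) + qty
    return updated

# Dev-owned unit test: one function, exercised in isolation.
def test_unit_add_to_cart():
    assert add_to_cart({}, "book", 2) == {"book": 2}

# QA-owned acceptance test: a whole user scenario, phrased in terms
# of what the user does rather than which function runs.
def test_acceptance_checkout_flow():
    cart = add_to_cart({}, "book", 1)
    cart = add_to_cart(cart, "pen", 3)
    assert sum(cart.values()) == 4  # user sees the right count at checkout
```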

One thing in particular I've noticed is that good QA is a mindset that devs
don't share. Devs can learn to be BETTER at QA than they are, but I honestly
think it's not helpful for a QA to be a dev or a dev to be a QA: they are
different skill sets, and while someone can have both, it's hard to excel at
both.

------
eecks
I can see why this would work in a place that uses QA as a crutch.

All developers should aim for no bugs and test their stuff themselves, but of
course when deadlines are looming it's easier to just code and let the QA team
pick it up.

~~~
greydius
This was exactly the situation at a previous employer. There were twice as
many testers as devs. I tried to advocate for comprehensive unit and
integration tests and was met with a "if it doesn't work, the testers will let
us know" attitude. Baffling. At my current job we have no QA testers and write
much better software.

~~~
TheAceOfHearts
Comprehensive unit and integration tests strike me as a waste of time. From
what I've seen, getting around 70%~80% code coverage is usually going to be
around the tipping point where you start to get diminishing returns.

Don't get me wrong, I think testing is important. But there's tons of code
where you get no value by writing tests for it. (At least in frontend
development.)

We don't have a QA person, but I think it'd be great to have one. You can't
write automated tests to check that things all look like they should. You
can't write automated tests to check that all of the interactions are behaving
as expected.

~~~
greydius
Point taken. I've never done frontend work, but I can understand the value of
a human tester when a complex UI is involved. The kind of work I do is very
data-oriented and quite easy to get high coverage when the code is well
written.

------
steven2012
This is pretty much how things go in today's environment, especially in the
startups that I've seen. More things are being pushed directly on devs, which
is why we earn as high a salary as we do. Traditional QA is pretty much dead,
no one should be doing that now if they want to have a career in tech.

Where I work, devs do the QA, and most of the devops work as well. It's the
new reality, and anyone who thinks otherwise will be obsoleted.

~~~
umanwizard
> Traditional QA is pretty much dead

Where have you worked? Just inside the SV bubble? This is definitely not true.

------
DanielBMarkham
Welcome, Yahoo engineers, to the year 2010. Or 2005. It's nice here.

The suckiest part of this story is the number of folks who are stuck with
gated handoff processes and can't see how this would ever work. Some of those
folks might spend 10 or 20 years catching up to the others.

Just to be clear, QA the function isn't going anywhere. It's all being
automated by the folks writing the code. QA the people/team? Turns out that
this setup never worked well.

I work with tech organizations all the time. I find that poor tech
organizations, when faced with a complex problem, give it to a _person or
team_. Good organizations bulldoze their way through it the first time with
all hands on board, then figure out how to offload as much of that manual BS
as possible to computers. If they can't automate it, they live with it until
they can. Same goes for complex cross-team integration processes.

------
projectileboy
If all your QA team does is run through a bunch of tests that could be
automated, then by all means, automate the tests and get rid of your QA
department. However, good QA folks have a valuable skill, which is that they
think of interesting ways to break software. Not many programmers do this very
well.

------
devonkim
This reminds me a bit of how SREs and developers conflict over contradictory
goals. With the self-regulating system that gets established between
development and operations, if your code is bad in prod, you'll spend more
time in operations trying to take care of the mistakes made in development. If
SREs really don't get along with how developers throw crap over the wall, and
quit, the developers get the pagers instead. This move to consolidate test and
development seems consistent with recent trends to pile more and more work
onto developers in an effort to reduce siloization.

So I'd have to ask how getting rid of QA has affected the pace of feature
development.

------
alrs
"Some of the engineers really cared about system performance types of things,
so they joined related teams. Some started working on automation [for
testing], and they thought that was great—that they didn’t have to do the same
thing over and over."

There is still QA, it's just automated QA. Welcome to the 21st century.

------
Xyik
Surprised that people are finding this unusual, in web/mobile anyways. In my
experience most engineers do some level of QA themselves, particularly in
start-ups < 1000 people. In what ways does an engineer being their own QA
negatively impact the company?

~~~
falsedan
Mostly that you're asking the developers to become experts in software testing
and verification, in addition to their existing knowledge. If you have good
mentoring & examples & guidelines, then everybody can learn and move along
roughly the same path. That takes time and effort to set up, so in those small
startups you're likely to see wildly divergent approaches to testing, plus
friction when people think they should standardize, when they actually do need
to, or when people switch teams.

~~~
grillvogel
I've spent years as both a tester and a dev. Most devs think they are awesome
testers because their automated test verified that the happy path works.

------
BogusIKnow
Today QA is not manual testing.

Today QA is talking with product/UX, taking the end user's and customer's
perspective, wearing a quality hat end to end across features and devices, and
doing exploratory testing for the stuff that doesn't make sense to a customer
(mostly what's created by the interaction of different features or by
cross-device use).

------
kemiller
I think QA has been misaligned all this time. They're not part of engineering,
they're part of product management. They're the low-level eyes and ears for
the product team. Automating checks for the issues they uncover is absolutely
an engineering function, but user-oriented holistic testing is not.

------
TheAndruu
I totally buy that they could reduce the actual errors that matter.

I've worked with many a QA who would get bent up over a detail outside of the
spec that didn't really matter, and where all QA testing was manual.

Coders (good ones) are well equipped to automate processes, and to do so
quickly, and this extends to integration testing.

~~~
noarchy
>I've worked with many a QA who would get bent up over a detail outside of the
spec that didn't really matter, and where all QA testing was manual.

This is where you need management (or someone from the product side) who can
set priorities where needed and put an end to the pointless side-disputes that
can and do crop up.

------
beat
Every agile story should have a _testable_ completion point, agreed upon by
development and the customer. If whether the story is "done" is vague and
arguable, the story isn't well enough defined to work on.

One of the big problems here, and where QA professionals can add real value,
is defining that "done" point. Customers are often not very good at it. Their
idea of what they want is too vague. They want developers to just build
something, and they accept or reject it when they see it (and fault developers
for not building it right).

But really, all story completion criteria should be testable, and developers
should be able to demonstrate the tests. The job of QA shouldn't be to test,
but to make sure the developers are actually testing what they claim to test.

------
Kabacaru
Manual testing is basically a zero-skill job: can you click around this
website and tell me when you see a bug? This is the most common form of QA,
but it adds very little value that can't be added more reliably with
automation.

That said, QA can still bring value. The two roles where they really add value
are a test-developer specialist writing non-flaky automated tests, and a
BA-type role where they have the conversations that expand a product owner's
idea into an implementable feature.

Given that neither of these roles requires manual testing, if a QA team has
over-specialised in manual testing, there's little value in keeping it.

~~~
philk10
I might be biased (ex-dev with 20 years' experience, from assembler to C to
.NET etc.), but GOOD manual testing requires a lot of skill. If it didn't, I
wouldn't have switched from dev to test. I work in a shop where they had devs
doing all the testing, with tons of automation, but they still found that a
good exploratory tester added value. From your comment, though, it may well be
that you've never worked with someone like me; maybe when you do, you'll think
differently.

~~~
gsnedders
I think a lot depends on specific definitions: a manual test suite, where you
have a list of tests with clear steps and clear expectations, is definitely
near-zero-skill to execute. Actual exploratory testing, on the other hand, is
skilled, especially if testers are expected to write up tests (automated or
not) that exercise code paths that haven't previously been tested.

------
whistlerbrk
I agree in principle: for most products, no QA, no testing. Why?

* Everyone should do QA and implement their own features' UI/UX, by following the pattern the application and framework set, tuned by an actual designer.

* An environment where production issues and bugs are prioritized above everything else should be created and fostered.

* To paraphrase Rich Hickey's analogy on the matter: writing tests is like driving around relying on the guard rails to keep you in the lines. That is (my interpretation): if your code is so fragile that it constantly requires testing, you've chosen poor abstractions.

------
slothguy72
They removed only MANUAL QA testing. There is a big difference between
removing the QA team entirely and automating QA work.

~~~
philk10
and of course all manual testing can be automated... good luck with that.

~~~
rocky1138
That's what users are for!

------
grandalf
Yahoo's main issue is UX and usability, not software quality, so this sounds
like another way of saying Yahoo laid off its QA team.

~~~
philk10
Depends on your definition of 'quality' - if s/w isn't usable and has bad UX,
then how can it be high quality?

~~~
grandalf
True. It has to work as intended too.

------
datashovel
In the past I've even loosely spec'd out, in my head, a system that would
build integration tests simply by crawling a website. I'm surprised this
hasn't become a bigger priority for some of the biggest tech companies.

I know it would be a tough problem and a big project, but I think that with
only a small amount of human interaction you could build all the integration
testing you would ever need, simply by letting the crawler build the tests for
you.

In fact, the way I imagine it working, the system would automatically build a
framework, and a user could (in a very structured way, via a structured UI)
coerce the integration tests in small ways to ensure the system understands
what's going on. For example: "This form is used for registration." "This form
is for logging in."
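As a toy sketch of that idea, using only the Python standard library: parse a crawled page, discover its forms, and emit test stubs that a human then labels ("this form is registration"). The class and function names are invented; a real system would fetch pages, follow links, and generate runnable tests rather than stubs.

```python
from html.parser import HTMLParser

class FormFinder(HTMLParser):
    """Collect each form's action and its input names from raw HTML."""
    def __init__(self):
        super().__init__()
        self.forms = []
        self._current = None

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "form":
            self._current = {"action": attrs.get("action", ""), "inputs": []}
            self.forms.append(self._current)
        elif tag == "input" and self._current is not None:
            self._current["inputs"].append(attrs.get("name", ""))

    def handle_endtag(self, tag):
        if tag == "form":
            self._current = None

def generate_test_stubs(html):
    """Emit one placeholder test per discovered form."""
    finder = FormFinder()
    finder.feed(html)
    stubs = []
    for i, form in enumerate(finder.forms):
        stubs.append(
            f"def test_form_{i}():  # action={form['action']!r}, "
            f"inputs={form['inputs']}\n    ...  # human labels the intent here"
        )
    return stubs
```

The "small amount of human interaction" is exactly the labeling step: the crawler proposes, the human annotates intent, and the annotated stubs become the integration suite.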

------
transitorykris
Removing dedicated QA (whether they do manual testing, as in the article, or
write automated tests) and forcing the developers to take this on themselves
is okay. Alternatively, I've had a lot of success with having development
teams take operational responsibility for their code. They are not only
naturally incentivized to automate QA, they also move toward continuous
deployment and become more involved in thinking about the product. The safety
and speed that's gained seems to result in teams that stay small. It's not for
everyone, and it caused attrition early on, but talking about these practices
during interviews has attracted the right people.

------
ascendantlogic
I stand in the middle ground on this one. I fully believe that rote QA testing
with huge volumes of test plans is a waste of everyone's time. However,
automated testing doesn't take into account the fact that _people_ are almost
always the primary users of your software, so I feel there should be a person
or persons somewhere who occasionally smoke-test the application, making sure
that things work cohesively from an end-user perspective and that everything
makes sense. If this person is the prototypical product owner, that's great,
if you can find one who's not in meetings all day...

------
CodeSheikh
Devs writing automated tests for their own code is kinda pointless. It would
be better to have another dev, or a different team such as automation
engineers, write the regression automation tests. Automating regression tests
is definitely better than manually QA-ing the same 1000 tests over and over
again. There has to be a good balance: have extensive coverage of automated
regression tests, and let manual QA test the new features. This will at least
increase the frequency of release cycles. Getting rid of an entire QA dept is
somewhat equivalent to shooting yourself in the foot.

~~~
sanderjd
This is basically my experience. I think the ideal situation is having
development and QA cultures that _both_ prioritize quality and automation.
Development cultures sometimes fail to prioritize quality ("not our problem"),
and QA teams sometimes fail to prioritize automation ("not how we do things"),
but those are problems with those specific cultures, not with the entire
concepts of development and QA.

------
RyanZAG
Ultimately, even with automated testing, someone has to manually check that
the software is actually providing the value it's meant to.

When you remove the manual QA team and switch to staged rollout, you are
moving the manual QA burden onto your users. You still have a manual QA team;
they're the first bunch of users in your staged rollout plan. You just don't
pay them anymore, and you gather their feedback through bug reports. Users are
used to buggy software because of the other companies that do this (Google,
etc.), so they carry on being users anyway.

------
miles_matthias
Quality assurance is important whether you do it via humans or code, but the
thing that always bothered me about QA was that the ones I worked with were
mindless people, simply comparing the feature request with the functionality
on the page, without any thought towards the actual product, business, or
user.

And in that system, the developer is completely removed from the product and
is just another factory worker. The closer engineers can be to users (with
design to translate obviously) the better for everyone.

------
loren_dunlop
I'm in QA - have been for 12 years. Testers who can only perform manual
testing, and organisations that only test manually, are the product of
companies realising they should 'do some QA', and of managers who do not
understand SW development signing off on building large, manual-only test
teams.

It's inefficient, the rate of feedback to devs is very slow, and not much can
be done until there is a working UI - so it all lends itself to the broken
waterfall model of code, code, code, then 'do some testing' right at the end
of the project, by which time dev overruns have already squeezed QA time out.

Manual QA testers are relatively cheap on paper - so managers don't see a
problem with building a team this way.

I'm not sure this will ever go away, but as someone who tries to learn every
year and master his career, I welcome Yahoo's choice. I see a role for a
highly skilled 'developer in test' superseding the traditional, ineffective
manual QA role: someone who can build automation frameworks quickly, be
responsible for maintaining them and the test data, and provide rapid feedback
to devs. Devs should still be carrying out unit testing, code reviews, etc.,
but I do believe a role still exists for someone to focus on QA, just with a
lot more skills, providing far more rapid feedback, with fewer dependencies on
the devs for test environments.

------
mixmastamyk
Great. I worked for a company that didn't invest in QA; it was consistently a
!@#$% mess. When you do this, the need simply shifts to the customer. I
wouldn't install our software until the 4th or 5th hotfix patch was available.

Certainly, I'm an advocate of a more responsible dev team sharing the quality
tasks and continuous integration too. But no QA at all? Hahah... maybe if
you're a web portal that no one depends on for business-critical needs.

Edit: I guess the truth hurts.

~~~
vox_mollis
_I worked for a company that didn't invest in QA_

Where are these magical places that _do_ invest in QA? In nearly 20 years of
professional development, I've never seen an organization in which the
criterion for shipping was anything other than "works for me". I have never
seen an organization in which there was either the budget or the managerial
patience for proper QA, let alone anything beyond VERY basic acceptance
testing.

~~~
hofmann
I once worked in an organization that had 1.5-2x as many QA as devs. I now
work in a place that has 1 QA for every 8 devs, and there are far fewer bugs
here than at the other.

I think the reason is proper tooling, a culture of thorough automated testing,
and ownership of code.

~~~
mixmastamyk
Ownership and the resulting pride in workmanship makes a big difference.

------
johnrob
When measuring effectiveness via a reduction in issues, how do you account
for the natural stability introduced by reducing the updates/week each
developer ships? When devs are tasked with code reviews and/or QA, this is
time that could have been spent on their own features. In other words, if the
product is stable today and everyone goes on vacation (no new updates), the
product will generally remain stable, save for unforeseen usage patterns.

------
LoSboccacc
"What happens when you take away the quality assurance team in a software
development operation? Fewer, not more errors."

And what happens when you close your eyes? Reality disappears?

Automated testing is a way to completely remove customer advocates from the
loop. Correct UX doesn't mean good UX, and unless someone can automate
testing of all the non-quantifiable qualities of good and intuitive design,
they're gonna push loads of engineering-driven interfaces to their users.

-shivers-

------
srp0010
Updated TL;DR: QA is changing - just like everything else.

The article makes the assumption that QA == manual QA, which, as a quality
professional, I know to be false. Quality is about measuring risk across the
development process. Immature teams need manual QA, while mature (in a
process/quality sense) teams need much less (or none).

Quality professionals who want a sustained career need to learn development
processes, ops, documentation & monitoring. We make teams better.

------
dkopi
This is actually a story about the triumph of continuous integration and
staged rollout. By shipping code constantly, but slowly rolling it out to
users - bugs can be detected very quickly by the users themselves, instead of
employing large QA teams.

Keeping a central code repository, automating builds, committing frequently,
and automatically testing code take a lot of load off QA teams.
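A staged rollout like the one described is often implemented by deterministically bucketing users against a rollout fraction. This is a hypothetical sketch (the function name and hashing scheme are illustrative, not Yahoo's actual mechanism):

```python
import hashlib

def in_rollout(user_id: str, feature: str, fraction: float) -> bool:
    """Deterministically decide whether a user sees a staged-rollout feature.

    Hashing the user id together with the feature name gives each feature an
    independent bucketing; comparing against the fraction means a user's
    assignment stays stable as the rollout grows from 1% to 100%.
    """
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # map hash to [0, 1]
    return bucket <= fraction

# At fraction 1.0 every user is included.
assert in_rollout("user42", "new_ui", 1.0)
```

Because the bucketing is deterministic, a bug found at a 1% rollout affects the same small cohort on every page load, which is what makes user-reported detection workable.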

~~~
reid
You are correct! I'm a programmer at Yahoo -- deploying multiple times a day
to production, with the confidence your code will work, feels great.

Manual ("batch-release") deployments have been forbidden for over a year,
which is a forcing function to change development process to allow deploying
to production continuously multiple times a day. This requires robust test and
deployment automation and for engineers to better understand what they build.
It's pretty nice overall!

~~~
throwayahoops9
Forgive the throwaway -- but how does Yahoo define "will work"? Ignoring the
calls against the UX change of several years ago, your own user feedback
pages at
[https://yahoo.uservoice.com/forums/207809](https://yahoo.uservoice.com/forums/207809)
make it pretty clear that longstanding issues remain unaddressed: spam (the
same ring of spammers has operated for multiple years as Ultimate Stock
Alerts, PennyStockAlerts, ExplosiveOTC, and others -- simple Bayesian
filtering could have solved this years ago), and the fact that the ignore
function (a pretty core piece of functionality) has never actually ignored
users -- it merely greys them out, and they still take up space on the
screen.
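For what it's worth, the "simple Bayesian filtering" the commenter mentions could look something like this minimal naive Bayes sketch (the training data and function names are invented for illustration):

```python
import math
from collections import Counter

def train(spam_docs, ham_docs):
    """Count word frequencies per class from small labeled corpora."""
    spam = Counter(w for d in spam_docs for w in d.lower().split())
    ham = Counter(w for d in ham_docs for w in d.lower().split())
    return spam, ham

def is_spam(text, spam, ham):
    """Naive Bayes: compare the message's log-likelihood under each class."""
    spam_total, ham_total = sum(spam.values()), sum(ham.values())
    vocab = len(set(spam) | set(ham))
    score = 0.0
    for w in text.lower().split():
        # Add-one smoothing so an unseen word doesn't zero out a class.
        score += math.log((spam[w] + 1) / (spam_total + vocab))
        score -= math.log((ham[w] + 1) / (ham_total + vocab))
    return score > 0

spam_model, ham_model = train(
    ["penny stock alert buy now", "explosive stock alert"],
    ["quarterly earnings report", "dividend announcement"],
)
```

A production filter would need larger corpora and priors, but the point stands: the repeated-spammer vocabulary the commenter describes is exactly what this kind of classifier catches.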

My point isn't to be negative about the state of Yahoo Finance; you probably
don't work in that department, and after three years of neglect, most of the
users are long gone.

My point is that if an organization is going to rely on end users to report
bugs, the organization must actually respond to those bugs. Sometimes the
answer might be "No, we're not going back to the Web 1.0 UX." But ignoring the
top bugs for multiple years suggests a breakdown in the feedback mechanism. If
Yahoo doesn't care, that's fine, it's just business. But it seems more likely
that Yahoo doesn't even _know_ there's a problem, because there's no way for
user feedback to make it to the developers.

------
xacaxulu
If you think about a QA team as your customer, as any downstream department in
the work pipeline truly is, you realize that in order to make full use of them
and to maximize your efficiency, you as a developer should write automated
unit tests to cover the user stories or feature requirements, allowing QA to
work on the nasty edge-cases.
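As an illustration of that split, a unit test pinning down a user-story requirement might look like this (the `apply_discount` helper and its rules are invented for the example), leaving the nastier edge cases to QA:

```python
def apply_discount(price: float, percent: float) -> float:
    """Hypothetical helper: apply a percentage discount, floored at zero.

    The floor encodes the user-story requirement that a discount can
    never produce a negative price.
    """
    return max(price * (1 - percent / 100), 0.0)

def test_half_off():
    assert apply_discount(10.0, 50) == 5.0

def test_discount_never_negative():
    # The story's explicit rule: over-discounting floors at zero.
    assert apply_discount(10.0, 150) == 0.0
```

With the stated requirements covered by tests like these, QA time goes to the inputs nobody wrote a story for.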

------
stephenitis
Rainforestqa has a unique value proposition in this space (YC some year)

Their team and product are quite good if you want to explore QA as a service.
Essentially, humans (turks) perform outlined and preprogrammed steps.

Their tagline "We automate your functional and integration testing with our
QA-as-a-Service API. Human testing at the speed of automation."

------
plinkplonk
When I was working at ThoughtWorks, we had devs writing automated unit tests
with close to 100 percent coverage, and _also_ QA (who automated as much of
their testing as possible, using a variant of the FIT framework in those
days) finding significant bugs and show stoppers.

In my experience, one is not a substitute for the other.

------
thepaintedcow
Ultimately, a good development process is about building in checks and
balances. Code reviews, QA, automated testing, etc. are all part of that.
It's up to each team to decide which pieces they want. There's no right way
to do it.

------
steve371
I'd vote for having a QA team. Not for quality-control purposes, but to have
someone think outside the box. Sometimes you will be surprised when you talk
to the QA team, and you could not get those ideas from dev peer review.

~~~
kyllo
QA Team is great if it is a team of developers who are interested in QA and
test automation/tooling. Not so cool if it's a department full of people who
make low hourly wages to execute manual tests and don't write code.

------
knughit
Is it odd that this article describes, but doesn't highlight, that Yahoo is a
decade behind the industry here? Continuous integration and skipping QA have
been the web standard for years now.

------
Animats
Why is Yahoo still coding anything? They're focusing on dumping everything
other than their stake in Alibaba.

------
nobrains
That is kind of like having an open kitchen policy (or a kitchen visible via a
glass wall) in a restaurant.

------
alekratz
Based on Tumblr's past performance, it's a shock that they had a QA team in
the first place.

------
hondo77
Nobody cares how Yahoo does _anything_.

------
Apreche
TL;DR: They fired the QA team to save money. Then they made the engineers do
the QA work for no extra pay.

~~~
cpncrunch
I think that is a slightly harsh assessment. They forced the engineers to stop
shipping shoddy code, so the QA team wasn't necessary in their opinion.

However I think there is probably a middle ground where your engineers deliver
quality code and you also have a QA team to increase that quality even
further.

~~~
mbesto
> so the QA team wasn't necessary in their opinion.

I don't know of any large, complex systems where QA is not necessary.
Technology is only fallible because humans are.

~~~
loco5niner
Note he didn't say "QA is not necessary". He did say "the QA team wasn't
necessary". His statement seems to assume that _someone_ is still doing QA.

------
allsystemsgo
This will end well.

------
throwaway_xx9
Historically, Yahoo! operated for the first 10 years of their existence
without formal QA teams. The first USA QA hires were for Japanese product QA.

Nowadays OpenCV is used a fair amount, and they're migrating to modern
industry-standard tools.

------
douche
There's no way this could end badly...

------
Bud
If this is working so well, why is there still no version of Yahoo Messenger—a
core product—for either Mac OS X or iOS?

~~~
wooster
It looks like Yahoo Messenger for iOS was released on December 3:
[https://itunes.apple.com/us/app/yahoo-messenger-chat-
share/i...](https://itunes.apple.com/us/app/yahoo-messenger-chat-
share/id1054013981?mt=8)

And apparently there's a native OS X client coming, as well:
[http://mashable.com/2015/12/03/yahoo-messenger-is-
back/](http://mashable.com/2015/12/03/yahoo-messenger-is-back/)

~~~
Bud
Thank you! I apologize for being 8 days behind on this. :)

It's still mystifying that there was no iOS or Mac version for such a long
(albeit temporary) period of time.

Also mystifying that I apparently deserved a -4 for this comment. Seems valid
to me to note Yahoo not having an app on iOS or Mac OS for well over a year.

