Yahoo’s Engineers Move to Coding Without a QA Team (ieee.org)
232 points by teklaperry on Dec 11, 2015 | 178 comments

Surprised to see the negativity here. I have worked in environments with traditional manual QA, and environments where all development is test-driven and nobody is allowed to merge a feature that lacks automated test coverage.

Both the productivity and the quality were higher in the places with fully automated testing. Which is not shocking at all: does anybody really think a human can run through 800 test cases better than a computer can?

It's not a magic way to save money -- the developers obviously end up spending time writing tests. But the long-term value of those tests is cumulative, whereas the effort spent on manual testing is spent anew every release.
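The cumulative-value point can be sketched with a toy example (the function and tests are hypothetical, not from the article): a check automated once keeps running for free on every release, whereas a manual tester would repeat it each time.

```python
# Hypothetical example of a test whose value compounds: written once,
# re-run automatically on every release, unlike a manual check.

def normalize_email(address: str) -> str:
    """Lowercase the domain part of an email address."""
    local, _, domain = address.partition("@")
    return f"{local}@{domain.lower()}"

def test_domain_is_lowercased():
    # A bug fixed in one release; this guard now runs forever.
    assert normalize_email("Bob@Example.COM") == "Bob@example.com"

def test_local_part_case_is_preserved():
    # Local parts can be case-sensitive, so only the domain changes.
    assert normalize_email("Bob@example.com") == "Bob@example.com"
```
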

Manual review is still good for noticing things that "feel wrong" or for helping think up new corner cases. But those bleed into product owner & design concerns, and aren't really a separate function.

Moving from an environment with a 10:1 dev:QA ratio to one with 2:1 showed me what happens when dev is not responsible for shipping working software.

No thanks. It's a bunch of deflection and diffusion of responsibility, coupled with high-latency, flaky interactions between different teams. Everything that can slip through the cracks does slip through the cracks.

I'm sure QA can be done well, but I am convinced that giving your devs a pass to not finish their work is a dead end in several dimensions.

The best way I've seen it done is when another developer, on the same team, who is responsible for the same product, reviews your code. Not someone who writes tests for a living, someone who does the exact same job as you on the same product.

Any time I've experienced something different it's been exactly as you described.

I agree with you that that is a necessary part of the process and a great first step for small teams moving away from "just release it and see if it works". The unfortunate reality though is that people in the same role on the same team often have the same misconceptions and blind spots.

Now getting devs to see the product through the users' eyes goes a long way toward solving that, but if you have a process and team of devs that are doing that you're way ahead of the game in a lot of ways.

I'm in full agreement.

In practice, a manual QA team encourages devs to throw shit over the wall and expect someone else to do the basic sanity checks they should have already done. By the time those are done, whatever the QA team could theoretically find gets shipped anyway.

Then, when it's discovered in prod, the QA team will get in the way of a speedy fix.

That's a management problem, not a problem with having a QA team. The top-level QA manager and the top-level Dev manager should both report to the CTO, and the Dev manager should be judged by how many issues the QA team finds. To keep things fair, the QA manager should not be rewarded nor penalized based on the number of issues found pre-release, and everyone should be rewarded or penalized based on the number of issues found post-release.

This keeps the devs incentivized to make sure everything works before the code goes to QA, and it keeps everyone incentivized to eliminate as many bugs as possible before release.

What will happen is that devs will ask their QA counterparts to report issues through an undocumented side-channel.

If you give an engineer a career incentive to optimize something, you'd be surprised how seriously some will take it.

Incentives often cause really nasty political wars within orgs.

When a site gives a 500 because a database went down and the web app couldn't connect to it... is that a bug, a missed test case, or should ops take the hit for downtime? Furthermore, if you argue that the dev team should have reasonable failsafes in the code that connects to the db, then in a 10-year-old organization, should the current dev team pay for something that could have been in the code for years?

If you set up a system where devs hand off to QA, QA then hands off to ops to deploy. Furthermore, every step is incentivised somewhat against the others. Even if all teams are equal, it ends up, in my opinion, on the path to CYA and Waterfall. Everyone is more concerned with problems not being 'their' fault than with shipping good code.

You're absolutely right, assuming we're talking about business/consumer apps and not pacemakers or rocket ships.

An incentive/penalty system for bugs without an incentive/penalty system for features completed leads to paralysis. And a complicated incentive system leads to game playing over productivity.

It's an old saying: be careful what you measure because you'll get a lot more of it.

I'm beginning to think that the best incentive system is just a good base pay, with an emphasis on personal learning and growth. Have engineers strive to become better engineers by taking courses, attending meetings and publishing content. Quality will fall out of that as a positive side-effect.

The solution to this is to require a bug/ticket/tracking item as part of the change. Since QA knows that dev needs to create one anyway, they might as well get the credit.

Where this starts to break apart is that QA can get overzealous and file bugs for different incarnations of the same bug.

It's a bit of give and take.
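The "require a tracking item as part of the change" rule could be enforced mechanically. A minimal sketch (the ticket format and function name are my own assumptions, not anything from the thread):

```python
# Hypothetical check that every change references a tracking item.
# The "ABC-123" ticket format is an assumption for illustration.
import re

TICKET_PATTERN = re.compile(r"\b[A-Z]+-\d+\b")

def has_ticket_reference(commit_message: str) -> bool:
    """Return True if the message mentions a ticket like QA-1234."""
    return bool(TICKET_PATTERN.search(commit_message))

# Wired into a Git commit-msg hook, a falsy result would abort the
# commit, so no change lands without a trackable item for credit.
```
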

> The top-level QA manager and the top-level Dev manager should both report to the CTO

For an organization with a single product (and, particularly, one small enough that the "top-level" QA and Dev manager are also first level managers), this makes some sense. Otherwise, this means that each product has no common level of technical management below the CTO, meaning it has no meaningful technical ownership.

It doesn't have to be the CTO; as I mentioned in another reply my intent was that QA and Dev need to be equals in the reporting chain, rather than one reporting to the other. That can work at any scale; even on an agile team with one QA and one Dev, they both report to the same boss who's responsible for both sides.

If you are fine-tuning the reporting chain, your org is already hopelessly broken. That is the central-planning Communist model.

That sounds like a terrible strategy for a lot of us, where time to fix is much more important than bug count.

This is because there is likely to be a tradeoff between speedy development and bug count.

If you are shipping physical CDs of software, or releases which you can't update easily, or if you are working in a critical environment where mistakes are disastrous (finance, health, space, etc.), then it is fair enough to be so concerned about bugs.

But many if not most developers work on the web in areas where the product is uncertain and evolving. Prioritising rate of change is more important here. Rather than extensive QA, monitoring and fast rollback are better choices.

A formal QA process, which goes along with a formal release process and release schedule, certainly does slow things down. I think for large-scale projects with a large feature/requirement count that's ok, because there's a lot of little details to keep track of and the formal process helps with that. But you also need a fast-track for fixing small things quickly, and for that Dev, QA, and Ops need to work together to quickly diagnose, fix, test, and release changes.

My experience with the formal process I've been talking about was in a company whose product is a large-scale web application that interacted with a complex custom back end search engine and a proprietary content database. The QA and Dev teams had a friendly adversarial relationship, but ultimately we all felt we were on the same team with the same goals: to produce the best software we could. It was a fantastic place to work.

> The top-level QA manager and the top-level Dev manager should both report to the CTO

Having a top-level manager who is responsible for a whole discipline doesn't work, it just creates a high wall to throw shit over. Developers and QA testers really should report to the same first line manager.

Have you ever worked for a Fortune 100 company? The dev manager has probably never met the CTO. I'm in this right now at my current job. I still keep my builds and test them longer than I should. Less broken shit gets through, but I regularly delay my QA BAs because of it.

So, not the CTO, but some shared higher-level manager. The point is that the QA lead cannot report into Development, and Development cannot report into QA. Both of those arrangements can lead to a bad actor stopping the other group from doing the right thing. (E.g.: if QA reports to Dev, then the Dev manager can force bad code to ship, and if Dev reports to QA, then the QA manager can prevent shipping and force Dev to waste time on excessive/pointless test code development that's not cost-justified.)

I've met people that call something similar 'fake agile'.

Surely everyone should report into the business (end-user on the business side) lead responsible for making this functionality happen. This includes Dev, QA, business. QA should be business? This is no longer 1985, there is no reason business (product owner) shouldn't be a capable PM and BA for their own product.

In the company I used to work for, the CTO and VP of Product Management were equals, both reporting to the CEO. The CTO's reports included the QA director, Dev director, and Project Management director.

On any given project, we had a Product Manager who set the business requirements, a Project Manager who ran the project, a Dev lead in charge of the development team, and a QA lead in charge of QA. None of these people could override the others because of the reporting arrangement, so we had to collaborate instead to make sure we all met our goals.

Early on this arrangement worked really well. But it started falling apart when ownership of the company changed and politics became a factor as people who cared more about moving up through the organization than about the success of the company and quality of the products were hired and promoted. My take is that bad management will destroy any approach you might take to producing good software. That's why I eventually left.

I work for a FTSE 100 company and the dev manager and CTO in the division where I work are very well connected. I don't think the size of company is necessarily the only factor here.

My experience is the opposite. Having independent testers means you're going to be stuck addressing 100 defects if you don't watch what you're doing the first time around.

10:1? You do realize QA has to have a semblance of a life too? Adding more QA isn't designed to screw up the process, but to alleviate the workload. If you want to blame anything or anyone, I'd usually point to 1) hiring incompetent QA who aren't SDETs but have merely black-box QA competency, and 2) process, or the lack of it, i.e. an agile process of building a little, testing a little, and consistently delivering a working product on pretty much any cadence: monthly, or even weekly... That's the ideal, of course.

If this is in place, adding extra QA can be beneficial.

None of this of course is excuse for a dev not producing code that works at least in its happy path plus/minus a few of the most obvious exceptions/error paths...

How is it that in the 2:1 environment developers aren't held accountable when problems turn up in code they're responsible for?

> Which is not shocking at all: does anybody really think a human can run through 800 test cases better than a computer can?

I think this rather misses the point; it's the bug that doesn't have a test case where QA helps.

Exactly. And having the computer run through 800 checks means a skilled tester can do what they are good at - finding bugs.

This to me was always the point of automating tests. One of the best QA people I knew refused to look at the user stories. He was brilliant at finding things that fit the spec but didn't make sense and things that users might do that weren't specified.

He found bugs in the specs, gaps in the specs and just plain untested behaviour.

QA is a job that requires skill, benefits greatly from technical understanding and requires a lot of domain knowledge.

Yup, that's pretty much my job. The devs do great testing and I try and pick holes in what they miss.

This is why specification should happen collaboratively with everyone who has a stake (and that includes QA just as much as BAs): specs can be fixed before anyone develops against them.

This is what a great QA tester will do. Good devs don't ship code with known bugs - but they will miss things because they code to the feature/user story/known behavior. That's what their job is. A good QA person's job will be to spend a whole lot of time thinking about how to break the dev's app that the dev missed.

> does anybody really think a human can run through 800 test cases better than a computer can?

Of course not. But automated tests are just the entry ticket to the release process.

A computer can't replicate the irrationality, laziness and guile of a human end-user. That's what good QA engineers test.

It really depends on what you're making, on whether it's possible to automate testing, AND on whether you remembered to test it. Chrome appeared to have broken full-screen movie playback for 5% of users once. Something changed, perf got bad, and people who previously could play full-screen video on a low-end machine suddenly got playback too slow to be useful. They just stopped going full screen. No bug reports came in. Nothing in the testing infrastructure caught it. I only noticed because my father tried to show me a video on his Atom-based netbook.

Similarly going full screen on a 2 monitor setup broke once. Again, no automated test.

The Web Text to Speech API is broken on every browser it's in. It will work with a simple sample but start and stop it a few times and it will break. I suspect because again there is no easy way to test it.

There's lots of others.

Automated testing is not a substitute for QA, IMO.

Even more important is that the developers spend time writing tests and their schedules are not expanded to accommodate this, nor is their pay increased. They are simply expected to cut corners elsewhere, or work more hours, or whatever is necessary. That's the real savings.

It sounds like there is some opposition between automated tests and QA. But those do not have to be opposed: one can have both. In fact, for many types of software I don't see how you can avoid having both. The only choice you have is who is doing your QA: your employees or your end users. Maybe Yahoo can afford the latter. But for some companies it would be a disaster.

The arguments here such as "oh they just make the devs do it and don't pay them anymore" are ridiculous. Nobody's working 80 hour weeks doing two full-time jobs at once.

Let me propose a name for that: "Volkswagening", i.e. assigning the quality check to the production team.

It doesn't even have to be intentional. If you write the code, you always have in mind some picture of how it should be used. That picture means you ignore the ways of using it that you did not intend. So when the code is used in one of those ways, it breaks. But you would never write a test for it, because you would never think such usage was possible: precisely because you know so much about the right way it should be used, you start to think that's how it is used.


Devs don't think like users. They know a lot about computers and can make good guesses about how other devs think. So devs are not the best people to test code that's going to be put in front of non-dev users.

Users are likely to have different models, and a different set of expectations. You can't write tests for all those possibilities because you literally have no idea what they are - and won't find out until you put them in front of users.

Haha, I'm going to start using that term. That's a great description.

To be fair the exact meaning of Volkswagening is to fake the behavior during official tests. Example: http://qz.com/515100/samsung-is-accused-of-volkswagening-its... Note that volkswagening.com was registered 20 days after the scandal, but is available for bid, if you want to do something fun with it.

VW lawyers would probably throw a fit if somebody actually used that domain for anything commercial.

Perhaps it could display different, positive content when accessed from Volkswagen IP addresses, to ensure they wouldn't find out.

I still firmly believe this is a large part of the reason for having devs carry pagers.

Maybe that isn't the case in better companies than the ones I've worked at, but devs shouldn't be your 24/7 support staff.

Isn't pager duty baked into the salary of the position? I thought that jobs where you're expected to be on call to some extent (hopefully at level 3 or 4 of support?) have an artificially high salary compared to those where it's a 9-5 gig?

Pagers? Now it's just a total violation of work life balance through Slack.

My phone has never been so loud.

Did the QA you dealt with not have any automated testing tools? That seems rather foolish to have your QA team manually test every feature every time without an automated smoke test.

QA is essential. How else do you know that you have built what you set out to build? Traditional manual QA is highly effective, so much so that you can get huge gains by automating it. QA occurs at many levels, and it makes sense to have a dedicated QA team for each level. Generally, the QA team for a level should be the exact people who requested and/or approved the specific feature subject to QA, as they are in the best position to confirm or deny that the feature meets the specification, having created and/or approved it.

But you're talking about QA as a general process, not QA as is meant in the article:

> Software engineers at Yahoo are no longer permitted to hand off their completed code to another team for cross checking.

Dev Team A was giving a batch of code to Dev Team B to review.

I agree that the business user/product owner/whatever should be reviewing everything in test prior to approval and in production after but I'm not sure that's the article means by QA.

I think it's important to recognize that neither option is perfect by itself. Manual QA can still be an important process.

That being said, if you have to pick one or the other then you go with automated tests.

I worked at Yahoo before and during this period, first as a QA contractor and then as a full-time developer.

Before the switch, our team (advertising pipeline on Hadoop) used the waterfall method with these gigantic, monolithic releases; we probably released a handful of times a year. Almost without exception, QA was done manually and was painfully slow. I started to automate a lot of the testing after I arrived, but believe you me when I say that it was a tall order.

Soon after I moved into development, QA engineers without coding chops were let go, while the others were integrated into the development teams. The team switched over to agile, and a lot of effort was made to automate testing wherever possible. Despite some initial setbacks, we got down to a bi-weekly release cycle with better quality control than before.

Around the time I left, the company was mandating continuous delivery for all teams, as well as moving from internal tools to industry-standard ones like Chef. I left before it was completed, but at least as far as the data pipeline teams were concerned, the whole endeavor made the job a lot more fun, increased release quality, and weeded out a lot of the "that's not my job" types that made life hell for everyone else.

> the whole endeavor made the job a lot more fun

That is an important part of producing quality output in any job, I believe. The more employees actually enjoy what they are doing (or at least, don't actively hate it), the better their output is likely to be.

I don't think it's that cut and dried. I've worked in some places where the QA team was useless, meaningless red tape to get your stuff deployed. They wouldn't do much but sign off on deployment at some point, yet bore no responsibility if shit hit the fan. In those cases, they really were just an unnecessary cost, and you learned pretty quickly to make sure your tests were in place, that you were testing for the right things, and so on.

But then there were the other QA teams. The people that would just reject your stuff outright if it didn't have tests (no matter if it worked) and when the tests passed they would look at things truly from a customer perspective. They would ask really uncomfortable questions, not just to developers, but to designers and business alike. They had a mindset that was different from those creating things; they were the devil's advocate. These people did much, much more good than harm, and they are few and far between. Unfortunately, while I believe they were incredibly valuable, business thought otherwise when cuts came around.

"When testing, you are the headlights of the project. ... Testing is done to find information. Critical decisions about the project or the product are made on the basis of that information." [1]

I recently became a software tester, and I really didn't understand the role for quite a while. Is my primary responsibility finding bugs? Logging defects? Analysing requirements documents? Writing test scripts? Writing status reports?

Answer: Do enough of each to fulfill your goal of gathering and sharing information with your management and dev groups.

If the software tester has problems testing, then the customer will have problems using it, and the company will have problems supporting it.

1. Kaner, Cem; James Bach; and Bret Pettichord. 2001. Lessons Learned in Software Testing. Wiley.

The goal of a QA person should be "shipping great software".

I agree that's the goal, but what is the role? What are the contributions that QA provide? That's what I had problems understanding.

I think the key difference is an engineering mindset.

When you have software developers skilled in quality assurance whose job is to find the edge cases and produce comprehensive additional acceptance and functional testing, they're an asset. It's a particularly perverse mindset that I personally really enjoy interacting with as a software developer 'customer': those evil bastards find the best bugs, regression-test them, and expand to find all instances of that class of error in the application.

The QA-is-pointless stereotype in Silicon Valley is persistent and genuinely annoying. There will always be issues that even the most comprehensive test suite will miss.

Startups still glorify Facebook's "Move Fast and Break Things" without noting that Facebook has backpedaled from that. After all, people expect startup software to have issues, so what's the harm? Technical debt? Pfft.

Engineers are not the best QA for their own code, since they may be averse to admitting errors in it. QA engineers are not as empathetic.

Disclosure: I am a Software QA Engineer in Silicon Valley.

The QA function is not pointless.

Whether a distinct QA team is the best means of performing the QA function is, however, a separate question.

I was on a large, fast-moving project with dozens of components. Some genius decided that devs would do all QA, automated testing, blah blah. It was a disaster.

The insane deadlines required devs to write lots of poor-to-average quality code (we tried code reviews and pair programming... no time for that, so it fell by the wayside). The automated testing done by devs was terrible, but understandably so. If you are up until 2am hacking out code (without any precise requirements), then why bother with testing? We ended up having one -somewhat- central component that had "gating tests". Everyone stuck their tests there. That made things worse, since that one component was the one that kept seeing failed tests. The PMs did frantic "user-like" testing before demos. You can imagine how fun that was.

In my opinion, the decision not to have a dedicated team of testers was the big mistake in all of this. When you have many teams, many components, and no precise requirements, you need an independent QA team to coordinate and prevent people from passing the buck. In a time-critical project (what projects are not time-critical today?), you don't have precise requirements and devs have to "sling" code. I accept this reality. But I don't accept the "no QA will make you mature devs" stupidity. If I were being a mature dev, I would refuse to code until the requirements were clear. None of this 2-week agile-scrum nonsense.

Oh .. and one other big thing. The project was a cloud project that needed to be up 24/7 while we were developing it (for beta users). It was like going from one outage to the next. What a disaster!

So what I learned is this: not only have a QA team, but have a 24/7 QA team for the kind of project I was on. Note... not all projects are the same!

Actually, it sounds like the main thing missing on the kind of project you were on was realistic timelines, not a QA team.

And I don't think an independent QA team helps to coordinate people and avoid buck-passing. IME, the more teams that are essential to delivering a piece of software to the customer, the more opportunities for buck-passing, and the higher the level of management at which the buck-passing occurs.

> we tried code reviews and pair programming... no time for that, so it fell by the wayside

You don't think that could be related? If you had a sane development cycle, that may have helped. Having QA would likely have helped as well, but a healthy dev cycle is a good start.

That sounds like a problem of lacking resources and bad planning, not just lacking QA. Say you were to choose between a) hiring 3 devs to offload your work so that you can focus on your tasks better, or b) hiring 3 QA folks who will keep throwing back the shit you produce because you are overloaded. I don't think any QA would help in such a situation.

Where I work teams are made up of a product designer, UX designer or two, devs, and QAs. The QAs are responsible for high level automated tests as well as manual testing for the parts that are hard to automate. It makes communication between devs/qa/design much faster as we all sit together.

You can be sure that any system at Yahoo! or anywhere else that is the bread and butter of the operation (ad systems, accounting, fraud tracking) still involves a software QA team.

Facebook could afford to "move fast and break things" when they weren't taking money from anyone. Now that they're a real company, that kind of attitude can get them adverse attention from the SEC if it causes an accounting discrepancy or from trial lawyers if it upsets their advertisers.

Facebook has an amazing 'dogfooding their own product' scenario that's hard to replicate for any other company.

The thing that I think is missed in the "getting rid of QA" debate is the size of releases. When you're deploying continuously, the size of what's changing drops substantially. Adding QA as a gating step in the process between developers and production is inefficient and, worse yet, QA will be overwhelmed by the number of rounds of QA they're forced to perform. It's not uncommon for QA regression testing to take days so when there are multiple releases per day, you can see the problem with putting QA between development and production.

That's not to say that dedicated QA doesn't have a purpose. They can and should be testing the live code on a regular basis (weekly or daily, depending on how long the testing cycle takes). But automated tests are what's responsible for ensuring that nothing arrives in production in a completely broken state, and it's assumed that the benefits of continually shipping software will outweigh the downsides of occasionally having subtle bugs in production. And even with QA involved, bugs will make it through the process. At that point, the expedited process of pushing code to production becomes a huge win. Testing strategies often aim for the best MTBF, but when that comes at the cost of MTTR, it's not always a good thing. We've had bugs that were fixed in production less than 10 minutes after the bug was filed.

The other point that gets missed is that people assume that there's no manual QA happening and a developer's careless change just goes to production and wreaks havoc. This ignores the code review process, which is crucial to delivering quality software in a continuous deployment scenario. On my team, changes require 2 +1s before being merged into master, subjected to continuous integration again and eventually deployed to production. Moreover, if any engineer reviewing the code isn't sure they fully understand the change or otherwise wants to see the code running, a single command that runs in under 10 min will spin up an environment in AWS using the code in the pull request so that they can do any manual testing they need to feel comfortable adding their +1. When they're done, a single command cleans up that environment.

The thing to keep in mind when designing a testing strategy is the context your software runs in. I would not advocate this testing strategy for code that runs in a vehicle where a bug could cause physical destruction. Likewise, I wouldn't use it for an application with access to highly-sensitive medical or financial information where a leak or data corruption could mean millions of dollars in losses/fines. But for the majority of internet software, the stakes just aren't that high and the gains from a streamlined development process will outweigh the losses from bugs that find their way into production.

Disclosure: I manage a team that deploys continually, usually upwards of 20 times per day. We're responsible for our own QA and have a significantly lower defect rate than other teams in the company with a more traditional QA strategy. However we still draw on QA resources when we feel like we're pushing something risky.

emphatic... were you going for empathetic?

Yes, fixed.

Our approach - in a much smaller company - is that all stories should have automated tests before they head off to QA. QA's job is to make sure that the story in question works correctly, it's not to find regression bugs. If QA finds a bug in the story, we write a test to catch that before we resubmit it. Over time, we have enough tests at all levels of the system that QA doesn't generally need to worry about regressions: just making sure that the latest story works as advertised.

This approach allows us to stay agile, with small, regular releases, while also making good use of what QA folks are actually good at.
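The loop described above, where a bug QA finds becomes a regression test before the story is resubmitted, might look like this in miniature (the discount function and the bug are hypothetical, purely for illustration):

```python
# Sketch of the "QA finds a bug, we add a test, then resubmit" loop.
# QA reported that a 100% discount produced a broken total.

def apply_discount(total_cents: int, percent: int) -> int:
    """Apply a percentage discount; validates the percent range."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return total_cents * (100 - percent) // 100

def test_full_discount_is_free():
    # The regression test written before resubmitting the story to QA.
    assert apply_discount(1999, 100) == 0

def test_out_of_range_percent_is_rejected():
    try:
        apply_discount(1999, 150)
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError for percent > 100")
```
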

This isn't surprising.

Microsoft switched to this model a few months after Satya took over.

For the majority of Microsoft teams it worked really well and showed the kinds of results mentioned in this yahoo article. Look at many of our iOS apps as an example.

But for some parts of the Windows OS team apparently it didn't work well (according to anonymous reports leaked online to major news outlets by some Windows team folks) and they say it caused bugs.

First of all, I think that argument is semi-BS and a cover-up for those complainers' lack of competence in testing their own code, which makes them bad engineers, because a good engineer knows how to design, implement, and test their product, imo. But I digress.

I in no way want to sound like a dk but as an engineer it is your responsibility to practice test driven development but that's not enough.

Like reading an essay you usually can't catch all of your own bugs and thus peer editing or in this case cross testing is very useful.

You should write the Unit tests and integration tests for your feature


There should always be an additional level of end to end tests for your feature written by someone else who is not you.

Everyone should have a feature and design and implement it well, including its unit tests and integration tests, BUT they should also be responsible for E2E tests for someone else's feature.

That way everyone has feature tasks and test tasks, and no one feels like they are only doing one thing or stuck in a dead-end career.

> I in no way want to sound like a dk, but as an engineer it is your responsibility to practice test-driven development, but that's not enough.

One of the things that put me off when it comes to TDD is that it has always been a bit like religion.

What matters is whether the tests exist, not when they were written. I'd even argue that writing a test first and then being constrained by that box is a bad idea. Write the most elegant code first, and then write tests to cover all paths. You're more likely to know the problem better after the code is written.
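As a toy illustration of "write tests after to cover all paths": once the code exists you can enumerate its branches and write one assertion per path. Everything below is invented for the example, not taken from the discussion:

```python
# Toy example: three code paths, one assertion each.
# clamp() is a made-up function for illustration.

def clamp(x, lo, hi):
    if x < lo:      # path 1: below the range
        return lo
    if x > hi:      # path 2: above the range
        return hi
    return x        # path 3: inside the range

assert clamp(-5, 0, 10) == 0    # path 1
assert clamp(12, 0, 10) == 10   # path 2
assert clamp(3, 0, 10) == 3     # path 3
```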

> What matters is whether the tests exist, not when they were written

Technically true, but for me at least: when I do TDD, I tend to write more, and better, tests. When I write tests after the code, especially when working on tight deadlines, substantially fewer tests get written, just lots of TODOs that never get done.

I don't believe it's 100% on the dev to find all of the problems. Once you've seen how the code works, you are less likely to find the unhappy cases, simply because you know in advance that doing X is stupid, so you don't think of it. But of course a user wouldn't know it's stupid, or they might be lazy or not as careful as they could be.

That's why you need a second person (ideally QA) to look at the result and test it. Cognitive bias 101.

Also, making sure you have a good automated build, test, deploy structure is important too.

It's all the little details that will determine if this system will succeed or not.

Not the overall "big idea".

Implementation of this system and the competence and willingness to adapt of the team members is key imo. At least for this issue.

Maybe a big company like Microsoft can afford to do less "QA testing", since they have people lining up wanting to be beta testers for them (for free, too!)

I'm curious to find out if my expectations of QA are unrealistic.

I'd _expect_:

* Devs write automated unit tests galore, plus a smattering of integration tests

* QAs write some acceptance tests

* QAs maintain a higher level of broad understanding of where the org is going, trying to anticipate when a change in Team A will impact Team B _before_ it happens. They also do manual testing of obscure/unrepeated scenarios, basically using their broader knowledge to look for pain before it is felt.

The above hasn't happened anywhere I've been (though each point HAS happened somewhere, just not all together).

One thing in particular I've noticed is that good QA is a mindset that devs don't share. Devs can learn to be BETTER at QA than they are, but I honestly think it's not helpful for a QA to be a dev or a dev to be a QA -- they are different skill sets, and while someone can have both, it's hard to excel at both.

I can see why this would work in a place that uses QA as a crutch.

All developers should aim for no bugs and test their stuff themselves, but of course when deadlines are looming it's easier to just code and let the QA team pick it up.

This was exactly the situation at a previous employer. There were twice as many testers as devs. I tried to advocate for comprehensive unit and integration tests and was met with a "if it doesn't work, the testers will let us know" attitude. Baffling. At my current job we have no QA testers and write much better software.

Comprehensive unit and integration tests strike me as a waste of time. From what I've seen, around 70% to 80% code coverage is usually the tipping point where you start to get diminishing returns.

Don't get me wrong, I think testing is important. But there's tons of code where you get no value by writing tests for it. (At least in frontend development.)

We don't have a QA person, but I think it'd be great to have one. You can't write automated tests to check that things all look like they should. You can't write automated tests to check that all of the interactions are behaving as expected.

Point taken. I've never done frontend work, but I can understand the value of a human tester when a complex UI is involved. The kind of work I do is very data-oriented and quite easy to get high coverage when the code is well written.

This is pretty much how things go in today's environment, especially in the startups that I've seen. More things are being pushed directly on devs, which is why we earn as high a salary as we do. Traditional QA is pretty much dead, no one should be doing that now if they want to have a career in tech.

Where I work, devs do the QA, and most of the devops work as well. It's the new reality, and anyone who thinks otherwise will be obsoleted.

> Traditional QA is pretty much dead

Where have you worked? Just inside the SV bubble? This is definitely not true.

Welcome Yahoo engineers to the year 2010. Or 2005. It's nice here.

The suckiest part of this story is the number of folks who are stuck with gated handoff processes that can't see how this would ever work. Some of those folks might be waiting 10, 20 years catching up to the other folks.

Just to be clear, QA the function isn't going anywhere. It's all being automated by the folks writing the code. QA the people/team? Turns out that this setup never worked well.

I work with tech organizations all the time. I find that poor tech organizations, when faced with a complex problem, give it to a person or team. Good organizations bulldoze their way through it the first time with all hands on board, then figure out how to offload as much of that manual BS as possible to computers. If they can't automate it, they live with it until they can. Same goes for complex cross-team integration processes.

If all your QA team does is run through a bunch of tests that could be automated, then by all means, automate the tests and get rid of your QA department. However, good QA folks have a valuable skill, which is that they think of interesting ways to break software. Not many programmers do this very well.

This reminds me a bit of how SREs and developers clash over contradictory goals. A self-regulating system gets established between development and operations: if your code is bad in prod, you'll spend more time in operations trying to take care of the mistakes made in development. If SREs get fed up with developers throwing crap over the wall and quit, the developers will get the pagers instead. This move to consolidate test and development seems consistent with recent trends to pile more and more work onto developers in an effort to reduce siloization.

So I'd have to ask how getting rid of QA has affected the pace of feature development.

"Some of the engineers really cared about system performance types of things, so they joined related teams. Some started working on automation [for testing], and they thought that was great—that they didn’t have to do the same thing over and over."

There is still QA, it's just automated QA. Welcome to the 21st century.

Surprised that people are finding this unusual, in web/mobile anyways. In my experience most engineers do some level of QA themselves, particularly in start-ups < 1000 people. In what ways does an engineer being their own QA negatively impact the company?

Mostly that you're asking the developers to become experts in software testing and verification, in addition to their existing knowledge. If you have good mentoring & examples & guidelines, then everybody can learn and move along roughly the same path. That takes time and effort to set up, so in those small startups you're likely to see wildly divergent approaches to testing + the friction when people think they should standardize, or when they actually do need to, or when people switch teams.

I've spent years as both a tester and a dev. Most devs think they are awesome testers because their automated test verified that the happy path works.

Yup me too. At AWS we didn't have test engineers either. Everyone was responsible for testing their own code. We didn't even have SREs, so everyone was also on call.

This just meant that everyone made sure they were writing well tested code before it got released because you didn't want to be the guy who made yourself, or worse, your coworker have to fix something at 3am.

Of course I can see how this could be bad too, like if developers really dislike writing tests. On the other hand, the people who write the code seem best equipped to understand how to set up automated testing most efficiently.

> This just meant that everyone made sure they were writing well tested code before it got released because you didn't want to be the guy who made yourself

ha, yeah right.

What it really means at Amazon is build your service then bail for a new team before you have to maintain it.

Exaggeration, but somewhat true. I have a friend there now who has an oncall rotation that's split into day/night. nighttime oncall basically means you are working graveyard this week. Of course that is really because that team should have a proper support staff, it's a vital service.

Amazon, to me, is the epitome of a company that combines dev/qa/support because it's cheap rather than because it's actually good.

Well, I should have said that the idea is people would write well tested code before it gets released. I can also attest it didn't necessarily work out that ideally in practice.

On the other hand, I'll disagree that they combine dev/qa/support because they're cheap. This actually makes little sense to me because typically QA roles are paid less than traditional software developers. That being the case, it doesn't make sense to get rid of them and let your developers do that stuff if all you want to do is save money.

What I heard was that they used to have support staff but changed it because it wasn't working out. I wasn't there so I don't know if that actually happened, but I can see how it's both harder and slower to have someone who doesn't know the code base fix bugs on the spot.

Honestly I can see it both ways. I think there are a lot of benefits to having the developers write tests themselves. On the other hand, when a project gets big enough, I can see how it makes sense to have people only working on tools like automated test frameworks or build stuff.

> I can see how it's both harder and slower to have someone who doesn't know the code base fix bugs on the spot.

Except you don't do that, you fight operational fires and other such bullshit and things don't get fixed.

Their broken oncall system is one of the reasons why I don't want to go back to Amazon.

Because the person who wrote the code is not the most objective person to review it or find problems with it.

Automation doesn't cover everything; you need baseline automated testing as well as more focused manual testing.

Today QA is not manual testing.

Today QA is talking with product/UX, taking the end user's and customer's perspective, keeping a quality mindset end to end across features and devices, and doing exploratory testing for things that don't make sense to a customer (mostly issues that arise from the interaction of different features or from cross-device use).

I think QA has been misaligned all this time. They're not part of engineering, they're part of product management. They're the low-level eyes and ears for the product team. Automating checks for the issues they uncover is absolutely an engineering function, but user-oriented holistic testing is not.

Totally buy they could reduce actual errors that matter.

I've worked with many a QA who would get bent up over a detail outside of the spec that didn't really matter, and where all QA testing was manual.

Coders (good ones) are well equipped to automate processes, and to do so quickly, and this extends to integration testing.

>I've worked with many a QA who would get bent up over a detail outside of the spec that didn't really matter, and where all QA testing was manual.

This is where you need management (or someone from the product side) who can set priorities where needed and put an end to the pointless side-disputes that can and do crop up.

Every agile story should have a testable completion point, agreed upon by development and the customer. If whether or not the story is "done" is vague and arguable, it's not good enough to do.

One of the big problems here, and where QA professionals can add real value, is defining that "done" point. Customers are often not very good at it. Their idea of what they want is too vague. They want developers to just build something, and they accept or reject it when they see it (and fault developers for not building it right).

But really, all story completion criteria should be testable, and developers should be able to demonstrate the tests. The job of QA shouldn't be to test, but to make sure the developers are actually testing what they claim to test.
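As a sketch of what a testable "done" point might look like for a vague story like "users can log in" -- the UserStore class and the criteria here are hypothetical, just to show the shape:

```python
# Hypothetical sketch: turning a vague story ("users can log in") into
# agreed, demonstrable completion criteria. UserStore is invented.

class UserStore:
    def __init__(self):
        self._users = {}

    def register(self, name, password):
        if name in self._users:
            raise ValueError("duplicate user")
        self._users[name] = password

    def login(self, name, password):
        return self._users.get(name) == password

# The agreed "done" criteria, each demonstrable on demand:
store = UserStore()
store.register("alice", "s3cret")
assert store.login("alice", "s3cret")       # registered users can log in
assert not store.login("alice", "wrong")    # wrong password is rejected
assert not store.login("bob", "s3cret")     # unknown users are rejected
```

The point isn't the code; it's that "done" is now arguable only by running the checks, not by opinion.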

Manual testing is basically a zero-skill job: can you click around this website and tell me when you see a bug? This is the most common form of QA, but it adds very little value that can't be added more reliably with automation.

Given this, QA can still bring value. The two roles where they really add value are a test-developer specialist writing non-flaky automated tests, and a BA-type role where they have conversations that expand a product owner's idea into an implementable feature.

Given that neither of these roles requires manual testing, if a QA team has over-specialised in manual testing, there's little value in keeping it.

I might be biased (ex-dev with 20 years of experience, from Assembler to C to .Net, etc.), but GOOD manual testing requires a lot of skill. If it didn't, I wouldn't have switched from dev to test. I work in a shop where they had devs doing all the testing and tons of automation, but they still found that a good exploratory tester added value. From your comment it may well be that you've never worked with someone like me -- maybe when you do, you'll think differently.

I think a lot depends on specific definitions: a manual test suite, where you have a list of tests with clear steps and a clear expectation, is definitely near-zero skill to execute. Actual exploratory testing, on the other hand, is skilled: especially if they're expected to write up tests (automated or not) that test code paths that haven't previously been tested.

Historically, Yahoo! operated for the first 10 years of their existence without formal QA teams. The first USA QA hires were for Japanese product QA.

Nowadays OpenCV is used a fair amount, and they're migrating to modern industry-standard tools.

For most products, I agree in principle: no QA, no testing. Why?

* Everyone should do QA and implement their feature's own UI/UX, following the pattern the application and framework set, tuned by an actual designer

* An environment where production issues and bugs are prioritized above everything else should be created and fostered

* To paraphrase Rich Hickey's analogy on the matter: writing tests is like driving around relying on the guard rails to keep you in the lines. That is (my interpretation): if your code is so fragile that it constantly requires testing, you've chosen poor abstractions.

They removed only MANUAL QA testing. There is a big difference between removing the QA team entirely and automating QA work.

And of course all manual testing can be automated... good luck with that.

That's what users are for!

Yahoo's main issue is UX and usability, not software quality, so this sounds like another way of saying Yahoo laid off its QA team.

Depends on your definition of 'quality' -- if s/w isn't usable and has bad UX, then how can it be high quality?

True. It has to work as intended too.

In the past I've even loosely spec'd out a system that would build integration tests simply from crawling a website. In my head. I'm surprised this hasn't become a bigger priority from some of the biggest tech companies.

I know it would be a tough problem and a big project, but I think with only a small amount of human interaction you can build all the integration testing you would ever need simply by allowing the crawler to build them for you.

In fact the way I imagine it would work, the system would automatically build a framework and a user could (in a very structured way via structured UI) coerce the integration tests in small ways to ensure it understands what's going on. For example: "This form is used for registration". "This form is for logging in".
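A rough stdlib-only sketch of that crawler idea. The canned PAGES dict stands in for real HTTP fetches, and the generated stubs are exactly the hooks where a human would attach labels like "this form is for logging in" -- all names here are hypothetical:

```python
# Rough sketch of crawler-generated integration tests, stdlib only.
# PAGES is canned HTML standing in for real HTTP responses.
from html.parser import HTMLParser

PAGES = {
    "/": '<a href="/signup"></a><form action="/login" method="post"></form>',
    "/signup": '<form action="/register" method="post"></form>',
}

class FormAndLinkFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links, self.forms = [], []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "a" and "href" in a:
            self.links.append(a["href"])
        elif tag == "form":
            self.forms.append((a.get("action", ""), a.get("method", "get")))

def crawl(start="/"):
    """Walk the canned site, collecting every form for test generation."""
    seen, queue, forms = set(), [start], []
    while queue:
        url = queue.pop()
        if url in seen or url not in PAGES:
            continue
        seen.add(url)
        parser = FormAndLinkFinder()
        parser.feed(PAGES[url])
        forms.extend(parser.forms)
        queue.extend(parser.links)
    return forms

# Each discovered form becomes a stub test awaiting a human label:
stubs = [f"def test_submit_{action.strip('/')}(): ...  # {method.upper()} {action}"
         for action, method in crawl()]
```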

Removing dedicated QA (whether they do manual testing like in the article or write automated tests) and forcing the developers to take this on themselves is okay. Alternatively, I've had a lot of success with having development teams take operational responsibility for their code. They are not only naturally incentivized to take on automating QA, they also move toward continuous deployment and become more involved in thinking about the product. The safety and speed that's gained is seeming to result in teams that stay small. It's not for everyone, and caused attrition early on, but talking about these practices during interviews has attracted the right people.

I stand in the middle ground on this one. I fully believe that rote QA testing with huge volumes of test plans is a waste of everyone's time. However, automated testing doesn't account for the fact that people are almost always the primary users of your software, so I feel there should be a person or persons somewhere who occasionally smoke-test the application to make sure things work cohesively from an end-user perspective and that everything makes sense. If this person is the prototypical product owner, that's great, if you can find one that's not in meetings all day...

A dev writing automation tests for their own code is kinda pointless. It would be better to have another dev, or a different team such as automation engineers, writing regression automation tests. Automating regression tests is definitely better than manually QA-ing the same 1000 tests over and over again. There has to be a good balance: have extensive coverage of automated regression tests and let manual QA test new features. This will at least increase the frequency of release cycles. Getting rid of an entire QA dept is somewhat equivalent to shooting yourself in the foot.

This is basically my experience. I think the ideal situation is having development and QA cultures that both prioritize quality and automation. Development cultures sometimes fail to prioritize quality ("not our problem"), and QA teams sometimes fail to prioritize automation ("not how we do things"), but those are problems with those specific cultures, not with the entire concepts of development and QA.

Nah, you write the code you write the tests.

Then you do a code review and make sure that the reviewer examines the code for missed cases.

Ultimately, even with automated testing, someone has to do the manual testing of checking that it's actually providing the value it's meant to.

When you remove the manual QA team and switch to staged rollout, you are moving the manual QA burden onto your users. You still have that manual QA team - they're the first bunch of users in your staged rollout plan - you just don't pay them anymore and gather their feedback through bug reports. Users are used to buggy software because of other companies who do this (Google, etc) so they carry on being users anyway.

I'm in QA -- have been for 12 years. Testers who can only perform manual testing, and organisations that only test manually, are the product of companies realising they should 'do some qa', and of managers who do not understand SW development signing off on building large, manual-only test teams.

It's inefficient: there is a very slow rate of feedback to devs, and not much can be done until there is a working UI. So it all lends itself to the broken waterfall model of code, code, code, then 'do some testing' right at the end of the project -- which has already seen overruns as dev squeezes QA time out.

Manual QA testers are relatively cheap on paper - so managers don't see a problem with building a team this way.

I'm not sure this will ever go away, but as someone who tries to learn every year, and master his career, I welcome Yahoo's choice. I see a role for a highly skilled 'developer in test' role superseding the traditional, ineffective manual QA role. Someone who can build automation frameworks quickly, be responsible for maintaining them and test data, and provide rapid feedback to devs. Devs should still be carrying out unit testing, code reviews etc, but I do believe a role still exists for someone to focus on QA, just with a lot more skills, providing far more rapid feedback, with less dependencies on the devs for test environments.

Quality assurance is important whether you do it via humans or code, but the thing that always bothered me about QA was that the ones I worked with were mindless people simply looking at the feature request and the functionality on the page and comparing the two, without any thought for the actual product, business, or user.

And in that system, the developer is completely removed from the product and is just another factory worker. The closer engineers can be to users (with design to translate obviously) the better for everyone.

Great. I worked for a company that didn't invest in QA, it was consistently a !@#$% mess. When you do this, the need simply shifts to the customer. I wouldn't install our software until the 4th or 5th hotfix patch was available.

Certainly, I'm an advocate of a more responsible dev team sharing the quality tasks and continuous integration too. But no QA at all? Hahah... maybe if you're a web portal that no one depends on for business-critical needs.

Edit: I guess the truth hurts.

I worked for a company that didn't invest in QA

Where are these magical places that do invest in QA? In nearly 20 years of professional development, I've never seen an organization in which the criteria for shipping was anything other than "works for me". I have never seen an organization in which there was either budget or managerial patience for proper QA, let alone anything other than VERY basic acceptance testing.

I once worked in an organization that had 1.5-2x as many QA as devs. I now work in a place with 1 QA for every 8 devs -- and there are far fewer bugs here than at the other.

I think the reason is proper tooling, a culture of thorough automated testing, and ownership of code.

Ownership and the resulting pride in workmanship makes a big difference.

When I worked at Microsoft many years ago, they had a large QA team with equal footing with Development and Product in determining what shipped and when. This was the same for every development team I was a part of there over 9 years. It wasn't perfect, and the ship cycles were slow, but the quality bar was set higher than what I've seen outside of Microsoft.

Healthcare. Many companies developing medical software require two layers of QA (programmer review and QA; then a separate QA team) before release. "Works for me" is not appropriate when the "me" in question is not a subject-matter expert, and the subject matter is potentially deadly.

Places where it is important that the software works.

I once worked on an enterprise product that had a QA team of 25 people.

When measuring effectiveness via the reduction of issues, how do you account for the natural stability introduced by reducing the updates/week each developer ships? When devs are tasked with code reviews and/or QA, that's time that could have been spent on their own features. In other words, if the product is stable today and everyone's on vacation (no new updates), the product will generally remain stable, save for unforeseen usage patterns.

"What happens when you take away the quality assurance team in a software development operation? Fewer, not more errors."

And what happens when you close your eyes? Reality disappears?

Automated testing is a way to completely remove customer advocates from the loop. Correct UX doesn't mean good UX, and unless someone can automate testing all the non-quantifiable qualities of 'good' and 'intuitive', they're gonna push loads of engineering-driven interfaces to their users.


Updated TL;DR: QA is changing - just like everything else.

The article assumes that QA == manual QA, which, speaking as a quality professional, is false. Quality is about measuring risk across the development process. Immature teams need manual QA, while mature (in a process/quality sense) teams need much less (or none).

Quality professionals who want a sustained career need to learn development processes, ops, documentation & monitoring. We make teams better.

This is actually a story about the triumph of continuous integration and staged rollout. By shipping code constantly, but slowly rolling it out to users - bugs can be detected very quickly by the users themselves, instead of employing large QA teams.

Keeping a central code repository, automating builds, frequent commits and automatic tests for code are taking away a lot of load for QA teams.
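The staged-rollout half of that is often just a deterministic hash bucket per user, so a given user stays consistently in or out as the percentage ramps up. A minimal sketch with invented names:

```python
# Minimal staged-rollout sketch: deterministically bucket each user, then
# enable the new code path for a growing percentage. Names are invented.
import hashlib

def bucket(user_id: str, feature: str) -> int:
    """Stable 0-99 bucket per (user, feature) pair."""
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % 100

def is_enabled(user_id: str, feature: str, rollout_percent: int) -> bool:
    return bucket(user_id, feature) < rollout_percent

# Ramp 1% today, 10% tomorrow; the same user never flip-flops mid-ramp.
assert is_enabled("u42", "new-checkout", 100)    # everyone at 100%
assert not is_enabled("u42", "new-checkout", 0)  # no one at 0%
```

Pair that with error-rate monitoring per bucket and the early users effectively become the detection mechanism the comment describes.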

You are correct! I'm a programmer at Yahoo -- deploying multiple times a day to production, with the confidence your code will work, feels great.

Manual ("batch-release") deployments have been forbidden for over a year, which is a forcing function to change development process to allow deploying to production continuously multiple times a day. This requires robust test and deployment automation and for engineers to better understand what they build. It's pretty nice overall!

Forgive the throwaway -- but how does Yahoo define "will work"? Ignoring the calls against the UX change of several years ago, your own user feedback pages at https://yahoo.uservoice.com/forums/207809 make it pretty clear that longstanding issues go unaddressed: spam (the same ring of spammers has operated for multiple years as Ultimate Stock Alerts, PennyStockAlerts, ExplosiveOTC, and others -- simple Bayesian filtering could have solved this years ago), and the fact that the ignore function (a pretty core piece of functionality) has never actually ignored users -- it merely greys them out, and they still take up space on the screen.

My point isn't to be negative about the state of Yahoo Finance; you probably don't work in that department, and after three years of neglect, most of the users are long gone.

My point is that if an organization is going to rely on end users to report bugs, the organization must actually respond to those bugs. Sometimes the answer might be "No, we're not going back to the Web 1.0 UX." But ignoring the top bugs for multiple years suggests a breakdown in the feedback mechanism. If Yahoo doesn't care, that's fine, it's just business. But it seems more likely that Yahoo doesn't even know there's a problem, because there's no way for user feedback to make it to the developers.

This is actually a story about the triumph of ecosystem lock-in and training consumers to exchange high quality for low price or new shiny. It's outsourcing your testing to your customers. When you are either shipping free/freemium apps, or are a company that rigidly controls all APIs to their system (Twitter/Facebook), you can force the users to swallow any level of quality you choose to give them.

...triumph of continuous integration and staged rollout.

This makes sense to me. The fact that this wouldn't have worked at the old-style "enterprisey" places we used to work at doesn't say much in general. (It may not work at Y! either, but it seems possible...)

Obvious bugs in basic functionality can be detected by automation.

The nastiest bugs will be found by either a manual tester or your customer.

If you think about a QA team as your customer, as any downstream department in the work pipeline truly is, you realize that in order to make full use of them and to maximize your efficiency, you as a developer should write automated unit tests to cover the user stories or feature requirements, allowing QA to work on the nasty edge-cases.

Rainforestqa has a unique value proposition in this space (YC some year)

Their team and product are quite good if you want to explore QA as a service. Essentially, humans (Turks) perform outlined and preprogrammed steps.

Their tagline "We automate your functional and integration testing with our QA-as-a-Service API. Human testing at the speed of automation."

When I was working at ThoughtWorks, we had devs writing automated unit tests with close to 100 percent coverage, and also QA (who automated as much of their testing as possible, using a variant of the FIT framework in those days) finding significant bugs and showstoppers.

In my experience, one is not a substitute for the other.

Ultimately, a good development process is about building in checks and balances. Code reviews, QA, automated testing, etc. are all part of that. It's up to each to team to decide which pieces they want. There's no right way to do it.

I'd vote for having a QA team. Not for quality-control purposes, but to have someone think outside the box. Sometimes you'll be surprised when you talk to the QA team, and you can't get those ideas from dev peer review.

QA Team is great if it is a team of developers who are interested in QA and test automation/tooling. Not so cool if it's a department full of people who make low hourly wages to execute manual tests and don't write code.

Is it odd that this article describes, but doesn't highlight, that Yahoo is a decade behind the industry here? Continuous integration and skipping QA have been the web standard for years now.

Why is Yahoo still coding anything? They're focusing on dumping everything other than their stake in Alibaba.

That is kind of like having an open kitchen policy (or a kitchen visible via a glass wall) in a restaurant.

Based on Tumblr's past performance, it's a shock that they had a QA team in the first place.

Nobody cares how Yahoo does anything.

TL;DR: They fired the QA team to save money. Then they made the engineers do the QA work for no extra pay.

I think that is a slightly harsh assessment. They forced the engineers to stop shipping shoddy code, so the QA team wasn't necessary in their opinion.

However I think there is probably a middle ground where your engineers deliver quality code and you also have a QA team to increase that quality even further.

A failing company cutting QA teams is nothing but a desperate attempt to save money. Yahoo's margins are falling and it's easier to fire people than it is to raise revenue.

> so the QA team wasn't necessary in their opinion.

I don't know of any large, complex systems where QA is not necessary. Technology is only fallible because humans are.

Note he didn't say "QA is not necessary". He did say "the QA team wasn't necessary". His statement seems to assume that _someone_ is still doing QA.

How about the right version:

They fired the QA team to force the devs to do a better job of designing for testability. When you have to write and plough through your own tests, you integrate tests earlier, and you modularize your code better to support that testing. There's a lot in the early phases of engineering that the initial devs can do that QA cannot. QA is handed a black box; dev gets to change the box.

It's a very similar big-picture realization as the move from "system administration" to SREs/devOps. Having a bunch of people throwing #@*( over the wall that other people then have to make work is a poor model for optimizing the big picture.

This is a good move. It's Mayer taking another play from the Google playbook and trying to improve the process at Yahoo.

> They fired the QA team to force the devs to do a better job of designing for testability

If that is actually true, then they should have fired the Devs and found ones who understand writing correct and testable code.

That is a horrible way to treat people.

I think we must be misunderstanding each other. Developers are human. (At least, I think we are.) We respond to the incentives and structure of the environment in which we're developing. A culture of "throw crap over the wall to QA" naturally creates an incentive for quickly writing up features, without balancing that with robustness, because any problem that slips through is "QA's fault". A culture of "the devs are responsible for the quality of their code" produces better code. The way to treat the devs well is to have the QA folks go off and create or bring in easy-to-use automated testing and CI frameworks, to make it easy for the devs to do the right thing.
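One way the "QA folks build easy-to-use testing frameworks" idea can look in practice (a hypothetical sketch, not Yahoo's actual setup): QA maintains shared test infrastructure with common fakes wired up once, so each developer's test stays a few lines.

```python
import unittest

# Hypothetical: a base TestCase maintained by the (former) QA team,
# providing common fakes and fixtures so devs don't rebuild them per test.
class ServiceTestCase(unittest.TestCase):
    def setUp(self):
        self.store = {}  # in-memory stand-in for the real datastore

    def put(self, key, value):
        self.store[key] = value

# A developer's feature test on top of that shared base class stays tiny:
class ProfileFeatureTest(ServiceTestCase):
    def test_roundtrip(self):
        self.put("user", "alice")
        self.assertEqual(self.store["user"], "alice")

# Run programmatically (no unittest.main(), so this stays embeddable):
suite = unittest.defaultTestLoader.loadTestsFromTestCase(ProfileFeatureTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
assert result.wasSuccessful()
```

The division of labor is the point: infrastructure is owned centrally, while responsibility for each feature's tests stays with the dev who wrote the feature.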

This is not a new observation - it's been happening over the last 10 years. See, for example, http://blogs.forrester.com/mike_gualtieri/11-02-17-want_bett... or http://product.hubspot.com/blog/the-classic-qa-team-is-obsol... and, while it's from 2007, http://googletesting.blogspot.com/2007/03/difference-between...

> Developers are human

So are QA people. They are the developers' team members, not their minions.

> We respond to the incentives and structure of the environment in which we're developing. A culture of "throw crap over the wall to QA" naturally creates an incentive for quickly writing up features, without balancing that with robustness, because finding the problems "is QA's fault". A culture of "the devs are responsible for the quality of their code" produces better code.

I'm wondering why the presence of a QA team would in any way change the professional responsibility of developers to create correct, testable code. If your developers have developed a culture of "throw crap over the wall to QA", then that's on the developers.

> The way to treat the devs well is to have the QA folks go off and create or bring in easy-to-use automated testing and CI frameworks, to make it easy for the devs to do the right thing.

That is not really the function of QA. Developers need to use proper testing tools on their own, but QA has its own function beyond just testing the code: they also test the assumptions and specifications that the "business" asks the developers to code, and they look for bugs beyond what a fixed set of test criteria would catch. It is also important to supply QA with tools for fast, automated testing, but developer tooling is not the function of QA.

QA has a job to do, and it's irresponsible for developers to use them as a substitute for their own responsibilities. As an example from other industries: safety is not just the Safety Officer's problem.

// I've spent equal years as a developer and as a sysadmin, working with both good and bad QA people

I mean, they have a salary, so unless the engineers are required to stay extra hours to do the QA, it makes no difference to them.

They might as well take out the trash and clean the floors on their way out. Vertically integrated!

I have worked in small companies before where there were no janitorial services, so we emptied our trash cans once a week, and took turns taking the larger bags to the dumpster outside. What's the big deal? And that's not a rhetorical question. I truthfully want to know why you think something like that is silly.

Because Yahoo isn't a small company? They have been losing a lot of engineers, and it must be hell to recruit people to join Yahoo; negative press like this is only going to make it worse.

As for why I think having Yahoo engineers take over janitorial duties is silly... If you're paying Silicon Valley engineering wages to someone (which they are, no one is going to Yahoo for stock options!) you should make sure they are working on engineering level tasks. It would be a giant waste of resources to have people do tasks they are overqualified for.

Definitely. Writing software : writing software to test software :: writing software : cleaning the floors.

Clearly you've forgotten that doing anything one doesn't want to do, even if it's still software and absolutely part of their job, is "being forced to do it for no extra pay."

It's impossible to ask engineers to do more work in a day - you can only change the work being done and/or the time you give them to do it. My guess is that they'll be asked to do more automated & sanity testing as part of their cycle, and as a result their timelines will be extended.

Imho, this is the way engineering teams should be structured anyways.

> "It's impossible to ask engineers to do more work in a day - you can only change the work being done and/or the time you give them to do it."

False. If engineers are typically working 9-5 or 9-6, and you give them new responsibilities but keep existing deadlines and scope, it in essence has the effect of forcing them to work longer hours to get the work done (or else...).

Sure. But that's never sustainable.

Not sure about that. Many companies have successfully crept up the responsibilities of people while keeping their pay the same. Add a few tasks here and there, tell them that this will look great come their review, and then string them along.

Not saying this is good or that the practice doesn't result in increased turnover, but that doesn't stop companies from doing it with some success.

Did you even read the article? Many of the QA teams were disbanded and did other things like development, test automation, etc.

Well, that takes away from the other work they can do - and may incentivize them to more thoroughly check their work as they go. It could be a fairly big improvement to efficiency.

Nope, they did not. And no, they didn't. Source: I work there.

Your snark gave me a good laugh.

Hit the nail on the head. I worked for a company that did this as one of their last efforts to save money. Then that company went away.

This will end well.

There's no way this could end badly...

If this is working so well, why is there still no version of Yahoo Messenger—a core product—for either Mac OS X or iOS?

It looks like Yahoo Messenger for iOS was released on December 3: https://itunes.apple.com/us/app/yahoo-messenger-chat-share/i...

And apparently there's a native OS X client coming, as well: http://mashable.com/2015/12/03/yahoo-messenger-is-back/

Thank you! I apologize for being 8 days behind on this. :)

It's still mystifying that there was no iOS or Mac version for such a long (albeit temporary) period of time.

Also mystifying that I apparently deserved a -4 for this comment. Seems valid to me to note Yahoo not having an app on iOS or Mac OS for well over a year.

Yahoo Messenger for iOS: https://appsto.re/us/DOV0-.i
