Both productivity and quality were higher in the places with fully automated testing. That's not shocking at all: does anybody really think a human can run through 800 test cases better than a computer can?
It's not a magic way to save money -- the developers obviously end up spending time writing tests. But the long-term value of those tests is cumulative, whereas the effort spent on manual testing is spent anew every release.
Manual review is still good for noticing things that "feel wrong" or for helping think up new corner cases. But those bleed into product owner & design concerns, and aren't really a separate function.
No thanks. It's a bunch of deflection and diffusion of responsibility, coupled with high-latency, flaky interactions between different teams. Everything that can slip through the cracks does slip through the cracks.
I'm sure QA can be done well, but I am convinced that giving your devs a pass to not finish their work is a dead end in several dimensions.
Any time I've experienced something different it's been exactly as you described.
Now, getting devs to see the product through the users' eyes goes a long way toward solving that, but if you have a process and a team of devs that are doing that, you're way ahead of the game in a lot of ways.
In practice, a manual QA team encourages devs to throw shit over the wall and expect someone else to do the basic sanity checks they should have already done. By the time those are done, whatever the QA team theoretically could have found has already shipped.
Then, when it's discovered in Prod, the QA team will get in the way of a speedy fix.
This keeps the devs incentivized to make sure everything works before the code goes to QA, and it keeps everyone incentivized to eliminate as many bugs as possible before release.
If you give an engineer a career incentive to optimize something, you'd be surprised how seriously some will take it.
When a site gives a 500 because a database went down and the web app couldn't connect to it... is that a bug, a missed test case, or should ops take the hit for the downtime? Furthermore, if you argue that the dev team should have reasonable failsafes in the code for connecting to a db: in a 10-year-old organization, should the current dev team pay for something that could have been in the code for years?
If you set up a system where devs hand off to QA, who then hand off to ops to deploy, every step is incentivised somewhat against the others. Even if all teams are equal, it ends up, in my opinion, on the path to CYA and Waterfall. Everyone is more concerned with problems not being 'their' fault than with shipping good code.
An incentive/penalty system for bugs without an incentive/penalty system for features completed leads to paralysis. And a complicated incentive system leads to game playing over productivity.
It's an old saying: be careful what you measure because you'll get a lot more of it.
Where this starts to break apart is that QA can get overzealous and file bugs for different incarnations of the same bug.
It's a bit of give and take.
For an organization with a single product (and, particularly, one small enough that the "top-level" QA and Dev manager are also first level managers), this makes some sense. Otherwise, this means that each product has no common level of technical management below the CTO, meaning it has no meaningful technical ownership.
This is because there is likely to be a tradeoff between speedy development and bug count.
If you are shipping physical CDs of software, or releases which you can't update easily, or if you are working in a critical environment where mistakes are disastrous (finance, health, space, etc.), then it is fair enough to be so concerned about bugs.
But many if not most developers work on the web in areas where the product is uncertain and evolving. Prioritising rate of change is more important here. Rather than extensive QA, monitoring and fast rollback are better choices.
My experience with the formal process I've been talking about was in a company whose product is a large-scale web application that interacted with a complex custom back end search engine and a proprietary content database. The QA and Dev teams had a friendly adversarial relationship, but ultimately we all felt we were on the same team with the same goals: to produce the best software we could. It was a fantastic place to work.
Having a top-level manager who is responsible for a whole discipline doesn't work, it just creates a high wall to throw shit over. Developers and QA testers really should report to the same first line manager.
Surely everyone should report to the business lead (the end-user on the business side) responsible for making this functionality happen. This includes dev, QA, business. Maybe QA should even be on the business side? This is no longer 1985; there is no reason the business (product owner) shouldn't be a capable PM and BA for their own product.
On any given project, we had a Product Manager who set the business requirements, a Project Manager who ran the project, a Dev lead in charge of the development team, and a QA lead in charge of QA. None of these people could override the others because of the reporting arrangement, so we had to collaborate instead to make sure we all met our goals.
Early on this arrangement worked really well. But it started falling apart when ownership of the company changed and politics became a factor as people who cared more about moving up through the organization than about the success of the company and quality of the products were hired and promoted. My take is that bad management will destroy any approach you might take to producing good software. That's why I eventually left.
If this is in place, adding extra QA can be beneficial.
None of this, of course, is an excuse for a dev not producing code that works at least on its happy path, plus/minus a few of the most obvious exception/error paths...
I think this rather misses the point; it's the bug that doesn't have a test case where QA helps.
He found bugs in the specs, gaps in the specs and just plain untested behaviour.
QA is a job that requires skill, benefits greatly from technical understanding and requires a lot of domain knowledge.
Of course not. But automated tests are just the entry ticket to the release process.
A computer can't replicate the irrationality, laziness and guile of a human end-user. That's what good QA engineers test.
Similarly, going full-screen on a two-monitor setup broke once. Again, no automated test.
The Web Speech API's text-to-speech is broken in every browser it's in. It will work with a simple sample, but start and stop it a few times and it will break - I suspect, again, because there is no easy way to test it.
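For what it's worth, the failure mode is easy to describe in code. A minimal sketch of the start/stop pattern, using the standard SpeechSynthesis interface (browser-only; the text and iteration count are arbitrary):

    // Minimal sketch: rapidly start and stop speech synthesis, the
    // pattern that reportedly breaks the engine after a few rounds.
    function speakAndCancel(text: string, iterations: number): void {
      for (let i = 0; i < iterations; i++) {
        const u = new SpeechSynthesisUtterance(text);
        u.onerror = (e) => console.error(`iteration ${i} errored:`, e.error);
        window.speechSynthesis.speak(u);  // start speaking
        window.speechSynthesis.cancel();  // immediately stop
      }
      // A final "real" utterance; on a buggy engine this often produces
      // no audio, or never fires its end event.
      const final = new SpeechSynthesisUtterance(text);
      final.onend = () => console.log('final utterance completed');
      window.speechSynthesis.speak(final);
    }

    speakAndCancel('hello world', 5);

And that last comment is the rub: asserting "audio actually came out" is exactly what an automated test struggles to do.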
There's lots of others.
Automated testing is not a substitute for QA, IMO.
Devs don't think like users. They know a lot about computers and can make good guesses about how other devs think. So devs are not the best people to test code that's going to be put in front of non-dev users.
Users are likely to have different models, and a different set of expectations. You can't write tests for all those possibilities because you literally have no idea what they are - and won't find out until you put them in front of users.
Maybe in better companies than the ones I worked at that isn't the case, but devs shouldn't be your 24/7 support staff.
My phone has never been so loud.
> Software engineers at Yahoo are no longer permitted to hand off their completed code to another team for cross checking.
Dev Team A was giving a batch of code to Dev Team B to review.
I agree that the business user/product owner/whatever should be reviewing everything in test prior to approval, and in production after, but I'm not sure that's what the article means by QA.
That being said, if you have to pick one or the other then you go with automated tests.
Before the switch, our team (advertising pipeline on Hadoop) used the waterfall method with these gigantic, monolithic releases; we probably released a handful of times a year. Almost without exception, QA was done manually and was painfully slow. I started to automate a lot of the testing after I arrived, but believe you me when I say that it was a tall order.
Soon after I moved into development, QA engineers without coding chops were let go, while the others were integrated into the development teams. The team switched over to agile, and a lot of effort was made to automate testing wherever possible. Despite some initial setbacks, we got down to a bi-weekly release cycle with better quality control than before.
Around the time I left, the company was mandating continuous delivery for all teams, as well as moving from internal tools to industry-standard ones like Chef. I left before it was completed, but at least as far as the data pipeline teams were concerned, the whole endeavor made the job a lot more fun, increased release quality, and weeded out a lot of the "that's not my job" types that made life hell for everyone else.
That is an important part of producing quality output in any job, I believe. The more employees actually enjoy what they are doing (or at least, don't actively hate it), the better their output is likely to be.
But then there were the other QA teams. The people that would just reject your stuff outright if it didn't have tests (no matter if it worked), and when the tests passed, they would look at things truly from a customer perspective. They would ask really uncomfortable questions, not just of developers, but of designers and business alike. They had a mindset that was different from those creating things; they were the devil's advocate. These people did much, much more good than harm, and they are few and far between. Unfortunately, while I believe they were incredibly valuable, business thought otherwise when cuts came around...
I recently became a software tester, and I really didn't understand the role for quite a while. Is my primary responsibility finding bugs? Logging defects? Analysing requirements documents? Writing test scripts? Writing status reports?
Answer: Do enough of each to fulfill your goal of gathering and sharing information with your management and dev groups.
If the software tester has problems testing, then the customer will have problems using it, and the company will have problems supporting it.
[1] Kaner, Cem; Bach, James; Pettichord, Bret. 2001. Lessons Learned in Software Testing. Wiley.
When you have software developers skilled in quality assurance who have the job of finding the edge cases and producing comprehensive additional acceptance and functional testing, they're an asset. It's a particularly perverse mindset that I personally really enjoy interacting with as a software developer 'customer' - those evil bastards find the best bugs, regression test them, and expand to find all of that class of error in the application.
Startups still glorify Facebook's "Move Fast and Break Things" without noting that Facebook has backpedaled from that. After all, people expect startup software to have issues, so what's the harm? Technical debt? Pfft.
Engineers are not the best QA for their own code, since they may be averse to admitting errors in it. QA engineers aren't as emotionally invested.
Disclosure: I am a Software QA Engineer in Silicon Valley.
Whether a distinct QA team is the best means of performing the QA function is, however, a separate question.
The insane deadlines required devs to write lots of poor-to-average-quality code (we tried code reviews and pair programming... no time for that, so it fell by the wayside). The automated testing done by devs was terrible, but understandably so. If you are up until 2am hacking out code (without any precise requirements), then why bother with testing? We ended up having one -somewhat- central component that had "gating tests". Everyone stuck their tests there. That made things worse, since that one component was the one that kept seeing failed tests. The PMs did frantic "user-like" testing before demos. You can imagine how fun that was.
In my opinion, the decision not to have a dedicated team of testers was the big mistake in all of this. When you have many teams, many components, and no precise requirements, you need an independent QA team to coordinate and prevent people from passing the buck. In a time-critical project (what projects are not time-critical today?), you don't have precise requirements and devs have to "sling" code. I accept this reality. But I don't accept the "no QA will make you mature devs" stupidity. If I were being a mature dev, I would refuse to code until the requirements were clear. None of this 2-week agile-scrum nonsense.
Oh .. and one other big thing. The project was a cloud project that needed to be up 24/7 while we were developing it (for beta users). It was like going from one outage to the next. What a disaster!
So what I learned is this: not only have a QA team, but have a 24 by 7 QA team for the kind of project I was on. Note... not all projects are the same!
And I don't think an independent QA team helps to coordinate people and avoid buck-passing. IME, the more different teams are essential to delivering a piece of software to the customer, the more opportunities there are for buck-passing, and the higher the level of management at which the buck-passing occurs.
You don't think that could be related? If you had a sane development cycle, that may have helped. Having QA would likely have helped as well, but a healthy dev cycle is a good start.
Facebook could afford to "move fast and break things" when they weren't taking money from anyone. Now that they're a real company, that kind of attitude can get them adverse attention from the SEC if it causes an accounting discrepancy or from trial lawyers if it upsets their advertisers.
That's not to say that dedicated QA doesn't have a purpose. They can and should be testing the live code on a regular basis (weekly or daily, depending on how long the testing cycle takes). But automated tests are what's responsible for ensuring that nothing arrives in production in a completely broken state, and it's assumed that the benefits of continually shipping software will outweigh the downsides of occasionally having subtle bugs in production. And even with QA involved, bugs will make it through the process. At that point, the expedited process of pushing code to production becomes a huge win. Testing strategies often aim for the best MTBF, but when that comes at the cost of MTTR, it's not always a good thing. We've had bugs that were fixed in production less than 10 minutes after the bug was filed.
The other point that gets missed is that people assume that there's no manual QA happening and a developer's careless change just goes to production and wreaks havoc. This ignores the code review process, which is crucial to delivering quality software in a continuous deployment scenario. On my team, changes require 2 +1s before being merged into master, subjected to continuous integration again, and eventually deployed to production. Moreover, if any engineer reviewing the code isn't sure they fully understand the change or otherwise wants to see the code running, a single command that runs in under 10 min will spin up an environment in AWS using the code in the pull request so that they can do any manual testing they need to feel comfortable adding their +1. When they're done, a single command cleans up that environment.
The thing to keep in mind when designing a testing strategy is the context your software runs in. I would not advocate this testing strategy for code that runs in a vehicle where a bug could cause physical destruction. Likewise, I wouldn't use it for an application with access to highly-sensitive medical or financial information where a leak or data corruption could mean millions of dollars in losses/fines. But for the majority of internet software, the stakes just aren't that high and the gains from a streamlined development process will outweigh the losses from bugs that find their way into production.
Disclosure: I manage a team that deploys continually, usually upwards of 20 times per day. We're responsible for our own QA and have a significantly lower defect rate than other teams in the company with a more traditional QA strategy. However we still draw on QA resources when we feel like we're pushing something risky.
This approach allows us to stay agile, with small, regular releases, while also making good use of what QA folks are actually good at.
Microsoft switched to this model a few months after Satya took over.
For the majority of Microsoft teams it worked really well and showed the kinds of results mentioned in this yahoo article. Look at many of our iOS apps as an example.
But for some parts of the Windows OS team apparently it didn't work well (according to anonymous reports leaked online to major news outlets by some Windows team folks) and they say it caused bugs.
First of all, I think that argument is semi-BS and a cover-up for those complainers' lack of competence in testing their own code - which makes them bad engineers, because a good engineer knows how to design, implement, and test their product, imo. But I digress.
I in no way want to sound like a dk, but as an engineer it is your responsibility to practice test-driven development - and even that alone is not enough.
As with reading an essay, you usually can't catch all of your own bugs, so peer editing - or in this case, cross-testing - is very useful.
You should write the unit tests and integration tests for your feature.
There should always be an additional level of end-to-end tests for your feature, written by someone other than you.
Everyone should own a feature and design and implement it well, including its unit tests and integration tests, BUT they should also be responsible for E2E tests for someone else's feature.
That way everyone has feature tasks and test tasks, and no one feels like they are only doing one thing or stuck in a dead-end career.
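To make the split concrete, a rough sketch - the feature (a discount calculator), the names, and the layout are all invented for illustration:

    import assert from 'node:assert';

    // --- Feature code, owned by dev A (hypothetical example) ---
    function applyDiscount(price: number, percent: number): number {
      if (percent < 0 || percent > 100) throw new RangeError('bad percent');
      return Math.round(price * (1 - percent / 100) * 100) / 100;
    }

    // --- Unit tests, written by dev A, the feature's author ---
    assert.strictEqual(applyDiscount(100, 10), 90);
    assert.strictEqual(applyDiscount(19.99, 0), 19.99);
    assert.throws(() => applyDiscount(100, 150), RangeError);

    // --- E2E test, written by dev B, who did NOT build the feature ---
    // Dev B only knows the user-facing contract ("checkout shows the
    // discounted total") and exercises the flow with fresh eyes.
    async function e2eCheckoutShowsDiscount(): Promise<void> {
      // Stand-in for driving a real checkout flow end to end.
      const total = applyDiscount(100, 10);
      assert.strictEqual(total, 90);
      console.log('e2e: discounted total is correct');
    }

    e2eCheckoutShowsDiscount();

The point isn't the code, it's the ownership: dev B's tests encode expectations dev A never wrote down.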
One of the things that put me off when it comes to TDD is that it has always been a bit like religion.
What matters is whether the tests exist, not when they were written. I'd even argue that writing a test first and then being constrained by that box is a bad idea. Write the most elegant code first, and then write tests to cover all paths. You're more likely to know the problem better after the code is written.
Technically true, but for myself at least: when I do TDD, I tend to write more, and better, tests. When I write tests after the code, especially when working on tight deadlines, substantially fewer tests get written - just lots of TODOs that never get done.
That's why you need a second person (ideally QA) to look at the result and test it. Cognitive bias 101.
It's all the little details that will determine if this system will succeed or not.
Not the overall "big idea".
Implementation of this system and the competence and willingness to adapt of the team members is key imo. At least for this issue.
* Devs write automated unit tests galore, plus a smattering of integration tests
* QAs write some acceptance tests
* QAs maintain a higher level of broad understanding of where the org is going, trying to anticipate when a change in Team A will impact Team B _before_ it happens. They also do manual testing of obscure/unrepeated scenarios, basically using their broader knowledge to look for pain before it is felt.
The above hasn't happened anywhere I've been (though each point HAS happened somewhere, just not all together).
One thing in particular I've noticed is that good QA is a mindset that devs don't share. Devs can learn to be BETTER at QA than they are, but I honestly think it's not helpful for a QA to be a dev or a dev to be a QA - they are different skill sets, and while someone can have both, it's hard to excel at both.
All developers should aim for no bugs and test their stuff themselves, but of course when deadlines are looming it's easier to just code and let the QA team pick it up.
Don't get me wrong, I think testing is important. But there's tons of code where you get no value by writing tests for it. (At least in frontend development.)
We don't have a QA person, but I think it'd be great to have one. You can't write automated tests to check that things all look like they should. You can't write automated tests to check that all of the interactions are behaving as expected.
Where I work, devs do the QA, and most of the devops work as well. It's the new reality, and anyone who thinks otherwise will be obsoleted.
Where have you worked? Just inside the SV bubble? This is definitely not true.
The suckiest part of this story is the number of folks who are stuck with gated handoff processes and can't see how this would ever work. Some of those folks might spend 10 or 20 years catching up to the other folks.
Just to be clear, QA the function isn't going anywhere. It's all being automated by the folks writing the code. QA the people/team? Turns out that this setup never worked well.
I work with tech organizations all the time. I find that poor tech organizations, when faced with a complex problem, give it to a person or team. Good organizations bulldoze their way through it the first time with all hands on board, then figure out how to offload as much of that manual BS as possible to computers. If they can't automate it, they live with it until they can. Same goes for complex cross-team integration processes.
So I'd have to ask how getting rid of QA has affected the pace of feature development.
There is still QA, it's just automated QA. Welcome to the 21st century.
This just meant that everyone made sure they were writing well tested code before it got released because you didn't want to be the guy who made yourself, or worse, your coworker have to fix something at 3am.
Of course I can see how this could be bad too, like if developers really dislike writing tests. On the other hand, the people who write the code seem best equipped to understand how to set up automated testing most efficiently.
ha, yeah right.
What it really means at Amazon is build your service then bail for a new team before you have to maintain it.
Exaggeration, but somewhat true. I have a friend there now who has an oncall rotation that's split into day/night. Nighttime oncall basically means you are working graveyard that week. Of course, that is really because that team should have a proper support staff; it's a vital service.
Amazon, to me, is the epitome of a company that combines dev/qa/support because it's cheap rather than because it's actually good.
On the other hand, I'll disagree that they combine dev/qa/support because they're cheap. This actually makes little sense to me because typically QA roles are paid less than traditional software developers. That being the case, it doesn't make sense to get rid of them and let your developers do that stuff if all you want to do is save money.
What I heard was that they used to have support staff but changed it because it wasn't working out. I wasn't there so I don't know if that actually happened, but I can see how it's both harder and slower to have someone who doesn't know the code base fix bugs on the spot.
Honestly I can see it both ways. I think there are a lot of benefits to having the developers write tests themselves. On the other hand, when a project gets big enough, I can see how it makes sense to have people only working on tools like automated test frameworks or build stuff.
Except you don't do that, you fight operational fires and other such bullshit and things don't get fixed.
Their broken oncall system is one of the reasons why I don't want to go back to Amazon.
Automation doesn't cover everything; you need baseline automated testing as well as more focused manual testing.
Today QA is talking with product/UX, taking the end-user and customer perspective, wearing a quality hat end-to-end over features and across devices, doing explorative testing of stuff that does not make sense to a customer (mostly what's created by the interference of different features or by cross-device interaction).
I've worked with many a QA who would get bent up over a detail outside of the spec that didn't really matter, and where all QA testing was manual.
Coders (good ones) are well equipped to automate processes, and to do so quickly, and this extends to integration testing.
This is where you need management (or someone from the product side) who can set priorities where needed and put an end to the pointless side-disputes that can and do crop up.
One of the big problems here, and where QA professionals can add real value, is defining that "done" point. Customers are often not very good at it. Their idea of what they want is too vague. They want developers to just build something, and they accept or reject it when they see it (and fault developers for not building it right).
But really, all story completion criteria should be testable, and developers should be able to demonstrate the tests. The job of QA shouldn't be to test, but to make sure the developers are actually testing what they claim to test.
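As an invented example of what a testable completion criterion looks like: take a story like "after three failed logins, the user sees a password-reset link" and encode it as a check the developer can actually run and demonstrate (the Auth module here is hypothetical):

    import assert from 'node:assert';

    // Hypothetical system under test: an auth module with lockout behavior.
    interface LoginResult { ok: boolean; showResetLink: boolean; }

    class Auth {
      private failures = 0;
      login(password: string): LoginResult {
        const ok = password === 'correct-horse';
        this.failures = ok ? 0 : this.failures + 1;
        return { ok, showResetLink: this.failures >= 3 };
      }
    }

    // The story's "done" criterion as an executable test: the dev
    // demonstrates it, and QA audits that it matches what was claimed.
    const auth = new Auth();
    auth.login('wrong');
    auth.login('wrong');
    const third = auth.login('wrong');
    assert.strictEqual(third.showResetLink, true, 'reset link after 3 failures');
    console.log('acceptance criterion holds');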
Given this, QA can still bring value. The two roles where they really add value are a test-developer specialist writing non-flaky automated tests, and a BA-type role where they have conversations that expand a product owner's idea into an implementable feature.
Given that neither of these roles requires manual testing, if a QA team has over-specialised in manual testing, there's little value in keeping it.
Nowadays OpenCV is used a fair amount, and they're migrating to modern industry-standard tools.
* Everyone should do QA and implement their own features' UI/UX, by following the patterns the application and framework set, tuned by an actual designer
* An environment where production issues and bugs are prioritized above everything else should be created and fostered
* To paraphrase Rich Hickey's analogy on the matter: writing tests is like driving around relying on the guard rails to keep you in the lines. That is (my interpretation):
* If your code is so fragile that it constantly requires testing, you've chosen poor abstractions.
I know it would be a tough problem and a big project, but I think with only a small amount of human interaction you can build all the integration testing you would ever need simply by allowing the crawler to build them for you.
In fact the way I imagine it would work, the system would automatically build a framework and a user could (in a very structured way via structured UI) coerce the integration tests in small ways to ensure it understands what's going on. For example: "This form is used for registration". "This form is for logging in".
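A rough sketch of what that generator could look like - everything here (the CrawledForm shape, the hint format, the Playwright-style output, the sampleValueFor helper) is invented to show the idea, not a real tool:

    // The crawler records forms; a human adds one hint per form
    // ("this is registration"); the generator emits a test skeleton.
    interface CrawledForm { url: string; fields: string[]; }
    interface Hint { purpose: string; }

    function generateTest(form: CrawledForm, hint: Hint): string {
      const fills = form.fields
        .map((f) => `  await page.fill('[name="${f}"]', sampleValueFor('${f}'));`)
        .join('\n');
      return [
        `// Auto-generated from crawl of ${form.url}`,
        `test('${hint.purpose}', async ({ page }) => {`,
        `  await page.goto('${form.url}');`,
        fills,
        `  await page.click('[type="submit"]');`,
        `  // TODO(human): assert the expected post-submit state`,
        `});`,
      ].join('\n');
    }

    const signup: CrawledForm = { url: 'https://example.com/signup', fields: ['email', 'password'] };
    console.log(generateTest(signup, { purpose: 'This form is used for registration' }));

The human's job shrinks to labeling forms and filling in the final assertion; the crawler does the tedious enumeration.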
Then you do a code review and make sure that the reviewer examines the code for missed cases.
When you remove the manual QA team and switch to staged rollout, you are moving the manual QA burden onto your users. You still have that manual QA team - they're the first bunch of users in your staged rollout plan - you just don't pay them anymore and gather their feedback through bug reports. Users are used to buggy software because of other companies who do this (Google, etc) so they carry on being users anyway.
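To be concrete about what a staged rollout is doing under the hood - deterministically bucketing users and comparing against a percentage - here's a sketch (the FNV-1a hash is just one illustrative choice):

    // Sketch: hash the user id into a stable 0-99 bucket and compare
    // against the rollout percentage.
    function bucket(userId: string): number {
      let h = 2166136261; // FNV-1a 32-bit offset basis
      for (let i = 0; i < userId.length; i++) {
        h ^= userId.charCodeAt(i);
        h = Math.imul(h, 16777619); // FNV-1a 32-bit prime
      }
      return (h >>> 0) % 100;
    }

    function inRollout(userId: string, percent: number): boolean {
      return bucket(userId) < percent;
    }

    // The first 1% of users are, in effect, the unpaid manual QA team:
    console.log(inRollout('user-12345', 1));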
It's inefficient, there is a very slow rate of feedback to devs, and not much can be done until there is a working UI - so it all lends itself to the broken waterfall model of code, code, code, then 'do some testing' right at the end of the project, by which point dev overruns have already squeezed QA time out.
Manual QA testers are relatively cheap on paper - so managers don't see a problem with building a team this way.
I'm not sure this will ever go away, but as someone who tries to learn every year, and master his career, I welcome Yahoo's choice. I see a role for a highly skilled 'developer in test' role superseding the traditional, ineffective manual QA role. Someone who can build automation frameworks quickly, be responsible for maintaining them and test data, and provide rapid feedback to devs. Devs should still be carrying out unit testing, code reviews etc, but I do believe a role still exists for someone to focus on QA, just with a lot more skills, providing far more rapid feedback, with less dependencies on the devs for test environments.
And in that system, the developer is completely removed from the product and is just another factory worker. The closer engineers can be to users (with design to translate obviously) the better for everyone.
Certainly, I'm an advocate of a more responsible dev team sharing the quality tasks and continuous integration too. But no QA at all? Hahah... maybe if you're a web portal that no one depends on for business-critical needs.
Edit: I guess the truth hurts.
Where are these magical places that do invest in QA? In nearly 20 years of professional development, I've never seen an organization in which the criteria for shipping was anything other than "works for me". I have never seen an organization in which there was either budget or managerial patience for proper QA, let alone anything other than VERY basic acceptance testing.
I think the reason is proper tooling, a culture of thorough automated testing, and ownership of code.
I once worked on an enterprise product that had a QA team of 25 people.
And what happens when you close your eyes? Reality disappears?
Automated testing is a way to completely remove customer advocates from the loop. Correct UX doesn't mean good UX, and unless someone can automate testing all the non-quantifiable qualities of good and intuitive, they're gonna push loads of engineering-driven interfaces to their users.
The article assumes that QA == manual QA, which, speaking as a quality professional, I can say is false. Quality is about measuring risk across the development process. Immature teams need manual QA, while mature (in a process/quality sense) teams need much less (or none).
Quality professionals who want a sustained career need to learn development processes, ops, documentation & monitoring. We make teams better.
Keeping a central code repository, automating builds, frequent commits and automatic tests for code are taking away a lot of load for QA teams.
Manual ("batch-release") deployments have been forbidden for over a year, which is a forcing function to change development process to allow deploying to production continuously multiple times a day. This requires robust test and deployment automation and for engineers to better understand what they build. It's pretty nice overall!
My point isn't to be negative about the state of Yahoo Finance; you probably don't work in that department, and after three years of neglect, most of the users are long gone.
My point is that if an organization is going to rely on end users to report bugs, the organization must actually respond to those bugs. Sometimes the answer might be "No, we're not going back to the Web 1.0 UX." But ignoring the top bugs for multiple years suggests a breakdown in the feedback mechanism. If Yahoo doesn't care, that's fine, it's just business. But it seems more likely that Yahoo doesn't even know there's a problem, because there's no way for user feedback to make it to the developers.
This makes sense to me. The fact that this wouldn't have worked at the old-style "enterprisey" places we used to work at doesn't say much in general. (It may not work at Y! either, but it seems possible...)
The nastiest bugs will be found by either a manual tester or your customer.
Their team and product are quite good if you want to explore QA as a service. Essentially, humans (turks) perform outlined and preprogrammed steps.
Their tagline "We automate your functional and integration testing with our QA-as-a-Service API. Human testing at the speed of automation."
In my experience, one is not a substitute for the other.
However, I think there is probably a middle ground where your engineers deliver quality code and you also have a QA team to increase that quality even further.
I don't know of any large, complex systems where QA is not necessary. Technology is only fallible because humans are.
They fired the QA team to force the devs to do a better job of designing for testability. When you have to write and plough through your own tests, you integrate tests earlier, and you modularize your code better to support that testing. There's a lot in the early phases of engineering that the initial devs can do that QA cannot. QA is handed a black box; dev gets to change the box.
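A tiny invented example of what "dev gets to change the box" buys you - making a dependency injectable so tests don't need the real thing:

    import assert from 'node:assert';

    // Hard to test: the clock is baked into the box.
    function isExpiredHard(deadline: number): boolean {
      return Date.now() > deadline;
    }

    // Testable: the dev redesigned the box so the clock is injectable.
    function isExpired(deadline: number, now: () => number = Date.now): boolean {
      return now() > deadline;
    }

    assert.strictEqual(isExpired(1000, () => 2000), true);  // past deadline
    assert.strictEqual(isExpired(1000, () => 500), false);  // before deadline

QA, handed only the finished binary, can't make that refactor happen; the dev who owns the code can.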
It's a very similar big-picture realization as the move from "system administration" to SREs/devOps. Having a bunch of people throwing #@*( over the wall that other people then have to make work is a poor model for optimizing the big picture.
This is a good move. It's Mayer taking another play from the Google playbook and trying to improve the process at Yahoo.
If that is actually true, then they should have fired the Devs and found ones who understand writing correct and testable code.
That is a horrible way to treat people.
This is not a new observation - it's been happening over the last 10 years. see, for example, http://blogs.forrester.com/mike_gualtieri/11-02-17-want_bett...
and, while it's from 2007, http://googletesting.blogspot.com/2007/03/difference-between...
So are QA people. They are the developer team's members, not their minions.
> We respond to the incentives and structure of the environment in which we're developing. A culture of "throw crap over the wall to QA" naturally creates an incentive for quickly writing up features, without balancing that with robustness, because finding the problems "is QA's fault". A culture of "the devs are responsible for the quality of their code" produces better code.
I'm wondering why the presence of a QA team would in any way change the professional responsibility of developers to create correct, testable code. If your developers have developed a culture of "throw crap over the wall to QA", then that's on the developers.
> The way to treat the devs well is to have the QA folks go off and create or bring in easy-to-use automated testing and CI frameworks, to make it easy for the devs to do the right thing.
That is not really the function of QA. Developers need to use proper testing tools on their own, but QA has its own function beyond just testing the code. They also test the assumptions and specifications that the "business" asks the developers to code. They look for bugs beyond what a given set of test criteria prescribes. It is also important to supply QA with tools for fast, automated testing - but developer tools are not the function of QA.
QA has a job to do, and it's irresponsible for developers to use them as a substitute for their own responsibilities. As an example from other industries: safety is not just the Safety Officer's problem.
// I've spent equal years as a developer and a sysadmin, working with good and bad QA people
As for why I think having Yahoo engineers take over janitorial duties is silly... If you're paying Silicon Valley engineering wages to someone (which they are, no one is going to Yahoo for stock options!) you should make sure they are working on engineering level tasks. It would be a giant waste of resources to have people do tasks they are overqualified for.
Imho, this is the way engineering teams should be structured anyways.
False. If engineers are typically working 9-5 or 9-6, and you give them new responsibilities but keep existing deadlines and scope, it in essence has the effect of forcing them to work longer hours to get the work done (or else...).
Not saying this is good or that the practice doesn't result in increased turnover, but that doesn't stop companies from doing it with some success.
And apparently there's a native OS X client coming, as well: http://mashable.com/2015/12/03/yahoo-messenger-is-back/
It's still mystifying that there was no iOS or Mac version for such a long (albeit temporary) period of time.
Also mystifying that I apparently deserved a -4 for this comment. Seems valid to me to note Yahoo not having an app on iOS or Mac OS for well over a year.