What happened? Microsoft used to have an army of testers who would do a mix of manual and automated testing to help ensure the quality of the product. In 2014 they were all fired. Their strategy for testing Windows is now threefold:
- Devs are expected to test their own code now
- There is now a program where customers can use beta builds of Win10 and report issues
- MS relies on telemetry to find and fix issues
Today there's lots of old code from the era before automated tests were widespread in the industry, and the automated tests that the testers used to maintain are often bitrotting now. The culture of testing is not strong among developers in some parts of the company. The beta program can't catch everything either.
So telemetry it is.
And you need some telemetry. Minidumps/coredumps make debugging certain kinds of crashes much easier, and Apple captures that sort of data too. And I guess seeing whether people use some widget in Office a lot or not at all could be useful for improving the product. In practice it also leads to issues with the GDPR and with governments, which MS does ultimately fix.
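For the curious: Windows Error Reporting collects crash dumps system-wide, and an app-level crash reporter can capture a minidump itself with the documented DbgHelp API. A minimal sketch - the dump path is a made-up placeholder and error handling is omitted:

```cpp
// Minimal sketch: write a minidump from an unhandled-exception filter.
// Uses the documented DbgHelp API; link against dbghelp.lib.
#include <windows.h>
#include <dbghelp.h>

static LONG WINAPI WriteDumpFilter(EXCEPTION_POINTERS* info) {
    // A real crash reporter would pick a unique, writable path.
    HANDLE file = CreateFileW(L"C:\\temp\\crash.dmp", GENERIC_WRITE, 0,
                              nullptr, CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL,
                              nullptr);
    if (file != INVALID_HANDLE_VALUE) {
        MINIDUMP_EXCEPTION_INFORMATION mei = {};
        mei.ThreadId = GetCurrentThreadId();
        mei.ExceptionPointers = info;
        mei.ClientPointers = FALSE;
        // MiniDumpNormal keeps dumps small: thread stacks and module
        // lists rather than full process memory.
        MiniDumpWriteDump(GetCurrentProcess(), GetCurrentProcessId(), file,
                          MiniDumpNormal, &mei, nullptr, nullptr);
        CloseHandle(file);
    }
    return EXCEPTION_EXECUTE_HANDLER;
}

int main() {
    SetUnhandledExceptionFilter(WriteDumpFilter);
    // ... application code ...
}
```

That file is what makes "this crash is a null deref three frames into module X" debuggable after the fact, without the user doing anything.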
Apparently Win10 Enterprise users have a switch somewhere that turns off almost all telemetry, but Home and Pro users don't; say what you will about that.
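The switch in question is presumably the diagnostic-data level, set via Group Policy or the AllowTelemetry value under HKLM\SOFTWARE\Policies\Microsoft\Windows\DataCollection (0 = Security, 1 = Basic, 2 = Enhanced, 3 = Full); level 0 is only honored on Enterprise/Education editions. A quick sketch that just reads whatever policy is set, if any:

```cpp
// Sketch: read the AllowTelemetry policy value, if one is set.
// 0 = Security (honored on Enterprise/Education only), 1 = Basic,
// 2 = Enhanced, 3 = Full. Link against advapi32.lib.
#include <windows.h>
#include <cstdio>

int main() {
    DWORD level = 0;
    DWORD size = sizeof(level);
    LSTATUS rc = RegGetValueW(
        HKEY_LOCAL_MACHINE,
        L"SOFTWARE\\Policies\\Microsoft\\Windows\\DataCollection",
        L"AllowTelemetry",
        RRF_RT_REG_DWORD, nullptr, &level, &size);
    if (rc == ERROR_SUCCESS)
        printf("AllowTelemetry policy: %lu\n", (unsigned long)level);
    else
        printf("No AllowTelemetry policy set; the edition default applies.\n");
    return 0;
}
```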
> In 2014 they were all fired. Their strategy for testing Windows is now threefold:
> - Devs are expected to test their own code now
I was at a company that did the same (at roughly the same time) - this was a terrible mistake. But it's very hard to discuss, because any dev complaining sounds like they want to avoid testing.
Dear manager who I hope reads this:
QAs, at least the ones under discussion, aren't just mindless pressers-of-buttons. They have a skillset and domain knowledge just like (but distinct from) devs. A dev should absolutely test their own code...but they will do so with their own domain knowledge, which is by definition more limited than that of someone for whom such knowledge is a key part of the job. Devs should test so that only problems worthy of the QA skillset make it to the QAs.
Asking devs to do QA work is asking them to build skills that are not their strength, and that will come at the cost of their strengths. It is asking them to build skills they probably don't enjoy, and that will come at the cost of their job satisfaction. Having devs do all the QA work is as bad as having devs test nothing and leave it all to the QAs - it's an inefficient and ineffective use of resources and time.
There is nothing wrong with encouraging devs to deliver higher-quality code to the QA/test engineers. There IS something wrong with thinking that higher-quality code from devs means you don't need a QA/test engineer.
Firing all your QA engineers because you can make your developers do QA is like firing all your operations people because you can make your developers do operations.
In either case, it's also like cutting off your left arm because you still have your right arm.
This is such a perfect analogy for where I feel DevOps fails in some implementations.
My experience spans three different companies now, and at the first two (not my current employer, who... still do it wrong, but it's the best I've seen), they fired the Ops guys and expected the Systems Engineers/Software Engineers to take on those roles.
No overtime pay at either company for the extra workload that was placed on them (e.g. me), of course. So the only cost the companies incurred was productivity and quality of work. The burden was carried entirely by the staff and, to an even greater degree, by the clients, who got a shoddier product and service.
Fortunately I quit both jobs to get to where I am now, though the old-timers no doubt are still working ridiculous hours and are too scared to ask themselves "can I find better elsewhere?".
Re: specialization. Most people aren't ambidextrous, afaik, and will have a hard time retraining to do things with the other hand. So I think the metaphor still holds.
Testing your own code is quite a conflict of interest. Of course you are going to make sure that all your tests pass, and argue that the corner cases you didn't test will never happen in production.
From a recent comment around here I (re)learned that some software developers "just build to spec and they're done": if some user presses a button and accidentally launches all the nuclear missiles [or whatever], that wasn't stated as "don't allow it" in the spec, so it's not their fault.
And then QA does its job well and gets hate for saying that accidental nuclear genocide [or whatever] might be a bad thing and should be fixed before release to the customer.
Sure, "code to spec and you're done". But are you actually really done? Are you sure?
> Microsoft used to have an army of testers who would do a mix of manual and automated testing to help ensure the quality of the product. In 2014 they were all fired.
> ... Devs are expected to test their own code now
I'll never understand this line of thinking. A good tester doesn't think the way a good developer does. It's a different but equally necessary skill, and you need both to deliver a strong product.
Fully agree! Development is about making things work, testing is about breaking things.
Sadly, some managers think that "developed" includes "tested" and expect flawless results from developers. It's like being a dancer and a painter simultaneously - possible, but it has its limitations.
> And you need some telemetry. Minidumps/coredumps make debugging certain kinds of crashes much easier, and Apple captures that sort of data too.
I don't know how Apple does it, but crash reporters have been around for a long time (Netscape had one!) and would always ask you if you wanted to send a crash report.
I worked at a place where we used to dogfood betas, and Microsoft basically refused to accept any bug reports from us. They only wanted telemetry.
Then you have products like SCCM that are now forever in beta, and they play games with release cycles so that you literally have about a three-week window to report substantial bugs and get a fix. Before that, the product isn’t generally available (you need to use “pre-release” to keep up with the quarterly updates). After that, you’ll be told it’s too late in the cycle to fix bugs, because they won’t be fixed until after the support cycle ends.