If you run a large installation of computers, taking updates can be a huge risk. Often they can break things, and then you're in the position of being blamed for running an update. Not updating can often lead to much higher stability.
In previous environments I've worked in that were "regulated", any change to the environment, such as a firmware upgrade, triggered an entire re-regulation process (testing, paperwork, etc).
That's wrong. If you run a large installation of computers, and you do not have a plan and a process for quickly deploying security patches, you should be fired with cause.
In this specific case, there are mitigations available that do not require installation of software, but merely a configuration change. Also in this specific case, the people who run IT at the NHS are completely incompetent, and this has been well-documented for several years.
In the general case, "I have a lot of machines" is an excuse provided by the unable to evade being held responsible by the uninformed.
Easy for you to say. I have been unable to do my job for several days because some update broke a service I was using. Sure, the service was badly written, but we didn't know that until the patch was applied.
The phone company used to have (they still might, I'm not in the business anymore) large labs that were small replications of their network. I've been in meetings where the goal was to decide whether we should try to get our latest release through their process - if yes and we were successful, they would pay big for the latest features, but if yes and it failed [I can't remember, I think we had to pay them for test time, but the contract was complex]. A lot of time was spent looking at every known bug to decide if it was important.
Funding is necessary but not the determining factor. There are just as many incompetent IT admins in well-funded private companies earning top pay. Sensible and aware top management is far more critical.
If your job is to keep a bunch of computers working, keeping the systems running is the goal. Deploying security patches quickly is not always considered a requirement.
Again, the problem is that rolling out patches quickly often leads to unplanned problems that can't be easily detected or rolled back from. That can cause problems worse than leaving security issues unpatched.
If your systems are exposed to the Internet, then deploying security patches quickly is a part of keeping the systems running - as illustrated by this case, where the systems obviously are not running and can't be easily rolled back to a working state.
The business of cybercrime is changing. With the growing popularity of ransomware, we should expect a gradual decrease in the time between a published remote vulnerability and your systems getting attacked. It may be useful to delay patches by a day to see if there aren't any glaring problems encountered by others - but it's not a reason to leave open holes that were patched in March. Frankly, there was no good reason why this attack hadn't happened a month ago; next time the gap may be much smaller.
Yes, there is a chance that installing a security update will break your systems. But there's also a chance that not installing a security update will break your systems, and that chance, frankly, is much higher.
Furthermore, "That can cause problems worse than leaving security issues unpatched" seems trivially untrue. Every horrible thing that might happen because of a patch broken in a weird way may also happen because of an unpatched security issue. Leaving security issues unpatched can take down all your systems and data, plus also expose confidential information. An MS patch, on the other hand, assuming that it's tested in any way whatsoever, won't do that - at most, it will take down some of your systems, which is bad, but not as bad as what e.g. Spain's Telefonica is experiencing right now. What patch could have caused them even worse problems?
When you say 'the people who run IT at the NHS', you are aware that, thanks to recent governments' attempts to break up central structures, each hospital trust and each GP surgery is likely to have someone different handling IT - market forces are good, etc.
Any competent sysadmin will have these available on their internal update server and push updates+restart during off-peak hours.
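The scheduling part doesn't have to be elaborate. A rough sketch in Python of the "only during off-peak hours" gate, where the host names and push_update_and_restart are placeholders for whatever your update server (WSUS, SCCM, ssh plus a package manager, etc.) actually provides:

    from datetime import datetime, time as dtime

    # Hypothetical maintenance window and host names, purely illustrative.
    OFF_PEAK_START = dtime(1, 0)   # 01:00
    OFF_PEAK_END = dtime(5, 0)     # 05:00
    HOSTS = ["reception-01", "reception-02", "ward-b-pc-07"]

    def in_off_peak_window(now=None):
        # True if the current time falls inside the maintenance window.
        current = (now or datetime.now()).time()
        return OFF_PEAK_START <= current <= OFF_PEAK_END

    def push_update_and_restart(host):
        # Stand-in for whatever your update server actually does:
        # WSUS approval, SCCM deployment, ssh + package manager, ...
        print(f"pushing approved updates to {host} and scheduling a restart")

    if in_off_peak_window():
        for host in HOSTS:
            push_update_and_restart(host)
    else:
        print("outside the maintenance window, doing nothing")

Run that from a nightly scheduled task and the "we can't restart machines during business hours" objection goes away.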
Receptionist computers that can open websites with untrusted JavaScript can't reasonably be held to this certification. Certification isn't what kept the NHS from applying patches.
Some vertical markets use a lot of software that integrates with Microsoft Office applications. The result is that there is a much higher chance of a Microsoft update breaking a critical application. [0] is a recent (September 2015) example of two Microsoft patches that were widely blocked in the legal industry until Microsoft released a follow-up patch. iManage and Workshare, the products mentioned in the blog entry, are considered critical applications in any law firm that uses them. iManage is a widely used document management system (think primitive VCS with Office add-ins). All documents are stored in the DMS, so access to it is critical to the business. Workshare is used for document comparison and metadata scrubbing. Metadata scrubbing is used on all outgoing emails.
Translation: "My feelings make me feel that the statement isn't right. Instead of finding out, I'm just going to say that I wish someone would tell this commenter they're wrong."
> If you run a large installation of computers, taking updates can be a huge risk. Often they can break things, and then you're in the position of being blamed for running an update. Not updating can often lead to much higher stability.
There is such a thing as staged rollouts for this exact type of scenario.
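A rough sketch of what a ring-based rollout can look like, in Python; deploy_patch, looks_healthy, and the host names are stand-ins for whatever management and monitoring tooling you actually run:

    import time

    # Hypothetical rings and host names, purely illustrative.
    RINGS = [
        ["canary-01", "canary-02"],        # a couple of low-risk machines first
        ["ward-a-pc-01", "ward-a-pc-02"],  # then one department or site
        ["reception-01", "reception-02"],  # then the rest of the fleet
    ]
    SOAK_SECONDS = 24 * 3600  # let each ring run for a day before widening

    def deploy_patch(host):
        # Stand-in for WSUS approval, SCCM, Ansible, etc.
        print(f"patching {host}")

    def looks_healthy(host):
        # Stand-in for whatever monitoring you already have.
        return True

    for ring in RINGS:
        for host in ring:
            deploy_patch(host)
        time.sleep(SOAK_SECONDS)  # soak period before widening the rollout
        if not all(looks_healthy(h) for h in ring):
            print("problems detected, halting the rollout")
            break

The point is that a bad patch takes out a couple of canaries, not the whole estate, while the fix for a remote vulnerability still reaches everything within days rather than months.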