Basically, the rule should be: if you are using general-purpose consumer software, then you should be doing updates; if you are in an environment where updates are considered too risky, then running commodity software should also be considered too risky, and you should be building very small, locked-down systems instead. Ideally these have no direct internet connection: they can always connect through an updatable intermediary system, one that can't actually cause the medical device to malfunction but can still be reasonably protected against outgoing malware.
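To make the "connect through an updatable intermediary" idea concrete, here's a minimal sketch of such a gateway. Everything in it (the names, the message format, the allowlist) is hypothetical; the point is that only the gateway has a network stack, and it default-denies everything except a fixed set of read-only queries, so compromising the gateway still can't make the device misbehave.

```python
# Hypothetical gateway: the only machine with an internet-facing socket.
# It forwards a fixed allowlist of read-only queries to the device and
# refuses everything else, so a compromised gateway can leak status data
# at worst, not command the device.

import socket

ALLOWED_QUERIES = {b"GET_STATUS", b"GET_LOG_PAGE"}  # read-only, fixed set

def query_device(message: bytes) -> bytes:
    """Placeholder for the serial/RS-485 link to the actual device."""
    return b"OK"  # assumption: the device answers simple framed requests

def serve(host: str = "0.0.0.0", port: int = 9000) -> None:
    with socket.create_server((host, port)) as srv:
        while True:
            conn, _addr = srv.accept()
            with conn:
                request = conn.recv(64).strip()
                if request in ALLOWED_QUERIES:
                    conn.sendall(query_device(request))
                else:
                    conn.sendall(b"DENIED")  # default-deny everything else

serve()
```

The gateway itself stays on a normal patch cadence; the device behind it never needs to.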
I would be OK with some of these devices running a stripped-down Linux (or NT) kernel, just not a full desktop OS. If you need a fancy UI, then that can live in an external (hopefully wired, not IoT) component that can be updated.
Moreover, the manufacturer has an obligation to assess every known bug in every SOUP (software of unknown provenance) component and to provide fixes where a bug could endanger the patient.
The issue is that to prove a device is safe you have to execute costly tests. For a device I have been working on, we run endurance tests on multiple systems to simulate 10 years of use. Even with intensive scenarios running on multiple systems in parallel, it can take a few months, and if we encounter a single crash we reset the counter and start again. So in the end the product is safe, but it is costly. This is why most of the time it is actually better to have the simplest possible stack on bare metal. But sometimes mistakes have been made, you inherit a system full of SOUP, and it is a nightmare to maintain.
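As a rough illustration of why a single crash is so expensive, here's a toy model of that reset-the-counter rule; the numbers and names are made up, not from any real test protocol:

```python
# Toy model of the "any crash resets the counter" endurance-test rule.
# All names and numbers are illustrative.

TARGET_HOURS = 87_600  # roughly 10 years of simulated use

def run_campaign(rigs: int, run_step) -> int:
    """Accumulate simulated hours across rigs; any crash resets everything."""
    restarts = 0
    hours = 0
    while hours < TARGET_HOURS:
        crashed, step_hours = run_step()   # one test step across all rigs
        if crashed:
            hours = 0                      # a single crash invalidates all evidence
            restarts += 1
        else:
            hours += step_hours * rigs     # rigs accumulate hours in parallel
    return restarts
```

Every extra SOUP component is another thing that can crash and zero out months of accumulated evidence, which is the whole argument for the minimal bare-metal stack.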
I actually expect some shitstorm on Monday morning. Luckily I am working more on the embedded side, so no Windows for me, but some other divisions will be affected.
Except that people don't want to learn a new GUI for every machine...
Except that people want to be able to use a tablet for the interface...
Except that people want to control things from their phone...
Here's the reality: the end user doesn't give one iota of a DAMN about security. People want to control their pacemaker or insulin pump from their PHONE. Ay yai yai.
Even worse: can your security get in the way when someone is trying to save a life? That's going to send you to court.
Life critical systems should be small, fully open stack, fully audited, and mathematically proven to be correct.
Non-critical systems, secondary information reporting, and possibly even remote-control interfaces for those systems should follow industry best practices and do their best to stay patched and up to date.
Most likely, many modern pieces of medical technology have not been designed with this isolation between the core critical components that actually do the job and the commodity junk around them that provides convenience for humans.
Which is what happens when your whole computing network is remote-killed.
It's not like medical devices have an entertainment system the way cars and airplanes do.
This is all doable, but it adds a bit of BOM cost and changes the development model.
The long and short of it: don't use standard desktop Windows (or even standard embedded Windows), Linux, or macOS to run these devices.
If someone has a computer hooked to an MRI machine and to the hospital network, and it runs outdated/insecure software then someone made a mistake somewhere.
If you want a system to reach 100% compliance, it can't rely on people not making mistakes. If all operating systems are supposed to be updated, then this has to be enforced as part of the software; the software shouldn't, for example, accept traffic unless it's up to date.
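A hedged sketch of what "enforce it in the software" could look like: a hypothetical service-side guard that refuses inbound connections once the host's patch level is older than some cutoff. The patch-date lookup and the 30-day policy are stand-ins; the fail-closed enforcement is the point.

```python
# Hypothetical guard: refuse traffic when the host's patch level is stale.
# LAST_PATCHED and the 30-day policy are illustrative stand-ins for whatever
# the platform actually exposes and whatever policy applies.

import datetime

MAX_PATCH_AGE = datetime.timedelta(days=30)   # illustrative policy
LAST_PATCHED = datetime.date(2017, 4, 15)     # would be read from the OS in reality

def patch_age() -> datetime.timedelta:
    return datetime.date.today() - LAST_PATCHED

def accept_connection(conn) -> bool:
    """Drop inbound traffic instead of serving it from an unpatched host."""
    if patch_age() > MAX_PATCH_AGE:
        conn.close()  # fail closed: being reachable is itself the risk
        return False
    return True
```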
It's certainly ridiculous if you don't keep it utterly sandboxed and limited to only required use.
Also ridiculous is anyone falling for - or being allowed to fall for - a mail-based phishing attack anywhere in the organisation.
This is a failure of management to properly train their employees.
I could understand if (1) would be a violation, but perhaps, after today, the FDA could fast-track manufacturer patches to run software loads on VMs?
I don't imagine (2) would solve current infrastructure issues any time soon, given the size of investments in current equipment, but could it be a best practice going forward?
In 2006 this involved a nice virus that sent all your photos and emails off to people they were not intended to go to. There was a psychological aspect to what was going on with this payload, plus a full-spectrum-dominance aspect - the media were briefed with the cover story, but I don't think any journalists deliberately infected a machine to see for themselves.
At the same time that this was going on, there were computer virus problems in U.K. hospitals, on those same Windows XP machines they have today. The Russian stock market was taken down around this time too.
Suspicious, I tried to put two and two together on this, but with the 'fog of war' you can't prove that the correlation was causation. The timing was uncanny though: a 'cyberstorm' exercise going on at the same time that BBC News on TV was showing NHS machines taken out by a virus.
So that was in 2006, a decade ago. If you had found a hole in a hospital roof a decade ago, they would have had ample opportunity to fix it. They had a good warning a decade ago; things could have changed, but nothing did.
I had the pleasure of a backroom tour of a police station one night - don't ask why. Luckily I was a 'compliant' person: no trouble at all, allowed to go home with no charges or anything. An almost pleasant experience of wrongful arrest, but still with the fingerprints taken - I think that is their guest book.
Every PC I saw was prehistoric. The PC for taking mugshots was early-1990s vintage, running Windows 95 or 98. I had it explained to me why things were so decrepit.
Imagine if, during the London riots of 2011, the police's PC network had been taken down, with all of that police bureaucracy becoming unworkable?!? I believe the police computers are effectively in the same position as the NHS, with PCs dedicated to one task, e.g. mugshots, and a takedown of this infrastructure would just ruin everything for the police. Targeting the UK police, getting their computers compromised (with mugshots, fingerprints, whatever), and then asking them to pay $$$ in bitcoin before being locked out for good the next week - that would have made me chuckle with schadenfreude.
Anyone considering disclosure, responsible or not, should be aware of these types of secondary effects. Had these vulnerabilities hypothetically been discovered by a white hat, or found their way to a leak-disseminating organization, the discoverers and gatekeepers should have considered that not everything can be patched; the ethical thing to do here would have been to notify Microsoft and wait for a significant product cycle before releasing technical details. I somehow doubt the Shadow Brokers had that aim, though. And it's saddening that even in the hypothetical case, many people would choose "yay transparency!" over a thoughtful enumeration of consequences.