> "The malware was circulated by email; targets were sent an encrypted, compressed file that, once loaded, allowed the ransomware to infiltrate its targets."
It sounds like the basic (?) security practices recommended by professionals - keep systems up-to-date, pay attention to whether an email is suspicious - would have covered your network. Of course, as @mhogomchunu points out in his comment - is this the sort of thing where only one weak link is needed?
Still. Maybe this will help the proponents of keeping government systems updated? And/or, maybe this will prompt companies like MS to roll out security-only updates, to make it easier for sysadmins to keep their systems up-to-date...?
(presumably, a reason why these systems weren't updated is due to functionality concerns with updates...?)
This is secondhand information (so take it for what it's worth; there could be pieces I'm missing), but I talked with a startup that was focusing on this problem. The issue was not so much the computers and servers that IT was using (although sometimes it was); it was that many medical devices (CT scanners, pumps, etc.) ship with old, outdated versions of operating systems and libraries.
No big deal, right? Just make sure those are up to date too? Well, the support contracts for these medical devices are often so strict that you can void the warranty by installing third-party software like an antivirus, or even by running something like Windows Update.
Even worse, many hospitals don't even know what devices they have -- it's easy for IT to know about laptops and computers, but when every single medical device more complicated than a stethoscope has a chip in it and may respond to calls on certain ports, it's much harder to get a complete picture.
The startup was https://www.virtalabs.com/ by the way, they really are doing some cool things to help with this.
Basically, the rule should be: if you are using general-purpose consumer software, then you should be doing updates; if you are in an environment where updates are considered too risky, then running commodity software should also be considered too risky, and you should be building very small, locked-down systems instead. Ideally without a direct internet connection (they can always connect through an updatable gateway system that can't actually cause the medical device to malfunction, but can be reasonably protected against outgoing malware as well).
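To make the gateway idea concrete, here's a minimal sketch (all names, addresses, and ports are hypothetical) of a relay sitting between the device's isolated segment and the hospital network. It only ever reads from the device and forwards outward, so the gateway box can be patched aggressively without a bad update ever reaching the device itself:

    #!/usr/bin/env python3
    # Hypothetical one-way telemetry gateway: the medical device sits on an
    # isolated segment and pushes readings to us; we relay them to the
    # hospital network but never send a byte back toward the device.
    import socket

    DEVICE_SIDE = ("10.0.0.1", 9000)        # isolated segment (hypothetical)
    HOSPITAL_SINK = ("192.168.1.50", 9100)  # monitoring server (hypothetical)

    def relay():
        listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        listener.bind(DEVICE_SIDE)
        listener.listen(1)
        while True:
            dev, _ = listener.accept()
            dev.shutdown(socket.SHUT_WR)  # enforce one-way: we can no longer write to the device
            with socket.create_connection(HOSPITAL_SINK) as sink:
                while True:
                    chunk = dev.recv(4096)
                    if not chunk:
                        break
                    sink.sendall(chunk)   # telemetry flows outward only
            dev.close()

    if __name__ == "__main__":
        relay()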
 I would be ok with some of these devices running a stripped down Linux (or NT) kernel, just not a full desktop OS. If you need a fancy UI, then that can be in an external (hopefully wired, not IoT) component that can be updated.
Moreover, the manufacturer has an obligation to assess every known bug in every SOUP (software of unknown provenance) component and to provide fixes if it can endanger the patient.
The issue is that to prove that a device is safe you have to execute costly tests. For a device I have been working on, we do endurance tests on multiple systems to simulate 10 years of use. Even with intensive scenarios running on multiple systems, it can take a few months. And if we encounter a single crash, we reset the counter and start again. So in the end the product is safe, but it is costly. This is why most of the time it is actually better to have the simplest possible stack on bare metal. But sometimes mistakes have been made, and you inherit a system full of SOUP, and that is a nightmare to maintain.
I actually expect a shitstorm on Monday morning. Luckily I am working more on the embedded side, so no Windows for me, but some other divisions will be affected.
Except that people don't want to learn a new GUI for every machine...
Except that people want to be able to use a tablet for the interface...
Except that people want to control things from their phone...
Here's the reality: The end user doesn't give one iota of DAMN about security. People want to control their pacemaker or insulin pump from their PHONE. Ay yai yai.
Even worse: can your security get in the way when someone is trying to save a life? That's going to send you to court.
Life critical systems should be small, fully open stack, fully audited, and mathematically proven to be correct.
Non-critical systems, secondary information reporting, and possibly even remote control interfaces for those systems should follow industry best practices and try to do their best to stay up to date and updated.
Most likely, many modern pieces of medical technology have not been designed with this isolation between the core critical components that actually do the job and the commodity junk around them that provides convenience for humans.
which is what happens when your whole computing network is remote-killed
It's not like medical devices have an entertainment system like cars and airplanes.
This is all doable, but it adds a bit of BOM cost and changes the development model.
The long and short: don't use standard desktop Windows (or even standard embedded Windows), Linux, or macOS to run these devices.
If someone has a computer hooked to an MRI machine and to the hospital network, and it runs outdated/insecure software then someone made a mistake somewhere.
If you want a system to reach 100%, it can't rely on nobody making mistakes. If all operating systems are supposed to be updated, then this has to be enforced as part of the software -- for example, the software shouldn't accept traffic unless it's up to date.
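A sketch of what "enforced as part of the software" could look like, assuming a hypothetical stamp file that the update process touches whenever a patch is applied; the service simply refuses to come up if the stamp is too old:

    #!/usr/bin/env python3
    # Sketch: refuse to serve traffic unless the machine was patched recently.
    # The stamp-file path and the 30-day threshold are both hypothetical; the
    # update process is assumed to touch the stamp after each patch run.
    import sys
    import time
    import pathlib

    PATCH_STAMP = pathlib.Path("/var/lib/myservice/last_patch")  # hypothetical
    MAX_AGE_DAYS = 30

    def patched_recently():
        try:
            age = time.time() - PATCH_STAMP.stat().st_mtime
        except FileNotFoundError:
            return False  # no record of ever being patched: refuse
        return age < MAX_AGE_DAYS * 86400

    if __name__ == "__main__":
        if not patched_recently():
            sys.exit("refusing to start: patch level older than %d days" % MAX_AGE_DAYS)
        # ... only now start accepting traffic ...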
It's certainly ridiculous if you don't keep it utterly sandboxed and limited to only required use.
Also ridiculous is anyone falling for - or being allowed to fall for - a mail-based phishing attack anywhere in the organisation.
This is a failure of management to properly train their employees.
I could understand if 1 were a violation, but perhaps, after today, the FDA could fast-track manufacturer patches to run software loads on VMs?
I don't imagine 2 would solve current infrastructure issues any time soon given the size of investments in current equipment, but could it be a best practice going forward?
In 2006 this involved a nice virus that sent all your photos and emails off to people they were not intended to go to. There was a psychological aspect to what was going on with this payload, plus a full-spectrum-dominance aspect - the media were briefed with the cover story, but I don't think any journalists deliberately infected a machine to see for themselves.
At the same time that this was going on there were some computer virus problems in U.K. hospitals, those same Windows XP machines they have today. The Russian stock market was taken down around this time too.
Suspicious, I tried to put two and two together on this, but with the 'fog of war' you can't prove that correlation equals causation. The timing was uncanny, though: a 'cyberstorm' exercise going on at the same time that BBC News on TV was showing NHS machines taken out by a virus.
So that was in 2006. A decade ago. If you found a hole in a hospital roof a decade ago they would have had ample opportunity to fix it. They had a good warning a decade ago, things could have changed but nothing did.
I had the pleasure of a backroom tour of a police station one night - don't ask why. Luckily I was a 'compliant' person: no trouble at all, allowed to go home with no charges or anything. An almost pleasant experience of wrongful arrest, but still with the fingerprints taken - I think that's their guest book.
Every PC I saw was prehistoric. The PC for taking mugshots was early-1990s vintage, running Windows 95 or 98. I had it explained to me why things were so decrepit.
Imagine if, during the London riots of 2011, the police's PC network had been taken down, with all of that police bureaucracy becoming unworkable?! I believe that the police computers are effectively in the same position as the NHS, with PCs dedicated to one task, e.g. mugshots, and that a takedown of this infrastructure would just ruin everything for the police. Targeting the UK police, compromising their computers (with mugshots, fingerprints, whatever), and then asking them to pay $$$ in bitcoin before being locked out for good the next week - that would have made me chuckle with schadenfreude.
Anyone considering disclosure, responsible or not, should be aware of these types of secondary effects. Had these vulnerabilities hypothetically been discovered by a white hat or found their way to a leak-disseminating organization, the discoverers and gatekeepers should consider that not everything can be patched, and the ethical thing to do here would have been to notify Microsoft and wait for a significant product cycle to release technical details. I somehow doubt the Shadow Brokers had that aim, though. And it's saddening that even in the hypothetical case, many people would choose "yay transparency!" over a thoughtful enumeration of consequences.
We saw a fictional example of a scheme like this on Battlestar Galactica. Officers phoned and faxed orders around the ship, using simple devices that did not execute software. The CIC had its data punched in by radar operators, instead of networking with shipwide sensors. It was a lot of work, but it did keep working in the face of a sophisticated, combined malware/saboteur attack.
> security incidents
Also, the doctor's computer pretty much needs to interface with the system(s) that handles patient billing (and thus non-medical companies) and the system(s) that handle patient scheduling, reminders, etc.
Not really an issue in the NHS, apart from the occasional non-resident foreign national.
(The "fundholding" system does mean there's a certain amount of internal billing which the patient is never aware of, but the beating Bevinist heart of the free-at-point-of-use system is still in place)
A private practice where everything is paid by the patient in full by cash or CC could do without any integration with external systems (just run a standard cash register), but as soon as someone else is paying for it, you generally need to link the doctor's office systems to that in some way.
And yes, surely they should have super limited network features. The important word is "should."
Workstations absolutely should be patched with security updates. Running an intranet-wide update server is non-trivial, but is well within the reach of a competent sysadmin. And failing to do it is negligent.
In previous environments I've worked in that were "regulated", any change to the environment, such as a firmware upgrade, triggered an entire re-regulation process (testing, paperwork, etc.).
In this specific case, there are mitigations available that do not require installation of software, but merely a configuration change. Also in this specific case, the people who run IT at NHS are completely incompetent, and this has been well-documented for several years.
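For anyone searching: the configuration change in question is, I believe, disabling SMBv1, which Microsoft documents as a single registry value. A sketch using Python's stdlib winreg module (must be run elevated; takes effect after a reboot):

    #!/usr/bin/env python3
    # Disable SMBv1 serving via the registry value Microsoft documents for
    # this purpose. Run as administrator; a reboot is required to take effect.
    import winreg

    key = winreg.OpenKey(
        winreg.HKEY_LOCAL_MACHINE,
        r"SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters",
        0,
        winreg.KEY_SET_VALUE,
    )
    winreg.SetValueEx(key, "SMB1", 0, winreg.REG_DWORD, 0)  # 0 = SMBv1 disabled
    winreg.CloseKey(key)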
In the general case, "I have a lot of machines" is an excuse provided by the incapable to evade being held responsible by the uninformed.
The phone company used to have (they still might; I'm not in the business anymore) large labs that were small replications of their network. I've been in meetings where the goal was to decide whether we should try to get our latest release through their process - if we did and it succeeded, they would pay big for the latest features, but if we did and it failed... (I can't remember the details; I think we had to pay them for test time, but the contract was complex). A lot of time was spent looking at every known bug to decide if it was important.
That's what you get when you defund critical services.
Again, the problem is that rolling out patches quickly often leads to unplanned problems that can't be easily detected or rolled back from. That can cause problems worse than leaving security issues unpatched.
The business of cybercrime is changing. With the growing popularity of ransomware, we should expect a gradual decrease in the time between a remote vulnerability being published and your systems getting attacked. It may be useful to delay patches by a day to see if others hit any glaring problems - but that's not a reason to leave open holes that were patched in March. Frankly, there was no good reason why this attack hadn't happened a month ago; next time the gap may be much smaller.
Yes, there is a chance that installing a security update will break your systems. But there's also a chance that not installing a security update will break your systems, and that chance, frankly, is much higher.
Furthermore, "That can cause problems worse than leaving security issues unpatched" seems trivially untrue. Every horrible thing that might happen because of a patch broken in a weird way may also happen because of an unpatched security issue. Leaving security issues unpatched can take down all your systems and data, plus also expose confidential information. An MS patch, on the other hand, assuming that it's tested in any way whatsoever, won't do that - at most, it will take down some of your systems, which is bad, but not as bad as e.g. Spain's Telefonica is experiencing right now. What patch could have caused them even worse problems?
You just download the monthly rollup: http://www.catalog.update.microsoft.com/search.aspx?q=401221...
Any competent sysadmin will have these available on their internal update server and push updates+restart during off-peak hours.
There is such a thing as staged rollouts for this exact type of scenario.
Only non-critical machines can just automatically apply software patches from Redmond (or anybody). This is not laziness or incompetence - only a few weeks ago, military-grade exploits from the USG were leaked onto the internet, and they are currently being re-purposed for non-spying applications. Does anyone think any organisation is prepared for this? Chinese chatter indicates that MS17-010 doesn't even fix all cases! Many organisations will have been saved by infra guys making sure MS17-010 was rushed through and that McAfee sigs were updated 'just because'.
edit: fixed CVE (Eternalblue)
They should have formally validated software running on formally validated, deterministic, realtime hardware, in non-networked environments (but with telemetry and remote control from networked computers if that's convenient). We just don't bother, because it's cheaper and legal to get away with selling hacky nonsense.
Now the machine that you pull up the images on is most likely going to be a general purpose PC/Mac. You still need to patch that. Your IT dept needs to have patch cycles that deploy in sets, so all mission critical equipment can be tested before everything gets patched. It takes diligence, and planning. If you prepare at a very large hospital with two MRI machines, then a bad patch can leave you degraded, but not totally offline.
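A sketch of that kind of ring-based patch cycle, with hypothetical deploy() and health_check() stand-ins for whatever your update server and monitoring actually provide (machine groups and soak period are made up too):

    #!/usr/bin/env python3
    # Sketch of a staged patch rollout: canary machines first, then
    # non-critical ones, then mission-critical gear last, with a soak
    # period and a health gate between rings.
    import time

    RINGS = [
        ["it-testbench-01", "it-testbench-02"],  # ring 0: canaries
        ["reception-*", "billing-*"],            # ring 1: non-critical
        ["radiology-*", "lab-*"],                # ring 2: mission critical, last
    ]
    SOAK_HOURS = 24

    def deploy(patch, ring):
        # Placeholder: push `patch` to every machine in `ring`
        # (e.g. approve it for a WSUS/SCCM computer group).
        print("deploying", patch, "to", ring)

    def health_check(ring):
        # Placeholder: poll the ring for failed boots, crashing apps, etc.
        return True

    def rollout(patch):
        for ring in RINGS:
            deploy(patch, ring)
            time.sleep(SOAK_HOURS * 3600)  # let the ring soak before widening
            if not health_check(ring):
                raise RuntimeError("halting rollout of %s at ring %s" % (patch, ring))

    if __name__ == "__main__":
        rollout("2017-03-security-rollup")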
Yeah, not gonna happen.
As long as the chance of cyberattacks is larger than the chance of horrible patches, you simply accept the risk of horrible patches and install them anyway. Or keep the system totally isolated from everything, if it's that critical.
The IV drip machine is plugged into the wall, and is operated by buttons on the front.
Unfortunately, I think the active hours period cannot be set to more than twelve hours, which is less than the time required for some surgical interventions. I can almost imagine it: OK everyone, ten-minute break while Windows installs its updates, this guy who's been on life support for the last ten hours can wait a little longer.
That's why updates are not forced on business-grade installs, and forcing them would be a very, very stupid decision.
Forced updates make sense for home users, since Microsoft can't depend on someone requiring them to keep their networks secure. For other types of users, second-guessing update policies is always a bad idea.
If someone is going to die if a computer stops working for any reason at all, it should not be running Windows, or Linux, or macOS. It certainly shouldn't be connected to the internet or to any other network.
If we treat computers as nice-to-have, mixed-use machines with all the bells and whistles, then we need to treat them like nice-to-haves and not need-to-haves.
Surgeon workstations can absolutely be restarted once per month to install the monthly roll-up.
The article mentions patient records servers and receptionist computers being affected by the ransomware. Not life support equipment.
I was replying to the part about forcing updates. I didn't know about the group policy setting (rightfully pointed out by sp332); without it, you don't wait a month, you wait at most 12 hours :-).
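For anyone else who didn't know: on managed machines that policy ends up under a registry path, readable with a few lines of stdlib Python (value meanings per Microsoft's Windows Update policy docs - NoAutoUpdate=1 turns automatic updating off, AUOptions 2-4 select notify/auto-download/scheduled-install behaviour):

    #!/usr/bin/env python3
    # Read the Windows Update group-policy values, if any are configured.
    import winreg

    PATH = r"SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU"

    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, PATH) as key:
            for name in ("NoAutoUpdate", "AUOptions"):
                try:
                    value, _ = winreg.QueryValueEx(key, name)
                    print(name, "=", value)
                except FileNotFoundError:
                    print(name, "not set")
    except FileNotFoundError:
        print("no Windows Update policy configured on this machine")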
And there have already been situations where updates have caused problems. Maybe not as severe as a full-on attack, but enough to potentially disrupt production and thus risk someone's job.
There was a Windows script file on the desktop, something like "UPS tracker.js", but it disappeared before I could grab it and a free space recovery didn't return it. (Possibly due to TRIM, it was on an SSD workstation.)
The problem with Windows is that crap cannot be upgraded without stopping your workflow and rebooting.
With Linux distros you can upgrade packages in the background (that's because on Linux you can replace a file that is currently being executed - running processes keep the old copy open - while on Windows you can't) without even rebooting afterwards. You can even patch a running kernel without a reboot.
On Windows you have to stare at an upgrade screen for an hour without an opportunity to do something useful, and after that you have to reboot. That sucks.
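The mechanism is easy to demonstrate with a quick sketch (filenames are made up). On Linux and other POSIX systems, renaming a new version over a file doesn't disturb processes that already hold the old one open; on Windows, the same os.replace() call will typically fail with a permission error while the file is held open:

    #!/usr/bin/env python3
    # Demonstrates why Linux can upgrade in place: replacing a file does not
    # disturb processes that already have the old version open.
    import os

    with open("app.bin", "w") as f:
        f.write("old version\n")

    old_handle = open("app.bin")          # a "running program" holding the file

    with open("app.bin.new", "w") as f:
        f.write("new version\n")
    os.replace("app.bin.new", "app.bin")  # the "upgrade": atomic rename over it

    with open("app.bin") as new_view:
        print(new_view.read().strip())    # new opens see: new version
    print(old_handle.read().strip())      # the old handle still reads: old version
    old_handle.close()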
I do not believe outsourcing saves money. It only does so either by cutting quality of service, or in cases where the IT department was heavily mismanaged anyway. Bring in capable management and you don't need to outsource.
Special-purpose stuff can still be cheaper to outsource, though. If I need something to work next week and it would take my staff a month to get up to speed, I'd spend the money on outsourcing it.