
> "Microsoft rolled out a patch for the vulnerability last March, but hackers took advantage of the fact that vulnerable targets — particularly hospitals — had yet to update their systems."

> "The malware was circulated by email; targets were sent an encrypted, compressed file that, once loaded, allowed the ransomware to infiltrate its targets."

It sounds like the basic (?) security practices recommended by professionals - keep systems up to date, pay attention to whether an email is suspicious - would have protected your network. Of course, as @mhogomchunu points out in his comment, is this the sort of thing where only one weak link is needed?

Still. Maybe this will help the proponents of keeping government systems updated? And/or maybe this will prompt companies like MS to roll out security-only updates, to make it easier for sysadmins to keep their systems up to date...?

(Presumably, one reason these systems weren't updated is concern that updates would break functionality...?)




> It sounds like the basic (?) security practices recommended by professionals - keep systems up to date, pay attention to whether an email is suspicious - would have protected your network.

This is secondhand information (so take it for what it's worth; there could be pieces I'm missing), but I talked with a startup that was focusing on this problem. The issue was not so much the computers and servers that IT was using (although sometimes it was); it was that many medical devices (like CT scanners, pumps, etc.) ship with old, outdated versions of operating systems and libraries.

No big deal, right? Just make sure those are up to date too? Well, many times the support contracts for these medical devices are so strict that you can invalidate the warranty by installing third-party software like an antivirus, or even by running Windows Update.

Even worse, many hospitals don't even know what devices they have -- it's easy for IT to know about laptops and computers, but when every medical device more complicated than a stethoscope has a chip in it and may respond to calls on certain ports, the full picture is much harder to pin down.

The startup was https://www.virtalabs.com/ by the way, they really are doing some cool things to help with this.


In defense of these medical devices, that is actually an FDA requirement. The entire combination of the system is certified to work, and even one patch for a security vulnerability leaves open the possibility that the patch breaks something and people die! Of course, you still need to ensure by some other means that a virus cannot run on the machine. If these machines get infected, they automatically lose certification and cannot be used for medical purposes.


In offense of these medical devices, they should never have been running Windows or any general purpose OS in the first place! It's a lot easier to guarantee security if the entire thing is a well tested 10-50 KLOC Rust daemon on top of seL4. I am not even asking them to do formal verification themselves, just a small trusted base and reasonable secure coding practices. I mean, come on, a critical medical device running the entirety of Windows XP (or, say, Ubuntu with Apache, an X server and GNOME[1]) should be considered actual negligence. The FDA should make it outright impossible to certify such a contraption.

Basically, the rule should be: if you are using general purpose consumer software, then you should be doing updates; if you are in an environment where updates are considered too risky, then running commodity software should also be considered too risky and you should be building very small locked down systems instead. Ideally without a direct internet connection (they can always connect through an updatable system that can't actually cause the medical device to malfunction, but can be reasonably protected against outgoing malware as well).

[1] I would be ok with some of these devices running a stripped down Linux (or NT) kernel, just not a full desktop OS. If you need a fancy UI, then that can be in an external (hopefully wired, not IoT) component that can be updated.


The FDA does not forbid the use of general purpose OSes. However, they are strictly regulated. For every SOUP (software of unknown provenance/pedigree, that is, every piece of software that was not developed specifically for a medical device), it is the responsibility of the manufacturer to provide performance requirements, tests, risk analysis...

Moreover, the manufacturer has the obligation to assess every known bug of every SOUP and provide fixes if it can endanger the patient.

The issue is that to prove that a device is safe you have to execute costly tests. For a device I have been working on, we do endurance tests on multiple systems to simulate 10 years of use. Even with intensive scenarios running on multiple systems, it can take a few months. And if we encounter a single crash, we reset the counter and start again. So in the end the product is safe, but it is costly. This is why most of the time it is actually better to have the simplest stack possible on bare metal. But sometimes mistakes have been made, and you inherit a system full of SOUP, and it is a nightmare to maintain.

I actually expect a shitstorm on Monday morning. Luckily I work more on the embedded side, so no Windows for me, but some other divisions will be affected.


> In offense of these medical devices, they should never have been running Windows or any general purpose OS in the first place!

Except that people don't want to learn a new GUI for every machine...

Except that people want to be able to use a tablet for the interface...

Except that people want to control things from their phone...

Here's the reality: The end user doesn't give one iota of DAMN about security. People want to control their pacemaker or insulin pump from their PHONE. Ay yai yai.

Even worse: can your security get in the way when someone is trying to save a life? That's going to send you to court.


Most of these don't apply in the context of medical devices. Sure, you can find some which will give you access to the usual OS desktop, but largely they're integrated and have a full-screen, completely customised interface.


The device itself should not run Windows. You should separate the two: one machine for the device, one for the user. The user machine (a full-blown Windows box, if that's what you want) you can security-update all you want.


Of course, such devices can put their code in ROM, and so any malware would not survive a reboot.


Sure, but then you also need strict W^X memory protections, without exceptions (kernel included), since malware in the memory of a device that rarely reboots is dangerous enough. For example, the very best malware for network devices never writes itself to disk even when it could, in order to avoid showing up in forensics. This already precludes most general purpose OSes, and it is still technically vulnerable to some convoluted return-to-X attacks that just swap data structure pointers around and use existing code to construct the malicious behavior, so I'd still feel better with a minimal trusted base even then.
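
To make the invariant concrete, here's a minimal Unix-only Rust sketch (assuming the libc crate; not any particular product's implementation): a code page is writable or executable, never both at once.

    use std::ptr;

    fn main() {
        unsafe {
            // Map the page read+write, NOT executable.
            let page = libc::mmap(
                ptr::null_mut(),
                4096,
                libc::PROT_READ | libc::PROT_WRITE,
                libc::MAP_PRIVATE | libc::MAP_ANONYMOUS,
                -1,
                0,
            );
            assert_ne!(page, libc::MAP_FAILED);
            // ... copy trusted, verified code into the page here ...
            // Drop write permission before the page ever becomes
            // executable, preserving the W^X invariant.
            assert_eq!(libc::mprotect(page, 4096, libc::PROT_READ | libc::PROT_EXEC), 0);
        }
    }

The "no exceptions" policy I mean would go further: the kernel would refuse any mapping or permission flip like this unless the new contents were verified.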


This, 100x. I know it's extremely easy to Monday-morning-quarterback hospital IT, but it's not as simple as people think. There are legal and, far more importantly, medical implications to updating software at a hospital. Oh, you think it's ridiculous we use IE 7 in compatibility mode? It's because our mission-critical EMR only works in that (well, it really works in everything, but it's certified on IE 7), and if we use anything but the certified software load when accessing it, the vendor puts all the blame on us.


Yes, it actually is.

Life critical systems should be small, fully open stack, fully audited, and mathematically proven to be correct.

Non-critical systems, secondary information reporting, and possibly even remote control interfaces for those systems should follow industry best practices and do their best to stay up to date.

Most likely, many modern pieces of medical technology have not been designed with this isolation between the core critical components that actually do the job and the commodity junk around them that provides convenience for humans.


Maybe you really think it's just as simple as mandating exactly what you wrote here. But I'd imagine you'd agree that even doing this would have real and significant costs, which means tradeoffs are going to have to be made, e.g. some people won't receive medical care they otherwise would.


> some people won't receive medical care they otherwise would.

which is what happens when your whole computing network is remote-killed


The problem is that the technology stack required by modern equipment is too large to be satisfied by anything but a general-purpose OS. Good luck trying to get a mathematically proven OS.


Pretty sure you can build X-ray/MRI control software in Rust on top of seL4, and do lightweight verification (or, even better, hardware breakers of some sort) around issues like "will output lethal doses of radiation". That is a general purpose enough kernel and a general purpose enough programming language, without having to drag in tens of millions of lines of code intended for personal GUI systems... Then for malware issues you simply don't plug the device directly into the internet, nor allow it to run any new code (e.g. your only +X mounted filesystem is a ROM and memory is strictly W^X).
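
As a toy illustration of that kind of lightweight check (all names and limits invented, nothing like real clinical values), in Rust:

    // Every beam command passes through one small, auditable checkpoint
    // before any hardware I/O. A hardware breaker would enforce the same
    // ceiling independently, in case this code is wrong.
    const MAX_DOSE_CGY: u32 = 200; // hypothetical hard ceiling

    #[derive(Debug)]
    struct BeamCommand {
        dose_cgy: u32,
    }

    #[derive(Debug)]
    enum Interlock {
        DoseTooHigh(u32),
    }

    fn check(cmd: &BeamCommand) -> Result<(), Interlock> {
        if cmd.dose_cgy > MAX_DOSE_CGY {
            return Err(Interlock::DoseTooHigh(cmd.dose_cgy));
        }
        Ok(())
    }

    fn main() {
        // A command over the ceiling is rejected, not silently clamped.
        println!("{:?}", check(&BeamCommand { dose_cgy: 500 }));
    }

The point is that the property "never exceeds the ceiling" lives in one tiny function you can actually verify, not scattered across millions of lines of OS.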


Rust has a lot of nice safety features, but the compiler hasn't been formally verified at all.


Yeah, I am aware. The problem is that using, say, CompCert might result in less security in practice, since although the compiler transformations are verified, code written in C is usually more prone to security issues. It also puts the burden of proving memory safety on the developer, which is a requirement for proving nearly anything else. I don't know Rust well enough to know if this applies for sure, but I think it is a lot less to ask from the manufacturer that they produce a proof of the form "assuming this language's memory model holds, we have properties X, Y and Z" and then just hope the compiler is sane, versus requiring a more heavy-weight end to end proof. Also, eventually there might be a mode for certified compilation in Rust/Go, at which point you get the best of both worlds.


This is true, but work is in progress, and some parts of the standard library already have been verified. Some of that work has found bugs, too: https://github.com/rust-lang/rust/pull/41624


https://sel4.systems/ is a formally verified microkernel.


Does this distinction between critical and non-critical systems make sense for medical equipment? Displaying the information to humans (doctors and nurses) is probably life-critical. If the display is broken, the device isn't working.

It's not like medical devices have an entertainment system like cars and airplanes.


The display and the business end of the equipment are critical and should not be network-connected (or even have USB ports, for that matter). The part that uploads to whatever big server should have updates all the time. The critical bit should either be connected to the non-critical bit by a genuinely one-way link (e.g. unidirectional fiber) or should use a very small, very carefully audited stack for communication.

This is all doable, but it adds a bit of BOM cost and changes the development model.
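
For instance, the send-only side can be nearly trivial. A minimal Rust sketch (plain UDP over the one-way medium; the address and message format are invented):

    use std::net::UdpSocket;

    fn main() -> std::io::Result<()> {
        let sock = UdpSocket::bind("0.0.0.0:0")?;
        // The critical side only ever sends; it never calls recv_from.
        // On a true one-way fiber link, no inbound packet can arrive anyway.
        sock.send_to(b"status=ok;scan=complete", "10.0.0.2:9000")?;
        Ok(())
    }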


An alternative would be to expose these subsystems on a network and have strict APIs, encryption, and authentication between them. This would allow you to audit/update components individually rather than the whole device. So your display would act as a networked display and only have a very limited set of functions.
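
As a sketch of how small that function set can be (opcodes invented; the encryption and authentication layer is omitted here), in Rust:

    // The display's entire network-facing API: show a frame, or clear.
    enum DisplayMsg {
        ShowFrame(Vec<u8>),
        Clear,
    }

    fn parse(raw: &[u8]) -> Option<DisplayMsg> {
        let (&op, body) = raw.split_first()?;
        match op {
            0x01 => Some(DisplayMsg::ShowFrame(body.to_vec())),
            0x02 => Some(DisplayMsg::Clear),
            _ => None, // unknown opcodes are dropped, never interpreted
        }
    }

    fn main() {
        // An unknown opcode (0x7f) simply parses to None.
        assert!(parse(&[0x7f, 1, 2, 3]).is_none());
    }

Anything the display cannot express in that enum, it cannot be asked to do.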


Yep. That worked fine for the Iranian uranium centrifuge guys...


Stuxnet jumped the airgap over USB, did it not?


We already have that distinction in our current regulations... some devices need FDA approval, some don't


Which is why bog-standard COTS OSes shouldn't be used for these types of devices. They should use a proper hardened embedded OS that has some form of mandatory access control / capability isolation system.

The long and short of it: don't use standard desktop Windows (or even standard embedded Windows), Linux, or macOS to run these devices.


It's fine to certify devices for certain software, but a device must either be free to maintain and secure, or it must not be connected to a network.

If someone has a computer hooked to an MRI machine and to the hospital network, and it runs outdated/insecure software then someone made a mistake somewhere.


> If someone has a computer hooked to an MRI machine and to the hospital network, and it runs outdated/insecure software then someone made a mistake somewhere.

If you want a system to reach 100%, it can't rely on nobody making mistakes. If all operating systems are supposed to be updated, then that has to be enforced as part of the software: the software shouldn't, for example, accept traffic unless it's up to date.
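
A crude Rust sketch of that enforcement (the 30-day window and the patch timestamp are invented placeholders):

    use std::time::{Duration, SystemTime, UNIX_EPOCH};

    // Hypothetical: stamped into the binary at build/patch time.
    const PATCH_EPOCH_SECS: u64 = 1_489_363_200;
    const MAX_AGE: Duration = Duration::from_secs(30 * 24 * 60 * 60);

    fn patch_level_fresh() -> bool {
        let patched = UNIX_EPOCH + Duration::from_secs(PATCH_EPOCH_SECS);
        match SystemTime::now().duration_since(patched) {
            Ok(age) => age <= MAX_AGE,
            Err(_) => true, // clock skew: patch date appears to be in the future
        }
    }

    fn main() {
        if !patch_level_fresh() {
            eprintln!("patch level too old; refusing network traffic");
            return;
        }
        // ... only now start accepting connections ...
    }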


> Oh, you think it's ridiculous we use IE 7 in compatibility mode?

It's certainly ridiculous if you don't keep it utterly sandboxed and limited to only required use.

Also ridiculous is anyone falling for - or being allowed to fall for - an email-based phishing attack anywhere in the organisation.


Oh come on. They are doctors and nurses, not programmers.


But isn't it part of their job to care for their equipment?

This is a failure of management to properly train their employees.


Anyone disregarding common sense security advice anywhere in any organisation should leave the premises under escort within ten minutes. Would have upped standards everywhere years ago if implemented.


I'm curious, not trying to be smart: 1. Would running Windows 7 in a VM violate the certified software load? 2. Is new device software being written to run in containers or under a hypervisor?

I could understand if 1 would be a violation, but perhaps, after today, the FDA could fast track manufacturer patches to run software loads on VMs?

I don't imagine 2 would solve current infrastructure issues any time soon given the size of investments in current equipment, but could it be a best practice going forward?


Usually you get the system integrated into some panel of the device. It's not the software itself that's certified; it's the device as a whole, with everything running on it, hypervisors included.


Many years ago, in the early days of the War on Terror, there were 'cyberstorm' exercises by the TLAs of the U.S. military, allegedly on some mythical networks that were not 'the internet'.

In 2006 this involved a nice virus that sent all your photos and emails off to people they were not intended to go to. There was a psychological aspect to what was going on with this payload, plus a full spectrum dominance aspect; the media were briefed with the cover story, but I don't think any journalists deliberately infected a machine to see for themselves.

At the same time that this was going on there were some computer virus problems in U.K. hospitals, those same Windows XP machines they have today. The Russian stock market was taken down around this time too.

Suspicious, I tried to put two and two together on this, but with the 'fog of war' you can't prove that correlation equals causation. The timing was uncanny though: a 'cyberstorm' exercise going on at the same time that the BBC News on TV was showing NHS machines taken out by a virus.

https://www.dhs.gov/cyber-storm-i

http://www.telegraph.co.uk/news/uknews/1510781/Computer-bug-...

So that was in 2006, a decade ago. If you'd found a hole in a hospital roof a decade ago, they would have had ample opportunity to fix it. They had a good warning a decade ago; things could have changed, but nothing did.

I had the pleasure of a backroom tour of a police station one night - don't ask why. Luckily I was a 'compliant' person, no trouble at all, allowed to go home with no charges or anything: an almost pleasant experience of wrongful arrest, but still with the fingerprints taken - I think it is their guest book.

Every PC I saw was prehistoric. The PC for taking mugshots was 1990s-vintage, running Windows 95 or 98. I had it explained to me why things were so decrepit.

Imagine if, during the London riots of 2011, the police PC network had been taken down, with all of that police bureaucracy becoming unworkable?! I believe that the police computers are effectively in the same position as the NHS's, with PCs dedicated to one task, e.g. mugshots, and that a takedown of this infrastructure would just ruin everything for the police. Targeting the UK police, getting their computers compromised (with mugshots, fingerprints, whatever), and then asking them to pay $$$ in bitcoin before being locked out for good the next week - that would have made me chuckle with schadenfreude.


Just wait until Congress tries to impeach Trump, people are rioting in the street, various factions fighting each other, and then the shit you just described happens. Maybe the Education Secretary's brother has some resources in contingency for such an event.


In a perfect world there would be market pressure on device manufacturers: those manufacturers who patch devices and ensure the patched versions are recertified would win out over those who do not, in an environment where the expectation is for all these devices to be networked. But of course this requires a competitive market to exist, AND for recertification and patching to be trivial costs. Since they're not, even if a hospital administrator were to price in the risk of losing certifications on all their devices, it's likely that the risk would end up being less expensive than choosing the (potentially non-existent) security-conscious device manufacturer.

Anyone considering disclosure, responsible or not, should be aware of these types of secondary effects. Had these vulnerabilities hypothetically been discovered by a white hat or found their way to a leak-disseminating organization, the discoverers and gatekeepers should consider that not everything can be patched, and the ethical thing to do here would have been to notify Microsoft and wait for a significant product cycle to release technical details. I somehow doubt the Shadow Brokers had that aim, though. And it's saddening that even in the hypothetical case, many people would choose "yay transparency!" over a thoughtful enumeration of consequences.


Seems like the FDA should certify on software tests and not software versions.


Including virus-like attacks and fuzzing.


The computer systems affected by the NHS ransomware incident weren't medical devices; they were patient records servers and emergency receptionist workstations. No excuse for failure to patch.


Don't blame the NHS IT staff. The decision to not pay for XP security updates came from the highest level, the UK Tory government: https://www.benthamsgaze.org/2017/05/13/the-politics-of-the-...


If that's the case, that's fine, but then my question is: why are these computers networked to any external source at all? It seems a natural conflict: you cannot update the system because it is so important that it always works, yet we attach it to an outside network, which carries risk of infection. In my opinion, if the computer HAS to be connected to a network that is accessible from outside, then it MUST be allowed to be updated with the latest antivirus/protection updates.


If it is infeasible to keep certain critical, networked devices up to date, then I propose an alternative solution: those devices should only produce output; they should not read anything at all from their external ports. Their only input should be their physical user interface. Would that work for, say, an X-ray machine, or an MRI?

We saw a fictional example of a scheme like this on Battlestar Galactica. Officers phoned and faxed orders around the ship, using simple devices that did not execute software. The CIC had its data punched in by radar operators, instead of networking with shipwide sensors. It was a lot of work, but it did keep working in the face of a sophisticated, combined malware/saboteur attack.


In theory sure that could work. In practice it would raise healthcare costs even further due to the extra manual labor. So that's not going to happen.


Why are those devices being connected to an insecure network? Surely they should have super limited data exchange features?


As is commonly the case, hardware vendors are more concerned with selling you the hardware and probably spend bottom dollar on their software developers. I can't say that I've worked in such an environment, but my impression is that management at such companies sees software development as a cost centre rather than something to actually spend money on for quality.


But hospital management shouldn't be plugging them into the same network that end users have access to, no?


Surely that's the point of hooking them up to the network, so you can e.g. get the pictures out of your CT scanner on to the doctor's PC?


The doctors' PC can run just fine on an isolated network and doesn't have to be connected to the internet.


No that wouldn't work. Modern healthcare is a team effort, especially for patients with complex conditions. Doctors must be able to collaborate with each other including securely sharing data across the Internet in order to deliver effective patient care. No one is going to give up on that just to prevent a few isolated security incidents.


> securely sharing data

> security incidents


That's the idea behind N3, the NHS's internal network. The idea of a hard shell with a soft centre. With N3 as large as it is, the idea breaks down. Security in depth is required, secure at every level. The hard shell idea is outdated, and N3 is scheduled to be turned off in 2019.


So you propose a separate, isolated network linking all the medical facilities, doctor's offices and private practices nationwide? Even the military doesn't do that for most of their offices.

Also, the doctor's computer pretty much needs to interface with the system(s) that handles patient billing (and thus non-medical companies) and the system(s) that handle patient scheduling, reminders, etc.


> patient billing

Not really an issue in the NHS, apart from the occasional non-resident foreign national.

(The "fundholding" system does mean there's a certain amount of internal billing which the patient is never aware of, but the beating Bevinist heart of the free-at-point-of-use system is still in place)


Free-at-point-of-use processes tend to be ones that require integration with a billing service, namely to send information about the performed procedures to whatever system is paying for them, no matter if it's some state agency, private insurance, or whatever else - that's what I meant by non-medical companies that would need to be on the network.

A private practice where everything is paid by the patient in full by cash or CC could do without any integration with external systems (just run a standard cash register), but as soon as someone else is paying for it, you generally need to link the doctor's office systems to that in some way.


Until that doctor needs to submit patient info to a study, look up an obscure symptom, talk with others in the medical community, etc.


If it has an Ethernet port, someone will plug an Ethernet cable into it. The problem is not so much that the users are idiots; the problem is that people get distracted some of the time and make mistakes some of the time.

And yes, surely they should have super limited network features. The important word is "should."


Many of the computerized medical devices are diagnostic, so being able to send digital data to doctors quickly and easily over the internet is a key part of their functionality. Also, the other way around - being able to get patient data to the device without manually re-entering them, which is costly and error-prone and thus dangerous.


In the article, it mentioned that patient records servers and patient inprocessing workstations were affected. No mention was made about medical devices.

Workstations absolutely should be patched with security updates. Running an intranet-wide update server is non-trivial, but is well within the reach of a competent sysadmin. And failing to do it is negligent.


Those are not the systems involved in these attacks; the NHS systems compromised were the workstations used by doctors to access patient records and the Samba servers storing those records.


I supported a "cashless gaming" server for years which had the exact same contract. One Windows Update and I couldn't even get a failed disk replaced.


Well, I suspect that such devices should not be connected. I was on dialysis at a clinic in one of the affected trusts, and boy am I glad that my hemodialysis machine was not connected to the network.


They had better at least honor their warranty and replace all this hosed equipment.


Karen Sandler has an interesting story about medical devices and how they are, literally, putting her life on the line. She's both a lawyer and a hacker, and you should hear the stories she tells about how people distrust her for this and think she's trying to trick them when all she wants to do is learn about the software and hardware that is keeping her alive:

https://www.youtube.com/watch?v=iMKHqO28FcI


Wow, that video deserves more views than it has. Really interesting and relevant for a two year old talk.


If you run a large installation of computers, taking updates can be a huge risk. Often they can break things, and then you're in the position of being blamed for running an update. Not updating can often lead to much higher stability.

In previous environments I've worked in that were "regulated", any change to the environment, such as a firmware upgrade, triggered an entire re-regulation process (testing, paperwork, etc.).


That's wrong. If you run a large installation of computers, and you do not have a plan and a process for quickly deploying security patches, you should be fired with cause.

In this specific case, there are mitigations available that do not require installation of software, merely a configuration change. Also in this specific case, the people who run IT at the NHS are completely incompetent, and this has been well documented for several years.

In the general case, "I have a lot of machines" is an excuse provided by the unable to evade being held responsible by the uninformed.


Easy for you to say. I have been unable to do my job for several days because some update broke a service I was using. Sure, the service was badly written, but we didn't know that until the patch was applied.

The phone company used to have (they still might; I'm not in the business anymore) large labs that were small replications of their network. I've been in meetings where the goal was to decide whether we should try to get our latest release through their process - if yes and we were successful, they would pay big for the latest features, but if yes and it failed... [I can't remember; I think we had to pay them for test time, but the contract was complex.] A lot of time was spent looking at every known bug to decide if it was important.


Was the update a Microsoft Security Update?


> Also in this specific case, the people who run IT at the NHS are completely incompetent

That's what you get when you defund critical services.


Funding is necessary but not the determining factor. There are just as many incompetent IT admins in well funded private companies earning top pay. Sensible and aware top management is far more critical.


If your job is to keep a bunch of computers working, keeping the systems running is the goal. Deploying security patches quickly is not always considered a requirement.

Again, the problem is that rolling out patches quickly often leads to unplanned problems that can't be easily detected or rolled back from. That can cause problems worse than leaving security issues unpatched.


If your systems are exposed to the Internet, then deploying security patches quickly is a part of keeping the systems running - as illustrated by this case, where the systems obviously are not running and can't be easily rolled back to a working state.

The business of cybercrime is changing. With the growing popularity of ransomware, we should expect a gradual decrease in the time between a published remote vulnerability and your systems getting attacked. It may be useful to delay patches by a day to see if others encounter any glaring problems - but that's not a reason to leave open holes that were patched in March. Frankly, there was no good reason why this attack didn't happen a month ago; next time the gap may be much smaller.

Yes, there is a chance that installing a security update will break your systems. But there's also a chance that not installing a security update will break your systems, and that chance, frankly, is much higher.

Furthermore, "That can cause problems worse than leaving security issues unpatched" seems trivially untrue. Every horrible thing that might happen because of a patch broken in a weird way may also happen because of an unpatched security issue. Leaving security issues unpatched can take down all your systems and data, plus also expose confidential information. A MS patch, on the other hand, assuming that it's tested in any way whatsoever, won't do that - at most, it will take down some of your systems, which is bad, but not as bad as e.g. Spain's Telefonica is experiencing right now. What patch could have caused them even worse problems?


When you say 'the people who run IT at the NHS', are you aware that, thanks to recent governments' attempts to break up central structures, each hospital trust and each GP surgery is likely to have someone different handling IT? Market forces are good, etc.


Do you know this for a fact?


Downloading Microsoft security updates is simple and safe.

You just download the monthly rollup: http://www.catalog.update.microsoft.com/search.aspx?q=401221...

Any competent sysadmin will have these available on their internal update server and push updates+restart during off-peak hours.

Receptionist computers that can open websites with untrusted JavaScript can't reasonably be held to this certification. Certification isn't what kept the NHS from applying patches.


Some vertical markets use a lot of software that integrates with Microsoft Office applications. The result is that there is a much higher chance of a Microsoft update breaking a critical application. [0] is a recent (September 2015) example of two Microsoft patches that were widely blocked in the legal industry until Microsoft released a follow-up patch. iManage and Workshare, the products mentioned in the blog entry, are considered critical applications in any law firm that uses them. iManage is a widely used document management system (think primitive VCS with Office add-ins). All documents are stored in the DMS, so access to it is critical to the business. Workshare is used for document comparison and metadata scrubbing. Metadata scrubbing is used on all outgoing emails.

[0] http://www.kraftkennedy.com/block-2-microsoft-patches-preven...


I'd like to hear r/sysadmin's opinion on that.


Translation: "My feelings make me feel that the statement isn't right. Instead of finding out, I'm just going to say that I wish someone would tell this commenter they're wrong."


Translation: "Microsoft has lied about the content of their updates before."


> If you run a large installation of computers, taking updates can be a huge risk. Often they can break things, and then you're in the position of being blamed for running an update. Not updating can often lead to much higher stability.

There is such a thing as staged rollouts for this exact type of scenario.


Well this justifies MS's decision for forced updates in Win10. Not that I like it, just saying.


So your workstation is next to a bed and is attached to a machine which feeds a drip to keep a little girl alive and it gets your untested patch or whole OS upgrade and the dosage is increased or the driver stops and the patient dies.

Only non-critical machines can just automatically apply software patches from Redmond (or anybody). This is not laziness or incompetence - only a few weeks ago, military-grade exploits from the USG were leaked onto the internet, and they are currently being repurposed for non-spying applications. Does anyone think any organisation is prepared for this? Chinese chatter indicates that MS17-010 doesn't even fix all cases! Many organisations will have been saved by infra guys making sure MS17-010 was rushed through and that McAfee sigs were updated 'just because'.

edit: fixed CVE (Eternalblue)


Machines like that, which cannot shoulder the risk of applying updates to a network-connected general purpose OS designed to run third party (potentially malicious) code on a non-deterministic non-realtime system... probably should not be using such a system. Patching is risky, not patching is risky.

They should have formally validated software running on formally validated deterministic realtime hardware, in non-networked environments (but with telemetry and remote control from networked computers, if that's convenient). We just don't bother because it's cheaper, and legal, to get away with selling hacky nonsense.


I agree. A mission-critical MRI machine should not be running an off-the-shelf OS (Windows, Mac, Linux). If you're paying $5 million for a machine, it had better have its own real-time operating system that has been independently audited.

Now, the machine that you pull the images up on is most likely going to be a general-purpose PC/Mac. You still need to patch that. Your IT dept needs patch cycles that deploy in sets, so all mission-critical equipment can be tested before everything gets patched. It takes diligence and planning. If you prepare, then at a very large hospital with two MRI machines a bad patch can leave you degraded, but not totally offline.


Custom operating systems would require higher development costs and extremely rare sysadmin skills, which would mean larger hospital budgets, which would mean higher taxes or premiums.

Yeah, not gonna happen.


So your workstation is next to a bed and is attached to a machine which feeds a drip to keep a little girl alive and it gets hit by a worm like this one, stops working and the patient dies.

As long as the chance of cyberattacks is larger than the chance of horrible patches, you simply accept the risk of horrible patches and install them anyway. Or keep the system totally isolated from everything, if it's that critical.


The IV drip machine is not plugged into the CoW (Computer on Wheels). That workstation is running a version of enterprise Windows primarily to allow the medical professional to view and update patient records.

The IV drip machine is plugged into the wall, and is operated by buttons on the front.


In reality, a huge number of modern IV drip machines plug into the wall for power and get their network connectivity via 802.11. This is to allow remote configuration and status monitoring.


Are the IV drip machines running Windows CE or XP embedded? Was there a news report that claimed that IV drip machines were affected by malware?


Your hypothetical situation distracts from the actual issue. The ransomware infected NHS patient records servers and receptionist workstations, according to the article.


> Well this justifies MS's decision for forced updates in Win10. Not that I like it, just saying.

Unfortunately, I think the active hours period cannot be set to more than twelve hours, which is less than the time required for some surgical interventions. I can almost imagine it: OK everyone, ten-minute break while Windows installs its updates, this guy who's been on life support for the last ten hours can wait a little longer.

That's why updates are not forced on business-grade installs, and forcing them would be a very, very stupid decision.

Forced updates make sense for home users, since Microsoft can't depend on someone requiring them to keep their networks secure. For other types of users, second-guessing update policies is always a bad idea.


Windows is not a real time OS. Neither is Linux (except for perhaps a limited number of forks/distros).

If someone is going to die if a computer stops working for any reason at all, it should not be running Windows, or Linux, or macOS. It certainly shouldn't be connected to the internet or to any other network.

If we build computers as nice-to-have, mixed-use machines with all the bells and whistles, then we need to treat them like nice-to-haves and not need-to-haves.


Surgeries are scheduled in advance except for the most urgent procedures; most surgeons and surgical nurses don't work on weekends.

Surgeon workstations can absolutely be restarted once per month to install the monthly roll-up.

The article mentions patient records servers and receptionist computers being affected by the ransomware. Not life support equipment.


> Surgeon workstations can absolutely be restarted once per month to install the monthly roll-up.

I was replying to the part about forcing updates. I didn't know about the group policy setting (rightfully pointed out by sp332); without it, you don't wait a month, you wait at most 12 hours :-).


Win10 Pro is more flexible, although you might have to drop down to Group Policy to do it. http://pureinfotech.com/defer-windows-10-upgrades-updates/ At the very least, the workstations can be pointed to internal WSUS servers which control the rollouts. I'm guessing that's how most of the currently-vulnerable computers stayed vulnerable until now.


I stand corrected, I didn't know about that group policy setting.


Keep in mind that on business grade installs the updates are not forced.

And there have already been situations where updates have caused problems. Maybe not as severe as a full-on attack, but enough to potentially disrupt production and thus risk someone's job.


The infection I'm dealing with happened from a fully patched Win 10 Pro machine running Windows Defender and Outlook 2013 (32-bit). It already had authenticated access to the files on the server it encrypted.

There was a Windows script file on the desktop, something like "UPS tracker.js", but it disappeared before I could grab it and a free space recovery didn't return it. (Possibly due to TRIM, it was on an SSD workstation.)


Not forced if you are using Windows 10 Enterprise

https://docs.microsoft.com/en-us/windows/deployment/update/w...


It justifies security updates for all operating systems. It does not justify the installation of spyware or changes to the user interface.


Sadly this distinction is rarely made and, IMO, intentionally kept ambiguous. Lovejoy's law is used to justify spyware, bloatware, and crapware.


Not all patches require reboots on many OSes; some OSes even apply kernel patches live. :) They could have taken a more user-friendly approach. I understand they were boxed in a little technically, but they built the box.


> It sounds like the basic (?) security practices recommended by professionals

The problem with Windows is that it cannot be upgraded without stopping your workflow and rebooting.

With Linux distros you can upgrade packages in the background (I think that's because Linux lets you replace a file while it is being executed, while Windows doesn't, but I'm not sure) without even rebooting after the upgrade. You can even patch your kernel without a reboot.

In Windows you have to stare at an upgrade screen for an hour with no opportunity to do anything useful, and after that you have to reboot. That sucks.
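
The Linux half of that is the "replace the file, the running process keeps the old inode" behaviour. A minimal Rust sketch of how an upgrade swaps in a new binary (paths invented):

    use std::fs;

    fn main() -> std::io::Result<()> {
        // Atomically replace the binary. Any process already executing
        // /opt/app/bin/server keeps running from the old inode, which
        // lives on until its last open handle goes away. On Windows,
        // overwriting a running executable fails because the image file
        // is locked.
        fs::rename("/opt/app/bin/server.new", "/opt/app/bin/server")?;
        Ok(())
    }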


Functionality is always second to security. In this situation you patch all the machines and then test which still work. If machines at the hospital don't work because some software was incompatible with the vulnerability fix (which is almost unthinkable in most cases), then those computers are simply unavailable, and surgeries are cancelled or whatever the impact may be.


I'm sure this sort of stuff doesn't help with speedy updates.

https://arstechnica.com/tech-policy/2017/03/public-universit...


I saw this at work from the inside of a big telco that did this. They replaced one guy, who needed only vague instructions to configure complex software, with a team of five who needed detailed step-by-step manuals written out by the vendor, and who still took twice as long and couldn't cope with any hiccups along the way.

I do not believe outsourcing saves money. It only does so either by cutting quality of service, or in cases where the IT department was heavily mismanaged anyway. Bring in capable management and you don't need to outsource.


I've never seen a case where outsourcing general-purpose IT saves money over the long term. It might make the budget look better for a year or two, which, I think, is the motivation for a lot of the people making the decision to outsource: it is cheaper right now, so who cares about later?

Special-purpose stuff can still be cheaper to outsource, though. If I need something to work next week and it would take my staff a month to get up to speed, I'd spend the money on outsourcing it.



