
> It sounds like the basic (?) security practices recommended by professionals - keep systems up-to-date, pay attention to whether an email is suspicious - would have covered your network.

This is secondhand information (so take it for what it's worth; there could be pieces I'm missing), but I talked with a startup focused on this problem, and the issue was not so much the computers and servers that IT was using (although sometimes it was) as the fact that many medical devices (CT scanners, pumps, etc.) ship with old, outdated versions of operating systems and libraries.

No big deal, right? Just make sure those are up to date too? Well, many times the support contracts for these medical devices are so strict that you can invalidate the warranty by installing third-party software like an antivirus, or even by doing something like running Windows Update.

Even worse, many hospitals don't even know what devices they have -- it's easy for IT to know about laptops and computers, but when every medical device more complicated than a stethoscope has a chip in it and may respond to calls on certain ports, the full picture is much harder to pin down.

The startup was https://www.virtalabs.com/ by the way, they really are doing some cool things to help with this.




In defense of these medical devices, that is actually an FDA requirement. The entire system is certified to work as a whole combination, and even one patch for a security vulnerability leaves open the possibility that the patch breaks something and people die! Of course, it goes without saying that you need to ensure a virus cannot run on the machine by some other means. If these machines can get infected, they automatically lose certification and cannot be used for medical purposes.


In offense of these medical devices, they should never have been running Windows or any general-purpose OS in the first place! It's a lot easier to guarantee security if the entire thing is a well-tested 10-50 KLOC Rust daemon on top of seL4. I am not even asking them to do formal verification themselves, just a small trusted base and reasonable secure coding practices. I mean, come on, a critical medical device running the entirety of Windows XP (or, say, Ubuntu with Apache, an X server and GNOME[1]) should be considered actual negligence. The FDA should make it outright impossible to certify such a contraption.

Basically, the rule should be: if you are using general purpose consumer software, then you should be doing updates; if you are in an environment where updates are considered too risky, then running commodity software should also be considered too risky and you should be building very small locked down systems instead. Ideally without a direct internet connection (they can always connect through an updatable system that can't actually cause the medical device to malfunction, but can be reasonably protected against outgoing malware as well).

[1] I would be ok with some of these devices running a stripped down Linux (or NT) kernel, just not a full desktop OS. If you need a fancy UI, then that can be in an external (hopefully wired, not IoT) component that can be updated.


The FDA does not forbid the use of a general-purpose OS. However, they are strictly regulated. For every SOUP -- software of unknown provenance/pedigree, that is, every piece of software that was not developed specifically for a medical device -- it is the manufacturer's responsibility to provide performance requirements, tests, risk analyses...

Moreover, the manufacturer has the obligation to assess every known bug of every SOUP and provide fixes if it can endanger the patient.

The issue is that proving a device is safe requires costly tests. For a device I have been working on, we do endurance tests on multiple systems to simulate 10 years of use. Even with intensive scenarios running on multiple systems, it can take a few months. And if we encounter a single crash, we reset the counter and start again. So in the end the product is safe, but it is costly. This is why most of the time it is actually better to have the simplest stack possible on bare metal. But sometimes mistakes have been made, and you inherit a system full of SOUP, and that is a nightmare to maintain.

I actually expect some shitstorm on Monday morning. Luckily I am working more on the embedded side, so no Windows for me, but some other divisions will be affected.


> In offense of these medical devices, they should never have been running Windows or any general purpose OS in the first place!

Except that people don't want to learn a new GUI for every machine...

Except that people want to be able to use a tablet for the interface...

Except that people want to control things from their phone...

Here's the reality: the end user doesn't give one iota of a DAMN about security. People want to control their pacemaker or insulin pump from their PHONE. Ay yai yai.

Even worse: can your security get in the way when someone is trying to save a life? That's going to send you to court.


Most of these don't apply in the context of medical devices. Sure, you can find some that will give you access to the usual OS desktop, but largely they're integrated and have a full-screen, completely customised interface.


The devices themselves should not run Windows. You should separate the two: one machine for the device, one for the user. The user machine (a full-blown Windows, if that's what you want) you can security-update all you want.


Of course, such devices can put their code in ROM, and so any malware would not survive a reboot.


Sure, but then, you also need strict W^X memory protections, without exceptions (kernel included), since malware in memory of a device that doesn't often reboot is dangerous enough. For example, the very best malware for network devices never writes itself to disk even if possible, in order to avoid showing up in forensics. This already precludes most general purpose OSes and is still technically vulnerable to some convoluted return-to-X attacks that just swap data structure pointers around and use existing code to construct the malicious behavior, so I'd still feel better with a minimal trusted base even then.
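To make the W^X point concrete, here's a minimal sketch (my own illustration, assuming Linux; not from any real device firmware) that audits its own address space for mappings that are simultaneously writable and executable. Real enforcement has to live in the kernel/MMU; this only detects violations after the fact:

    // Scan /proc/self/maps and flag any mapping that is both writable
    // and executable, i.e. a W^X violation in our own address space.
    use std::fs;

    fn find_wx_violations() -> Vec<String> {
        let maps = fs::read_to_string("/proc/self/maps")
            .expect("could not read /proc/self/maps");
        maps.lines()
            // The second whitespace-separated field holds the rwxp flags.
            .filter(|line| {
                line.split_whitespace()
                    .nth(1)
                    .map_or(false, |perms| perms.contains('w') && perms.contains('x'))
            })
            .map(str::to_owned)
            .collect()
    }

    fn main() {
        let violations = find_wx_violations();
        if violations.is_empty() {
            println!("no writable+executable mappings found");
        } else {
            for v in &violations {
                println!("W^X violation: {}", v);
            }
        }
    }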


This, 100x. I know it's extremely easy to Monday-morning-quarterback hospital IT, but it's not as simple as people think. There are legal and, far more importantly, medical implications to updating software at a hospital. Oh, you think it's ridiculous we use IE 7 in compatibility mode? It's because our mission-critical EMR only works in that (well, it really works in everything, but it's only certified in 7), and if we use anything but the certified software load to access it, the vendor puts all blame on us.


Yes, it actually is.

Life critical systems should be small, fully open stack, fully audited, and mathematically proven to be correct.

Non-critical systems, secondary information reporting, and possibly even remote control interfaces for those systems should follow industry best practices and try to do their best to stay up to date and updated.

Most likely many modern pieces of medical technology have not been designed with this isolation between the core critical components that actually do the job and the commodity junk around them that provides convenience for humans.


Maybe you really think it's just as simple as mandating exactly what you wrote here. But I'd imagine you'd agree that even doing this would have real and significant costs, which means tradeoffs are going to have to be made, e.g. some people won't receive medical care they otherwise would.


> some people won't receive medical care they otherwise would.

which is what happens when your whole computing network is remote-killed


The problem is that the technology stack required by modern equipment is too large to be satisfied by anything but a general-purpose OS. Good luck trying to get a mathematically proven OS.


Pretty sure you can build X-ray/MRI control software in Rust on top of seL4, and do lightweight verification (or, even better, hardware breakers of some sort) around issues like "will output lethal doses of radiation". That is a general-purpose enough kernel and a general-purpose enough programming language, without having to drag in tens of millions of lines of code intended for personal GUI systems... Then for malware issues you simply don't plug the device directly into the internet, nor allow it to run any new code (e.g. your only +X mounted filesystem is a ROM and memory is strictly W^X).
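To illustrate the "lightweight verification around lethal doses" part: a hypothetical Rust sketch (the limit, units and names are all invented for illustration), a single tiny gate that every dose command must pass through, small enough that an auditor can read it end to end:

    const MAX_DOSE_MGY: f64 = 50.0; // hard ceiling, illustrative number in milligray

    #[derive(Debug)]
    enum DoseError {
        Invalid, // NaN, infinite, or negative
        ExceedsLimit { requested: f64, limit: f64 },
    }

    // Every dose request must pass this gate before reaching the beam controller.
    fn validate_dose(requested_mgy: f64) -> Result<f64, DoseError> {
        if !requested_mgy.is_finite() || requested_mgy < 0.0 {
            return Err(DoseError::Invalid);
        }
        if requested_mgy > MAX_DOSE_MGY {
            return Err(DoseError::ExceedsLimit {
                requested: requested_mgy,
                limit: MAX_DOSE_MGY,
            });
        }
        Ok(requested_mgy)
    }

    fn main() {
        for req in [10.0, 500.0] {
            match validate_dose(req) {
                Ok(d) => println!("dose {} mGy accepted", d),
                Err(e) => println!("dose {} mGy refused: {:?}", req, e),
            }
        }
    }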


Rust has a lot of nice safety features, but the compiler hasn't been formally verified at all.


Yeah, I am aware. The problem is that using, say, CompCert might result in less security in practice: although the compiler transformations are verified, code written in C is usually more prone to security issues, and it puts the burden of proving memory safety on the developer, which is a prerequisite for proving nearly anything else. I don't know Rust well enough to know if this applies for sure, but I think it is a lot less to ask of a manufacturer that they produce a proof of the form "assuming this language's memory model holds, we have properties X, Y and Z" and then just hope the compiler is sane, versus requiring a more heavyweight end-to-end proof. Also, eventually there might be a mode for certified compilation in Rust/Go, at which point you get the best of both worlds.


This is true, but work is in progress, and some parts of the standard library already have been. And some of that work has found bugs too: https://github.com/rust-lang/rust/pull/41624


https://sel4.systems/ is a formally verified microkernel.


Does this distinction between critical and non-critical systems make sense for medical equipment? Displaying the information to humans (doctors and nurses) is probably life-critical. If the display is broken, it's not working.

It's not like medical devices have an entertainment system like cars and airplanes.


The display and the business end of the equipment are critical and should not be network-connected (or even have USB ports, for that matter). The part that uploads to whatever big server should have updates all the time. The critical bit should either be connected to the non-critical bit by a genuinely one-way link (e.g. unidirectional fiber) or should use a very small, very carefully audited stack for communication.
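A hedged sketch of what the software side of a one-way link can look like (UDP, with a made-up collector address): the critical side only ever sends and never calls recv, so it exposes no inbound parsing surface at all. A real data diode would enforce the direction in hardware, e.g. with unidirectional fiber:

    // Fire-and-forget telemetry from the critical side. No reads, no ACKs.
    use std::net::UdpSocket;

    fn main() -> std::io::Result<()> {
        let sock = UdpSocket::bind("0.0.0.0:0")?; // ephemeral local port
        // Hypothetical collector on the non-critical network segment.
        sock.connect("192.0.2.10:9000")?;

        // Send a reading and never listen for a response.
        let reading = b"pump_rate_ml_h=42";
        sock.send(reading)?;
        Ok(())
    }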

This is all doable, but it adds a bit of BOM cost and changes the development model.


An alternative would be to expose these subsystems on a network and have strict APIs, encryption, and authentication between them. This would allow you to audit/update components individually rather than the whole device. So your display would act as a networked display and only have a very limited set of functions.
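As a hypothetical sketch of that "very limited set of functions" (command names and limits are invented; a real deployment would still need authentication and encryption underneath), the display's whole command surface can be two verbs, with everything else rejected:

    // The display accepts exactly two line-oriented commands; anything
    // else, malformed or unknown, is refused.
    fn handle_command(line: &str) -> Result<String, String> {
        let mut parts = line.trim().splitn(2, ' ');
        match (parts.next(), parts.next()) {
            (Some("SHOW_TEXT"), Some(msg)) if msg.len() <= 128 => {
                Ok(format!("displaying: {}", msg))
            }
            (Some("CLEAR"), None) => Ok("display cleared".to_string()),
            _ => Err(format!("rejected unknown or malformed command: {:?}", line)),
        }
    }

    fn main() {
        for input in ["SHOW_TEXT BP 120/80", "CLEAR", "EXEC rm -rf /"] {
            match handle_command(input) {
                Ok(msg) => println!("{}", msg),
                Err(e) => eprintln!("{}", e),
            }
        }
    }

Auditing that component then means reading a dozen lines rather than an entire OS.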


Yep. That worked fine for the Iranian uranium centrifuge guys...


Stuxnet jumped the airgap over USB, did it not?


We already have that distinction in our current regulations... some devices need FDA approval, some don't


Which is why a bog-standard COTS OS shouldn't be used for these types of devices. They should use a proper hardened embedded OS that has some form of mandatory access control / capability isolation system.

The long and short of it: don't use standard desktop Windows (or even standard embedded Windows), Linux, or macOS to run these devices.


It's fine to certify devices for certain software, but a device must either be free to be maintained and secured, or it must not be connected to a network.

If someone has a computer hooked to an MRI machine and to the hospital network, and it runs outdated/insecure software then someone made a mistake somewhere.


> If someone has a computer hooked to an MRI machine and to the hospital network, and it runs outdated/insecure software then someone made a mistake somewhere.

If you want a system to reach 100% it can't rely on not making mistakes. If all operating systems are supposed to be updated, then this has to be enforced as part of the software. The software e.g. shouldn't accept traffic unless it's up to date.
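A sketch of what such enforcement could look like (the version source and numbers are hypothetical stand-ins; a real check would query the OS or package manager rather than a constant): the service refuses to even bind its listening socket when the local patch level is older than the required baseline.

    use std::net::TcpListener;

    // Minimum acceptable patch level, encoded as YYYYMMDD (illustrative).
    const REQUIRED_PATCH_LEVEL: u32 = 2017_05_12;

    // Stand-in: a real system would query the OS or package manager here.
    fn installed_patch_level() -> u32 {
        2017_03_01
    }

    fn main() {
        let installed = installed_patch_level();
        if installed < REQUIRED_PATCH_LEVEL {
            eprintln!(
                "patch level {} is older than required {}; refusing to serve",
                installed, REQUIRED_PATCH_LEVEL
            );
            std::process::exit(1);
        }
        // Only an up-to-date system ever starts accepting traffic.
        let _listener = TcpListener::bind("0.0.0.0:8443").expect("bind failed");
        println!("up to date; accepting connections");
    }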


> Oh, you think it's ridiculous we use IE 7 in compatibility mode?

It's certainly ridiculous if you don't keep it utterly sandboxed and limited to only required use.

Also ridiculous is anyone falling for - or being allowed to fall for - a mail-based phishing attack anywhere in the organisation.


Oh come on. They are doctors and nurses, not programmers.


But isn't it part of their job to care for their equipment?

This is a failure of management to properly train their employees.


Anyone disregarding common sense security advice anywhere in any organisation should leave the premises under escort within ten minutes. Would have upped standards everywhere years ago if implemented.


I'm curious, not trying to be smart: 1. Would running Windows 7 in a VM violate the certified software load? 2. Is new device software being written to run in containers or at the hypervisor level?

I could understand if 1 would be a violation, but perhaps, after today, the FDA could fast track manufacturer patches to run software loads on VMs?

I don't imagine 2 would solve current infrastructure issues any time soon given the size of investments in current equipment, but could it be a best practice going forward?


Usually you get the system integrated into some panel of the device. It's not the software itself that's certified. It's the device as a whole with everything running on it, hypervisors included.


Many years ago, in the early days of the War on Terror, there were 'cyberstorm' exercises by the TLAs of the U.S. military, allegedly on some mythical networks that were not 'the internet'.

In 2006 this involved a nice virus that sent all your photos and emails off to people they were not intended for. There was a psychological aspect to what was going on with this payload, plus a full-spectrum-dominance aspect; the media were briefed with the cover story, but I don't think any journalists deliberately infected a machine to see for themselves.

At the same time this was going on, there were computer virus problems in U.K. hospitals, on those same Windows XP machines they have today. The Russian stock market was taken down around this time too.

Suspicious, I tried to put two and two together on this, but with the 'fog of war' you can't prove that correlation = causation. The timing was uncanny though: a 'cyberstorm' exercise going on at the same time that the BBC News on TV was showing NHS machines taken out by a virus.

https://www.dhs.gov/cyber-storm-i

http://www.telegraph.co.uk/news/uknews/1510781/Computer-bug-...

So that was in 2006, a decade ago. If you had found a hole in a hospital roof a decade ago, they would have had ample opportunity to fix it. They had a good warning a decade ago; things could have changed, but nothing did.

I had the pleasure of a backroom tour of a police station one night (don't ask why). Luckily I was a 'compliant' person: no trouble at all, allowed to go home with no charges or anything. An almost pleasant experience of wrongful arrest, but still with the fingerprints taken -- I think it is their guest book.

Every PC I saw was prehistoric. The PC for taking mugshots was early-1990s vintage, running Windows 95 or 98. I had it explained to me why things were so decrepit.

Imagine if, during the London riots of 2011, the PCs' PC network had been taken down, with all of that police bureaucracy becoming unworkable?!? I believe the police computers are effectively in the same position as the NHS, with PCs dedicated to one task, e.g. mugshots, so a takedown of this infrastructure would just ruin everything for the police. Targeting the UK police, getting their computers compromised (with mugshots, fingerprints, whatever), and then asking the police to pay $$$ in bitcoin before being locked out for good the next week -- that would have made me chuckle with schadenfreude.


Just wait until Congress tries to impeach Trump, people are rioting in the street, various factions fighting each other, and then the shit you just described happens. Maybe the Education Secretary's brother has some resources in contingency for such an event.


In a perfect world there would be market pressure on device manufacturers: those who patch devices and ensure the patched versions are recertified would win out over those who do not, in an environment where the expectation is for all these devices to be networked! But of course this requires a competitive market to exist, AND for recertification and patching to be trivial costs. Since they're not, even if a hospital administrator were to price in the risk of losing certifications on all their devices, that risk would likely end up being less expensive than choosing the (potentially non-existent) security-conscious device manufacturer.

Anyone considering disclosure, responsible or not, should be aware of these types of secondary effects. Had these vulnerabilities hypothetically been discovered by a white hat, or found their way to a leak-disseminating organization, the discoverers and gatekeepers should have considered that not everything can be patched; the ethical thing to do would have been to notify Microsoft and wait for a significant product cycle before releasing technical details. I somehow doubt the Shadow Brokers had that aim, though. And it's saddening that even in the hypothetical case, many people would choose "yay transparency!" over a thoughtful enumeration of consequences.


Seems like the FDA should certify on software tests and not software versions.


Including virus-like attacks and fuzzing.


The computer systems affected by the NHS ransomware incident weren't medical devices; they were patient records servers and emergency receptionist workstations. No excuse for failure to patch.


Don't blame the NHS IT staff. The decision to not pay for XP security updates came from the highest level, the UK Tory government: https://www.benthamsgaze.org/2017/05/13/the-politics-of-the-...


If that's the case, that's fine, but then my question is: why are these computers networked to any extranet source? It seems a natural conflict: you cannot update the system because it is so important that it always works, yet we need to attach it to an outside network, which risks infection. In my opinion, if the computer HAS to be connected to a network that is accessible from outside, then it MUST be allowed to be updated with the latest antivirus/protection updates.


If it is infeasible to keep certain critical, networked devices up to date, then I propose an alternative solution: those devices should only produce output; they should not read anything at all from their external ports. Their only input should be their physical user interface. Would that work for, say, an X-ray machine, or an MRI?

We saw a fictional example of a scheme like this on Battlestar Galactica. Officers phoned and faxed orders around the ship, using simple devices that did not execute software. The CIC had its data punched in by radar operators, instead of networking with shipwide sensors. It was a lot of work, but it did keep working in the face of a sophisticated, combined malware/saboteur attack.


In theory sure that could work. In practice it would raise healthcare costs even further due to the extra manual labor. So that's not going to happen.


Why are those devices being connected to an insecure network? Surely they should have super-limited data exchange features?


As is commonly the case, hardware vendors are more concerned with selling you the hardware and probably pay bottom dollar for their software developers. I can't say I've worked in such an environment, but my impression is that management at such companies sees software dev as a cost centre rather than something to actually spend money on for quality.


But hospital management shouldn't be plugging them into the same network where end-users have access, no?


Surely that's the point of hooking them up to the network, so you can e.g. get the pictures out of your CT scanner on to the doctor's PC?


The doctors' PC can run just fine on an isolated network and doesn't have to be connected to the internet.


No that wouldn't work. Modern healthcare is a team effort, especially for patients with complex conditions. Doctors must be able to collaborate with each other including securely sharing data across the Internet in order to deliver effective patient care. No one is going to give up on that just to prevent a few isolated security incidents.


> securely sharing data

> security incidents


That's the idea behind N3, the NHS's internal network: a hard shell with a soft centre. With N3 as large as it is, the idea breaks down. Security in depth is required, secure at every level. The hard-shell idea is outdated, and N3 is scheduled to be turned off in 2019.


So you propose a separate, isolated network linking all the medical facilities, doctor's offices and private practices nationwide? Even the military doesn't do that for most of their offices.

Also, the doctor's computer pretty much needs to interface with the system(s) that handles patient billing (and thus non-medical companies) and the system(s) that handle patient scheduling, reminders, etc.


> patient billing

Not really an issue in the NHS, apart from the occasional non-resident foreign national.

(The "fundholding" system does mean there's a certain amount of internal billing which the patient is never aware of, but the beating Bevinist heart of the free-at-point-of-use system is still in place)


Free-at-point-of-use processes tend to be ones that require integration with a billing service, namely to send information about the performed procedures to whatever system is paying for them, whether that's some state agency, private insurance, or whatever else. That's what I meant by non-medical companies that would need to be on the network.

A private practice where everything is paid by the patient in full by cash or CC could do without any integration with external systems (just run a standard cash register), but as soon as someone else is paying for it, you generally need to link the doctor's office systems to that in some way.


Until that doctor needs to submit patient info to a study, look up obscure symptoms, talk with others in the medical community, etc.


It has an ethernet port; someone will plug an ethernet cable into it. The problem is not so much that the users are idiots; the problem is that people get distracted some of the time and make mistakes some of the time.

And yes, surely they should have super limited network features. The important word is "should."


Many of the computerized medical devices are diagnostic, so being able to send digital data to doctors quickly and easily over the internet is a key part of their functionality. Also, the other way around - being able to get patient data to the device without manually re-entering them, which is costly and error-prone and thus dangerous.


The article mentioned that patient records servers and patient in-processing workstations were affected. No mention was made of medical devices.

Workstations absolutely should be patched with security updates. Running an intranet-wide update server is non-trivial, but is well within the reach of a competent sysadmin. And failing to do it is negligent.


Those are not the systems involved in these attacks; the NHS systems compromised were the workstations used by doctors to access patient records and the Samba servers storing those records.


I supported a "cashless gaming" server for years which had the exact same contract. One Windows Update and I couldn't even get a failed disk replaced.


Well, I suspect that such devices should not be connected. I was on dialysis at a clinic in one of the affected trusts, and boy am I glad that my haemodialysis machine was not connected to the network.


They had better at least honor their warranty and replace all this hosed equipment.



