Cyberattacks in 12 Nations Said to Use Leaked N.S.A. Hacking Tool (nytimes.com)
1248 points by ghosh on May 12, 2017 | 478 comments



Edit: Botnet stats and spread (switch to 24H to see full picture): https://intel.malwaretech.com/botnet/wcrypt

Live map: https://intel.malwaretech.com/WannaCrypt.html

Relevant MS security bulletin: https://technet.microsoft.com/en-us/library/security/ms17-01...

Edit: Analysis from Kaspersky Lab: https://securelist.com/blog/incidents/78351/wannacry-ransomw...



This sounds like something straight out of a James Bond movie.


That was a dumb move by the malware coder ;)

Wouldn't you want to hide a kill switch?


The MalwareTech write-up gives a plausible reason for the developer having accidentally added the kill switch:

> I believe they were trying to query an intentionally unregistered domain which would appear registered in certain sandbox environments; then once they see the domain responding, they know they're in a sandbox, so the malware exits to prevent further analysis.
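In code terms, the check he describes amounts to something like this; a minimal sketch, with the hard-coded domain replaced by a placeholder (registering the real domain is exactly what turned this check into a kill switch):

    # Sketch of the anti-sandbox check described above. The real sample
    # does an HTTP GET to one hard-coded domain; a placeholder is used here.
    import urllib.request

    KILL_SWITCH_URL = "http://unregistered-gibberish-domain.example"  # placeholder

    def should_keep_running():
        """If the 'unregistered' domain answers, assume we're in a sandbox
        that fakes DNS/HTTP responses, and exit to frustrate analysis."""
        try:
            urllib.request.urlopen(KILL_SWITCH_URL, timeout=5)
            return False  # domain responded -> presumed sandbox -> exit
        except OSError:   # URLError subclasses OSError: no response
            return True   # presumed real network -> carry on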



I don't understand. What exactly do the live map points represent, and where does the data come from?



Wow, that's pretty amazing work!

How is he able to add new supernodes to the cluster? I would expect a supernode to have some sort of credentials that are used for authentication. If not, isn't it possible to neutralize the botnet by overloading it with supernodes that don't send malicious commands?


According to his initial explanation - "In a peer to peer botnet, bots which can receive incoming connections act as servers (called supernodes)."

So in some cases the only requirement for a node to be a supernode is that it can receive incoming connections. I take this to mean that any computer that is (1) infected with the botnet program and (2) able to receive incoming connections becomes a supernode. Under those circumstances there's no need to reverse engineer the botnet program: all you have to do is set up a vulnerable computer, allow it to be compromised and infected (becoming a supernode), and then monitor the traffic of incoming connections.

He later mentions that supernodes can be filtered based on "age, online time, latency, or trust." This tells me that certain botnets do have a level of trust that is defined in each peer list.

I believe your last question refers to the concept of sinkholing or blackholing. These methods have been used by the FBI to take down botnets through DNS hijacking, I think.


>To ensure the entire network is discovered, we should start the crawler off with multiple supernode IPs and store all IPs found into a database; then each time we restart the crawler we seed it with the list of IPs found during the previous crawl. Repeating this process for a couple of hours ensures all online nodes are found.

This would just discover supernodes though, right? Or does every node at some point broadcast itself as a supernode?


Yes to your first question, no to your second. He goes on to explain that, "In order to map all workers, we’d need to set up multiple supernodes across the botnet which log incoming connections (obviously every worker doesn’t connect to every supernode at the same time, so it’s important that our supernodes have a stronger presence in the botnet)."

From what I understand the process is:

1. Write a program to pretend to be a compromised peer requesting a connection to a Supernode in order to obtain a peer list of other Supernodes.

2. Recursively crawl for existing Supernodes + the list of Supernode IPs. Store all addresses found.

3. Set up one or more Supernodes and 'infiltrate' the peer list of already established Supernodes. Log incoming connections from Workers.
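A rough sketch of step 2 in Python, assuming a hypothetical get_peer_list() helper that speaks the botnet's reverse-engineered peer protocol (the crawl itself is just a breadth-first traversal):

    from collections import deque

    def get_peer_list(supernode_ip):
        """Hypothetical: pose as an infected peer and ask this supernode
        for its list of other supernode IPs."""
        raise NotImplementedError  # requires the reverse-engineered protocol

    def crawl(seed_ips):
        known = set(seed_ips)   # every supernode IP seen so far
        queue = deque(seed_ips)
        while queue:
            ip = queue.popleft()
            try:
                peers = get_peer_list(ip)
            except Exception:
                continue        # node offline or unresponsive; skip it
            for peer in peers:
                if peer not in known:
                    known.add(peer)
                    queue.append(peer)
        return known            # persist this set and reseed the next crawl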

http://whatis.techtarget.com/definition/botnet-sinkhole


That's amazing, thanks for the link.


Are we watching this thing wake up right now?


We are seeing new requests from existing bots; the historical data is not shown on this map.


Gotcha. So yeah, we're seeing it wake up. The first little increase (up to 600) was about the time the article was published.


Where are you seeing this? This isn't historical data.


Here's a page with more info

https://intel.malwaretech.com/botnet/wcrypt


Yeah it scrolls off to the left. So you came an hour after my comment and it was gone. Heck it was almost gone by my second comment.


If so, that is both scary and exciting.


it took me a while to realize this is live....


I am curious. How is this tracked? What signature or what component are they looking for to be able to say "Yeah, here is another one"?

I'm just curious and would like someone with more experience to weigh in.

EDIT: To add on further to my question, I wonder why it does not use a terrain / city / province overlay instead of all black. It seems it would be much more useful to the network admins and sysadmins out there, just in case we realized "Oh, hey, that dot is right on top of where we work out of. I should probably fire up Wireshark or something and test for infected systems."


Great info. So, for the layman: how vulnerable are users behind a firewall or broadband router?


Pretty safe until a machine in the network gets infected. The first infection comes from a phishing email or similar. From then on, the worm infects other machines connected to the same network, but usually not across the internet.

It uses a vulnerability in a protocol that's used for network file sharing, and that's usually blocked at your router.
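If you want to check your own exposure, here is a minimal sketch that tests whether a host accepts connections on TCP 445, the SMB port this worm spreads over (the host list is made up; this is no substitute for a real scan):

    import socket

    def smb_port_open(host, port=445, timeout=2.0):
        """True if the host accepts TCP connections on the SMB port."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    for host in ("192.168.1.10", "192.168.1.11"):  # example LAN hosts
        print(host, "SMB reachable" if smb_port_open(host) else "SMB closed/filtered")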


What is the significance of the time span indicators? Does the 1M selection indicate how many computers remain infected, or how many were infected within that time span?


> from Kaspersky Lab

... the lab with ties to Russian intelligence, who are suspected of leaking the NSA tools.


Your point?


> "Microsoft rolled out a patch for the vulnerability last March, but hackers took advantage of the fact that vulnerable targets — particularly hospitals — had yet to update their systems."

> "The malware was circulated by email; targets were sent an encrypted, compressed file that, once loaded, allowed the ransomware to infiltrate its targets."

It sounds like the basic (?) security practices recommended by professionals - keep systems up-to-date, pay attention to whether an email is suspicious - would have covered your network. Of course, as @mhogomchunu points out in his comment - is this the sort of thing where only one weak link is needed?

Still. Maybe this will help the proponents of keeping government systems updated? And/or, maybe this will prompt companies like MS to roll out security-only updates, to make it easier for sysadmins to keep their systems up-to-date...?

(presumably, a reason why these systems weren't updated is due to functionality concerns with updates...?)


> It sounds like the basic (?) security practices recommended by professionals - keep systems up-to-date, pay attention to whether an email is suspicious - would have covered your network.

This is secondhand information (so take it for what it's worth; there could be pieces I'm missing), but I talked with a startup that was focusing on this problem, and the issue was not so much the computers and servers that IT was using (although sometimes it was); it was that many medical devices (CT scanners, pumps, etc.) come shipped with old, outdated versions of operating systems and libraries.

No big deal, right? Just make sure those are up to date too? Well, many times the support contracts for these medical devices are so strict that you can invalidate the warranty by installing third-party software like an antivirus, or even by doing something like a Windows update.

Even worse, many hospitals don't even know what devices they have -- it's easy for IT to know about laptops and computers, but when every single medical device more complicated than a stethoscope has a chip in it and may respond to calls on certain ports, it's a tougher picture to know.

The startup was https://www.virtalabs.com/ by the way, they really are doing some cool things to help with this.


In defense of these medical devices, that is actually an FDA requirement. The entire combination of the system is certified to work, and even one patch for a security vulnerability leaves open the possibility that the patch breaks something and people die! Of course it goes without saying that you need to ensure that a virus cannot run on the machine by some other means. If these machines can get infected, they automatically lose certification and cannot be used for medical purposes.


In offense of these medical devices, they should never have been running Windows or any general purpose OS in the first place! It's a lot easier to guarantee security if the entire thing is a well-tested 10-50 KLOC Rust daemon on top of seL4. I am not even asking them to do formal verification themselves, just a small trusted base and reasonable secure coding practices. I mean, come on, a critical medical device running the entirety of Windows XP (or, say, Ubuntu with Apache, an X server and GNOME[1]) should be considered actual negligence. The FDA should make it outright impossible to certify such a contraption.

Basically, the rule should be: if you are using general purpose consumer software, then you should be doing updates; if you are in an environment where updates are considered too risky, then running commodity software should also be considered too risky and you should be building very small locked down systems instead. Ideally without a direct internet connection (they can always connect through an updatable system that can't actually cause the medical device to malfunction, but can be reasonably protected against outgoing malware as well).

[1] I would be ok with some of these devices running a stripped down Linux (or NT) kernel, just not a full desktop OS. If you need a fancy UI, then that can be in an external (hopefully wired, not IoT) component that can be updated.


The FDA does not forbid the use of a general purpose OS. However, they are strictly regulated. For every SOUP (software of unknown provenance/pedigree), that is, every piece of software that was not developed specifically for a medical device, it is the responsibility of the manufacturer to provide performance requirements, tests, risk analysis...

Moreover, the manufacturer has the obligation to assess every known bug of every SOUP and provide fixes if it can endanger the patient.

The issue is that to prove that a device is safe you have to execute costly tests. For a device I have been working on, we do endurance tests on multiple systems to simulate 10 years of use. Even with intensive scenarios on multiple systems, it can take a few months. And if we encounter a single crash we reset the counter and start again. So in the end the product is safe, but it is costly. This is why most of the time it is actually better to have the simplest stack possible on bare metal. But sometimes mistakes have been made, and you inherit a system full of SOUP, and that is a nightmare to maintain.

I actually expect a shitstorm on Monday morning. Luckily I am working more on the embedded side, so no Windows for me, but some other divisions will be affected.


> In offense of these medical devices, they should never have been running Windows or any general purpose OS in the first place!

Except that people don't want to learn a new GUI for every machine...

Except that people want to be able to use a tablet for the interface...

Except that people want to control things from their phone...

Here's the reality: The end user doesn't give one iota of DAMN about security. People want to control their pacemaker or insulin pump from their PHONE. Ay yai yai.

Even worse: can your security get in the way when someone is trying to save a life? That's going to send you to court.


Most of these don't apply in the context of medical devices. Sure, you can find some that will give you access to the usual OS desktop. But largely they're integrated and have a full-screen, completely customised interface.


The device itself should not run Windows. You should separate the two: one machine for the device, one for the user. The user machine (a full-blown Windows, if that's what you want) you can security-update all you want.


Of course, such devices can put their code in ROM, and so any malware would not survive a reboot.


Sure, but then, you also need strict W^X memory protections, without exceptions (kernel included), since malware in memory of a device that doesn't often reboot is dangerous enough. For example, the very best malware for network devices never writes itself to disk even if possible, in order to avoid showing up in forensics. This already precludes most general purpose OSes and is still technically vulnerable to some convoluted return-to-X attacks that just swap data structure pointers around and use existing code to construct the malicious behavior, so I'd still feel better with a minimal trusted base even then.


This 100x. I know it's extremely easy to Monday-morning quarterback hospital IT, but it's not as simple as people think. There are legal and, far more importantly, medical implications to updating software at a hospital. Oh, you think it's ridiculous we use IE 7 in compatibility mode? It's because our mission-critical EMR only works in that (well, it really works in everything, but it's certified on 7), and if we use anything but the certified software load when accessing it, the vendor puts all the blame on us.


Yes, it actually is.

Life critical systems should be small, fully open stack, fully audited, and mathematically proven to be correct.

Non-critical systems, secondary information reporting, and possibly even remote control interfaces for those systems should follow industry best practices and try to do their best to stay up to date and updated.

Most likely many modern pieces of medical technology have not been designed with this isolation between the core critical components that actually do the job and the commodity junk around them that provides convenience for humans.


Maybe you really think it's just as simple as mandating exactly what you wrote here. But I'd imagine you'd agree that even doing this would have real and significant costs, which means tradeoffs are going to have to be made, e.g. some people won't receive medical care they otherwise would.


> some people won't receive medical care they otherwise would.

which is what happens when your whole computing network is remote-killed


The problem is that the technology stack required by modern equipment is too large to be satisfied by anything but a general-purpose OS. Good luck trying to get a mathematically proven OS.


Pretty sure you can build X-ray/MRI control software in Rust on top of seL4, and do lightweight verification (or, even better, hardware breakers of some sort) around issues like "will output lethal doses of radiation". That is a general purpose enough kernel and a general purpose enough programming language, without having to drag in tens of millions of lines of code intended for personal GUI systems... Then for malware issues you simply don't plug the device directly into the internet, nor allow it to run any new code (e.g. your only +X mounted filesystem is a ROM and memory is strictly W^X).


Rust has a lot of nice safety features, but the compiler hasn't been formally verified at all.


Yeah, I am aware. The problem is that using, say, CompCert might result in less security in practice, since although the compiler transformations are verified, code written in C is usually more prone to security issues. It also puts the burden of proving memory safety on the developer, which is a requirement for proving nearly anything else. I don't know Rust well enough to know if this applies for sure, but I think it is a lot less to ask from the manufacturer that they produce a proof of the form "assuming this language's memory model holds, we have properties X, Y and Z" and then just hope the compiler is sane, versus requiring a more heavy-weight end to end proof. Also, eventually there might be a mode for certified compilation in Rust/Go, at which point you get the best of both worlds.


This is true, but work is in progress, and some parts of the standard library already have been verified. And some of that work has found bugs too: https://github.com/rust-lang/rust/pull/41624


https://sel4.systems/ is a formally verified microkernel.


Does this distinction between critical and non-critical systems make sense for medical equipment? Displaying the information to humans (doctors and nurses) is probably life-critical. If the display is broken, it's not working.

It's not like medical devices have an entertainment system like cars and airplanes.


The display and the business end of the equipment are critical and should not be network-connected (or even have USB ports, for that matter). The part that uploads to whatever big server should have updates all the time. The critical bit should either be connected to the non-critical bit by a genuinely one-way link (e.g. unidirectional fiber) or should use a very small, very carefully audited stack for communication.

This is all doable, but it adds a bit of BOM cost and changes the development model.


An alternative would be to expose these subsystems on a network and have strict APIs, encryption, and authentication between them. This would allow you to audit/update components individually rather than the whole device. So your display would act as a networked display and only have a very limited set of functions.
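A minimal sketch of what "strict API plus authentication" between two subsystems could look like; the message format, key handling, and command vocabulary here are illustrative assumptions, not any real device's protocol:

    import hashlib, hmac, json

    SHARED_KEY = b"provisioned-at-manufacture"  # assumption: per-device secret

    def sign(command):
        """Serialize a command and append an HMAC tag so the receiver can
        reject anything not produced by a trusted peer subsystem."""
        body = json.dumps(command, sort_keys=True).encode()
        tag = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest().encode()
        return body + b"." + tag

    def verify(message):
        """Check the tag before parsing; the last '.' separates body and tag."""
        body, _, tag = message.rpartition(b".")
        expected = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest().encode()
        if not hmac.compare_digest(tag, expected):
            raise ValueError("rejected: bad authentication tag")
        return json.loads(body)

    # The display subsystem is only allowed a tiny command vocabulary:
    print(verify(sign({"op": "get_status"})))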


Yep. That worked fine for the Iranian uranium centrifuge guys...


Stuxnet jumped the airgap over USB, did it not?


We already have that distinction in our current regulations... some devices need FDA approval, some don't


Which is why a bog-standard COTS OS shouldn't be used for these types of devices. They should use a proper hardened embedded OS that has some form of mandatory access control / capability isolation system.

The long and short: don't use standard desktop Windows (or even standard embedded Windows), Linux or macOS to run these devices.


It's fine to certify devices for certain software, but a device must either be free to maintain and secure, or it must not be connected to a network.

If someone has a computer hooked to an MRI machine and to the hospital network, and it runs outdated/insecure software then someone made a mistake somewhere.


> If someone has a computer hooked to an MRI machine and to the hospital network, and it runs outdated/insecure software then someone made a mistake somewhere.

If you want a system to reach 100%, it can't rely on people not making mistakes. If all operating systems are supposed to be updated, then this has to be enforced as part of the software. The software, e.g., shouldn't accept traffic unless it's up to date.
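As a sketch of that fail-closed idea (the staleness threshold below is an invented policy knob, though March 14, 2017 is when the MS17-010 fix actually shipped):

    from datetime import date

    LAST_SECURITY_PATCH = date(2017, 3, 14)  # e.g. when MS17-010 was released
    MAX_STALENESS_DAYS = 60                  # invented policy threshold

    def may_accept_traffic(today=None):
        """Fail closed: a node whose patch level is too old stops serving
        network traffic instead of staying exposed indefinitely."""
        today = today or date.today()
        return (today - LAST_SECURITY_PATCH).days <= MAX_STALENESS_DAYS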


> Oh you think it's ridiculous we use IE 7 in compatibility mode?

It's certainly ridiculous if you don't keep it utterly sandboxed and limited to only required use.

Also ridiculous is anyone falling for - or being allowed to fall for - a mail-based phishing attack anywhere in the organisation.


Oh come on. They are doctors and nurses, not programmers.


But isn't it part of their job to care for their equipment?

This is a failure of management to properly train their employees.


Anyone disregarding common sense security advice anywhere in any organisation should leave the premises under escort within ten minutes. Would have upped standards everywhere years ago if implemented.


I'm curious, not trying to be smart: 1. Would running Windows 7 in a VM violate the certified software load? 2. Is new device software being written to run in containers / at the hypervisor level?

I could understand if 1 would be a violation, but perhaps, after today, the FDA could fast-track manufacturer patches to run software loads on VMs?

I don't imagine 2 would solve current infrastructure issues any time soon given the size of investments in current equipment, but could it be a best practice going forward?


Usually you get the system integrated into some panel of the device. It's not the software itself that's certified. It's the device as a whole with everything running on it, hypervisors included.


Many years ago, early days of War on Terror, there were 'cyberstorm' exercises by the TLAs of the U.S. military allegedly on some mythical networks that were not 'the internet'.

In 2006 this involved a nice virus that sent all your photos and emails off to people they were not intended to go to. There was a psychological aspect to what was going on with this payload, plus a full spectrum dominance aspect; the media were briefed with the cover story, but I don't think any journalists deliberately infected a machine to see for themselves.

At the same time that this was going on there were some computer virus problems in U.K. hospitals, those same Windows XP machines they have today. The Russian stock market was taken down around this time too.

Suspicious, I tried to put two and two together on this, but with the 'fog of war' you can't prove that correlation = causation. The timing was uncanny though: a 'cyberstorm' exercise going on at the same time that the BBC News on TV was showing NHS machines taken out by a virus.

https://www.dhs.gov/cyber-storm-i

http://www.telegraph.co.uk/news/uknews/1510781/Computer-bug-...

So that was in 2006. A decade ago. If you found a hole in a hospital roof a decade ago they would have had ample opportunity to fix it. They had a good warning a decade ago, things could have changed but nothing did.

I had the pleasure of a backroom tour of a police station one night, don't ask why, luckily I was a 'compliant' person, no trouble at all, allowed to go home with no charges or anything at all. An almost pleasant experience of wrongful arrest, but still with the fingerprints taken - I think it is their guest book.

Every PC I saw was prehistoric. The PC for taking mugshots was early 1990s, running Windows 95 or 98. I had it explained to me why things were so decrepit.

Imagine if, during the London riots of 2011, the PCs' PC network had been taken down, with all of that police bureaucracy becoming unworkable! I believe that the police computers are effectively in the same position as the NHS, with PCs dedicated to one task, e.g. mugshots, and that a takedown of this infrastructure would just ruin everything for the police. I think that targeting the UK police, getting their computers compromised (with mugshots, fingerprints, whatever), and then asking the police to pay $$$ in bitcoin before being locked out for good next week would have made me chuckle with schadenfreude.


Just wait until Congress tries to impeach Trump, people are rioting in the street, various factions fighting each other, and then the shit you just described happens. Maybe the Education Secretary's brother has some resources in contingency for such an event.


In a perfect world there would be market pressure on device manufacturers: those device manufacturers who patch devices and ensure the patched versions are recertified, would win out over those who do not, in an environment where the expectation is for all these devices to be networked! But of course this requires a competitive market to exist, AND for recertification and patching to be trivial costs. Since they're not, even if a hospital administrator were to price in the risk of losing certifications on all their devices, it's likely that the risk would end up being less expensive than choosing that (potentially non-existent) security-conscious device manufacturer.

Anyone considering disclosure, responsible or not, should be aware of these types of secondary effects. Had these vulnerabilities hypothetically been discovered by a white hat or found their way to a leak-disseminating organization, the discoverers and gatekeepers should consider that not everything can be patched, and the ethical thing to do here would have been to notify Microsoft and wait for a significant product cycle to release technical details. I somehow doubt the Shadow Brokers had that aim, though. And it's saddening that even in the hypothetical case, many people would choose "yay transparency!" over a thoughtful enumeration of consequences.


Seems like the FDA should certify on software tests and not software versions.


Including virus like attacks and fuzzing.


The computer systems affected by the NHS ransomware incident weren't medical devices; they were patient records servers and emergency receptionist workstations. No excuse for failure to patch.


Don't blame the NHS IT staff. The decision to not pay for XP security updates came from the highest level, the UK Tory government: https://www.benthamsgaze.org/2017/05/13/the-politics-of-the-...


If that's the case, that's fine, but then my question is: why are these computers networked to any extranet source? It seems a natural conflict: you cannot update the system because it is so important that it always works, yet we need to attach it to an outside network, which allows risk of infection. In my opinion, if the computer HAS to be connected to a network that is accessible from outside, then it MUST be allowed to be updated with the latest antivirus/protection updates.


If it is infeasible to keep certain critical, networked devices up to date, then I propose an alternative solution: those devices should only produce output; they should not read anything at all from their external ports. Their only input should be their physical user interface. Would that work for, say, an X-ray machine, or an MRI?

We saw a fictional example of a scheme like this on Battlestar Galactica. Officers phoned and faxed orders around the ship, using simple devices that did not execute software. The CIC had its data punched in by radar operators, instead of networking with shipwide sensors. It was a lot of work, but it did keep working in the face of a sophisticated, combined malware/saboteur attack.
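A sketch of the output-only idea in software terms: the device writes datagrams to a socket and never reads from it, so nothing on the network can feed data back into the critical side (the address and the reading function are placeholders):

    import json, socket, time

    COLLECTOR = ("10.0.0.5", 9999)  # placeholder: listener on the non-critical side

    def emit_readings(get_reading, interval=1.0):
        """Send-only telemetry loop; there is no recv() anywhere, and a
        one-way physical link would enforce that in hardware."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        while True:
            sock.sendto(json.dumps(get_reading()).encode(), COLLECTOR)
            time.sleep(interval)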


In theory sure that could work. In practice it would raise healthcare costs even further due to the extra manual labor. So that's not going to happen.


Why are those devices being connected to an insecure network? Surely they should have super limited data exchange features?


As is commonly the case, hardware vendors are more concerned with selling you the hardware and probably spend bottom-dollar for their software developers. I can't say that I've worked in such an environment, but my impression is that management at such companies probably see software dev as a cost-centre rather than something to actually spend money on for quality.


But the hospital management shouldn't be plugging them onto the same network where end-users have access, no?


Surely that's the point of hooking them up to the network, so you can e.g. get the pictures out of your CT scanner on to the doctor's PC?


The doctors' PC can run just fine on an isolated network and doesn't have to be connected to the internet.


No that wouldn't work. Modern healthcare is a team effort, especially for patients with complex conditions. Doctors must be able to collaborate with each other including securely sharing data across the Internet in order to deliver effective patient care. No one is going to give up on that just to prevent a few isolated security incidents.


> securely sharing data

> security incidents


That's the idea behind N3, the NHS's internal network. The idea of a hard shell with a soft centre. With N3 as large as it is, the idea breaks down. Security in depth is required, secure at every level. The hard shell idea is outdated, and N3 is scheduled to be turned off in 2019.


So you propose a separate, isolated network linking all the medical facilities, doctor's offices and private practices nationwide? Even the military doesn't do that for most of their offices.

Also, the doctor's computer pretty much needs to interface with the system(s) that handles patient billing (and thus non-medical companies) and the system(s) that handle patient scheduling, reminders, etc.


> patient billing

Not really an issue in the NHS, apart from the occasional non-resident foreign national.

(The "fundholding" system does mean there's a certain amount of internal billing which the patient is never aware of, but the beating Bevinist heart of the free-at-point-of-use system is still in place)


Free-at-point-of-use processes tend to be ones that require integration with a billing service, namely, to send information about the performed procedures to whatever system is paying for them, no matter if it's some state agency, private insurance, or whatever else - that's what I meant by non-medical companies that would need to be on the network.

A private practice where everything is paid by the patient in full by cash or CC could do without any integration with external systems (just run a standard cash register), but as soon as someone else is paying for it, you generally need to link the doctor's office systems to that in some way.


Until that doctor needs to submit patient info to a study, look up an obscure symptoms, talk with others in the medical community, etc.


It has an ethernet port; someone will plug an ethernet cable into it. The problem is not so much that the users are idiots; the problem is that people get distracted some of the time and make mistakes some of the time.

And yes, surely they should have super limited network features. The important word is "should."


Many of the computerized medical devices are diagnostic, so being able to send digital data to doctors quickly and easily over the internet is a key part of their functionality. Also, the other way around - being able to get patient data to the device without manually re-entering them, which is costly and error-prone and thus dangerous.


In the article, it mentioned that patient records servers and patient in-processing workstations were affected. No mention was made of medical devices.

Workstations absolutely should be patched with security updates. Running an intranet-wide update server is non-trivial, but is well within the reach of a competent sysadmin. And failing to do it is negligent.


Those are not the systems involved in these attacks; the NHS systems compromised were the workstations used by doctors to access patient records and the Samba servers storing those records.


I supported a "cashless gaming" server for years which had the exact same contract. One Windows Update and I couldn't even get a failed disk replaced.


Well, I suspect that such devices should not be connected. I was on dialysis at a clinic in one of the affected trusts, and boy am I glad that my hemodialysis machine was not connected to the network.


They had better at least honor their warranty and replace all this hosed equipment.


Karen Sandler has an interesting story about medical devices and how they are, literally, putting her life on the line. She's both a lawyer and a hacker, and you should hear the stories she tells about how people distrust her for this and think she's trying to trick them when all she wants to do is learn about the software and hardware that is keeping her alive:

https://www.youtube.com/watch?v=iMKHqO28FcI


Wow, that video deserves more views than it has. Really interesting and still relevant for a two-year-old talk.


If you run a large installation of computers, taking updates can be a huge risk. Often they can break things, and then you're in the position of being blamed for running an update. Not updating can often lead to much higher stability.

In previous environments I've worked in that were "regulated", any change to the environment, such as a firmware upgrade, triggered an entire re-regulation process (testing, paperwork, etc).


That's wrong. If you run a large installation of computers, and you do not have a plan and a process for quickly deploying security patches, you should be fired with cause.

In this specific case, there are mitigations available that do not require installation of software, but merely a configuration change. Also in this specific case, the people who run IT at NHS are completely incompetent, and this has been well-documented for several years.

In the general case, "I have a lot of machines" is an excuse provided by the unable to evade being held responsible by the uninformed.


Easy for you to say. I have been unable to do my job for several days because some update broke a service I was using. Sure, the service was badly written, but we didn't know that until the patch was applied.

The phone company used to have (they still might, I'm not in the business anymore) large labs that were small replications of their network. I've been in meetings where the goal was to decide if we should try to get our latest release through their process - if yes and we were successful they would pay big for the latest features, but if yes and it failed [I can't remember, I think we had to pay them for test time, but the contract was complex]. A lot of time was spent looking at every known bug to decide if it was important.


Was the update a Microsoft Security Update?


> Also in this specific case, the people who run IT at NHS are completely incompetent

That's what you get when you defund critical services.


Funding is necessary but not the determining factor. There are just as many incompetent IT admins in well-funded private companies earning top pay. Sensible and aware top management is far more critical.


If your job is to keep a bunch of computers working, keeping the systems running is the goal. Deploying security patches quickly is not always considered a requirement.

Again, the problem is that rolling out patches quickly often leads to unplanned problems that can't be easily detected or rolled back from. That can cause problems worse than leaving security issues unpatched.


If your systems are exposed to the Internet, then deploying security patches quickly is a part of keeping the systems running - as illustrated by this case, where the systems obviously are not running and can't be easily rolled back to a working state.

The business of cybercrime is changing. With the growing popularity of ransomware, we should expect a gradual decrease in time between a published remote vulnerability and your systems getting attacked. It may be useful to delay patches by a day to see if there aren't any glaring problems encountered by others - but it's not a reason to leave open holes that were patched in March. Frankly, there was no good reason why this attack hadn't happened a month ago; next time the gap may be much smaller.

Yes, there is a chance that installing a security update will break your systems. But there's also a chance that not installing a security update will break your systems, and that chance, frankly, is much higher.

Furthermore, "That can cause problems worse than leaving security issues unpatched" seems trivially untrue. Every horrible thing that might happen because of a patch broken in a weird way may also happen because of an unpatched security issue. Leaving security issues unpatched can take down all your systems and data, plus also expose confidential information. A MS patch, on the other hand, assuming that it's tested in any way whatsoever, won't do that - at most, it will take down some of your systems, which is bad, but not as bad as e.g. Spain's Telefonica is experiencing right now. What patch could have caused them even worse problems?


When you say 'the people who run IT at the NHS', you are aware that, thanks to recent governments' attempts to break up central structures, each hospital trust and each GP surgery is likely to have someone different handling IT - market forces are good, etc.


Do you know this assertively?


Downloading Microsoft security updates is simple and safe.

You just download the monthly rollup: http://www.catalog.update.microsoft.com/search.aspx?q=401221...

Any competent sysadmin will have these available on their internal update server and push updates+restart during off-peak hours.

Receptionist computers that can open websites with untrusted JavaScript can't reasonably be held to this certification. Certification isn't what kept the NHS from applying patches.


Some vertical markets use a lot of software that integrates with Microsoft Office applications. The result is that there is a much higher chance of a Microsoft update breaking a critical application. [0] is a recent (September 2015) example of two Microsoft patches that were widely blocked in the legal industry until Microsoft released a follow up patch. iManage and Workshare, the products mentioned in the blog entry, are considered critical applications in any law firm that uses them. iManage is a widely used document management system (think primitive VCS with Office add-ins). All documents are stored in the DMS so access to it is critical to the business. Workshare is used for document comparison and metadata scrubbing. Metadata scrubbing is used on all outgoing emails.

[0] http://www.kraftkennedy.com/block-2-microsoft-patches-preven...


I'd like to hear r/sysadmin opinion on that.


Translation: "My feelings make me feel that the statement isn't right. Instead of finding out, I'm just going to say that I wish someone would tell this commenter they're wrong."


Translation: "Microsoft has lied about the content of their updates before."


> If you run a large installation of computers, taking updates can be a huge risk. Often they can break things, and then you're in the position of being blamed for running an update. Not updating can often lead to much higher stability.

There is such a thing as staged rollouts for this exact type of scenario.
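A sketch of the idea; deploy_patch() and healthy() are placeholders for whatever WSUS groups or config management a shop actually uses:

    import time

    RINGS = [
        ("canary", ["test-vm-01", "test-vm-02"]),      # expendable machines first
        ("pilot",  ["reception-ws-01", "lab-ws-03"]),
        ("broad",  ["ward-a-ws-01", "ward-b-ws-01"]),  # the rest of the fleet
    ]
    SOAK_SECONDS = 24 * 3600  # watch each ring for a day before widening

    def deploy_patch(host):
        print("patching", host)  # placeholder: approve the update for this host

    def healthy(host):
        return True              # placeholder: service checks, event logs, smoke tests

    for ring_name, hosts in RINGS:
        for h in hosts:
            deploy_patch(h)
        time.sleep(SOAK_SECONDS)
        if not all(healthy(h) for h in hosts):
            raise SystemExit("rollout halted at ring '%s': investigate first" % ring_name)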


Well this justifies MS's decision for forced updates in Win10. Not that I like it, just saying.


So your workstation is next to a bed and is attached to a machine which feeds a drip to keep a little girl alive and it gets your untested patch or whole OS upgrade and the dosage is increased or the driver stops and the patient dies.

Only non-critical machines can just automatically apply software patches from Redmond (or anybody). This is not laziness or incompetence - only a few weeks ago military-grade exploits from the USG were leaked onto the internet and are currently being re-purposed for non-spying applications. Does anyone think any organisation is prepared for this? Chinese chatter indicates that ms17-010smb doesn't even fix all cases! Many organisations will have been saved by infra guys making sure ms17-010smb was rushed through and that McAfee sigs were updated 'just because'.

edit: fixed CVE (Eternalblue)


Machines like that, which cannot shoulder the risk of applying updates to a network-connected general purpose OS designed to run third party (potentially malicious) code on a non-deterministic non-realtime system... probably should not be using such a system. Patching is risky, not patching is risky.

They should have formally validated software running on formally validated deterministic realtime hardware, running in non-networked environments (but with telemetry and remote control from networked computers if that's convenient). We just don't bother, because it's cheaper and legal to get away with selling hacky nonsense.


I agree. A mission-critical MRI machine should not be running an off-the-shelf OS (Win, Mac, Linux). If you're paying $5 million for a machine, it had better have its own real-time operating system that has been independently audited.

Now the machine that you pull up the images on is most likely going to be a general purpose PC/Mac. You still need to patch that. Your IT dept needs to have patch cycles that deploy in sets, so all mission critical equipment can be tested before everything gets patched. It takes diligence, and planning. If you prepare at a very large hospital with two MRI machines, then a bad patch can leave you degraded, but not totally offline.


Custom operating systems would require higher development costs and extremely rare sysadmin skills, which would mean larger hospital budgets, which would mean higher taxes or premiums.

Yeah, not gonna happen.


So your workstation is next to a bed and is attached to a machine which feeds a drip to keep a little girl alive and it gets hit by a worm like this one, stops working and the patient dies.

As long as the chance of cyberattacks is larger than the chance of horrible patches, you simply accept the risk of horrible patches and install them anyway. Or keep the system totally isolated from everything, if it's that critical.


The IV drip machine is not plugged into the CoW (Computer on Wheels). That workstation is running a version of enterprise Windows primarily to allow the medical professional to view and update patient records.

The IV drip machine is plugged into the wall, and is operated by buttons on the front.


In reality, a huge number of modern IV drip machines plug into the wall for power and get their network connectivity via 802.11. This is to allow remote configuration and status monitoring.


Are the IV drip machines running Windows CE or XP embedded? Was there a news report that claimed that IV drip machines were affected by malware?


Your hypothetical situation distracts from the actual issue. The ransomware infected NHS patient records servers and receptionist workstations, according to the article.


> Well this justifies MS's decision for forced updates in Win10. Not that I like it, just saying.

Unfortunately, I think the active hours period cannot be set to more than twelve hours, which is less than the time required for some surgical interventions. I can almost imagine it: OK everyone, ten-minute break while Windows installs its updates, this guy who's been on life support for the last ten hours can wait a little longer.

That's why updates are not forced on business-grade installs, and forcing them would be a very, very stupid decision.

Forced updates make sense for home users, since Microsoft can't depend on someone requiring them to keep their networks secure. For other types of users, second-guessing update policies is always a bad idea.


Windows is not a real time OS. Neither is Linux (except for perhaps a limited number of forks/distros).

If someone is going to die if a computer stops working for any reason at all, it should not be running Windows, or Linux, or macOS. It certainly shouldn't be connected to the internet or to any other network.

When we build computers as nice-to-have mixed-use machines with all the bells and whistles, we need to treat them like nice-to-haves and not need-to-haves.


Surgeries are scheduled in advance except for the most urgent procedures; most surgeons and surgical nurses don't work on weekends.

Surgeon workstations can absolutely be restarted once per month to install the monthly roll-up.

The article mentions patient records servers and receptionist computers being affected by the ransomware. Not life support equipment.


> Surgeon workstations can absolutely be restarted once per month to install the monthly roll-up.

I was replying to the part about forcing updates. I didn't know about the group policy setting (rightfully pointed out by sp332); without it, you don't wait a month, you wait at most 12 hours :-).


Win10 Pro is more flexible, although you might have to drop down to Group Policy to do it. http://pureinfotech.com/defer-windows-10-upgrades-updates/ At the very least, the workstations can be pointed to internal WSUS servers which control the rollouts. I'm guessing that's how most of the currently-vulnerable computers stayed vulnerable until now.


I stand corrected, I didn't know about that group policy setting.


Keep in mind that on business grade installs the updates are not forced.

And there have already been situations where updates have caused problems. Maybe not as severe as a full-on attack, but enough to potentially disrupt production and thus risk someone's job.


The infection I'm dealing with happened from a fully-patched Win 10 Pro machine running Windows Defender and Outlook 2013 (32). It already had authenticated access to the files on the server it encrypted.

There was a Windows script file on the desktop, something like "UPS tracker.js", but it disappeared before I could grab it and a free space recovery didn't return it. (Possibly due to TRIM, it was on an SSD workstation.)


Not forced if you are using Windows 10 Enterprise

https://docs.microsoft.com/en-us/windows/deployment/update/w...


It justifies security updates for all operating systems. It does not justify the installation of spyware or changes to the user interface.


Sadly this distinction is rarely made and imo intentionally kept ambiguous. Lovejoy's law is used for justifying spy, bloat and crapware.


On many OSes, not all patches require reboots. Some OSes even apply kernel patches live. :) They could have taken a more user-friendly approach. I understand they were boxed in a little technically, but they built the box.


>It sounds like the basic (?) security practices recommended by professionals

The problem with Windows is that crap cannot be upgraded without stopping your workflow and rebooting.

With Linux distros you can upgrade packages in the background (I think that's because on Linux you can replace a file while it is being executed, whereas on Windows you can't, but I'm not sure) without even rebooting after the upgrade. You can even patch your kernel without a reboot.

On Windows you have to stare at an upgrade screen for an hour without an opportunity to do something useful, and after that you have to reboot. That sucks.


Functionality is always second to security. In this situation you patch all the machines and then you test which still work. If the machines at the hospital don't work because some software was incompatible with the vulnerability fix (which is almost unthinkable in most cases), then those computers are simply unavailable and surgeries are cancelled, or whatever the impact may be.


I'm sure this sort of stuff doesn't help with speedy updates.

https://arstechnica.com/tech-policy/2017/03/public-universit...


I saw this at work from the inside of a big telco that did this. One guy who needed only vague instructions to configure complex software was replaced by a team of five who needed detailed step-by-step manuals written out by the vendor, still took twice as long, and couldn't cope with any hiccups along the way.

I do not believe outsourcing saves money. It only does so either by cutting quality of service, or in cases where the IT department was heavily mismanaged anyway. Bring in capable management and you don't need to outsource.


I've never seen a case where outsourcing of general-purpose IT things saves money over the long-term. It might make the budget look better for a year or two. Which, I think, is the motivation for a lot of the people making decisions to outsource. It is cheaper right now, so who cares about later?

Special-purpose stuff can still be cheaper to outsource, though. If I need something to work next week and it would take my staff a month to get up to speed, I'd spend the money on outsourcing it.


I think this is an excellent example that we can all reference the next time someone says that governments should be allowed to have backdoors to encryption etc.

This shows that no agency is immune from leaks and when these tools fall into the wrong hands the results are truly catastrophic.


> This shows that no agency is immune from leaks

That's been well known for a long time. During the Cold War a lot of Russian weapons were based on US designs. There is a TV series, The Americans, which shows how to manipulate people and steal secrets. Even atomic bomb secrets were stolen (by Klaus Fuchs and others).

So I guess a lot of people in the military complex make a lot of money on these exploits, PRISM and other projects. And they just don't care about society as a whole.


If you explicitly ask someone, in the form "are there organizations that are immune to leaks?", they're likely to say "no, of course not. Humans make errors."

But if you phrase it as something like "Can the government be trusted with backdoors to protect us from terrorists and Chinese hackers?", then suddenly public sentiment will change dramatically.


To quote Göring,

> Göring: Oh, that is all well and good, but, voice or no voice, the people can always be brought to the bidding of the leaders. That is easy. All you have to do is tell them they are being attacked and denounce the pacifists for lack of patriotism and exposing the country to danger. It works the same way in any country.

Patriotism is both a wonderful and terrible thing, and it is made worse by fearing the "other". Any time people create a boogeyman (China, Mexico, Muslims, what have you), be on the lookout for what the true motivations are.


> Patriotism is both a wonderful and terrible thing

I find that hypothesis widely accepted, without much argument for it.

Patriotism fuses core values like freedom or solidarity with a flag. That's why it is easier to pervert.

Patriotism tells people that because there are people born inside the same lines on a map as you, you should be proud of what they do, and you should help them first.

Patriotism distorts history.

> "Fourteen thousand years ago, Sweden was still covered by a thick ice cap." https://sweden.se/society/history-of-sweden/

Bullshit. Sweden didn't exist 14,000 years ago. All history is taught as if the current countries were an inevitable result, thousands of years in the making. World history, human history, gets displaced to be able to build a national sentiment.

> "The colonial history of the United States covers the history of European settlements from the start of colonization until their incorporation into the United States of America" https://en.wikipedia.org/wiki/Colonial_history_of_the_United...

Again, we get that feeling of predetermination. As if those people weren't free to choose their future, as if they weren't individuals but just a means to create a country.

Patriotism narrows the mindset of populations. I don't see the usefulness. Anything that people do for patriotism would be better done for freedom, equality, fraternity, etc.

Why is patriotism a wonderful thing? What arguments am I missing?


Patriotism was temporarily necessary while we rapidly increased standard of living for ourselves, and didn't have enough resources to do it globally. In the early 21st century it was still a zero sum game on subdecade timescales.

Now we have more than enough resources to provide basics for all 10 billion of us (and decreasing) so patriotism has largely been confined to friendly rivalry around sports and regional cuisine. It was just a matter of mapping out the world's local customs and needs so the resources could be distributed intelligently.

And even at that, only about 4% of GWP goes to basic food, shelter, health, education, and cultural-ecological preservation these days. Entertainment and luxury goods make up the rest. This was unthinkable in the 2020s, but there was a lot of duplication of effort due to the maintenance of corporate moats in the basic sustenance industry at that time.

Sent from my iPhone 16S


16s? No Neuralink?


Only available on plus models.


It's even stronger than that; it's tribal. It affects political affiliations as well; once you've identified yourself as part of a group, you're more inclined to take on the group's opinions, and you start to feel knee-jerk disgust at the rationales of the opposing side.

Keep the temperature up, and it eventually leads to civil war, just like amped up patriotism / nationalism leads to wars between states.


Patriotism can be a way to align the interests of a group ahead of those of the individuals in the group.

This can be a wonderful and terrible thing.


The scary thing is that it's about the group as opposed to people not in the group. This tribalism is nothing but scary.


It can be, yes.

But it's often patriotism that is seen as what enabled things like the congressional Republicans in the Nixon era to authorize the special investigations which brought him down.

That's only one example - there are plenty of others where an individual puts the interest of the group ahead of themselves. That isn't always a bad thing: the alternative is the tyranny of the strong, where the strongest individual has the most say.


I could also care about justice and people in general regardless of race or nationality or where they live.


Snowden comes to mind.


I have yet to see anything positive from patriotism. It's a form of outdated tribalism. Even the idea of a nation-state isn't that old - this all started with the Napoleonic Wars.

Patriotism always leads to "us" vs "them", it seems.


Patriotism seems to be a euphemism for nationalism.


It's quite hard to find a good pitch for patriotism. I like my country, but like any relationship, it is conditional on not being a sociopath. Furthermore, everything I like about my country I can like directly: free speech is laudable in itself. Without free speech, what is the US? Nothing I care for.


Greetings from Germany. Losing WW2 thoroughly destroyed patriotism here. We do fine.


Greetings from Britain. Unfortunately we didn't get that benefit too, as Brexit and the current election demonstrate...


> That's well known for a long time.

But the implications of it are not. Otherwise, no one (including heads of TLAs) could continue to claim that gov't backdoors are a good idea without being widely perceived as an idiot.


We now know that the USSR A-bomb design was a copy of the US's first implosion design, but the USSR H-bomb design was completely new, very different from the US design.


To be completely fair, it's not the NSA's fault that software has faults. It's the software manufacturers'.

The ethical concern here is whether the NSA should have reported the holes to the manufacturers, and its failure to handle its privileged knowledge in a safe manner.


> ... it's not the NSA's fault that software has faults.

But every time they ask for there to be legally mandated backdoors - they need to be reminded of these incidents.

The NSA actively wants there to be "faults" like these. They just only want the "good" guys to have access to them.


I definitely agree wrt intentionally added exploits ("backdoors"). To me this news highlights the need for fundamentally safe software, just like we have safety laws in the automotive or airline industry.


If the NHS has been significantly crippled by this, and the NSA is partly at fault, could the NHS successfully sue the NSA in the UK?

(edit: my logic and phrasing was really bad)


At least in the US, there is limited ability to sue foreign sovereigns in our courts - not sure if that's the case in the UK too. Beyond that, I doubt this is a rabbit hole any government, much less the UK - which has a fairly imperialistic past - wants to go down. Glass houses and all.


Now that the U.S. has set an alarming precedent that the Kingdom of Saudi Arabia can be sued in U.S. court over terrorist funding, maybe the U.S. government could be sued.

I don't think they'd win; the ransomware authors and operators are the ones who perpetrated the act. The U.S. government probably wouldn't be found negligent since the software was stolen. NHS carries partial liability since it was negligent with its patching, according to industry-wide IT security standards.

Comparing it to firearms, I can be held partially liable for a wrongful death if I leave my Colt 1911 out on my porch; it's different if a burglar stole my gun safe and committed a crime.

(obligatory disclaimer that I am not a lawyer, I just play one on Hacker News)


They've been told for years to get off XP. They weren't paying MS to keep it updated. The exploit was patched months ago. Why were these machines even on the internet?

I'd say the NHS is far more at fault than anyone else here.


That would be a tough argument to make. Similar to how you would have trouble going after a gun manufacturer for murder rather than the attacker.


He is not talking about the actual flaws as being the example as to why we shouldn't give the NSA backdoor access; he is saying that the leaks prove that even the NSA can't keep their stuff secret. If they couldn't keep their hacking tools secret, why should we think they can keep their backdoor access secret?


In case anyone has been living under a rock for the past 3 years:

FBI's (recently fired) James Comey has been asking for an encryption backdoor for the past 3 years:

2014: https://www.fbi.gov/news/speeches/going-dark-are-technology-...

At that time, he said unbreakable encryption should be illegal: http://www.newsweek.com/going-not-so-bright-fbi-director-jam...

2015 (asking for a backdoor): https://www.theguardian.com/technology/2015/jul/08/fbi-chief...

2016 (same): https://arstechnica.com/tech-policy/2016/03/fbi-is-asking-co...

2016 (tried to force apple to create a backdoor for the iphone): https://www.apple.com/customer-letter/

And then here recently, he's upped it to an international agreement to create a backdoor: https://www.techdirt.com/articles/20170327/10121437009/james...

He's not the first, only, or last person to ask for it.



Good time to remind folks that gmail, facebook, whatsapp, amazon etc. aren't going to be able to protect their data forever at the levels they're currently capable of.

A couple of bad business decisions and they are where yahoo is today. So be smart about how you use these services and educate the non-technical folks around you.


What would 'being smart' about using these services mean? It is pretty difficult to get through life in the modern age without using email for sensitive documents (or at least without using ACCESS to your email as a way to gain access to sensitive services, eg password reset emails, proof of ownership, etc)

Since email in the modern world has this type of importance, what should I do? If you say gmail can't protect their data forever, do I not use gmail for email? What do I use then? No service will be free from data leakage, even an email server I run myself.


Did I say stop using them?

Distribute risk. Use multiple accounts. Don't handle all work/financial stuff on a single account. Keep work and personal accounts separate. Reduce the number of hours you spend online being a data milch cow for these corps; this automatically reduces dependence. Prevent messenger chat-transcript backups by uninstalling the app every other night, and don't restore any saved transcripts from disk on reinstall.

I could go on and on but basic rule is use your imagination. Don't use these tools the way they want you to use them. Use them as you would use a tool in a workshed as an aid, not as a drug you are dependent on.


Just make sure whatever email provider you use offers IMAP and use a client like Thunderbird to keep a local copy in sync. Back that up somewhere safe and you're fine. If you need good, fast search, use something like X1.
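
If you'd rather script this than rely on a GUI client, a tool like isync/mbsync can mirror an IMAP mailbox into a local Maildir that your normal backup job then picks up. A minimal sketch, assuming a generic IMAPS provider and credentials kept in a pass store (every name below is a placeholder):

  # ~/.mbsyncrc -- mirror a remote IMAP mailbox to a local Maildir
  IMAPAccount backup
  Host imap.example.com
  User me@example.com
  PassCmd "pass show email/backup"
  SSLType IMAPS

  IMAPStore backup-remote
  Account backup

  MaildirStore backup-local
  Path ~/mail/
  Inbox ~/mail/INBOX

  Channel backup
  Master :backup-remote:
  Slave :backup-local:
  Patterns *
  Create Slave
  SyncState *

A nightly "mbsync backup" from cron keeps the local copy current, and since Maildir is just plain files, any backup tool can archive it.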


This was something I thought POP did better since it requires maintaining one's own copies after downloading. But it was much less convenient as people used more devices.

Sad that managing our own multi-device services is so time consuming.


That will protect you from data loss, but not data theft.


Data theft is a separate issue. Whether you're using gmail, your own mail server or an account with your ISP, if your machine is compromised all bets are off (including all your other files, not just email). At least with a backup you won't lose your data as a result of the theft.


I would say that it's probably smart to occasionally purge all your content from online services and keep your data in cold storage you physically control.


There is quite a large cost to that, though. Being able to search through old emails is a lifesaver. I can't count how many times I have searched through email to find some account info I set up years ago, or to get date information about when something happened. Just today, I searched my email for my old FastTrak account info, and found it on an email from 5 years ago.

Deleting all my email would be a big cost to pay for a gain that I can't exactly quantify; I would have to figure out the likelihood of my data being leaked over time and the cost to me if it was. It isn't readily obvious what the risk factor is for me, but I KNOW the cost factor.


You can download/store your email on a medium that you control, like a portable hard drive. Storing email online invites theft and can hand hackers a trove of personal information to mine.


I agree about this ethical concern, but this attack also shows that reporting the holes to manufacturers is of limited use -- these exploits have been known to manufacturers since at least March, and while patches have shipped, the computers remain vulnerable. Clearly, automatic security updates are still not aggressive enough to prevent these kinds of problems, though it isn't clear from the article how out-of-date the vulnerable systems are, which would help in planning for the future. For example, Windows 10 pushes security updates very aggressively, and I wonder how many of the infected computers were running it -- health care providers' computer systems are often notoriously out-of-date.


No-one running a large organisation's IT systems is going to be letting individual machines just install whatever updates the software maker feels like pushing, even on Windows 10. That would be a big risk in itself: plenty of software makers, including Microsoft, have pushed horrible breaking changes in updates in the past.

Personally, where I would point the finger squarely at Microsoft is in its recent attempts to conflate security and non-security updates. Plenty of people, including organisations who are well aware of what they're doing technically, have scaled down or outright stopped Windows updates since the GWX fiasco and other breaking changes over the past few years.

This also leads to silliness like the security-only monthly rollups for Windows 7 not being available via Windows Update itself for those who do update their own systems (not that this matters much if Windows Update was itself broken on your system by the previous updates and now runs too slowly to be of any use). Instead, if you don't want whatever other junk Microsoft feel like pushing this month, you have to manually download and install the update from Microsoft's catalog site. Even then, things like HTTPS and support for non-IE browsers took an eternity to arrive, and whether the article for the relevant KB on Microsoft's support site includes things like checksums to verify the files downloaded were unmodified seems to be entirely random.

I get that Microsoft would like everyone to use Windows 10, but for some of us that isn't an option or simply isn't desirable. We bought Windows 7 with Microsoft's assurance that it would be supported with security patches until 2020, so this sort of messing around is amateur hour, and they really should be called out on it a lot more strongly than they have been.


I would be curious about this too. I'd assume many of them would be running Windows 7, maybe? (Let's hope it's not XP).

Also, does Windows 10 Pro attached to a domain controller still have the same aggressive updates? Or do domain admins dictate that policy?

At one company I worked at, everyone in IT could volunteer for the patch group to get security patches a few days before the rest of the machines. That seemed to work pretty well. Is there any evidence a 0-day was involved that wasn't patched? I find it disheartening that so many machines in large managed networks like telcos and hospitals could be so far behind on patches! (3 months is A LOT in Internet time.)

If people are just doing really basic stuff like order entry for doctors/nurses, we really need to get away from the full PC model. Most of these machines should just be Chromebooks, Linux boxes that boot straight to a browser, or something of that nature instead of full PCs/Macs. Lower the attack surface with something that's easy to update. Those machines would be lower cost too and easier to manage/patch -- moving back to the terminal/thin-client model.


> Let's hope it's not XP

BMJ released a report[0] just two days ago alleging that up to 90% of the NHS's computers are still running XP.

> Many hospitals use proprietary software that runs on ancient operating systems. Barts Health NHS Trust’s computers attacked by ransomware in January ran Windows XP. Released in 2001, it is now obsolete, yet 90% of NHS trusts run this version of Windows.

[0] http://www.bmj.com/content/357/bmj.j2214


It appears that Theresa May is trying to deflect attention from the massive under-investment in NHS IT infrastructure by reinforcing that it is an 'international attack on a number of countries and organisations'.

Whilst this is true, it's probably also true that the impact of this attack is highly concentrated across organisations with chronic under-investment and a laissez-faire attitude to security.


>Whilst this is true, it's probably also true that the impact of this attack is highly concentrated across organisations with chronic under-investment and a laissez-faire attitude to security.

Good developers are rare enough, but good IT security and security-minded developers are even more rare. And it's even more rare that they decide to work within healthcare.

There just aren't enough of you to go around, and you can't be everywhere.

Even if you can afford to have a dedicated pentesting team (I'd like to work at a healthcare system/hospital network that did), physical security is still a major problem if only because it's very easy to impersonate people.


In fairness, massive over-investment in NHS IT infrastructure hasn't gone so well either:

https://amp.theguardian.com/society/2013/sep/18/nhs-records-...


Fair point. Good example of a death-march project!


https://m.theregister.co.uk/2012/01/12/drone_consoles_linux_...

Military drone control consoles were using XP until the machines had accumulated too much spyware to operate the drones.


It makes no difference whether they created the security holes by planting moles in the developers' companies or whether they simply withheld the information. They put human lives at risk by doing it.


> To be completely fair, it's not the NSA's fault that software has faults. It's the software manufacturers'.

While this is true, it doesn't address the point that you were responding to:

> this is an excellent example that we can all reference the next time someone says that governments should be allowed to have backdoors to encryption etc

...where "should be allowed to have" is interpreted as "should be given by software manufacturers".


> To be completely fair, it's not the NSA's fault that software has faults. It's the software manufacturers'.

The NSA has a specific mission to secure the nation's infrastructure. In withholding key information from US companies, it's failing that mission.


That's half the NSA's mission. It has another half, and that is eavesdropping and getting into things. Those two missions are at odds with each other, so the NSA has to make decisions about trade-offs. As these incidents show, the trade-offs the NSA has chosen have turned out to be bad ideas.


This is why the NCSC was split from GCHQ in the UK. https://en.wikipedia.org/wiki/National_Cyber_Security_Centre...


Ok, so show us how you write perfectly secure code. It sure as hell is the NSA's fault here for mishandling all their exploits against commercial software.


I don't think you can completely separate the issue from other gov't actions. When the NSA or other gov't agencies come knocking on the door requiring a backdoor or other compromises to system security, I would argue that those actions broadly discourage private industry from investing in security beyond a certain point.


You may be thinking of the 2nd law of thermodynamics. Possibly.


If I understand correctly, there were no backdoors used here. Only zero-days. If the NSA is guilty of anything, they're guilty of not informing system designers of exploitable vulnerabilities. But then the argument becomes entirely ideological and naive since we all know the NSA's mission is almost entirely counter to that outcome.

Edit: Apparently, not zero days. Vulnerabilities were patched months ago. I think the point still stands, which is that this outcome really has little to do with debate over encryption backdoors.

2nd Edit: On second thought, there is an argument that, if a backdoor were in place that only government agencies had access to, the means to access it could be leaked just as easily and in a similar manner to the way that information about these vulnerabilities was leaked. Then, we'd really be fucked since a backdoor could likely not be "fixed" with a simple patch (it might be fundamental to the design of a system). Considering this, I'll have to walk back my earlier statement and agree that the topic of backdoors is quite relevant here.


> Only zero-days.

The exploits released in Wikileaks' Vault 7 dump went public months ago. Calling them 0-days is like calling JFK's assassination breaking news.


I've seen a lot of security people sticking to the "this is not an 0day, you idiots" retort, downplaying the importance of the leak. Frankly I think that's a pedantic argument that ignores too much of the real world.

The NSA leaks contained previously undisclosed security vulnerabilities that were patched only because they were stolen. In MSFT's case it was less than 30 days, and they basically skipped a patch week to make it happen.

It's manifestly obvious that 0day and 30day can both be considered extremely dangerous in the real world.


The difference is that at least five nation-states could have gotten in during a 30-day window without much trouble.


Small correction: Nearly everything in WikiLeaks Vault 7 material was already patched (With the exception of something Cisco related which has since been patched I believe). The Vault 7 content was from CIA.

This issue is apparently based on a more recent leak by the Shadow Brokers, containing content from NSA and some other DoD elements who worked on offensive cyber operations.


Just because patches are available does not mean that they have been applied. Legacy applications, specialized hardware, vendor shenanigans, and organizational inertia can be significant impediments to keeping operating systems at current patch levels.


No zero days were used. This was patched in March.


Yes, but exploits for these bugs have now been published.


I worry that they might sell it as a reason backdoors are necessary: if only we had backdoors, we could've saved those patients! The flaw of this logic would be lost on most lawmakers.


Humor me... if encryption had a backdoor, then ransomware could be effectively mitigated.... Though I'm not a proponent of backdoors by any means, I don't see the logical flaw here.


...Because criminals are going to use state-sanctioned encryption software with mandated backdoors?

Even if everything off the shelf and open source has some escrow unlocking keys compiled in, hackers are just going to find those code paths and remove them. Encryption works because of certain mathematical principles and laws.

Backdoors will only let governments look at legitimately encrypted data and not anything made by criminals who know how technology works.

There's a bigger question here: what if the NSA or CIA or some other intelligence/defence organisation discovers a way to solve some of these hard problems in polynomial time... and then doesn't release that information, so it can keep using it to spy?

In that situation you're going even further: you have agents who are literally holding back scientific research that could change the entire field of mathematics and human understanding, research that could advance number theory by orders of magnitude (a jump equal to going from the first flight at Kitty Hawk to the Saturn V rocket), for limited political gain.


That makes sense...

So "If encryption had a backdoor" is meaningless. It's really "If a given encryption implementation had a back door" and no one is making the criminals use certain algorithms.

thanks


Well, the bigger problem would be ensuring that the criminals used known-broken encryption. The only advantage is that many of these attacks are copy-cat, so if you released the source code for a broken ransomware implementation, it would probably get used more or less verbatim… as has been shown in the past. (https://threatpost.com/bitcrypt-ransomware-deploying-weak-cr..., https://www.utkusen.com/blog/destroying-the-encryption-of-hi...)

Anyone who actually knows what they are doing, and is prepared to break the law, would just use AES. All of the law-abiding institutions would be forced to use a weak encryption scheme.

Sure, it might help stop script kiddies, but it won't help to stop professionals, and professionals are the ones that you have to worry about, since they end up hosing 45,000+ installations in a day.


If they don't just replace your data outright with noise.


Assuming that the criminal opts to use the encryption with an NSA backdoor and the victim is able to schedule time at their local NSA Genius Bar to recover their data.


> if encryption had a backdoor

This is the flaw in the logic. "Encryption" can't have a backdoor any more than math can have a back door.

Specific types of encryption can. But there's nothing to stop a malicious user from using a non-backdoored encryption algorithm or inventing their own.


Yeah, I don't think ransomware is going to use the US approved algorithm. What they are doing is already illegal.


So developers of ransomware would build backdoors into their ransomware because the law requires them to?


How would you practically do that? Send all those encrypted hard drives to NSA to be decrypted? Publish the backdoor, effectively rendering that encryption scheme broken?


Just ask the NSA to send you the un-encrypted files - they probably have them in their database anyway.


Wouldn't the attackers just use a crypto scheme that didn't have a backdoor?


The logic is that encryption without a backdoor already exists, and no law can stop a criminal writing a virus from using that.


Then encryption wouldn't be doing what it sets out to do.


The logic is sound in theory. But in practice, if the government can't protect its exploits, it most likely can't protect its backdoor keys either.


Why would the people reaping the rewards of ransomware use encryption that has backdoors, if backdoorless encryption already exists?


It's either turtles all the way down (backdoor of the backdoor of the backdoor..) or you always strive for secure software.


Why would ransomers use encryption with a back door? It's not like you can force them to only use the crackable math.


Who has the keys to the backdoor? How do you force the ransomware authors not to use the good encryption?


Only if the bad guys use the NSA-backdoored encryption.


Problem is that the people (politicians) wanting to push it through simply don't care. They just want to have access, and they think there are agencies that can deal with the potential consequences. Frankly, it's all about the money: they want the ability to access sensitive data, and therefore to be more attractive to people willing to pay bribes.


And it follows that anything that can create such harm CAN and eventually will "leak" or fall into the wrong hands.

Maybe one day, as a species, we'll learn not to create these kinds of devices.

(sorry if the message seems too exaggerated)


also that it is very unethical for the US government to find some vulnerability in android/windows/whatever and not report it


Is it particularly unethical? Many governments around the world are discovering 0-days in commonly deployed products and not revealing that to the vendor, but instead using it as a weapon for navigating computer networks.

Revealing the vulnerability would place the US Govt at a distinct disadvantage.


This is an argument that highlights the difference between attack and defence in cyber operations: attack is easier than defence, and is the path most often chosen for exactly that reason.

Your point is actually valid, but that doesn't mean I have any intention of pardoning the NSA for having compromised the network of my university, the same network I used each and every single day during my studies (and no, I am not a terrorist, nor do I know anyone involved in terrorism, child pornography, or whatever else they had in mind).

Sorry to say, but "everyone is doing it" is not an excuse or a reason for doing something.

If, instead of exploiting half the world, they had dedicated their expertise to making their (and everyone else's) infrastructure safer (by sharing security-conscious design concepts and considerations with software developers and hardware manufacturers), we probably would not now have massive botnets, exploitations and leaks (not least the political consequences of perpetrating and sustaining this kind of decision).

Where is the point when maintaining the supremacy of one's country over the others through deceit, intrigue, and espionage costs too much in terms of negative outcomes?

For me, that line (for the US and many others) was crossed a long time ago. But that's just my humble opinion; everyone is free to draw their own conclusions from their own point of view.


The entirety of your argument seems to be "it's not unethical because other countries do it", which is not compelling when you consider the other forms of unethical behavior that use this defense.


Were all of these unreported?


Not only unreported but weaponized by the US Government.


You are right but I'm not pleased that your comment has hijacked all discussion on this article.

(Not that it's your fault, it's somewhat germane to the overall issue of government, I'm just whining)


I agree with this, but there is a good argument to be made that well-engineered backdoors are better than intelligence agencies hoarding undisclosed exploits.


Is there? I can't think of one.


I am in Tanzania (East Africa) and my father's computer is infected.

All he did to get infected was plug his laptop into the network at work (University of Dar Es Salaam).

The laptop is next to me, and my task tonight is to try to remove this thing.


This malware is well written, and uses strong encryption.

I would suggest that you and your father spend the evening reading up on backup practices, and reconsider the value proposition of open source software.

I hope I am not coming off as a smug jerk. My hope is that rather than becoming frustrated and demoralized after an evening of fruitless hacking, you and your uni will recover, and become resilient against future attacks.


He has backups of his data.

I personally use Linux, and my GitHub repo is here[1], where I have a bunch of encryption-related projects (zuluCrypt, SiriKali and lxqt_wallet). The last Windows computer I used was Windows XP.

I don't want to move him to Linux because I am not always around, and he can ask other people for help when he is on Windows.

[1] https://github.com/mhogomchungu


Thank God for backups! And thank you for making sure people make backups.

My mother is in a similar situation. She is an elementary school teacher, and has little time for unrelated endeavors like this. What time she does have, is spent in the garden, as it should be.

Nevertheless, we are now seeing that the time-cost of closed-source software is greater than that of open-source software. My solution has been to prepare a KDE-based distro for her and to work with her, side by side, whenever she needs to learn new tools. It is a good bonding experience when both people can maintain a positive attitude about it.

The solution to the problem of malware is education.


How quickly some forget Heartbleed.

The solution to malware is obscurity. Have an OS that no one wants to break into, and you won't be broken into.


I think you are referring to diversity, not obscurity. Diversity does indeed increase the resilience of the network, but there will always be enough common factors across the board that diversity alone will not suffice.

In the end, the software that we depend on must be reviewable by anyone who is concerned about it. A prerequisite for that is that software should be as small, clean, and simple as possible, to encourage such scrutiny. IIRC, the real problem with Heartbleed is that the OpenSSL codebase was a mess, and no one wanted to work on it.


> The solution to malware is obscurity. Have an OS that no one wants to break into ...

... and you'll have an OS for which neither malware authors nor legitimate software developers want to write applications.

There's a trade-off involved. We could all use pen and paper and be invulnerable to malware, but then how would we post on HN?


That's my point, as I type this on fully patched Win 10 Pro.

Certainly Windows has its issues, but its biggest 'flaw' when it comes to malware isn't that it's closed-source, but that it's ubiquitous and therefore a highly attractive target.


Linux is ubiquitous in the data center. We are not a low-value target. Also, corporations with cloud-based infrastructure are more likely to pay large ransoms for their data, especially if it is the backup/archive system that is attacked.


Data centers are dwarfed in size by the consumer and business markets, while also being much less vulnerable due to their more specialised nature and therefore ease of updating. Case in point: there are plenty of Windows data centres out there, but it's not likely any of them were affected by this incident.


Heartbleed is a very different class of vulnerability -- it allows sensitive information to leak but does not provide root access.


Quick, migrate to TempleOS!


That's rather extreme, I think, but Haiku or Windows 98 with IE4 could possibly be quite safe from viruses nowadays, even if not exactly "secure".


Are macOS or ChromeOS options?

Heck, some of my relatives are good with an iPad for 90% of their online activities.


ChromeOS, with everything in a browser sandbox and always-up-to-date software, sounds pretty solid.


So reinstall the OS from orbit. It's the only way to be sure.


I understand that people love open source, but how is that relevant here? For example, OpenSSL is open source, yet that didn't prevent Heartbleed and other exploits from happening.


OpenSSL was an example of open source done badly; neither of our communities can claim to be universally perfect. The solution was to fork and replace OpenSSL with a superior project: LibreSSL. That part of the story is a success for open source. It shows us recovering quickly and permanently from the worst catastrophe imaginable.


How widely is LibreSSL used, compared to OpenSSL?


Working fine on FreeBSD 10 for me, but it's not default yet as far as I know.

My thoughts on the matter: this is all a pointless waste of time and effort, or, put another way, an arms race of exploits and bugs that will go on and on and produce nothing of value, except justifying military budgets in various governments.

If they truly were doing their jobs and being of benefit, we wouldn't have the corruption we do, the paedo rings, the drug cartels etc.

To be secure, you have to beat the smartest people on the planet, I would have thought, and unless you have a nation's resources, that's tricky. I'm not sure tightening laws is the answer either; it feels like human nature expressed in Internet terms.


There is no way to know for sure, because we have not embedded telemetry / spyware in open source operating systems.

One of the problems here is that large organizations are reluctant to update software across a large population of computers. If those updates were smaller, more transparent, and could be separated based on whether they are a security fix, a new feature, or a new tool that allows a 3rd party to monitor user activity, then sysadmins would be empowered to close security issues quickly while introducing minimal risk.


macOS uses LibreSSL:

  % ssh -V
  OpenSSH_7.4p1, LibreSSL 2.5.0
On Linux, at least Alpine is using LibreSSL.


From what I've seen, it isn't particularly well written. However, you're probably correct about the encryption being strong.

One of the reasons the infection rates are dropping off is that the malware had some kind of poorly implemented sandbox detection, where it would attempt to resolve a non-existent domain. However, the domain has now actually been registered by a researcher, so every new infection thinks it's running in a sandbox.

This is the work of someone who doesn't really know what they're doing, and they probably copied a large chunk of the code from somewhere else.
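
In rough shell terms, the reported check amounts to something like this (a toy sketch with a placeholder domain; the real malware is a Windows binary, not a script):

  # toy sketch of the reported kill-switch/sandbox check (placeholder domain)
  if curl --silent --max-time 5 http://unregistered-domain.example >/dev/null 2>&1
  then
      # the domain answered: only a sandbox faking DNS/HTTP should do that,
      # so exit immediately to frustrate analysis
      exit 0
  fi
  # no answer: assume a real host and carry on with the payload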


There is at least one cryptolocker variant that supports Linux.


Out of curiosity, how does that spread?

I'm intrigued because I've seen people claim that "linux is just as vulnerable as windows to user stupidity," but I have a hard time understanding how. The vast majority of windows infections occur because somebody got tricked into running an executable file.

On every Linux distro I've used, scripts and binaries need the executable bit set or be explicitly run through the desired shell. As far as I know, no browser sets the executable bit on downloads. To run scripts, you need to know what you're doing.

Now,

  curl http://... | sudo sh
is an entirely different problem. As are remote execution vulnerabilities in the kernel. As are adding random package manager repositories found on internet forums. But all seem a bit more technically involved than opening an executable file with a .pdf extension.
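
To make the friction concrete, here's roughly what a Linux victim would have to do with a downloaded file (hypothetical filename):

  $ ./invoice.pdf.sh
  bash: ./invoice.pdf.sh: Permission denied
  $ chmod +x invoice.pdf.sh    # must deliberately mark the file executable
  $ ./invoice.pdf.sh

That explicit chmod step is exactly the friction that double-clicking an attachment on Windows doesn't have.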


Tip: Don't spend time trying to remove malware and undo its effects. You'll never know if you succeeded; most malware is designed to hide itself, and likely this particular malware is well-written.

Wipe the laptop and reinstall. It's more certain, and probably won't take much longer than trying to remove the malware. If the malware infects firmware or other subsystems below the OS, and thus won't be removed by a reinstall, buy a new laptop if that's an option.


Just that? No click somewhere?


If someone connects to a network which has been infected and they've not applied the appropriate patch (MS17-010), it looks like they're in trouble if they're running Windows and don't have a firewall blocking incoming connections.

So the first person in a network has to have fallen for the phishing attack, but once it's in the network it can spread via the ETERNALBLUE exploit.
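
If immediate patching isn't possible, one stopgap (assuming the machine doesn't need to serve files) is to block inbound SMB at the host firewall from an elevated prompt; a sketch, with arbitrary rule names:

  rem block the SMB ports the worm spreads over
  netsh advfirewall firewall add rule name="Block SMB 445" dir=in action=block protocol=TCP localport=445
  netsh advfirewall firewall add rule name="Block SMB 139" dir=in action=block protocol=TCP localport=139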


I can confirm that. Once it's inside a network, it's carnage.


It can copy itself across a network through a vulnerability in SMB, Windows' file-sharing protocol. That's the bug that was disclosed in the NSA leaks. Microsoft released a patch in March, but of course not all computers are patched.


Clicks are for phishing and trojans, i.e. human vulnerabilities. This is due to an operating system bug, which is a technical vulnerability.

If you can get the right network packets to an unpatched machine, you can infect that machine.
