Cyberattacks in 12 Nations Said to Use Leaked N.S.A. Hacking Tool (nytimes.com)
1248 points by ghosh 11 months ago | 478 comments



Edit: Botnet stats and spread (switch to 24H to see full picture): https://intel.malwaretech.com/botnet/wcrypt

Live map: https://intel.malwaretech.com/WannaCrypt.html

Relevant MS security bulletin: https://technet.microsoft.com/en-us/library/security/ms17-01...

Edit: Analysis from Kaspersky Lab: https://securelist.com/blog/incidents/78351/wannacry-ransomw...



This sounds like something straight out of a James Bond movie.


That was a dumb move by the malware coder ;)

Wouldn't you want to hide a kill switch?


The MalwareTech write-up gives a plausible reason for the developer having accidentally added the kill switch: > I believe they were trying to query an intentionally unregistered domain which would appear registered in certain sandbox environments, then once they see the domain responding, they know they're in a sandbox and the malware exits to prevent further analysis.
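
A minimal sketch of that logic in Python, assuming a placeholder domain (the real kill-switch domain is not reproduced here):

    # Sandbox check as described in the MalwareTech write-up (sketch only).
    # The domain below is a hypothetical placeholder, not the real one.
    import urllib.request

    KILL_SWITCH_URL = "http://some-unregistered-domain.invalid/"

    def should_keep_running() -> bool:
        try:
            urllib.request.urlopen(KILL_SWITCH_URL, timeout=5)
        except Exception:
            return True   # no response: looks like a real machine, carry on
        return False      # domain "responded": assume a faking sandbox, exit

Which is presumably why registering the real domain made the check succeed everywhere and acted as a kill switch.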



I don't understand. What exactly do the live map points represent, and where does the data come from?



Wow, that's pretty amazing work!

How is he able to add new supernodes to the cluster? I would expect a supernode to have some sort of credentials that are used for authentication. If not, isn't it possible to neutralize the botnet by overloading it with supernodes that don't send malicious commands?


According to his initial explanation - "In a peer to peer botnet, bots which can receive incoming connections act as servers (called supernodes)."

So in some cases the only requirement for a node to be a supernode is that it can receive incoming connections. I take this to mean that any computer that is (1) infected with the botnet program and (2) able to receive incoming connections becomes a supernode. Under those circumstances there's no need to reverse engineer the botnet program: all you have to do is set up a vulnerable computer, allow it to be compromised so it becomes a supernode, then monitor the incoming connections.

He later mentions that supernodes can be filtered based on "age, online time, latency, or trust." This tells me that certain botnets do have a level of trust that is defined in each peer list.

I believe your last question refers to the concept of sinkholing or blackholing. These methods have been used by the FBI to take down botnets through DNS hijacking, I think.


>To ensure the entire network is discovered, we should start the crawler off with multiple supernode IPs and store all IPs found into a database, then each time we restart the crawler we seed it with the list of IPs found during the previous crawler; repeating this process for a couple of hours ensure all online nodes are found.

This would just discover supernodes though, right? Or does every node at some point broadcast as a supernode?


Yes to your first question, no to your second. He goes on to explain that, "In order to map all workers, we’d need to set up multiple supernodes across the botnet which log incoming connections (obviously every worker doesn’t connect to every supernode at the same time, so it’s important that our supernodes have a stronger presence in the botnet)."

From what I understand, the process is (a rough sketch in code follows the list):

1. Write a program to pretend to be a compromised peer requesting a connection to a Supernode in order to obtain a peer list of other Supernodes.

2. Recursively crawl for existing Supernodes + the list of Supernode IPs. Store all addresses found.

3. Set up one or more Supernodes and 'infiltrate' the peer list of already established Supernodes. Log incoming connections from Workers.
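
A minimal sketch of steps 1-2, where get_peer_list is a hypothetical helper standing in for the reverse-engineered botnet protocol:

    # Breadth-first crawl of the supernode peer lists (sketch only).
    def get_peer_list(ip):
        # Hypothetical: speak the botnet's protocol to `ip` and return
        # the supernode IPs that peer advertises.
        raise NotImplementedError("protocol-specific")

    def crawl(seed_ips):
        known = set(seed_ips)
        frontier = list(seed_ips)
        while frontier:
            ip = frontier.pop()
            try:
                peers = get_peer_list(ip)
            except OSError:
                continue                # node offline; keep what we have
            for peer in peers:
                if peer not in known:   # only visit each supernode once
                    known.add(peer)
                    frontier.append(peer)
        return known                    # store and reuse as the next seed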

http://whatis.techtarget.com/definition/botnet-sinkhole
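
Step 3 is essentially what the sinkhole link describes: run a server that accepts connections and logs who called in, without ever sending commands back. A minimal sketch (the port and names are illustrative only):

    # Logging "supernode" / sinkhole (sketch only).
    import datetime
    import socket

    def run_sinkhole(host="0.0.0.0", port=8000):
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))
        srv.listen(64)
        while True:
            conn, (ip, _) = srv.accept()
            print(datetime.datetime.utcnow().isoformat(), ip)  # log the worker
            conn.close()  # never answer with commands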


That's amazing, thanks for the link.


Are we watching this thing wake up right now?


We are seeing new requests from existing bots, the historical data is not shown on this map.


Gotcha. So yeah, we're seeing it wake up. The first little increase (up to 600) was about the time the article was published.


Where are you seeing this? This isn't historical data.


Here's a page with more info

https://intel.malwaretech.com/botnet/wcrypt


Yeah it scrolls off to the left. So you came an hour after my comment and it was gone. Heck it was almost gone by my second comment.


If so, that is both scary and exciting.


it took me a while to realize this is live....


I am curious. How is this tracked? What signature or what component are they looking for to be able to say "Yeah, here is another one"?

I'm just curious and would like someone with more experience to weigh in.

EDIT: To add on to my question, I wonder why it does not use a terrain / city / province overlay instead of all black? It seems it would be much more useful to the network admins and sysadmins out there, just in case we realized, "Oh, hey, that dot is right on top of where we work. I should probably fire up Wireshark or something and test for infected systems."


Great info. So, for the layman. How vulnerable are users behind a firewall or broadband router?


Pretty safe until a machine in the network gets infected. The first infection comes from a phishing email or similar. From then on, the worm infects other machines connected to the same network, but usually not across the internet.

It uses a vulnerability in a protocol used for network file sharing (SMB), and that's usually blocked at your router.
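
If you want to check for yourself, here's a minimal sketch that probes whether the SMB port (445) is reachable on a host; run it from outside your network against your own public IP to confirm your router blocks it:

    # SMB exposure probe (sketch only); only test hosts you own.
    import socket

    def smb_port_open(host, timeout=3.0):
        try:
            with socket.create_connection((host, 445), timeout=timeout):
                return True   # reachable: worrying if this is your WAN IP
        except OSError:
            return False      # blocked/filtered, as home routers usually do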


What is the significance of the time span indicators? Does the 1M selection indicate how many computers remain infected or how many that were infected within that time span?


> from Kaspersky Lab

... the lab with ties to Russian intelligence, who are suspected of leaking the NSA tools.


Your point?


> "Microsoft rolled out a patch for the vulnerability last March, but hackers took advantage of the fact that vulnerable targets — particularly hospitals — had yet to update their systems."

> "The malware was circulated by email; targets were sent an encrypted, compressed file that, once loaded, allowed the ransomware to infiltrate its targets."

It sounds like the basic (?) security practices recommended by professionals - keep systems up-to-date, pay attention to whether an email is suspicious - would have covered your network. Of course, as @mhogomchunu points out in his comment - is this the sort of thing where only one weak link is needed?

Still. Maybe this will help the proponents of keeping government systems updated? And/or, maybe this will prompt companies like MS to roll out security-only updates, to make it easier for sysadmins to keep their systems up-to-date...?

(presumably, a reason why these systems weren't updated is due to functionality concerns with updates...?)


> It sounds like the basic (?) security practices recommended by professionals - keep systems up-to-date, pay attention to whether an email is suspicious - would have covered your network.

This is secondhand information (so take it for what it's worth; there could be pieces I'm missing), but I talked with a startup that was focusing on this problem, and the issue was not so much the computers and servers that IT was using (although sometimes it was); it was that many medical devices (like CT scanners, pumps, etc.) come shipped with old, outdated versions of operating systems and libraries.

No big deal, right? Just make sure those are up to date too? Well, many times the support contracts for these medical devices are so strict that you can invalidate the warranty by installing third-party software like an antivirus, or even doing something like a Windows update.

Even worse, many hospitals don't even know what devices they have -- it's easy for IT to know about laptops and computers, but when every single medical device more complicated than a stethoscope has a chip in it and may respond to calls on certain ports, it's a much harder picture to assemble.

The startup was https://www.virtalabs.com/ by the way, they really are doing some cool things to help with this.


In defense of these medical devices, that is actually an FDA requirement. The entire combination of the system is certified to work, and even one patch for a security vulnerability leaves open the possibility that the patch breaks something and people die! Of course it goes without saying that you need to ensure that a virus cannot run on the machine by some other means. If these machines can get infected they automatically lose certification and cannot be used for medical purposes.


In offense of these medical devices, they should never have been running Windows or any general purpose OS in the first place! It's a lot easier to guarantee security if the entire thing is a well-tested 10-50 KLOC Rust daemon on top of seL4. I am not even asking them to do formal verification themselves, just a small trusted base and reasonable secure coding practices. I mean, come on, a critical medical device running the entirety of Windows XP (or, say, Ubuntu with Apache, an X server and GNOME[1]) should be considered actual negligence. The FDA should make it outright impossible to certify such a contraption.

Basically, the rule should be: if you are using general purpose consumer software, then you should be doing updates; if you are in an environment where updates are considered too risky, then running commodity software should also be considered too risky and you should be building very small locked down systems instead. Ideally without a direct internet connection (they can always connect through an updatable system that can't actually cause the medical device to malfunction, but can be reasonably protected against outgoing malware as well).

[1] I would be ok with some of these devices running a stripped down Linux (or NT) kernel, just not a full desktop OS. If you need a fancy UI, then that can be in an external (hopefully wired, not IoT) component that can be updated.


The FDA does not forbid the use of a general purpose OS. However, they are strictly regulated. For every SOUP (software of unknown provenance/pedigree), that is, every piece of software that was not developed specifically for a medical device, it is the responsibility of the manufacturer to provide performance requirements, tests, risk analysis...

Moreover, the manufacturer has the obligation to assess every known bug of every SOUP and provide fixes if it can endanger the patient.

The issue is that to prove that a device is safe you have to execute costly tests. For a device I have been working on, we do endurance tests on multiple systems to simulate 10 years of use. Even with intensive scenarios on multiple systems, it can take a few months. And if we encounter a single crash we reset the counter and start again. So in the end the product is safe, but it is costly. This is why most of the time it is actually better to have the simplest stack possible on bare metal. But sometimes mistakes have been made, and you inherit a system full of SOUP, and it is a nightmare to maintain.

I actually expect some shitstorm on Monday morning. Luckily I am working more on the embedded side, so no Windows for me, but some other divisions will be affected.


> In offense of these medical devices, they should never have been running Windows or any general purpose OS in the first place!

Except that people don't want to learn a new GUI for every machine...

Except that people want to be able to use a tablet for the interface...

Except that people want to control things from their phone...

Here's the reality: The end user doesn't give one iota of DAMN about security. People want to control their pacemaker or insulin pump from their PHONE. Ay yai yai.

Even worse: can your security get in the way when someone is trying to save a life? That's going to send you to court.


Most of these don't apply in the context of medical devices. Sure, you can find some which will give you access to the usual OS desktop, but largely they're integrated and have a full-screen, completely customised interface.


The device itself should not run Windows. You should separate the two: one machine for the device, one for the user. The user machine (a full-blown Windows if that's what you want) you can security-update all you want.


Of course, such devices can put their code in ROM, and so any malware would not survive a reboot.


Sure, but then, you also need strict W^X memory protections, without exceptions (kernel included), since malware in memory of a device that doesn't often reboot is dangerous enough. For example, the very best malware for network devices never writes itself to disk even if possible, in order to avoid showing up in forensics. This already precludes most general purpose OSes and is still technically vulnerable to some convoluted return-to-X attacks that just swap data structure pointers around and use existing code to construct the malicious behavior, so I'd still feel better with a minimal trusted base even then.


This, 100x. I know it's extremely easy to Monday-morning-quarterback hospital IT, but it's not as simple as people think. There are legal and, far more importantly, medical implications to updating software at a hospital. Oh, you think it's ridiculous we use IE 7 in compatibility mode? It's because our mission-critical EMR only works in that (well, it really works in everything, but it's certified on 7), and if we use anything but the certified software load when accessing it, the vendor puts all blame on us.


Yes, it actually is.

Life critical systems should be small, fully open stack, fully audited, and mathematically proven to be correct.

Non-critical systems, secondary information reporting, and possibly even remote control interfaces for those systems should follow industry best practices and try to do their best to stay up to date and updated.

Most likely many modern pieces of medical technology have not been designed with this isolation between the core critical components that actually do the job and the commodity junk around them that provide convenience for humans.


Maybe you really think it's just as simple as mandating exactly what you wrote here. But I'd imagine you'd agree that even doing this would have real and significant costs, which means tradeoffs are going to have to be made, e.g. some people won't receive medical care they otherwise would.


> some people won't receive medical care they otherwise would.

which is what happens when your whole computing network is remote-killed


The problem is that the technology stack required by modern equipment is too large to be satisfied by anything but a general-purpose OS. Good luck trying to get a mathematically proven OS.


Pretty sure you can build X-ray/MRI control software in Rust on top of seL4 and do lightweight verification (or, even better, hardware breakers of some sort) around issues like "will output lethal doses of radiation". That is a general purpose enough kernel and a general purpose enough programming language, without having to drag in tens of millions of lines of code intended for personal GUI systems. Then for malware issues you simply don't plug the device directly into the internet, nor allow it to run any new code (e.g. your only +X mounted filesystem is a ROM and memory is strictly W^X).


Rust has a lot of nice safety features, but the compiler hasn't been formally verified at all.


Yeah, I am aware. The problem is that using, say, CompCert might result in less security in practice, since although the compiler transformations are verified, code written in C is usually more prone to security issues. It also puts the burden of proving memory safety on the developer, which is a requirement for proving nearly anything else. I don't know Rust well enough to know if this applies for sure, but I think it is a lot less to ask from the manufacturer that they produce a proof of the form "assuming this language's memory model holds, we have properties X, Y and Z" and then just hope the compiler is sane, versus requiring a more heavy-weight end to end proof. Also, eventually there might be a mode for certified compilation in Rust/Go, at which point you get the best of both worlds.


This is true, but work is in progress, and some parts of the standard library already have been. And some of that work has found bugs too: https://github.com/rust-lang/rust/pull/41624


https://sel4.systems/ is a formally verified microkernel.


Does this distinction between critical and non-critical systems make sense for medical equipment? Displaying the information to humans (doctors and nurses) is probably life-critical. If the display is broken, it's not working.

It's not like medical devices have an entertainment system like cars and airplanes.


The display and the business end of the equipment are critical and should not be network-connected (or even have USB ports, for that matter). The part that uploads to whatever big server should have updates all the time. The critical bit should either be connected to the non-critical bit by a genuinely one-way link (e.g. unidirectional fiber) or should use a very small, very carefully audited stack for communication.

This is all doable, but it adds a bit of BOM cost and changes the development model.


An alternative would be to expose these subsystems on a network and have strict APIs, encryption, and authentication between them. This would allow you to audit/update components individually rather than the whole device. So your display would act as a networked display and only have a very limited set of functions.


Yep. That worked fine for the Iranian uranium centrifuge guys...


Stuxnet jumped the air gap over USB, did it not?


We already have that distinction in our current regulations... some devices need FDA approval, some don't


Which is why bog standard COTS OS shouldn't be used for these types of devices. They should use a proper hardened embedded OS that has some form of mandatory access control / capability isolation system.

The long and short of it: don't use standard desktop Windows (or even standard embedded Windows), Linux, or macOS to run these devices.


It's fine to certify devices for certain software, but a device must either be free to maintain and secure or it's not connected to a network.

If someone has a computer hooked to an MRI machine and to the hospital network, and it runs outdated/insecure software then someone made a mistake somewhere.


> If someone has a computer hooked to an MRI machine and to the hospital network, and it runs outdated/insecure software then someone made a mistake somewhere.

If you want a system to reach 100% it can't rely on not making mistakes. If all operating systems are supposed to be updated, then this has to be enforced as part of the software. The software e.g. shouldn't accept traffic unless it's up to date.


Oh, you think it's ridiculous we use IE 7 in compatibility mode?

It's certainly ridiculous if you don't keep it utterly sandboxed and limited to only required use.

Also ridiculous is anyone falling for - or being allowed to fall for - a mail based phishing attack anywhere in the organisation.


Oh come on. They are doctors and nurses, not programmers.


But isn't it part of their job to care for their equipment?

This is a failure of management to properly train their employees.


Anyone disregarding common sense security advice anywhere in any organisation should leave the premises under escort within ten minutes. Would have upped standards everywhere years ago if implemented.


I'm curious, not trying to be smart: 1. Would running Windows 7 in a VM violate the certified software load? 2. Is new device software being written to run in containers/hypervisor level?

I could understand if 1 would be a violation, but perhaps, after today, the FDA could fast track manufacturer patches to run software loads on VMs?

I don't imagine 2 would solve current infrastructure issues any time soon given the size of investments in current equipment, but could it be a best practice going forward?


Usually you get the system integrated into some panel of the device. It's not the software itself that's certified. It's the device as a whole with everything running on it, hypervisors included.


Many years ago, early days of War on Terror, there were 'cyberstorm' exercises by the TLAs of the U.S. military allegedly on some mythical networks that were not 'the internet'.

In 2006 this involved a nice virus that sent all your photos and emails off to people they were not intended for. There was a psychological aspect to this payload, plus a full-spectrum-dominance aspect; the media were briefed with the cover story, but I don't think any journalists deliberately infected a machine to see for themselves.

At the same time that this was going on there were some computer virus problems in U.K. hospitals, those same Windows XP machines they have today. The Russian stock market was taken down around this time too.

Suspicious, I tried to put two and two together on this, but with the 'fog of war' you can't prove that correlation = causation. The timing was uncanny though: a 'cyberstorm' exercise going on at the same time that the BBC News on TV was showing NHS machines taken out by a virus.

https://www.dhs.gov/cyber-storm-i

http://www.telegraph.co.uk/news/uknews/1510781/Computer-bug-...

So that was in 2006. A decade ago. If you found a hole in a hospital roof a decade ago they would have had ample opportunity to fix it. They had a good warning a decade ago, things could have changed but nothing did.

I had the pleasure of a backroom tour of a police station one night, don't ask why, luckily I was a 'compliant' person, no trouble at all, allowed to go home with no charges or anything at all. An almost pleasant experience of wrongful arrest, but still with the fingerprints taken - I think it is their guest book.

Every PC I saw was prehistoric. The PC for taking mugshots was early-1990s vintage, running Windows 95 or 98. I had it explained to me why things were so decrepit.

Imagine if, during the London riots of 2011, the PCs' PC network had been taken down, with all of that police bureaucracy becoming unworkable?!? I believe that the police computers are effectively in the same position as the NHS, with PCs dedicated to one task, e.g. mugshots, and that a takedown of this infrastructure would just ruin everything for the police. I think that targeting the UK police, getting their computers compromised (with mugshots, fingerprints, whatever), and then asking the police to pay $$$ in bitcoin before they were locked out for good next week would have made me chuckle with schadenfreude.


Just wait until Congress tries to impeach Trump, people are rioting in the street, various factions fighting each other, and then the shit you just described happens. Maybe the Education Secretary's brother has some resources in contingency for such an event.


In a perfect world there would be market pressure on device manufacturers: those device manufacturers who patch devices and ensure the patched versions are recertified, would win out over those who do not, in an environment where the expectation is for all these devices to be networked! But of course this requires a competitive market to exist, AND for recertification and patching to be trivial costs. Since they're not, even if a hospital administrator were to price in the risk of losing certifications on all their devices, it's likely that the risk would end up being less expensive than choosing that (potentially non-existent) security-conscious device manufacturer.

Anyone considering disclosure, responsible or not, should be aware of these types of secondary effects. Had these vulnerabilities hypothetically been discovered by a white hat or found their way to a leak-disseminating organization, the discoverers and gatekeepers should consider that not everything can be patched, and the ethical thing to do here would have been to notify Microsoft and wait for a significant product cycle to release technical details. I somehow doubt the Shadow Brokers had that aim, though. And it's saddening that even in the hypothetical case, many people would choose "yay transparency!" over a thoughtful enumeration of consequences.


Seems like the FDA should certify on software tests and not software versions.


Including virus like attacks and fuzzing.


The computer systems affected by the NHS ransomware incident weren't medical devices; they were patient records servers and emergency receptionist workstations. No excuse for failure to patch.


Don't blame the NHS IT staff. The decision to not pay for XP security updates came from the highest level, the UK Tory government: https://www.benthamsgaze.org/2017/05/13/the-politics-of-the-...


If that's the case that's fine, but then my question is why are these computers networked to any extranet source? It seems a natural conflict: you cannot update the system due to it being so important that it always works, yet we need to attach it to an outside network which allows risk of infection. In my opinion, if the computer HAS to be connected to a network that is accessible from outside, then it MUST be allowed to be updated with latest antivirus/protection updates.


If it is infeasible to keep certain critical, networked devices up to date, then I propose an alternative solution: those devices should only produce output; they should not read anything at all from their external ports. Their only input should be their physical user interface. Would that work for, say, an X-ray machine, or an MRI?

We saw a fictional example of a scheme like this on Battlestar Galactica. Officers phoned and faxed orders around the ship, using simple devices that did not execute software. The CIC had its data punched in by radar operators, instead of networking with shipwide sensors. It was a lot of work, but it did keep working in the face of a sophisticated, combined malware/saboteur attack.


In theory sure that could work. In practice it would raise healthcare costs even further due to the extra manual labor. So that's not going to happen.


Why are those devices being connected to an unsecure network? Surely they should have super limited data exchange features?


As is commonly the case, hardware vendors are more concerned with selling you the hardware and probably spend bottom-dollar for their software developers. I can't say that I've worked in such an environment, but my impression is that management at such companies probably see software dev as a cost-centre rather than something to actually spend money on for quality.


But the hospital management shouldn't be plugging them onto the same network where end-users have access, no?


Surely that's the point of hooking them up to the network, so you can e.g. get the pictures out of your CT scanner on to the doctor's PC?


The doctors' PC can run just fine on an isolated network and doesn't have to be connected to the internet.


No that wouldn't work. Modern healthcare is a team effort, especially for patients with complex conditions. Doctors must be able to collaborate with each other including securely sharing data across the Internet in order to deliver effective patient care. No one is going to give up on that just to prevent a few isolated security incidents.


> securely sharing data

> security incidents


That's the idea behind N3, the NHS's internal network. The idea of a hard shell with a soft centre. With N3 as large as it is, the idea breaks down. Security in depth is required, secure at every level. The hard shell idea is outdated, and N3 is scheduled to be turned off in 2019.


So you propose a separate, isolated network linking all the medical facilities, doctor's offices and private practices nationwide? Even the military doesn't do that for most of their offices.

Also, the doctor's computer pretty much needs to interface with the system(s) that handles patient billing (and thus non-medical companies) and the system(s) that handle patient scheduling, reminders, etc.


> patient billing

Not really an issue in the NHS, apart from the occasional non-resident foreign national.

(The "fundholding" system does mean there's a certain amount of internal billing which the patient is never aware of, but the beating Bevinist heart of the free-at-point-of-use system is still in place)


Free-at-point-of-use processes tend to be ones that require integration with a billing service, namely, to send information about the performed procedures to whatever system is paying for them, no matter if it's some state agency, private insurance, or whatever else - that's what I meant by non-medical companies that would need to be on the network.

A private practice where everything is paid by the patient in full by cash or CC could do without any integration with external systems (just run a standard cash register), but as soon as someone else is paying for it, you generally need to link the doctor's office systems to that in some way.


Until that doctor needs to submit patient info to a study, look up an obscure symptoms, talk with others in the medical community, etc.


It has an ethernet port, someone will plug an ethernet cable into it. The problem is not so much that the users are idiots, the problem is that people get distracted some of the time and make mistakes some of the time.

And yes, surely they should have super limited network features. The important word is "should."


Many of the computerized medical devices are diagnostic, so being able to send digital data to doctors quickly and easily over the internet is a key part of their functionality. Also, the other way around - being able to get patient data to the device without manually re-entering them, which is costly and error-prone and thus dangerous.


In the article, it mentioned that patient records servers and patient inprocessing workstations were affected. No mention was made about medical devices.

Workstations absolutely should be patched with security updates. Running an intranet-wide update server is non-trivial, but is well within the reach of a competent sysadmin. And failing to do it is negligent.


Those are not the systems involved in these attacks; the NHS systems compromised were the workstations used by doctors to access patient records and the Samba servers storing those records.


I supported a "cashless gaming" server for years which had the exact same contract. One Windows Update and I couldn't even get a failed disk replaced.


Well, I suspect that such devices should not be connected. I was on dialysis at a clinic in one of the affected trusts, and boy am I glad that my hemodialysis machine was not connected to the network.


They had better at least honor their warranty and replace all this hosed equipment.


Karen Sandler has an interesting story about medical devices and how they are, literally, putting her life on the line. She's both a lawyer and a hacker, and you should hear the stories she tells about how people distrust her for this and think she's trying to trick them when all she wants to do is learn about the software and hardware that is keeping her alive:

https://www.youtube.com/watch?v=iMKHqO28FcI


Wow, that video deserves more views than it has. Really interesting, and still relevant for a two-year-old talk.


If you run a large installation of computers, taking updates can be a huge risk. Often they can break things, and then you're in the position of being blamed for running an update. Not updating can often lead to much higher stability.

In previous environments I've worked in that were "regulated", any change to the environment, such as a firmware upgrade, triggered an entire re-regulation process (testing, paperwork, etc.).


That's wrong. If you run a large installation of computers, and you do not have a plan and a process for quickly deploying security patches, you should be fired with cause.

In this specific case, there are mitigations available that do not require installation of software, but merely a configuration change. Also in this specific case, the people who run IT at NHS are completely incompetent, and this has been well-documented for several years.
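
For reference, a minimal sketch of that configuration change (disabling SMBv1 on the server side via Microsoft's documented registry value; requires admin rights and a reboot):

    # Disable SMBv1 serving on Windows (sketch only; run as Administrator).
    import winreg

    KEY = r"SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters"

    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY, 0,
                        winreg.KEY_SET_VALUE) as k:
        winreg.SetValueEx(k, "SMB1", 0, winreg.REG_DWORD, 0)  # 0 = disabled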

In the general case, "I have a lot of machines" is an excuse offered by the incapable to avoid being held responsible by the uninformed.


Easy for you to say. I have been unable to do my job for several days because some update broke a service I was using. Sure, the service was badly written, but we didn't know that until the patch was applied.

The phone company used to have (they still might, I'm not in the business anymore) large labs that were small replications of their network. I've been in meetings where the goal was to decide if we should try to get our latest release through their process - if yes and we were successful they would pay big for the latest features, but if yes and it failed [I can't remember, I think we had to pay them for test time, but the contract was complex]. A lot of time was spent looking at every known bug to decide if it was important.


Was the update a Microsoft Security Update?


> Also in this specific case, the people who run IT at NHS are completely incompetent

That's what you get when you defund critical services.


Funding is necessary but not the determining factor. There are just as many incompetent IT admins in well-funded private companies earning top pay. Sensible and aware top management is far more critical.


if your job is to keep a bunch of computers working, keeping the systems running is the goal. Deploying security patches quickly is not always considered a requirement.

Again, the problem is that rolling out patches quickly often leads to unplanned problems that can't be easily detected or rolled back from. That can cause problems worse than leaving security issues unpatched.


If your systems are exposed to the Internet, then deploying security patches quickly is a part of keeping the systems running - as illustrated by this case, where the systems obviously are not running and can't be easily rolled back to a working state.

The business of cybercrime is changing. With the growing popularity of ransomware, we should expect a gradual decrease in the time between a published remote vulnerability and your systems getting attacked. It may be useful to delay patches by a day to see if there aren't any glaring problems encountered by others - but it's not a reason to leave open holes that were patched in March. Frankly, there was no good reason why this attack hadn't happened a month ago; next time the gap may be much smaller.

Yes, there is a chance that installing a security update will break your systems. But there's also a chance that not installing a security update will break your systems, and that chance, frankly, is much higher.

Furthermore, "That can cause problems worse than leaving security issues unpatched" seems trivially untrue. Every horrible thing that might happen because of a patch broken in a weird way may also happen because of an unpatched security issue. Leaving security issues unpatched can take down all your systems and data, plus also expose confidential information. A MS patch, on the other hand, assuming that it's tested in any way whatsoever, won't do that - at most, it will take down some of your systems, which is bad, but not as bad as e.g. Spain's Telefonica is experiencing right now. What patch could have caused them even worse problems?


When you say 'the people who run IT at the NHS', you are aware that, thanks to recent governments' attempts to break up central structures, each hospital trust and each GP surgery is likely to have someone different handling IT - market forces are good, etc.


Do you know this for certain?


Downloading Microsoft security updates is simple and safe.

You just download the monthly rollup: http://www.catalog.update.microsoft.com/search.aspx?q=401221...

Any competent sysadmin will have these available on their internal update server and push updates+restart during off-peak hours.
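
A minimal sketch of checking that the fix actually landed on a box; the KB numbers below are the ones commonly cited for Windows 7 / Server 2008 R2 and differ for other Windows versions:

    # Query installed hotfixes and look for the March 2017 MS17-010 patches.
    import subprocess

    MS17_010_KBS = {"KB4012212", "KB4012215"}  # security-only / monthly rollup

    def ms17_010_installed():
        out = subprocess.check_output(["wmic", "qfe", "get", "HotFixID"],
                                      text=True)
        installed = {line.strip() for line in out.splitlines()}
        return bool(MS17_010_KBS & installed)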

Receptionist computers that can open websites with untrusted JavaScript can't reasonably be held to this certification. Certification isn't what kept the NHS from applying patches.


Some vertical markets use a lot of software that integrates with Microsoft Office applications. The result is that there is a much higher chance of a Microsoft update breaking a critical application. [0] is a recent (September 2015) example of two Microsoft patches that were widely blocked in the legal industry until Microsoft released a follow-up patch. iManage and Workshare, the products mentioned in the blog entry, are considered critical applications in any law firm that uses them. iManage is a widely used document management system (think primitive VCS with Office add-ins). All documents are stored in the DMS, so access to it is critical to the business. Workshare is used for document comparison and metadata scrubbing. Metadata scrubbing is used on all outgoing emails.

[0] http://www.kraftkennedy.com/block-2-microsoft-patches-preven...


I'd like to hear r/sysadmin opinion on that.


Translation: "My feelings make me feel that the statement isn't right. Instead of finding out, I'm just going to say that I wish someone would tell this commenter they're wrong."


Translation: "Microsoft has lied about the content of their updates before."


> If you run a large installation of computers, taking updates can be a huge risk. Often they can break things, and then you're in the position of being blamed for running an update. Not updating can often lead to much higher stability.

There is such a thing as staged rollouts for this exact type of scenario.


Well this justifies MS's decision for forced updates in Win10. Not that I like it, just saying.


So your workstation is next to a bed and is attached to a machine which feeds a drip to keep a little girl alive and it gets your untested patch or whole OS upgrade and the dosage is increased or the driver stops and the patient dies.

Only non-critical machines can just automatically apply software patches from Redmond (or anybody). This is not laziness or incompetence - only a few weeks ago military grade exploits from the USG were leaked onto the internet and are currently being re-purposed for non-spying applications. Does anyone think any organisation is prepared for this? Chinese chatter indicates that ms17-010smb doesn't even fix all cases! Many organisations will have been saved by infra guys making sure ms17-010smb was rushed through and that McAfee sigs were updated 'just because'.

edit: fixed CVE (Eternalblue)


Machines like that, which cannot shoulder the risk of applying updates to a network-connected general purpose OS designed to run third party (potentially malicious) code on a non-deterministic non-realtime system... probably should not be using such a system. Patching is risky, not patching is risky.

They should have formally validated software running on formally validated deterministic realtime hardware, running in non-networked environments (but with telemetry and remote control from networked computers if that's convenient). We just don't bother, because it's cheaper and legal to get away with selling hacky nonsense.


I agree. A mission critical MRI machine should not be running an off the shelf OS (Win, Mac, Linux). If you're paying $5 million for a machine, it better have its own real time operating system that had been independently audited.

Now the machine that you pull up the images on is most likely going to be a general purpose PC/Mac. You still need to patch that. Your IT dept needs to have patch cycles that deploy in sets, so all mission critical equipment can be tested before everything gets patched. It takes diligence, and planning. If you prepare at a very large hospital with two MRI machines, then a bad patch can leave you degraded, but not totally offline.


Custom operating systems would require higher development costs and extremely rare sysadmin skills, which would mean larger hospital budgets, which would mean higher taxes or premiums.

Yeah, not gonna happen.


So your workstation is next to a bed and is attached to a machine which feeds a drip to keep a little girl alive and it gets hit by a worm like this one, stops working and the patient dies.

As long as the chance of cyberattacks is larger than the chance of horrible patches, you simply accept the risk of horrible patches and install them anyway. Or keep the system totally isolated from everything, if it's that critical.


The IV drip machine is not plugged into the CoW (Computer on Wheels). That workstation is running a version of enterprise Windows primarily to allow the medical professional to view and update patient records.

The IV drip machine is plugged into the wall, and is operated by buttons on the front.


In reality, a huge number of modern IV drip machines plug into the wall for power and get their network connectivity via 802.11. This is to allow remote configuration and status monitoring.


Are the IV drip machines running Windows CE or XP embedded? Was there a news report that claimed that IV drip machines were affected by malware?


Your hypothetical situation distracts from the actual issue. The ransomware infected NHS patient records servers and receptionist workstations, according to the article.


> Well this justifies MS's decision for forced updates in Win10. Not that I like it, just saying.

Unfortunately, I think the active hours period cannot be set to more than twelve hours, which is less than the time required for some surgical interventions. I can almost imagine it: OK everyone, ten-minute break while Windows installs its updates, this guy who's been on life support for the last ten hours can wait a little longer.

That's why updates are not forced on business-grade installs, and forcing them would be a very, very stupid decision.

Forced updates make sense for home users, since Microsoft can't depend on someone requiring them to keep their networks secure. For other types of users, second-guessing update policies is always a bad idea.


Windows is not a real time OS. Neither is Linux (except for perhaps a limited number of forks/distros).

If someone is going to die if a computer stops working for any reason at all, it should not be running Windows, or Linux, or macOS. It certainly shouldn't be connected to the internet or to any other network.

When we treat computers as nice-to-have mixed-use machines with all the bells and whistles, we need to treat them like nice-to-haves and not need-to-haves.


Surgeries are scheduled in advance except for the most urgent procedures; most surgeons and surgical nurses don't work on weekends.

Surgeon workstations can absolutely be restarted once per month to install the monthly roll-up.

The article mentions patient records servers and receptionist computers being affected by the ransomware. Not life support equipment.


> Surgeon workstations can absolutely be restarted once per month to install the monthly roll-up.

I was replying to the part about forcing updates. I didn't know about the group policy setting (rightfully pointed out by sp332); without it, you don't wait a month, you wait at most 12 hours :-).


Win10 Pro is more flexible, although you might have to drop down to Group Policy to do it. http://pureinfotech.com/defer-windows-10-upgrades-updates/ At the very least, the workstations can be pointed to internal WSUS servers which control the rollouts. I'm guessing that's how most of the currently-vulnerable computers stayed vulnerable until now.


I stand corrected, I didn't know about that group policy setting.


Keep in mind that on business grade installs the updates are not forced.

And there have already been situations where updates have caused problems. Maybe not as severe as a full-on attack, but enough to potentially disrupt production and thus risk someone's job.


The infection I'm dealing with happened from a fully-patched Win 10 Pro machine running Windows Defender and Outlook 2013 (32). It already had authenticated access to the files on the server it encrypted.

There was a Windows script file on the desktop, something like "UPS tracker.js", but it disappeared before I could grab it and a free space recovery didn't return it. (Possibly due to TRIM, it was on an SSD workstation.)


Not forced if you are using Windows 10 Enterprise

https://docs.microsoft.com/en-us/windows/deployment/update/w...


It justifies security updates for all operating systems. It does not justify the installation of spyware or changes to the user interface.


Sadly this distinction is rarely made and, IMO, intentionally kept ambiguous. Lovejoy's law is used to justify spyware, bloatware, and crapware.


On many OSs, not all patches require reboots; some even apply kernel patches live. :) They could have taken a more user-friendly approach. I understand they were boxed in a little technically, but they built the box.


>It sounds like the basic (?) security practices recommended by professionals

The problem with Windows is that crap cannot be upgraded without stopping your workflow and rebooting.

With Linux distros you can upgrade packages in the background (I think that's because on Linux you can replace a file while it is being executed, while on Windows you can't, but I'm not sure) without even rebooting after the upgrade. You can even patch your kernel without a reboot.

On Windows you have to stare at an upgrade screen for an hour without an opportunity to do something useful, and after that you have to reboot. That sucks.
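
The Linux half of that is the classic unlink-and-replace behavior: deleting an open file only removes its name, so a running process keeps its old inode while a new file takes over the path. A minimal sketch (works on Linux; the unlink raises PermissionError on Windows while the file is open):

    import os

    with open("daemon.bin", "w") as f:      # stand-in for a running binary
        f.write("old version")
        os.unlink("daemon.bin")             # name gone, inode still alive
        with open("daemon.bin", "w") as g:  # same path, brand-new inode
            g.write("new version")
        print(f.tell())                     # old handle still works: 11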


Functionality is always second to security. In this situation you patch all the machines and then you test which still work. If the machines at the hospital don't work because some software was incompatible with the vulnerability fix (which is almost unthinkable in most cases), then those computers are simply unavailable, and surgeries are cancelled or whatever the impact may be.


I'm sure this sort of stuff doesn't help with speedy updates.

https://arstechnica.com/tech-policy/2017/03/public-universit...


I saw this at work from the inside of a big telco that did this. They replaced one guy, who needed only vague instructions to configure complex software, with a team of five who needed detailed step-by-step manuals written out by the vendor, and who still took twice as long and couldn't cope with any hiccups along the way.

I do not believe outsourcing saves money. It only does so either by cutting quality of service, or in cases where the IT department was heavily mismanaged anyway. Bring in capable management and you don't need to outsource.


I've never seen a case where outsourcing of general-purpose IT things saves money over the long-term. It might make the budget look better for a year or two. Which, I think, is the motivation for a lot of the people making decisions to outsource. It is cheaper right now, so who cares about later?

Special-purpose stuff can still be cheaper to outsource, though. If I need something to work next week and it would take my staff a month to get up to speed, I'd spend the money on outsourcing it.


I think this is an excellent example that we can all reference the next time someone says that governments should be allowed to have backdoors to encryption etc.

This shows that no agency is immune from leaks and when these tools fall into the wrong hands the results are truly catastrophic.


> This shows that no agency is immune from leaks

That's been well known for a long time. During the Cold War a lot of Russian weapons were based on US designs. There is a TV series, The Americans, which shows how people are manipulated and secrets stolen. Even atomic bomb secrets were stolen (by Klaus Fuchs and others).

So I guess a lot of people in the military-industrial complex make a lot of money on these exploits, PRISM, and other projects. And they just don't care about society as a whole.


If you explicitly ask someone a question of the form "are there organizations that are immune to leaks?", they're likely to say "no, of course not; humans make errors".

But if you phrase it as something like "Can the government be trusted with backdoors to protect us from terrorists and Chinese hackers?", then suddenly public sentiment changes dramatically.


To quote Göring,

> Göring: Oh, that is all well and good, but, voice or no voice, the people can always be brought to the bidding of the leaders. That is easy. All you have to do is tell them they are being attacked and denounce the pacifists for lack of patriotism and exposing the country to danger. It works the same way in any country.

Patriotism is both a wonderful and terrible thing, and it is made worse by fearing the "other". Any time people create a boogeyman (China, Mexico, Muslims, what have you), be on the lookout for what the true motivations are.


> Patriotism is both a wonderful and terrible thing

I find that hypothesis widely accepted, without much going for it.

Patriotism fuses core values like freedom or solidarity with a flag. That's why it is easier to pervert.

Patriotism tells people that because there are people born within the same borders as you, you should be proud of what they do, and you should help them first.

Patriotism distorts history.

> "Fourteen thousand years ago, Sweden was still covered by a thick ice cap." https://sweden.se/society/history-of-sweden/

Bullshit. Sweden didn't exist 14,000 years ago. All history is taught as if the current countries were an inevitable result thousands of years in the making. World history, human history, gets displaced in order to build a national sentiment.

> "The colonial history of the United States covers the history of European settlements from the start of colonization until their incorporation into the United States of America" https://en.wikipedia.org/wiki/Colonial_history_of_the_United...

Again, we get that feeling of predetermination. As if those people weren't free to choose their future, as if they weren't individuals but just a means to create a country.

Patriotism narrows the mindset of populations. I don't see the usefulness. Anything that people do for patriotism would be better done for freedom, equality, fraternity, etc.

Why is patriotism a wonderful thing? What arguments am I missing?


Patriotism was temporarily necessary while we rapidly increased standard of living for ourselves, and didn't have enough resources to do it globally. In the early 21st century it was still a zero sum game on subdecade timescales.

Now we have more than enough resources to provide basics for all 10 billion of us (and decreasing) so patriotism has largely been confined to friendly rivalry around sports and regional cuisine. It was just a matter of mapping out the world's local customs and needs so the resources could be distributed intelligently.

And even at that, only about 4% of GWP goes to basic food, shelter, health, education, and cultural-ecological preservation these days. Entertainment and luxury goods make up the rest. This was unthinkable in the 2020s, but there was a lot of duplication of effort due to the maintenance of corporate moats in the basic sustenance industry at that time.

Sent from my iPhone 16S


16s? No Neuralink?


Only available on plus models.


It's even stronger than that; it's tribal. It affects political affiliations as well: once you've identified yourself as part of a group, you're more inclined to take on the group's opinions, and you start to feel knee-jerk disgust at the rationales of the opposing side.

Keep the temperature up, and it eventually leads to civil war, just like amped up patriotism / nationalism leads to wars between states.


Patriotism can be a way to align the interests of a group ahead of those of the individuals in the group.

This can be a wonderful and terrible thing.


The scary thing is that it's about the group as opposed to people not in the group. This tribalism is nothing but scary.


It can be, yes.

But patriotism is often seen as what enabled things like the congressional Republicans in the Nixon era authorizing the special investigations which brought him down.

That's only one example - there are plenty of others where an individual puts the interest of the group ahead of themselves. That isn't always a bad thing: the alternative is the tyranny of the strong, where the strongest individual has the most say.


I could also care about justice and people in general regardless of race or nationality or where they live.


Snowden comes to mind.


I have yet to see anything positive come from patriotism. It's a form of outdated tribalism. Even the idea of a nation-state isn't that old - it all started around the Napoleonic Wars.

Patriotism always leads to "us" vs "them", it seems.


Patriotism seems to be a euphemism for nationalism.


It's quite hard to find a good pitch for patriotism. I like my country, but like any relationship, it is conditional on it not being a sociopath. Furthermore, everything I like about my country I can like directly: free speech is laudable in itself. Without free speech, what is the US? Nothing I care for.


Greetings from Germany. Losing WW2 thoroughly destroyed patriotism here. We do fine.


Greetings from Britain. Unfortunately we didn't get that benefit too, as Brexit and the current election demonstrate...


> That's well known for a long time.

But the implications of it are not. Otherwise, no one (including heads of TLAs) could continue to claim that gov't backdoors are a good idea without being widely perceived as an idiot.


We now know that the USSR A-bomb design was a copy of the US's first implosion design, but the USSR H-bomb design was completely new, very different from the US design.


To be completely fair, it's not the NSA's fault that software has faults. It's the software manufacturers'.

The ethical concern here is whether the NSA should have reported the holes to the manufacturers, and its failure to handle its privileged knowledge in a safe manner.


> ... it's not the NSA's fault that software has faults.

But every time they ask for there to be legally mandated backdoors - they need to be reminded of these incidents.

The NSA actively wants there to be "faults" like these. They just only want the "good" guys to have access to them.


I definitely agree with regard to intentionally added exploits ("backdoors"). To me this news highlights the need for fundamentally safe software, just like we have safety laws in the automotive and airline industries.


If the NHS has been significantly crippled by this, and the NSA is partly at fault, could the NHS successfully sue the NSA in the UK?

(edit: my logic and phrasing was really bad)


At least in the US, there is limited ability to sue foreign sovereigns in our courts - not sure if that's the case in the UK too. Beyond that, I doubt this is a rabbit hole any government, much less the UK - which has a fairly imperialistic past - wants to go down. Glass houses and all.


Now that the U.S. has set an alarming precedent that the Kingdom of Saudi Arabia can be sued in U.S. court over terrorist funding, maybe the U.S. government could be sued.

I don't think they'd win; the ransomware authors and operators are the ones who perpetrated the act. The U.S. government probably wouldn't be found negligent since the software was stolen. NHS carries partial liability since it was negligent with its patching, according to industry-wide IT security standards.

Comparing it to firearms, I can be held partially liable for a wrongful death if I leave my Colt 1911 out on my porch; it's different if a burglar stole my gun safe and committed a crime.

(obligatory disclaimer that I am not a lawyer, I just play one on Hacker News)


They've been told for years to get off XP. They weren't paying MS to keep it updated. The exploit was patched months ago. Why were these machines even on the internet?

I'd say the NHS is far more at fault than anyone else here.


That would be a tough argument to make. Similar to how you would have trouble going after a gun manufacturer for murder rather than the attacker.


He is not talking about the actual flaws as being the example as to why we shouldn't give the NSA backdoor access; he is saying that the leaks prove that even the NSA can't keep their stuff secret. If they couldn't keep their hacking tools secret, why should we think they can keep their backdoor access secret?


In case anyone has been living under a rock for the past 3 years:

FBI's (recently fired) James Comey has been asking for an encryption backdoor for the past 3 years:

2014: https://www.fbi.gov/news/speeches/going-dark-are-technology-...

At that time, he said unbreakable encryption should be illegal: http://www.newsweek.com/going-not-so-bright-fbi-director-jam...

2015 (asking for a backdoor): https://www.theguardian.com/technology/2015/jul/08/fbi-chief...

2016 (same): https://arstechnica.com/tech-policy/2016/03/fbi-is-asking-co...

2016 (tried to force apple to create a backdoor for the iphone): https://www.apple.com/customer-letter/

And then here recently, he's upped it to an international agreement to create a backdoor: https://www.techdirt.com/articles/20170327/10121437009/james...

He's not the first, only, or last person to ask for it.



Good time to remind folks that gmail, facebook, whatsapp, amazon etc aren't going to be able to protect their data forever at the levels they are currently capable of.

A couple of bad business decisions and they are where yahoo is today. So be smart about how you use these services and educate the non-technical folks around you.


What would 'being smart' about using these services mean? It is pretty difficult to get through life in the modern age without using email for sensitive documents (or at least without using ACCESS to your email as a way to gain access to sensitive services, eg password reset emails, proof of ownership, etc)

Since email in the modern world has this type of importance, what should I do? If you say gmail can't protect their data forever, do I not use gmail for email? What do I use then? No service will be free from data leakage, even an email server I run myself.


Did I say stop using them?

Distribute risk. Use multiple accounts. Don't handle all work/financial stuff on a single account. Keep work and personal accounts separate. Reduce the number of hours you spend online being a data milch cow for these corps. This automatically reduces dependence. Don't allow messenger chat transcript backups to happen by just uninstalling the app every other night. Don't restore any saved transcripts on disk on reinstall.

I could go on and on, but the basic rule is: use your imagination. Don't use these tools the way they want you to use them. Use them as you would a tool in a workshed, as an aid, not as a drug you are dependent on.


Just make sure whatever email provider you use offers IMAP and use a client like Thunderbird to keep a local copy in sync. Back that up somewhere safe and you're fine. If you need good, fast search, use something like X1.
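If you'd rather script it than run a full client, the same local-copy idea is a few lines of Python with the standard imaplib module. A minimal sketch, assuming an IMAP-over-SSL provider; the host, credentials, and output directory are placeholders, and a real tool would key messages on UIDs rather than sequence numbers:

  import imaplib
  import os

  # Placeholders -- substitute your own provider and account.
  HOST, USER, PASSWORD = "imap.example.com", "user@example.com", "app-password"
  OUT_DIR = "mail-backup"

  os.makedirs(OUT_DIR, exist_ok=True)
  conn = imaplib.IMAP4_SSL(HOST)
  conn.login(USER, PASSWORD)
  conn.select("INBOX", readonly=True)      # readonly: don't alter flags

  _, data = conn.search(None, "ALL")       # every message in the mailbox
  for num in data[0].split():
      _, msg_data = conn.fetch(num, "(RFC822)")
      raw = msg_data[0][1]                 # the full raw RFC 822 message
      with open(os.path.join(OUT_DIR, num.decode() + ".eml"), "wb") as f:
          f.write(raw)

  conn.logout()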


This was something I thought POP did better since it requires maintaining one's own copies after downloading. But it was much less convenient as people used more devices.

Sad that managing our own multi device services is so time consuming.


That will protect you from data loss, but not data theft.


Data theft is a separate issue. Whether you're using gmail, your own mail server, or an account with your ISP: if your machine is compromised, all bets are off (including all your other files, not just email). At least with a backup you won't lose your data as a result of the theft.


I would say that it's probably smart to occasionally purge all your content from online services and keep your data in cold storage you physically control.


There is quite a large cost to that, though. Being able to search through old emails is a lifesaver. I can't count how many times I have searched through email to find some account info I set up years ago, or to get date information about when something happened. Just today, I searched my email for my old FastTrak account info, and found it on an email from 5 years ago.

Deleting all my email would be a big cost to pay for a gain that I can't exactly quantify; I would have to figure out the likelihood of my data being leaked over time and the cost to me if the data was leaked. It isn't readily obvious what the risk factor is for me, but I KNOW the cost factor.


You can download/store your email to a medium that you control, like a portable hard drive. Storing email online invites theft and can provide hackers with messages that can be mined for personal info.


I agree about this ethical concern, but this attack also shows that reporting the holes to manufacturers is of limited use -- these exploits have been known to manufacturers since at least March, and while patches have shipped, the computers remain vulnerable. Clearly, automatic security updates are still not aggressive enough to prevent these kinds of problems. Though it isn't clear from the article how out-of-date the vulnerable systems are, which would help in planning for the future. For example, Windows 10 pushes security updates very aggressively, and I wonder how many of the infected computers were running Windows 10 -- health care providers' computer systems are often notoriously out-of-date.


No-one running a large organisation's IT systems is going to be letting individual machines just install whatever updates the software maker feels like pushing, even on Windows 10. That would be a big risk in itself: plenty of software makers, including Microsoft, have pushed horrible breaking changes in updates in the past.

Personally, where I would point the finger squarely at Microsoft is in its recent attempts to conflate security and non-security updates. Plenty of people, including organisations who are well aware of what they're doing technically, have scaled down or outright stopped Windows updates since the GWX fiasco and other breaking changes over the past few years.

This also leads to silliness like the security-only monthly rollups for Windows 7 not being available via Windows Update itself for those who do update their own systems (not that this matters much if Windows Update was itself broken on your system by the previous updates and now runs too slowly to be of any use). Instead, if you don't want whatever other junk Microsoft feel like pushing this month, you have to manually download and install the update from Microsoft's catalog site. Even then, things like HTTPS and support for non-IE browsers took an eternity to arrive, and whether the article for the relevant KB on Microsoft's support site includes things like checksums to verify the files downloaded were unmodified seems to be entirely random.

I get that Microsoft would like everyone to use Windows 10, but for some of us that isn't an option or simply isn't desirable. We bought Windows 7 with Microsoft's assurance that it would be supported with security patches until 2020, so this sort of messing around is amateur hour, and they really should be called out on it a lot more strongly than they have been.
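On the checksum point: when a KB article does publish a digest, verifying a manually downloaded update takes seconds, which makes the inconsistency all the more inexcusable. A minimal sketch; the file name and digest here are hypothetical stand-ins for the values a KB article would list:

  import hashlib

  # Both values are hypothetical stand-ins for the KB article's.
  expected = "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"
  msu_file = "windows6.1-security-rollup.msu"

  h = hashlib.sha256()
  with open(msu_file, "rb") as f:
      for chunk in iter(lambda: f.read(1 << 20), b""):  # 1 MiB chunks
          h.update(chunk)

  print("OK" if h.hexdigest() == expected else "MISMATCH")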


I would be curious about this too. I'd assume many of them would be running Windows 7, maybe? (Let's hope it's not XP).

Also, does Windows 10 Pro attached to a domain controller still have the same aggressive updates? Or do domain admins dictate that policy?

At one company I worked at, everyone in IT could volunteer for the patch group to get security patches a few days before the rest of the machines. That seems to work pretty well. Is there any evidence there might have been a 0 day involved that wasn't patched? I find it disheartening that so many machines in large managed networks like telecos and hospitals could be so far behind on patches! (3 months is A LOT in Internet time).

If people are just doing really basic stuff like order entry for doctors/nurses, we really need to get away from the full PC model. Seems like most of these machines should just be Chromebooks, Linux boxes that boot straight to a browser or something of that nature instead of a full PC/Macs. Lower the attack surface with something that's easy to update. Those machines would be lower cost too and easier to manage/patch -- moving back to the terminal/thin-client model.


> Let's hope it's not XP

BMJ released a report[0] just two days ago alleging that up to 90% of the NHS's computers are still running XP.

> Many hospitals use proprietary software that runs on ancient operating systems. Barts Health NHS Trust’s computers attacked by ransomware in January ran Windows XP. Released in 2001, it is now obsolete, yet 90% of NHS trusts run this version of Windows.

[0] http://www.bmj.com/content/357/bmj.j2214


It appears that Theresa May is trying to deflect attention from the massive under-investment in NHS IT infrastructure by reinforcing that it is an 'international attack on a number of countries and organisations'.

Whilst this is true, it's probably also true that the impact of this attack is highly concentrated across organisations with chronic under-investment and a laissez-faire attitude to security.


>Whilst this is true, it's probably also true that the impact of this attack is highly concentrated across organisations with chronic under-investment and a laissez-faire attitude to security.

Good developers are rare enough, but good IT security and security-minded developers are even more rare. And it's even more rare that they decide to work within healthcare.

There just aren't enough of you to go around, and you can't be everywhere.

Even if you can afford to have a dedicated pentesting team (I'd like to work at a healthcare system/hospital network that did), physical security is still a major problem if only because it's very easy to impersonate people.


In fairness, massive over-investment in NHS IT infrastructure hasn't gone so well either:

https://amp.theguardian.com/society/2013/sep/18/nhs-records-...


Fair point. A good example of a death march project!


https://m.theregister.co.uk/2012/01/12/drone_consoles_linux_...

Military drones were using XP until they just had too much spyware on the machines to operate the drones.


It makes no difference whether they created the security holes via moles inside the developers' companies or whether they simply withheld the information. Either way, they put human lives at risk.


> To be completely fair, it's not the NSA's fault that software has faults. Its the software manufacturers'.

While this is true, it doesn't address the point that you were responding to:

> this is an excellent example that we can all reference the next time someone says that governments should be allowed to have backdoors to encryption etc

...where "should be allowed to have" is interpreted as "should be given by software manufacturers".


>To be completely fair, it's not the NSA's fault that software has faults. Its the software manufacturers'.

The NSA has a specific mission to secure the nation's infrastructure. In withholding key information from US companies, it's failing that mission.


That's half the NSA's mission. It has another half, and that is eavesdropping and getting into things. Those two missions are at odds with each other, so the NSA has to make decisions about trade-offs. As these incidents show, the trade-offs the NSA has chosen have turned out to be bad ideas.


This is why the NCSC was split from GCHQ in the UK. https://en.wikipedia.org/wiki/National_Cyber_Security_Centre...


Ok, so show us how you write perfectly secure code. It sure as hell is the NSA's fault here for mishandling all their hacks into commercial software.


I don't think you can completely separate this issue from other gov't actions. When the NSA or other gov't agencies come knocking on the door requiring a backdoor or other system security compromises, I would argue that those actions broadly discourage private industry from investing in security beyond a certain point.


You may be thinking of the 2nd law of thermodynamics. Possibly.


If I understand correctly, there were no backdoors used here. Only zero-days. If the NSA is guilty of anything, they're guilty of not informing system designers of exploitable vulnerabilities. But then the argument becomes entirely ideological and naive since we all know the NSA's mission is almost entirely counter to that outcome.

Edit: Apparently, not zero days. Vulnerabilities were patched months ago. I think the point still stands, which is that this outcome really has little to do with debate over encryption backdoors.

2nd Edit: On second thought, there is an argument that, if a backdoor were in place that only government agencies had access to, the means to access it could be leaked just as easily and in a similar manner to the way that information about these vulnerabilities was leaked. Then, we'd really be fucked since a backdoor could likely not be "fixed" with a simple patch (it might be fundamental to the design of a system). Considering this, I'll have to walk back my earlier statement and agree that the topic of backdoors is quite relevant here.


> Only zero-days.

The exploits released by Wikileaks' Vault 7 dump went public months ago. They're as much a 0-day as JFK's assassination is breaking news.


I've seen a lot of security people sticking to "this is not an 0day you idiots" retort, downplaying the importance of the leak. Frankly I think that's a pedantic argument that ignores too much of the real world.

The NSA leaks contained previously undisclosed security vulnerabilities that were patched only because they were stolen. In MSFT's case it was less than 30 days, and they basically skipped a patch week to make it happen.

It's manifestly obvious that 0day and 30day can both be considered extremely dangerous in the real world.


The difference is that at least five nation-states could have gotten in within a 30-day window without much trouble.


Small correction: Nearly everything in WikiLeaks Vault 7 material was already patched (With the exception of something Cisco related which has since been patched I believe). The Vault 7 content was from CIA.

This issue is apparently based on a more recent leak by the Shadow Brokers, containing content from NSA and some other DoD elements who worked on offensive cyber operations.


Just because patches are available does not mean that they have been applied. Legacy applications, specialized hardware, vendor shenanigans, and organizational inertia can be significant impediments to keeping operating systems at current patch levels.


No zero days were used. This was patched in March.


Yes, but exploits for these bugs have now been published.


I worry that they might sell it as a reason backdoors are necessary: if only we had backdoors, we could've saved those patients! The flaw of this logic would be lost on most lawmakers.


Humor me... if encryption had a backdoor, then ransomware could be effectively mitigated.... Though I'm not a proponent of backdoors by any means, I don't see the logical flaw here.


...Because criminals are going to use state-sanctioned encryption software with mandated backdoors?

Even if everything off the shelf and open source has some built-in escrow unlocking keys compiled in, hackers are just going to find those code paths and remove them. Encryption works because of certain mathematical principles and laws.

Backdoors will only let governments look at legitimately encrypted data and not anything made by criminals who know how technology works.

There's a bigger question here: what if the NSA or CIA or some other intelligence/defence organisation discovers a solution to solve some of these hard problems in polynomial time .. and then doesn't release that information so they can use it to spy.

In that situation you're going even further: you have agents who are literally holding back scientific research that could change the entire field of mathematics and human understanding, research that could advance number theory by orders of magnitude (a jump equal to going from the first flight at Kitty Hawk to the Saturn V rocket), for limited political gain.


That makes sense...

So "If encryption had a backdoor" is meaningless. It's really "If a given encryption implementation had a back door" and no one is making the criminals use certain algorithms.

thanks


Well, the bigger problem would be ensuring that the criminals used known broken encryption. The only advantage is that many of these attacks are copy-cat, so if you released the source code for a broken ransomware implementation, it will probably get used more or less verbatim… as has been shown in the past. (https://threatpost.com/bitcrypt-ransomware-deploying-weak-cr..., https://www.utkusen.com/blog/destroying-the-encryption-of-hi...)

Anyone who actually knows what they are doing, and is prepared to break the law, would just use AES. All of those law-abiding institutions would be forced to use a weak encryption scheme.

Sure, it might help stop script kiddies, but it won't help to stop professionals, and professionals are the ones that you have to worry about, since they end up hosing 45,000+ installations in a day.


If they don't just replace your data outright with noise.


Assuming that the criminal opts to use the encryption with an NSA backdoor and the victim is able to schedule time at their local NSA Genius Bar to recover their data.


> if encryption had a backdoor

This is the flaw in the logic. "Encryption" can't have a backdoor any more than math can have a back door.

Specific types of encryption can. But there's nothing to stop a malicious user from using a non-backdoored encryption algorithm or inventing their own.


Yeah, I don't think ransomware is going to use the US approved algorithm. What they are doing is already illegal.


So developers of ransomware would build backdoors into their ransomware because the law requires them to?


How would you practically do that? Send all those encrypted hard drives to NSA to be decrypted? Publish the backdoor, effectively rendering that encryption scheme broken?


Just ask the NSA to send you the un-encrypted files - they probably have them in their database anyway.


Wouldn't the attackers just use a crypto scheme that didn't have a backdoor?


The logic is that encryption without a backdoor already exists, and no law can stop a criminal writing a virus from using that.


Then encryption wouldn't be doing what it's set out to do.


The logic is sound in theory. But in practice, if the government can't protect its exploits, it most likely can't protect its keys to the backdoor either.


Why would the people reaping the rewards of ransomware use encryption that has backdoors if backdoorless encryption already exists?


It's either turtles all the way down (backdoor of the backdoor of the backdoor..) or you always strive for secure software.


Why would ransomers use encryption with a back door? It's not like you can force them to only use the crackable math.


Who has the keys to the backdoor? How do you force the ransomware authors not to use the good encryption?


Only if the bad guys use the NSA-backdoored encryption.


Problem is that the people (politicians) wanting to push it through simply don't care. They just want to have access, and they think there are agencies that can deal with the potential consequences. It is frankly all about the money: they want the ability to access sensitive data and therefore be more attractive to people willing to pay the bribe.


And it follows that anything that can create such harm CAN and eventually will "leak" or fall into the wrong hands.

Maybe one day, as a species, we'll learn not to create these kinds of devices.

(sorry if the message seems too exaggerated)


also that it is very unethical for the US government to find some vulnerability in android/windows/whatever and not report it


Is it particularly unethical? Many governments around the world are discovering 0-days in commonly deployed products and not revealing that to the vendor, but instead using it as a weapon for navigating computer networks.

Revealing the vulnerability would place the US Govt at a distinct disadvantage.


This is an argument that highlights the difference between attack and defence in cyber: attack is easier than defence, and is the path most often chosen for exactly that reason.

Your point is actually valid, but that doesn't mean I have any intention of pardoning the NSA for having compromised the network of my university, the same network I used each and every single day during my studies (and no, I am not a terrorist, nor do I know anyone involved in terrorism, child pornography, or whatever else they had in mind).

Sorry to say, but "everyone is doing it" is not an excuse or a reason for doing something.

If, instead of exploiting half of the world, they had dedicated their expertise to making their (and everyone else's) infrastructure safer (by sharing security-conscious design concepts and considerations with software developers and hardware manufacturers), we probably would not now have massive botnets, exploitations and leaks (last but not least the political consequences of perpetrating and sustaining this kind of decision).

Where is the point when maintaining the supremacy of one's country over the others through deceit, intrigue, and espionage costs too much in terms of negative outcomes?

For me, that line was crossed a long time ago, by the US and many others. But that's just my humble opinion. Everyone is free to draw conclusions from their own point of view.


the entirety of your argument seems to be "it's not unethical because other countries do it", which is not compelling when you consider other forms of unethical behavior using this defense.


Were all of these unreported?


Not only unreported but weaponized by the US Government.


You are right but I'm not pleased that your comment has hijacked all discussion on this article.

(Not that it's your fault, it's somewhat germane to the overall issue of government, I'm just whining)


I agree with this but there is a good argument to be made that well engineered backdoors are better than intelligence agencies hoarding undisclosed exploits.


Is there? I can't think of one.


I am in Tanzania (East Africa) and my father's computer is infected.

All he did to get infected was plug his laptop into the network at work (University of Dar Es Salaam).

The laptop is next to me, and my task tonight is to try to remove this thing.


This malware is well written, and uses strong encryption.

I would suggest that you and your father spend the evening reading up on backup practices, and reconsider the value proposition of open source software.

I hope I am not coming off as a smug jerk. My hope is that rather than becoming frustrated and demoralized after an evening of fruitless hacking, you and your uni will recover, and become resilient against future attacks.


He has backups of his data.

I personally use Linux, and my GitHub repo is here[1], where I have a bunch of encryption-related projects (zuluCrypt, SiriKali and lxqt_wallet). The last Windows computer I used ran Windows XP.

I don't want to move him to Linux because I am not always around, and he can ask other people for help when he is on Windows.

[1] https://github.com/mhogomchungu


Thank God for backups! And thank you for making sure people make backups.

My mother is in a similar situation. She is an elementary school teacher and has little time for unrelated endeavors like this. What time she does have is spent in the garden, as it should be.

Nevertheless, we are now seeing that the time-cost of closed-source software is greater than that of open-source software. My solution has been to prepare a KDE-based distro for her, and to work with her, side by side, whenever she needs to learn new tools. It is a good bonding experience, when both people can maintain a positive attitude about it.

The solution to the problem of malware, is education.


How quickly some forget Heartbleed.

The solution to malware is obscurity. Have an OS that no one wants to break into, and you won't be broken into.


I think you are referring to diversity, not obscurity. Diversity does indeed increase the resilience of the network, but there will always be enough common factors across the board, that diversity alone will not suffice.

In the end, the software that we depend on must be reviewable by anyone who is concerned about it. A prerequisite for that is that software should be as small, clean, and simple as possible, to encourage such scrutiny. IIRC, the real problem with Heartbleed is that the OpenSSL codebase was a mess, and no-one wanted to work on it.


> The solution to malware is obscurity. Have an OS that no one wants to break into ...

... and you'll have an OS for which neither malware authors nor legitimate software developers want to write applications.

There's a trade-off involved. We could all use pen and paper and be invulnerable to malware, but then how would we post on HN?


That's my point, as I type this on fully patched Win 10 Pro.

Certainly Windows has its issues, but its biggest 'flaw' when it comes to malware isn't that it's closed-source, but that it's ubiquitous and therefore a highly attractive target.


Linux is ubiquitous in the data center. We are not a low-value target. Also, corporations with cloud-based infrastructure are more likely to pay large ransoms for their data, especially if it is the backup/archive system that is attacked.


Data centers are dwarfed in size by the consumer and business markets, while also being much less vulnerable due to their more specialised nature and therefore ease of updating. Case in point: there are plenty of Windows data centres out there, but it's not likely any of them were affected by this incident.


Heartbleed is a very different class of vulnerability -- it allows sensitive information to leak but does not provide root access.


Quick, migrate to TempleOS!


That's rather extreme, I think, but Haiku or Windows 98 with IE4 could possibly be quite safe from viruses nowadays, even if not exactly "secure".


Are macOS or ChromeOS options?

Heck, some of my relatives are good with an iPad for 90% of their online activities.


ChromeOS, with everything in a browser sandbox and always up-to-date software, sounds pretty solid.


So reinstall the OS from orbit. It's the only way to be sure.


I understand that people love open source, but how is that relevant here? For example OpenSSL is open source, yet it didn't prevent Heartbleed and other exploits from happening?


OpenSSL was an example of open source done badly; neither of our communities can claim to be universally perfect. The solution was to fork and replace OpenSSL with a superior project: LibreSSL. That part of the story is a success for open source. It shows us recovering quickly and permanently from the worst catastrophe imaginable.


How widely is LibreSSL used, compared to OpenSSL?


Working fine on FreeBSD 10 for me, but it's not default yet as far as I know.

My thoughts on the matter are that this is all a pointless waste of time and effort; in other words, an arms race of exploits/bugs that will go on and on and produce nothing of value, except justifying military budgets in various governments.

If they truly were doing their jobs and being of benefit, we wouldn't have the corruption we do, the paedo rings, the drug cartels etc.

To be secure, you have to beat the smartest people on the planet, I would have thought, and unless you have a nation's resources, that's tricky. I'm not sure tightening laws is the answer either; it feels like human nature expressed in Internet terms.


There is no way to know for sure, because we have not embedded telemetry / spyware in open source operating systems.

One of the problems here is that large organizations are reluctant to update software across a large population of computers. If those updates were smaller, more transparent, and could be separated based on whether they are a security fix, a new feature, or a new tool that allows a 3rd party to monitor user activity, then sysadmins would be empowered to close security issues quickly while introducing minimal risk.


macOS uses LibreSSL:

  % ssh -V
  OpenSSH_7.4p1, LibreSSL 2.5.0
On Linux, at least Alpine is using LibreSSL.


From what I've seen, it isn't particularly well written. However, you're probably correct about the encryption being strong.

One of the reasons the infection rates are dropping off is that the malware had some kind of poorly implemented sandbox detection, where it would attempt to resolve a non-existent domain. However, that domain has now actually been registered by a researcher, so every new infection thinks it's running in a sandbox.

This is the work of someone who doesn't really know what they're doing, and they probably copied a large chunk of the code from somewhere else.
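For what it's worth, the kill-switch/sandbox logic being described is only a few lines when sketched out. Something like the following, with a stand-in URL rather than the actual hard-coded domain (and note the real sample is native Windows code, not Python):

  import sys
  import urllib.request

  # Stand-in for the malware's hard-coded, unregistered domain.
  KILL_SWITCH_URL = "http://some-unregistered-domain.example/"

  try:
      urllib.request.urlopen(KILL_SWITCH_URL, timeout=5)
      # The request succeeded. Sandboxes often answer for *any* domain,
      # so the sample concludes it's being analyzed and exits.
      sys.exit(0)
  except OSError:
      # No DNS answer / no response: assume a real network, keep running.
      pass

Which is exactly why registering the domain worked as a kill switch: suddenly the request succeeds everywhere, and every new infection takes the "I'm in a sandbox" branch.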


There is at least one cryptolocker variant that supports Linux.


Out of curiosity, how does that spread?

I'm intrigued because I've seen people claim that "linux is just as vulnerable as windows to user stupidity," but I have a hard time understanding how. The vast majority of windows infections occur because somebody got tricked into running an executable file.

On every Linux distro I've used, scripts and binaries need the executable bit set, or need to be explicitly run through the desired shell. As far as I know, no browser sets the executable bit on downloads. To run scripts, you need to know what you're doing.

Now,

  curl http://... | sudo sh
is an entirely different problem. As are remote execution vulnerabilities in the kernel. As are adding random package manager repositories found on internet forums. But all seem a bit more technically involved than opening an executable file with a .pdf extension.
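To make the executable-bit point concrete, a quick sketch (the file name is hypothetical): a freshly saved download comes back without the execute bit, so nothing runs until the user explicitly opts in with chmod.

  import os
  import stat

  path = "invoice.pdf.sh"  # hypothetical file saved by a browser

  mode = os.stat(path).st_mode
  print(bool(mode & stat.S_IXUSR))  # False: downloads aren't executable
  print(os.access(path, os.X_OK))   # same answer, asked of the kernel

  # Only an explicit `chmod +x invoice.pdf.sh` (or equivalent) changes that.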


Tip: Don't spend time trying to remove malware and undo its effects. You'll never know if you succeeded; most malware is designed to hide itself, and likely this particular malware is well-written.

Wipe the laptop and reinstall. It's more certain, and probably won't take much longer than trying to remove the malware. If the malware infects firmware or other subsystems below the OS, and thus won't be removed by a reinstall, buy a new laptop if that's an option.


Just that? No click somewhere?


If someone connects to a network which has been infected and they've not applied the appropriate patch (MS17-010), it looks like they're in trouble if they're running Windows and don't have a firewall blocking incoming connections.

So the first person in a network has to have fallen for the phishing attack, but once it's in the network it can spread via the ETERNALBLUE exploit.
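If you're wondering how exposed your own network is, a crude audit is just checking which hosts accept inbound connections on the SMB port that ETERNALBLUE targets. A sketch, with an assumed subnet, and obviously only for networks you administer:

  import socket

  SMB_PORT = 445
  SUBNET = "192.168.1."  # assumption: substitute your own subnet

  for host in range(1, 255):
      addr = SUBNET + str(host)
      with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
          s.settimeout(0.3)
          if s.connect_ex((addr, SMB_PORT)) == 0:
              # Reachable over SMB: needs MS17-010, or the port blocked.
              print(addr, "has 445/tcp open")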


I confirm that. Once inside a network, it's carnage.


It can copy itself across a network through a vulnerability in SMB, Windows' file-sharing protocol. That's the bug that was disclosed in the NSA leaks. Microsoft released a patch in March, but of course not all computers are patched.


Clicks are for phishing and trojans, i.e. human vulnerabilities. This is due to an operating system bug, which is a technical vulnerability.

If you can get the right network packets to an unpatched machine, you can infect that machine.


One of the big problems here will be for any country which makes a lot of use of older computers using Windows XP as there is no patch for this vulnerability on that OS version.

How many systems that is is debatable, but by at least one benchmark (https://www.netmarketshare.com/operating-system-market-share...) we're looking at 7% of the desktop PC market that could be exposed with no patch available.



Going through their wallets, it looks like they've gotten 32 payouts, some for more than 300 USD. Are there any addresses they are using outside of the four listed in the article?

It'd be an interesting project to try and track where these funds go and where they came from.

https://blockchain.info/address/13AM4VW2dhxYgXeQepoHkHSQuy6N... - 11
https://blockchain.info/address/115p7UMMngoj1pMvkpHijcRdfJNX... - 4
https://blockchain.info/address/12t9YDPgwueZ9NyMgw519p7AA8is... - 6
https://blockchain.info/address/1QAc9S5EmycqjzzWDc1yiWzr9jJL... - 11
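Tallying the take is easy enough to automate; blockchain.info exposes a per-address JSON endpoint. A sketch (the address list is left truncated to match the links above; paste in the full strings):

  import json
  import urllib.request

  # Fill in the four full addresses from the Kaspersky write-up.
  ADDRESSES = ["13AM4VW2...", "115p7UMM...", "12t9YDPg...", "1QAc9S5E..."]

  total = 0
  for addr in ADDRESSES:
      url = "https://blockchain.info/rawaddr/" + addr
      with urllib.request.urlopen(url) as resp:
          info = json.load(resp)
      print(addr, info["n_tx"], "txs,", info["total_received"] / 1e8, "BTC")
      total += info["total_received"]  # value reported in satoshi

  print("Total received:", total / 1e8, "BTC")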


They'll probably be tumbled (i.e. Bitcoin laundering), meaning that we'll get no info from the transactions at all.


I haven't looked into tumbling recently; what's the volume like these days? So far the attack has yielded less than 5 BTC, and I'd guess that amount can be laundered safely. What's the current limit?


You don't have to worry about tumbling anymore.

You can use XMR.to, Shapeshift.io, or Changelly.com over Tor to move funds directly into another blockchain currency. So have fun following things around the Bitcoin blockchain like some high-tech sleuth, but that's a wild goose chase.

I buy all my cryptocurrencies through those kinds of services nowadays, because there's no risk or temptation to keep coins on custodial exchanges instead of in a private wallet. There are also no worries about withdrawal limits (although Shapeshift has fairly low per-transaction limits; just make an additional transaction).

For unlinking the transaction, the only currency you want to cross-chain into is Monero. With its Ring Signatures and Stealth Addresses it is a private blockchain by default (in comparison with some other cryptocurrencies that have a secondary optional privacy feature like Zcash/Shadowcash/Dash).

I'm actually surprised that the ransomware isn't taking Monero directly yet as some exchanges have direct Monero/USD markets already.


Thanks for the info, I finally took the time to read the Monero whitepaper, cool stuff!


> I'm actually surprised that the ransomware isn't taking Monero directly yet as some exchanges have direct Monero/USD markets already.

Buying Bitcoin is much easier


Marginally.

And the malware controllers can just as easily add instructions telling users to shapeshift bitcoin to their Monero address.


I can't speak for a current safe limit, but it isn't very hard to transfer funds between various cryptocurrencies. Tumbling is not secure with a large amount of coins. All you have to do is monitor all transactions and figure out what went in and what came back out. If you start splitting things off in small amounts on various blockchains things become much harder to track.

This is assuming the attackers know what they are doing. I would bet against that.


A cursory check doesn't bring up any Helix (the largest tumbler out there) stats. I'm not certain they even make that info public. Also I wouldn't know how to measure the level of privacy in this context.


Does that mean the WannaCrypt attack has netted only $60,000?


So far, it looks like tx are still rolling in. They haven't cashed it out yet, so they really have ~35 BTC, nothing in USD yet.


Where did you get these addresses from? I only see the 115p address, on a retweeted photo from an NHS computer.


They were posted in the Kaspersky analysis

https://securelist.com/blog/incidents/78351/wannacry-ransomw...


This gives the lie to the notion that a government master key or back door scheme could be protected from leaks and abuse.


Came here to say this. I completely agree.


MalwareTech needs recognition! By being the first to register the hard-coded domain in the malware, they have slowed the spread significantly...

https://twitter.com/josephfcox/status/863171107217563648


The real world doesn't update in 2 months. (I wish it did.)

The NSA should have responsibly disclosed the vulnerabilities they had been sitting on as soon as they were discovered.

That protects national security - not this.


Wikileaks should have disclosed before dumping publicly.

Burning down the house to prove that there are fire safety issues is the wrong approach.


This has nothing to do with Wikileaks, who have tried not to release any unpatched vulnerabilities in the Vault 7 documents and have been ignored by many companies they have approached offering to disclose vulnerabilities.

At least double check you've got the right person before labeling them an arsonist.


Wikileaks did not disclose these exploits

https://motherboard.vice.com/en_us/article/shadow-brokers-du...


Likewise not calling the fire department when you know all of our houses are likely to be burnt down in the hopes that some "bad guys" might get burnt is also the wrong approach.


I sort of wonder if Microsoft could create a mode for Windows where, if it detects a security update is available, it MUST update. I have spent a lot of time trying to get Windows to just fucking STOP, but there are environments where security-before-use would seem ideal.


I wonder if any NSA computer or employee is affected by this. The best thing would be if the families of the directors and management were affected, just to show how stupidly and irresponsibly they act.


You can keep an eye on their bitcoin wallet (or at least one of them): https://blockchain.info/address/13AM4VW2dhxYgXeQepoHkHSQuy6N...



One of the side effects of states participating in the proliferation of offensive tools. It won't be the last time state-sponsored tools, exploits or backdoors fall into the hands of interested third parties.

I think collateral damage like that is way underrated by politicians all around the globe that call for their respective intelligence agencies to build up offensive capabilities to be able to conduct cyber warfare and whatnot.


The vulnerability is already patched, it is not a 0-day. Regardless of the leak, anyone could have reverse engineered the security patch to see how it worked.


Cisco's TALOS team just published an analysis:

http://blog.talosintelligence.com/2017/05/wannacry.html


Apparently, this has spread to Deutsche Bahn...

1) a railway dispatcher just tweeted that IT systems will be shut down (https://twitter.com/lokfuehrer_tim/status/863139642488614912)

2) a journalist tweeted that an information display of DB fell victim to ransomware (https://twitter.com/Nick_Lange_/status/863132237822394369).

I guess that #1 and #2 are related, though.


BBC says up to 74 nations now: http://www.bbc.com/news/live/39901370


In other words, time to stop talking about nations if it's about the global internet.



Wow, the future is here and it's not looking very good. We need to diversify our OSes in the enterprise. This time it was Microsoft; next time it could be Linux. No OS gives an absolute guarantee. The systems are relatively dumb now; what will happen when AI has gotten deeper into our everyday lives? This is a wake-up call.


Wow, this is so insane. I really don't think the NSA should be finding vulnerabilities and keeping them to themselves.

I mean, I get that it is all to help stop the bad guys, but if you are keeping cyber weapons like this and don't follow responsible disclosure, you should be required to keep them as secure and locked down as possible.

Just like how a cop keeps their weapon on them, instead of setting it down on the table while eating lunch.


Your analogy doesn't really work because you can't copy a gun. These tools are way more dangerous than a gun because you can replicate them very quickly. You can never destroy the tools once they are created; someone always has a copy.

This is what scares me more than nuclear weapons. A nuke requires a huge number of people and a lot of infrastructure to maintain and launch. But a digital weapon? Pfft, copy that shit onto a USB key and one guy can wipe out power stations across the entire country.


Why are power stations on the same network with some guy with a USB key?


Because someone left an infected USB key or ten in the power station parking lot; because of the power of human curiosity; and because of the marginal cost of proactively protecting against something "very unlikely" (e.g. by epoxying USB slots) when procedure says it can't happen.


Because people still use USB drives to copy information to airgapped computers. It is easier than the alternatives.


Yes, they do. Yes, it is easier. Yet it completely undoes the "airgap" thing.


Security would be so much easier if it weren't for all these users ...


Your incredulity would be fully justified in the 1990s, but with every year that passes, it is becoming harder and harder to fully isolate computer systems from other computer systems. I'd like to think I wouldn't let untrusted devices near my power plant, but I have some sympathy for those who struggle to keep their stuff secure in a world where security is ever harder to achieve.


Because they are all connected in a IoT with MongoDB, React, and Node.js

/s



Your last sentence seems to contradict your first; it sounds like what you would really prefer is to disarm the police. Sadly I don't think that's so practical, in the same way that it would be impractical for US police to go unarmed given the high incidence of gun ownership in the US. I grew up in a country where police are not normally armed (other than with a small baton or similar personal defense weapon) and much prefer that, but when there are a lot of weapons around, that's a reality you have to deal with.

As regards these cyberattacks, the NSA is at fault for its poor security allowing the weapons to become available to bad actors, but the mere existence or stockpiling of weapons is not the direct cause of crime. It might be more useful right now to consider who is operating these weapons, where they are firing them from, and how best to neutralize them.

tl;dr when you're under fire is not the time to worry about gun control.


The problem with the gun analogy in your particular argument is that a 'cyber weapon' or exploit is the flip side of a flaw in normal software.

The NSA is in a very weird position because they have a task to protect the systems of the US (Information Assurance) but also to attack those of adversaries.

In this case I think they are legitimately to blame for failing to discharge their assurance duties. They failed to properly calculate the risk of the exploit leaking, and now US interests are harmed because of that failure. This is a direct result of stockpiling exploits and not disclosing them to the respective software vendors. In my opinion, the security of your own systems is more important than the insecurity of your adversaries' systems, which is why I believe the hoarding is bad.


I'm not defending the NSA's poor security or bad strategic choices; the reason I use the gun analogy is that mass production of weapons is as much the flip side of industrial production as cyber weapons are the flip side of normal software vulnerabilities.

Also, when you're under attack it might be more useful to worry about the identity and source of your attackers than where they stole the weapons from. Weapons facilitate aggression but are not the cause thereof, and we're not the only people who know how or maintain an interest in such weapons.


I think we should have armed police, as would anyone else who's sane. All for the Second Amendment. It's just that in the cyber world it feels irresponsible, because of the unlimited nature of the internet. The posts that get popular on HN from time to time, where a researcher does a responsible disclosure, are probably influencing that feeling too.

I guess what's standard in the "tech world" is probably totally different in the intelligence community.

Companies can't protect themselves if there are no updates; there's basically no defense. Same reason I'm not a fan of nuclear power: once you make the waste, it's hard to get rid of.


Right, I'm sure the NSA doesn't currently make any effort to secure their trove of 0-days. It's not like they're valuable assets or anything.

Edit: My point is that thinking that requiring the NSA to keep them "as secure as possible" would eliminate risk is just silly. There will always be a risk of breach or insider theft, as well as the requirement that the exploits actually be put to use outside some theoretical digital lockbox. And more importantly, there will always be the risk of human error. The only way to ensure this can't happen again is to require disclosure and patching.


Wasn't the story behind the NSA leak that it explicitly wasn't well protected, and was passed relatively freely between contractors and without much in the way of oversight?


Not at all, you are thinking of the allegations regarding the CIA content from WikiLeaks.


> Right, I'm sure the NSA doesn't currently take any effort to secure their trove of 0-days.

This case seems to show that whatever effort they're making is not sufficient.


Welp, here we are.


Yeah, I am pretty sure all governments do this. Why would they release it if their goal is to get unrestricted access to the public? They don't want those holes patched, so to speak.

I wonder how much of their efforts are deterred when good-minded infosec people find vulnerabilities and report them; remember Heartbleed.


The problem is that the NSA is run by humans. Humans leak things by their own volition. No amount of best practices or levels of trust can change this.


There's the bitcoin ransom aspect, but presumably a worm like this could extract a massive amount of data from infected servers and send that back to someone/somewhere?

Bank transactions, patient medical data, stored passwords/keys/CA info, contacts, emails, configuration files, registry dumps for firewall rules etc etc. (I'm not that creative so there's probably a lot more that's been exfiltrated).

Pretty hellish knowing they'd let that quietly sit there, in the name of espionage. I'm not sure the benefits outweigh the damage they're doing, without even mentioning the chilling effect and lack of confidence this instills in IT everywhere.


Right, the real money is not going to come from the bitcoin ransoms, but from the information on millions of patients which they surely made copies of.


We really are living in the future. My condolences to the NHS, but what a time to be alive.


Out of curiosity, what about this attacks feels futuristic? If anything it feels very retro, in that it hails back to the notorious worm attacks from the earliest days of networked computing.


the headline more than anything -- pilfered secret spy software stolen by (probably) a rival intelligence agency, released to the public without scrutiny, repurposed by cybercriminals, used to ransom data indiscriminately for decentralized software currency, bringing major institutions to their knees, and defeated by a guy in his bedroom at his parents' house who accidentally found a secret kill switch. it all feels very cyberpunk, and very much like a fictional plot, unfortunate circumstances aside


aha, and i see i'm not the only one who thinks so: http://www.antipope.org/charlie/blog-static/2017/05/rejectio...

though a bit of it is in your camp as well:

> It's a worm — a boringly old-hat idea first introduced into fiction by SF author John Brunner in his 1977 novel "The Shockwave Rider".


Cyber attacks use a patched exploit to attack systems running out-of-date software, even in large enterprises handling sensitive data?

I give a pass to individuals (bandwidth for updates can be expensive, regular users don't know about Patch Tuesday, etc.), but enterprise-scale deployments should have IT for this, and IT should have been well aware of this kind of thing happening.


Strangely enough, people like Matt Blaze are out beating the "don't blame the victim" drum by stating the exact opposite, giving a pass to large enterprises under the "patching is hard" mantra:

https://www.cs.columbia.edu/~smb/blog/2017-05/2017-05-12.htm...


If I want a deep technical analysis of what we know so far, where do I go?


What gets me is why we don't see more viruses that _deliver_ the patch to fix the vulnerability.

It's perhaps a little more difficult, as you'd need a vulnerability to keep spreading the inoculation. Arguably, though, you release the virus, let it spread, and then trigger the inoculation using a mechanism like calling out to a webserver, just as the kill switch worked here.


You run the risk of jail time without the upside of ransom payments.


True. Plus (I forget the exact legislation), you are effectively breaking into the computer first, which is a crime. Committing a crime for a noble outcome is still a crime.

Incentives are a real issue here, and those who provide the patch would reasonably expect a reward: MS for the updates, an AV provider for testing, finding and securing the vulnerability, and a whitehat for disclosure. However, there is no reason why a "charitable" hacking group wouldn't do this as part of some sort of digital vigilantism. Sometimes people do things without extrinsic reward, and the thrill here is that it is as hard as cracking, but you get to know that your efforts could be immediately applied.


That's an interesting idea: release a virus to fight a virus. Reminds me of the game Uplink, where [spoiler alert] you choose to either help spread a virus to destroy the Internet, or help spread a "counter-virus", hacking large servers to cure them before they're overrun. Digital vigilantism, that's what that is.


"Digital vigilantism" that's exactly it.


If the NSA made it and failed to protect it, then the NSA should be liable for lawsuits to pay for damages.


Well, gun manufacturers have been sued in the US. Now they're protected by the Protection of Lawful Commerce in Arms Act.

But the NSA is part of the US DoD, so the chances of a successful claim are about zero.


Should that apply only to NSA or also to the writers of e.g. Metasploit exploits?


NSA made a weapon with the purpose of harming someone. In court, intent matters.


So: A makes a product with flaws, B makes an exploit, C leaks that exploit, D adds a harmful payload to the exploit and goes on to extort/profit from E, who has computers systems they failed to patch in time... and somehow B and only B is at fault?


FTFY: and somehow D and only D is at fault? You'll see that they'll get the blame and the rest goes free.


So does being able to enforce it. No one is going to sue the US DoD and come out winning.


> The attacks were reminiscent of the hack that took down dozens of websites last October, including Twitter, Spotify and PayPal, via devices connected to the internet, including printers and baby monitors.

Lazy writing at NYTimes; what on earth does this attack have to do with the one at hand? It's not broadly the same type of attack, nor the same scale, nor the same outcome.


As far as I can see it hasn't moved the needle on Bitcoin/$ today though.

Ransomware was a play for big Bitcoin holders to unwind large positions at the highs without too much downward pressure on the Bitcoin market.


It could also just be the NSA banking on everyone assuming it's someone using NSA tools.


While I can understand WikiLeaks' position, I feel like it was incredibly short-sighted and uninformed of them to release the code itself. Unless you believe that they are working with the Russian (and other?) governments to destabilize the West. Personally, I wouldn't be surprised if this were the case.


My impression was that the Shadow Brokers already had, or were about to release, the tools which Wikileaks ended up leaking. Regardless, these should've been disclosed to the manufacturers under Obama's policies.


I would be curious as to the agenda of these "Shadow Brokers"; it all sounds very Gibsonesque. Recent events have made Neuromancer seem more and more prophetic to me.


They were hackers who acquired a trove of state secrets and were looking to make a quick buck. I've linked an archive of their initial statement below. I think it speaks volumes about how far the NSA can be trusted that these people were the ones to leak the tools instead of a state actor or someone previously known.

https://web.archive.org/web/20160815124425/https://github.co...


that's pretty heavy. life imitating art for sure. thanks


So if I pay, how do the hackers decrypt my HD? Is there a way to sniff the key and pay once, then decrypt everywhere?


You send them the code (encrypted key?) from your machine, and they send you back the key that works for your machine.

If you have multiple computers (as these large organizations do), you need to pay for each one separately; the key for one won't work for the other. Perhaps they offer volume discounts?


Send it how? Do they provide an email address and helpfully send it back by return mail? A Tor hidden service maybe? I'd have assumed they just take the money without bothering to decrypt anything, but maybe they are looking for repeat customers.


Depends on the particular malware, but generally it will direct you to a (tor) website explaining the details, often with newbie-friendly guides on how to set up the accounts needed to buy and transfer bitcoin.

They generally do offer a way to decrypt; it's a long-term business for them, not a one-time prank, and the results matter. First, the "audience" who are willing and able to pay generally have multiple devices, and they won't pay for the dozen other devices if the first "trial" device isn't successfully decrypted. Second, the infection spreads through a victim's contacts, so if your buddy who also got the malware managed to decrypt, you're more likely to pay; and if your buddy paid and failed to decrypt, the crooks won't get a dime from you.

There are all kinds of options. For example, one piece of malware offered to decrypt two files of your choosing for free when you contacted them, just to show that they can do so, as a 'teaser' before paying the full amount.

Besides, why wouldn't they decrypt? It's not like it costs them anything or takes much effort; if they have the ability but wouldn't send the keys, then that's just hurting their business "PR/advertising" for no reason whatsoever.


Once it's reported that paying for (your specific one, or even just some) ransomware doesn't give users their files back, you lose a lot of money. Ransom only works if it's believable that you have something to ransom.


Here is a link to the malware sample and technical implementation details.

https://gist.github.com/rain-1/989428fa5504f378b993ee6efbc0b...


I was debugging a private web app today when I noticed a Python script agent suddenly performing a port scan on me. It was querying for something called "a2billing/common/javascript/misc.js". After googling that phrase, it seems I'm not the only person who has seen this today. The country of origin of the IP was Britain.

After further investigation, it appears this attack could be related to this: http://www.cvedetails.com/cve/CVE-2015-1875/


> Security experts described the attacks as the digital equivalent of a perfect storm.

Just in case there are any journalists reading - never use the term "perfect storm".


I'm hearing that the password wncry@20l7 decrypts the zip within the PE resources. Can anyone confirm?
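Easy to check if you have a sample handy and an isolated analysis VM; assuming the archive uses classic ZipCrypto, Python's zipfile module can open it directly (the path below is hypothetical):

  import zipfile

  # Hypothetical path to the zip pulled out of the sample's PE resources.
  archive = zipfile.ZipFile("wannacry_resource.zip")
  archive.setpassword(b"wncry@20l7")

  print(archive.namelist())              # the file listing needs no password
  archive.extractall(path="extracted")   # only do this in a throwaway VM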


https://twitter.com/0xSpamTech/status/863224147576594432

Believe what you want of it of course.


First of all, while I of all people love to pile onto the anti-NSA bandwagon (within constitutional reason that is, I don't advocate their abolishment, but that's a different conversation), there are quite a few non-three-letter related things that have contributed to this story and ones like it.

The primary issue at the heart of things like this, beyond the backdoors and 0-days is this: bad IT.

That being said though, bad IT is far too often the fault of upper management, and not the IT people themselves. After years of sysadmining, I've seen the inside of hundreds of companies, from Fortune 500 oil to medium-sized law firms. You know what they have all been doing over the years? Cutting costs by cutting IT. Except... they completely fail to consider long-term consequences, which end up costing more.

I blame things like this on two main groups: boards of directors and company executives. Far too often I ran into a situation where a company didn't even have a CIO or a CTO, and you had some senior one-man miracle show drowning in technical debt, reporting to a CEO or CFO and getting nowhere, and therefore getting no support, no budget, no personnel, etc. I've seen exceptions too, but they are far too rare. If it's not technical debt that's drowning the company, it tends to be politics. The bottom line is that forward-thinking IT personnel don't get heard, and inevitably companies hire people or an MSP with all the proprietary Cisco, Microsoft, Oracle, etc. bullshit certs that make the C's feel better, but don't actually produce the wanted results. They inevitably end up providing an inferior product with inferior service at a short-term cost just as high as doing it right the first time, and a much higher long-term cost.

If I could say one thing that could help prevent issues like this, besides my standard whinging on about FOSS and the four freedoms and such, is that we need better CTO's and CIO's to advocate on behalf of IT departments, and I think senior sysadmins who feel they have hit a ceiling should consider going for their MBA's and transitioning to those titles.

Now, onto the NSA angle of the story. Well... all I can say is I told ya so, with an extra note that HN in the past few years has been surprisingly dismissive of FOSS proponents who have been warning about these things.

First they made fun of us for saying everything was being spied on, and then Snowden happened (often followed by bullshit like "are you surprised?" or "what do you have to hide?").

Then we warned about proprietary systems, and then the NSA/CIA tool leaks happened. (Often followed by things like "but it's for foreign collection only" and "but the NSA contributes to SELinux".)

Y'all aren't listening until after the fact, and that's not going to fix anything.


IT is just a reflection of overall society. In the name of immediate profit, we're cutting all we can cut, including essential services and maintenance; sooner or later we end up paying the full price for it.

This will not change until the reward systems for managerial classes change significantly.



It's not 12 nations.... it's all over the world...


Yes, 70+ countries.


Botnets don't care about countries. It's not an attack against 70+ countries, it's an attack against everyone on the internet.


The point was that it is worldwide.


When a nuclear bomb was dropped on Hiroshima, was that an attack against hundreds of buildings, or an attack against Hiroshima? :)

Thinking of it as "an attack against 70+ countries" is an anachronism. The attack doesn't care about countries. It doesn't even need to acknowledge their existence.


Maybe it is now time for a major review of the NHS's dependency on Microsoft software, and the NHS should seriously consider switching to Linux-based software.

Here is the BBC news update about the NHS Cyber attack:

"NHS trusts 'ran outdated software'

Some who have followed the issue of NHS cyber security are sharing a report from the IT news site Silicon, which reported last December that NHS trusts had been running outdated Windows XP software.

The website says that Microsoft officially ended support for Windows XP back in April 2014, meaning it was no longer fixing vulnerabilities in the system - except for clients that paid for an extended support deal.

The UK government initially paid Microsoft £5.5 million to keep providing security support - but the website adds that this deal ended in May 2015."


A simple patching policy would have fixed this


https://youtu.be/VjfaCoA2sQk Hitler rants about cloud security. Sorry couldn't resist...


Linux doesn't have a magic fix for buffer overflows in networking stacks written in C.


> except for clients that paid for an extended support deal

It does have a fix for this, though


Yeah, it's called "install the latest kernel".

Upgrading to a new version of Windows was apparently not possible, which means that upgrading to a new Linux version would also have been out of the question.

So the only solution would have been to hire someone to backport whatever fix was needed.


An open source update can be audited much more easily than a closed source update. It is also usually possible, with OSSW, to find the discussion where the software's developers proposed various solutions, and debated their merits and risks.


Does Debian still support Woody? Does Red Hat still support whatever OS they were shipping in 2001?


Medical offices are notorious for having machines that are out of date, not properly secured, and not backed up. Just recently I wanted to get test results from a few years earlier from a previous doctor. Nope, the machine they were on runs a proprietary GE setup, and it crashed. The same test a few years earlier? The hospital lost the results and had no record of the test being done. A different test I had done a month ago was hooked up to an aging Windows XP machine. Yes, it was networked, though I'm unsure if it was intranet only (I doubt it).

In the US, you have to manage your own healthcare. Get every result as a hard copy or on disk (in the case of MRI etc) and save it yourself. And back it up. That way you're prepared.


I recently went to a consultancy sales meeting with a GP who wanted me to port the MS-DOS patient record management system used by his medical centre to the cloud. While I'm sure it could have been figured out with a suitable budget, the fact that I could only find a handful of references to the database file format when searching Google didn't bode well. It looked like I would have to reverse engineer the parsing and interpretation of the bytecode. In the end my advice was to hire data entry professionals to do it manually.


You likely preserved your own sanity and theirs.


I hope the NSA can be held accountable for this, and that we can finally all agree that a government holding on to 0-days and asking for loophole encryption always bites back at the very people they claim to protect.


So... I'm running Linux on all my systems, how bad will it be for me?


Oh, and I'm flying tomorrow, what software does an Airbus run on?


SCADE, FWIW...


My university sent around an email with a photo of GRUB displaying some ransomware message with a demand for 222 bitcoins. It sure freaked me out, as a Linux user who usually gets to ignore emails like this, but upon investigation it was unrelated [1] to today's events. The screenshot was of ransomware that, while still terrifying, existed before today.

[1] https://www.welivesecurity.com/2017/01/05/killdisk-now-targe...


Q: does anyone know how to disable regular internet access in Windows except through a virtual machine (VMware or Virtualbox)?

I have set up my mom to use a live Debian CD through VMware, but I would also like to disable networking through Windows Edge and Explorer. I don't know how to do this, however.

Myself, I follow a similar scheme but using a Linux virtual guest and host. Is it easy to disable all networking except for apt/yum and VMware/KVM?

Lastly, does anyone know what it costs for a personal subscription to grsecurity?


My first thought would be to clear the routing table on Windows (maybe using a batch script on startup?) and using bridged networking in the VM.

That would totally disable internet access on Windows though, including updates (but you also wouldn't have that attack surface!)
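
If you want to experiment, a minimal sketch of that idea (assuming Windows' route.exe syntax, an elevated prompt, and Python being available on the host; the bridged VM keeps its own network stack, so it is unaffected):

    # drop the host's default route so Windows itself can't reach the
    # internet; the bridged VM is untouched because it has its own stack.
    # Must be run elevated. "route delete 0.0.0.0" removes default route(s).
    import subprocess

    subprocess.run(["route", "delete", "0.0.0.0"], check=True)

Renewing the DHCP lease (ipconfig /renew) should restore the route if you need to undo it.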


Thanks. Had a brief look, seems useful.

Does the VM using the "nat" mode of networking also use Windows' routing table? I don't know much about the networking between guest and host, except that the guest uses NetworkManager through its ethernet device. Even though this is a virtual device, I didn't think it would go through Windows' own net stack.

Would the bridged networking be any different than passing through the USB wifi adapter directly to VMware? (at which point the host doesn't have access to internet)


As far as I understand it, with bridged networking you're basically sharing the network device -- your VM has its own stack down to the MAC address. So as long as your network device is still online (in the sense of being enabled in Windows and having a cable attached), packets for a particular MAC will travel to the right network stack.

This is probably useful from the VirtualBox manual:

> With bridged networking, VirtualBox uses a device driver on your host system that filters data from your physical network adapter. This driver is therefore called a "net filter" driver. This allows VirtualBox to intercept data from the physical network and inject data into it, effectively creating a new network interface in software...

I'd try it, it wouldn't be hard to reverse.


This site [1] discusses pretty much what you are asking (all networking going through a VirtualBox pfSense); however, it's written for Windows 7, so I'm not sure if it still works for 8-10.

http://timita.org/wordpress/2011/07/29/protect-your-windows-...

I would think if you set up the VM to deny everything coming from Windows, and allow anything coming from the other Linux VM, it should work fine (just set up multiple NICs in the pfSense VM and have the Linux VM go in through a different NIC than the Windows host).

I personally do something similar with linux on linux where I have the host linux be allowed to only reach my internal network and the debian mirrors directly, and anything else is done through VMs.


Hello. Thanks for this. This is very close to what I was looking for. Security is a long journey, but it seems we can't avoid the task any more.

Hopefully we will find a way to be connected but not vulnerable to all these threats.


https://www.linkedin.com/pulse/penetration-testers-guide-win...

The above link seems pretty good for locking down Windows if anyone is looking.


Just to let you know in the UK we'll all be safe from things like this. The UK's banning encryption so stuff like this won't happen in the future. Phew. I feel safer!


"Emergency rooms were forced to divert people seeking urgent care."

I feel like the words "urgent" and "forced" might both be a bit shy of absolutely true here?


Just as a reminder: the second leak does not match the Vault 7 leak, which is supposed to be from the very same NSA.

There is no proof or reason to believe that the second leak was not a fake (while the Vault 7 leak looks more legit).

There are reasons to think that the same people are behind both the second leak and the malware, and that the malware, which is said to be based on "a leaked NSA exploit", was part of a single plan.

It is not that hard to guess who is behind the internet bullying.


"Microsoft rolled out a patch for the vulnerability last March, but hackers took advantage of the fact that vulnerable targets — particularly hospitals — had yet to update their systems."

Which Microsoft software should be updated now to protect against this particular attack? Windows? Windows on end-user machines? The servers?

Could someone share a "What should I do now to protect myself" guide, please?

Thanks!


From everything I read last year... as long as someone has write access to a shared network resource, your network is vulnerable.

I read about ways to detect it early with FSRM, but never tried it:

https://chrisreinking.com/stop-cryptolocker-from-hitting-win...

Experts, chime in? What is out there in 2017 (paid or not paid) as a way to protect network drives from ransomware?


The same way you protect those network drives from an employee accidentally or intentionally deleting everything.

Limited permissions work, backups work, journaling data storage systems with an ability to rollback all changes work.

In most environments nowadays I guess there's no valid reason to have a literal "network drive" -- if your users don't need to wrangle terabyte-sized data blobs, most environments can afford the overhead of having company document/file sharing happen in a system that stores a full history of changes, and where normal users cannot remove that history even if they're malicious or infected with malware. Probably even Dropbox or its competitors would be sufficient for that; no need to go to the more enterprisy vendors.


Proper backup system?


Well yes that's obvious, I meant more along the lines of:

Are there any ways to detect and stop it from happening in 2017? Third party software? New group policies from MS?


Not really; ultimately, if someone has write access to your network drive, it's no different from malware having it. The best solution is a good backup and protecting your hosts from being infected as much as possible.

I believe some people were trying to do rate limiting and traversal detection, which should be possible, but that access pattern is also common with legitimate tools, like running grep or find on a network share, so it's far from a perfect solution. It could also probably be avoided by clever malware if this were to be widely deployed.
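
As a toy illustration of the rate-limiting idea, a sketch (the share path and threshold are made up; a real tool would watch change notifications rather than rescanning):

    # naive detector: alert if an unusually large number of files under a
    # share are modified within a short window -- roughly what mass
    # encryption by ransomware looks like from the file server's side.
    import os
    import time

    SHARE = r"\\server\share"    # hypothetical share path
    WINDOW = 60                  # seconds per observation window
    THRESHOLD = 200              # files changed per window before alerting

    def changed_since(root, since):
        count = 0
        for dirpath, _dirs, files in os.walk(root):
            for name in files:
                try:
                    if os.path.getmtime(os.path.join(dirpath, name)) >= since:
                        count += 1
                except OSError:
                    pass  # file vanished mid-walk
        return count

    while True:
        start = time.time()
        time.sleep(WINDOW)
        if changed_since(SHARE, start) > THRESHOLD:
            print("possible ransomware: mass file modification detected")

As noted above, a legitimate batch job would trip this too, so treat it as a tripwire, not a verdict.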


For this, run Windows Update and install all updates. Additionally, it's smart to disable SMBv1 on all machines.


I disabled SMBv1 on the server. Good enough to protect our network share? Or is there some reason/benefit to disabling SMBv1 on client machines too?

(I ran the simple powershell command on server: https://support.microsoft.com/en-us/help/2696547/how-to-enab...)


The main one is to have _all_ machines patched through Windows Update. That is what will protect you.

SMBv1 is an outdated protocol in which some severe vulnerabilities have been disclosed in the last few weeks, hence my recommendation to get rid of it at the same time.

That being said, the vulnerability being exploited here is in SMBv2, which is why patching all machines is crucial.
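
If you want to inventory which hosts still expose SMB at all, a quick sketch (the subnet is an assumption -- adjust to your own, and only scan networks you administer):

    # list hosts with TCP 445 (the SMB port) open on a /24 -- these are
    # the machines to prioritize for patching or firewalling.
    import ipaddress
    import socket

    for ip in ipaddress.ip_network("192.168.1.0/24").hosts():
        try:
            with socket.create_connection((str(ip), 445), timeout=0.5):
                print(ip, "exposes SMB")
        except OSError:
            pass  # closed, filtered, or host down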


I notice that Microsoft is claiming the exploit is in SMBv1 in their patch description [1].

[1] https://support.microsoft.com/en-us/help/4012598/title


There are two exploits. One targets SMBv1 and the other SMBv2. Then there's the backdoor.


If you are working with SCCM and 20,000+ clients (computers), you will know that all machines will never be patched. It just does not happen. On any given large network there will always be a certain number of unpatched clients. There are a myriad of reasons for patching to fail, from advertisement errors to installation issues, to machines simply being offline (and later coming back online).


You are right that this is often the reality of things. Some systems also will just never be patched because the software running on them stops working if you do and the vendor cannot or will not provide an update that addresses this.

However, in such cases it becomes crucial to have e.g. proper network segmentation in place to help mitigate the risk.

Unfortunately, at this time, there are seldom perfect solutions when it comes to security, and a patching regime can only do so much. In this case, patches are available, but the day ransomware starts using a proper 0-day, we'll see a different scenario play out.

It therefore remains important to also keep focus on the reduction of attack surface, and the reduction of software complexity, besides resolving individual technical vulnerabilities.


Thanks for the info. I patched all client machines last night.

Here is the offline installer:

https://support.microsoft.com/en-us/help/3125574/convenience...

Prerequisites: must have installed SP1, along with the April 2015 convenience rollup. Links are provided in the prerequisites section.

Here's a direct link to the catalog download for the May 2017 security rollup (this supersedes all previous monthly rollups):

http://catalog.update.microsoft.com/v7/site/Search.aspx?q=KB...

For more information on the monthly rollups and how they supersede each other, see:

https://blogs.technet.microsoft.com/windowsitpro/2016/08/15/...


If anyone reading this was affected by this attack, please take it as an opportunity to start the journey to becoming "antifragile". If you are severely affected by this (mainly speaking about ransomware), it means you lack backups and the ability to self-heal infrastructure. These attacks will only get more frequent and more sophisticated. So, start now.


Is it possible to cause havoc on banks worldwide?


Can't law enforcement follow the transactions of the public address of the ransom bitcoin wallet until the bitcoin is sold?


That's assuming the attacker doesn't know how to launder the coins. It is not very hard.


There are services that will mix your coins, making them impossible to track, because the attacker will receive other people's coins from the pool.


Not impossible, just hard.

And the cops can go and track each individual person from that pool if they really care. Even if we are talking about thousands.

Remember the story from a few days ago where, to track a possible spy, they went through all the glasses prescriptions in a city.


It's a different beast. It's almost impossible to track if done right. How would you track this person? You only see the end transactions from those addresses, and those coins are no longer the attacker's. You would need to blindly check EVERY possible place where bitcoin exchange happens -- and there are hundreds, in hundreds of countries -- to see if bitcoin address X was used there. Then some countries may not even give you any information, because electronic currency doesn't exist in their law and it's not a felony to use a mixing service, etc. That's why bitcoin is used in the first place for 99% of criminal activity on the Internet.
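
A toy model of why this is so hard: any pro-rata taint-tracking rule has to split the "tainted" value across all outputs of a mixing transaction, so after even one round the ransom is smeared thinly across strangers' addresses. A sketch (the transaction graph is made up):

    # pro-rata taint tracking over a single made-up mixing transaction
    txs = [
        # (inputs: {addr: amount}, outputs: {addr: amount})
        ({"ransom_addr": 10.0, "alice": 30.0, "bob": 60.0},
         {"out1": 25.0, "out2": 25.0, "out3": 25.0, "out4": 25.0}),
    ]

    taint = {"ransom_addr": 1.0}  # fraction of funds considered tainted

    for inputs, outputs in txs:
        total_in = sum(inputs.values())
        tainted_in = sum(amt * taint.get(addr, 0.0)
                         for addr, amt in inputs.items())
        ratio = tainted_in / total_in
        for addr in outputs:
            taint[addr] = ratio

    print(taint)  # out1..out4 each end up only 10% "tainted" -- nothing to seize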


Do you think it would be possible for those services to block or 'embargo' transactions from 'tainted' addresses, such as the ones used for the cyberattacks' ransom?


Why would they? It's against their business model. They don't have a company name and street address on their sites for a reason.


There are a handful of Bitcoin exchanges that don't follow anti-money laundering laws and presumably that's how these ransomware guys cash out, as it's been a problem for a while now.


I see the Rust Evangelism Strike Force are out in action again.

Guys, it may surprise you, but some of this kit predates Rust :)


Er, aside from yours, there are literally two comments in this 444-comment thread that have mentioned Rust, both of them written by a single person. Given that both those comments also mention sel4, perhaps we ought to invoke the sel4 Evangelism Strike Force? :P


I think tools like this should be secured at least as well as "research" stores of smallpox and other biotoxins. And certainly tracked long after they've outlived their usefulness within the agency that produced them.

Or maybe smallpox isn't actually stored as securely as I assume?


DHS Statement on Ongoing Ransomware Attacks: https://www.dhs.gov/news/2017/05/12/dhs-statement-ongoing-ra...


Is Russia being hit the most because it was the NSA that was exploiting this vulnerability before? Perhaps the attackers are leveraging some other leaked NSA tool that gives them more direct access to Russian computers?


The entertainment system on my flight is mysteriously down. I wonder if it's connected. As a side thought, does anyone know how vulnerable critical systems such as airliners and air traffic control are?


Why don't telecom providers help cut off devices that are making an exorbitant number of requests? Wouldn't this kill botnets, if the exponential growth effect became impossible?


Does anyone have a running list of the organizations affected so far?


There's no evidence that this attack targeted the NHS or other health systems, right? It's just spreading randomly by email, with the highest infection probability on certain older Microsoft OSes?


This definitely wasn't targeted. Check here - https://intel.malwaretech.com/botnet/wcrypt/?t=24h&bid=all


Yeah it looks like "large public institutions" were affected simply because that's where you'll find more unpatched (or unpatchable, in the case of XP) machines.


We Linux people really should not miss this opportunity to bring people on board. Ubuntu is a great starting point.


It looks to me like common stupidity... people opening attachments that they should not be opening. No need to involve the CIA, NSA, or another three-letter agency's hacking tool... just old-school phishing. I see this happening much too often... people opening *.pdf.js attachments. No need for another conspiracy theory... stupidity explains it all. Just my 50¢.


It looks like you have not done any "looking" at this at all. This is a worm that is using the ETERNALBLUE (and possibly other) exploits to infect all vulnerable machines on a network without user interaction.

Plenty of stupidity for sure, but the stupidity is in the number of unpatched systems.


My bad... the article is not really clear though... My first comment... and my first fail... /me sad!


It doesn't involve only the pdf.js file; the key is a bug in samba that means all you need to do to get infected is connect to an infected network.


Not in Samba, in the SMB protocol implementation on Windows.

Samba servers are safe.


What exactly does this NSA tool do? Every story I've seen glosses over how it works.


The tools in reference are from the Equation Group dump that the Shadow Brokers released. Equation Group is believed to be a group within the NSA. EG activity dates back to at least 1996.

More info on EG: https://securelist.com/files/2015/02/Equation_group_question...

The dump contains many tools; but the ones used in this attack are two exploits for vulnerabilities in Windows SMB (Server Message Block, a file sharing protocol) implementation. Microsoft patched this in March, but as we all know, many systems remain unpatched. The vulnerabilities allowed for remote code execution.

Practical exploit info: https://www.exploit-db.com/docs/41896.pdf

The two exploits, EternalBlue and EternalChampion, target SMBv2 and SMBv1 respectively. That's not how the ransomware gets inside the network in the first place, though; that is done by a user executing a file received via email, or downloaded from a received URL. But through these two exploits, once inside, it can spread worm-like through the network (subnet). Actually, the ransomware first checks for the existence of the backdoor (also from the same dump of tools) called DoublePulsar. If the ransomware does not find it implanted, it will use one of the two aforementioned exploits, based on which ports and protocols it makes a connection to.
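
In rough pseudocode, that lateral-movement flow looks something like this (a sketch only -- every helper is a hypothetical stub, not real exploit code):

    # sketch of the decision flow: reuse the implant if present, otherwise
    # pick an exploit based on what the target negotiates
    def already_backdoored(host):
        # would send DoublePulsar's "ping" packet and inspect the reply
        return False

    def negotiates_smbv1(host):
        # would probe which SMB dialects/ports the host answers on
        return False

    def spread_to(host):
        if already_backdoored(host):
            print(host, "-> reuse the existing DoublePulsar implant")
        elif negotiates_smbv1(host):
            print(host, "-> EternalChampion (SMBv1, per the above)")
        else:
            print(host, "-> EternalBlue (SMBv2, per the above)")

    for i in range(1, 4):
        spread_to("10.0.0.%d" % i)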

The DoublePulsar backdoor is installed on 400,000+ systems worldwide.

You can read more about it here: https://countercept.com/our-thinking/analyzing-the-doublepul...


It seems this one was designed to shut down hospitals ;)


I'm surprised by the lack of speculation on the identity of the perpetrators.


Is this supporting evidence of the US doing something "wrong" by creating these tools?

Disclaimer: I hope not, because it's like any other military tech being leaked and used, but I'm not sold either way.


Q: could fuzzing techniques help to take down such (p2p) botnets?


Anticipation of an attack tied to an all-time-high bitcoin price?


12 Nations that did not apply security patches


The article could have been written in 15 lines or less. Why do this?


Is OS X affected?


For the record, no.


This is what blowback looks like.

The US military and intelligence communities focused hard on cyber offense, rather than improving the defensive standards and technologies practiced among allies. Because of this, several allies have important systems compromised by (essentially) US-engineered malware.

Well, at least DARPA is sort of on it: http://archive.darpa.mil/cybergrandchallenge/

(There's also work stemming from the HoTT body of work on verified systems, as I understand it. But that doesn't have a sexy webpage.)


Isn't it peculiar that Russia remains the least hit, or not even hit at all? It seems like the West was a clear target. Connecting the dots here, suffice it to say Shadow Brokers serves Russian interests.

We are seeing bullet holes from what seems to have been cyber warfare between the former Cold War foes.


It's the time of day; come back in 12 hours and check again.

That said, the Russian government is trying to move people to local distributions of Linux, like Astra Linux, but I don't think the uptake is enough to explain a low infection rate in Russia.


Yeah definitely downvote manipulations going on again...

At this point I'm not even upset or shocked. It just further supports the narrative Russia is seeking to manipulate/exploit the internet to their benefit.

Considering the average Russian is poorer than an Indian, it looks like Putin is going to fuck over his country as his countrymen cheer him on and suffer in poverty and alcoholism.

The West will crush the feeble Russian economy back to Tsar days.


We asked you many times to stop breaking the Hacker News guidelines with uncivil and unsubstantive comments. Since you've ignored us, continued, and gotten worse, we've banned your account. Insinuations of astroturfing and shillage without evidence are not allowed here [1], and bad enough, but national rants and slurs are completely unwelcome.

1. https://hn.algolia.com/?query=by:dang%20astroturf&sort=byDat...


Bernie's loss still sting?


> Isn't it peculiar that Russia remains the least hit or not even hit at all?

Whatever, the situation in Russia when it comes to malware and viruses is no different from anywhere else in the world. People and businesses get their computers hacked or infected all the time. So please, spare us your conspiracy theories.


According to Kaspersky, Russia was by far the worst hit: https://securelist.com/blog/incidents/78351/wannacry-ransomw...

(Ukraine, India, and Taiwan were also unusually heavily affected.)


[flagged]


Hi there! Welcome to HN! Be sure to check out the community guidelines to ensure you and your brand are well-represented on HackerNews.

https://news.ycombinator.com/newsguidelines.html


Just use Linux and 90% of your problems with malware are history. Your own customization of the kernel will make you even more secure.


Let's not pretend that Linux is invulnerable to the class of exploits that make this kind of malware possible [1]. Windows isn't a target because it's vulnerable (all software is vulnerable). Windows is targeted because it's widely used. If the majority of systems were using Linux, malware authors would simply adapt to write malware targeting Linux instead.

[1] https://nvd.nist.gov/vuln/detail/CVE-2016-7117


> all software is vulnerable

This is false, and spreads FUD. It does a great disservice to those who do meticulously maintain their systems, to those who sacrifice convenience and beauty for stability and security, to those who take the time to scrutinize other people's work. It is possible to build and deploy secure software.

Linux dominates the datacenter; we are a high value target, and have been for quite some time now.


>It is possible to build and deploy secure software.

By secure, you don't mean 100% secure, do you?


I mean secure as in, when the last of that product line's devices have retired or died of old age, there have been no successful exploits against that product.


Has there ever been such a product? What about exploits on the software/hardware underlying the supposedly secure software?


Critical devices should either be simple, or they should run open source firmware. If governments had required the ability to audit the IC designs that go into medical, military and national infrastructure equipment, then we would now have open source ICs.

I am seeing an incredible resistance to this idea of increasing the situational awareness and capabilities of the people who provision and maintain large deployments. Perhaps it is too soon to propose solutions. Perhaps, today, we should just express solidarity with the victims, and try to warn operators of unaffected, but vulnerable systems to temporarily take them offline.

My apologies to those that I have offended. As a software developer who has struggled for years to articulate the need for transparency and simplicity in our systems, I feel very frustrated right now.


How could you ever possibly verify that?


By simplifying the design, until your team can verify its security without throwing up their arms in frustration at the mere prospect. When people's lives are on the line, security is more important than features or convenience.


What you're describing is formal verification. While I agree with what you're saying, I'm not sure if you're just understating the complexity of formally verifying systems or if you're implying that "being really careful and doing your due diligence" is practically invulnerable.


I had a feeling I might get called out on that... I meant that for all practical purposes, all software is theoretically vulnerable. Of course verifiable computing is a thing, but wildly impractical for most applications.

Meticulously maintained is not even close to being invulnerable. Everyone would like to say they meticulously maintain the projects they work on, but it would be incredibly arrogant to say that you couldn't conceive of ever unintentionally introducing a vulnerability.


Imagine if your next surgeon had this sort of attitude about the cleanliness of her tools, the operating theater, and her staff's equipment. Cleaning is hard, maintaining cleanliness is hard, and pathogens evolve in amazingly clever ways. Perhaps, it will always be possible to propose a theoretical flaw in the procedure.

This is no reason to give up though! It is no excuse for not following best practices, consistently! That is malpractice, when done by a doctor! And their field is at least as complex as our own.


I don't know why you think I'm advocating that attitude. I'm not disagreeing that open source is a good thing for security. I'm just saying it's not the silver bullet that some people are claiming it to be.

I would be equally concerned if my surgeon said "I already know the best possible techniques for surgery. No point in investigating further or exploring better methods."


That won't matter if you can't easily run the software you need. I really hope Linux will one day make this possible, but then it will also come under the scrutiny of malware creators.


I do not believe that attacks of this scale or coordination are undertaken by private actors. This is warfare; it just isn't kinetic yet.


From the Guardian:

"He adds that the fear is that the ransonware cannot be broken and thus data and files infected are either lost or that the only way to get them back would be to pay the ransom, which would involve giving money to criminals."

The new terrorism.

https://www.theguardian.com/society/live/2017/may/12/england...


How is it terrorism if the purpose is to get money?


I meant it in a more general way: a group of horrible people taking over a core function of society and saying "If you don't do x we will do y." And they will actually do y.

As you may have gathered, my original statement is more eloquent.


It isn't more eloquent, because it's wrong. Wouldn't saying: "The new mafia." or "The new shake-down" be more accurate?

Terrorism is done for political reasons and often involves putting fear into the populace. Your general "If you don't do x we will do y." statement does cover terrorism, but it covers terrorism because it covers _all kinds of threats_. So I suppose what you really meant was: "The new threat."

Words are important :]


Ah sorry, I got it wrong twice.

But you got me thinking again: because this ransomware is hitting the infrastructure itself (a national healthcare service), isn't this playing with fear too? If I were in hospital, or my friends/family were, I would be acutely paranoid that medical devices would go wrong, medicine administration would go wrong, the A&E would go bonkers, et cetera. I've worked in healthcare before, and this kind of domino effect is very easy to believe in.

(Funnily enough, my old organisation was making a fuss about upgrading from Windows XP just last year. A lot of my colleagues complained that this was hardly a priority)


Hmmm, that's a fair point. And now I'm wondering what my primary care provider's last security report turned up. There are also some things that I still haven't told my doctors, because I really don't believe in their ability to not disclose it somehow. And I want to remind everyone that you will pay for computer security no matter what: you can either pay for it upfront, or you can pay ransoms in the future.


There's no evidence that this attack targeted the NHS specifically, they are simply one of the most visible victims



