Don't get me wrong: a reliable modern Windows drive-by RCE is nothing to sneeze at. But if it wasn't this exploit, it'd be a different one --- will be a different one, likely not "stolen from NSA", when the next half-life of this bug is reached.
The phenomenon of a single bug having intense, distinctive utility is not new. In any couple of years, there will usually be a couple bugs like that --- a very popular RCE that's publicly known, and a very popular RCE that is somehow kept on the DL. Back when those RCEs were in things like UW IMAP, nobody wrote NYT stories about how the NSA had accidentally unleashed them on us like some filovirus from Zaire leaked from the CDC.
Meanwhile: seized on by North Korea, Russia, and China? Come on. We know roughly how much it costs to develop a reliable remote (Project Zero has written them up, what, a dozen times?). The smallest SIGINT agencies in the world can afford this kind of work out of petty cash. I'm sure none of them are going to look a gift bug in the mouth. But CVE-2017-0144 isn't fundamentally enabling them to do anything they couldn't have done themselves.
I think Mitre should just start assigning stupid NSA names to CVEs, so that people will take them more seriously.
(There were exceptions, e.g. in the third world, but there was a big difference between a front-rank player (the USSR) murdering someone like Hafizullah Amin during the opening moves of an invasion, and something like the shootdown of Dag Hammarskjöld's plane during the Congo crisis: assassinating the UN Secretary General was a huge scandal at the time.)
Today, the use of weaponized CVEs by SIGINT agencies seems to have developed into a free-for-all. The big players were clueless about the potential for accidentally normalizing this practice, and about the scale of the consequences as computers became ubiquitous and indispensable components of everything from hospital medicine carts to automobiles. In doing so they leveled the playing field and quasi-legitimized the practice for non-state actors.
And we're all going to be paying the price for at least a generation.
Is your argument instead that things like ransomware attacks are more akin to sabotage than to intelligence gathering? In that case, I'd suggest keeping two things in mind:
1. The one well-attributed case of a state-level ransomware attack was attributed to North Korea, the norm-violating exception that somewhat proves the rule.
2. The majority of ransomware attacks are carried out by bona fide private actors unaffiliated with any state, and ransomware is also a part of our current norm, and has been for many years.
It will be interesting to see what happens when China shuts off the power grid in Omaha, Nebraska --- a capability they almost certainly have. The fact that you don't see things like that happening today suggests to me that the norm (I believe) you're referring to remains intact.
I'd prefer that present. But it's certainly not the one we're living in.
And the NSA, Russia's APT 28/29, and China's APT 1 bear responsibility. Not just for their own actions, but for creating a world in which similar actions by anyone are normal.
What matters is whether the state, with its ultimate monopoly on force, sanctions a given method by its active use.
The US and Europe have failed to clearly state the consequences for a major attack; instead, we muddle along assuming that the implied consequences will be deterrent enough. Even hugely damaging attacks by nation-state actors, such as North Korea's Sony hack, seem to have gone unpunished.
That all seems to have gone out the window in the cyber realm, where there aren't any standards or consequences for nefarious actions.
This seems like a very basic and obvious observation.
The basic observation is false: the standards haven't disappeared, because (unlike with major assassinations) there never were any informal standards prohibiting this, and (unlike with nuclear and missile arms) there have never been any formal treaties restricting it. Nor are such standards or treaties likely in the coming decades, because multiple major powers believe their absence benefits them.
We haven't seen anything yet.
It didn't turn hot because of the threat of mutual destruction.
There's a huge Pandora's box in opening up state-level weaponry to the general public. Ultimately, it undermines the nation state as an organizing principle. And that's the world we live in now.
(For that matter, all these attacks are credited to Russia or North Korea or Iran - but do we actually have any evidence that that's who's behind them? Couldn't it just as easily be a teenager in his bed in upstate NY who has compromised a server in Russia? $100K in Bitcoin ransom is peanuts for a nation-state and would even be peanuts for a number of Silicon Valley techies on this site, but it's a lot of money to a high school kid.)
I feel like this point is hardly ever appreciated. I don't think we quite live in that world (where's my open source nuke?), but we're always oscillating on the continuum if we take a long enough view of the situation. The extent to which power is distributed relies on the extent to which technology is distributed, but also the extent to which the distributed technology can be utilised by a relatively small group. A small group with equally good spear designs still loses to the larger group. A small group with equally good drones, nukes, viruses, and firewalls might be able to put up something closer to an even fight. A radical open sourcing of technology might lead to a breakup of the modern nation state into smaller entities. Until it does, or some sort of parallel structures form alongside the nation states which they are unable to actually police, I wouldn't call them undermined. The parallel structure scenario might be somewhat arguable today, but always has been; the line between organized crime and government is blurry. I don't think we've moved significantly far in that direction as a result of open source tools yet. If there ever comes a day when private defense contractors can accept Bitcoin without KYC and taxes, just rolling through the streets in a small division of tanks selling AK-47s to passing school children, I'll completely agree that the nation state has been undermined.
Any halfway competent person in a number of fields (cyber security, virology, chemistry) could probably "take over" a small town with a few buddies.
And then the US government surrounds it. Blockades it. Allows nothing in or out. And waits.
As the Palestinians, South Sudanese, Tibetans, Taiwanese, Hongkongers have found (with varying degrees of success), independence is only viable as a going concern if you have international allies and trade.
Not all states are "nation-states". A "nation" (⋆) is specifically a group of people who are culturally homogeneous, and a "nation-state" (⋆⋆) is a sovereign state that is also culturally homogeneous. In most of these discussions we are talking simply about sovereign states, not nation-states specifically.
Someone else pointed this out once in another HN comment, and now I can't unsee it.
(⋆) https://en.wikipedia.org/wiki/Nation (⋆⋆) https://en.wikipedia.org/wiki/Nation_state
I think that in modern English, "nation" and "nation state" mostly mean "sovereign country or polity". I think this usage is partly because obviously most uses of "state" are references to obviously-non-sovereign internal divisions of territory, whether in the US or Austria or Australia or Brazil or India, and including extremal cases like the states of Palau (most of which comprise less than a thousand people).
Probably another cause is that so few people take 19th-C / 20th-C nationalism seriously that there's no real need for the word "nation" in this context.
But if you're keen on the political-science definitions, note that states don't have to be sovereign. The word "country" won't do because they don't have to be sovereign either (e.g. England or Greenland). I think if you want to formally say "sovereign state", you have to say "sovereign state".
It's just very recent modern infosec and internet English and it's seeping out into all sorts of other places. Probably too late to stop it.
As a check, I just searched NYT's site for 'nation state' and other than its original non-redundant sense or in quotes, they don't use it, best I can tell.
There are "open-source" (leaked) designs out there, I'm sure. Or at least, old designs. Maybe for something like the Hiroshima bomb.
But you'd need enriched uranium. And for that, you'd need a bunch of gas centrifuges. And a chemical plant to make the hexafluoride, and convert the enriched stuff back to uranium.
And that's why you don't have open-source nukes.
In Russia, there is no extradition for hackers, and it's rare for someone to be prosecuted for ransomware. Malware writers and organized crime rings form a loose network with Kremlin officials. Attacks are done as favors, or even launched because the hackers think Putin would like them. Guccifer 2.0 is a prime example: their espionage op against the DNC got blown, and 12 hours later they relaunched it as a hacktivist campaign. It's altogether more casual and efficient.
As a bonus, you don't have to pay your hackers if you let them make their own profits... hence all the ransomware. $100k is a good haul for an enterprising Russian criminal.
> it's rare for someone to be prosecuted for ransomware.
Because the choice is between prison time, blackmail, and other dirty deeds, or working for the three-letter organization. Russian law enforcement agencies are large criminal organizations themselves.
> Malware writers and organized crime rings form a loose network with Kremlin officials.
Those form a very tight network, former also with Lubyanka officials (location of FSB HQ).
$100k isn't a lot, but you are talking about NK, a country that has its diplomats selling meth and heroin.
You almost never see these lying around the Internet. There is a lot of work involved in creating a tool that'll successfully identify and exploit every single build of an executable, with 99.9% reliability. There is lots of testing to be done, lots of tweaks required. Things that random individuals just don't do by themselves.
If it took the NSA a year to develop, then it can't be done "on a slow day".
We don't have to wonder whether this is some space alien technology that only the NSA can develop; it's a reliable Windows remote in pre-Win8 SMB servers. Read a writeup on it; it's less complicated than most type confusion browser RCEs.
This may have been true 15 years ago when GitHub didn't exist and people hoarded their code, but I would say it's the exact opposite now. You'd be hard-pressed now to find a viable public exploit that doesn't have fully functioning PoC code available in multiple languages through a simple search of the CVE on GitHub.
However, I do think there is benefit in looking at this as a portent of state-sponsored backdoors inevitably being stolen and used maliciously. If the NSA can't protect their tools, who can?
Incredibly, the German minister of the interior has recently announced his intention to force messengers like WhatsApp, Signal, etc. to decrypt messages when required by warrant.
Of course, they'll be able to control the backdoor better than the NSA, right. Right?
Only source in English I found quickly: https://www.bleepingcomputer.com/news/security/german-minist...
C2 servers and machines used to deliver exploits are necessarily connected to the internet, and often run on low-tier commercial hosting services; c2.tao.nsa.gov would be a bit too easy to attribute, as it turns out.
An IM backdoor could rely on a key kept in an HSM, across an airgapped network allowing only encrypted messages in one direction and decryptions in the other. Assuming the airgapped side only needs to decrypt something like a LEAF, communication over something like serial means almost zero attack surface would be exposed to anything reachable from the internet.
So, yes, you probably can control an IM backdoor significantly better than the NSA can control their offensive tools. Will the government manage that? Ehhhhhhh.
There are much better arguments against an IM backdoor. Saying "the government can't keep it secure" is especially problematic because if that's perceived as the anti-backdoor argument and the government comes up with a secure way to implement a backdoor...well, then, why not do it?
It shouldn't need to touch almost any code in the OS; the UART can pretty much receive raw bytes, and there's no signalling layer besides start/stop bits. The serial "stack" is tiny compared to TCP/IP, as long as you don't somehow manage to involve the OS's terminal emulation layer, which should generally not be a concern.
- there's no risk of accidentally leaving a service listening on the machine
- it's incompatible with normal network connections, so there are no real concerns about somehow connecting it to the Internet
- all network adapters on the machine can be disabled, rather than "only one network adapter"
But the main point is the reduction of attack surface going from a general purpose networking stack to a bidirectional stream of bytes.
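To make that "bidirectional stream of bytes" concrete, here is a minimal sketch of the kind of framing such an airgapped decryption box might speak over a raw serial link. Everything here is hypothetical (the protocol, the frame format, the size limit); the point is just that the whole parser fits in a few dozen lines, versus a full TCP/IP stack:

```python
import struct

# Hypothetical length-prefixed framing over a raw serial link:
# a 2-byte big-endian length, then that many payload bytes.
# No TCP/IP, no terminal layer; this IS the whole "stack".

MAX_FRAME = 4096  # reject oversized requests outright

def encode_frame(payload: bytes) -> bytes:
    """Prefix a payload with its 2-byte big-endian length."""
    if len(payload) > MAX_FRAME:
        raise ValueError("payload too large")
    return struct.pack(">H", len(payload)) + payload

def decode_frames(buf: bytes):
    """Parse zero or more complete frames out of a byte buffer.

    Returns (frames, leftover) so a caller reading from a UART can
    accumulate partial reads in `leftover` and retry later.
    """
    frames, offset = [], 0
    while offset + 2 <= len(buf):
        (length,) = struct.unpack_from(">H", buf, offset)
        if length > MAX_FRAME:
            raise ValueError("oversized frame")
        if offset + 2 + length > len(buf):
            break  # incomplete frame; wait for more bytes
        frames.append(buf[offset + 2 : offset + 2 + length])
        offset += 2 + length
    return frames, buf[offset:]

if __name__ == "__main__":
    wire = encode_frame(b"LEAF-blob-1") + encode_frame(b"LEAF-blob-2")
    frames, rest = decode_frames(wire + b"\x00")  # trailing partial byte
    print(frames, rest)
```

On the airgapped side, the loop is just "read bytes, extract complete frames, decrypt, write length-prefixed responses back" — there is nothing listening, nothing routable, and nothing to fingerprint from the network.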
The messaging companies could drop service to Germany. People could use a VPN to access messengers.
And also regardless of what one's opinion about the law is, I'd be sceptical about the idea of a company arbitrarily cancelling service provision to an entire nation because they disagree with a domestic law.
I assume WhatsApp could provide an API and an open-source client, but then again they've got a strong commercial incentive to keep it walled in. And anyway, how many users would really switch to a third-party client only for encryption? I figure these laws will be relatively effective in practice.
The adoption of Signal (whose messaging protocol has influenced Whatsapp) shows that some users, including high-value targets, have moved to 3rd-party open-source clients.
If the goal is mass surveillance, then it would be enough to backdoor WhatsApp. But if the goal is to target a semi-competent adversary who can use a search engine, such a law would be entirely ineffective.
There are existing laws for targeting of data from specific endpoints, e.g. via iCloud, Google. Some of these laws span multiple countries. They cover data from many apps, not just messaging.
So we spent significant government money finding and developing an exploit for the CVE. Yes, someone else could have found it too/instead.
But what if instead, our government had spent that significant money finding the exploit... and getting it patched for everyone. Would that not have made us all safer?
We are choosing to fund an enormous "black hat" operation, when we could be funding an enormous "white hat" operation instead.
Theoretically, there is a government policy that disclosure is the "default" for any vulnerability found by the national security agencies, with a (secret, of course) process for arguing that a vulnerability should instead be kept and weaponized.
I am dubious about the extent to which that has actually changed practice. The times vulnerabilities were kept undisclosed would of course be secret, but in theory every time an NSA-discovered vulnerability was disclosed should be public record somewhere. I haven't seen such a list, and I'm not sure anyone has tried to compile one, or whether it's available at all. Theoretically, the government policy is that disclosure should be the standard and default behavior except in unusual circumstances. I am dubious. And it is putting us all at risk.
> One month before the Shadow Brokers began dumping the agency’s tools online in 2017, the N.S.A. — aware of the breach — reached out to Microsoft and other tech companies to inform them of their software flaws. Microsoft released a patch, but hundreds of thousands of computers worldwide remain unprotected.
The scandal is that they tried to keep this vulnerability to themselves until they realized it had been stolen. If they had disclosed it to vendors when they found it, there would have been that many more years for patches to be distributed.
Instead they not only kept it secret in the vainglorious belief that they had the power to do so, but wrote the exploit that was then leaked, providing a powerful off-the-shelf exploit now available to anyone. Sure, someone else could have done that. But it was our tax dollars that actually did it. I'd rather have my tax dollars patching vulnerabilities than writing weaponized exploits for them.
It comes down to this: either we're going to conduct adversarial signals intelligence like other major nations do, or we're not. If we are, we're going to stockpile exploits, because that's how modern SIGINT works.
I don't know anyone in the field, in the industry or academia, that thinks it's scandalous that NSA has vulnerabilities we haven't heard about.
With the exception of the Feds saying "that's too much" here and there to folks they have a bone to pick with.
I think you are just missing the entire discussion that this article is about.
A reasonable complaint about NSA is that the IAD (defensive) mission doesn't belong in the same agency as SIGINT or TAO or whatever they're calling it now. But that doesn't have much to do with this story; the SIGINT mission of any major government is going to (1) collect exploits and (2) almost never publish them, because that's how SIGINT is done.
NSA collecting exploits doesn't prevent anyone else from discovering and reporting the same vulnerabilities. So maybe your complaint is that we're not funding enough defensive vulnerability research?
My point is the NSA and foreign counterparts do a disservice to their citizens. I would see the value in publishing vulnerabilities, I don’t see value for taxpayers in what they are doing now.
Even a brief look at history shows what happens to large stockpiles of weapons that don't get destroyed after wars (or cold wars): they get stolen or sold, and they get used for new purposes they were never intended for.
I find it fair to argue that countries who do not destroy their large stockpiles once a war ends are to a degree responsible when those weapons are reused in later conflicts outside their borders by people we would classify as bad actors.
No, that's the entire point of my example. The military's primary job is to defend us. It's a major failure when their weapons are used against US citizens.
If you accept the premise of the article, that these were "broken arrow" military weapons, sure, I see how you can get yourself into a tizzy over it. But you should not accept the article's premise, because that premise seems self-evidently broken.
Then the political goals of their superiors.
Sometimes those two are reversed.
Then comes us.
The weapons industry's go-to argument against regulation has been that if we don't produce and sell the weapons, countries will just buy them from someone else; war and conflict will happen regardless of where the purchased weapons were produced.
Bad actors will have a slightly harder time if fewer people produce exploits and keep known vulnerabilities secret. If fewer people produce weapons, the cost of war increases. Producing exploits will always carry the cost of making it slightly easier for criminals to commit crime.
Here in Sweden it is a very hard sell. About 1/3 of the yearly government revenue from state-owned companies comes from selling weapons, and about 85% of that is sold directly to countries with ongoing wars, mostly in the Middle East. If they didn't buy ours, they would just go elsewhere.
If we must present it as all-or-nothing, then yes: making it slightly harder is a worthwhile goal. If we can get nuanced about it, then a priority is to reduce the risk of bad actors getting their hands on the weapons we produce. This can be achieved by stricter regulation, which for computer-based signals intelligence means greater funding to protect the existing weapons, and using them less, since each use involves risk. The trade-off is fewer and more costly signals-intelligence operations.
Since we keep hearing about US-developed exploits being used by bad actors, the US is either not spending enough on protecting its weapons or using them so regularly that they end up in the wrong hands anyway. A small price to pay is to adjust those numbers, and to have a political discussion about what those numbers should look like.
US politicians and citizens are right to ask whether the current strategy is the right one.
Saying that you can either have this sigint or no sigint is obviously a false dilemma.
In fact your whole original reply is some bizarre technical justification for what is obviously a military strategy which has harmed the US and other countries and that can be changed by the US.
We lead the arms race, in all categories.
Whenever an adversarial Nation buys new kit, missiles or heavy metal, it's to keep up, or keep out, the US.
China's rise in offensive technological prowess was undoubtedly a response to its knowledge (retrieved through hacking) of the extensive measures on the US side.
We can set the pace. And that pace needs to slow down.
There's nothing stopping people assigning scary names to quite mundane issues, and after a couple of "boy who cried wolf" episodes, people stop paying as much attention...
I work on the defensive side, so we can actively watch new HTTP exploits being developed, assigned CVE IDs, and splattering against the IDP systems on our application clusters in very large numbers. Other edge systems of ours are under constant, automated attack, even utilizing personal information about employees. It's kind of scary, but normal.
But now things are escalating into meatspace, if I may use that term. Norsk Hydro got hit with ransomware, and only the engineers on site prevented the loss of entire smelting plants. Now we have a whole city hit hard enough to be seriously impacted.
Combine the actual impact on end users with a flashy name from a secretive agency like the NSA, and of course that name gets used.
And I guess personally, if I have to tolerate flashy names in order to get broader awareness towards the importance of cyber security, I'm not that opposed.
If software has a feature usable in creative ways, and there are people willing to exploit it, they eventually will. Exploits and malware will be written out of human curiosity and greed.
And once the exploit is generated and out there, it'll be used. Attackers of, e.g., Norsk Hydro or Baltimore use the malware in an aggressive, targeted way. A botnet uses it in an aggressive, spray-and-pray way. Pentesters are another thing.
However, if the malware hits the right thing, ugly things will happen. That's what I meant with "wild-west". Don't blame the gunsmith for bullets flying around. Build a solid wall to hide behind.
the way I understand it, most of the affected, including the paper, take it seriously because it's their "tax money at work."
I understand also that somebody at the receiving end of the said money is not satisfied that that topic is raised.
We don't know that.
Intelligence agencies put people everywhere.
Let's rework that analogy. If the NSA knows a trick to make Toyota pickup trucks explode, and they don't tell Toyota about the trick for years because they want to keep using it, and then eventually they leak the trick and suddenly everyone's Toyotas are exploding left and right, is that the NSA's fault?
Yes, yes it sure is.
I wouldn't go quite so far as to say the NSA was obligated to tell Microsoft (metaphorical Toyota) immediately about the exploit. For better or worse it's in America's interest for them to hack into foreign computers, and they take some risks as part of doing that. But they're 100% responsible for the downside of the risks they take.
“One month before the Shadow Brokers began dumping the agency’s tools online in 2017, the N.S.A. — aware of the breach — reached out to Microsoft and other tech companies to inform them of their software flaws. Microsoft released a patch, but hundreds of thousands of computers worldwide remain unprotected.”
In that case it was a 0-day until probably February 2017.
The critique I'd make of the NSA isn't that they developed this tool, but that they spent time constructing the tool instead of actually securing American infrastructure.
That's not their job though, is it? I'm not sure whose job it is, maybe the DHS or something like that. Admittedly their name might make you think it is, but I wouldn't expect a governmental agency to ever do anything but its mandated job.
Actually, it is:
"The NSA is also tasked with the protection of U.S. communications networks and information systems." (https://en.wikipedia.org/wiki/National_Security_Agency)
Hopefully it is clear that in context I meant communications infrastructure.
Some interesting rebuttal notes. Apparently it is the underlying vulnerability that is being exploited, not EternalBlue itself.
> This is a distinction without meaning. Infosec frequently uses the same name to refer to a vulnerability and a corresponding exploit. I don't agree with the framing of the story either, but calling it "fake news" is a serious accusation that goes too far.
It's a self inflicted wound by Baltimore.
Question is, what is the cost of actually maintaining their systems competently vs. the cost of the attack? Both are difficult to quantify, but if you factor in the likelihood of getting attacked I bet it's still cheaper in the long run to just run your IT dept fast and loose and let the chips fall where they may.
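The back-of-the-envelope version of that trade-off is a simple expected-value comparison. All numbers below are hypothetical, purely to illustrate why the "fast and loose" bet can look rational on paper:

```python
# Hypothetical numbers: compare the expected annual cost of running
# IT "fast and loose" against funding competent maintenance.
maintain_cost = 2_000_000      # annual cost of diligent patching/upgrades
attack_cost = 18_000_000       # estimated cost of one successful attack
p_attack_loose = 0.10          # annual odds of compromise if you skimp
p_attack_maintained = 0.01     # annual odds if you patch diligently

# Expected annual cost under each policy.
expected_loose = p_attack_loose * attack_cost
expected_maintained = maintain_cost + p_attack_maintained * attack_cost

print(expected_loose)       # 1,800,000
print(expected_maintained)  # 2,180,000
```

With these (made-up) inputs, skimping really is cheaper in expectation, which is the grim point: until the probability or cost of an attack rises enough, the budget math rewards neglect, and the disruption lands on residents rather than on the ledger.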
As a government entity, they are probably making the soundest decision based on budget. Disruption in services hurts the populace, not the government.
As an anecdotal aside, I once worked as a contractor for over a year for a state government entity, run by a young, ambitious, dept head who was all about the security and soundness of the software they used. But he needed a good sized budget, to convert buggy and insecure systems over to something more sound, and every single meeting with his superiors was about money. He argued so vehemently (I was in some of these meetings and he couldn't have been any more astute in his observations on the future of attacks) that eventually his superiors found a reason to fire him (using government bought software for personal use at home - for self education). And, no joke, literally all the work he and his team had done in the dept for years was just chucked when the next guy came in.
Government is about money, not security.
So maybe from a strategic point of view, government security should be working to shore up the weaknesses of the societal model while maximizing its strengths; the best strategies for each model are opposites. For the USA, I think it'd be better if nobody had any ability to hack anything and the government acted aggressively to maintain an unequal information gradient. Then the problems that come from short-term incentives, less decisive/unified responses, and so on continue to be made up for by a major technological edge. Whereas a polity like China, in a world where information is smoothed out, can leverage its authoritarian governance for more advantage.
Which means the NSA has been doing the opposite of what they should, because for America in the digital domain "the best offense is a good defense." America has the most to lose from having its information generation and infrastructure (R&D, networking/governing systems, etc.) taken and/or disrupted. Rather than thinking of digital security issues as weapons to be exploited against less technologically advanced enemies who by definition have far less to lose, they should long since have been treating them as major strategic risks, working to eliminate all of them as aggressively as possible, and serving as a dependable source of best practices. I think maybe there is a basic mindset mixup in the leadership, given the last 50 years and their military backgrounds, and that has caused America to squander enormous amounts of strategic value (and goodwill, for that matter). Very foolish and unfortunate, and I'm not sure anyone in government is currently thinking about reworking the NSA into a role focused on actual national security.
Edit: consider how often keymat has leaked.
The only way key escrow would be remotely trustworthy would be if there were "hostages" to provide an intrinsic punishment for their failures or stonewalling of transparency.
And you think this because you have assurances from whom? The NSA? Xi Jinping? MBS? Kim Jong Un? Dianne Feinstein? Donald Trump? RSA Inc.? NIST? David Sternlight?
But see my answer to Gödel_unicode.
And stuff kept in vaults can be leaked too.
Sure the two things are different but the dynamics are the same. Information wants to be free, and it leaks.
Your argument can be turned around as well. The NSA tools were meant for use by a small specialized team under strict controls. Whereas key escrow has been put forward as a solution that would allow any podunk sheriff in any jurisdiction to apply for access to decrypt private information anywhere... virtually guaranteeing compromise with poor security practices, unlike what would be expected for the NSA tools.
This has been common "conspiracy theory" for at least a decade. And not just about Microsoft.
No telling how many people got copies or packet captures from its use in the YEARS it existed prior to it being patched.
NOT ONE of the candidates for US President has undertaken any effort to boost their own infosec. (Or if they have, they keep it quiet.)
What can they do? The same stuff we do in any SaaS business:
Rudimentary security training for everybody, including bigshots and candidates. (Podesta got phished, twice!)
Make sure their laptops and office computer equipment are up to patch levels and the malware detectors work.
Engage one of the large-scale email providers; they have topnotch dedicated infosec people, good spam traps, and a lot to lose if they visibly mess up.
Adopt strong multifactor authentication.
Hire competent pentesters and remediate any vulnerabilities they find, fast.
Let their donors and the public know they're taking action (not WHAT action of course, just that they're on it.)
Governments should do the same for their constituents and taxpayers.
Now, maybe candidates will argue they don't have time for the extra security. But, in 2019, that argument shows they're unfit for public office. One candidate learned that the hard way in 2016. No more of this.
So yes, it is patched, in the sense that the vendor issued a fix, and in the sense that Microsoft customers should long, long ago have patched a vuln from 2017.
Which is nonsense. It was an exploit that was actively used for at least 5 years before Microsoft was informed about it. The "not a zero-day" framing is pretty close to doublespeak. There is nothing to sugarcoat here: it was a zero-day that was exploited for years, and Microsoft wasn't informed until the very end, all the while millions of devices were vulnerable. I have to say I am having a hard time assuming good faith when people make such statements, here of all places.
A vulnerability is a zero-day until the day the maker is informed about it. It's not an ambiguous definition.
This is a bad assumption. Others could have developed it independently, or intercepted it from NSA usage, and used it for years prior to the leak of the tools.
Hoarding 0days makes everyone less safe.
Eternal Blue exploits a vulnerability in unpatched Microsoft Windows software
NSA knew that EternalBlue was in the wild and possibly being used by other bad state actors, and just sat on it. For years. In case you are wondering whose side they're on.
The NSA was negligent in not immediately informing Microsoft after the Shadow Brokers announced their access to the NSA tools, with clear proof (codenames, etc.), on Reddit.
The article also discusses healthcare systems hacked via other exploits. Again, this was not caused by the computer virus itself, but by the fact that vendor-issued security updates from Microsoft were not applied to the systems.
Often security upgrades involve hardware as well as software, which means the computer hardware needs to be upgraded as well.
As with most of these security hacks, the failure lies with the agency: budgeting appropriately for equipment maintenance, and having competent leadership that actually understands the importance of budgeting for and implementing security upgrades, including upgrading to the latest version of the OS, in this case Windows 10.