In Baltimore and Beyond, a Stolen NSA Tool Wreaks Havoc (nytimes.com)
375 points by SREinSF on May 25, 2019 | 152 comments



I'm unclear on why I'm meant to take this particular Windows SMB exploit so seriously when there's such a long history of comparable bugs that nobody ever referred to as "stolen cyber weapons". I think it might be almost entirely because of the name; "ETERNALBLUE" sounds much scarier than "CVE-2017-0144".

Don't get me wrong: a reliable modern Windows drive-by RCE is nothing to sneeze at. But if it wasn't this exploit, it'd be a different one --- will be a different one, likely not "stolen from NSA", when the next half-life of this bug is reached.

The phenomenon of a single bug having intense, distinctive utility is not new. In any couple of years, there will usually be a couple bugs like that --- a very popular RCE that's publicly known, and a very popular RCE that is somehow kept on the DL. Back when those RCEs were in things like UW IMAP, nobody wrote NYT stories about how the NSA had accidentally unleashed them on us like some filovirus from Zaire leaked from the CDC.

Meanwhile: seized on by North Korea, Russia, and China? Come on. We know roughly how much it costs to develop a reliable remote (Project Zero has written them up, what, a dozen times?). The smallest SIGINT agencies in the world can afford this kind of work out of petty cash. I'm sure none of them are going to look a gift bug in the mouth. But CVE-2017-0144 isn't fundamentally enabling them to do anything they couldn't have done themselves.

I think Mitre should just start assigning stupid NSA names to CVEs, so that people will take them more seriously.


It used to be the case during the Cold War that there was an unspoken, tacit understanding that assassination would not be used as a tool against the leadership of the great powers: assassins are cheap, so if it became established as a tool it would create a level playing field.

(There were exceptions, e.g. in the third world, but there was a big difference between a front-rank player (the USSR) murdering someone like Hafizullah Amin during the opening moves of an invasion, and something like the shootdown of Dag Hammarskjöld's plane during the Congo crisis: assassinating the UN Secretary General was a huge scandal at the time.)

Today, the use of weaponized CVEs by SIGINT agencies seems to have developed into a free-for-all: the big players were clueless about the potential for accidentally normalizing the practice, and about the scale of the consequences as computers became ubiquitous and indispensable components of everything from hospital medicine carts to automobiles. In doing so they leveled the playing field and quasi-legitimized the practice for non-state actors.

And we're all going to be paying the price for at least a generation.


I am not sure how you are getting from SIGINT to assassination. Since at least WW2, and almost certainly long before, there has been no tacit agreement that the great powers wouldn't spy on each other. Quite the opposite, in fact.

Is your argument instead that things like ransomware attacks are more akin to sabotage than to intelligence gathering? In that case, I'd suggest keeping two things in mind:

1. The one well-attributed case of state-level ransomware attacks points to North Korea, the norms-violating exception that somewhat proves the rule.

2. The majority of ransomware attacks are carried out by bona fide private actors unaffiliated with any state, and ransomware is also a part of our current norm, and has been for many years.

It will be interesting to see what happens when China shuts the power grid in Omaha, Nebraska off --- a capability they almost certainly have. The fact that you don't see things like that happening today suggests to me that the norm (I believe) you're referring to remains intact.


I think what parent is trying to say is that there's a present (with different choices in the past) where Hacking Team et al. are illegal, and governments and corporations instead pay out mandated bounties for responsibly disclosed cybersecurity research.

I'd prefer that present. But it's certainly not the one we're living in.

And the NSA, Russian APT 28/29, and Chinese APT 1 bear responsibility. Not just for their own actions, but for creating a world in which similar actions by anyone are normal.


Everything NSA and the APT groups do now (in this arena) was done first by private actors, so, no, I don't think this argument works.


Chronology is immaterial to this argument.

What matters is whether the state, with its ultimate monopoly on force, sanctions a given method by its active use.


The state clearly does not have a monopoly on computer/network exploitation.


I was referring to the state's monopoly on the ability to punish.


Major powers set international standards, and the failure to establish standards for cyber has produced a situation where exploitation of CVEs has become so normalized that there are almost zero repercussions for using them in destructive ways. Especially if you are a non-major power or a non-state actor, there seems to be little incentive not to wreak damage and havoc.

The US and Europe have failed to clearly state the consequences for a major attack; instead, we muddle along assuming that the implications will be enough. Even hugely damaging attacks by nation-state actors, such as North Korea's hack of Sony, seem to have gone unpunished.


Respectfully, that's pretty silly. People have been "wreaking havoc" with exploits for as long as there have been exploits. Nobody in the world was waiting for China's permission to do this kind of stuff. I don't think that's the previous commenter's argument.


No country "needed" permission for anything, ever. But there's a reason that the cold war didn't turn hot, and in part it was because the major powers, US and Russia, were able to set formal and informal international standards for intelligence, espionage and data collection, for themselves and for their allies. The same can be said with the arms control treaties, nuclear weapons, non-proliferation, etc.

That all seems to have gone out the window in the cyber realm, where there aren't any standards or consequences for nefarious actions.

This seems like a very basic and obvious observation.


The "nobody" that didn't need China's permission are PRIVATE actors. Taboo on assassination by against great power leaders did not stop Lee Harvey Oswald, and likewise a balance of deterrence between China and the US on active sabotage hasn't kept criminals from carrying out the same kinds of attacks for profit.


It's not "gone out the window in the cyber realm" - as you say, the major powers were able to set informal international standards for intelligence, espionage and data collection, for themselves and for their allies, and those standards definitely, absolutely permit all such actions of SIGINT, industrial espionage and even sabotage. If a weaponised exploit could affect something in 1980s, then it would get used by intelligence agencies with no qualms, there wasn't any "gentleman's agreement" or the like that would make them think twice about that. E.g. there's description of supposed events in 1982 https://en.wikipedia.org/wiki/At_the_Abyss that may or may not be true, but noone on USSR or USA side would claim that this action (if it happened) would have broken some Cold War taboo.

The basic observation is false - the standards haven't disappeared, because (unlike major assassinations) there never were any informal standards that would prohibit this; and (unlike nuclear and missile arms) there have never been any formal treaties restricting it. Nor are such standards or treaties likely to appear in the coming decades, because multiple major powers believe that the lack of such standards benefits them.


I have come to the opposite conclusion - large parts of critical infrastructure are vulnerable today, and we shouldn't underestimate just how disruptive and destructive one could be without great expertise. It's a ticking bomb. A trivial example, but try browsing Shodan for a wakeup call. And that's just the already publicly indexed opportunities.

We haven't seen anything yet.


> But there's a reason that the cold war didn't turn hot, and in part it was because the major powers, US and Russia, were able to set formal and informal international standards for intelligence, espionage and data collection, for themselves and for their allies.

It didn't turn hot because of the threat of mutual destruction.


The US has already carried out cyber attacks against the Russian banking system back in 2016. And even though Venezuela is not a great power, there have been serious claims that its electric failures were caused by US intervention. Beyond that, the Snowden leaks showed that the US has put mechanisms in place that could shut down entire energy grids if they really wanted to.


I thought the big deal with EternalBlue is that the Shadow Brokers released the NSA's hacking trove as open-source, thus opening it up to all sorts of people with minimal technical skills. There's a big difference between having a zero-day but requiring the technical chops to turn it into a working exploit, and downloading a file off the Internet and running it.

There's this huge pandora's box about opening up state-level weaponry to the general public. Ultimately, it undermines the nation state as an organizing principle. And that's the world we live in now.

(For that matter, all these attacks are credited to Russia or North Korea or Iran - but do we actually have any evidence that that's who's behind them? Couldn't it just as easily be a teenager in his bed in upstate NY who has compromised a server in Russia? $100K in Bitcoin ransom is peanuts for a nation-state and would even be peanuts for a number of Silicon Valley techies on this site, but it's a lot of money to a high school kid.)


> There's this huge pandora's box about opening up state-level weaponry to the general public. Ultimately, it undermines the nation state as an organizing principle. And that's the world we live in now.

I feel like this point is hardly ever appreciated. I don't think we quite live in that world (where's my open source nuke?), but we're always oscillating on the continuum if we take a long enough view of the situation. The extent to which power is distributed relies on the extent to which technology is distributed, but also the extent to which the distributed technology can be utilised by a relatively small group. A small group with equally good spear designs still loses to the larger group. A small group with equally good drones, nukes, viruses, and firewalls might be able to put up something closer to an even fight. A radical open sourcing of technology might lead to a breakup of the modern nation state into smaller entities. Until it does, or some sort of parallel structures form alongside the nation states which they are unable to actually police, I wouldn't call them undermined. The parallel structure scenario might be somewhat arguable today, but always has been; the line between organized crime and government is blurry. I don't think we've moved significantly far in that direction as a result of open source tools yet. If there ever comes a day when private defense contractors can accept Bitcoin without KYC and taxes, just rolling through the streets in a small division of tanks selling AK-47s to passing school children, I'll completely agree that the nation state has been undermined.


Imho, what keeps the nation-state from being undermined by the techno-anarchist crowd is the requirement to hold territory.

Any halfway competent person in a number of fields (cyber security, virology, chemistry) could probably "take over" a small town with a few buddies.

And then the US government surrounds it. Blockades it. Allows nothing in or out. And waits.

As the Palestinians, South Sudanese, Tibetans, Taiwanese, Hongkongers have found (with varying degrees of success), independence is only viable as a going concern if you have international allies and trade.


Having mentioned the Palestinians, it's only fair to include the Israelis. Especially because it was US support that almost guaranteed their independence. The British had been far more even-handed, back in the 20s to early 40s. But they were (in a way) defeated by the US in WWII, along with just about every other nation involved, except for the Soviets.


Agree completely. I think this is one of the reasons for so much interest in seasteading despite the high barrier of entry and maintenance compared to living on land (not that a group of ships can't be surrounded and sanctioned). Ultimately, I fear that joining the WMD club is a prerequisite to true self determination.


And the price of leaving the WMD club (eg Ukraine and Libya) is so high that there is no way to reach a diplomatic solution to denuclearization.


History shows any group that is disarmed by another group is about to get run over. The logic scales from the individuals of a nation to the nation itself.


As a small note, I think the word we're looking for in this thread is not "nation-state", but simply "state" i.e. sovereign country or polity.

Not all states are "nation-states". A "nation" (⋆) is specifically a group of people who are culturally homogeneous, and a "nation-state" (⋆⋆) is a sovereign state that is also culturally homogeneous. In most of these discussions we are talking simply about sovereign states, not nation-states specifically.

Someone else pointed this out one time in another HN comment, and now I can't unsee it.

(⋆) https://en.wikipedia.org/wiki/Nation (⋆⋆) https://en.wikipedia.org/wiki/Nation_state


That's certainly what "nation" meant a hundred years ago, and what it means in context of nationalist ideology.

I think that in modern English, "nation" and "nation state" mostly mean "sovereign country or polity". I think this usage is partly because obviously most uses of "state" are references to obviously-non-sovereign internal divisions of territory, whether in the US or Austria or Australia or Brazil or India, and including extremal cases like the states of Palau (most of which comprise less than a thousand people).

Probably another cause is that so few people take 19th-C / 20th-C nationalism seriously that there's no real need for the word "nation" in this context.

But if you're keen on the political-science definitions, note that states don't have to be sovereign. The word "country" won't do because they don't have to be sovereign either (e.g. England or Greenland). I think if you want to formally say "sovereign state", you have to say "sovereign state".


> I think that in modern English, "nation" and "nation state" mostly mean "sovereign country or polity".

It's just very recent modern infosec and internet English and it's seeping out into all sorts of other places. Probably too late to stop it.

As a check, I just searched NYT's site for 'nation state' and other than its original non-redundant sense or in quotes, they don't use it, best I can tell.


> where's my open source nuke?

There are "open-source" (leaked) designs out there, I'm sure. Or at least, old designs. Maybe for something like the Hiroshima bomb.

But you'd need enriched uranium. And for that, you'd need a bunch of gas centrifuges. And a chemical plant to make the hexafluoride, and convert the enriched stuff back to uranium.

And that's why you don't have open-source nukes.


Another way to look at it is that if you have all the prerequisites (talent, staff, money, resources, physical space, etc.) to do those things, there are plenty of other more profitable enterprises they could be leveraged for, unless you specifically need a saber to rattle, which is usually the domain of a well-funded despot and/or Bond villain.



How Pepsi briefly became the 6th largest military in the world: https://www.businessinsider.com/how-pepsi-briefly-became-the...


This BlackHat keynote[0] makes the good point that cyberattacks by other countries, especially Russia (the 'bears'), work fundamentally differently from those of the US. In the US, private companies aren't allowed to do any hacking whatsoever, and organized crime is swiftly prosecuted by the FBI. The NSA would often have a dozen meetings before changing course on an operation... the bureaucracy is slow and autonomy is non-existent.

In Russia, there is no extradition for hackers, and it's rare for someone to be prosecuted for ransomware. Malware writers and organized crime rings form a loose network with Kremlin officials. Attacks are done as favors, or even launched because the hackers think Putin would like it. Guccifer 2.0 is a prime example of this: their espionage op against the DNC got blown, and 12 hours later they relaunched the op as a hacktivist campaign. It's altogether more casual and efficient.

As a bonus, you don't have to pay your hackers if you let them make their own profits... hence all the ransomware. $100k is a good haul for an enterprising Russian criminal.

0. https://youtu.be/gvS4efEakpY


Corrections:

> it's rare for someone to be prosecuted for ransomware.

Because of the choice between prison time (with blackmail and other dirty deeds) or working for the three-letter organization. Russian law enforcement agencies are large criminal organizations themselves.

> Malware writers and organized crime rings form a loose network with Kremlin officials.

Those form a very tight network, former also with Lubyanka officials (location of FSB HQ).


There has been pretty strong attribution of past ransomware attacks to NK.

$100k isn't a lot, but you are talking about NK, a country that has its diplomats selling meth and heroin.


The reason is that there's a big difference between an exploit and a fully weaponized utility that gives you a root shell for a given IP address.

You almost never see these lying around the Internet. There is a lot of work involved in creating a tool that'll successfully identify and exploit every single build of an executable, with 99.9% reliability. There is lots of testing to be done, lots of tweaks required. Things that random individuals just don't do by themselves.


But his point is that we're not talking about individuals. The article breathlessly goes on about foreign states, you know, the bad ones, who have used this to attack all over the world. Except, as tptacek points out, these states can do this kind of work on a slow day, in office. I actually disagree with tptacek; I don't think it's partially, or even at all, because of the name. It strikes me as more fear mongering, part of a narrative that's meant to drive more defense spending. "Look at all these breaches, can't have this stuff happening, be afraid lowly citizen, and support your security state."


>According to three former N.S.A. operators who spoke on the condition of anonymity, analysts spent almost a year finding a flaw in Microsoft’s software and writing the code to target it. Initially, they referred to it as EternalBluescreen because it often crashed computers — a risk that could tip off their targets. But it went on to become a reliable tool used in countless intelligence-gathering and counterterrorism missions.

If it took the NSA a year to develop, then it can't be done "on a slow day".


I didn't say it could be done "on a slow day". I said it could be done "out of petty cash", in the context of a SIGINT agency. Google employs dozens of people who do this stuff as a hobby.

We don't have to wonder whether this is some space alien technology that only the NSA can develop; it's a reliable Windows remote in pre-Win8 SMB servers. Read a writeup on it; it's less complicated than most type confusion browser RCEs.


Fair enough - my language, not yours. I was driving at a similar point, however inept my language was.


I don't object to your language as a rhetorical flourish, because I think it's true that this level of exploit development is not all that rarified. I'm just saying, what I actually said is harder to knock down with a message board rebuttal. :)


"you know the bad ones" - Yes, we know. The top of the list would be: North Korea, Russia, USA, UK, China ...


Isn’t Metasploit pretty close?


It is more than pretty close; Metasploit has more offensive capabilities than simply popping a shell, which is what every exploit does by itself already.


> You almost never see these lying around the Internet.

This may have been true 15 years ago when GitHub didn't exist and people hoarded their code, but I would say it's the exact opposite now. You'd be hard-pressed now to find a viable public exploit that doesn't have fully functioning PoC code available in multiple languages through a simple search of the CVE on GitHub.
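
(Illustration: this is roughly what that "simple search" amounts to, as a minimal Python sketch against GitHub's documented v3 repository-search API. The requests dependency and the example CVE id are just for demonstration, not part of any real workflow.)

    # Hedged sketch: list public GitHub repos matching a CVE id, the way
    # you'd casually hunt for PoC code. Assumes `pip install requests`.
    import requests

    def find_poc_repos(cve_id):
        resp = requests.get(
            "https://api.github.com/search/repositories",
            params={"q": cve_id, "sort": "stars", "order": "desc"},
            headers={"Accept": "application/vnd.github.v3+json"},
            timeout=10,
        )
        resp.raise_for_status()
        return [item["full_name"] for item in resp.json()["items"]]

    print(find_poc_repos("CVE-2017-0144"))  # the MS17-010 / EternalBlue CVE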


I fully agree with your position on the relative utility of this exploit vs any other.

However I do think there is benefit in looking at this as a portent of state-sponsored backdoors being inevitably stolen and used maliciously. If the NSA can’t protect their tools, who can?


Yes.

Incredibly, the German minister of the interior has recently announced his intention to force messengers like WhatsApp, Signal, etc. to decrypt messages when required by warrant.

Of course, they'll be able to control the backdoor better than the NSA, right. Right?

Only source in English I found quickly: https://www.bleepingcomputer.com/news/security/german-minist...


This is critically different. Part of the issue with intelligence agencies using exploits is that using an exploit often means sending it to somebody untrusted (often the target of the attack!)

C2 servers and machines used to deliver exploits are necessarily connected to the internet, and often are on low-tier commercial hosting services; c2.tao.nsa.gov is a bit too easy to attribute, as it turns out.

An IM backdoor could rely on a key kept in an HSM, across an airgapped network allowing only encrypted messages in one direction and decryptions in the other. Assuming the airgapped side only needs to decrypt something like a LEAF, communication over something like serial means almost zero attack surface would be exposed to anything reachable from the internet.
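
(To make that concrete, here is a minimal Python sketch of the airgapped decrypt side. This is purely illustrative: pyserial is assumed for the link, and hsm_decrypt is a hypothetical stand-in for a real PKCS#11 call into an HSM whose key is marked non-extractable.)

    # Sketch: length-prefixed decrypt service over a serial line.
    # No TCP/IP stack and no listening sockets; just framed bytes.
    import struct
    import serial  # pyserial

    MAX_BLOB = 4096  # refuse anything larger than a LEAF-sized blob

    def hsm_decrypt(blob):
        """Hypothetical placeholder for the HSM call; the private key
        never leaves the hardware module."""
        raise NotImplementedError

    def serve(port="/dev/ttyS0"):
        ser = serial.Serial(port, baudrate=115200, timeout=5)
        while True:
            header = ser.read(4)  # 4-byte big-endian length prefix
            if len(header) < 4:
                continue
            (length,) = struct.unpack(">I", header)
            if length > MAX_BLOB:
                continue  # drop oversized requests on the floor
            blob = ser.read(length)
            if len(blob) != length:
                continue  # incomplete frame; ignore it
            plaintext = hsm_decrypt(blob)
            ser.write(struct.pack(">I", len(plaintext)) + plaintext)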

So, yes, you probably can control an IM backdoor significantly better than the NSA can control their offensive tools. Will the government manage that? Ehhhhhhh.

There are much better arguments against an IM backdoor. Saying "the government can't keep it secure" is especially problematic because if that's perceived as the anti-backdoor argument and the government comes up with a secure way to implement a backdoor...well, then, why not do it?


Theoretically possible, yes. But Snowden and the Shadow Brokers indicate that it is practically much harder.


Again, the Shadow Brokers stuff is "a staging server was compromised", which just isn't that related to the problem of securing an IM backdoor. Snowden leaked information he had access to, but measures like storing the key in an HSM (and not allowing it to be extracted) largely mitigate that threat.


Do you have somewhere I can read on the Shadow Brokers' compromised staging server? I was under the impression that actual source code was released (implying a much deeper break) but seems I was wrong, as no article I could find through Google mentions a source code leak -- but none mention a staging server either.


> Snowden leaked information he had access to

...which was...everything.


What?


Why would the serial protocol be less vulnerable than a network protocol?


Because:

It shouldn't need to touch almost any code in the OS: the UART can pretty much receive raw bytes, and there's no signalling layer besides start/stop. The serial "stack" is tiny compared to TCP/IP, as long as you don't somehow manage to involve the OS's terminal emulation layer, which should generally not be a concern.

There's no risk of accidentally leaving a service listening on the machine.

It's incompatible with normal network connections, so there's no real concern about somehow connecting it to the Internet.

All network adapters on the machine can be disabled, rather than "all but one network adapter".

But the main point is the reduction of attack surface going from a general-purpose networking stack to a bidirectional stream of bytes.


Arguably it's harder to connect back to a C&C server if the serial-connected computer is airgapped, but other than that I can't think of anything.


Only the laws of physics and open-source E2E encryption stand in the way of that proposal.

The messaging companies could drop service to Germany. People could use a VPN to access messengers.


The cost of complying with the German legal requirements is probably much lower than losing 80 million very wealthy users.

And also regardless of what one's opinion about the law is, I'd be sceptical about the idea of a company arbitrarily cancelling service provision to an entire nation because they disagree with a domestic law.


How would a company enforce the behavior of open-source client applications on millions of endpoints? If they can't comply with the law, they would be forced to stop service.


I don't think they could; these backdoor requests pretty much only apply to companies that have control over the client and server and can break the encryption if they want to. (After all, anybody could theoretically encrypt messages beforehand anyway.)

I assume WhatsApp could provide an API and an open source client, but then again they've got a strong commercial incentive to keep it walled in. And anyway, how many users would really switch to a third-party client only for encryption? I figure these laws will be relatively effective in practice.


There has been a shift in public awareness of surveillance, within the last decade.

The adoption of Signal (whose messaging protocol has influenced Whatsapp) shows that some users, including high-value targets, have moved to 3rd-party open-source clients.

If the goal is mass surveillance, then it would be enough to backdoor Whatsapp. But if the goal is to target a semi-competent adversary who can use a search engine, such a law would be entirely non-effective.

There are existing laws for targeting of data from specific endpoints, e.g. via iCloud, Google. Some of these laws span multiple countries. They cover data from many apps, not just messaging.


It is a journalistic and commentariat error to presume that such exploits are even faintly protectable. This requires being unclear on the concept of how these things work.


> According to three former N.S.A. operators who spoke on the condition of anonymity, analysts spent almost a year finding a flaw in Microsoft’s software and writing the code to target it.

So we spent significant government money finding and developing an exploit for the CVE. Yes, someone else could have found it too/instead.

But what if instead, our government had spent that significant money finding the exploit... and getting it patched for everyone. Would that not have made us all safer?

We are choosing to fund an enormous "black hat" operation, when we could be funding an enormous "white hat" operation instead.

Theoretically, there is a government policy that the "default" is to disclose any vulnerability found by the national security apparatus, with a (secret, of course) process to go through to argue that a vulnerability should instead be kept and weaponized.

I am dubious to what extent that has actually changed practice at all. The times the vulnerabilities were kept undisclosed would of course be secret, but in theory all the times an NSA-discovered vulnerability was disclosed should be public record somewhere. I haven't seen such a list, not sure if anyone has tried to make one or if it's available at all. Theoretically the government policy is that disclosing vulnerabilities should be the standard and default behavior except in unusual circumstances. I am dubious. And it is putting us all at risk.

> One month before the Shadow Brokers began dumping the agency’s tools online in 2017, the N.S.A. — aware of the breach — reached out to Microsoft and other tech companies to inform them of their software flaws. Microsoft released a patch, but hundreds of thousands of computers worldwide remain unprotected.

The scandal is that they tried to keep this vulnerability to themselves until they realized it had been stolen. If they had disclosed it to vendors when they found it, there would have been that many more years for patches to be distributed.

Instead they not only kept it secret in the vainglorious belief they had the power to keep it secret, but wrote the exploit that was then leaked, providing a powerful off the shelf exploit now available to anyone. Sure, someone else could have done that. But it was our tax dollars that actually did it. I'd rather have my tax dollars patching vulnerabilities than writing weaponized exploits for them.


You're referring to the "vulnerability equities process" (VEP). I haven't read anything from anyone in the field that thinks the VEP is anything but a joke. In particular, the VEP's (many) skeptics point out that you can't use a vulnerability for a little while and then arrange to have it patched, because doing so makes it likely that your adversary will discover the systems you've broken into (simply by retrospectively looking through packet logs to see instances of the exploit running).

It comes down to this: either we're going to conduct adversarial signals intelligence like other major nations do, or we're not. If we are, we're going to stockpile exploits, because that's how modern SIGINT works.

I don't know anyone in the field, in the industry or academia, that thinks it's scandalous that NSA has vulnerabilities we haven't heard about.


If nobody in the field thinks it's scandalous that the VEP is a joke, then the field is corrupt.


This would be a more compelling rebuttal if it was responsive to the whole of my comment, and didn't just amount to saying "nuh-uh". I provided reasons why the VEP is a joke.


Corrupt, or simply humanity? Our worst as a society becomes more apparent the more things resemble a free and unregulated market, as the security scene largely is now.

With the exception of the Feds saying "that's too much" here and there to folks they have a bone to pick with.


Or it means that VEP processes desperately need updating, like much of technology used in government.


You don't see the difference between a random bug and one that a state agency discovered, stockpiled, then misplaced, and in any case held secret?

I think you are just missing the entire discussion that this article is about.


I do not, that is correct. Perhaps you could explain it to me.


A state organization IMO should protect the public interest. If anything, the NSA should be releasing CVEs and protecting US companies by patching their systems. I don't understand what service the NSA (and similar intelligence agencies elsewhere) provides to its public by hiding vulnerabilities from it. Does the victim of a hack that the NSA could have prevented pay taxes? What for???


More than half the NSA's actual, chartered job, and the part of NSA's job that precedes every other job to which it has been assigned, is to collect signals intelligence from foreign states. Modern SIGINT involves exploits. Publishing exploits, even after you're done using them, harms that mission. Does that appropriately answer the "what for???" question?

A reasonable complaint about NSA is that the IAD (defensive) mission doesn't belong in the same agency as SIGINT or TAO or whatever they're calling it now. But that doesn't have much to do with this story; the SIGINT mission of any major government is going to (1) collect exploits and (2) almost never publish them, because that's how SIGINT is done.

NSA collecting exploits doesn't prevent anyone else from discovering and reporting the same vulnerabilities. So maybe your complaint is that we're not funding enough defensive vulnerability research?


That could go a long way.

My point is the NSA and its foreign counterparts do a disservice to their citizens. I would see the value in publishing vulnerabilities; I don't see value for taxpayers in what they are doing now.


If some terrorist group managed to detonate a bomb on US soil, don't you think it would be more outrageous if it were discovered that the bomb belonged to the US military?


I think his argument is that these "bombs" are readily available, unlike actual US military bombs, and that's why it's not a big deal. If you could practically get one of those things at the corner store, the fact that this one came from the US military would be more coincidence than real problem.


Back in 1945 a bunch of countries realized that they had unnecessarily huge stockpiles of bombs, mines, and chemical weapons after the war. It would have been a real shame if bad actors had picked them up; bad enough that most of it was dumped into the ocean.

Take even a minor look into history and we can easily find what happens to large stockpiles of weapons if they don't get destroyed after wars (or cold wars). They get stolen or sold. They get used for new purposes for which they were never originally intended.

I find it a fair case to argue that countries who do not destroy their large stockpiles once the war ends are to a degree responsible when the weapons are reused in later conflicts outside their borders by people we would classify as bad actors.


> the fact that it was US military is more coincidence than real problem

No, that's the entire point of my example. The military's primary job is to defend us. It's a major failure when their weapons are used against US citizens.


The point of my comment, at the root of this thread, is that calling these things "military weapons" is misleading; there's no real difference between this Win8 SMB bug and any of a whole variety of bugs that were discovered and exploited by private actors just in these last 12 months alone.

If you accept the premise of the article, that these were "broken arrow" military weapons, sure, I see how you can get yourself into a tizzy over it. But you should not accept the article's premise, because that premise seems self-evidently broken.


The military's primary job is to protect themselves.

Then political goals of their superiors.

Sometimes those two are reversed.

Then comes us.


Exploit or not, unpatched or not, having SMB accessible from the Internet --- which seems to be implied here --- is seriously stupid. MS has (reluctantly?) published most of their SMB protocol specs and the immense complexity is really disturbing. It's not a protocol I'd trust for use on anything other than a (wired) LAN where the communicating machines are already very trusting of each other.
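
(As a quick aside: checking whether a host of yours even answers on the SMB port takes a few lines of stdlib Python. A rough sketch; the address below is a placeholder documentation IP, not a real target.)

    # Minimal sketch: is TCP 445 (SMB) reachable on a host you own?
    import socket

    def smb_exposed(host, timeout=3.0):
        try:
            with socket.create_connection((host, 445), timeout=timeout):
                return True  # something answered on the SMB port
        except OSError:
            return False  # closed, filtered, or unreachable

    print(smb_exposed("192.0.2.10"))  # RFC 5737 doc address; substitute your own host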


> if it wasn't this exploit, it'd be a different one --- will be a different one

It has been the go-to argument against regulation by the weapons industry that if we don't produce and sell the weapons, then countries will just buy them from someone else. War and conflict will happen regardless of where the purchased weapons were produced.

Bad actors will have a slightly harder time if fewer people produce exploits and keep known vulnerabilities secret. If fewer people produce weapons, the cost of war increases. Producing exploits will always carry the cost of making it slightly easier for criminals to commit crime.


And to think, all we ask in return is that the United States cease doing computer-based signals intelligence, while the rest of the world continues to do it. A small price to pay for making it slightly harder for criminals to commit crime!


It is a hard sell. If we asked the weapons industry to stop selling weapons, some other country in the world would get the money.

Here in Sweden it is a very hard sell. About 1/3 of the yearly government revenue from state-owned companies comes from selling weapons. About 85% is sold directly to countries with ongoing wars, mostly in the Middle East. If they didn't buy ours they would just go elsewhere.

If we must present it as all or nothing, then yes: making it slightly harder is a worthwhile goal. If we can get nuanced about it, then a priority goal is to reduce the risk of bad actors getting their hands on the weapons we produce. This can be achieved by stricter regulation, which for computer-based signals intelligence means greater funding to protect the existing weapons, and using them less, as each use involves risk. This would come at the cost of fewer and more expensive computer-based signals intelligence operations.

Since we keep hearing about US-developed exploits being used by bad actors, it means the US either is not spending enough on protecting its weapons, or uses them so regularly that they end up in the wrong hands anyway. A small price to pay is to adjust those numbers, and to have a political discussion about what those numbers should look like.


The US probably spends more on defensive security by orders of magnitude.


Except the "rest of the world" didn't shoot itself in the face with its own cyber weapons and had articles written about it.

US politicians and citizens are right to ask whether the current strategy is the right one.

Saying that you can either have this sigint or no sigint is obviously a false dilemma.

In fact your whole original reply is some bizarre technical justification for what is obviously a military strategy which has harmed the US and other countries and that can be changed by the US.


Excellent comment.

We lead the arms race, in all categories.

Whenever an adversarial Nation buys new kit, missiles or heavy metal, it's to keep up, or keep out, the US.

China's rise in offensive technological prowess was undoubtedly in response to its knowledge (retrieved through hacking) of the extensive measures on the US side.

We can set the pace. And that pace needs to slow down.


But, exploits being produced or not doesn't change the fact that exploitable bugs exist in the first place. I would say it's preferable to have people making exploits so these bugs get patched sooner or later.


As much as people hate when vulnerabilities get sweet names like Shellshock, Heartbleed, Dirty COW, and whatever else we come up with, you do have to admit that bit of marketing flair gets more eyeballs on them, generally getting them noticed and resolved much quicker. So I'm with you on this one: let's get a marketing team embedded with the Mitre guys... might actually be a solid win!


Unfortunately there's diminishing marginal returns for "named" vulnerabilities.

There's nothing stopping people assigning scary names to quite mundane issues, and after a couple of "boy who cried wolf" episodes, people stop paying as much attention...


To me it feels like a broader audience is starting to realize how much of an unlawful wild west the internet can be - and how much SIGINT and other entities abuse that. And how little preparation some networks have - networks with serious impacts when breached.

I'm working on the defensive side, so we can actively watch new exploits via HTTP being developed, assigned a CVE-ID, and splattering against the IDP system on our application clusters in very large numbers. Other edge systems of ours are under constant, automated attacks, even ones utilizing personal information about employees. It's kind of scary, but normal.

But now things are escalating into meatspace, if I may use that term. Norsk Hydro got hit with ransomware, and only the engineers on site managed to avert the loss of the entire smelting plants. Now we have an entire city hit hard enough to be seriously impacted.

Combine this actual impact end-users have with a flashy name from a secretive agency like the NSA, and of course that name gets used.

And I guess personally, if I have to tolerate flashy names in order to get broader awareness towards the importance of cyber security, I'm not that opposed.


Do you think the malware authors know, care, or can even tell the nature of their victims? A Windows box is a Windows box, whether it is connected to my printer or an industrial control system.


I don't think that's relevant.

If a piece of software has a feature usable in creative ways, and there are people willing to exploit it, they eventually will. Exploits and malware will be written out of human curiosity and greed.

And once the exploit is generated, it's out there, and then it'll be used. Attackers of e.g. Norsk Hydro or Baltimore use the malware in an aggressive, targeted way. A botnet uses the malware in an aggressive, spray-and-pray way. Pentesters are another thing.

However, if the malware hits the right thing, ugly things will happen. That's what I meant by "wild west". Don't blame the gunsmith for bullets flying around. Build a solid wall to hide behind.


Depends on the malware authors - while in many cases they're just looking for simple money through mass attacks on random machines, there certainly are many active attackers where the malware is explicitly targeted at particular victims. They definitely know whom they're targeting, and they won't be sending payloads that brick a specific firmware version of some Siemens ICS to the Windows box connected to your printer.


> I'm unclear on why I'm meant to take this particular Windows SMB exploit so seriously

The way I understand it, most of the affected, including the paper, take it seriously because it's their "tax money at work."

I understand also that somebody at the receiving end of said money is not happy that the topic is being raised.


The tax money was spent enabling the NSA to hack into foreign computers, which is the SIGINT directorate's whole job. It sucks that their favorite bug got burned! It is a problem that it leaked, and developing a new pet bug ahead of schedule is an inefficient use of tax dollars. But NSA didn't make this bug, and it's not even unlikely that other people had it before it leaked, just in a different exploit package. Now that there's a reliable exploit attributable to NSA, everyone is going to use that one regardless.


Are you Czech or Polish? Er, just need to know. I am a fan of both.


I'm from the south side of Chicago.


[flagged]


You'd have to ask my parents about that.


> But NSA didn't make this bug

We don't know that.


Memory unsafe languages are already the perfect generator of exploitable bugs with how easy they are to make exploitable mistakes in. Few exploits need more explanation than that. (It would be interesting if the popularity of memory unsafe languages like C/C++ was somehow encouraged by the NSA to keep the world exploitable, but I've never heard of any evidence of that.)


So, you think ETERNALBLUE was an inside job?


Always worth considering.

Intelligence agencies put people everywhere.


> “If Toyota makes pickup trucks and someone takes a pickup truck, welds an explosive device onto the front, crashes it through a perimeter and into a crowd of people, is that Toyota’s responsibility?” he asked. “The N.S.A. wrote an exploit that was never designed to do what was done.”

Let's rework that analogy. If the NSA knows a trick to make Toyota pickup trucks explode, and they don't tell Toyota about the trick for years because they want to keep using it, and then eventually they leak the trick and suddenly everyone's Toyotas are exploding left and right, is that the NSA's fault?

Yes, yes it sure is.

I wouldn't go quite so far as to say the NSA was obligated to tell Microsoft (metaphorical Toyota) immediately about the exploit. For better or worse it's in America's interest for them to hack into foreign computers, and they take some risks as part of doing that. But they're 100% responsible for the downside of the risks they take.


Imagine Toyota makes a faulty truck, and then they issue a recall, and there's headline news about the danger this issue poses, and truck drivers all over Twitter are pleading for people to fix their trucks, but you nevertheless continue driving around for 799 days without fixing it, and then your truck explodes. I know who I'm blaming.


Both points are true. Think about having giant holes in the sidewalk. On the one hand, if you walk into a giant hole in the sidewalk in broad daylight, that's pretty much your fault, and your friends will laugh at you. On the other hand, everyone understands that some people will inevitably walk into a giant hole in the sidewalk if it's there, and so we consider it negligent to dig one without putting up bright orange barriers around it. In part that's because some people are more vulnerable than others (maybe it's nighttime, maybe their eyesight isn't very good, maybe they're going fast on a bicycle), and in part that's because with a large enough group of people some of them are bound to be careless, distracted, or unlucky.


Microsoft built the truck, the NSA welded the bomb on the front, and then allowed that weaponized truck to be stolen off their parking lot.


Best analogy so far +1


NYT again buried the lede. This was never a 0-day. How is it that companies and governments still fail to install updates years later?!?

“One month before the Shadow Brokers began dumping the agency’s tools online in 2017, the N.S.A. — aware of the breach — reached out to Microsoft and other tech companies to inform them of their software flaws. Microsoft released a patch, but hundreds of thousands of computers worldwide remain unprotected.”


Of course it was a 0-day. It was one until the day Microsoft released patch MS17-010, on the 14th of March 2017. So for around 5 years it was a 0-day that was used by the NSA and possibly other unknown actors.


0-day generally refers to the day the vulnerable party learns of the vulnerability, so it would be substantially earlier.


True, I was wrong about that. I looked at it from the perspective of the affected customers.

In that case it was a 0-day until probably February 2017.


Baltimore is one of the poorest and most dysfunctional cities in the US. I don't know the details, but it seems logical that a place like this would be where the servers wind up not updated.

The critique I'd make of the NSA isn't that they developed this tool, but that they spent time constructing the tool instead of actually securing American infrastructure.


> securing American infrastructure

That's not their job though, is it? I'm not sure whose job it is; maybe the DHS or something like that. Admittedly their name might make you think it is, but I wouldn't expect a governmental agency to ever do anything but its mandated job.


> That's not their job though, is it?

Actually, it is:

"The NSA is also tasked with the protection of U.S. communications networks and information systems." [1],

[1] https://en.wikipedia.org/wiki/National_Security_Agency (hopefully it is clear that in context I meant communications infrastructure)


How in the world is that up to DHS? Municipalities all run their own infrastructure.


I kind of agree. But then again, in today's climate, is failing to install updates really a lede? I mean, people have been not installing updates in a timely manner for decades now.


Yeah, this speaks more to the incompetence of local government than it does any kind of intelligence agency. Although the normalization of exploiting CVEs is a worrying trend.


https://twitter.com/ErrataRob/status/1132345806177144833

Some interesting rebuttal notes. Apparently it is the vulnerability that is being exploited, not EternalBlue itself.


I find one of the responses pretty compelling:

> This is a distinction without meaning. Infosec frequently uses the same name to refer to a vulnerability and a corresponding exploit. I don't agree with the framing of the story either, but calling it "fake news" is a serious accusation that goes too far.

https://twitter.com/mehaase/status/1132366433491533824


I'm less interested in the accusations and more about the content.


Digging into this a bit more and reading between the lines of the nytimes article, it sounds to me like EternalBlue was indeed used as part of the attack chain, for lateral movement. It was not the only exploit used, nor would the attack have been wholly prevented without it.

https://twitter.com/nicoleperlroth/status/113232602124240076...


Interesting! Thanks for linking that.


"The tool exploits a vulnerability in unpatched software..."

It's a self inflicted wound by Baltimore.

Question is, what is the cost of actually maintaining their systems competently vs. the cost of the attack? Both are difficult to quantify, but if you factor in the likelihood of getting attacked I bet it's still cheaper in the long run to just run your IT dept fast and loose and let the chips fall where they may.

As a government entity, they are probably making the soundest decision based on budget. Disruption in services hurts the populace, not the government.

As an anecdotal aside, I once worked as a contractor for over a year for a state government entity, run by a young, ambitious dept head who was all about the security and soundness of the software they used. But he needed a good-sized budget to convert buggy and insecure systems over to something more sound, and every single meeting with his superiors was about money. He argued so vehemently (I was in some of these meetings and he couldn't have been more astute in his observations on the future of attacks) that eventually his superiors found a reason to fire him (using government-bought software for personal use at home, for self-education). And, no joke, literally all the work he and his team had done in the dept for years was just chucked when the next guy came in.

Government is about money, not security.


One thing I didn't realize till I talked to someone who works in IT procurement for a populous county in KC was that they can capitalize buying servers, and finance via bonds, vs. using cloud providers, which comes out of their operational budget.


I think the national security establishment is applying pre-digital paradigms and thinking to cyberspace, and as a result has gotten things almost entirely backwards. In the physical world, "the best defense is a good offense" often has a lot of truth to it, since in limited-resource situations force concentration to defeat hostile forces can be fundamentally more effective. But when it comes to information, which can be infinitely and losslessly copied, it might make more sense to think of security in terms of information gradients and societal model. A liberal democracy/market economy generates a lot of useful information, but it can be less effective at utilizing it and more leaky. Whereas (at least in the examples so far in human history) centralized command authoritarian societies don't seem to come up with new stuff as effectively, but if offered the chance to take it from elsewhere they can act with more of a long-term vision (and of course with panopticons and massive restrictions on individual freedoms they can more effectively insulate themselves).

So maybe from a strategic point of view, government security should be working to shore up the weaknesses of the societal model while maximizing its strengths; the best strategies for the two models are opposites. For the USA, I think it'd be better if nobody had any ability to hack anything and the government acted aggressively to maintain an unequal information gradient. Then the problems that come from short-term incentives, less decisive/unified responses, and so on continue to get made up for by a major technological edge. Whereas for a polity like China, in a world where information is smoothed out, they can leverage their authoritarian governance for more advantage.

Which means the NSA has been doing the opposite of what it should, because for America in the digital domain "the best offense is a good defense". America has the most to lose from having all of its information generation and infrastructure (R&D, networking/governing systems, etc.) taken and/or disrupted. Rather than thinking of digital security issues as weapons to be exploited against less technologically advanced enemies, who by definition have far less to lose, they should have long since been thinking of them as big strategic risks, working to eliminate all of them as aggressively as possible, and aiming to be a dependable source for best practices in general. I think maybe there is a basic mindset mixup in the leadership, given the last 50 years and their military backgrounds, and that's really caused America to squander enormous amounts of strategic value (and goodwill as well, for that matter). Very foolish and unfortunate, and I'm not sure if anyone in government is currently thinking about reworking the NSA into a role focused on actual national security.


This is a very interesting take on the road we should have taken with respect to cybersecurity. I agree with what you said, but I think there's still scope for this sort of thinking in current nation states. It's a strategic mistake, but one that a generation more in touch with technology would probably not make as easily, so I expect that there's already people thinking this way in most governments, and that will only increase in the future.


You are absolutely correct, and the mindset change you describe will happen. However, and unfortunately, it will require some Pearl Harbor or 9/11 sized catastrophe, to make that point clear to everyone who needs to understand it.


“We can’t protect our own dangerous secret software tools, but trust us to protect your keys in escrow.”


You don't see a difference in protection posture between something which has to be used in the wild vs something which could be kept in a vault for its entire lifetime? I'm not arguing that escrow is a good idea; I'm arguing that this argument is not why that's true.

Edit: consider how often keymat has leaked.


There isn't one, really - if they had access to it, the same logistical issues would come up no matter how much they pinky promise not to abuse it. The more it spreads, the more vulnerable it is.

The only way key escrow would be remotely trustworthy would be if there were "hostages" to provide an intrinsic punishment for their failures or stonewalling of transparency.


You really think the keys will stay in a vault?

And you think this because you have assurances from who? The NSA? Xi Jinping? MBS? Kim Jung Un? Diane Feinstein? Donald Trump? RSA Inc.? NIST? David Sternlight?


I mean, there is a big difference - you have to directly send your exploits to the enemy so that they'd work; you don't have to send any escrowed keys to them.


The enemy will have escrowed keys too.

But see my answer to Gödel_unicode.


Fun fact, the need for NSA exploits to be installed on an “enemy” (target) device had nothing to do with how they were leaked in this case.

And stuff kept in vaults can be leaked too.

Sure the two things are different but the dynamics are the same. Information wants to be free, and it leaks.

Your argument can be turned around as well. The NSA tools were meant for use by a small specialized team under strict controls. Whereas key escrow has been put forward as a solution that would allow any podunk sheriff in any jurisdiction to apply for access to decrypt private information anywhere... virtually guaranteeing compromise with poor security practices, unlike what would be expected for the NSA tools.


> EternalBlue was so valuable, former N.S.A. employees said, that the agency never seriously considered alerting Microsoft about the vulnerabilities, and held on to it for more than five years before the breach forced its hand.

This has been a common "conspiracy theory" for at least a decade. And not just about Microsoft.


I feel that, given the US government's prosecution of the MalwareTech guy for an exploit package he wrote but was not using, the US government should accept financial responsibility for the misuse of its own exploit.


I just want to mention how Tim Cook rightfully pushed back on government pressure to develop a tool to break into an accused terrorist's iPhone, citing the dangerous precedent it would set, as well as the inevitable theft of said tool. If the NSA can't fully secure its arsenal, who are they (the government) to demand a private company develop (and be expected to secure) a tool that _everyone_ would want to get their hands on? Alas, while the effort was noble, state-sponsored actors have made this a moot point.


This is on the NSA. They decided not to tell the vendors about this, and that makes them responsible. They failed their task, which I thought was to keep the United States safe and secure.


They did tell the vendors about this, and the vendors actually released a patch too, all this before the tools were leaked.


The patch was released just one month before the exploits were leaked to the public, and the NSA only decided to act after they were certain that their tools had been extracted. But we still don't know when they were extracted from the NSA. If they had brought this to Microsoft's attention 5 years earlier (and without releasing an easily adaptable exploit for the vulnerability), a lot of damage could have been avoided.


*leaked publicly.

No telling how many people got copies or packet captures from its use in the YEARS it existed prior to it being patched.


Five years after the NSA discovered it.


You know what's bizarre? Amidst all this drumbeat of news about cybercrime trashing government, and with clear evidence that the 2016 US election turned, at least partially, on cybercrime:

NOT ONE of the candidates for US President has undertaken any effort to boost their own infosec. (Or if they have, they keep it quiet.)

What can they do? The same stuff we do in any SaaS business:

Rudimentary security training for everybody, including bigshots and candidates. (Podesta got phished, twice!)

Make sure their laptops and office computer equipment are up to patch levels and the malware detectors work.

Engage one of the large-scale email providers; they have topnotch dedicated infosec people, good spam traps, and a lot to lose if they visibly mess up.

Adopt strong multifactor authentication.

Hire competent pentesters and remediate any vulnerabilities they find, fast.

Let their donors and the public know they're taking action (not WHAT action of course, just that they're on it.)

Governments should do the same for their constituents and taxpayers.

Now, maybe candidates will argue they don't have time for the extra security. But, in 2019, that argument shows they're unfit for public office. One candidate learned that the hard way in 2016. No more of this.


Shouldn't it be patched by now?


Elsewhere in the thread it is said to never even have been a zero day, i.e. before it was publicly released, Microsoft was forewarned and released a patch.

So yes, it is patched, in the sense that the vendor did patch it, and in the sense that Microsoft customers should long, long ago have patched a vuln from 2017.


>is said to never even have been a zero day, i.e. before it was publicly released, Microsoft was forewarned and released a patch.

Which is nonsense. It was an exploit that was actively used for at least 5 years before Microsoft was informed about it. The "not a zeroday" line is pretty close to doublespeak. There is nothing to sugarcoat here. It was a zeroday that was exploited for years, and Microsoft wasn't informed until the very end. All the while millions of devices were vulnerable. I have to say I am having a hard time assuming good faith when people make such statements, here of all places.

A vulnerability is a zero day until the day the maker is informed about it. It's not an ambiguous definition.


The dangerous assumption here is that nobody else ever got hold of the EternalBlue exploit outside of those they wanted to have it.

This is a bad assumption. Others could have developed it independently, or intercepted it from NSA usage, and used it for years prior to the leak of the tools.

Hoarding 0days makes everyone less safe.


That doesn't mean people weren't hurt before it was patched.


> The tool exploits a vulnerability in unpatched software

Eternal Blue exploits a vulnerability in unpatched Microsoft Windows software


Slightly off-topic, but does anyone know who makes those standing desks shown in the pic of the MSFT office?


Bed, Bath, Baltimore and Beyond

NSA knew that EternalBlue was in the wild and possibly being used by other bad state actors, and just sat on it. For years. In case you are wondering whose side they're on.


The Shadowbrokers announced their access to EternalBlue many months before Microsoft released a patch.

The NSA was negligent in not immediately informing Microsoft after the Shadowbrokers announced their access to the NSA tools with clear proof (codenames, etc) on Reddit.


Wreaking havoc on outdated computers...


Two years ago, in 2017, Microsoft distributed the security update that fixes this problem. The issue has nothing to do with the NSA and everything to do with the City of Baltimore failing to keep their capital equipment, in this case computers, up to date by applying security and other updates.

The article also discusses healthcare systems hacked by other exploits. This again was not caused by the computer virus, but by the fact that Microsoft's vendor-issued security updates were not applied to the systems.

Often there are security upgrades in hardware as well as software which means that the computer hardware needs to be upgraded as well.

As is the case with most of these security hacks, it is the failure of the agency to budget appropriately for equipment maintenance and to have competent leadership that actually understands the importance of budgeting for and implementing security upgrades, including upgrading to the latest version of the OS, in this case Windows 10.



