NSA Director Says Agency Shares Vast Majority of Bugs It Finds (threatpost.com)
62 points by randomname2 on Nov 5, 2014 | 59 comments



In the video of that discussion[1] he also says that the NSA developed the Heartbleed patch after hearing about the vulnerability on April 7 and shared it with the private sector on April 8.

Interesting to compare with the timeline: http://www.smh.com.au/it-pro/security-it/heartbleed-disclosu...

[1] https://www.youtube.com/watch?v=yhwy2ZWi_y8



I'd say this is a case of a non-technical person being told by coworkers that the "issue was fixed and the patch is public" on April 8th [after OpenSSL patched it] and confusing that with his coworkers having created the solution.

I'm just depressed because I suspect many people will start claiming the NSA solved Heartbleed. :/


So, this is going to sound like I'm determined to find a reason to hate the NSA, but... this doesn't make them look good either. It's the most accessible and widely deployed memory disclosure bug of recent years, if not ever. Surely there were at least 1,000 servers, vulnerable at the time, that they'd specifically love to have had this window into, for intelligence on the "bad guys." Surely they knew the "bad guys" would love to look into and attack American servers the same way. Exploiting and defending this kind of thing is exactly what the NSA is supposedly for, yet they're telling us they didn't know about it.

I guess all their good talent and money was tied up in domestic call metadata social graph analysis. But at least they had someone smart enough to do the trivial patch once the vulnerability was known![0]

P.S. I also notice they didn't mention "Shellshock". Why not?

[0] https://twitter.com/agl__/status/530004568784916480
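
For perspective on how trivial that patch [0] was: the entire bug was a missing bounds check on the attacker-supplied payload length in the TLS heartbeat handler. Here is a minimal sketch in the spirit of the official fix (simplified record layout and names, not OpenSSL's actual code):

    /* Heartbeat record: type (1 byte) | payload length (2 bytes) | payload | padding */
    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    int process_heartbeat(const uint8_t *rec, size_t rec_len,
                          uint8_t *resp, size_t resp_cap)
    {
        if (rec_len < 3)
            return -1;                       /* too short to even carry a length field */

        uint16_t payload_len = (uint16_t)((rec[1] << 8) | rec[2]);

        /* The missing check: type byte + length bytes + payload + 16 bytes of
           minimum padding must fit inside the record actually received.
           Without it, the response echoed up to 64KB of adjacent heap memory. */
        if (1 + 2 + (size_t)payload_len + 16 > rec_len)
            return -1;                       /* silently discard (RFC 6520) */

        if ((size_t)payload_len > resp_cap)
            return -1;

        memcpy(resp, rec + 3, payload_len);  /* the copy is now bounded by real data */
        return (int)payload_len;
    }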


So your argument is "If the NSA is any good, they must know about every vulnerability in the world before the private sector- otherwise they are incompetent"?


I don't believe the NSA are incompetent. I believe they are malevolent, in that they prioritize spying on American citizens (indirectly, of course) over finding and fixing these kinds of vulnerabilities. They expend vastly more on spying, cataloging, and archiving information about people than they do on figuring out when the software that runs the Internet is vulnerable.

If they are to exist on the taxpayer dime, they should serve the taxpayer first and foremost. They like to brag about this kind of thing, because it makes them look good (and maybe allows them to believe they are "good people", as I often see folks who defend the NSA say of them) but it isn't actually their priority, and it is not something they have a history of doing.

For example, this is the kind of thing the NSA actually spends significant money on: https://en.wikipedia.org/wiki/Utah_Data_Center

That same money could have employed a number of excellent security researchers, and paid for the tools they need, to help make the Internet a safer, more secure place for all Americans.

In short, this kind of article is misdirection, and people in our industry should not be snowed by those tactics. We should know better, based on their historical behavior.


With the amount of money they spend, I would suggest that the US taxpayer should get some value out of it.

So while not every bug, at least a lot of them.


Right. I agree it's not reasonable to expect any one entity to know every possible vulnerability in the world. But with all the money and brains they have, how huge this particular target was (for both defensive and offensive purposes), and how high the stakes are (as they remind us whenever we challenge them), you'd think they'd have static code analysis flagging these kinds of things the moment they're checked into a public repository of any notoriety, fuzzers running 24/7 (see the sketch below), "shadow" code review of things like OpenSSL by their best hackers, etc. At that point it does almost become incompetence that they either don't have that or that it failed. At least, I'm wondering what all that money is for.
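
To make "fuzzers running 24/7" concrete, here is a minimal libFuzzer-style harness sketch; process_heartbeat is a hypothetical stand-in for a record parser under test, not a real OpenSSL entry point:

    /* Build: clang -g -fsanitize=fuzzer,address harness.c parser.c */
    #include <stddef.h>
    #include <stdint.h>

    /* hypothetical parser under test, linked in from the fuzzed code */
    int process_heartbeat(const uint8_t *rec, size_t rec_len,
                          uint8_t *resp, size_t resp_cap);

    int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size)
    {
        uint8_t resp[65536];
        /* Feed raw attacker-shaped records to the parser; AddressSanitizer
           aborts on the first out-of-bounds read, which is exactly how a
           Heartbleed-style over-read would surface. */
        process_heartbeat(data, size, resp, sizeof resp);
        return 0;   /* non-crashing inputs are simply uninteresting */
    }

Tooling like this is cheap relative to the stakes, which is rather the point.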

No, it doesn't really prove incompetence, but on the other hand "we released a trivial patch for an issue the private sector already found and fixed"* doesn't prove competence either, or inspire any sympathy from me. These are the types of people who trot out imagery like "cyber Pearl Harbor" when they're telling us why they need to be granted more power to protect us, yet their go-to public example of fixing a critical vulnerability in the national digital infrastructure is laughably irrelevant.

* Depending on what time of day it was, the official fix may or may not have been publicly released when NSA did whatever they're claiming they did.


> Facebook and Microsoft donate $US15,000 to Neel Mehta via the Internet Bug Bounty program for finding the OpenSSL bug.

Hm. I didn't know that MS was involved in this as well, since they're not really affected by it.


They run a bunch of Linux servers (Azure), so they were affected.


There were extremely few instances where Microsoft was affected by Heartbleed; it was running very few Linux servers (in noncritical functions). These very few instances were enumerated and closed almost immediately. There is only one team I know of that was affected; however, a scan was done to confirm this.

Patching for customers in Azure can be done through normal OS channels, though patches can be forced and base images can be modified to close holes (this was done).


How come MS runs Linux on some Azure servers?


MS's customers can run Linux on Azure. Although I'm sure somewhere out there MS itself must have had a Linux server affected by this...


This is correct. The number of Linux machines, even counting firmware and other devices, is extremely low.


Astonishingly, they cater to the needs of their customers and fully support Linux.


Because some people want Linux and will pay them for Linux hosting.


I've talked to some employees of TrueSec (the firm that just discovered the Yosemite "rootpipe" exploit) and they told me that many security research firms are "on retainer" with intelligence agencies. They didn't share specifics, but they said that they have friends who are paid to share exploits with, say, British Intelligence and not to report them to the affected vendor.

I assume this is why Stuxnet had so many zero-day exploits in it. The agencies behind it had security firms feeding them.

http://www.wired.com/2014/11/countdown-to-zero-day-stuxnet/


Most security research firms have no links whatsoever to intelligence agencies, but every security researcher in the world gossips about those links. Clearly some do, though.

I have no firsthand knowledge of how these connections work (the gossip I hear tends to involve firms proffering vulnerabilities to middleman "commercial" firms --- not ZDI, by the way --- but who knows?). But I'm skeptical of the idea that security research firms are a real feeder for vulnerability intel to NSA, because based on the people NSA spits back out into commercial industry, they appear to have a very, very capable internal research staff.


But isn't that exactly what a security researcher with links to intelligence agencies _would_ say ;)


Yes.

The market for 0days has been cooling off in recent years, but for a good decade there you could sell 0days, even mediocre ones, for six digits. Nowadays you'll need a pretty good vuln for six digits, and something pretty stellar for seven (this isn't unheard of). ZDI, FrSIRT, and others got into the game as middlemen. They allow(ed) you not to know who the final purchaser is, and would let you sell 0days that may or may not be interesting to a government entity; in that case ZDI, etc. would swallow the cost.

Edit: http://www.vupen.com/english/services/lea-index.php


I'm not sure I agree with the ethics of selling security exploits, but it's a world I know very little about. You seem knowledgeable; is there any more information you could share about the business and culture of selling 0days? For example, how many people are doing that full time now (dozens or thousands)? Why do you think the market slowed down? And do you think most security hackers try to sell exploits to the responsible parties, or do they expect most of them to go to governments and other black hats?

Sorry for the questions, no contact info in your profile. Answers from anyone would also be appreciated.


The people who know don't talk and the people who talk don't know...


What is the least interesting vulnerability whose sale you have firsthand knowledge of that fetched more than $20,000?

Have you personally ever sold a vulnerability?


FWIW I don't agree with the assessment of the OP. While there are some security firms that do have contracts, the vast majority of NSA capability is internally developed (or developed under contract by defence contractors).

As for the "market assessment" I find it implausible. It seems to be based on the assumption that the demand for capabilities has decreased over time while the availability of good bugs has increased. This is at odds with reality.


I believe you on this a lot more than I believe anonymous employees of a firm known principally for giving a brand name to an Admin->Root privilege escalation bug.


More defence contractors than internal, I'd understood?


I've contacted independent brokers before, many of whom allegedly resell primarily to "the US government". When asked about a hypothetical Tor 0day, they quoted a price of $150,000 before their 20% broker fee. So I'm not sure that six figures for most trivial vulnerabilities fits the market.

(And obviously, I don't have such a 0day to sell, so I can't prove that they would actually pay up.)


People believe a lot of weird stuff about vulnerability prices. For instance, any time a post hits HN about someone getting a $500 bounty for an XSS, there's always a post or two saying that's a rip-off compared to the tens of thousands of dollars it would fetch on the black market --- as if single-site XSS vulnerabilities with a half-life measured in minutes were worth huge amounts of money.


I commented on this back when Facebook paid out $30k for that RCE. I know the Facebook security team points to this comment all the time. Maybe more people should read it:

https://news.ycombinator.com/item?id=7106953


Oh we're not talking trivial bugs or single-site XSS.

Disappointed that 'mediocre' vulns got interpreted in this thread as 'trivial'.

Mediocre doesn't mean trivial, extremely scoped, or useless. Mediocre means that it is for sensitive but not widely deployed software; or for widely deployed software on a default config but post-auth or unreliable; or reliable and yielding high auth but requiring pairing with another vuln (e.g. a memory disclosure) or extended recon (revision number, etc.).

A MySQL bug affecting recent revisions that causes arbitrary file overwrites with semi-controlled content, but that requires unprivileged (guest) auth, would meet these criteria.

Apologies for the confusion with the word 'mediocre' - I figured people here would know.

In general, organizations in the offensive world will pay more than those in the defensive world. This is not a hard and fast rule, but mostly it is the case that offensive network operations stand to gain more from the use of 0days than vendors stand to lose by not paying for the disclosure to patch them. It's not really a good calculus to use data from vendor payouts to estimate the other market.


A post-auth MySQL bug sold for five figures?! Why? How does anyone make money with that bug?


It's worth five figures to the buyer if they can make five figures or more of value from it.

Not speculating about nation states here, but 'groups': making good money from post-auth MySQL RCE is not totally absurd. Amazon, Rackspace, HP, Heroku, and Jelastic all offer MySQL-as-a-service, where you are given low-privilege (maintained, geo-redundant, etc.) account access to a shared MySQL instance. If there's more than five digits of business value stored in that database, then a five-digit exploit makes sense.

Or think about any of the (poorly written) bitcoin services out there that use some default phpAdmin creds for a database that also hosts their vault.


I would be quite happy with a $500 bounty for a trivial XSS bug. That pays for two days of work for me. :D


Six digits sounds about right for a Tor bug for one target, depending on the specifics. The RCE bug used by the FBI recently against the Tor Browser Bundle would have cost something similar, though the payload suspended the process when it could have resumed it silently. It's not clear where that exploit was developed (my gut says in-house, but who knows?).


IIRC someone analyzed the payload, compared it to Meterpreter, and saw a lot of similarities. It could have been provided (hacked together?) by the person who sold the vuln.


Then where are the fixes? They spend untold billions per year... yet their visible bugfix output is very low, including in low-level cryptographic domains where you might expect them to be powerhouses.

The claim makes me think either that they're lying about sharing what they find (whether intentionally or via institutional stupidity), or that they're really inept and not finding much at all compared to far less well-funded OSS developers and industry participants.

It would be interesting for someone to set up a scorecard site documenting the NSA's infosec contributions.


Bug volume in crypto is also very low, and the "fixes" to major crypto bugs tend to take the form of entirely new constructions... which users are not happy to get from NSA (this was a problem even in the 1970s!)

So I'm not sure this is a valid critique.


Bug volume in crypto is extremely high. How many developers reuse IVs in stream ciphers? How many blindly use AES or some other symmetric primitive and then build in no authentication whatsoever? How many antiquated implementations of RSA are used in practice today (see the recent Bleichenbacher flaw in NSS)? How many times are poor chaining modes for block ciphers chosen? How many implementations of [anything] fail on edge cases (elliptic curves) or massively leak through side channels? How many DH-family protocols miss checks for identity inputs?

The answer is a lot.
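
To make the first of those concrete, here is a minimal sketch (assuming OpenSSL's EVP API and AES-128-CTR as the stream mode; compile with -lcrypto) of why reusing a key+IV pair is fatal: XORing the two ciphertexts cancels the keystream entirely.

    #include <openssl/evp.h>
    #include <stdio.h>

    static void ctr_encrypt(const unsigned char *key, const unsigned char *iv,
                            const unsigned char *pt, int len, unsigned char *ct)
    {
        EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
        int outl;
        EVP_EncryptInit_ex(ctx, EVP_aes_128_ctr(), NULL, key, iv);
        EVP_EncryptUpdate(ctx, ct, &outl, pt, len);  /* CTR is a stream mode: no padding */
        EVP_CIPHER_CTX_free(ctx);
    }

    int main(void)
    {
        unsigned char key[16] = "0123456789abcdef";
        unsigned char iv[16]  = {0};              /* the sin: a fixed/reused IV */
        unsigned char m1[17]  = "attack at dawn!!";
        unsigned char m2[17]  = "retreat at noon!";
        unsigned char c1[16], c2[16];

        ctr_encrypt(key, iv, m1, 16, c1);
        ctr_encrypt(key, iv, m2, 16, c2);         /* same key, same IV */

        /* c1 XOR c2 == m1 XOR m2: the keystream cancels, so knowing (or
           guessing) one plaintext reveals the other with no key at all. */
        for (int i = 0; i < 16; i++)
            putchar((c1[i] ^ c2[i]) ^ m1[i]);     /* prints m2 */
        putchar('\n');
        return 0;
    }

No attack on AES itself is involved; the failure is entirely in how it was used, which is why these bugs are so common.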


You and I mean different things by "crypto vulnerabilities". I took the parent comment to mean things like the RC4 biases; like I said, things for which the "fix" would involve entirely new algorithms or constructions. An example of this kind of NSA disclosure would be the DES s-boxes.

Crypto software implementation vulnerabilities are very common, but the kinds of things you're talking about are most often found in obscure and/or serverside software. Look at the tempo at which bugs like the NSS e=3 bug are released; it's like once or twice a year.


I think implementation bugs are within the spirit of the OP, especially given that the NSA claims to have provided an implementation fix for Heartbleed.

The sorts of bugs I'm talking about exist in client-side and popular software. As far as tempo is concerned, this year alone has given us BERserk, gotofail, the Android Master Key, OpenSSL fork(), Bitcoin's use of P256, the GnuTLS X.509 parsing bug, the OpenSSL compiler-optimization+processor-family randomness bug, and others.

If we were to entertain OP's point maybe there would be a faster tempo if the NSA were helping out. :)


Sure, if this is what we mean by the kinds of cryptography bugs NSA is a powerhouse at, I'm sure they could be leaking more of them to industry.


It's not the NSA that is publishing cryptanalysis of proposed constructions with any frequency compared to industry/academia. Considering the number of mathematicians they employ and their focus on cryptography this is more than a little surprising.

But I also meant it more broadly than just attacks on constructions: implementations of cryptosystems are often flawed in low-level ways that people without special expertise are unlikely to notice, both from a design perspective (any of the great many protocol design flaws in TLS that have turned around and bitten us) and in straightforward coding (e.g. it wasn't the NSA that reported that reference implementations of Curve25519 had broken carry propagation).


There was the SHA0 -> SHA1 thing. Which mostly illustrates what you say here, of course, just seemed to deserve mention.


Well, there was that time a tor developer claimed the NSA was leaking him fixes... https://news.ycombinator.com/item?id=8210319


I assume they do share the majority of vulnerabilities they find, but keep the top 2% for their own use. By top 2% I mean the vulnerabilities that are highly unlikely to be discovered by other nations.

The other 98% of zero days that are easy enough to stumble upon by foreign cyber units might be better to disclose and get fixed.

If the NSA can find a bug relatively easily, then we can assume China (for example) might be able to as well. Getting those bugs fixed is a big gain for national security, although it boosts the security of all nations too.


I don't buy it. I think he's just trying to make NSA look "useful" to private companies, so they support laws like CISA, where sharing between NSA and tech companies will be forced.


So they share a "vast majority," but still keep many to themselves because they don't think anyone else is persistent enough to find them ("How likely are others to find it?"), in addition to telling a lie about developing the fix for Heartbleed. Nothing new here.


Wasn't that developed by Adam Langley? (Google employee.)


It may or may not be true; having repeatedly lied in the recent past, they have lost all credibility.


That must be an interesting decision to make. Presumably, they'd only keep quiet about bugs they were fairly confident unfriendly governments weren't also likely to find and exploit. I assume they'd also launch a big honeypot, so if someone else started to exploit a bug they could change strategies.

It's the New New Great Game.


They've been lying so much at this point that I don't know why it matters what they say. They were so caught up in seizing power, with no regard for the consequences, that they forgot that trust is precious and fleeting, and far more valuable than anything gained by lying about your law-breaking.


So, the NSA are, to a person, lying sacks of shit. After their director's performance in front of Congress -- the "least untruthful answer possible" -- don't trust a damn word.

So when they say they share, with whom? And with what priority? Ooh, you found a documentation bug; is that the one you chose to share? The more severe a bug is, the more useful it is to them.

There's a million ways to parse this bullshit that come down to mean they're doing what they did all along but better at lying in public.


I want to be more specific about the concept of sharing.

There are two uses of "sharing" in the response. One was sharing of vulnerabilities. It was never clear who it was shared with. It could be with the DoD, with GCHQ, with private contractors under contract with the NSA, etc. and still be counted as "sharing."

The other is an example of sharing one patch, for the Bash vulnerability, with the private sector.

I think we are supposed to connect those two pieces of data, but there's no reason to do so.


The problem is... I DO NOT TRUST YOU.


lol, "vast majority". That's like telling your spouse, "Most of the time, I don't cheat on you." Full disclosure: I didn't read the article.


Do they disclose them before or after they exploit them?


I'm sure they share them as soon as they discover someone else is aware of the same exploits.


Director of lying overreaching unconstitutional spy agency makes unprovable claim that it's not stockpiling digital weapons. I think I'll skip reading this one.


So how often do they share bugs? Has anyone seen it happen with any regularity?



