I don't have the reverse engineering skills/IDA Pro license to verify this, but fwiw I trust and respect this person's skills.
But let's do a thought experiment.
1. How much would the NSA gain from pressuring Microsoft into backdooring (or as they say, "enabling") Windows, in terms of systems they could not access before that they can access now?
2. How much would it cost the NSA, in terms of effort, good will, and exposure to risk by the people at Microsoft who would know about the backdoor and may leak or abuse it? How bad would it be if the public got wind of it? How hard would it be to keep it secret over the years, especially as engineers moved around to other companies? Would they have to involve foreign nationals on the dev team? Could they be trusted not to warn their governments?
3. How many times could they abuse their backdoor before it was obvious Windows couldn't be trusted? When/if that happened, what would be the damage to the US economy, and to their ability to penetrate systems?
When I put myself in the shoes of DIRNSA and ask myself these questions, backdooring Windows (at least through official channels, like _NSAKEY supposedly is) seems like an insane proposition.
1 - Privileged access to the dominant consumer operating system, also used by many corporations likely to be targeted.
2 - Minimal effort cost. Good will cost seems like something NSA ignores. Exposure to risk seems minimal given the existence of NDA contracts.
3 - I think anyone who isn't deluded and/or a member of the "nothing to hide; nothing to fear" camp already knows you can't trust Windows. The damage to the US economy seems minimal in light of the Snowden leaks that implicate nearly every US-based technology company, and Microsoft is investing heavily in things like the Xbox to diversify their revenue streams. I don't think there'd be any fallout worth mentioning, tbh.
When I put myself into the shoes of the DIRNSA and ask myself these questions, backdooring Windows seems like an obvious "Yes".
_NSAKEY may very well not be a backdoor, but I find the suggestion that Windows doesn't contain one to be laughably naive.
Why would they backdoor Windows, when apparently they could just buy an exploit for $X00k? Buying an exploit seems to serve all those same factors, at a similar price, while making it much harder to point a finger at the NSA when it eventually gets discovered.
It's probably a safe assumption that if someone is found using a backdoor in Windows, it was the US Government that put it there. If it's an exploit, it's a hell of a lot harder to point that finger at anyone in particular.
Realistically, a backdoor is the worst option for the NSA. A backdoor would be known by the people who implemented it, who, assuming it's a cooperative venture, would most likely be at the company itself. A backdoor would also be most likely living in the real codebase, able to be discovered by others, and, if somehow it leaks, it'll point directly at the NSA.
An exploit does not live in the codebase, could be blamed on others, and will produce the same results.
That way when someone finds it, they could go "oops. thanks for pointing this vulnerability out for us. Will fix"
But NSAKEY isn't that.
It is worth it for the NSA to backdoor crypto because, if the implementation is solid and the keys can't be stolen, then no matter how many resources they pour into cracking it the math will remain inflexible.
There is no computer in the world with that kind of security. If it is smartly configured, they'll use 0day. If it's air-gapped, they'll compromise a sysadmin's computer and wait for them to connect to it (think Stuxnet). If that isn't feasible, they'll walk into the data center and put malicious hardware into a PCI slot.
If we persist in thinking of the NSA as a boogeyman logging every packet and backdooring every OS, rather than discussing their real capabilities and motivations - what they are, what they should be - we will become paralyzed to act against them, they will continue to operate without meaningful oversight, and our rights to privacy and to secure software will languish.
Saying that the NSA backdoored windows is not a boogeyman type claim; it's exactly within their real capabilities and seems like a very plausible path for them to have taken.
Nor does it mean we can't fight against it. We can use OpenBSD and have a higher confidence that it's not backdoored.
The first step to reeling in the unchecked power of the NSA is not to claim that they would not have done such a simple thing, but to realize exactly how atrocious the scope of their acts is - not to become paralyzed with fear, but to incite change.
Why? As a non-technical user, from my POV I'm simply trading trust that the NSA hasn't backdoored MS for trust that your authority, or de Raadt's, is meaningful. I can't review the source code I'm running (without a prohibitively large time investment), and as we saw with Heartbleed, the "many eyes" theory is flawed as well.
As an individual, non-technical user I have no reason to be any more confident in OpenBSD than in Windows. At some point you have to rely on a chain of trust (or develop the silicon yourself), and I view the "NSA paid/forced MS" boogeyman as just as likely as the "NSA paid/forced OpenSSL" to merge Heartbleed. Am I to believe that the NSA gagged the thousand or so developers who work on Windows, or just the 10 who manage OpenSSL?
The parent post has a very important point, and the history better aligns with what he/she said. The NSA didn't coerce Google into giving up user data - they simply took advantage of the fact that Google's inter-DC traffic was unencrypted and used their resources to attack it. It didn't take a secret court, nor did it take a gag order. Google experienced an attack that could have been carried out by anyone dedicated enough - government or blackhat - and it's likely that keeping your software secure against such attacks is very effective at protecting user privacy.
I don't think Heartbleed counts as some sort of evidence against the "many eyes" paradigm. There are so many better bugs for that, as Heartbleed is really low hanging fruit. OpenSSL is a total nightmare. I've posted elsewhere about this at length - but in short OpenSSL is really an example of what a good program _shouldn't_ do. How a good program _shouldn't_ be written. There is a list of sins a mile long on http://opensslrampage.org/.
The truth is that there is no guarantee that Windows, Linux, or BSD are not backdoored by the NSA, GCHQ, or FSB. There's no guarantee you didn't get owned and Chuck Blackhat installed a backdoor on your computer. The real reason to use OpenBSD is because it's had fewer remote exploits in the past 15 years than Windows has had in the past year. The real reason to use Linux and BSD is because that software respects your freedom. If you don't care about things like software freedom, or if you feel the security of Windows is "good enough" for what you're doing, then of course you don't care about Linux and BSD.
> I have no reason to be any more confident in OpenBSD than in Windows
Past statistics show that OpenBSD is safer. It's had far fewer security issues and has a much cleaner codebase.
If you don't place faith in past statistics then you're willfully ignoring the best means of predicting future behavior.
In addition, OpenBSD has far fewer lines of code, and the most reliable correlation with security holes is lines of code. Simply by having fewer LoC, OpenBSD is already statistically less likely to contain a security hole.
> chain of trust
Yeah, with Microsoft your chain of trust is Microsoft employees and the word of other people reverse engineering the code (e.g. the people who said the _NSAKEY thing was legit after reverse engineering a small portion of the code).
With OpenBSD your chain of trust includes me, the developers, and other eyes that have looked at the code. The "many eyes" theory is not flawed. It never stated that having many eyes eliminates all bugs, merely that it's better to have more eyes than fewer eyes and increases the chance a bug is noticed. There's no sane way to argue against that statement unless you turn it into a ridiculous strawman of "many eyes means heartbleed couldn't have happened QED".
> Am I to believe that the NSA gagged the thousand or so developers who work on Windows, or just the 10 who manage OpenSSL
It's much easier to believe that the NSA could gag one or two of a thousand developers than one or two of ten. Believe me, you don't have to get all MS employees to futz with Windows security. Just getting one at random already gives you a decent probability of getting a kernel-level exploit, and selecting five or so specific employees can get you a hell of a lot more.
> the "NSA paid/forced MS" boogeyman
Evidence in this post-Snowden era indicates the NSA has worked to backdoor commercial software. It's also quite possible Heartbleed was an NSA-inspired hole, though I don't think that would be a productive discussion to have.
If you read leaked NSA slides and look at what they have done (such as the Verizon MITM closet) then backdooring operating systems is not a bogeyman, it's quite reasonable.
You cite that they have intercepted data without the consent of the parties involved, but that ignores the fact that they also coerce parties as well; just because they have used the tactic you mention does not mean it's the only tactic they use.
If you're going to argue that BSD is no more secure than Windows and the NSA is not in fact using gag-orders and subverting software you'll need a heck of a better argument.
You drastically overestimate the general public's perception. I guarantee you that if you go down the street and ask random people if they trust Windows, they will say yes (or no, and say they trust OSX instead). In HN-land, sure, that can be assumed, but I highly doubt that viewpoint is shared outside tech circles.
"Microsoft said that the key's symbol was "_NSAKEY" because the NSA is the technical review authority for U.S. export controls, and the key ensures compliance with U.S. export laws."
No sense at all. But if you read between the lines, it's the key without which Windows couldn't be exported. So it's a key which at least allows weaker encryption outside of the US.
"The keys in question are the ones that allow us to ensure compliance with the NSA's technical review."
Very clear. It was there because otherwise Windows wasn't compliant according to the NSA.
The only thing the NSA could have worried about in commercial products was the same concern that made Lotus implement what we read about here.
Ask Schneier if he would now, after Snowden, react the same as in 1999 while writing about it.
At the same time as _NSAKEY, there was another bizarre mechanism in the '90s called "server-gated cryptography". US law was that exported cryptography could not be stronger than 40-bits, but there was an exception for financial organizations. The implementation was that certain CAs were trusted to verify whether their customer was in fact a financial organization (trusted in the sense that, if the browsers decided wrong, they were violating munitions control laws...) and could place a special extension in the certificate. If that certificate was present, export browsers would negotiate 128-bit cipher suites; otherwise they would only negotiate 40-bit cipher suites.
This mechanism, incidentally, blew up in our collective faces two weeks ago under the name "FREAK", and there was a lot of talk about whether the NSA's meddling was appropriate.
But where's the backdoor? In this case, it is the presence of certain CA keys that allows strong crypto, and their absence weakens it. _NSAKEY has the same goal, but it's just done in reverse. So calling the key a backdoor is not very meaningful, since in the SGC case, we'd have to call the absence of a key a backdoor.
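That step-up decision amounts to a few lines of logic. Here's an illustrative sketch of the export browser's choice as described above; the function and flag names are mine, not from any real browser codebase:

```python
# Hypothetical model of 1990s server-gated cryptography (SGC) negotiation.
# Names are illustrative; this is not actual browser code.

def max_export_strength(cert_has_sgc_extension: bool) -> int:
    """Return the strongest cipher strength (in bits) an export-grade
    browser would negotiate under the export rules described above."""
    if cert_has_sgc_extension:
        # A trusted CA vouched that the server is a financial
        # organization, so the browser may step up to strong ciphers.
        return 128
    # Otherwise the browser is limited to export-grade 40-bit ciphers.
    return 40

print(max_export_strength(True))   # 128
print(max_export_strength(False))  # 40
```

The point of the sketch: the *presence* of the CA-vouched extension is what enables strong crypto, which is why FREAK-style downgrade bugs in this path were so damaging.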
This is a very different sort of thing from the Lotus escrow business in this article, where the software silently encrypts the data to a public key owned by the NSA. The Windows _NSAKEY is just a signing key, and in US versions of the software, _KEY is also allowed to sign all the same things. Nothing is ever encrypted to _NSAKEY.
Or, in other words, the presence of _NSAKEY in US versions of the software cannot possibly weaken anyone's security.
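The "cannot weaken security" argument can be made concrete with a toy model. A signed crypto module is accepted if its signature verifies under *any* trusted root, so adding a second root key only adds a possible signer; nothing is ever encrypted to it. The names below are hypothetical; this is not the actual CryptoAPI logic:

```python
# Toy model of dual-root signature checking (names hypothetical).

def verifies(signature: str, key: str) -> bool:
    # Stand-in for real RSA signature verification.
    return signature == f"signed-by-{key}"

def module_trusted(signature: str, trusted_roots) -> bool:
    # A module loads if ANY trusted root verifies its signature.
    return any(verifies(signature, k) for k in trusted_roots)

roots = ["_KEY", "_NSAKEY"]
print(module_trusted("signed-by-_KEY", roots))          # True
print(module_trusted("signed-by-someone-else", roots))  # False
```

A second verification key widens *who can sign modules*; it can never expose existing plaintext, which is the sense in which _NSAKEY differs from the Lotus escrow scheme.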
If there is a backdoor here at all, it is the entire system of export controls for crypto. (Which everyone knew about because it was literally the law, so calling it a "backdoor" is sorta like calling Wikipedia's edit-this-page button a "security vulnerability".) All of this was very different from the Lotus backdoor described in the article.
The catch 22 is not a catch 22, the whole system is a catch 22, therefore don't ever call the catch 22 the catch 22.
I was under the impression that the NSA created a public/private key pair and gave this public key to Lotus Notes to use.
It has long been speculated that the NSA can factor 1024-bit RSA (or DHE) using custom hardware, which is why in protocols like TLS and SSH the current recommendation is for keys, certificates, and Diffie-Hellman key exchanges to be at least as strong as RSA-2048 (e.g. 256-bit elliptic curve crypto is strong enough) [0].
0 - https://en.wikipedia.org/wiki/RSA_numbers#RSA-768
If you only have access to commodity hardware, then GPUs would probably be better. Xeon Phi is also insanely cheap right now and you can get a 57-core card for around $200, but I don't have clear performance data for it. I know that for Bitcoin mining it's comparable to an R9 290/295X or so, but with much lower power consumption; I also suspect that due to its relatively low market cap it's fairly poorly optimized atm.
The NSA and large private organizations most likely use specially designed hardware rather than commodity hardware, and surely not FPGAs.
For private individuals, the most cost-effective way to factor a single key these days is probably renting EC2 GPU instances (CUDA) from Amazon; at about 70 cents an hour, you should be able to factor 512-bit keys for $75-150 (based on confirmed reports).
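As a back-of-the-envelope check on that figure (taking the $0.70/hour rate from above as given), the quoted budget buys roughly 100-200 instance-hours:

```python
# Sanity check on the EC2 cost estimate above: how many GPU-instance
# hours does a $75-$150 budget buy at ~$0.70/hour?
hourly_rate = 0.70

for budget in (75, 150):
    hours = budget / hourly_rate
    print(f"${budget} -> ~{hours:.0f} instance-hours "
          f"(~{hours / 24:.1f} days on a single instance)")
```

So the claim implies a 512-bit factorization finishing in well under ten single-instance days, or much faster if the work is spread across many instances in parallel.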
1024-bit might also be in reach; however, it will require a sizable budget.
Based on the current development of "auxiliary" processing components, whether GPU-based compute cards or more traditional but highly threaded processing cards like the Xeon Phi, it would not surprise me if 1024- or even 2048-bit keys become easy to factor before 2020.
My current bet is that 1024 will be achievable on EC2 or a similar service by late 2016 to mid 2017.
NIST has disallowed 1024-bit since 2014, and going by its previous deprecations, keys of a deprecated size were mostly factored within two years of the final deprecation notice.
It's also quite important to point out that there are quite a few "weak" RSA keys out there, and there's a good chance that the NSA and similar organizations have the capability to factor certain keys, probably up to and including 2048-bit.
tptacek said it's extraordinarily unlikely the NSA can scalably factor RSA-1024 today [1].
Look at how difficulty increases in the YAFU-with-GGNFS benchmark on Wikipedia [2].
1 - https://news.ycombinator.com/item?id=8844239
2 - https://en.wikipedia.org/wiki/RSA_%28cryptosystem%29#Integer...
NSA can virtually certainly target a specific, hardcoded 1024 bit key and break it. In fact, leaving out the cost and difficulty of recruiting the team to actually put the pieces together, the typical California venture capital firm has the resources to build a machine to do that today. Eran Tromer put the cost of such a machine in the single-digit millions, many years ago.
Apropos of nothing: the gap between a 1024-bit key and a 2048-bit key is enormous. Whatever allows the NSA to meaningfully attack a 2048-bit key is likely to take RSA out altogether (and with it probably multiplicative finite field --- i.e., "conventional" --- Diffie-Hellman).
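The size of that gap can be sketched with the standard GNFS heuristic running time, L_n[1/3, (64/9)^(1/3)]. This ignores all real-world constants and memory costs, so treat it only as a rough scaling argument:

```python
import math

def gnfs_cost(bits: int) -> float:
    """Heuristic GNFS work factor for an n-bit modulus:
    exp(((64/9)^(1/3)) * (ln n)^(1/3) * (ln ln n)^(2/3))."""
    ln_n = bits * math.log(2)          # ln(n) for an n ~ 2^bits modulus
    c = (64 / 9) ** (1 / 3)
    return math.exp(c * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3))

# Asymptotic work ratio between 2048-bit and 1024-bit moduli.
ratio = gnfs_cost(2048) / gnfs_cost(1024)
print(f"2048-bit / 1024-bit GNFS work ratio: ~{ratio:.1e}")
```

Under this heuristic, a 2048-bit modulus costs on the order of a billion times the work of a 1024-bit one, which is why an attack that makes 2048-bit RSA tractable would have to be a fundamentally new idea rather than more hardware.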
Also, there's the possibility that even if a product uses a 2048 bit RSA key it might have been weakened.
"As of 2010, the largest factored RSA number was 768 bits long"
I read this as "today, 1024 bit factorizations are possible"