It's common for people to suggest that the NSA is 20 years ahead of the private sector, but it's not clear how true this is. That number is commonly cited because of changes the NSA made to the DES S-boxes, which suggested (and Don Coppersmith later openly confirmed) that they knew about differential cryptanalysis roughly 20 years before Biham and Shamir published anything about it.
However, that was long before cryptography became as popular a research topic outside of government circles. There are now many other places to go to do that type of research, and based on the public payscale data available, almost all of them pay better than the NSA. Recent developments, such as their Dual Counter Mode proposal in 2001, suggest that they might not be as far ahead as they once were.
This could be little more than a way of making the purchase of a bunch of D-Wave boxes sound more compelling than it might really be at this time. The problem with secret organizations with no oversight is that we'll probably never know for sure.
After all, it would be fairly damning if the NSA were defending a high crypto budget by filling the document with hyperbole, just as it would be a trifle suspicious if they suddenly cut the department to a couple of well-fed nerds and a year's supply of sharp pencils and Mountain Dew.
The meagre sentences in this document seem to sit in the bureaucratic sweet spot of positive but vague.
Also, the proof-of-work is completed when you find a hash that begins with a specified number of zeros, and not when you reach a specific hash.
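A minimal sketch of that in Python, assuming a simplified Bitcoin-style scheme (real Bitcoin compares a double-SHA256 digest against a numeric target, but "leading zeros" captures the idea):

```python
import hashlib

def proof_of_work(data: bytes, difficulty: int) -> int:
    """Find a nonce such that sha256(data + nonce) starts with
    `difficulty` hex zeros. Any digest with that many leading zeros
    wins -- there is no single target hash to reach."""
    nonce = 0
    while True:
        digest = hashlib.sha256(data + str(nonce).encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce
        nonce += 1

# Each extra zero multiplies the expected work by 16 (hex digits),
# which is how difficulty is tuned without changing the scheme.
nonce = proof_of_work(b"block header", 4)
```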
The Trusted Foundry Program (TFP) was established as a joint effort between Department of Defense and National Security Agency ... in response to Deputy Secretary of Defense Paul Wolfowitz’s 2003 Defense Trusted IC Strategy memo
- Program is administered by NSA’s Trusted Access Program Office (TAPO)
- DoD component resides in the Office of the Secretary of Defense, ASD R&E and is managed by Defense Microelectronics Activity (DMEA)
By the end of the program in FY2013, DoD will have invested >$700M to ensure access to microelectronics services and manufacturing for a wide array of devices with feature sizes down to 32nm on 300mm wafers.
Program Provides National Security And Defense Programs With Access To Semiconductor Integrated Circuits From Secure Sources
Info on Fort Meade: http://news.google.com/newspapers?nid=110&dat=19890417&id=Ye...
San Antonio former Sony plant: http://www.chron.com/news/houston-texas/houston/article/NSA-...
Twenty-five years of Moore's law later, the price and complexity of an operational chip foundry have doubled along with transistor counts. The former Sony plant is 633,000 ft2, reportedly valued at $72M and being leased at $35/ft2.
I don't see any evidence they're using it for anything other than datacenter and office space.
How would you reasonably estimate the NSA's academic prowess when it comes to crypto/codebreaking? How does this compare to the state-of-the-art in academia?
I am less interested (though still interested) in knowing or estimating how such a budget helps in deploying and building systems to collect and store intelligence information. I also wouldn't be surprised if they are fairly ahead of the curve in terms of zero-day exploits and the like.
The thing that most makes me curious is in terms of pure mathematics (like better factoring algorithms, better predictors of pseudorandomness, weaknesses in commonly used parameters or poor choice of certain elliptic curves, say).
There are a few interesting pieces of information that would help:
-- a rough headcount of people in positions comparable to post-docs and professors in the worldwide crypto/math community. I find it incredibly hard to imagine a mathematical breakthrough without significant training and continuous involvement in the research community. Do they get a fair share of the best mathematicians out there? Does the pay make it a fairly attractive destination?
-- whether or not they have an active "collaboration" and "internal publication" environment, comparable, once again, to the dozen or so reputable conferences occurring yearly that allow hundreds of people to exchange interesting ideas, with the express incentive for attending being discussion and collaboration.
-- how much we can extrapolate from just 2 instances (from what I know) of the NSA being a decade or more ahead of the curve (the DES S-box and public-key cryptosystem examples). What's a reasonable way, as outsiders, of starting to get a legitimate guess? Do we have more examples or at least hints of such possible examples?
(ps: I see after posting this that several other people have raised similar points in the thread. I'd love to learn more about why Bruce Schneier thought that the NSA was decades ahead in 1996 and whether he holds the same opinion in 2013)
The "intelligence community" sponsors a lot of math and fundamental physics work in academia and the national labs. The grants are laundered through an intermediary, so you don't know who, in which agency, is interested in the work. I think of that visible side as one way for them to keep a foot in the door, to check the pace of advances on the unclassified side.
But you're really asking about the classified "shadow" of that open work, which by definition is hard to measure. I think one easy yardstick is "how much a mathematician makes" versus the intelligence budget. This leads me to think that they can hire a lot of mathematicians.
And let's face it, the subject matter is really cool. There's a reason that mathematicians throughout history have been interested in coding. Now imagine someone paying them to do it.
A senior academic I knew (in the information theory area) consulted at the NSA for several summers. His work was done in a Faraday cage, and that's all he would say about it. He was top-notch in his field, and he wouldn't waste his summers with fools. I think that there are lots of similar stories.
My working belief is that, in certain areas, they are way, way ahead of the open state of the art. Like with Keyhole, or tapping fiber optic cables, etc. Not everywhere, but they don't have to be ahead everywhere, just in a few places.
Let's not forget that having friends in government intelligence (people in the shadows with strong soft power) may be quite helpful...
That source says nothing about the NSA. Is it believed the NSA invented it earlier and didn't share with GCHQ?
Back in high school one of my friends had an internship with the NSA. What he was doing is classified, but he is allowed to say that his work got published internally.
What, Wired.com? Much of the email bouncing around the net, and even internal networks, save for traversals like gmail-to-gmail, is NOT part of the encrypted traffic. It's important you report this correctly, because "the masses" are otherwise left unaware of or unconcerned about the implications. "Oh, I'll just email you my passwords; I heard 'those guys' can read SMS."
It's the thought that counts, right?
"The NSA probably possesses cryptographic expertise many years ahead of the public state of the art (in algorithms, but probably not in protocols) and can undoubtedly break many of the systems used in practice."
In 2010, a former NSA technical director mentioned that they had been losing ground to their public counterparts over the previous twenty years, but that they were probably still ahead of the public by a "handful of years".
I think we put too much emphasis on the single data point of GCHQ inventing public-key cryptography early (IIRC)
At least, that's how I'd do it, if I found myself a global superpower after WW2, thanks in large part to superior signals/crypto work, and didn't want any other emergent groups to surprise me from a "higher perch" of signals omniscience.
On the other hand, if you reveal the programs, you lose your job, get cut off from your professional colleagues, and likely go to jail.
If it was a backdoor, you certainly can't count it among the NSA's successes. It is never used.
I don't know what organization spends the most money on cryptanalysis every year, but the NSA's gotta be near the top. It's reasonable to assume they've found important results that the public won't know of for several years.
It's also much easier to invent a cryptosystem than to break one.
I would not be surprised if the NSA had lost its advantage. In bureaucratic systems like this, where the goal (advanced math) is best achieved by exceptionally intelligent people getting lucky, the weight of a large number of mediocre people can wind up becoming a drag instead of an asset.
Other interesting discussions about language usage on the AP blog - http://blog.ap.org/
Nothing really new, just a brief line mentioning groundbreaking capabilities with no explanation.
GS pay tops out around $117k. Is that how much we're paying for top level cryptography research? Do contractors like Snowden get to make $200k because that's just what their consulting firm bills for?
You might be able to do better than that all told on the outside. You might not.
It's "don't have to worry about too much" money (though with inflation it's less that than it used to be), but not "I'm rich" money.
The NSA was conducting operations which many people feel are out of line with the way the American government should operate. Hence, Snowden is generally viewed as a hero rather than a villain.
Even if it was morally justified to leak PRISM and XKeyScore, leaking a detailed breakdown of the budget for the entire intelligence arm of the American government seems dubious. Now every other country knows the lower bound of how much money America invests into intelligence. This document could very well be used as a justification in other countries to convince their politicians to dramatically increase their budget devoted to cyber ops / intelligence.
If you feel the PRISM and XKeyscore leaks were a good thing, you may want to consider whether this latest leak shares the same merits. It seems a difference in kind.
The algorithms rely on assumptions, and they're not at all future-proof.
One is that certain classes of mathematical problems are hard (in RSA, it's factoring large numbers). We don't know whether they're actually hard (there's no proof; it's an open problem).
Another is that the random numbers used in encryption are uniformly distributed and unpredictable. (In RSA, you pick two large prime numbers, p and q. If two people share a number, say my p is the same as your q, then we're both screwed. This particular assumption has already been violated a number of times in the past twenty years, from the Debian OpenSSL bug to the Android/Bitcoin RNG flaw.)
There are many other assumptions (Certificate Authorities can be trusted, etc.) that a paranoid person would have to worry about.
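The shared-prime failure above is also cheap to exploit. A toy sketch in Python (tiny illustrative primes, not the actual Debian or Android attack code):

```python
import math

# If two RSA moduli were generated with bad randomness and happen to
# share a prime factor, a plain GCD recovers it instantly -- no
# factoring breakthrough needed. (Tiny primes for readability; the
# gcd is just as fast on real 2048-bit moduli.)
p = 101                    # the accidentally shared prime
q1, q2 = 103, 107          # each party's other prime
n1, n2 = p * q1, p * q2    # the two public moduli: 10403 and 10807

shared = math.gcd(n1, n2)  # Euclid's algorithm, microseconds
assert shared == p         # both private keys now fall out trivially
```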
I think the new hotness is elliptic curve cryptography (e.g. ECDSA), but I don't understand it well enough to know if it's substantially better than the RSA implementations that are currently popular. I'd say what we have now is like a lock on the door -- it's enough to prevent the neighbour's kid from getting in, but not enough to stop a determined lockpick or the government.
HTTPS, for example, depends on the crypto algorithm implementations, SSL/TLS, the responsible Certificate Authority, your random number generator, your OS, your hardware, and, of course, much the same list for the people at the remote end.
The other part of the problem is that, IIRC, the NSA is one of the largest employers of crypto/number-theoretic mathematicians, and from the article, this program with a 35k headcount probably has a bunch of them. Between them, and compute clusters not implausibly denominated in acres, a teeny tiny little flaw might be enough, if they think you deserve the effort.
 On a tangent, has anyone explored the implications of a "give us some valid certs/signing keys for $whoever and lie to everyone who asks" NSL to one of their domestic CAs? Apart from the EFF SSL-observatory or someone else maybe noticing, of course.
I've been wondering if there's a public registry of certificate fingerprints somewhere to verify you're getting the cert the domain owner knows about.
Certificate Pinning (ship the expected cert/public-key hashes with Chrome/$browser)
HSTS (tell the browser to use only HTTPS for this site for $num days; strictly speaking, the cache-the-cert-and-bitch-vocally-if-it-changes behaviour is key pinning/TOFU, not HSTS itself)
Convergence (Dead?) / TACK (add an independent site-specific key to cross-sign the CA-provided certs, like pinning but more flexible)
And the more passive detection approach I mentioned like the SSL Observatory which looks for "unexpected" changes in certs.
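For illustration, a sketch of the SHA-256 fingerprint computation such a registry or observatory would store and compare; the input bytes here are a placeholder, not a real certificate:

```python
import hashlib

def cert_fingerprint(der_bytes: bytes) -> str:
    """SHA-256 fingerprint in the colon-separated form browsers display.
    This is the value a registry (or the SSL Observatory) would compare
    across vantage points to spot unexpected certificate swaps."""
    digest = hashlib.sha256(der_bytes).hexdigest().upper()
    return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))

# Placeholder bytes standing in for a real DER-encoded certificate; in
# practice you'd fetch one with ssl.get_server_certificate() and convert
# it via ssl.PEM_cert_to_DER_cert() before hashing.
fp = cert_fingerprint(b"placeholder, not a real certificate")
```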
To finally answer your question: no, I don't think there is any such list. The plan is to do essentially that without any centralised bookkeeping (I mean, why trust those guys any more than the CAs? Not to mention it'd be hard to scale).
DNSSEC might have some sort of role there, but I'm sufficiently hazy on how it works, and you're back to trusting your registrars/registries again anyway (see the recent excitement at the NYTimes for why that's not such a great idea)
 https://www.eff.org/observatory (built into HTTPS Everywhere but disabled by default, IIRC)
Generating a new cert from a trusted CA would be caught by EFF's SSL observatory (an optional feature in the HTTPS everywhere extension) and similar efforts.
It would fail if used against a site that has its certificate's CA pinned in the browser, unless the NSA gets the CA private key for the right CA.
Therefore, if they do have CA root key(s), they wouldn't MITM all the ssl connections they can. They would use that capability sparingly.
But, speaking algorithmically, it is likely you can rely on a well-vetted symmetric algorithm like AES, used conservatively according to current best practices, to keep information secret during your lifetime.
Asymmetric algorithms are another matter. RSA relied on unproven assumptions that are turning out to be squishier than we might have hoped. ECC relies on assumptions that, for the moment, appear less squishy. I'm not a mathematician, so I can't have a truly informed judgement on how likely they are to remain unsquished, but the empirical history of public-key crypto means my confidence in ECC will remain well below my confidence in the likes of AES.
Weird. Why now? They (admin) refused to do that with WikiLeaks.
Edward Snowden, in Q&A session at The Guardian's homepage, 17 June 2013
Both from the front page of Hacker News. Both apparently potentially a violation of law (intel and copyright). This makes the third open post, "Spy Kids", also on the front page, all the more prescient.
Who needs Kafka. Or Orwell. Or Huxley. It's all here.
Which of the 2 possible breaches do you fear reprisal for more?
I don't get it? What is he thinking he is doing?
Really!? You think that's probably what he did? You believe that it's more likely than not that he got all flustered with the USA's response, and sent a bunch of documents to Greenwald that he didn't want released?
That's just absurd.
Why? Because nobody knows exactly what he stole. The even crazier thing is, this could all be a huge misinformation campaign and nobody would notice because we're all
Snowden takes documents, gives them to the press, and they release them. How do they verify their validity? Oh yeah, they can't, since it's all top secret. What a grand one-way street this guy just built for a bulletproof story of his own liking.
I think it's strange nobody is questioning the veracity of the documents he's releasing. They just accept them as de facto truth.
The only verification is people saying, "leaked documents confirm XYZ." There's no way to verify that the documents he's releasing are in fact real. It's just people saying, "Oh yeah, we thought that was true, and now it's confirmed with these documents." Even though all the stuff he's releasing could be total fakes and no one would be the wiser.
Makes you wonder really. . .
Read the Wikipedia article on PRISM. It has lots of pointers to statements by officials that confirm the existence and scope of the program, including officials who have lied before.
When we are able to attack, we must appear unable...
The amount of money you spend on one area of intelligence work versus some other area speaks volumes about your plans, priorities, capabilities, and so forth.
On the other hand, most widely deployed public-key systems are based on the assumption that factoring (or the closely related discrete logarithm problem) is hard. Famously, quantum computers running Shor's algorithm can solve both efficiently. So a large-scale quantum computer could break current public-key cryptosystems, but there are cryptosystems (such as lattice-based cryptography) that are believed to be secure against quantum computers.
Or at least: (currently) not more vulnerable to quantum computers than to classical ones.
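To make the factoring dependence concrete, here's textbook toy RSA: anyone who can factor n back into p and q (which Shor's algorithm does efficiently on a large quantum computer) can derive the private key. The numbers are the usual tiny illustrative values, nothing realistic:

```python
# Toy RSA: real keys use primes of ~1024 bits; these are for illustration.
p, q = 61, 53
n = p * q                        # public modulus: 3233
phi = (p - 1) * (q - 1)          # 3120 -- computable only if you know p, q
e = 17                           # public exponent, coprime to phi
d = pow(e, -1, phi)              # private exponent: 2753 (Python >= 3.8)

msg = 65
cipher = pow(msg, e, n)          # encrypt with the public key (e, n)
assert pow(cipher, d, n) == msg  # decrypt with the factoring-derived key
```

The whole private key lives in that one line computing d, which is why the hardness of recovering p and q from n carries all the weight.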