A team at Google pulled 20 TB(!) of SWF files out of their crawl and fed them through a simple algorithm that picked out the subset of 20,000 SWF files exercising the maximum number of basic blocks in Adobe's Flash Player.
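That corpus-distillation step is essentially greedy set cover: repeatedly take the file that adds the most not-yet-covered basic blocks. A minimal sketch, assuming each file's coverage fits in a 64-bit mask (the real thing would use large bitsets produced by an instrumented player; `distill` and the mask representation are my own illustration, not Google's code):

```c
#include <stdint.h>

/* Count set bits (covered blocks) in a mask. */
static int popcount64(uint64_t x) {
    int n = 0;
    while (x) { x &= x - 1; n++; }
    return n;
}

/* Greedy set cover over per-file coverage masks. Fills chosen[] with
 * indices of the picked files and returns how many were picked. Stops
 * once no remaining file adds any new coverage. */
int distill(const uint64_t cov[], int nfiles, int chosen[]) {
    uint64_t covered = 0;
    int npicked = 0;
    for (;;) {
        int best = -1, best_gain = 0;
        for (int i = 0; i < nfiles; i++) {
            int gain = popcount64(cov[i] & ~covered);
            if (gain > best_gain) { best_gain = gain; best = i; }
        }
        if (best < 0) break;            /* nothing new to cover */
        covered |= cov[best];
        chosen[npicked++] = best;
    }
    return npicked;
}
```

Greedy isn't optimal for set cover, but it's a standard approximation and cheap enough to run over millions of crawled files.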
Then, using 2000 CPU cores at Google for 3 weeks, they flipped random bits in those 20,000 SWF files and fed them through an instrumented Flash Player.
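The mutation itself is about as dumb as fuzzing gets, which is the point: with enough cores, random bit flips find real bugs. A sketch of the flip step (how many bits Google flipped per file isn't stated in the post, so `nflips` is an assumption of mine):

```c
#include <stdlib.h>
#include <stddef.h>

/* Flip `nflips` randomly chosen bits of buf[0..len) in place. A real
 * harness would seed rand() per test case so crashes are reproducible. */
void flip_random_bits(unsigned char *buf, size_t len, int nflips) {
    for (int i = 0; i < nflips; i++) {
        size_t byte = (size_t)rand() % len;   /* random byte...        */
        int bit = rand() % 8;                 /* ...random bit in it   */
        buf[byte] ^= (unsigned char)(1u << bit);
    }
}
```

Each mutated file then gets fed to the instrumented player; anything that crashes is saved along with the seed that produced it.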
Result: 80 code changes in Flash Player to fix security bugs from the resulting crashes.
This is great stuff; you can imagine a very well-organized adversary spending the money on comparable compute resources, and even (if you stretch) obtaining the non-recoverable engineering time to build a comparably sophisticated fuzzing farm. But no other entity, except perhaps Microsoft, can generate the optimal corpus of SWF files to fuzz from.
DO PDF NEXT, GOOGLE.
You've got to ask yourself: in a year or so, if there are still regular updates for exploitable zero-day memory corruption flaws in Flash, even after Google exhaustively tests the input to every basic block in the player with the union of all file format features observed on the entire Internet, what does that say about the hardness of making software resilient to attack?
Well, we already know the answer to that question. The long-term solution is to build software out of safer building blocks. A good example is high-level programming languages. When you write C and use C strings (or other unstructured "blocks-o-ram" data structures), you have to "get it right" every single time you touch a string. In a codebase the size of Flash's, that probably amounts to tens of thousands of possible bugs. But if you write it in a high-level language, the runtime implementor only has to get it right once -- and there's no way for you to get it wrong, even if you want to. (The middle ground is something like better string-handling functions; OpenBSD tried to do this with strl*, and there are libraries like bstring that represent strings properly. But you can still do unsafe pointer math on these strings, for not much benefit.)
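To make the "get it right every time" burden concrete: a fixed buffer plus strcpy() silently overflows when the input is longer than the buffer. OpenBSD's strlcpy() truncates instead and returns the length it *wanted* to write, so the caller can detect truncation. strlcpy isn't in standard C, so here's a minimal local version (`my_strlcpy` is my own name for the sketch):

```c
#include <string.h>
#include <stddef.h>

/* Bounded, always-NUL-terminating copy in the style of OpenBSD's
 * strlcpy(). Returns strlen(src); a return value >= dstsize tells the
 * caller the copy was truncated. */
size_t my_strlcpy(char *dst, const char *src, size_t dstsize) {
    size_t srclen = strlen(src);
    if (dstsize > 0) {
        size_t n = srclen < dstsize - 1 ? srclen : dstsize - 1;
        memcpy(dst, src, n);
        dst[n] = '\0';
    }
    return srclen;
}
```

Note the remaining weakness, which is the comment's point: dst is still a bare char pointer, so nothing stops the next line of code from doing unchecked pointer arithmetic on it. The safety property lives in the caller's discipline, not the type.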
As it stands right now, it's cheaper to write software with the approach of "throw some trained monkeys at it and hope for the best". But in the future, we're going to need to do a lot more thinking and a lot less typing if we want to write secure software without performance penalties.
> Adobe patched around 400 unique vulnerabilities I had sent them in APSB11-21 as part of an ongoing security audit. Not a typo.
> Apparently that number was embarrassingly high, and they're trying to bury the results, so I'll publish my own advisory later today.
Whereas the blog post cites 400 unique crashes, 106 security bugs, and 80 code changes (the same numbers that Adobe used: http://blogs.adobe.com/asset/2011/08/how-did-you-get-to-that...).
Regardless of the exact numbers, though, this is a supremely impressive feat of security engineering.
Additionally, years ago a friend I'd lost contact with caught up with me and told me he'd found a cached copy of a website I'd taken down in his employer's equivalent of the Wayback Machine. His employer was a branch of the federal government. I know my anecdote doesn't prove anything, let alone come close to addressing the difficulty of crawling the web without anyone noticing (intercepting all HTTP traffic in transit?), but the fact remains that there are literally tons of computers doing something for the government.
And once you have the data, I'm sure there are very few, if any, restrictions on how you process it internally.
Redistribution is another matter, but Google doesn't seem to have done any of that.