Besides, end of support is not a hard deadline. Microsoft needs to produce patches for Extended Security Updates (ESU) customers anyway, and some of that can trickle down to ordinary users. Windows XP got an unexpected patch to fix a particularly serious vulnerability several weeks after the official EOL.
It would be remarkably convenient if Microsoft, having turned off support for Windows 7 and advocated an upgrade to Windows 10 that a lot of the hold-outs don't want because of the user-hostile changes, then mysteriously discovered a serious security flaw within hours of the cut-off. The optics would be terrible.
The reason they won't support their systems forever is the cost of doing so (and of course they want people to upgrade), but the cost of fixing a really bad security issue in an otherwise-unsupported system every other year is probably acceptable.
Even if it's not a worthy successor to MS08-067.
"I myself have performed penetration tests in other countries such as China, and Russia where I was able to use MS08-067 to exploit systems running Windows systems with language packs that I was unable to actually read."
I wonder how many lines of code are in crypt32.dll. Is it on the order of 7500 lines? If Microsoft had spent a few man-years mathematically proving the correctness of that code, they could have saved the world about 10,000 man-years.
Windows has a user base of 1 billion. A ballpark figure for proving the correctness of 7500 lines of very complex code is about 30 man years. If even 1% of the 1 billion Windows users and sysadmins has to spend a couple hours on things related to this patch, it works out to 9615 man-years of worldwide waste (based on an 8 hour workday and 260 workdays a year).
Had there been a wide-spread exploit, it could have cost the world millions of man-years.
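For what it's worth, the arithmetic checks out. A quick sketch, using exactly the figures quoted above:

```python
# Back-of-the-envelope check of the numbers above.
# Assumptions (all from the parent comment): 1 billion Windows users,
# 1% of them spend 2 hours on patch-related work, and a man-year is
# 8 hours/day * 260 workdays.

users = 1_000_000_000
affected = users * 0.01               # 10 million people
hours_each = 2
total_hours = affected * hours_each   # 20 million hours

hours_per_man_year = 8 * 260          # 2080 hours
man_years = total_hours / hours_per_man_year

print(round(man_years))               # -> 9615 man-years
```

So the 9615 figure is just 20 million hours divided by a 2080-hour working year.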
> Professor Gernot Heiser, the John Lions Chair in Computer Science in the School of Computer Science and Engineering and a senior principal researcher with NICTA, said for the first time a team had been able to prove with mathematical rigour that an operating-system kernel—the code at the heart of any computer or microprocessor—was 100 per cent bug-free and therefore immune to crashes and failures. Verifying the kernel—known as the seL4 microkernel—involved mathematically proving the correctness of about 7,500 lines of computer code in a project taking an average of six people more than five years.
Proving that it has no bugs? That only works for the kinds of bugs covered by the proof. Your proof that it has no null pointer crashes tells us nothing about whether it has off-by-one errors. For each kind of bug, you need a different proof. Did your proof cover every category of bug? Almost certainly not. So strong claims like "100 per cent bug free" are almost certainly overstating things, no matter the credentials of the person making them. "100 per cent bug free for the categories of bugs we proved"? OK, but that's a significantly weaker claim.
For the 7500 lines (or however many) in crypt32.dll, you want to prove that there are no security attacks of any category possible. That becomes a harder and harder job as we keep discovering new categories of security attacks.
For example, you could do a bunch of spec proving on CPUs, but you wouldn't catch something like Spectre if your definition of correctness didn't include "no information leakage can happen through timings on branch prediction".
And even then! You need to have the right definition on that front!
Having specs and formally proven code helps ensure your code isn't prone to a certain class of errors (just as bounds checking or the absence of raw pointers prevents another class of errors), but it's not a magical catch-all. That's especially true in a space like cryptography, where you're trying to assert negatives (which ultimately rest on assumptions about the feasibility of certain computations).
I don't think software should have to prove that some CPU from the future will have shortcuts that leak information; the CPU vendors should prove their hardware is safe too.
Which would be a good first step, but wouldn't protect against timing attacks or Spectre-like hardware frailties.
A solid start is better than nothing, but a false sense of security is worse than nothing.
This is a valid point. With formal verification, the buck stops with the model. Looking at existing projects like seL4, though, it's clearly not a knock-down argument against formal methods.
> Your proof that it has no null pointer crashes tells us nothing about whether it has off-by-one errors. For each kind of bug, you need a different proof. Did your proof cover every category of bug? Almost certainly not.
This appears to be a guess at how formal methods work. It is wrong.
Formal methods are able to take an abstract model and 'refine' it into an executable program that implements the formal model.  This process is mathematically watertight (ideally at least). There is no room for language-specific bugs (such as buffer overflows) to creep in.
What you don't tend to get from formal methods, is a guarantee against side-channel attacks, such as timing-based attacks. This may be a real security concern. From a quick google search, this seems to be an active area of research.
Not necessarily; if your definition of security and correctness is sufficiently general and if you can prove that your program conforms to those definitions, you have effectively proven your program to be secure and correct for all bugs. Unless of course your definition of security and correctness was too narrow, which as you note is where the crux of the problem is.
But this is not a hopeless task, there is a lot of active research in building axiomatic definitions of what security and correctness mean, so in theory it could be possible to prove a program 100% correct if the axioms are self-evident.
Then there are things which you simply can't prove, e.g. in crypto the code may be correct, yet it may still leak some side-channel information (such as timing). Also, nobody has proven that things like SHA-256 and so on are unbreakable.
But if core OS features or APIs (like the network stack) can be written in a theorem-proving language, huge benefits can be realized — the attack surface can be vastly reduced.
I find it hard to resist making a comment in those movies set in New York City where a character has a steel jacketed door and a million locks but I bet the wall is cheap gypsum board and studs on 18 inch centers. You could steal a lot of stuff around a door without even compromising the building structure. You just need the right sheet rock knife and a quiet hallway.
Someone told me about Microsoft having a data center in a leased building, and they had the forethought to fill the space above the false ceiling with motion sensors to prevent someone just getting a ladder and going over the top of the walls. People don't always think about these things.
If you have a model that works for certain patterns, people will begin to ask what situations it can't handle. The attacks will move to that space. They may even look at release notes and try things that were reported as fixed, similar to the way people are now doing analysis on updates for operating systems.
Will it keep lazy criminals out? Yes. But they're not all lazy.
Where it's more likely to help is that you'll find crashing bugs you may have missed, and you save some face by not having particularly naive bugs in your code. That alone may be worth the cost of entry. But it's 'safer', not 'safe'.
That doesn't sound right at all. Systems with atrocious security are more likely to be compromised than systems with good but imperfect security. Attackers do not have infinite resources.
I took the proposal on the table to mean that we systematically harden infrastructure code. If that's what we're doing then a rising tide will lift all boats. Except for the people who fall behind on upgrades. There is a period of chaos where your prediction will play out over and over, but eventually there's a new equilibrium where it's just different rocks to look under. Then people get bored again and they fall behind on upgrades, and now every time new functionality is added to the formal system, by definition people using the old version aren't getting that protection, so you'd look there first.
I think the critical question is how quickly a formal system reaches homeostasis. If the rate of bugs (or maybe you call them features?) is negligible after a brief flurry of initial activity, then most of my argument collapses in a divide-by-zero error.
As to priorities, I don't think penetration and debugging are fundamentally that different. And by that I mean, there's some fuzzy math you do regarding cost (to you) of checking a thing times the value of having that information. Yes, that means a ton of cheap scenarios will be tested very early. But ultimately, how valuable is breaking into a boring system? It's definitely not zero, because you've created another resource for yourself.
Doesn't code that runs on a lot of systems get far more attention than code that runs on a few? Wasn't that part of why OS X got away with fewer viruses? Not because it was profoundly harder to hack, but simply because fewer people cared?
When we take steps to eliminate classes of vulnerabilities, those vulnerabilities stop being a problem, and we're all more secure. For example, code written in Java or C# is immune to the buffer-overflow and use-after-free attacks that continue to plague C code.
There was no counterweight built into this. Security was simply improved.
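As a rough illustration of what "eliminating a class of bugs" means in practice, here's a Python sketch (Python bounds-checks the same way Java and C# do): an out-of-bounds write is caught by the runtime instead of silently corrupting adjacent memory, which is how classic C buffer overflows become exploitable.

```python
# In a memory-safe runtime (Java, C#, Python, ...) an out-of-bounds
# access is detected and raised as an error instead of silently
# reading or writing past the end of the buffer.

buf = bytearray(8)

try:
    buf[8] = 0xFF            # one past the end: a textbook off-by-one
except IndexError as e:
    print("caught:", e)      # the runtime refuses instead of corrupting memory
```

The whole category of "attacker-controlled data overwrites a return address" simply doesn't exist at this level; the worst case is a crash or a handled exception.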
> Then people get bored again and they fall behind on upgrades, and now every time new functionality is added to the formal system, by definition people using the old version aren't getting that protection, so you'd look there first.
Failure to update packages causes security issues, yes. That doesn't mean that security improvements don't, well, improve security, overall. They absolutely do.
> If the rate of bugs (or maybe you call them features?) is negligible after a brief flurry of initial activity, then most of my argument collapses in a divide by zero error.
If they're using formal methods, they won't be adding bugs. (Discounting side-channel issues.)
I believe it's pretty unusual to see a company make non-verified changes to a codebase originally derived through formal methods. If they care about formal verification, they'll probably stick with it as they make changes.
> how valuable is breaking into a boring system? It's definitely not zero, because you've created another resource for yourself.
Sure - botnets.
> Doesn't code that runs on a lot of systems get far more attention than code that runs on a few?
Of course, but I'm not seeing what you're getting at here.
The only time anyone wrote a mechanical proof for my code, I was still finding security bugs a year later. One in particular stands out.
(note to bosses: never, ever ask the security guy "what's the worst that could happen?" the worst that could happen is always along the lines of "we go out of business and you go to jail", maybe add on "people could die," depending on domain. "How bad is this?" will get you more coherent answers)
Strictly speaking you don't - that's one of two approaches. Fortunately, the CompCert compiler already exists if you want to go the route you describe.
The alternative is to use a non-verified compiler but verify that the generated code corresponds to the model. This is the route the seL4 project took: they used GCC, but verified the correctness of the code it generated.
Crypt32 mostly concerns itself with X.509 certificates and I believe actually implements all that stuff (instead of delegating it elsewhere). I wouldn't be surprised if it contains considerably more code than 10k lines.
But how much more profitable could it have been? Is it possible Microsoft makes more money by not doing it?
We cannot expect companies to behave in the greater population's best interest unless there is some reward/punishment structure in place.
I think r00fus is right to look at the profit motive of the software supplier. Microsoft would have been climbing a steep curve of diminishing returns for basically no extra revenue.
Windows isn’t the monster revenue generator it once was, and in the server space it’s losing to free operating systems.
We ask for perfect security but we as an industry/market aren’t willing to pay the premium to go from “it mostly does what I want most of the time” to “life critical with near perfect security”.
That said, I don’t think regulation would work here, and I suspect there would be a perverse incentive for the regulator if an issue was found (like in this case, where government, military, and critical industry get early access to the fix).
That doesn’t mean that properly validating code so it at least doesn’t overflow its buffers is useless of course.
EDIT: As a starting point. Pure functional programming is one tool among the many in your toolbox. It should not be anyone's entire strategy.
Side channel is stuff like your multiplication code takes different amounts of time depending on the arguments, or your cache hit/miss patterns depend on arguments and thus can leak the argument.
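A minimal sketch of the idea, using string comparison (the classic textbook case): the naive version's running time depends on the secret data, while the constant-time version's doesn't. (In real code you'd use something like Python's `hmac.compare_digest` rather than rolling your own.)

```python
# Why comparison code can leak through timing: the naive version
# returns as soon as a byte differs, so the time taken correlates
# with how many leading bytes an attacker guessed correctly. The
# constant-time version always scans the full input.

def naive_compare(a: bytes, b: bytes) -> bool:
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:          # early exit: running time leaks the match length
            return False
    return True

def constant_time_compare(a: bytes, b: bytes) -> bool:
    if len(a) != len(b):
        return False
    diff = 0
    for x, y in zip(a, b):
        diff |= x ^ y       # accumulate differences; no data-dependent branch
    return diff == 0
```

Both functions compute the same answer; the attack surface is purely in *how long* they take, which is exactly why a correctness proof over the input/output behaviour says nothing about it.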
Formal verification takes a tremendous amount of skill and effort. The most impressive formally verified operating system we have today is seL4, as you mentioned. Its functionality is very limited, though.
> A ballpark figure for proving the correctness of 7500 lines of very complex code is about 30 man years. If even 1% of the 1 billion Windows users and sysadmins has to spend a couple hours on things related to this patch, it works out to 9615 man-years of worldwide waste (based on an 8 hour workday and 260 workdays a year).
1. I doubt this idea is practical
2. Assuming it's possible, the cost would be enormous
3. The resulting system would likely be hard to change without breaking its formal guarantees
4. Microsoft would have to justify this cost on their own terms. If the code is already 'stable enough', they would do better to spend that money developing other features. Which of course they did.
> 100 per cent bug-free and therefore immune to crashes and failures
The seL4 team are careful to state that it does not offer guarantees against timing attacks or other side-channel attacks.
The inertia and complacency of existing code is enormous, MS and other affluent vendors have been watching vulns in their codebases bubble up for decades and their customers tolerate it.
Big MS customers buy into it deeply: companies won't hire security professionals as CSOs who would ban insecure "industry standard" setups such as org wide AD domains and MS office attachments that malware most often spreads through.
Back in the day they helped make DES stronger against differential cryptanalysis, which was unknown at the time.
The modern NSA is more black hat than white hat, sadly.
You have no idea if that's true or not. It's probably true, but neither of us have the information required to assert our opinions on the matter as fact.
Edit: People are asking why I assume it's not an RCE. That's a good question, but I am assuming Krebs on Security wouldn't report an RCE as a potential certificate validation bug. This quote in particular:
> Equally concerning, a flaw in crypt32.dll might also be abused to spoof the digital signature tied to a specific piece of software.
If crypt32.dll has a memory bug that can be exploited by feeding it an ill-formed certificate, that is wormable and orders of magnitude more severe, not equally concerning.
We'll know more tomorrow, however.
> is there a way to turn a spoofed certificate into a single-click pwn?
e.g. The victim clicks on a link to go to your website, their machine wants to validate the TLS cert you sent it, it calls into crypt32.dll to do that, it corrupts memory while handling your attacking cert, pwn?
We don't know enough (anything!) about the actual bug yet other than which DLL it's in.
You aren't supposed to get to pick very much of the document in the Web PKI. Rules forbid CAs from letting you write nonsense you made up into most places - they themselves get slightly more opportunity but "Let's attack a windows zero day" doesn't feel like a good use of control over a trusted CA. The biggest contiguous arbitrary chunk of data you get control over will be the public key, an RSA public key might be one kilobyte in size. So, conceivable but unlikely that you can squeeze a working exploit and payload in there.
An X.509 cert that isn't trusted in the Web PKI (i.e. hand-rolled) is a better attack surface because you can write any amount of garbage without some busybody forbidding it, but now your problem is that a non-vulnerable client will immediately see a problem because it's untrusted, which looks weird. I don't look at the certificates on random web sites I visit... unless they don't work, and then I might be curious.
I think code signing certs are a more likely vulnerability, unlike the Web PKI in practice this is down to Microsoft who have a very hands-off approach. I suspect you can get a lot more leeway to write crap into a cert for code signing and as a bonus it'll get seen by the vulnerable Windows PCs, because it's not as though Macs or Linux check the code signing certs on Windows software they can't run anyway.
But as you say we don't know anything, it could be a much more subtle but still dangerous bug.
For example suppose Microsoft's implementation of ECDHE misses a check on the parameters supplied by the other participant. This would make you vulnerable to invalid curve attacks when negotiating ECDHE. Of course a legitimate participant won't try to attack you, but you're doing ECDHE before you know the other participant's identity.
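The missing check is cheap to state. Here's a toy sketch (made-up curve parameters, nothing like a real curve): before doing any ECDH math, verify that the peer's point actually satisfies the curve equation. Scalar-multiplication formulas typically never use the curve constant b, so they'll happily operate on a point from a different, weaker curve if you skip this.

```python
# Toy sketch (NOT real crypto) of the validation a careless ECDHE
# implementation might omit: checking that the peer's public point
# lies on the agreed curve y^2 = x^3 + a*x + b (mod p).

P, A, B = 97, 2, 3   # tiny illustrative curve parameters

def on_curve(x: int, y: int) -> bool:
    """Reject any point that doesn't satisfy the curve equation."""
    return (y * y - (x * x * x + A * x + B)) % P == 0

print(on_curve(3, 6))   # True:  3^3 + 2*3 + 3 = 36 = 6^2 (mod 97)
print(on_curve(3, 7))   # False: reject before doing any ECDH math
```

A real implementation also checks the point isn't the identity and, for some curves, that it lies in the right subgroup; the curve-equation check is just the part invalid-curve attacks exploit.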
You might be right, but I don't think your version of "immediately" matches up with the reality of code. How many cert library function calls do you think happen before a TLS client is able to decide that there is no trusted path to the cert's authority and decide that it's "weird"?
Hundreds or thousands wouldn't surprise me -- you've got to parse a file format, follow links -- if the vulnerability's in one of those functions, then there's your exploit. You even have to go out to the filesystem, check whether the self-signed CA is installed in the OS trust store, which would make it trusted. It's really easy for me to imagine such a vuln somewhere in cert trust verification.
I agree that on an affected platform all bets are off, but deploying to a web server means you don't easily get to pick who your visitors are.
Agree with you that codesigning certs are a more likely problem space for this bug.
* client asks for cert
* you give it to them
* client tells you the page they want and their useragent
* if you think they're vulnerable based on what you've learned about them, you add <img src="https://vulnerable.subdomain/"> to the response.
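A hedged sketch of that last step (the hostname and the vulnerability fingerprint are made up for illustration):

```python
# Sketch of the targeting step above: a server that only injects the
# exploit-bearing resource when the User-Agent suggests a vulnerable
# client. The hostname and the crude version check are hypothetical.

def render_page(user_agent: str, body: str) -> str:
    # Pretend only this Windows build is vulnerable.
    if "Windows NT 10.0" in user_agent:
        body += '<img src="https://vulnerable.subdomain.example/">'
    return body

print(render_page("Mozilla/5.0 (Windows NT 10.0; Win64)", "<p>hello</p>"))
print(render_page("Mozilla/5.0 (Macintosh)", "<p>hello</p>"))
```

Non-vulnerable visitors never see the payload at all, which is what makes this kind of targeting hard to detect from the outside.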
Neat suggestion. Thanks! Agree we've moved well outside of tomorrow's likely actual vuln.
I'm also not convinced that the fields-allowed-by-CAs set is safe, especially when you consider the set of all browser-accepted CAs, it's historically been a fraught field.
Also, with X.509 you have to decode the ASN.1 before you can even get to the signature, and there have historically been lots of vulnerabilities in ASN.1 parsers.
It would be a contradiction in terms. You can't validate the semantic content of a file before you've parsed it!
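A sketch of the layer where those ASN.1/DER bugs tend to live (a deliberately simplified parser, not the real crypt32 logic): the critical check is that a declared length never runs past the end of the buffer.

```python
# Simplified DER tag-length-value parsing, the layer where many
# historical ASN.1 vulnerabilities lived. Real DER has more cases
# (constructed types, nesting); this shows only the length handling.

def read_tlv(buf: bytes, offset: int = 0):
    """Parse one DER TLV; return (tag, value, next_offset) or raise ValueError."""
    if offset + 2 > len(buf):
        raise ValueError("truncated header")
    tag = buf[offset]
    length = buf[offset + 1]
    if length & 0x80:                        # long form: low bits = # of length bytes
        n = length & 0x7F
        if n == 0 or offset + 2 + n > len(buf):
            raise ValueError("bad long-form length")
        length = int.from_bytes(buf[offset + 2:offset + 2 + n], "big")
        offset += n
    start = offset + 2
    if start + length > len(buf):            # the check sloppy parsers miss
        raise ValueError("length overruns buffer")
    return tag, buf[start:start + length], start + length

# An INTEGER 5: tag 0x02, length 0x01, value 0x05
print(read_tlv(bytes([0x02, 0x01, 0x05])))
```

In a memory-unsafe language, omitting that final bounds check turns an attacker-supplied length field into an out-of-bounds read or write, before any signature is ever checked.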
Imagine an actor like Google running an intentionally compromised TLS certificate on their web servers... perhaps even targeted at specific IP addresses or on certain schedules to avoid detection.
Certificates are an excellent infection vector too, especially if one wanted to intentionally present a backdoor like this for national-security reasons (all of the cryptographic primitives are already right there for you to use to authenticate the cert).
Google doing that would be a great way for everyone involved to spend a long time in jail.
For what it's worth, you should read about Certificate Transparency, it tries to protect against this sort of problem. Browsers only accept certs that can prove they've been submitted to a public log, which means that their existence isn't being selectively exposed to some users only.
(They mainly act as prevention against rogue certificate authorities rather than a rogue cert owner, though.)
1) if there's a bug in the crypto library that creates RCE 2) if a mitm delivers a payload to an application (browser/plugin/etc) and that causes RCE. The former might cause NSA to get its panties in a wad.
But it might be much worse than RCE. It might be like Heartbleed, where it can leak key material and memory all day (on servers and clients) and you'd never know it because the leak leaves no trace on the system and just looks like normal requests. Perfect exploit for spying. (And of course it's possible they had that exploit in their back pocket, but someone else reported it to M$, so now they're CYAing)
RCE in Windows doesn't mean control over digital-signature validation. But RCE in that particular API might mean both.
This looks like it could be abused to lead to code execution.
IIS crashing in crypt32.dll!CryptMsgClose() after a web request
"Finally removed all totally inactive and harmless old _NSAKEY references."
I wouldn't bet against something along those lines.