Cryptic Rumblings Ahead of First 2020 Patch Tuesday (krebsonsecurity.com)
230 points by dustinmoris 9 days ago | 93 comments





I wonder if Windows 7 will get this on its end of support day tomorrow.

I'm sure they set the EOL date to a Patch Tuesday for a reason. :)

Besides, end of support is not a hard deadline. Microsoft needs to produce patches for Extended Security Updates (ESU) customers anyway, and some of that can trickle down to ordinary users. Windows XP got an unexpected patch to fix a particularly serious vulnerability several weeks after the official EOL.


Presumably it will.

It would be remarkably convenient if Microsoft, having turned off support for Windows 7 and advocated an upgrade to Windows 10 that a lot of the hold-outs don't want because of the user-hostile changes, then mysteriously discovered a serious security flaw within hours of the cut-off. The optics would be terrible.


It most certainly will. They released a security patch for XP last year.

The reason they won't support these systems forever is the cost of doing so (and of course they want people to upgrade), but the cost of fixing a really bad security issue in an otherwise unsupported system every other year is probably fine.


Prediction: whatever this is will end up in pen-test reports for many years to come.

Even if it's not a worthy successor to MS08-067.


It's not obvious to me how MS08-067 and this are connected, but to save you a few seconds: https://en.m.wikipedia.org/wiki/Conficker

"This is probably one of the easiest ways into a network if not the easiest way. Simply starting [xxxx] loading the module and giving it an IP address of a vulnerable Windows host will get you full administrative access to that system."

"I myself have performed penetration tests in other countries such as China, and Russia where I was able to use MS08-067 to exploit systems running Windows systems with language packs that I was unable to actually read."

https://blog.rapid7.com/2014/02/03/new-ms08-067/


It's a shame that mathematically proving correctness of code, even for extremely important code, is never done.

I wonder how many lines of code are in crypt32.dll. Is it on the order of 7500 lines? If Microsoft spent a few man-years mathematically proving the correctness of that code, they could have saved the world about 10,000 man-years.

Windows has a user base of 1 billion[1]. A ballpark figure for proving the correctness of 7500 lines of very complex code[2] is about 30 man years[3]. If even 1% of the 1 billion Windows users and sysadmins has to spend a couple hours on things related to this patch, it works out to 9615 man-years of worldwide waste (based on an 8 hour workday and 260 workdays a year).
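For anyone who wants to check that arithmetic, here is the back-of-the-envelope calculation spelled out (all inputs are the assumptions stated above, not measured figures):

    users = 1_000_000_000          # approximate Windows user base [1]
    affected_fraction = 0.01       # assume 1% of users/sysadmins touch the patch
    hours_each = 2                 # assume a couple of hours per affected person
    work_year_hours = 8 * 260      # 8-hour workday, 260 workdays = 2080 hours/year

    wasted_hours = users * affected_fraction * hours_each
    print(wasted_hours / work_year_hours)  # ~9615 man-years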

Had there been a wide-spread exploit, it could have cost the world millions of man-years.

[1] https://en.wikipedia.org/wiki/Usage_share_of_operating_syste...

[2] https://www.schneier.com/blog/archives/2009/10/proving_a_com...

[3] Professor Gernot Heiser, the John Lions Chair in Computer Science in the School of Computer Science and Engineering and a senior principal researcher with NICTA, said for the first time a team had been able to prove with mathematical rigour that an operating-system kernel—the code at the heart of any computer or microprocessor—was 100 per cent bug-free and therefore immune to crashes and failures. Verifying the kernel—known as the seL4 microkernel—involved mathematically proving the correctness of about 7,500 lines of computer code in a project taking an average of six people more than five years.


Define "proving correctness". Proving that it does what it's supposed to? You need a formal spec as the starting point for that; how do you prove that the formal spec correctly describes what the software's supposed to do?

Proving that it has no bugs? That only works for the kinds of bugs covered by the proof. Your proof that it has no null pointer crashes tells us nothing about whether it has off-by-one errors. For each kind of bug, you need a different proof. Did your proof cover every category of bug? Almost certainly not. So strong claims like "100 per cent bug free" are almost certainly overstating things, no matter the credentials of the person making them. "100 per cent bug free for the categories of bugs we proved"? OK, but that's a significantly weaker claim.

For the 7500 lines (or however many) in crypt32.dll, you want to prove that there are no security attacks of any category possible. That becomes a harder and harder job as we keep discovering new categories of security attacks.


One might hope you have a formal spec of a cryptographic library.

While proving code matches a spec is very helpful, it's only as helpful as the spec being "correct" from a "does what we want it to do" angle.

For example, you could do a bunch of spec proving on CPUs, but you wouldn't catch something like Spectre if your definition of correctness didn't include "no information leakage can happen through timings on branch prediction".

And even then! You need to have the right definition on that front!

Having specs and formally proven code helps to make sure your code isn't prone to a certain class of errors (just like bounds checking/lack of raw pointers prevents another class of errors), but it's not a magical catch-all. Especially if you are in a space like cryptography, where you're trying to assert negatives (that end up being held up by assumptions around the feasibility of certain things).


> but you wouldn't catch something like Spectre

I don't think software should have to prove that some CPU from the future will have shortcuts that leak information; the CPU guys should prove their hardware is safe too.


Sure, but I believe the point was that e.g. timing attacks might not be part of your proof but still be a real threat.

Given the current state of security practices, I'd say we're lucky if they have unit tests.

> One might hope you have a formal spec of a cryptographic library.

Which would be a good first step, but wouldn't protect against timing attacks or Spectre-like hardware frailties.

A solid start is better than nothing, but a false sense of security is worse than nothing.


Is crypt32 supposed to be secure against timing attacks?


> how do you prove that the formal spec correctly describes what the software's supposed to do?

This is a valid point. With formal verification, the buck stops with the model. Looking at existing projects like seL4, though, it's clearly not a knock-down argument against formal methods.

> Your proof that it has no null pointer crashes tells us nothing about whether it has off-by-one errors. For each kind of bug, you need a different proof. Did your proof cover every category of bug? Almost certainly not.

This appears to be a guess at how formal methods work. It is wrong.

Formal methods are able to take an abstract model and 'refine' it into an executable program that implements the formal model. [0] This process is mathematically watertight (ideally at least). There is no room for language-specific bugs (such as buffer overflows) to creep in.

What you don't tend to get from formal methods, is a guarantee against side-channel attacks, such as timing-based attacks. This may be a real security concern. From a quick google search, this seems to be an active area of research.

[0] https://en.wikipedia.org/wiki/Refinement_(computing)
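For intuition only: the refinement relationship can be thought of as "the implementation must agree with the abstract model on every input". A real refinement proof is machine-checked in a proof assistant over all inputs; the property-based sketch below (using the hypothesis library and two toy functions of my own, nothing from the linked article) only samples inputs, but it shows the shape of the obligation:

    # Toy illustration of the refinement obligation: implementation == model.
    # A real proof covers *all* inputs; property-based testing only samples them.
    from hypothesis import given, strategies as st

    def abstract_sum(xs):      # abstract model: the mathematical specification
        return sum(xs)

    def concrete_sum(xs):      # concrete implementation: an explicit loop
        total = 0
        for x in xs:
            total += x
        return total

    @given(st.lists(st.integers()))
    def test_refines_model(xs):
        assert concrete_sum(xs) == abstract_sum(xs)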


> For each kind of bug, you need a different proof.

Not necessarily; if your definition of security and correctness is sufficiently general and if you can prove that your program conforms to those definitions, you have effectively proven your program to be secure and correct for all bugs. Unless of course your definition of security and correctness was too narrow, which as you note is where the crux of the problem is.

But this is not a hopeless task, there is a lot of active research in building axiomatic definitions of what security and correctness mean, so in theory it could be possible to prove a program 100% correct if the axioms are self-evident.


Note that even if you prove correctness of code, there still might be bugs in the compiler, so you will have to prove the compiler. Then there could be bugs in the modules / DLLs / dependencies in the OS, so more proving needs to be done. Then there might be bugs in the CPU. So it's not so easy.

Then there are things which you simply can't prove, e.g. in crypto the code may be correct, yet it may still leak some side-channel information (such as timing). Also, nobody has proven that things like SHA-256 and so on are unbreakable.


The goal isn’t to deliver a provably correct system - that’s effectively impossible.

But if core OS features or APIs (like the network stack) can be written in a theorem-proving language, huge benefits can be realized — the attack surface can be vastly reduced.


People will just go around.

I find it hard to resist making a comment in those movies set in New York City where a character has a steel jacketed door and a million locks but I bet the wall is cheap gypsum board and studs on 18 inch centers. You could steal a lot of stuff around a door without even compromising the building structure. You just need the right sheet rock knife and a quiet hallway.

Someone told me about Microsoft having a data center in a leased building, and they had the forethought to fill the space above the false ceiling with motion sensors to prevent someone just getting a ladder and going over the top of the walls. People don't always think about these things.

If you have a model that works for certain patterns, people will begin to ask what situations it can't handle. The attacks will move to that space. They may even look at release notes and try things that were reported as fixed, similar to the way people are now doing analysis on updates for operating systems.

Will it keep lazy criminals out? Yes. But they're not all lazy.

Where it's more likely to help is that you'll find crashing bugs you may have missed, and you save some face by not having particularly naive bugs in your code. That alone may be worth the cost of entry. But it's 'safer', not 'safe'.


> Where it's more likely to help is that you'll find crashing bugs you may have missed, and you save some face by not having particularly naive bugs in your code.

That doesn't sound right at all. Systems with atrocious security are more likely to be compromised than systems with good but imperfect security. Attackers do not have infinite resources.


Which also doesn't sound right.

I took the proposal on the table to mean that we systematically harden infrastructure code. If that's what we're doing then a rising tide will lift all boats. Except for the people who fall behind on upgrades. There is a period of chaos where your prediction will play out over and over, but eventually there's a new equilibrium where it's just different rocks to look under. Then people get bored again and they fall behind on upgrades, and now every time new functionality is added to the formal system, by definition people using the old version aren't getting that protection, so you'd look there first.

I think the critical question is how quickly a formal system reaches homeostasis. If the rate of bugs (or maybe you call them features?) is negligible after a brief flurry of initial activity, then most of my argument collapses in a divide by zero error.

As to priorities, I don't think penetration and debugging are fundamentally that different. And by that I mean, there's some fuzzy math you do regarding cost (to you) of checking a thing times the value of having that information. Yes, that means a ton of cheap scenarios will be tested very early. But ultimately, how valuable is breaking into a boring system? It's definitely not zero, because you've created another resource for yourself.

Doesn't code that runs on a lot of systems get far more attention than code that runs on a few? Wasn't that part of why OS X got away with fewer viruses? Not because it was profoundly harder to hack, but simply because fewer people cared?


> There is a period of chaos where your prediction will play out over and over, but eventually there's a new equilibrium where it's just different rocks to look under.

When we take steps to eliminate classes of vulnerabilities, well, those vulnerabilities stop being a problem, and we're all more secure. For example, code written in Java or C# is immune from the buffer-overflow attacks, and use-after-free attacks, that continue to plague C code.

There was no counterweight built into this. Security was simply improved.

> Then people get bored again and they fall behind on upgrades, and now every time new functionality is added to the formal system, by definition people using the old version aren't getting that protection, so you'd look there first.

Failure to update packages causes security issues, yes. That doesn't mean that security improvements don't, well, improve security, overall. They absolutely do.

> If the rate of bugs (or maybe you call them features?) is negligible after a brief flurry of initial activity, then most of my argument collapses in a divide by zero error.

If they're using formal methods, they won't be adding bugs. (Discounting side-channel issues.)

I believe it's pretty unusual to see a company make non-verified changes to a codebase originally derived through formal methods. If they care about formal verification, they'll probably stick with it as they make changes.

> how valuable is breaking into a boring system? It's definitely not zero, because you've created another resource for yourself.

Sure - botnets.

> Doesn't code that runs on a lot of systems get far more attention than code that runs on a few?

Of course, but I'm not seeing what you're getting at here.


Well, more details about it are out. It looks like the library worked correctly, but in this case the devs forgot to check something (that the curve had the right params). No amount of writing it in a "theorem proving" language can save you if you don't have the actual specification for the problem itself right.
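If the reports are right about the missing check, the fix is conceptually simple: when a certificate supplies explicit curve parameters, compare every one of them (including the generator) against the published values for the named curve before trusting the key. A hand-wavy sketch, with placeholder constants instead of the real P-256 values and field names I made up:

    # Sketch only: reject explicit curve parameters that differ from the spec.
    # The ... placeholders stand in for the published constants of the curve.
    STANDARD_CURVES = {
        "P-256": {"p": ..., "a": ..., "b": ..., "Gx": ..., "Gy": ..., "n": ..., "h": ...},
    }

    def params_match_named_curve(curve_name, presented):
        expected = STANDARD_CURVES.get(curve_name)
        if expected is None:
            return False
        # Every parameter -- crucially including the generator point G -- must
        # match exactly; matching only the public key point is not enough.
        return all(presented.get(k) == v for k, v in expected.items())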

I suppose that with enough time and attention one could build a proof system that accounts for things like Rowhammer and timing attacks but are we even anywhere close to that right now?

The only time anyone wrote a mechanical proof for my code, I was still finding security bugs a year later. One in particular stands out.

(note to bosses: never, ever ask the security guy "what's the worst that could happen?" the worst that could happen is always along the lines of "we go out of business and you go to jail", maybe add on "people could die," depending on domain. "How bad is this?" will get you more coherent answers)


You could go further. There have been plenty of bugs in provers like Coq themselves, which is why developers create their own proofs of correctness for them [1]. More precisely: the linked paper implements a type checker for the core language of Coq which is proven correct in Coq.

https://www.irif.fr/~sozeau/research/publications/drafts/Coq...


> you will have to prove the compiler

Strictly speaking you don't - that's one of two approaches. Fortunately, the CompCert compiler already exists if you want to go the route you describe.

The alternative is to use a non-verified compiler, but verify that the generated code corresponds to the model. This is the route the seL4 project took. They used GCC, but verified the correctness of the code it generated.


Microsoft has been actively working on several mathematical validation projects, including an HTTPS library... https://project-everest.github.io/. So I don't think they are avoiding formal verification just to be cheap.

> I wonder how many lines of code in crypt32.dll. Is it on the order of 7500 lines? If Microsoft spent a few man-years mathematically proving the correctness of that code, they could have the saved the world about 10,000 man-years.

Crypt32 mostly concerns itself with X.509 certificates and I believe actually implements all that stuff (instead of delegating it elsewhere). I wouldn't be surprised if it contains considerably more code than 10k lines.


X.509 and TLV is gnarly. So many edge cases. Then there are encoding formats and many other things. It’s 50kloc easy. The file size is like 500k. A lot happens in there for sure.
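To give a flavour of why the TLV layer alone collects edge cases, here is a minimal hand-rolled DER tag/length reader (an illustration only, not how crypt32 actually does it); even this fragment has to distinguish short-form from long-form lengths and reject indefinite and non-minimal encodings:

    # Minimal DER tag/length reader -- every branch is a place to get it wrong.
    def read_tlv_header(buf, offset=0):
        """Return (tag, length, value_offset); raise ValueError on malformed input."""
        if offset + 1 >= len(buf):
            raise ValueError("truncated: missing tag or length byte")
        tag, first = buf[offset], buf[offset + 1]
        if first < 0x80:                       # short form: one length byte
            return tag, first, offset + 2
        n = first & 0x7F                       # long form: n length bytes follow
        if n == 0 or n > 4:
            raise ValueError("indefinite or oversized length not allowed here")
        if offset + 2 + n > len(buf):
            raise ValueError("truncated: length bytes missing")
        length = int.from_bytes(buf[offset + 2 : offset + 2 + n], "big")
        if length < 0x80:
            raise ValueError("non-minimal long-form length is invalid DER")
        return tag, length, offset + 2 + n

And that is before you even reach the value bytes, nested structures, string types, or time formats.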

> If Microsoft spent a few man-years mathematically proving the correctness of that code, they could have the saved the world about 10,000 man-years.

But how much more profitable could it have been? Is it possible Microsoft makes more money by not doing it?

We cannot expect companies to behave in the greater population's best interest unless there is some reward/punishment structure in place.

Enter regulation.


Lots of people in this thread are complaining that a formal proof is not reasonable and will not catch all classes of errors.

I think r00fus is right to look at the profit motive of the software supplier. Microsoft would have been climbing a steep curve of diminishing returns for basically no extra revenue.

Windows isn’t the monster revenue generator and in the server space it’s losing to free Operating Systems.

We ask for perfect security but we as an industry/market aren’t willing to pay the premium to go from “it mostly does what I want most of the time” to “life critical with near perfect security”.

That said, I don’t think regulation would work here, and I suspect there would be a perverse incentive for the regulator if an issue was found (like in this case, where government, military, and critical industry get early access to the fix).


There actually is regulation, though. As I understand it, in theory GDPR could now fine Microsoft up to 4% of global revenue if they've behaved catastrophically negligently at keeping user secrets secure.

You can prove some code to be correct as to some specification but it is pretty much impossible to define the specification to be ‘secure’ forever. Not only do unexpected problems pop up, such as the cpu vulnerabilities, also threat models change and it may turn out your specification solidly protects something that doesn’t matter at all.

That doesn’t mean that properly validating code so it at least doesn’t overflow its buffers is useless of course.


What does it mean to prove the correctness of code like this? For example, is it possible to prove that no side channel attack exists in a function or library? What form does such a proof take? How do you express, mathematically, that no disclosure of data is possible in response to arbitrary requests and function calls? Are there any recent major vulnerabilities that would have been prevented if the code had been formally proven first? Heartbleed? Spectre? Meltdown?

Heartbleed was found (after the fact) by a simple fuzzer. The ability of fuzzers has massively increased in recent years.
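For comparison, a coverage-guided fuzz harness is only a handful of lines these days. A sketch using Google's atheris for Python, with a made-up parse_certificate standing in for whatever parser you actually want to exercise:

    import sys
    import atheris

    @atheris.instrument_func
    def parse_certificate(data: bytes):
        # Stand-in target; in practice, call the real parser under test here.
        if not data.startswith(b"\x30") or len(data) < 2:
            raise ValueError("not a DER SEQUENCE")
        return data[2 : 2 + data[1]]

    def test_one_input(data: bytes):
        try:
            parse_certificate(data)
        except ValueError:
            pass  # expected rejection; crashes and hangs are the findings

    atheris.Setup(sys.argv, test_one_input)
    atheris.Fuzz()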

Fuzzers are great! I have fuzzed one or two mission critical high exposure libraries in my time. But they are a totally different thing than formal verification.

Well, to start, if you write pure functional code, then you can reason about the code without having to worry about side effects.

EDIT: As a starting point. Pure functional programming is one tool among the many in your toolbox. It should not be anyone's entire strategy.


Bold assertion in the face of side channel and timing attacks.

Is crypt32 supposed to be secure against side channel and timing attacks?

Side effects != side channels.

Side channel is stuff like your multiplication code takes different amounts of time depending on the arguments, or your cache hit/miss patterns depend on arguments and thus can leak the argument.
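To make the distinction concrete, here is the textbook example of a side channel in code with no side effects at all: a pure comparison function whose running time leaks how many leading bytes matched. The usual remedy is a constant-time comparison such as hmac.compare_digest:

    import hmac

    def naive_equal(a: bytes, b: bytes) -> bool:
        # Functionally correct and pure, but returns at the first mismatch,
        # so the running time reveals the length of the matching prefix.
        if len(a) != len(b):
            return False
        for x, y in zip(a, b):
            if x != y:
                return False
        return True

    def constant_time_equal(a: bytes, b: bytes) -> bool:
        # Examines every byte regardless of where a mismatch occurs.
        return hmac.compare_digest(a, b)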


Usually you don't really verify the correctness of existing code; you rewrite it in a language that makes it possible to make guarantees that you care about. In addition, Microsoft needs to allocate their time: which components should be verified? Not all of them can be verified practically yet.

Microsoft is already spending quite a bit of money on this; for example, they have made the language F*, which has been used to write a verified implementation of HTTPS.

https://www.fstar-lang.org/ https://project-everest.github.io/


I'm not saying that Microsoft isn't doing good work in this area; it's just that it's impractical to pick an old component that's recently found to have vulnerabilities in it and try to blame Microsoft for not having the foresight to write it in a formally verified way.

Keep in mind that we are discussing code meant to run cryptography operations; I'm pretty sure that kind of stuff should be high up on the priority list.

> If Microsoft spent a few man-years mathematically proving the correctness of that code, they could have the saved the world about 10,000 man-years.

Formal verification takes a tremendous amount of skill and effort. The most impressive formally verified operating system we have today is seL4, as you mentioned. Its functionality is very limited, though.

> A ballpark figure for proving the correctness of 7500 lines of very complex code[2] is about 30 man years[3]. If even 1% of the 1 billion Windows users and sysadmins has to spend a couple hours on things related to this patch, it works out to 9615 man-years of worldwide waste (based on an 8 hour workday and 260 workdays a year).

1. I doubt this idea is practical

2. Assuming it's possible, the cost would be enormous

3. The resulting system would likely be hard to change without breaking its formal guarantees

4. Microsoft would have to justify this cost on their own terms. If the code is already 'stable enough', they would do better to spend that money developing other features. Which of course they did.

> 100 per cent bug-free and therefore immune to crashes and failures

seL4 are careful to state that it does not offer guarantees against timing-attacks or other side-channel attacks.


Microsoft is involved in this: https://mitls.org/ - but no idea if they get into certificate validation or if the verification focuses on just the TLS wire protocol.

The inertia and complacency of existing code is enormous; MS and other affluent vendors have been watching vulns in their codebases bubble up for decades, and their customers tolerate it.

Big MS customers buy into it deeply: companies won't hire security professionals as CSOs who would ban insecure "industry standard" setups such as org wide AD domains and MS office attachments that malware most often spreads through.


"Beware of bugs in the above code; I have only proved it correct, not tried it." - Donald Knuth

Do I understand it right that the implication is that the NSA helped find or patch bugs in MS crypto libraries? If so, that's incredibly useful.

It's part of their mission.

Back in the day they helped make DES stronger against differential cryptanalysis, which was unknown at the time.


When they feel like it; most of the time, however, they hold back vulnerabilities so they can exploit them for their own purposes.

The modern NSA is more black hat than white hat, sadly.


>When they feel like it; most of the time, however, they hold back vulnerabilities so they can exploit them for their own purposes.

You have no idea if that's true or not. It's probably true, but neither of us have the information required to assert our opinions on the matter as fact.


Pretty sure several leaks and their own admissions have confirmed that they do in fact hold back most of the time.

That has been confirmed this morning in the linked article.

Ah! - does this explain the weirdly specific message that came out from GCHQ last week?

https://www.telegraph.co.uk/news/2020/01/12/gchq-warns-not-u...


Not sure if related. Support ended, so it's a good recommendation whether some new attack is revealed or not. Especially when a new patch for a newer version can reveal unpatched issues in older versions. What do you think is oddly specific about it?

Just the emphasis on banking in particular. "Don't use it" is good advice. The banking element struck me as potentially hinting at crypto in particular, though I agree they may have just chosen a particularly sensitive area as an example.

Scary, but not an RCE, so the threat is limited. It would mean malicious actors could possibly create spoofed signatures on malware or possibly websites (EV certificates?). Am I missing something, or is there a way to turn a spoofed certificate into a single-click pwn? As I understand it, users would have to download a malicious payload or click a malicious URL to be exposed.

Edit: People are asking why I assume it's not an RCE. That's a good question, but I am assuming Krebs on Security wouldn't report an RCE as a potential certificate validation bug. This quote in particular:

> Equally concerning, a flaw in crypt32.dll might also be abused to spoof the digital signature tied to a specific piece of software.

If crypt32.dll has a memory bug that can be exploited by feeding it an ill-formed certificate, that is wormable and orders of magnitude more severe, not equally concerning.

We'll know more tomorrow, however.


Wait, how do you know it's not an RCE? Memory safety flaws in a DLL become RCE all the time.

> is there a way to turn a spoofed certificate into a single-click pwn?

e.g. The victim clicks on a link to go to your website, their machine wants to validate the TLS cert you sent it, it calls into crypt32.dll to do that, it corrupts memory while handling your attacking cert, pwn?

We don't know enough (anything!) about the actual bug yet other than which DLL it's in.


Certificates in the Web PKI (so they'd be trusted) aren't likely to be a good basis for an exploit.

You aren't supposed to get to pick very much of the document in the Web PKI. Rules forbid CAs from letting you write nonsense you made up into most places - they themselves get slightly more opportunity, but "Let's attack a Windows zero day" doesn't feel like a good use of control over a trusted CA. The biggest contiguous arbitrary chunk of data you get control over will be the public key; an RSA public key might be one kilobyte in size. So, conceivable but unlikely that you can squeeze a working exploit and payload in there.

An X.509 cert that isn't trusted in the Web PKI (ie hand rolled) is a better attack surface because you can write any amount of garbage without some busy body forbidding it, but now your problem is that a non-vulnerable client will see a problem immediately because it's untrusted which is weird. I don't look at the certificates on random web sites I visit... unless they don't work and then I might be curious.

I think code signing certs are a more likely vulnerability, unlike the Web PKI in practice this is down to Microsoft who have a very hands-off approach. I suspect you can get a lot more leeway to write crap into a cert for code signing and as a bonus it'll get seen by the vulnerable Windows PCs, because it's not as though Macs or Linux check the code signing certs on Windows software they can't run anyway.

But as you say we don't know anything, it could be a much more subtle but still dangerous bug.

For example suppose Microsoft's implementation of ECDHE misses a check on the parameters supplied by the other participant. This would make you vulnerable to invalid curve attacks when negotiating ECDHE. Of course a legitimate participant won't try to attack you, but you're doing ECDHE before you know the other participant's identity.
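A minimal sketch of the check being described, assuming a short-Weierstrass curve y^2 = x^3 + ax + b over GF(p); the function names are mine, and the constants would be the negotiated named curve's published parameters:

    # The validation an ECDHE implementation must not skip on the peer's point.
    def is_on_curve(x, y, p, a, b):
        return (y * y - (x * x * x + a * x + b)) % p == 0

    def validate_peer_point(point, p, a, b):
        if point is None:                       # point at infinity / absent
            raise ValueError("invalid peer public key")
        x, y = point
        if not (0 <= x < p and 0 <= y < p):
            raise ValueError("coordinates out of range")
        if not is_on_curve(x, y, p, a, b):
            # Skipping this lets an attacker substitute a point on a weaker
            # curve and mount an invalid curve attack against the handshake.
            raise ValueError("point is not on the negotiated curve")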


> will see a problem immediately because it's untrusted which is weird

You might be right, but I don't think your version of "immediately" matches up with the reality of code. How many cert library function calls do you think happen before a TLS client is able to decide that there is no trusted path to the cert's authority and decide that it's "weird"?

Hundreds or thousands wouldn't surprise me -- you've got to parse a file format, follow links -- if the vulnerability's in one of those functions, then there's your exploit. You even have to go out to the filesystem, check whether the self-signed CA is installed in the OS trust store, which would make it trusted. It's really easy for me to imagine such a vuln somewhere in cert trust verification.


I meant on unaffected platforms, say, Firefox on Linux. It'd be weird if I came to, say, Hacker News and now it had this untrusted cert (which is actually a worm attacking Windows PCs) instead of a normal one.

I agree that on an affected platform all bets are off, but deploying to a web server means you don't easily get to pick who your visitors are.


Just load a hidden image from a subdomain; it silently fails on an unaffected platform.

And you could also avoid even creating the iframe if the User Agent isn't Windows.

Oh yes, that's true. (Though now I wonder if cert negotiation can be dependent on useragent under SNI. It could certainly be dependent on TCP fingerprint characteristics.)

Agree with you that codesigning certs are a more likely problem space for this bug.


I don’t think this is ultimately the vulnerability, but even without SNI, it’s trivial to dynamically link to a file, script, png, etc. hosted on a subdomain with the bad cert.

Oh yeah! So it's like:

* client asks for cert

* you give it to them

* client tells you the page they want and their useragent

* if you think they're vulnerable based on what you've learned about them, you add <img src="https://vulnerable.subdomain/"> to the response.

Neat suggestion. Thanks! Agree we've moved well outside of tomorrow's likely actual vuln.
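The flow in that list, as a tiny hypothetical handler (Flask here purely for brevity; the subdomain and the user-agent heuristic are made up):

    from flask import Flask, request

    app = Flask(__name__)

    @app.route("/")
    def index():
        page = "<html><body>normal content</body></html>"
        ua = request.headers.get("User-Agent", "")
        if "Windows NT" in ua:  # crude guess that the visitor might be affected
            # vulnerable.subdomain.example would serve the attacking certificate;
            # the hidden image simply fails to load on unaffected platforms.
            page = page.replace(
                "</body>",
                '<img src="https://vulnerable.subdomain.example/p.png" hidden></body>',
            )
        return page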


Ugh, man. I've seen custom-rolled cert chain validation that tries to follow links to build the chain. This sounds like a nightmare.

Do we know that the MS TLS implementation parses self-signed certs only after successful validation? Many implementations offer an option of showing their contents (e.g. to see which CA it claims to be signed by), or offer the user a choice to accept it, or log identifying fields from the cert for diagnostic purposes. Same for the CA chain offered by the server.

I'm also not convinced that the fields-allowed-by-CAs set is safe, especially when you consider the set of all browser-accepted CAs, it's historically been a fraught field.

Also, with X.509 you have to decode the ASN.1 before you can even get to the signature, and there have historically been lots of vulnerabilities in ASN.1 parsers.


> Do we know that MS TLS implementation parses self-signed certs only after successful validation?

It would be a contradiction in terms. You can't validate the semantic content of a file before you've parsed it!


Yeah, I should have said "before it processes any other fields". The ASN.1 parse of course comes first.

You mentioned code signing certs. What are your thoughts on document signing certificates? They embed a public copy into the document itself, often not trusted.

I'm going to be disappointed if all of this hype is for non-code execution

This would be a terrifying outcome if it's certificate-based RCE.

Imagine an actor like google running an intentionally-compromised TLS certificate on their web servers... perhaps even targeted at specific IP addresses or on certain schedules to avoid detection.

Certificates are an excellent infection vector too. Especially if one wanted to intentionally present a backdoor like this for national security reasons (all of the cryptographic primitives are already right there for you to use to authenticate the cert).


I'd be more worried about either hacked sites, or a country with something like China's Great Cannon doing that, than a legitimate US company like Google.

Google doing that would be a great way for everyone involved to spend a long time in jail.


> Imagine an actor like google running an intentionally-compromised TLS certificate on their web servers... perhaps even targeted at specific IP addresses or on certain schedules to avoid detection.

For what it's worth, you should read about Certificate Transparency, it tries to protect against this sort of problem. Browsers only accept certs that can prove they've been submitted to a public log, which means that their existence isn't being selectively exposed to some users only.

(They mainly act as prevention against rogue certificate authorities rather than a rogue cert owner, though.)
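As a practical aside, anyone can query the CT logs for certificates issued for their domains; a sketch using the crt.sh search frontend (the endpoint and JSON field names here are my assumption of its interface, so treat them as illustrative):

    import json
    import urllib.request

    def certs_logged_for(domain):
        # crt.sh aggregates CT log entries and exposes a simple JSON search.
        url = f"https://crt.sh/?q={domain}&output=json"
        with urllib.request.urlopen(url) as resp:
            entries = json.load(resp)
        return [(e.get("not_before"), e.get("issuer_name")) for e in entries]

    for not_before, issuer in certs_logged_for("example.com")[:5]:
        print(not_before, issuer)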


> Am I missing something or is there a way to turn a spoofed certificate into a single-click pwn?

1) If there's a bug in the crypto library itself that creates RCE; 2) if a MITM delivers a payload to an application (browser/plugin/etc.) and that causes RCE. The former might cause the NSA to get its panties in a wad.

But it might be much worse than RCE. It might be like Heartbleed, where it can leak key material and memory all day (on servers and clients) and you'd never know it because the leak leaves no trace on the system and just looks like normal requests. Perfect exploit for spying. (And of course it's possible they had that exploit in their back pocket, but someone else reported it to M$, so now they're CYAing)


This is what I’m guessing

If the matching of the signatures is bypassed because of the ability to manipulate the underlying control flow of the API via something like ROP, it could be used for many things.

RCE in Windows doesn't mean control over digital signature validation. But RCE in that particular API might mean both.


We’ll find out tomorrow. Even if it’s just spoofing a valid signature on an exe, I’d still call that pretty serious. Might get a piece of malware masquerading as a legit background program past a lot of SOC analysts. I’d bet there’s more to it though.

Also kernel modules.

My money is on it disabling support for SHA-1 in old products; maybe a bigger break than the prefix attack currently known.

> Equally concerning, a flaw in crypt32.dll might also be abused to spoof the digital signature tied to a specific piece of software. Such a weakness could be exploited by attackers to make malware appear to be a benign program that was produced and signed by a legitimate software company.

This looks like it could be abused to lead to code execution.


It's an "airtight hatchway", guys!

No, crypt32 runs in user mode too.

Isn't the timing of this a bit of a coincidence with the announcement of the new SHA1 collision exploit?

I know what I'll be doing tomorrow evening...

Perhaps it's related to this? https://github.com/dotnet/aspnetcore/issues/13706

IIS crashing in crypt32.dll!CryptMsgClose() after a web request


The NSA prepping PR damage control already?

"Finally removed all totally inactive and harmless old _NSAKEY references."

I wouldn't bet against something along those lines.


What is the NSA’s role in this, and what kind of key reference do you mean?



