
Cryptic Rumblings Ahead of First 2020 Patch Tuesday - dustinmoris
https://krebsonsecurity.com/2020/01/cryptic-rumblings-ahead-of-first-2020-patch-tuesday/
======
sys32768
I wonder if Windows 7 will get this on its end of support day tomorrow.

~~~
kijin
I'm sure they set the EOL date to a Patch Tuesday for a reason. :)

Besides, end of support is not a hard deadline. Microsoft needs to produce
patches for Extended Security Updates (ESU) customers anyway, and some of that
can trickle down to ordinary users. Windows XP got an unexpected patch to fix
a particularly serious vulnerability several weeks after the official EOL.

------
CiPHPerCoder
Prediction: whatever this is will end up in pen-test reports for many years to
come.

Even if it's not a worthy successor to MS08-067.

~~~
zrth
It's not immediately obvious how MS08-067 and this are connected. So, to save
you a few seconds:
[https://en.m.wikipedia.org/wiki/Conficker](https://en.m.wikipedia.org/wiki/Conficker)

~~~
acqq
"This is probably one of the easiest ways into a network if not the easiest
way. Simply starting [xxxx] loading the module and giving it an IP address of
a vulnerable Windows host will get you full administrative access to that
system."

"I myself have performed penetration tests in other countries such as China,
and Russia where I was able to use MS08-067 to exploit systems running Windows
systems with language packs that I was unable to actually read."

[https://blog.rapid7.com/2014/02/03/new-ms08-067/](https://blog.rapid7.com/2014/02/03/new-ms08-067/)

------
computator
It's a shame that mathematically proving the correctness of code, _even for
extremely important code_, is essentially never done.

I wonder how many lines of code are in crypt32.dll. Is it on the order of
7,500 lines? If Microsoft had spent a few dozen man-years mathematically
proving the correctness of that code, they could have saved the world about
10,000 man-years.

Windows has a user base of about 1 billion[1]. A ballpark figure for proving
the correctness of 7,500 lines of very complex code[2] is about 30 man-years[3].
If even 1% of those 1 billion Windows users and sysadmins have to spend a
couple of hours on things related to this patch, it works out to about 9,615
man-years of worldwide waste (based on an 8-hour workday and 260 workdays a
year).
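
A quick sanity check of that arithmetic (the 1% share and the two-hour cost
are the guesses above, not measured data):

    # Back-of-envelope check; inputs are the guesses from the paragraph
    # above, not measured data.
    users = 1_000_000_000          # Windows user base [1]
    affected = users * 0.01        # 1% touched by the patch
    hours_each = 2                 # "a couple of hours" each
    man_year = 8 * 260             # 8-hour workday, 260 workdays/year
    print(affected * hours_each / man_year)  # ~9615.4 man-years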

Had there been a widespread exploit, it could have cost the world millions of
man-years.

[1]
[https://en.wikipedia.org/wiki/Usage_share_of_operating_syste...](https://en.wikipedia.org/wiki/Usage_share_of_operating_systems)

[2]
[https://www.schneier.com/blog/archives/2009/10/proving_a_com...](https://www.schneier.com/blog/archives/2009/10/proving_a_compu.html)

[3] Professor Gernot Heiser, the John Lions Chair in Computer Science in the
School of Computer Science and Engineering and a senior principal researcher
with NICTA, said for the first time a team had been able to prove with
mathematical rigour that an operating-system kernel—the code at the heart of
any computer or microprocessor—was 100 per cent bug-free and therefore immune
to crashes and failures. Verifying the kernel—known as the seL4
microkernel—involved mathematically proving the correctness of about 7,500
lines of computer code in a project taking an average of six people more than
five years.

~~~
bouncycastle
Note that even if you prove the correctness of the code, there might still be
bugs in the compiler, so you would have to prove the compiler too. Then there
could be bugs in the modules / DLLs / dependencies in the OS, so more proving
needs to be done. Then there might be bugs in the CPU. So it's not so easy.

Then there are things which you simply can't prove, e.g. in crypto the code
may be correct, yet it may still leak some side-channel information (such as
timing). Also, nobody has proven that things like SHA-256 and so on are
unbreakable.
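
To make the timing point concrete, here's a minimal Python sketch (nothing to
do with crypt32 itself) of how functionally correct code can still leak
through a side channel:

    import hmac

    def leaky_equal(tag: bytes, expected: bytes) -> bool:
        # Provably "correct", but == can return at the first mismatching
        # byte, so the running time leaks the length of the matching
        # prefix -- a classic timing side channel.
        return tag == expected

    def constant_time_equal(tag: bytes, expected: bytes) -> bool:
        # Examines every byte regardless of where mismatches occur.
        return hmac.compare_digest(tag, expected)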

~~~
TimTheTinker
The goal isn’t to deliver a provably correct system - that’s effectively
impossible.

But if core OS features or APIs (like the network stack) can be written in a
theorem-proving language, huge benefits can be realized — the attack surface
can be vastly reduced.

~~~
hinkley
People will just go around.

I find it hard to resist making a comment during those movies set in New York
City where a character has a steel-jacketed door and a million locks, but I
bet the wall is cheap gypsum board on studs at 18-inch centers. You could
steal a lot of stuff by going _around_ the door without even compromising the
building structure. You just need the right sheetrock knife and a quiet
hallway.

Someone told me about Microsoft having a data center in a leased building, and
they had the forethought to fill the space above the false ceiling with motion
sensors to prevent someone from just getting a ladder and going over the top
of the walls. People don't always think about these things.

If you have a model that works for certain patterns, people will begin to ask
what situations it can't handle. The attacks will move to that space. They may
even look at release notes and try things that were reported as fixed, similar
to the way people are now doing analysis on updates for operating systems.

Will it keep lazy criminals out? Yes. But they're not all lazy.

Where it's more likely to help is that you'll find crashing bugs you may have
missed, and you save some face by not having particularly naive bugs in your
code. That alone may be worth the cost of entry. But it's 'safer', not 'safe'.

~~~
MaxBarraclough
> Where it's more likely to help is that you'll find crashing bugs you may
> have missed, and you save some face by not having particularly naive bugs in
> your code.

That doesn't sound right at all. Systems with atrocious security are more
likely to be compromised than systems with good but imperfect security.
Attackers do not have infinite resources.

~~~
hinkley
Which also doesn't sound right.

I took the proposal on the table to mean that we systematically harden
infrastructure code. If that's what we're doing then a rising tide will lift
all boats. Except for the people who fall behind on upgrades. There is a
period of chaos where your prediction will play out over and over, but
eventually there's a new equilibrium where it's just different rocks to look
under. Then people get bored again and they fall behind on upgrades, and now
every time new functionality is added to the formal system, by definition
people using the old version aren't getting that protection, so you'd look
there first.

I think the critical question is how quickly a formal system reaches
homeostasis. If the rate of bugs (or maybe you call them features?) is
negligible after a brief flurry of initial activity, then most of my argument
collapses in a divide-by-zero error.

As to priorities, I don't think penetration and debugging are fundamentally
that different. And by that I mean, there's some fuzzy math you do regarding
cost (to you) of checking a thing times the value of having that information.
Yes, that means a ton of cheap scenarios will be tested very early. But
ultimately, how valuable is breaking into a boring system? It's definitely not
zero, because you've created another resource for yourself.

Doesn't code that runs on a lot of systems get far more attention than code
that runs on a few? Wasn't that part of why OS X got away with fewer viruses?
Not because it was profoundly harder to hack, but simply because fewer people
cared?

~~~
MaxBarraclough
> There is a period of chaos where your prediction will play out over and
> over, but eventually there's a new equilibrium where it's just different
> rocks to look under.

When we take steps to eliminate classes of vulnerabilities, those
vulnerabilities stop being a problem, and we're all more secure. For example,
code written in Java or C# is immune to the buffer-overflow and
use-after-free attacks that continue to plague C code.
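
A toy illustration of the difference, in Python (another memory-safe
language): the out-of-bounds write below becomes a catchable error instead of
silent memory corruption.

    buf = bytearray(16)
    try:
        buf[32] = 0x41  # out of bounds: C might corrupt adjacent memory
    except IndexError:
        # The memory-safe runtime bounds-checks the access and raises
        # instead, so this class of bug can't become an overflow exploit.
        print("write rejected")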

There was no counterweight built into this. Security was simply improved.

> Then people get bored again and they fall behind on upgrades, and now every
> time new functionality is added to the formal system, by definition people
> using the old version aren't getting that protection, so you'd look there
> first.

Failure to update packages causes security issues, yes. That doesn't mean that
security improvements don't, well, improve security, overall. They absolutely
do.

> If the rate of bugs (or maybe you call them features?) is negligible after a
> brief flurry of initial activity, then most of my argument collapses in a
> divide by zero error.

If they're using formal methods, they won't be adding bugs. (Discounting side-
channel issues.)

I believe it's pretty unusual to see a company make non-verified changes to a
codebase originally derived through formal methods. If they care about formal
verification, they'll probably stick with it as they make changes.

> how valuable is breaking into a boring system? It's definitely not zero,
> because you've created another resource for yourself.

Sure - botnets.

> Doesn't code that runs on a lot of systems get far more attention than code
> that runs on a few?

Of course, but I'm not seeing what you're getting at here.

------
jorblumesea
Do I understand it right that the implication is that the NSA helped find or
patch bugs in MS crypto libraries? If so, that's incredibly useful.

~~~
gruturo
It's part of their mission.

Back in the day they helped make DES stronger against differential
cryptanalysis, which was not publicly known at the time.

~~~
syshum
When they feel like it. Most of the time, however, they hold back
vulnerabilities so they can exploit them for their own purposes.

The modern NSA is more black hat than white hat, sadly.

~~~
skyyler
> When they feel like it. Most of the time, however, they hold back
> vulnerabilities so they can exploit them for their own purposes.

You have no idea if that's true or not. It's probably true, but neither of us
have the information required to assert our opinions on the matter as fact.

~~~
syshum
Pretty sure several leaks, and their own admissions, have confirmed that they
do in fact hold back most of the time.

------
Angostura
Ah! - does this explain the weirdly specific message that came out from GCHQ
last week?

[https://www.telegraph.co.uk/news/2020/01/12/gchq-warns-not-u...](https://www.telegraph.co.uk/news/2020/01/12/gchq-warns-not-use-windows-7-computers-banking-email-tuesday/)

~~~
viraptor
Not sure if it's related. Support ended, so it's a good recommendation whether
or not some new attack is revealed. Especially since a new patch for a newer
version can reveal unpatched issues in older versions. What do you think is
oddly specific about it?

~~~
Angostura
Just the emphasis on banking in particular. "Don't use it" is good advice. The
banking element struck me as potentially hinting at crypto in particular,
though I agree they may have just chosen a particularly sensitive area as an
example.

------
AaronFriel
Scary, but not an RCE, so the threat is limited. It would mean malicious
actors could possibly create spoofed signatures on malware or possibly on
websites (EV certificates?). Am I missing something, or is there a way to turn
a spoofed certificate into a single-click pwn? As I understand it, users would
have to download a malicious payload or click a malicious URL to be exposed.

Edit: People are asking why I assume it's not an RCE. That's a good question,
but I am assuming Krebs on Security wouldn't report an RCE as a potential
certificate validation bug. This quote in particular:

> Equally concerning, a flaw in crypt32.dll might also be abused to spoof the
> digital signature tied to a specific piece of software.

If crypt32.dll has a memory bug that can be exploited by feeding it an ill-
formed certificate, that is wormable and orders of magnitude more severe, not
equally concerning.

We'll know more tomorrow, however.

~~~
cjbprime
Wait, how do you know it's not an RCE? Memory safety flaws in a DLL become RCE
all the time.

> is there a way to turn a spoofed certificate into a single-click pwn?

e.g. The victim clicks on a link to go to your website, their machine wants to
validate the TLS cert you sent it, it calls into crypt32.dll to do that, it
corrupts memory while handling your attacking cert, pwn?

We don't know enough (anything!) about the actual bug yet other than which DLL
it's in.

~~~
tialaramex
Certificates in the Web PKI (so they'd be trusted) aren't likely to be a good
basis for an exploit.

You aren't supposed to get to pick very much of the document in the Web PKI.
Rules forbid CAs from letting you write nonsense you made up into most places
- they themselves get slightly more opportunity, but "let's attack a Windows
zero day" doesn't feel like a good use of control over a trusted CA. The
biggest contiguous arbitrary chunk of data you get control over will be the
public key; an RSA public key might be one kilobyte in size. So it's
conceivable, but unlikely, that you can squeeze a working exploit and payload
in there.

An X.509 cert that isn't trusted in the Web PKI (i.e. hand-rolled) is a better
attack surface, because you can write any amount of garbage without some
busybody forbidding it, but now your problem is that a non-vulnerable client
will immediately see that it's untrusted, which is weird. I don't look at the
certificates on random web sites I visit... unless they don't work, and then I
might be curious.

I think code signing certs are a more likely vulnerability; unlike the Web
PKI, in practice this is down to Microsoft, who have a very hands-off
approach. I suspect you get a lot more leeway to write crap into a cert for
code signing, and as a bonus it'll get seen by the vulnerable Windows PCs,
because it's not as though Macs or Linux boxes check the code signing certs on
Windows software they can't run anyway.

But as you say we don't know anything, it could be a much more subtle but
still dangerous bug.

For example suppose Microsoft's implementation of ECDHE misses a check on the
parameters supplied by the other participant. This would make you vulnerable
to invalid curve attacks when negotiating ECDHE. Of course a legitimate
participant won't try to attack you, but you're doing ECDHE before you know
the other participant's identity.
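
The check that gets missed is cheap. A Python sketch of the validation a
careful implementation performs on the peer's ECDHE public point (the
parameters below are NIST P-256):

    # Short Weierstrass curve y^2 = x^3 + ax + b (mod p); NIST P-256.
    p = 0xffffffff00000001000000000000000000000000ffffffffffffffffffffffff
    a = p - 3
    b = 0x5ac635d8aa3a93e7b3ebbd55769886bc651d06b0cc53b0f63bce3c3e27d2604b

    def peer_point_is_valid(x: int, y: int) -> bool:
        # An invalid-curve attack sends a point satisfying some *other*
        # curve equation; skipping this test is exactly the kind of
        # missed parameter check described above.
        if not (0 <= x < p and 0 <= y < p):
            return False
        return (y * y - (x ** 3 + a * x + b)) % p == 0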

~~~
cjbprime
> will immediately see that it's untrusted, which is weird

You might be right, but I don't think your version of "immediately" matches up
with the reality of code. How many cert library function calls do you think
happen _before_ a TLS client is able to decide that there is no trusted path
to the cert's authority and conclude that it's "weird"?

Hundreds or thousands wouldn't surprise me -- you've got to parse a file
format, follow links -- and if the vulnerability's in one of those functions,
then there's your exploit. You even have to go out to the filesystem to check
whether the self-signed CA is installed in the OS trust store, which would
make it trusted. It's really easy for me to imagine such a vuln somewhere in
cert trust verification.
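
As a rough illustration with Python's third-party "cryptography" package
(crypt32 obviously differs internally): every line below walks
attacker-supplied ASN.1/DER before any trust decision is possible. The
"cert.pem" file is a placeholder.

    from cryptography import x509

    with open("cert.pem", "rb") as f:   # placeholder path
        cert = x509.load_pem_x509_certificate(f.read())

    # All of this parses attacker-controlled ASN.1 structures; chain
    # building and the trusted/untrusted decision can only come later.
    print(cert.subject, cert.issuer)
    for ext in cert.extensions:         # SANs, key usage, policies, ...
        print(ext.oid)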

~~~
tialaramex
I meant on unaffected platforms, say, Firefox on Linux. It'd be weird if I
came to, say, Hacker News and now it had this untrusted cert (which is
actually a worm attacking Windows PCs) instead of a normal one.

I agree that on an affected platform all bets are off, but deploying to a web
server means you don't easily get to pick who your visitors are.

~~~
ikeboy
Just load a hidden image from a subdomain; it silently fails on an unaffected
platform.

~~~
Thorrez
And you could also avoid even creating the iframe if the User-Agent isn't
Windows.
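
Something like this Flask sketch would do it; the "attack.example" subdomain
serving the malicious cert is hypothetical:

    from flask import Flask, request

    app = Flask(__name__)

    @app.route("/")
    def index():
        # Only visitors whose User-Agent looks like Windows get the
        # hidden resource served with the malicious certificate; every
        # other platform sees a perfectly ordinary page.
        ua = request.headers.get("User-Agent", "")
        img = ('<img src="https://attack.example/p.png" hidden>'
               if "Windows NT" in ua else "")
        return f"<html><body>{img}</body></html>"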

------
hoseja
It's an "airtight hatchway", guys!

~~~
throwaheyy
No, crypt32 runs in user mode too.

------
jqueryin
Isn't the timing of this a bit of a coincidence with the announcement of the
new SHA-1 collision attack?

------
annoyingnoob
I know what I'll be doing tomorrow evening...

------
yread
Perhaps it's related to this?
[https://github.com/dotnet/aspnetcore/issues/13706](https://github.com/dotnet/aspnetcore/issues/13706)

IIS crashing in crypt32.dll!CryptMsgClose() after a web request

------
JackRabbitSlim
The NSA prepping PR damage control already?

"Finally removed all _totally inactive and harmless_ old _NSAKEY references."

I wouldn't bet against something along those lines.

~~~
zrth
What is the NSA's role in this, and what kind of key reference do you mean?

~~~
di
[https://en.wikipedia.org/wiki/NSAKEY](https://en.wikipedia.org/wiki/NSAKEY)

