Twitter post explaining it:
Blog post explaining it:
Security release announcement is here:
After a previous vulnerability we changed our defaults to require schannel for release 4.8, which protects against the CVE-2020-1472 problem, but admins could turn off this protection to work with older/less secure products.
Further hardening of Samba is currently taking place to protect our users from the bug.
If it's not a Domain Controller then this is irrelevant.
Either of the targets you mentioned is more the symptom than the root cause: management setting up bad incentives. If you have a change management process which takes a month to approve updates, the problem is not the sysadmin. If years of skimping means that the operators are afraid to patch because they’ll be punished if it breaks things and they don’t have a robust testing process, the problem is not the sysadmin.
Every 3 months for at least the last 18 months, they've ended up with no updates/patching for a month. Friend informs the compliance and risk manager, compliance and risk management screams at accounts payable to get the outsourcer paid in full to get this month's security updates done, and the circle of life repeats.
It's been a giant pain, having spent years trying to get organizations to accept the need and learn to do this stuff reliably, only to have the primary source of misery (Microsoft) repeatedly bite them in the ass again for doing what should be best practice.
Meanwhile in the same timeframe most BSD and Linux releases have not only gotten their core software updates down to a science, they've also managed to build workflows that can include huge swathes of 3rd party open source and commercial software, something which is so hilariously awful on Windows that multiple companies build businesses around doing it.
Microsoft laid off all their QA staff in 2014, so it's hardly surprising. If anything, it's a wonder that it's not much, much worse than it is now.
This is a really tricky problem in government because the pay scales can be very hard to change. For example, the U.S. federal scale has hard caps - the GS scale max is currently $170k, which might not sound that bad, but historically the higher-level positions were senior and relatively limited. It’s not like you can just effortlessly bump all of your developer positions up to the highest grade without hitting budget caps, and without other people being upset that someone outside of IT needed 25 years of experience and managing a bunch of people to get to the same rank you’re proposing to offer to non-entry-level developers. That probably means you’re hiring people at lower levels, which are more like entry-level pay.
A few years back they actually had to try just to have a chance of hiring good infosec people, but that requires a lot of political wrangling even if everyone agrees that it’s a good idea. (I know someone who got tired of waiting and jumped to a well-known tech company for a cool 200% raise.)
a) Deploying a fix and risking that (unlikely) some, perhaps all, related systems fail until they are fixed internally, or
b) Not deploying a fix and having your servers owned by an adversary?
The state of infosec is still that bad and unfortunately most consumers can’t know of these problems, let alone choose to opt out. Right now much of the cost of a breach is borne by the end users who didn’t choose the poor level of security the organization implemented, and I am increasingly of the opinion that it’s better to bring down your organization’s IT infrastructure than to suffer a catastrophic breach. Because if the pain is borne by the internal IT teams more than the end user (who again often has no knowledge or no choice), eventually the company will be forced to implement better processes.
As long as the real cost of a breach is paid for by end users, organizations have very little incentive to improve.
What the CISA memo does is change that dynamic: now most of the government has instructions to act and it’s thus personally riskier not to act promptly.
... or whose sole purpose is to run software that only runs on Windows Server 2008 Service Pack 2 without any more recent updates. (And which also needs to be on a network that allows untrusted access. Perhaps unsurprisingly, this was Huawei WiFi management software. I'm _so_ glad I'm no longer on the hook for anything to do with that project/client...)
See step 3a in the following link for a guide on how to set the registry key. https://support.microsoft.com/en-ca/help/4557222/how-to-mana...
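If it helps, here's a rough sketch of what that step amounts to, expressed in Python via the standard winreg module. The value name FullSecureChannelProtection and its location are from my reading of that KB, so double-check against step 3a before trusting this (and run it elevated, on the DC):

    # Sketch only: enable Netlogon enforcement mode per KB4557222, step 3a.
    # Verify the value name and path against the article; run as Administrator.
    import winreg

    KEY_PATH = r"SYSTEM\CurrentControlSet\Services\Netlogon\Parameters"

    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                        winreg.KEY_SET_VALUE) as key:
        # 1 = enforcement mode: reject vulnerable Netlogon secure channel
        # connections instead of merely logging them.
        winreg.SetValueEx(key, "FullSecureChannelProtection", 0,
                          winreg.REG_DWORD, 1)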
An attacker would need to achieve initial intrusion first, and then discover the vulnerable server. This means there is a lot less pressure to get these things patched immediately.
That’s a bold (and incorrect) statement. What you meant to say is “hopefully (and in most but not all cases) these servers are not publicly facing”. Because I’ve worked with an alarmingly high number of organizations where their internal AD absolutely is public facing.
So what happens is, you're supposed to fill out a bunch of bytes as proof of who you are, and then a bunch of bytes that represent stuff like seconds since the start of the Unix epoch. If you can't do this, NetLogon figures you aren't really who you say you are.
And the exploit is: Fill everything out with all zeroes. This will succeed one time in 256 on average.
The reason why is complicated and somewhat interesting, but this stupidest possible exploit is what you get at the end of that complicated rationale.
I've written previously on HN that it stands out how terrible Microsoft is at cryptographic design. If there's an opportunity to roll your own and do it badly, in a Microsoft product that's what you should expect. Google has good people for this stuff (it doesn't always use them, but most often it does), and Apple most often seems to accept that it doesn't have good people, so it won't roll its own and will just use things that already exist; but Microsoft does this over and over.
In this particular case they took AES (seems fine) and an inappropriate but in principle secure cipher mode (CFB8) and then they... fixed the IV as all zeroes even though the definition of CFB is clear that you need to use a random IV.
A bit more detail on what happens next: "So with an all-zero IV and plaintext plus a randomly chosen key, you will end up with an all-zero ciphertext 1 in 256 times on average. [In other words] roughly once in every 256 times the server would randomly concoct a session key for which the correctly-encrypted version of their all-zero ClientChallenge would itself be all zeros." Quoted from a detailed and nicely illustrated article about the bug.
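The 1-in-256 figure is easy to reproduce. Here's a minimal sketch in Python with the cryptography package (the trial count and variable names are mine; this reproduces only the statistical property, not the Netlogon protocol itself):

    # AES-CFB8 with the fixed all-zero IV NetLogon specifies. With a random
    # key, an all-zero 8-byte plaintext encrypts to all-zero ciphertext
    # whenever the first byte of AES_k(0^16) is zero: about 1 time in 256,
    # because a zero ciphertext byte leaves the shift register all-zero.
    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    ZERO_IV = bytes(16)     # the design flaw: a fixed, all-zero IV
    CHALLENGE = bytes(8)    # an attacker-chosen all-zero ClientChallenge

    trials, hits = 100_000, 0
    for _ in range(trials):
        key = os.urandom(16)  # the server's randomly concocted session key
        enc = Cipher(algorithms.AES(key), modes.CFB8(ZERO_IV)).encryptor()
        if enc.update(CHALLENGE) + enc.finalize() == CHALLENGE:
            hits += 1         # correctly-encrypted all-zeros is... all zeros

    print(f"{hits}/{trials} keys hit (expected about {trials // 256})")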
Do they? Wasn't "goto fail" a bug in their SSL code? That one wasn't even a bug that a reviewer would need to reason about the logic of the code to see.
This Microsoft vulnerability isn't a bug in the code; it's a design mistake. If you implement exactly what Microsoft's design document says for NetLogon, one time in 256 all zeroes lets you in. By design. Stupid, stupid design.
If Microsoft had specified that the IV has to be set to 0102030405060708090A0B0C0D0E0F00, this vulnerability wouldn't exist. Is there a different vulnerability? Maybe, but it's hard to see how to attack it. But with all zeroes, just send all zeroes; what could be easier?
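For contrast, a sketch of the same experiment with that hypothetical fixed nonzero IV (same caveats as the snippet above): an all-zero ciphertext now requires all eight keystream bytes to be zero independently, roughly a 2^-64 event per key instead of 2^-8, so sending all zeroes gets you nowhere.

    # Same experiment, but with a fixed *nonzero* IV. The shift register can
    # no longer sit at all-zero, so each of the 8 ciphertext bytes must hit
    # zero on its own (~(1/256)^8 per key). Expect 0 hits.
    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    IV = bytes.fromhex("0102030405060708090A0B0C0D0E0F00")
    hits = 0
    for _ in range(100_000):
        enc = Cipher(algorithms.AES(os.urandom(16)), modes.CFB8(IV)).encryptor()
        if enc.update(bytes(8)) + enc.finalize() == bytes(8):
            hits += 1
    print(hits)  # all-zero ciphertexts observed: effectively always 0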
It would take a lot to convince me that my position is false.
Think about what that would take: You'd have to explain to me, at length, that no-no-no, Microsoft cares deeply about the security of their customers, they hire professional cryptographers, and that they keep up with the best practices in ciphers, protocols, and configuration defaults.
So let us see what kind of uphill battle that would be:
In 2020, Microsoft products have all of the following current cryptographic problems.
- There is no support for TLS 1.3, either on the server or client.
- HSTS is very hit & miss, with only Windows Server 2019 adding partial support here and there.
- Until very recently, you'd have to jump through hoops to enable TLS 1.1 and 1.2. The operating system had the capability for years, but... Microsoft chose not to enable it. Explain that one without resorting to: "This helps the NSA hack everyone that doesn't know what they are doing."
- Across a forest trust, RC4 is the default cipher.
- If you try to enforce AES ciphers for Active Directory, you'll break some forms of single-sign-on from Azure AD!
- If you use ECC certificates, you're stuck with the handful of now very thoroughly legacy curves. Don't even dream of support for elegant, modern, secure curves like Curve25519!
- Keep in mind that Microsoft first implemented ECC ciphers and the associated "modern" Key Storage Provider (KSP) in 2006 for Vista. So you would think that only 2003-era software would still require the legacy CryptoAPI system, right... right? You'd be wrong: notably, you can't use elliptic curve certificates with NDES, AD FS, SQL Server, SCCM (until very recently), and in fact just about every Microsoft product except for IIS. Which, I remind you, still can't do TLS 1.3. In Windows Server v2004.
- Azure Key Vault can't issue anything but RSA certificates from third-party CAs.
- Azure's disk encryption similarly refuses to use ECC, and has to use RSA for disk encryption.
- You can't get free certificates in Azure from Let's Encrypt or Microsoft themselves. Because a 1KB file in 2020 should still cost $50 a year, am I right? Otherwise how would VeriSign make their billions!?
- Certificate Services has had nearly zero new features in like a decade. Microsoft could have revamped the web interface, added certificate transparency support, SQL Server database support, PowerShell commands that... do... things, or anything really. Anything at all.
- There's no replacement for AD CS for that matter. It's clearly legacy, but Azure AD or Intune have no replacement. Enjoy your 2000-era interfaces and methodologies. You seriously have to write an INI file and put it into C:\Windows before installation to configure it!
Prove me wrong.
But you have no reason to believe a teapot orbits the sun somewhere. There is no reasonable way to believe that a teapot could have "gotten there". No space programs launching teapots just for laughs. Space programs in general being far too expensive for someone to launch something without government oversight.
My point is that the NSA does exist. They do degrade cryptographic algorithms, either through national security letters or simply bribery. The Dual_EC_DRBG fiasco happened. It really happened. Private United States based organisations do cooperate with these programs, either willingly or because they are forced to.
Now, ask yourself: What would it look like if Microsoft was -- hypothetically -- cooperating with the NSA?
Since Windows is so widely used, any weakness in its crypto would be a problem for the US itself! There's no separate "export version" any more.
(Which reminds me: "Export-grade crypto". Remember that? That happened too. That was not a "conspiracy"! That was law! Recently.)
Back to my point: how would you degrade the crypto but protect US interests?
Well, one method would be to have strong crypto in the software, disable it by default, and mandate that all US government organisations turn the strong crypto on. Simply rely on IT administrator laziness and the tight budgets of most organisations to ensure that 99% of the world outside of US Government remains on the weak sauce.
Exhibit A: FIPS mode.
Exhibit B: TLS 1.1 and 1.2 available but off by default.
Exhibit C: AES for Active Directory available but off by default.
Now do you get it? It looks suspicious.
It's one thing to accuse a neighbour randomly of murder. It's entirely another thing if you see them putting a shockingly large and heavy rolled up carpet in the boot of their car at three o'clock in the morning.
We have some 2008R2 hosts that are in the process of being shut down that I had to gulag off.
Not so fun for those who are still on Server 2008, though. (Yes, that is still a thing unfortunately.)
How much would someone get for getting this done?
I've been in this situation. Getting a $1000 budget approval took months and was _tough_.
CISA also doesn't have authority over DoD or IC systems, let alone the aforementioned budgetary authority to make them do it. No, it doesn't make any sense to apply a lesser standard to systems that are more sensitive. Like I said, government-grade security.
During SHA-1 deprecation, for example, almost all the trouble was with the financial sector. The bogus issuance that led to discovering the problems at StartCom / WoSign was detected because a financial services company, desperately trying to get a SHA-1 certificate issued after it was too late, found that WoSign would back-date the certificate for some undisclosed amount of money. Even some of the Symantec / Crosscert stuff comes back to the Korean banking and financial sector (in the district south of the river in Seoul which we'd anglicize as "Gangnam"... yes, that Gangnam). And lots of "We must have RSA kex" was the financial sector too.
You've probably got a chip card as credit card or debit card, but even though that chip is relatively a technological heavyweight (compared to things like your employee badge that authorises access to the shared office printer, or a public transit card) the crypto in it is... not so hot, and the surrounding infrastructure built by financial companies is awful.
And the chip card doesn't actually secure the thing you care about - your money - it only secures the thing the issuer cares about, tying your transaction to you. Actual financial transfers are done entirely on a trust basis like it's still the 19th century, the card just presents authorization which is optional.
The problem with this is that nowadays suppliers are stronger at defending themselves from such blame than they are at writing good software.
Furthermore, the software being sold is very pricey, and the cost of customising it often exceeds the cost of the software itself.
Then, 9 times out of 10, the issues lie in the customisations, and that isn't surprising, as the institution will mostly be using the custom parts: those are the ones they need most. Here the vendors have even more grip in demonstrating that it's the customer requirements' fault, and not their crappy software, that confuses the concepts of "database" and "application server".
I try to counter this by writing solid, robust software using open source components, giving back when possible, perfectly filling the company's requirements, well integrated and reasonably cheap to maintain. Slowly, this can help in abandoning vendors and building in-house know-how.
I find this crucial because the quality of software sold by some vendors is very, very low, and getting worse.
"We have tried to avoid paragraph-length footnotes in this book, but X has defeated us by switching the meaning of client and server. In all other client/server relationships, the server is the remote machine that runs the application (i.e., the server provides services, such as database service or computational service). For some perverse reason that's better left to the imagination, X insists on calling the program running on the remote machine "the client." "
That's also why once in a while somebody discovers a completely unexpected vulnerability in it that, although simple, nobody thought about before. Things are so non-intuitive that it's hard even to talk about them.
Yet, somehow it works. And works quite well. I imagine the authors of that book are quite annoyed by how Unix evolved to work really well while still avoiding fixing any fundamental problem.
There may be a point to arguing that X should use a Remote Desktop model (like Windows or VNC) instead of a Window Server model, but the terminology is correct for how X works.
Clients -- the individual applications you run -- connect to the display server to draw themselves. (so, e.g., xeyes, xlogo, Firefox, etc. are the clients).
The fact that you're interacting with it makes it also feel like a kind of client. But technically, the clients are asking X to allow them to interact with you. Though one of the things you may do with X is control launcher software that ends up starting more clients which interact with you through X.
Is this another problem that could have been fixed a long time ago?
Yes. The bug was documented in the public documentation, not even hidden in the closed source code.
Actually, they have until Monday.
Edit: thanks for downvoting a serious question.
Not sure why it should be RHEL; there are Linux counterparts for most bundled software you mention, so a comparison against a base Linux system + e.g. samba/vnc/nginx/... still seems fair. All of those have had vulnerabilities (sometimes severe, i.e. root escalation) in the past, but whether it's 'as frequent' is pretty hard to answer. I would also guess towards 'no', though.