Windows Server vulnerability requires immediate attention (cisa.gov)
425 points by ohjeez 36 days ago | 104 comments

Whitepaper for CVE-2020-1472: https://www.secura.com/pathtoimg.php?id=2055 [PDF]

"Microsoft provided patches for CVE-2020-1472 in August"

I missed this on reddit though, so more coverage is a good thing. Need to make sure all our managed servers aren't behind on any updates first thing Monday.

As this is a protocol level vulnerability, older versions of Samba were also affected.

Security release announcement is here:


After a previous vulnerability we changed our defaults to require schannel for release 4.8, which protects against the CVE-2020-1472 problem, but admins could turn off this protection to work with older/less secure products.

Further hardening of Samba is currently taking place to protect our users from the bug.

Does this mean that any server that uses Samba, windows or not, is vulnerable?

Looks like it; there is also a security notice for Ubuntu: https://usn.ubuntu.com/4510-1/

Only if you are running the Samba DC functionality, which is non-default. Also, you would have needed to specifically disable the protection that was added by Samba in 2018.
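For context, the Samba-side protection mentioned here is the documented `server schannel` smb.conf option. A minimal sketch of the hardened configuration (this has been the default since 4.8, so it only needs to be stated explicitly if something overrode it):

```ini
[global]
    # Require a cryptographically secured netlogon channel (schannel)
    # on the AD DC. "yes" has been the default since Samba 4.8 and is
    # the mitigation for CVE-2020-1472; setting it to "no" or "auto"
    # to accommodate legacy clients re-opens the hole.
    server schannel = yes
```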

If the server provides Domain Controller capability then yes, it implements NetLogon. Microsoft's NetLogon protocol is vulnerable to this attack by design, so Samba in that scenario is potentially vulnerable and has the same mitigation strategy that Microsoft applied.

If it's not a Domain Controller then this is irrelevant.

This refers to a vulnerability that was patched in August; any systems that are still unpatched are over a month behind. In general, most security patches (for any software that's in use) are urgent; once a patch is out, some adversaries are going to reverse-engineer the patch to find out what the bug was, and mass-exploit targets that haven't patched. Any server which is that far out of date on its patches is either in need of a sysadmin, or has a sysadmin who's being negligent. There is no excuse.

You’re missing the biggest reason this is relevant: enterprise IT shops with strict change management processes and, especially in government, years of austerity budgets cutting resources for both sysadmins and rigorous testing.

Either of the targets you mentioned is more a symptom than the root cause: management setting up bad incentives. If you have a change management process which takes a month to approve updates, the problem is not the sysadmin. If years of skimping mean that the operators are afraid to patch because they’ll be punished if it breaks things, and they don’t have a robust testing process, the problem is not the sysadmin.

I have a friend who works at a place where all their IT is outsourced, whose accounts payable stretches every invoice out to 120 days, and whose outsourcing company downs tools on all 90+ day delinquent accounts.

Every 3 months for at least the last 18 months, they've ended up with no updates/patching for a month. Friend informs compliance and risk manager, compliance and risk management scream at accounts payable to get the outsourcer paid in full to get this month's security update done, and the circle of life repeats.

I feel that organizations with a process that prevents critical fixes have a broken process. You either have to be OK with having your servers compromised (e.g. data stolen or user data leaked), or you have to accept that sometimes the engineer fixing a bug or adding a new feature might mess something up. I'm inclined to lean a bit more toward "move fast and break things" than "move so slow you get pwned"... but it's a delicate balancing act.

I studied IT security quite a lot, and I implement Windows patches for dozens of companies. While you are technically right, Microsoft releases broken patches _constantly_. If we pushed out every single patch the moment it was released, we would constantly be down and fighting fires. Most small and mid-sized companies don't have hacking campaigns run against them. Given this, it just doesn't make sense to push out every single patch immediately. Microsoft's patches are a whole lot more stable when they're a couple of months old.

This has been a real problem again in the Windows 10 era. By around 2008, Microsoft seemed to have finally gotten their patch process cleaned up to the point that if you were only taking security patches, they generally installed cleanly and mostly didn't break random things. By about 2016 this had backslid, and Windows 10 now seems intent on large-scale combined updates and constant servicing stack updates with undocumented consequences.

It's been a giant pain having spent years trying to get organizations to accept the need and learn to do this stuff reliably only to have the primary source of misery (Microsoft) repeatedly start biting them in the ass again for what should be best practices.

Meanwhile in the same timeframe most BSD and Linux releases have not only gotten their core software updates down to a science, they've also managed to build workflows that can include huge swathes of 3rd party open source and commercial software, which is so hilariously awful on windows that multiple companies build businesses around doing it.

> "This has been a real problem again in the Windows 10 era. By around 2008, Microsoft seemed to have finally gotten their patch process cleaned up to the point that if you were only taking security patches, they generally installed cleanly and mostly didn't break random things. By about 2016 this had backslid, and Windows 10 now seems intent on large-scale combined updates and constant servicing stack updates with undocumented consequences."

Microsoft laid off all their QA staff in 2014, so it's hardly surprising. If anything, it's a wonder that it's not much, much worse than it is now.

It’s partly broken process - my point being that the people at the top are more to blame than the sysadmin - but also that this is more expensive than people like to admit. You either need to accept lower security/reliability or spend more on staff, capacity, and licenses. Lots of places try to cut that corner and it’ll seem to work until, as Warren Buffet likes to say, the tide goes out.

This is a really tricky problem in government because the pay scales can be very hard to change. For example, the U.S. federal scale has hard caps: the GS scale max is currently $170k. That might not sound so bad, but historically the higher-level positions were senior and relatively limited, so you can't just effortlessly bump all of your developer positions up to the highest grade without hitting budget caps, and without upsetting people outside of IT who needed 25 years of experience and a bunch of direct reports to reach the rank you're proposing to offer to non-entry-level developers. That probably means you're hiring people at lower levels, at something closer to entry-level pay.

A few years back they actually had to change this to have a chance of hiring good infosec people, but that requires a lot of political wrangling even if everyone agrees that it's a good idea. (I know someone who got tired of waiting and jumped to a well-known tech company for a cool 200% raise.)


Many vulnerable organizations do not have "engineers who fix bugs"; they have teams of accountants and bookkeepers who run Excel and Xero, or teams of lawyers and paralegals who run Word, or medical practices or marketing firms or chemical wholesalers, or or or... The nearest thing most of them have to "an IT department" is the admin person who liaises with their outsourced IT provider and the manager who signs off on the bills every month.

Isn’t this the sort of issue where Defense in Depth comes in? You don’t want to rely on a secure LAN, but having a secure LAN _and_ a hardened server reduces your attack surface in the case of a 0-day.

This is exactly that sort of issue.

You also need to have someone who will be able to articulate this and be heard.

My wife just started working for a city (~1M pop) government. Her work computer is running Windows 7.

There is worse out there, last year I had a customer ask about installing my software on a Windows 2000 server.

I wonder how that is still possible? When I order servers the OS is typically included as a line item. This leads me to believe there are companies out there running 20yo server platforms on 15yo hardware. The markup for replacement parts for equipment that old is insane.

Net net that's probably more secure over the course of the past 5 years than upgrading, and certainly less buggy

> You’re missing the biggest reason this is relevant: enterprise IT shops with strict change management processes and, especially in government, years of austerity budgets cutting resources for both sysadmins and rigorous testing.

What's worse?

a) Deploying a fix and (unlikely) some, perhaps all, related systems fail until they are fixed internally or

b) Not deploying a fix and have your servers owned by an adversary?

Both are low probability/high impact. I don't expect a typical company to expose its domain controller to the WAN. And if the domain controller is down because of a botched update, pretty much everything else in the organisation is down. Not clear to me which one is worse.

You’re absolutely right in every regard, I just want to throw in a little flavor from my experience as a security consultant. I’ve worked with state governments where we had to tune out alerting of failed logins on their domain controller because the public login for their public facing site was backed directly by their internal Active Directory server and we were seeing thousands of failed login alerts every day.

The state of infosec is still that bad, and unfortunately most consumers can’t know of these problems, let alone choose to opt out. Right now much of the cost of a breach is borne by the end users who didn’t choose the poor level of security the organization implemented, and I am increasingly of the opinion that it’s better to bring down your organization’s IT infrastructure than to suffer a catastrophic breach. Because if the pain is borne by the internal IT teams more than the end user (who again often has no knowledge or no choice), eventually the company will be forced to implement better processes.

As long as the real cost of a breach is paid for by end users, organizations have very little incentive to improve.

Thanks. I feel that’s an unpopular opinion for purists, but very real for most people’s day-to-day.

I'm not sure the second one is so low probability. While the domain controller is not exposed to an external network, it is still exposed to the workstations.

Good on you for framing this in a realistic way.

I’d second cm2187’s general “it depends” and also note that in the environment I was describing this decision isn’t happening in a vacuum. The policy was probably set 15 years ago when someone updated a printer driver and after the production systems were back up someone chewed out the IT manager and said their job was on the line if it happened again. Now you need the CIO to approve updates and have a lengthy delay before touching core infrastructure. Sure, you can request an emergency waiver but that’s a lot of work, it’s frowned upon, and sitting on this patch probably won’t cause problems since this happens fairly often…

What the CISA memo does is change that dynamic: now most of the government has instructions to act and it’s thus personally riskier not to act promptly.

You assume these organizations are filled with developers. From working in government, state higher ed, and health care for over 20 years, I can tell you they are not. They are filled with techs who know how to call vendors (or are required to by leadership). So when the Epic EMR system breaks, and doctors and nurses can’t look up the info for an unconscious patient, and the patient ends up dying, then yeah, the IT shop takes forever to patch things.

There are operating system patches that break third party software releases. Some of those vendors take more than a month to release a patch.

That is the excuse but I don't think many people are even getting to testing the releases. The August update broke my software in a fairly fundamental way. I've been amazed by how few support issues that has caused so far. I can only conclude most of my customers neither upgraded nor tested that release so far.

This is where context-sensitive mitigations come into play: managing risk and balancing it against business priorities, keeping all stakeholders in the loop and part of the decision.

> Any server which is that far out of date on its patches is either in need of a sysadmin, or has a sysadmin who's being negligent.

... or whose sole purpose is to run software that only runs on Windows Server 2008 Service Pack 2 without any more recent updates. (And which also needs to be on a network that allows untrusted access. Perhaps unsurprisingly, this was Huawei WiFi management software. I'm _so_ glad I'm no longer on the hook for anything to do with that project/client...)

The patch available doesn’t fix the issue for non-domain-joined machines or Linux hosts, the last time I checked. To do that you need to manually set a registry key.

See step 3a in the following link for a guide on how to set the registry key. https://support.microsoft.com/en-ca/help/4557222/how-to-mana...
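As a sketch (registry path and value name as described in Microsoft's KB4557222 guidance; verify against the current article before deploying), the enforcement step amounts to creating one DWORD value on each domain controller:

```
:: Enable Netlogon "enforcement mode" (KB4557222, step 3a) on a
:: patched domain controller. After this, non-compliant devices must
:: be explicitly allowed via the "Domain controller: Allow vulnerable
:: Netlogon secure channel connections" group policy.
reg add "HKLM\SYSTEM\CurrentControlSet\Services\Netlogon\Parameters" /v FullSecureChannelProtection /t REG_DWORD /d 1 /f
```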

The more you have to manage, the more lax you can get and the more you overlook things. We switched RMMs this year and had our service manager quit in the middle of it, so there might be some holes in our reporting. Things like this are a reminder of the need to double-check and stay on top of things, if nothing else.

Most security patches don't have nearly the security implications that this one does. A lot of the time it may be for functionality you don't use or for some really extreme threat model. This is a widely used component that can be broken using a very realistic threat model.

These servers are not at all publicly facing. Hence, being vulnerable doesn't lead to immediate compromise.

An attacker would need to do initial intrusion first, and then discover the vulnerable server. This means there is a lot less pressure on getting these things patched immediately.

> These servers are not at all publicly facing

That’s a bold (and incorrect) statement. What you meant to say is “hopefully (and in most but not all cases) these servers are not publicly facing”. Because I’ve worked with an alarmingly high number of organizations where their internal AD absolutely is public facing.

Patching is not enough! You also need to change a registry key as explained in the Twitter thread.

Here's the description for CVE-2020-1472: An elevation of privilege vulnerability exists when an attacker establishes a vulnerable Netlogon secure channel connection to a domain controller, using the Netlogon Remote Protocol (MS-NRPC), aka 'Netlogon Elevation of Privilege Vulnerability'.


This is an amazing bug.

So what happens is, you're supposed to fill out a bunch of bytes as proof of who you are, and then a bunch of bytes that represent stuff like seconds since the start of the Unix epoch. If you can't do this, NetLogon figures you aren't really who you say you are.

And the exploit is: Fill everything out with all zeroes. This will succeed one time in 256 on average.

The reason why is complicated and somewhat interesting, but this stupidest possible exploit is what you get at the end of that complicated rationale.

I've written previously on HN that it stands out how terrible Microsoft is at cryptographic design. If there's an opportunity to roll your own and do it badly, in a Microsoft product that's what you should expect. Google has good people (it doesn't always use them, but most often it does) for this stuff, and Apple most often seems to accept that it doesn't have good people so it'll not roll its own but just use things that already exist; but Microsoft does this over and over.

In this particular case they took AES (seems fine) and an inappropriate but in principle secure cipher mode (CFB8) and then they... fixed the IV as all zeroes even though the definition of CFB is clear that you need to use a random IV.

> This is an amazing bug. They took AES (seems fine) and an inappropriate but in principle secure cipher mode (CFB8) and then they fixed the IV as all zeroes

A bit more detail on what happens next: "So with an all-zero IV and plaintext plus a randomly chosen key, you will end up with an all-zero ciphertext 1 in 256 times on average. [In other words] roughly once in every 256 times the server would randomly concoct a session key for which the correctly-encrypted version of their all-zero ClientChallenge would itself be all zeros."[1] Quoted from a detailed and nicely illustrated article about the bug.

[1] https://nakedsecurity.sophos.com/2020/09/17/zerologon-hackin...
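The 1-in-256 behaviour quoted above is easy to reproduce with a toy model. This sketch substitutes a hash-based PRF for AES (an assumption made purely to keep it dependency-free; the real protocol uses AES-CFB8), but the CFB8 mechanics, and why an all-zero IV collapses, are the same:

```python
import hashlib
import os

def prf_block(key: bytes, block: bytes) -> bytes:
    """Stand-in 16-byte block cipher. Netlogon uses AES; a hash-based
    PRF is enough to illustrate the CFB8 mechanics (an assumption of
    this sketch, not the real algorithm)."""
    return hashlib.sha256(key + block).digest()[:16]

def cfb8_encrypt(key: bytes, iv: bytes, plaintext: bytes) -> bytes:
    """CFB8: encrypt a sliding 16-byte window, XOR the first byte of
    the result with one plaintext byte, then shift that ciphertext
    byte into the window."""
    window = bytearray(iv)
    out = bytearray()
    for p in plaintext:
        c = prf_block(key, bytes(window))[0] ^ p
        out.append(c)
        window = window[1:] + bytes([c])
    return bytes(out)

def zero_ciphertext_hits(trials: int) -> int:
    """Count how often an all-zero IV plus an all-zero 8-byte plaintext
    encrypts to all zeros under a random key. If the first keystream
    byte is 0, the shift-register window never changes, so every
    output byte is 0; that single event has probability ~1/256."""
    hits = 0
    for _ in range(trials):
        key = os.urandom(16)
        if cfb8_encrypt(key, b"\x00" * 16, b"\x00" * 8) == b"\x00" * 8:
            hits += 1
    return hits

trials = 20000
print(f"{zero_ciphertext_hits(trials)}/{trials} all-zero ciphertexts "
      f"(expect roughly {trials // 256})")
```

The key observation is in `zero_ciphertext_hits`: one 1/256 event (the first keystream byte being zero) freezes the feedback window, which zeroes the entire ciphertext, exactly the condition the exploit probes for.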

An exploit plus 3 different poor security practices... insane.

> Apple most often seems to accept that it doesn't have good people so it'll not roll its own but just use things that already exist

Do they? Wasn't "goto fail" a bug in their SSL code? That one wasn't even a bug that a reviewer would need to reason about the logic of the code to see.

There are exceptions, but also that's an implementation bug.

This Microsoft vulnerability isn't a bug in the code, it's a design mistake, if you implement exactly what Microsoft's design document says for NetLogon, one time in 256 all zeroes lets you in. By design. Stupid stupid design.

If Microsoft had specified that the IV has to be set to 0102030405060708090A0B0C0D0E0F00 this vulnerability doesn't exist. Is there a different vulnerability? Maybe, it's hard to see how to attack it. But with all zeroes, just send all zeroes, what could be easier?

I am now convinced that Microsoft is purposefully degrading the quality of the cryptography at the behest of the NSA.

It would take a lot to convince me that my position is false.

Think about what that would take: You'd have to explain to me, at length, that no-no-no, Microsoft cares deeply about the security of their customers, they hire professional cryptographers, and that they keep up with the best practices in ciphers, protocols, and configuration defaults.

So let us see what kind of uphill battle that would be:

In 2020, Microsoft products have all of the following current cryptographic problems.

- There is no support for TLS 1.3, either on the server or client.

- HSTS is very hit & miss, with only Windows Server 2019 adding partial support here and there.

- Until very recently, you'd have to jump through hoops to enable TLS 1.1 and 1.2. The operating system had the capability for years, but... Microsoft chose not to enable it. Explain that one without resorting to: "This helps the NSA hack everyone that doesn't know what they are doing."

- Across a forest trust, RC4 is the default cipher.

- If you try to enforce AES ciphers for Active Directory, you'll break some forms of single-sign-on from Azure AD!

- If you use ECC certificates, you're stuck with the handful of now very thoroughly legacy curves. Don't even dream of support for elegant, modern, secure curves like Curve25519!

- Keep in mind that Microsoft first implemented ECC ciphers and the associated "modern" Key Storage Provider (KSP) in 2006 for Vista. So you would think that only 2003-era software would still require the legacy CryptoAPI system, right... right? You'd be wrong: Notably, you can't have elliptic curve certificates with: NDES, AD FS, SQL Server, SCCM until very recently, and in fact just about every Microsoft product except for IIS. Which I remind you still can't do TLS 1.3. In Windows Server v2004.

- Azure Key Vault can't issue anything but RSA certificates from third-party CAs.

- Azure's disk encryption similarly refuses to use ECC, and has to use RSA for disk encryption.

- You can't get free certificates in Azure from Let's Encrypt or Microsoft themselves. Because a 1KB file in 2020 should still cost $50 a year, am I right? Otherwise how would VeriSign make their billions!?

- Certificate Services has had nearly zero new features in like a decade. Microsoft could have revamped the web interface, added certificate transparency support, SQL Server database support, PowerShell commands that... do... things, or anything really. Anything at all.

- There's no replacement for AD CS for that matter. It's clearly legacy, but Azure AD or Intune have no replacement. Enjoy your 2000-era interfaces and methodologies. You seriously have to write an INI file and put it into C:\Windows before installation to configure it!

An alternative explanation: Microsoft execs cannot be convinced that the amount of money it would cost to remediate these security risks would not be recouped in extra sales.

An alternative alternative explanation: Microsoft execs were convinced by the NSA to stop spending money stopping government surveillance.

Prove me wrong.

Prove that no teapot, too small to be seen by telescopes, orbits the Sun somewhere in space between the Earth and Mars.

My argument is entirely facetious, of course.

But you have no reason to believe a teapot orbits the sun somewhere. There is no reasonable way to believe that a teapot could have "gotten there". No space programs launching teapots just for laughs. Space programs in general being far too expensive for someone to launch something without government oversight.

Etc, etc...

My point is that the NSA does exist. They do degrade cryptographic algorithms, either through national security letters or simply bribery. The Dual_EC_DRBG fiasco happened. It really happened. Private United States based organisations do cooperate with these programs, either willingly or because they are forced to.

Now, ask yourself: What would it look like if Microsoft was -- hypothetically -- cooperating with the NSA?

Since Windows is so widely used, any weakness in its crypto would be a problem for the US itself! There's no separate "export version" any more.

(Which reminds me: "Export-grade crypto". Remember that? That happened too. That was not a "conspiracy"! That was law! Recently.)

Back to my point: how would you degrade the crypto but protect US interests?

Well, one method would be to have strong crypto in the software, disable it by default, and mandate that all US government organisations turn the strong crypto on. Simply rely on IT administrator laziness and the tight budgets of most organisations to ensure that 99% of the world outside of the US Government remains on the weak sauce.

Exhibit A: FIPS mode.

Exhibit B: TLS 1.1 and 1.2 available but off by default.

Exhibit C: AES for Active Directory available but off by default.

Now do you get it? It looks suspicious.

It's one thing to accuse a neighbour randomly of murder. It's entirely another thing if you see them putting a shockingly large and heavy rolled up carpet in the boot of their car at three o'clock in the morning.

Since I haven't seen it posted: it's TCP 135 plus something up in the ephemeral range. 135 is the RPC endpoint mapper, which opens a second port you talk to. That second port is where NetLogon happens, is my understanding.

We have some 2008 R2 hosts that are in the process of shutting down that I had to gulag off.

Script to test vulnerable domain controllers here: https://github.com/SecuraBV/CVE-2020-1472

There was a patch released last Tuesday, wasn’t there?

Not so fun for those who are still on Server 2008, though. (Yes, that is still a thing, unfortunately.)

The patch is only necessary for domain controllers isn't it? If you're running Windows Server 2008 as DC, I don't want to know what your clients run.

Windows 10

I'm trying to get off! It's not easy! These apps won't run on a later version of Windows Server, they're too old! Tell my boss to give me more budget to get things done properly! :(

Why would that affect what the DC server is? Usually those are run as dedicated servers.

Haha, I was referring to the app server that's still running 2008. I guess maybe my comment wasn't relevant, as it's not a DC. Just thought I'd throw my frustration out there. :)

> Tell my boss to give me more budget to get things done properly

How much would someone get for getting this done?

Getting the app upgraded or changed to not need 2008 is the hard part. If the app could run on a later version of Windows, this would be really easy. I'm sure you can appreciate how difficult that might be. For custom apps, it could be a lot of code to update. For vendor apps, that app would also probably need to be upgraded to a newer version because the app is also end of life for some years now. In our case, it's a vendor app. No time and money to fix this situation, so we just keep it running and pray.

Usually $0.

I've been in this situation. Getting a $1000 budget approval took months and was _tough_.

Back in August.

Exploits for this have been floating around GitHub for at least a week already. The vulnerability game has become pretty fast-paced nowadays.

If I see it correctly, it's about the "August 2020 security update", which was published 11 Aug 2020, more than a month ago.

Correct. And "government-grade security" isn't a compliment. CISA definitely has some people who want to do the right thing, but they need budgetary control over other agencies to make it happen. (Speaking as someone who worked for the federal government for 15+ years, the most effective means of persuasion would be travel and reserved parking enforcement for Senior Executive Service members.)

CISA also doesn't have authority over DoD or IC systems, let alone the aforementioned budgetary authority to make them do it. No, it doesn't make any sense to apply a lesser standard to systems that are more sensitive. Like I said, government-grade security.

The financial sector is also terrible.

During SHA-1 deprecation, for example, almost all the trouble was with the financial sector. The bogus issuance that led to discovering the problems at StartCom / WoSign was detected because a financial services company, desperately trying to get a SHA-1 certificate issued after it was too late, found that WoSign would back-date the certificate for some undisclosed amount of money. Even some of the Symantec / CrossCert stuff comes back to the Korean banking and financial sector (in the district south of the river in Seoul which we'd anglicize "Gangnam", yes, that Gangnam...). And lots of "We must have RSA kex" was the financial sector too.

You've probably got a chip card as credit card or debit card, but even though that chip is relatively a technological heavyweight (compared to things like your employee badge that authorises access to the shared office printer, or a public transit card) the crypto in it is... not so hot, and the surrounding infrastructure built by financial companies is awful.

And the chip card doesn't actually secure the thing you care about - your money - it only secures the thing the issuer cares about, tying your transaction to you. Actual financial transfers are done entirely on a trust basis like it's still the 19th century, the card just presents authorization which is optional.

what I find most frustrating is that financial institutions buy extremely overpriced and brittle software from renowned vendors not because of technical excellence but for risk management: they want a supplier to blame when something goes wrong.

the problem with this is that nowadays suppliers are stronger in defending themselves from such blames than they are in writing good software.

furthermore, the software being sold is very pricey and the cost of customising it often exceeds the cost of the software itself.

then, 9 times out of 10 the issues lie in the customisations, and that isn’t surprising, as the institution will mostly be using the custom parts, since those are the ones they need most. here the vendors have even more grip in demonstrating that it’s the customer requirements’ fault, and not their crappy software that confuses the concepts of “database” and “application server”.

I try to counter this by writing solid, robust software using open source components, giving back when possible, perfectly filling the company’s requirements, well integrated and reasonably cheap to maintain. slowly, this can help in abandoning vendors and building in-house know-how.

I find this crucial because the quality of software sold by some vendors is very very low, and getting worse.

I've used Linux so much, I initially read the title as "Window (singular) Server Vulnerability...", leading me to wonder what was wrong with Xorg this time...

Isn't Xorg technically the Xwindows client?

A certain Don Hopkins has written about this:

"We have tried to avoid paragraph-length footnotes in this book, but X has defeated us by switching the meaning of client and server. In all other client/server relationships, the server is the remote machine that runs the application (i.e., the server provides services, such as database service or computational service). For some perverse reason that's better left to the imagination, X insists on calling the program running on the remote machine "the client." "



Things are complicated when the remote computer calls your desktop. But that's how X works.

That's also why once in a while somebody discovers a completely unexpected vulnerability in it that, although simple, nobody thought about before. Things are so non-intuitive that it's hard even to talk about them.

Yet, somehow it works. And works quite well. I imagine the authors of that book are quite annoyed by how Unix evolved to work really well while still avoiding fixing any fundamental problem.

Is the whole book full of misunderstandings like that?

There may be a point to arguing that X should use a Remote Desktop model (like Windows or VNC) instead of a window-server model, but the terminology is correct for how X works.

No, Xorg (or at least the component of Xorg I think you're thinking about) is the server -- specifically, the display server.

Clients -- the individual applications you run -- connect to the display server to draw themselves. (so, e.g., xeyes, xlogo, Firefox, etc. are the clients).

Xwindows is a server in the same sense that a network printer (or print server) is a server: it sits there waiting to receive network connections from clients, and it allows whoever connects to do stuff which is tangible to the user.

The fact that you're interacting with it makes it also feel like a kind of client. But technically, the clients are asking X to allow them interact with you. Though one of the things you may do with X is control launcher software that ends up starting more clients which interact with you through X.

Technically it is the X Window System. :)


X Windows? Don't you mean "The Window System named X?"


How long has this vulnerability been known but kept quiet for use in the NSA's library of exploits? EternalBlue was based on a Windows exploit they knew about but withheld from Microsoft so they could continue to exploit it. That deliberate withholding of information has cost many organizations a lot of money:



Is this another problem that could have been fixed a long time ago?

> Is this another problem that could have been fixed a long time ago?

Yes. The bug was documented in the public documentation, not even hidden in the closed source code.

In general, when your OS or system software (Database, Webserver, etc.) vendor releases a patch for an exploitable vulnerability, this requires immediate attention.

A patch is a global vulnerability alert.

> We have directed agencies to implement the patch across their infrastructure by Monday, September 21

Actually, they have until Monday.

CVE-2020-1472 is patched here too: https://blog.0patch.com/ and https://0patch.com/. Great for old systems where updating breaks stuff.

This is from a month ago, if you patched your DCs in August you are good.

Sorry for my ignorance, but do Linux servers have critical vulnerabilities like this, and as frequently?

Edit: thanks for downvoting a serious question.

Generally no, for many reasons. I think the most important one here is that in a Windows server there’s a lot of software bundled together (AD services, the SMB server, IIS, RPC stuff like this netlogon interface, WMI, RDP, many others) that you can generally expect to find on any Windows server, giving you a wealth of targets and potential ways to exploit how different components interact. A base Linux system has relatively few services (just SSH in simple distributions) so to begin targeting Linux the same way you’d have to decide to go after RHEL or something, narrowing an already tiny share of servers down even further. Other reasons include the dominance of Windows outside the tech industry (which makes it a more interesting and lucrative target for vulnerability researchers and exploit developers), lack of comparable functionality to AD for Linux servers, and probably higher standards for code quality and cryptography in highly scrutinized open source applications like MIT Kerberos.

> A base Linux system has relatively few services (just SSH in simple distributions) so to begin targeting Linux the same way you’d have to decide to go after RHEL or something

Not sure why it should be RHEL; there are Linux counterparts for most of the bundled software you mention, so a comparison against a base Linux system plus e.g. samba/vnc/nginx/... still seems fair. All of those have had (sometimes severe, i.e. root escalation) vulnerabilities in the past, but whether it's 'as frequent' is pretty hard to answer. I would also guess 'no', though.

Yeah, there are applications that are very common, but still exploits against them won’t be as universally applicable as Windows ones. You’re likely to find a Linux server with either Samba or nginx or some VNC software but all at once is less common, and there’s a lot of variety with web and VNC servers.

Thank you very much for your explanation.

Linux servers have vulnerabilities that are just as critical but usually less frequently. One that comes to mind is shellshock. https://en.wikipedia.org/wiki/Shellshock_(software_bug)

I like the question (& answers), thanks for asking

Wow, the hospital my wife works at was hit with a ransomware attack today, I wonder if this is the vector the attacker used.

If they had a domain controller with ports opened to the internet they would have been hit some other way already (not to mention suffering from constant account lockouts from random brute forcing), so this would probably not be the initial vector. Once the exploits are more polished this will make things easier for ransomware to escalate privileges but they already had effective ways to get to domain admin that work on most networks.

They are slow off the mark here

Microsoft really can't catch a fucking break this year, can they?

Their stock has done mighty fine. Before anyone points out that it's down right now: check how much it's gone up YTD. I still feel their stock is highly undervalued considering where their competitors' stock is at.

It's 2020 for Microsoft just like it is for the rest of us.
