Linux is Most Used OS in Microsoft Azure – over 50 percent of VM cores (build5nines.com)
351 points by crpietschmann 13 days ago | 332 comments

I'm interested to know what the majority of the Windows machines are used for.

I can't think it'd be anything other than 'big enterprise' applications, where there seems to be a lot of bespoke stuff built specifically for Windows Server (e.g. finance software, healthcare systems, marriage registration, taxi licensing, nursing home management, etc).

Unfortunately this sort of stuff is rarely available in the open-source world, because there is so little demand for it from individuals.

The wackiest one I've come across is a CMS (Crematorium Management System). If anyone wants to create an open-source one with me, please get in touch...!

You don't see it on HN much but the Windows world is still massive, and there are still plenty of ASP.NET folk around. I'd suppose you only have to take half a footstep outside the "pure Internet services" sphere to start bumping into lots of Windows, in pretty much any direction you happen to step.

There are also whole industries where Windows has a particularly strong influence, e.g. (from experience) some parts of finance, petroleum, television, and I imagine things like render farms or other specialist graphics work that might have grown out of a once Windows-only desktop software cottage industry.

I guess it's also important to say, much as you might find a zealous love for UNIX inside many (computer) engineering-led organizations, similar loyalty to Windows exists in many more (locally under-represented) industries for reasons that are less likely to be understood around here.

Yeah, it's huge. My industry is 95% .NET. A lot of [LARGECO]s use Outlook, which means the entire Office stack is dragged into use, which makes LDAP the logical choice, plus Windows management of it, and then the rest of the tooling because why not...

I think it's an SMB thing. Basically Windows owns organizations with less than 10-30 people. And there are a _LOT_ of those, and they do a _LOT_ of custom development for their little specialization. It's the Excel sheets that grow into full-blown applications, or the VB.NET one-off utilities someone hacked together that grow into the backbone of a thriving small business. Even the generic-category SMBs (dentists, car repair, etc.) end up mostly being Windows shops because the owner gets QuickBooks or whatever and runs the whole thing from a Windows PC in the office.

I entirely disagree with the notion that Windows is strongest in SMB. Active Directory + GPO + various third-party device management stuff, plus the depth of Office-related integrations, and Microsoft's absolutely massive track record for supporting all kinds of uses, put Windows way ahead for large-scale compliance and auditing.

On the side of deploying and managing workstations, I would take Windows every single time over MacOS or Linux. For real, some combination of Apple Business Manager and JAMF is supposed to go head-to-head against AD+GPO? On Linux--Centrify, or something from Red Hat, that might well be more costly than Windows licenses (guessing here)?

No way around using windows as a private medical practice. Too much bespoke software.

Yep. I don't think I've ever seen a computer at any of the healthcare services I've used that wasn't either ancient and running some really old bespoke software, or some flavour of Windows running slightly less old bespoke software.

> and I imagine things like render farms or other specialist graphics work that might have grown out of a once Windows-only desktop software cottage industry

Much render farm/graphics work was once Mac territory, but given Apple's ridiculous pricing, lack of server options besides stacked Mac Minis, and lack of support for NVidia stuff, many have shifted away. Truly sad...

I see its size as relative to locked-in technology from the past, or what individuals already know.

I'm currently a WPF developer, since that was the technology chosen by the previous developer, and I cannot stand working with Windows 10. People praise Visual Studio, but under heavy development it crashes often. It is not designed for coding/typing. Visual Studio Code is better at that, and it has only been around a short amount of time.

.NET / C# is useful, but frameworks like Entity Framework are layered bloat where standard SQL is so much faster and cleaner to use. Moving from Entity Framework to standard SQL reduced start-up time by 10 seconds in one application. All the benefits Entity Framework has apply only if the provider for your DB back-end has those abilities coded within it.

I actually miss being an embedded and full-stack developer writing C, C++, Go, TypeScript, JavaScript... Even writing cross-platform Qt5 applications on Linux and only using Mac or Windows to build and package the binaries is more pleasant.

A big part is that Linux / BSD is focused around development, where Windows is a second-class citizen. Take the registry, which prevents simply backing up and restoring application settings, versus a config.ini file or such, which can be stored in git. Or the directory naming, where System32 holds the 64-bit DLLs and SysWOW64 holds the 32-bit DLLs. The inability to use very useful commands such as rsync and lsof, or even piping through SSH. Or being forced to keep useless applications around/running/installed, such as Xbox or Cortana.

Example: `echo -e "#include <stdio.h>\nint main() { printf(\"Hello, world\"); return 0;}" > hello.c && gcc hello.c && ./a.out`
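In the same spirit, here is a sketch of the "piping through SSH" pattern mentioned above. The remote host is hypothetical, so the runnable part substitutes a local pipeline for the ssh hop; the shape of the pipeline is the point:

```shell
# The classic "pipe through SSH" transfer pattern (user@host is hypothetical):
#   tar czf - mydir | ssh user@host 'tar xzf - -C /backups'
# The same pipeline works with anything in the middle; demonstrated locally:
workdir=$(mktemp -d)
cd "$workdir"
mkdir -p src dst
echo "hi" > src/file.txt
# archive on one side of the pipe, unpack on the other
tar czf - src | (cd dst && tar xzf -)
cat dst/src/file.txt
```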

Even the desktop environment in Windows lacks usefulness and takes more steps than the ones in Linux. Windows keeps on adding bloat with each new version. Why do I need to keep the ability to run 16-bit applications around if I'm never going to run 16-bit applications?

These are some of the reasons I only deploy Linux VMs on Azure. Lean, retaining only what you need, with simple backup and deployment.

In the mid 2000s I was working on scientific computing software running on RHEL workstations for a huge oil company. One day our customer inside this huge oil corp told us the entire worldwide company was standardizing on Windows for desktops and workstations. He said not to worry right away, because it would take them years to rip his team's lab full of Linux workstations out of his hands. I'm sure that was a massive, traumatic, and expensive transition that they would not repeat while the institutional memory of it remained, so I imagine they're still on Windows today.

That's rough... Unfortunately Windows authentication and access mgt software rules the world... I remember a lot of issues in a few environments with macOS over the years.

At least with WSL2 I get the Linux goodness and decent Docker support finally; hopefully it can roll out of Insiders into GA soon so everyone can get their hands on it.

> I imagine things like render farms or other specialist graphics work

Eh... outside of the broadcast motion graphics group, small personal/business farms, or educational institutions this is largely a majority Linux area.*

*I'm speaking from the perspective of public cloud farms and the feature film industry (for VFX and Anim). You can't escape Windows for select software and plugins, but if it's not a strict requirement by the software, then it's a business choice for other reasons (i.e. existing environment, staff, etc) because you will from a technical standpoint operate at a loss using Windows.

It's the same as infrastructure and server side.. from my corner I see a limited horizon over which little exists and only a few things over there might use Windows. Think of any industry: the first for whatever reason that came to mind was architecture. Lots of firms on ArchiCAD and that'll never change, there's no reason to. The value is not the tool or the platform underlying it, or anything else an Engineer(tm) will understand, the value is all in the buildings costing 1000x more than the 'tiny windows utility' that just happens to contribute to the result, kinda like QuickBooks or calc.exe.

It's a bunch of actual people designing buildings that cost so much more than a Windows license doing Real Things that have some lingua franca interface that has no reason to go Linux and probably never will, because in that business they're too busy with actual priorities to ever even discuss how the status quo could change. There are 10,000 industries like this, and probably 10,000 people to each of those who simply don't care, because to them the computer is and will remain only a tool. That's the world of Windows we can't see.

It's different for those of us in Anim/VFX. Windows poses a serious business cost, not from licensing, software, or administration. The vast majority of images these days are still rendered with CPU engines. Windows incurs a 15-30% time increase per frame compared to using Linux or macOS. When your frames take on average an hour minimum to over 24 hours, with thousands of frames to render, that adds up significantly in terms of iteration, directorial changes, and making deadlines.

Once you get beyond small teams doing the work, this starts to play in massively. Pretty much all medium+ sized studios have their artist pipeline on Linux, reserving Windows or macOS for tools like Adobe, ZBrush or Marvelous Designer that don't have a Linux build yet. And where I'm at, the operating system itself doesn't even matter to the artists. All they need to know is "this button opens terminal, this command puts me in the right show, this command launches the hub" and from there they're in our GUI pipeline. Their environment is the DCC, not the OS. For all intents and purposes it's completely abstracted away from them.

That's not to say there aren't those that take advantage, creating custom commands and utilities to share with others. But for the most part the fact that it's not Windows or macOS doesn't bother them.

Yes. I know Windows inside and out. I can easily navigate the UI. It does the job I need it to. I don’t want to spend my time inefficiently by toiling around in a Linux console due to some strange affinity towards Linux. I have a job to do and Windows is an effective tool I know and can use to execute the job.

If you're going solo or as part of a small team, it makes sense. If you have a team that can abstract that away from you so you only have to worry about your work, then whether or not you know the OS itself means nothing. We could bypass the terminal altogether if we wanted; it has nothing to do with personal wishes or desires and everything to do with maximum performance and how the industry (film/anim/vfx) evolved.

Until an update trashes your computer, that is. Or you end up spending your time inefficiently by toiling through Google results for some weird issue that popped up after a forced update.

Linux updates also cause issues.

It bears repeating that the majority of software development happens on Windows (and I say that as a Mac guy).

I'm glad that the next ".NET Core" will be ".NET 5"; hopefully once people start upgrading (those who resisted "Core" will finally come over), we should see Linux as a target option more readily. Of course, commercial libraries, and dragging them over by the teeth, will take longer.

Most of what I've written in the past 5+ years either targets or runs in linux/docker. It's been a challenge pulling the teams up to speed.

Was the challenge of pulling the teams up to speed efficient? Was it worth all the communication overhead and training for the benefits Linux has over Windows? What are those benefits anyways?

For the most part, nobody really cares or notices... our CI/CD and test environments run Linux/Docker but for the most part, the devs don't notice... it's still a lot of SQL and the UI is React+Redux+Material-UI.

I still get the occasional question when something breaks. That said, it would be nice if more understood some of the underlying stuff... I've had to champion a lot of the changes myself.

Yep, definitely - I've been working in Windows/ASP.NET for 10 years. More on the sysadmin side of things now, but it's still big enough to have a career.

> I'm interested to know what the majority of the Windows machines are used for.

I have IT experience in the "mom and pop" small-medium business world but am pretty Linux illiterate.

I can buy a copy of Windows Server 2019, install the Active Directory role, create a domain, domain-join every computer that business has, and start doing all kinds of simple-but-effective policies: folder redirection + shadow copies, password policies, checking various compliance boxes, a formalized file share permission structure, BitLocker, etc. etc. etc. Also, apps for nearly any business exist on Windows.

Do Linux equivalents for any of this even exist? If they do, is there a large pool of competent admins you can hire for cheap in every non-megacity?

The shift currently is from physical servers being owned by SMB to _the same architecture_ in the cloud. As the cloud admin market saturates, you'll see the % of Linux machines in the cloud drop and drop as SMB migrates their familiar setup to the commoditized cloud market.

The bottom line is your average non-Silicon Valley user can't smoothly change between Outlook 2010 and Outlook 2013. If you think a switch to Linux is possible in any timeframe less than decades, you are out of touch with the common computer user and their business needs.

EDIT: i use SMB as an abbreviation for small-medium business https://en.wikipedia.org/wiki/Small_and_medium-sized_enterpr...

Most of what you are talking about was implemented in DCE, and there were ports to Linux but it never became popular. Microsoft certainly lifted a fair amount of this code.


Interestingly, SMB is about to switch to Google's QUIC UDP, which is a great improvement.

I think the GP meant SMB = "small and medium-sized business" rather than SMB = Server Message Block (the Windows file-sharing protocol)

woops! i surely did, thanks! edited.

The number of servers used for basic file sharing and Domain authentication is dwarfed by the number of servers used for applications and databases. Linux can manage a Windows Domain, but it doesn't support all the GPOs etc...

Windows can also manage linux authentication, but likewise is missing parts for advanced usage.

> The number of servers that are used for basic filesharing and Domain authentication are dwarfed by the number of servers used for applications and databases

while this is probably true, it entirely misses the point.

it's only true because a very, very small number of megacompanies consume huge amounts of servers; a more accurate number for this discussion would be servers per company. Nearly every business larger than a few people has a need for a Windows server (for basic file sharing and Domain authentication, not databases or custom apps) and a Windows app for their business that can be filled nearly off-the-shelf.

the cost-benefit of trying to start a linux setup is not even close to worth it.

Why is it surprising?

SMEs tend to buy cheap laptops that come with "free" Windows Home. (I don't think that version can even be added to a domain.) Then they start expanding and just stick with "what works", until it doesn't.

But I see this dropping drastically in the next decade, as PC replacements (read: even cheaper Chromebooks and tablets) take more hold and the web takes over even spreadsheets.

Maybe not on Azure but on-prem?

Those workloads should be moved to Intune, it's a heck of a lot cheaper for SMBs ($20 for Windows+Office 365+Intune) and not hard to set up. You probably shouldn't be spinning up a new AD forest unless you have an app that needs it, cloud directories are pretty much fine.

...what? Isn't Intune an MDM platform? Regardless, there are some pretty sizable monthly costs when you already own the competing infrastructure - "$20 for Windows+Office 365+Intune" is not accurate.

Are you referring to Azure AD? That's basically what I said originally - on-prem deployments are being replaced by cloud infrastructure. The point is that it's a Windows architecture, which is what the person I was replying to didn't understand/know.

Sorry, I meant $20 per user per month.

But it should be cheaper when it comes to refresh time (for your on-prem AD), unless you have to pony up for Enterprise (for compliance and enhanced MFA, which is more like 40 dollars per month), or if you have software which relies on on-prem AD. And Intune manages Windows systems using a combination of MDM profiles and GPO (https://docs.microsoft.com/en-us/mem/intune/configuration/ad...).

> Sorry, I meant $20 per user per month. But it should be cheaper

I understood you:

The cheapest Enterprise Mobility + Security license is ~$105 per year.

A Windows user CAL is ~$10 per year. Being on-prem requires a physical server; if a $10k server lasts 6 years, that's ~$1700 a year to do more than AD, but let's assume 100% of the costs are for AD. Also, the Windows Server license (depending on purchase method) is ~$100 per year.

Up until 18 users, the server investment is more expensive: [18 * 10] + 100 + 1700 = $1980 versus [18 * 105] = $1890. At 19 users and above, EMS is more expensive: $1995 vs $1990. Note that this comparison is very slanted in favor of EMS licenses. A $10k server is more expensive than most small businesses have, and 100% of the cost isn't actually for hosting AD (domain controllers are generally lightweight and non-resource-intensive). In reality, it becomes cheaper at something closer to 8 users, which covers most small businesses that are more than just a personal laptop and a point-of-sale terminal/CC machine.
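To make that arithmetic easy to re-run with different assumptions, here is the same comparison as a small script. All figures are the parent's per-year estimates, not authoritative pricing:

```shell
# Break-even check for on-prem AD vs EMS licensing (parent's estimates).
users=18
cal=10       # Windows user CAL, per user per year
winsrv=100   # Windows Server license, per year
hw=1700      # $10k server amortized over 6 years
ems=105      # cheapest EMS license, per user per year

onprem=$((users * cal + winsrv + hw))
cloud=$((users * ems))
echo "on-prem: \$$onprem vs EMS: \$$cloud"
```

Bumping `users` to 19 flips the comparison, matching the numbers above.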

There are absolutely _millions_ of .NET web apps out there that target the 'old' .NET Framework. Migrating from .NET Framework to .NET Core isn't trivial for most projects and can be very, very challenging for others, to the point where it's probably not worth even attempting. These apps will still be running for years or decades, and Azure has by far the best Windows cloud experience.

>>The wackiest one I've come across is a CMS (Crematorium Management System).

If anyone is looking for something to open source: Meet Manager. Since the 90s it has been the go-to software for running competitive swimming meets (timing, heats organization, etc.). It's a niche market but always has a very broad and steady user base. (Similar software is also used for track and field meets.)


On a first skim I thought you were saying there was crematorium software called "Meat Manager". I'm glad I was wrong.

We'll just call the repo MM, and let people figure it out themselves.

I don't have actual numbers obviously, but I'd imagine that a lot of Xbox multiplayer game backends and probably all of Xbox Live is running on Azure Windows servers, and Xbox is a pretty big platform by number of users.



This Xbox service runs on Kubernetes: https://azure.microsoft.com/en-us/blog/building-xbox-game-st... (I work on Azure, focusing on Linux and open source technologies)

At least some of the Halo games used Orleans as part of the multiplayer backend:


Orleans can run on .NET Core now, but I don't believe .NET Core existed when those backends were first created. So they would've been running under the full .NET Framework on Windows.

The keyword that seems to be missing from both of those links is "Windows". Seems they are running on Azure alright, but that they are using Windows doesn't seem to be so sure.

I was on the Windows platform for 15 years and moved over to Linux about 6 years ago. One thing I miss from Windows Server is that it was very stable, updates never broke anything. If you got any updates that is. I think most people use Windows server because they are locked into Microsoft technology.

I switched to Linux, and most importantly open source technology - in order to get out of the Windows ecosystem. I switched from an ASP stack to Node.JS, but it feels Microsoft is still stalking me, buying NPM, Github, releasing their own JavaScript flavor, releasing their own "open source" software bundled with proprietary add-ons, meanwhile suing Linux users for patent breach.

Similarly, I ran away from the Microsoft world because everyone was just 'drinking the koolaid'. I got out, into Python and django, and the problem was similar: everyone was drinking just a different koolaid.

ASP.NET MVC C# with EF has got to be one of the most solid web frameworks I've ever used. Open-source fanatics refuse to listen about why it's great... even though it's open source now.

I think I've kinda protected myself from that by working across multiple operating systems and different stacks. I think a lot of devs would do well to try different frameworks, different ways of programming (myself included).

I think EF might get a bad rap because it's pretty closely tied to MS SQL Server - and if you don't know exactly what you're doing it will write some pretty hideous queries for you.

Should give Dapper a try, much cleaner imho... Though one of the projects I'm on is no abstraction at all.. most of the calls are via sprocs and using JSON for data input/output from the db directly.

edit: by no abstraction, I mean we have like 4 db methods that have a different generic type interface, but that's it across hundreds of different SP calls at the API layer.

I've been an exclusive Microsoft developer since 1996. I still develop on a Windows computer to this day, mostly Node, Python, and C#/.NET Core, and have had no problems deploying everything on Windows servers. Then I discovered the cloud and moved into positions where the cost of infrastructure could easily be assigned to my implementations, and I had to answer for the cost.

Once you add Windows to any implementation, a few things happen. You pay more for licensing, you pay a lot more for resources (you can do a lot with a 128MB/0.25 vCPU VM on Linux), and it is much slower for VMs to scale in and out for elastic workloads, like bringing instances up and down based on queue length.

I tried Windows Server a while back for a need, but it felt like a bad joke compared to Linux when RDP one day stopped responding for no reason (this happened on a few occasions) and I couldn't do anything but reboot, killing the GUI app that was running in it.

On Linux, I never had a problem with SSH unless I made a mistake in network configuration; it doesn't suddenly block my access like that.

Windows also takes a larger amount of resources compared to Linux, and running a 1GB memory instance meant it could run out of memory at any time, freezing the whole experience without doing much of anything on it. Not sure why the 1GB instance even exists, as it's quite useless.

I feel like I'm doing a job inside an SSH session, but on Windows I feel like I'm being dragged down, with mouse clicks achieving very little per second. On Linux, automation can be done any way you like, but on Windows, that's just not the way it's meant to be, with random GUI apps. No idea how to back up configs of those or invoke actions programmatically.

And you have to pay for the license.

Windows is a consumer OS by design.

I went through a while of bumping into lots of job opportunities using the Microsoft stack, and thought perhaps it would be good for my career to learn a bit.

But the barrier to entry seemed so high at the time - you needed MSDN subscriptions, paid IDEs, and paid servers. All out of my reach as a self taught junior developer.

I don't know how or where everyone learnt it all, except as graduate developers at BigCorp?

Most Microsoft technology is completely free for non-professionals. The PC came with a Windows CD, or the OS was pre-installed already. You could install IIS (the ASP server, limited to 5 users) from the Windows CD. I used the classic notepad.exe to code. It was kind of standard to buy the Office package, which came with a file-based database. There were plenty of free tutorials on the web, and IIS also had some examples. And there were services that would host your web app for free. If you wanted to do "real" programming you had to buy books and a compiler, which was super expensive.

Moving to Linux was a new world opening, that I had only seen glimpses of before. Using apt-get to install an app was mind blowing. Getting used to the command line took a while, but now I do everything from the terminal, except some GUI apps and the browser. Still have to look up most commands and read man pages though.

I think my experience might have been crippled a bit by having had a mac as my main PC since university, meaning less easy access to a non-locked down Windows installation.

Much the same way as Windows users are probably not big Objective-C developers...

It just felt like for a server language like .NET that your personal laptop shouldn't really matter much.

In that space, you'd mostly have needed to use Mono and MonoDevelop, depending on the time-frame.. after MS bought Xamarin, Visual Studio for Mac is actually a descendant of MonoDevelop

Windows license is still $199 for us Mac users ;)

And even if you buy a laptop, you must pay extra for the "Pro" version of Windows to actually install IIS. (Last time I tried installing IIS on Win7 Home, it just told me that my OS wasn't supported.)

I have IIS on my Windows 10 Home Surface 6. I have encountered no problems at all; I did not buy it with this in mind, but now it's doing all my dev work. Biggest problem is that Teams uses 600 MB to sit idle so I never have any memory to do anything interesting.

I've never had to pay for my own MSDN, IDE or server while I was in the Microsoft development ecosystem. In many cases, my employer would pay for those things, and it was usually generously flexible - i.e. I could build Windows machines and install Office at home off those licenses. The Community edition of Visual Studio has been sufficient for quite some time, though I'm sure there are cases where it is not.

Anyone running a copy of Windows could run a free copy of SQL Server (Express) and SQL Server Studio, IIS Express and the .NET Framework using VS Community to build MVC apps. You would use the Web Platform Installer for just about any of those things.

Between https://dotnet.microsoft.com/apps/aspnet (previously https://asp.net) and https://www.microsoft.com/web/downloads/platform.aspx it's generally very easy to dip your feet in the waters.

(All that being said, I'm now in the NodeJS ecosystem. It's... interesting!)

You don't have to pay for anything to learn the Microsoft stack.

- Visual Studio Community Edition is full featured and free

- SQL Server Express is free

- You really don't need IIS, but it is free. With C# you can run on top of Kestrel.

This is true now, but it wasn't when I started doing .NET development ~10 years ago. If it wasn't for my employer covering the costs, I wouldn't have been able to do it.

There were free editions but they were crippled versions of the paid ones with no extensions, etc. The free versions were generally unloved and had annoying quirks.

At the time you needed to add a Resharper license in there too to be decently productive which was yet another overhead.

I disagree that Resharper was required to be "decently productive". Resharper turned your Honda of a Visual Studio into a Lexus, and Java developers (who were already used to Lexuses of their own) appreciated it... but you can still drive around with a Honda. Similarly, the limits on extensions in Express editions wouldn't hinder learning .NET development, or even working on most projects.

However, VS Express itself took a while to appear. Back when .NET first appeared, the only thing that was free was the .NET Framework itself (including command line compilers), and online MSDN documentation. Eventually, we got SharpDevelop.

But, well, there's free, and then there's free. In my home country, you could buy a bundle of CDs with complete VS 2002 + MSDN distribution on the black market for around $20. Which was still expensive for students, mind you - so the same bundle was then shared around the class (including the teachers, who only had an older bundle of VS6).

I learned from the sdk command line and a book... mostly because after 9-11 within a couple months my day job and side job were gone and had lots of time on my hands for about 8 months... through the later betas and the 2002 release I hadn't touched an IDE for it... after I got a job though, kept up with it... but SharpDevelop worked for me for the most part in the earlier years.

Fair point. I’ve been developing commercially with Visual Studio since 2000. I never had to try to learn it on my own.

It wasn’t until 2014 that MS started releasing the non-crippled version.

But consider yourself lucky. Do you know how hard it was for a middle schooler to get a good 65C02 assembler in the 80s like I had to?

I never used Resharper and stay away from JetBrains products as much as possible, with exception of Android Studio.

It looks like they design for developers with gaming rigs and ten-finger chords; I even started enjoying using Eclipse again after spending a couple of months on a Java assignment with IntelliJ.

Then my .NET team wonders why I never complain about VS speed and crashes. Easy: I am the only one not using Resharper.

Sure, now, but when I was starting out a decade ago, everything seemed to be quite big money.

Even reading the docs seemed to need a login and paid subscription.

I used "C# The Complete Reference" with the command line compilers to learn it in the later beta and through the 2002 release. No IDE, just a windows desktop.

I think VS Community started in 2015... there were also SharpDevelop and MonoDevelop options in between, from pretty early on. I used SharpDevelop on my own projects for a number of years. Around 2011 I started using Node more and have been back and forth on projects since.

Visual Studio Express was available before Community became available.

Had forgotten about express tbh... I thought there was something, but have either had access to a company version, or used SharpDevelop (at least in earlier years).

Could you please provide a link to some cases where Microsoft is suing Linux users for violating their patents? I thought they stopped that practice years ago.

You mean, except for every major Android manufacturer paying exFAT tax?

Microsoft certainly doesn't sue as much as they used to, but that's because they are no longer developing as much groundbreaking tech. And when they do develop something, everyone avoids touching it with a ten-foot pole (see also: the "success" of UEFI on ARM devices).

Microsoft made an official statement almost a year ago that it was OK to use exFAT, and they published the specs as well.


I won't argue how groundbreaking their technology is of late, but in all fairness they have been much more open with their code.

If you use RHEL the guarantees are also very strong, though it requires some extra steps to package and distribute your own software. Ubuntu, on the other hand, I've found breaks stuff often enough to be a nuisance.

Try openSUSE; it has a more recent kernel than RHEL, it's still rock solid, and it has a great community.

Aside from breaking stuff all the time, Ubuntu seems to have wide swings in the quality of documentation.

Snapcraft fixed that

I worked on a derivatives system for a bank where we had a few thousand servers globally running distributed calculations. We chose C# over Java as it was more powerful at the time. Windows Server was rock solid throughout; it also had the benefit of being integrated with Windows security. Now that Linux can run .NET applications OK, we'd probably choose that, because it's cheaper, not because it's better. For most server applications you just need a bare OS to run a process; it doesn't need much.

Is C# the reason why we have to ingest CSVs attached to emails and use SMB/FTP (not SFTP, btw) to integrate with banks?

Btw - "more powerful" is literally the ultimate subjective statement.


The real skill I try to teach my pupils, when joining top down ICT skills to bottom up CS / hacker skills, is to be able to re use existing tools in new and creative ways.

This is especially applicable if you learn a bunch of generic “everything is a text file” hacker skills, maybe with a little code on top.

Could you manage the crematorium using YAML files edited in GitHub? Maybe. Any collaborative tool with tagging can usually be beaten hard enough with a hammer to turn it into something useful in a specific domain.

We make pens and paper and do all sorts of specific business with these generic tools. Encouraging the generic use of computers might seem like dragging us back to the 1980s, but it’s how these tools were meant to be used!

I definitely agree with you in regards to the "everything is a text file" thing, but the average computer user doesn't know how to properly utilise text files and manage them effectively.

This is similar to the fact that basic knowledge of file systems and file formats is lacking in many modern teenagers, because this is all abstracted away by smartphone apps and cloud storage.

As a tech guy, I'd love to run the crematorium using YAML files in GitHub, but the reality is that the target audience for the system is Funeral Directors and Cremation Assistants, so it needs to be a non-technical user interface.

Identity management with Active Directory is probably the biggest plus of a Windows environment; there's a saying that goes something like "in the end, every identity system ends up trying to be AD". Sure, you can run OpenLDAP on Linux and effectively get a domain controller, but the ready-to-go nature of AD on Windows Server is very compelling, especially for larger businesses.

+1. Did a lot with LDAP auth in an all-Linux environment but AD is much easier, IMO. I'm not even much of a Windows guy, but the pools of documentation/YouTube vids/people to ask questions of are way deeper.

Don't use OpenLDAP, use FreeIPA (aka Red Hat Identity Management). It covers most of AD's functions except GPOs: Kerberos, HBAC, PKI, 2FA, HA, joining a client to a domain and so on. Nice GUI, decent CLI, and the installer takes care of everything on the client.
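For anyone curious, the basic setup is only a couple of commands (command names from the FreeIPA docs; options vary by version, so treat this as a sketch):

```
# On the server (interactive; prompts for realm, DNS, admin password):
ipa-server-install

# On each client, to join it to the domain and create home dirs on first login:
ipa-client-install --mkhomedir
```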

As a sysadmin, I support way more Windows servers than I'd like, mostly because they run some .NET CMS (Content Management System in this case): Umbraco, Kentico, SiteCore etc.

Most mind-blowing use case of a Windows server I've encountered was a server running nothing but nginx.

How hard was it to maintain that nginx only Windows server compared to doing the same on Linux?

Not much. Standard nginx commands work in cmd (like "nginx -t"), and most of the work I did was kicking out one of the production servers when new deployment was in order and letting it back in to serve requests after it finished.

On the bad side, I just couldn't be bothered to create a script that would do that automatically, which resulted in one or two downtimes when I accidentally just tested the changes instead of reloading nginx for them to take effect ("nginx -s reload"). Repeated boring work led to more carelessness, which would absolutely not happen if it was on any Linux system. I'd create Ansible scripts instead and wouldn't have to log into the servers manually for that one frequent use case.
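For what it's worth, the script doesn't have to be much. A minimal sketch of the test-then-reload step (shell syntax here, though the same two nginx commands work from cmd or PowerShell on Windows):

```shell
# Reload nginx only if the new config actually validates.
# "nginx -t" tests the config; "nginx -s reload" applies it.
reload_nginx() {
  if nginx -t; then
    nginx -s reload && echo "reloaded"
  else
    echo "config invalid; not reloading" >&2
    return 1
  fi
}
```

Wrapping the two commands into one step removes exactly the failure mode described above: testing the config and then forgetting to reload it.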

One thing I wonder though, is that whether Windows has monitoring tools like Monit, so that incidents are automatically detected and reported.

Our old office was across the street from a crematorium and we had a public wi-fi named "xxxxxx Family Funeral Home" with a landing page with a link to a (fake) crematorium web control panel, complete with a fake live view of the cremation (it was actually just a looped gif). You could adjust the temperature, the speed of the exhaust fans, etc.

Anyway, we have Windows servers running in Azure because we do a lot of GPGPU work, and the NVIDIA/CUDA drivers work best in Windows.

A few use cases come to mind:

- Microsoft often bundles discounts for their desktop and server licenses (used for AD and such) with Azure credits. Managers are under pressure to use the free service to deliver value, so teams are told to spin up things in Azure as they see fit. Reject the credits and you reject the discount for the stuff you do use.

- Office 365, Outlook, Xbox Live, Xbox Music and Video, the Microsoft website, MSN, Search, and Visual Studio Online all run in Azure either in whole or in part. If you think Microsoft doesn't dog-food their cloud to pump its numbers, you probably weren't around when they bought out shared hosting providers' "parked website" clusters and converted them to IIS instances to inflate their Netcraft numbers against Apache.

Azure has some decent services, and a lot of people do run Windows beyond MS's own services. That said, I'm surprised the Linux users aren't closer to 75%, and I bet they will be as Kubernetes uptake grows for newer projects.

Office 365, Outlook and Xbox Live definitely aren't run on Azure.

SQL? There are literally millions of production SQL databases out there and custom apps built on top of them - and while MS may have created a Linux port, there's no planet on which the average enterprise is moving to SQL on Linux without VERY good reason (and there currently isn't one that I can see outweighing the risk).

A company I consult for specializes in SQL Server applications. There is a significant push to Linux from both the largest customers, paying over $150k/month for server licensing and hosting, and from the smallest companies, who are severely budget-conscious. The big customers have teams fluent in both Linux and Windows, and the smallest outsource their DBAs and sysadmins, so in both cases the cost is mostly a one-time development cost and maintenance goes on as usual.

The medium-sized enterprises ($100MM to $500MM annual revenue) are the ones who aren't looking at Linux. These are the ones with in-house tech support, who only know Windows. Maybe they are the average customers that won't switch over?

Note that I'm only talking about cloud-hosted SQL Server installations. When you're buying the whole infrastructure stack, it's a different story. Also, this isn't the area I specialize in, so this is just a narrow observation, not a broad pattern.

I'm surprised to hear that to be quite honest. The feature gap is still pretty significant for any serious production deployment. MS is working hard to close the gap, but given basic enterprise features like database mirroring still aren't there - I'm not sure why any large enterprise would shift at this point.


My understanding is that feature (database mirroring) was deprecated many years ago, for availability groups. There were some limitations in the Analytics area, but that customer was using another solution for most of that anyway. There were also some features of SQL Agent that were discarded (scheduling events or something) but in each case there is some officially supported route to accomplish the same task.

You can find most of the material we reviewed in the official MS documentation:


For example, here's the note about deprecating database mirroring:


> Database mirroring, which was deprecated in SQL Server 2012, is not available on the Linux version of SQL Server nor will it be added. Customers still using database mirroring should start planning to migrate to availability groups, which is the replacement for database mirroring.

Availability groups are great on paper - in practice they add latency which many customers can't and won't tolerate. MS has been aware for years and still hasn't found a fix. Not sure why I'm getting downvoted - I assume it's people without SQL experience looking at a data sheet and thinking it tells the whole story...

I believe you! The documentation says one thing, but in practice there are often hidden gotchas that can be extremely frustrating. Especially with respect to Azure, but even with the regular product. Microsoft has amazing sales teams, and some amazing engineers, but the overall quality of their engineering as delivered perhaps could use some work -- too much frustration, not enough delight.

> given basic enterprise features like database mirroring still aren't there

Database mirroring has been deprecated for several years. The replacement is Always On Availability Groups, which is vaguely supported on Linux.

MS WANTS customers to move to AGs but it introduces latency, which is why mirroring still exists in SQL 2019 and will indefinitely.

SQL is a language. Perhaps you meant a particular implementation like Microsoft SQL Server?

Thank you for being pedantic. Clearly in an article focused on Microsoft, where I specifically call out Microsoft porting SQL to Linux, I meant the language and not the product. I'll be sure to type out "Microsoft SQL Server 2019™" because who would know what I'm talking about otherwise? Oh right, you and everyone else on HN.

I’d recommend checking out .NET Core and F# or C# 8. Honestly .NET is becoming my go-to for anything beyond what a bash script can offer because it’s typed, functional or OO, has great frameworks for basic MVC sites or desktop apps, and the IDE on Windows is amazing while being able to compile cross-platform.

It’s not just for enterprises!

The only thing that really shits me with .NET Core is that their idea of a "single executable" deployment is really a self-extracting zip file with an entire copy of the runtime.
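For reference, that single-file bundle comes from a publish flag along these lines (as of .NET Core 3.x; the runtime identifier is just an example):

```
dotnet publish -c Release -r linux-x64 /p:PublishSingleFile=true
```

The output is one file, but as noted above, it's effectively an archive that unpacks the runtime to a temp directory on first run.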

These days many of my deploy targets are docker images anyway, so not as much an issue... though looking at Rust with great interest.

For anything less, I reach for Node first, only because I can get something done faster.

Then have a look at Deno. Best of both worlds.

Other than bespoke enterprise apps, my guess would be the majority of the remaining Windows machines run SQL server.

Doesn't SQL server run under Wine or https://reactos.org/wiki/ReactOS ?

If you have Wine in your production stack you gotta be doing something wrong. I did have some reporting tools run in DOSBox though, as we couldn't get the printer working without it. It printed the reports and we OCR'd them back in, because every time someone tried to rewrite that reporting tool they never got the same numbers, and nobody wanted to be the one who answered the question of "well, who's right?"

Microsoft officially support SQL Server on Linux and Docker now:


No need for emulation.

The docker images have been great... can spin up a fresh install and deploy schema and config in a couple minutes for local development.
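For anyone who wants to try it, the one-liner is roughly this (image tag and password are placeholders; check Microsoft's docs for current tags):

```
docker run -d --name sql1 -p 1433:1433 \
  -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=<YourStrong@Passw0rd>" \
  mcr.microsoft.com/mssql/server:2019-latest
```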

Microsoft supports SQL server on Linux. No need for Wine.

Are you seriously suggesting either of these are appropriately dependable options for running a production database? Let alone supported options!?

I would bet that the license doesn’t encourage that, and SQL Server is the sort of software where the license is king.

The vast majority of Windows servers I’ve deployed in Azure are either domain controllers or legacy workloads that a customer wanted to just shift to Azure. Azure Domain Services and Azure AD still have some deployment complexities when compared to a scaled out domain.

I burned through a lot of Azure credit playing Football Manager on beefy instances, but presumably that's not representative.

>The wackiest one I've come across is a CMS (Crematorium Management System).


There is so much software written in Microsoft Access in my country that is basically irreplaceable. They have created interfaces to all the common industry software, vendors and devices and have all the common business processes built in.

Big chains probably have these things integrated in their ERP and CRM or use a custom software solution. But for smaller chains in their respective industries, I don't see how you move these programs to the cloud right now without a complete rewrite.

Microsoft uses Windows behind the scenes on a lot of their managed services, but specifically for VMs on Azure... I'm as clueless as you (and I say that as someone who works in the enterprise space, mostly for Microsoft shops).

I was surprised to see that the web server for www.microsoft.com reports via the 'Server:' header that it runs on IIS 10.

To me that just seems weird... even though it's their product.

It's out there. One website that always impresses me with its extreme speed is McMaster-Carr. Their Server header also claims they're using IIS.

re: Crematorium Management System, it might be possible to do that rather easily with a CRM? I develop mostly with CiviCRM (https://civicrm.org). It is contact-centric (rather than sales-centric), with good workflows for communications. You can add custom fields or custom activity types to track specific types of data or activities.

However, my experience in those kinds of niches is that the client will likely be very non-technical, and will prefer to use what everyone else is using, even if it's vastly overpriced and locks their processes in place.

A lot of applications use commercial components that tend to be tethered to Windows security models for registration/activation reasons.

Many internal apps likely don't need to target Windows, but may have Windows-isms baked in for a number of historical reasons. There are also a lot of .NET apps tethered to Windows more out of heritage; most are easy enough to port, others much harder. It varies, and there are a lot of apps unlikely to receive the effort to even explore the feasibility of migration to either Mono or .NET Core.

My understanding is that if you're using Microsoft's Hypervisor product, the management console allows you to seamlessly deploy either to a VM on local hosts or in the Azure cloud (presumably through a VPN). I imagine that makes it easy to spin up those extra windows machines and licenses you need to support all the extra Active Directory add-ons you can get (inventory management, update management, SQL server to handle their data, etc) on Azure rather than procuring more hardware.

> I'm interested to know what the majority of the Windows machines are used for.

I'll bet that there's a huge number of virtual desktop machines. Instead of having to push a really fat windows disk image with all the apps installed, to all workstations, those workstations run remote desktop, and connect to a windows VM running in Azure.

Virtual desktops or straight up on-prem to cloud migration might be my guess. If big companies can avoid it, why pay the cost of a rewrite?

> Crematorium Management System

Genuine question: why can't you use Microsoft Access or G Suite for this? Seems like the canonical data entry use case these products are designed for.

A Crematorium Management System built in Microsoft Access is still a Crematorium Management System.

I use a lot of Windows machines in the cloud as build machines for video games.

Many of the services Azure offers run on Windows, these are probably included in the number of total machines.

For example, the managed Postgres service runs on Windows which I didn't expect.

> I'm interested to know what the majority of the Windows machines are used for.

Willing to bet it's SQL Server and Azure AD

I've built a few pentest labs with AD and Windows VMs but that's it.

.NET, also IIS, and in my opinion the Windows Server UI is more usable than a Linux server console. I used to be a Linux server guy, but then I started using Windows Server and IIS and never went back, and I don't think I'll ever want to.

Running MS's own services like Teams and Office 365?

Legacy SharePoint?

I’ve never heard of a crematorium management system, but once when I was doing the rounds at my chemical factory I came across the technician who handled the burners for our boilers.

It was October so he was doing a checkup before firing them up. I went in for some PR chit-chat. Unfortunately he took it as a genuine occasion for conversation (genuinely lovely guy, genuinely liked him, genuinely liked the conversations, but... well...)

“Yeah yeah, you guys are great, calling me in to do preventative maintenance before firing ‘em up.” “Yeah, it’s protocol, I guess.” “Yeah yeah, but not everybody follow protocol y’know...” “Really?” “Yeah yeah, no... just came over from the crematorium.. they had a fat client inside and the burner conked out and we had to pull ‘em out half-done and let the chamber cool off so I could service the damned thing.” “...” “Yeah yeah, real bitch of a mornin’. Family there and everything. Waiting for the ashes, y’know?” “That must’ve been awkward.”“Yeah yeah, no no, they understood... you only do these things once, y’know... yeah yeah... bitch of a mornin’.”“Well I guess I’ll leave you to it then.” “Yeah yeah see you around."

This is hardly surprising.

Not to crap on MS, but I do think Linux has traditionally just been a better operating system for servers, probably due to the fact that that's where most of the funding for Linux ends up going, with the desktop versions of Linux being sort of secondary.

It doesn't help that a lot of servery software is designed with POSIX-y stuff in mind. Node.js took a while to get Windows support, and ZeroMQ's Windows support doesn't include IPC sockets. Hell, in the little bit of testing that I've done, .NET Core is faster on Linux than on Windows.

This isn't to say Windows is "bad". I'm personally not a fan but plenty of smart people I know like Windows better than Linux. It's just to say that, for the domain that Azure fills, Linux is often a better fit, and if MS wants to compete with AWS, it would be borderline-idiotic not to support Linux.

Windows came from the single-user domain. Unix came from server space. It is literally two different evolutionary paths. From day-1, Unix was built for shared file systems, multi-user accounts, and distributed computing.

Networking on Microsoft was an afterthought until the mid-90's. Sure, there were Token Ring and Banyan Vines drivers in the late 80's/early 90's, but that was to share a drive letter. The entire idea of multiple users didn't appear until Windows for Workgroups 3.1, decades after multiuser OSes. WinNT started to take protected mode seriously, but they re-invented networking from an MS perspective (e.g., POP/SMTP? nope: MS Exchange).

It's not a surprise that Windows is such a mess under the hood compared to Linux/BSD/Unix/SystemV when you look at the history of Windows. I dunno, maybe that's not fair, but switching between *nux and Windows from a networking / distributed computing perspective is jarring. And that's just for simple things (interface config, tracing, configuration of multiple adapters and bridges...)

I think both are a mess under the hood. I cannot make sense of where things will be configured in Linux. maybe /etc/, maybe somewhere else, with loads of setting file configurations. There's not a huge amount of consistency.

What would be interesting is a complete re-imagining of a server OS. Consistent file locations for everything, which are automatically versioned a la git allowing you to roll back any specific change, built into the OS level. All processes totally sandboxed from each other with specific permission elevations as required (like iOS).

Nor is there any consistency with the Windows registry. Except that, by being such a pain in the posterior, no one edits it manually anymore and all changes are done through GUI apps. Editing config files is easier and faster (not to mention automatable), so people don't bother creating "control panels" on Linux for the most part.

> What would be interesting is a complete re-imagining of a server OS

Not sure how 'complete', but aren't you after something like NixOS?

> Nor is there any consistency with the windows registry.

The Windows registry is a bad implementation of a good concept.

Imagine if the registry had a schema with embedded documentation, foreign key relationships, a packages database with each key owned by a package, etc...

I've wrestled with this for DECADES!

I've built software with a configuration database (Windows registry) and with flat configuration file-folders (*nix). I feel like I'm running in a circle because the trade-offs are so equally balanced that there's no best-case. From considering backups and database consistency, to tripwire malware detection of changed settings, to automation of access, to consistency in naming, retrieval, and type conventions, to self-documentation, to backwards compatibility... At this point I'm not even sure I would understand the correct or "best" methodology if it was handed to me on a silver platter.

> Imagine if the registry had a schema with embedded documentation, foreign key relationships

So a database then?

Well, if you think about it, the Windows registry already is a hierarchical database – just a quite limited one. You could have a hierarchical database with richer features (LDAP is a good example of that, although in some ways LDAP is rather clunky).

If by database you mean "relational database" – I'm not sure the relational model is the best fit for a config store. I think a more hierarchical database model is more natural for that purpose. (Of course, you could use a relational database as an underlying implementation detail – LDAP is fundamentally a hierarchical database, but there exist LDAP servers which store the data in an RDBMS, for example Oracle Internet Directory.)

Well gconf is hardly any better.

Yeah, that's a good point.

There is a big difference between Linux and BSD here.

Linux has really exploded with configuration. Are we using 'ip' or 'ifconfig'? Are we using /etc/<app> or /etc/<app>.d/? What about /lib/systemd? Do I restart that with systemctl or service?

BSD has kept it a little more "pure" in that it is more of a "only one way to do it" philosophy.
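To make the contrast concrete, here are the same two everyday tasks in both generations of Linux tooling (interface and service names are just examples):

```
ifconfig eth0                # net-tools (legacy)
ip addr show dev eth0        # iproute2 replacement

service nginx restart        # SysV-init wrapper
systemctl restart nginx      # systemd native
```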

Having grown up under Solaris, which was BSD-flavored at one time, then added SysV commands, and, when Linux was on the ascent, added the GNU suite ... all I have to say is, bring it! All the inconsistent compile options for sed, find, things that look the same but don't act the same. Bah! :) It mystified me that EPEL packages install by default in /opt, so you have to stick /opt/rh/blah/bin in PATH, and the attendant libraries. Considering the OS war in the 90s between Microsoft and Sun, it's a little funny, like cry-a-bit funny. Mr. McNealy would be surprised at how his spats (probably for show) with Messrs. Gates and Ballmer turned out.

One of Linux's many great failings: it constantly seeks to reinvent the wheel. Unfortunately modern Microsoft is also suffering from this terrible affliction.

On the one hand I like seeing behavioral trends feed back into the software. On the other hand, there need to be tighter acceptance standards, and more effort to minimize the disruption to established rules. That's probably much harder to do on Linux than BSD (Windows & macOS have no excuse, they control everything!). Linus T. really exemplifies that with the kernel; it's the rest of the OS that is suffering.

I'd still take a heavily configurable and customizable "mess", as you put it, than the one that doesn't allow me to control how it works.

There's a good reason why Linux absolutely dominates the server space.

When I worked for a large semiconductor company from the late 1980's till the early 2000's, I saw IT move from a ghastly mix of AIX, HPUX and SunOS terminals & distributed computing to 100% Linux ... in under 2 years!!

It was like someone flipped a switch and suddenly there was only one type of environment for all computing (we used something else besides VNC to manage remote X sessions, I forget what it was, it was some weird launcher for Windows).

IT claimed hundreds of millions in savings due to not having to pay for AIX, HPUX and SunOS licensing. I think the cost is what tipped the scales.

> "There's a good reason why Linux absolutely dominates the server space."

Yeah, it was cheaper (in both senses of the word) than all the other contenders and more or less Unixy enough that software could easily be ported over.

Well, not to mention that you're allowed to add features to it without waiting for Microsoft to do it. Linux might not be the best codebase in the world, but it's probably easier to add a feature to Linux than it is to get MS to add a new feature.

Correct me if I'm wrong, but wasn't Linux quicker to support multithreading than Windows, largely because big megacorps needed that for their servers?

> "Correct me if I'm wrong, but wasn't Linux quicker to support multithreading than Windows, largely because big megacorps needed that for their servers? "

Surely you jest. The other Unixes, including Microsoft Xenix, supported multithreading before either Windows NT or Linux even existed. Big megacorps in that era used Sun Microsystems ("The com in dotcom™") server and other proprietary Unix servers because both Windows NT and Linux on the x86 hardware of the time weren't considered good enough to handle megacorp server loads and that didn't change until around '00.

If you go strictly by release date, Linux 0.01 came out in '91 and Windows NT 3.1 came out in '93 so it's technically correct that Linux had multithreading first but neither Linux 0.95 (what would have been available in '93) nor NT 3.1 was exactly something anyone would consider using in production, let alone at a megacorp.

Linux didn't have real threads early on though, did it? LinuxThreads were only introduced in 1996, and they were broken in many ways.

You're right. According to the old Linux Threads FAQ (https://web.fe.up.pt/~jmcruz/etc/threads/linuxthreads-faq2.h...), that seems to be about the right timeframe.

Oh well, shows what my memory's worth.

Yeah, that completely ignores how shitty all the UNIX userland was.

Some of the kernels were even good, but what good does that do if tar would always do the most insane thing instead of backing up your disk?

It moves around as well. There's a huge distinction between Linux _as a kernel_ and "Linux" _as an operating system_. The kernel has the same layout across all the distributions / operating systems, but everything else is up to the particular stack that distro chose. Gentoo is different from Redhat which is different from Debian, etc.

You can do what you describe without changing the kernel, but you would need to modify the userspace environment. One of my toy projects has been trying to run all "user" applications in filesystem sandboxes because I'm sick of how many random "dot-files/directories" get created in my home folder.

Yes, that is an important distinction I glossed over. Thanks for adding that.

Maybe you can check out openbsd.

It's as opinionated as you are.

(FWIW there has always been a level of consistency in Linux w.r.t. file paths, if you stick to your distribution maintainers packages)

In another post I point out that BSD has done a great job keeping things consistent, whereas Linux is a bit of a free-for-all trying new stuff.

Not if you stick with SUSE (openSUSE) or Red Hat (CentOS).

Gee, I need to change a setting on $APP. Wait, is that a config file in /etc/$app/? Or did they put that in /var/lib/$app/? Huh, can't find the setting I'm looking for there, guess I better check /etc/sysconfig/$app...

Usually, developers are sane and put things in /etc. However, there are a number of very common applications with configuration strewn all over the place. Bonus points for apps which even have additional configuration in their systemd service files!

RPM will tell you where installed files are:

    rpm -ql <package>

This is where Kubernetes points, FWIW :)

My favorite way to picture Kubernetes is to think of it as a distributed operating system. It generally happens to be run across Linux hosts and is used to run Linux containers, but the Linux parts are incidental, like the specific model of CPU in a desktop system. The Linux parts aren't core to Kubernetes' abstractions, and it's possible that Kubernetes could add support for hosts and container runtimes that aren't Linux but instead platforms more streamlined for Kubernetes usage without historical baggage.

> "Windows came from the single-user domain. Unix came from server space."

Windows 9x came from the single-user domain but is long dead except for compatibility stuff left over in modern Windows. Windows NT and its successors came from server space from its VMS lineage.

While that is true, the win32 API and other stuff that floats on top of NT for the GUI based stuff (which is most stuff) is still very much restricted by that legacy.

NT was supposed to be very tunable and modular and have multiple personalities and abstractions but it turns out it mostly is just win32-on-NT and everything else ended up being on top of win32.

It has gotten better, especially post-OneCore, but it still is very much restricted compared to Linux, BSD and Unix.

Windows NT (and VMS) were not so much multi-user servers as they were closer to mainframes (which died/are dying for their own reasons). VMS, like Plan 9, had concepts to allow many (theoretically) fancy server things, but it was too much a created world instead of a grown world (like post-System-V). Also, while the VMS dude who was on the NT team did influence it heavily, the legacy they had to support made it almost impossible to use any of that fancy 'new' stuff back then. While it was orders of magnitude better than DOS, it feels like Windows needs to make yet another evolution, like it did going from DOS to NT, to get back to current times instead of patching on top of patches every time.

Can you give any examples of Win32 API that manifest design flaws or restrictions due to "single-user domain" legacy?

Not expecting that to happen as MS is investing less in Windows.

Given that we just got the new SDK, DirectX 12 Ultimate is around the corner, and the release notes are a pleasure to read, it doesn't look that way to me.

I have to wonder what percentage of the Windows 10 codebase is for backwards compatibility.

Yep, many years ago I casually stated here that Windows was clumsy on the network, and it really offended some guy. Limited drive letters and backslashes didn’t help, but the history you mentioned is the foundational problem. The misdesign around console/remote admin is another. For more background read the Art of Unix programming and the Windows Terminal blog about all the layers they had to work around to shoehorn it in.

I still run into Windows Admins who don't understand UNC paths vs mapped drives.

I agree that networking stuff feels pretty frustrating on Windows compared to POSIXey stuff; for the most part using a good networking library like (as mentioned above) ZeroMQ or even some of the built-in JVM socketing stuff will help abstract it, but Unix sockets are simply better out of the box.

Even a simpler example though; the fact that there wasn't really a built-in analog for the "pipe" until PowerShell sort of boggles the mind. Yes, I'm aware that Windows isn't command-line focused, but it baffles me that software engineers didn't really see a need in Windows to compose programs together. (Someone, please correct me if I'm wrong on this point, I haven't touched Windows in any serious capacity in almost a decade).
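For contrast, here's the kind of composition the Unix side takes for granted — four single-purpose tools glued into an ad-hoc report:

```shell
# Count how many accounts use each login shell, most common first.
cut -d: -f7 /etc/passwd | sort | uniq -c | sort -rn
```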

Windows IO model - including network - is arguably much more well designed than anything POSIX, thanks not least to Dave Cutler and his team.

From the get-go Windows NT (Win32) had a common model for overlapped IO (async IO in today's parlance). This async capability permeated everything from disk IO to network IO, and works well today with USB, Thunderbolt etc. Mind you, this model has not changed while Unix/Linux/POSIX still have not found a common model that works for all. See for instance https://stackoverflow.com/questions/13407542/is-there-really...

Under the hood Windows was also designed with a completion-oriented model (IO completion ports), which scales better in most instances than the Linux readiness-oriented approach (I believe that macOS uses a completion approach as well, so the readiness approach is not universal across *nix). Basically, Windows saves at least one context switch for every basic IO operation. Windows supports the Berkeley sockets API with some optional extensions to make socket operations overlapped/async as well, essentially supporting both the straight socket model and the completion-oriented extensions.
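The readiness model being described can be sketched with Python's selectors module (epoll/kqueue under the hood): the kernel only tells you a socket is *ready*, and you still issue the read call yourself - the extra step that a completion model like IOCP folds into the kernel.

```python
import selectors
import socket

# Readiness model: the kernel reports a socket is READY, then we do the I/O
# ourselves. A completion model (Windows IOCP) instead reports the I/O is DONE.
sel = selectors.DefaultSelector()  # epoll on Linux, kqueue on macOS

a, b = socket.socketpair()
a.setblocking(False)
sel.register(a, selectors.EVENT_READ)

b.sendall(b"hello")  # make `a` readable

for key, mask in sel.select(timeout=1.0):
    # Readiness only: we still have to issue the recv() call ourselves,
    # which is the extra syscall/context switch the parent mentions.
    data = key.fileobj.recv(1024)
    print(data)  # b'hello'

sel.close()
a.close()
b.close()
```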

This is a very enlightening post, thank you. I don't often develop cross platform to this depth, and I think it is a wise reminder that Microsoft actually knows a thing or two about operating systems, contrary to popular opinion. It's almost like the application teams sabotage the OS teams...

You appear to understand this stuff better than me, so you've gotten me a bit curious; how did something like Java NIO work before kernel 5.1?

You could always pipe commands, there just weren't any worthwhile to pipe to. I consulted this site often twenty years ago:


And half the time you’d have to use the “for” cli monstrosity. I looked up the instructions every single time I used it for decades, that’s how bad it was/is.

Windows has many ways to compose programs together, but usually the applications need to be designed to be composed together. The idea wasn't just to shuffle text around, but entire binary objects and shared state.


This is definitely getting deep into the Windows world, and if you didn't do a lot of highly Windows-focused software development it's probably easy to miss it. COM is a big giant complicated beast and definitely not a complete darling to work with, but it had (has?) a lot of really cool concepts. Very different from the everything-is-a-stream-of-text basis in Linux. And even now with things like PowerShell, you're not sending text to the things you're piping to, you're passing .NET objects around.
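A tiny illustration of that (just a sketch; exact property names like WorkingSet64 depend on the PowerShell/.NET version):

```powershell
# Each pipeline stage receives System.Diagnostics.Process objects, not lines
# of text, so you filter and sort on real properties instead of parsing
# columns the way you would with awk/cut on Unix.
Get-Process |
    Where-Object { $_.WorkingSet64 -gt 100MB } |
    Sort-Object CPU -Descending |
    Select-Object -First 5 Name, Id, CPU
```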

socketing stuff will help abstract it, but Unix sockets are simply better out of the box

Can you elaborate? Do you mean performance? Because on the C level both use (apart from some minor differences, which aren't enough to call one 'simply better' than the other) the same Berkeley socket API.

Again, haven't touched Windows in years so it's entirely possible that I'm wrong or out of date, but the last time I did any kind of networking on Windows was with the WinSock API, which if I recall correctly, didn't have any support for any kind of IPC out of the box, and I had to do everything with TCP.

I might have been too harsh on Windows there, I've never been paid to do any direct socket-level programming within Windows.

Do you mean Unix domain sockets by "IPC"? TCP sockets are also a perfectly valid IPC mechanism, as are many others. Win32 generally promotes pipes (named, if needed) for this purpose. For sockets, if you insist on them, it offers some optimizations for local/local TCP connections. Win10 eventually added Unix sockets, mostly for WSL integration (https://devblogs.microsoft.com/commandline/af_unix-comes-to-...). In the end, it's all just byte streams, so it doesn't matter much once you get past establishing connection.

The biggest annoyance with Windows sockets is that they aren't really unified with files - as in, a socket file descriptor cannot be passed to file APIs that work on FDs, like read() and write(); you have to use send/recv(). This makes it harder to write generic code in C, since you need an abstraction layer for code that deals with I/O streams. But most frameworks already offer such an abstraction layer - e.g. a .NET API would just use Stream (and if it's a socket, it would be a NetworkStream instance).
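That abstraction layer is roughly what portable C ends up writing by hand. A minimal Python sketch of the idea (the ByteStream name is made up for illustration - Python happens to need the same split, since its socket objects also expose recv/send rather than read/write):

```python
import socket

class ByteStream:
    """Tiny abstraction over 'things you can read/write bytes from'.

    On POSIX, read()/write() work on both files and sockets, so this layer
    is free; on Windows, sockets need send()/recv(), so portable code has
    to route through a wrapper like this one (cf. .NET's Stream).
    """
    def __init__(self, obj):
        self._obj = obj
        self._is_socket = isinstance(obj, socket.socket)

    def read(self, n):
        return self._obj.recv(n) if self._is_socket else self._obj.read(n)

    def write(self, data):
        if self._is_socket:
            self._obj.sendall(data)
        else:
            self._obj.write(data)

# Works uniformly over a socket pair...
a, b = socket.socketpair()
ByteStream(b).write(b"ping")
print(ByteStream(a).read(4))  # b'ping'
```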

It hasn't changed! Except for PowerShell, but that's... oy.

I have PowerShell, Git Shell and Cygwin running on my windows Dev box and all three make me cry on a weekly basis because I have to switch between them to do different things. I suppose one day I will sit down to fix my Windows dev environment, but it makes me feel so bloody frustrated that my muscle-memory shell is so ... perverted by windows implementations.

MS is bringing Bash to windows natively. I don't know how that is going to work since piping just isn't there, but I've got my fingers crossed that I'll only have to deal with ONE shell for windows!!!

"Bash for Windows" is just WSL2 (forget WSL1 existed; as a proof-of-concept market validator it was good, but it has real issues because bridging NT -> Linux syscalls is hard).

It's not a native Bash experience but it's pretty close. I say "not native" because it's actually running a VM under the hood and bridging the file systems via 9P network protocol.

I know people who are using it and love it. I have not switched yet, but I am definitely going to investigate it at some point.

I came from the Windows world to Linux. Especially when it comes to Enterprise, my experience is that Windows admins are more likely to know network basics than Linux admins.

However, Linux admins who know networking usually have a deeper understanding.

Lots of FUD and misunderstanding going on there.

Windows NT, which is what everyone uses, is a descendant of VMS design and culture, built for shared file systems, multi-user accounts, and distributed computing just like UNIX.

GP is unfair: as you say, NT was a from-the-ground-up new OS based on VMS design, without the single-user baggage of old DOS/Windows.

But still for a long time Windows NT made a lot of concessions to the single-GUI-user model that UNIXes didn't, like putting font rendering in the kernel.

Except for UNIXes like NeXTSTEP and Sun NeWS, as no other UNIX ever had any clue about GUI development, as Jobs used to say.

Unix was originally a single tasking OS, that's what the U is for

I've used Windows on desktop since 3.1, and nowadays Windows 10 is my daily driver - IMO it's the best desktop OS, and always has been.

But for servers, I much prefer Linux. It uses less resources, and it's so much easier to get it into a known and consistent state than Windows.

Also, something about having a GUI for servers has always felt... wrong somehow!

Right tool for the job. Also Windows finally has a terminal and ssh, so it is tolerable.

Almost the same here. I don't really have problems with a GUI on a server; besides, you can configure Windows Server to run without one. The pricing, however, is not very attractive for smaller fish.

WSL2 is the only thing that has made Windows bearable for me. That and VS Code's remote extensions. Almost everything I use is cross platform for years now.

I've always got the feeling that MS specific developers are highly visual studio oriented. They have their connection string built into the IDE. In fact, if you asked them to make something that builds from the command line, and runs, they would probably be scratching their head for a while.

It wouldn't surprise me at all to find some windows based companies that have classic windows VMs in the cloud, running Visual Studio, and production deployments are: RDP in, stopping with the little red button, updating the codebase (with a copy over a network share), and then re-running the code on the production server with the little green play button. Database changes are done by hand in SQL Server Manager.

I know this isn't at all related to the post here - but it's just an interesting phenomenon - the fact that Azure is about 50 percent Windows means it's mostly running things like Office backend products and, crazily, running copies of Visual Studio. It is probably a total mess if half of Azure goes down for 5 minutes. Linux servers would come back up and continue running whatever. All the Windows companies get a call, have to RDP in and do shit.

I've always got the feeling that MS specific developers are highly visual studio oriented

This and the rest of your post is about one specific kind of developer, and I honestly wonder how you can think that's all there is. Lack of experience, or just bad luck in only ever meeting this kind, perhaps? Anyway, while I know for a fact these exist, your feeling is sort of wrong in that there are also developers who just use VS as the IDE it is (to write code and debug; it's good at that) and for the rest automate build environment setup, deployment and whatnot just like you'd do on another OS. I just don't know the numbers, i.e. which share of developers using VS act like you describe. It's not like using VS somehow makes people blind to concepts like CI. Data point: we have a bunch of scripts (a combination of PowerShell/Python/MSBuild) which will install and set up the complete build environment from scratch (VS/Miniconda3/Vcpkg/...), check out all code (yes, from git, not from a network share), build a myriad of different flavors of the product, and package it; all in one command-line command.

One example: when you install many applications, even automated via the CLI, they will open a GUI progress bar.

Python used to do this, it still may.

If the GUI doesn't actually require interaction, that doesn't really matter, does it? But most installers have a silent mode where they do not show a GUI. All of the aforementioned applications are installed like that (including Miniconda, i.e. Python), as is common when installing via package management in PowerShell.

The Visual Studio ecosystem has supported command line builds for a long time. msbuild commands are run by Visual Studio and display right in the console. The .NET Core stack is also very command line first designed.

I think what you're noticing is that Visual Studio makes it easier for point-and-click developers to participate while a bash shell takes a little more effort to learn. You can have poor developers in both worlds or great developers in both worlds.

I'm one "visual studio Dev"

I have also built in Go, Node.js, PHP, RoR.

But VS is truly the best IDE experience I have witnessed.

Database changes are automatically migrated and deployed using Entity Framework, FYI.

The big deployment button changes the connection string per deployment, deploys with feature flags directly and automatically migrates the db to the latest version

It's kind of funny to see the disdain for VS-based deployment here, while declarative deployment systems are all the rage right now.

Something VS-based development, build and deployment workflows have been doing for over a decade, largely through Web.config and .csproj files.

Using it on a huge MFC application was the worst IDE experience I've ever had. So much time wasted on waiting for the thing to do its job. That said, it's a lot better when used with a modern language like C#. But still dog slow. Nowadays even Java-based IDEs feel snappier. And that says a lot.

> I've always got the feeling that MS specific developers are highly visual studio oriented. They have their connection string built into the IDE. In fact, if you asked them to make something that builds from the command line, and runs, they would probably be scratching their head for a while.

That might have been true once, but I think Microsoft-stack devs are a lot more command line savvy these days.

The move towards DevOps and scripting makes the command line much more necessary, and makes the benefits of the command line clear. The move towards cloud has introduced Linux to a lot more organisations, yes, even Microsoft shops.

That might have been true once, but I think Microsoft-stack devs are a lot more command line savvy these days.

I don't think it's ever been that true. It's just a handy stick for some people to bash their perceived out-group.

For decades the cli interface on Windows was neglected on purpose, and it showed. Sure you could hobble along, but I don’t blame folks who didn’t.

Is it any better than SSH'ing into a server and "services stop mybusinessapp; update.sh; services start mybusinessapp"?

I'm detecting a bit of disdain for the VS world

Manual ssh operation doesn't look any better.

> In fact, if you asked them to make something that builds from the command line, and runs, they would probably be scratching their head for a while.

As someone who learnt to program using linux/Mac OS, when I got a job somewhere that used windows and VS exclusively, trying to get my head around the (to me) needlessly confusing and convoluted way in which windows-dotnet devs seemed to do things was a painful experience.

Watching some of them try and debug docker files was amusing because that was a total role reversal.

Obviously anecdotal, but in my personal experience this is pretty close to the truth.

> in the little bit of testing that I've done with this, .NET Core is faster on Linux than Windows.

My experience as well. In fact it was extremely much faster.

- Hello world took something like 5 seconds to run on Windows

- on Linux (a VM on the exact same laptop, with the same Windows running underneath in fact) it was so fast I had to use the time command to figure out how many milliseconds it took :-)

FTR: while it was hardly scientific I actually put some effort into trying to make sure it was equal.

That does not make sense at all. The files were probably cached in the Linux case.

Believe me I tried a number of things. It was like that from the first run and every time.

Well, technically it might have used 700ms the first time and 400ms later, I didn't measure the first run, but it was still way faster than on Windows which was why I started measuring using time after all.

Also on Windows it kept being slow on subsequent runs, and in my world that matters too.

I don't have recordings, but I recommend you try for yourself. I compiled both from the same source, and as far as I can remember I used standalone compilation for both (I don't have perfect memory but I have a history of being careful.)

Spawning a process is a rather heavyweight operation on Windows, compared to Linux, so it's not surprising that it's slower. But 5 seconds definitely makes it sound like an I/O caching issue.
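For what it's worth, a rough way to separate the spawn cost from everything else (just a sketch; absolute numbers will vary wildly with hardware, caching, and antivirus):

```python
import subprocess
import sys
import time

# Time several spawns of a do-nothing child process; the first run includes
# cold-cache costs, later runs show the steady-state spawn overhead.
for i in range(3):
    start = time.perf_counter()
    subprocess.run([sys.executable, "-c", "pass"], check=True)
    elapsed = time.perf_counter() - start
    print(f"run {i}: {elapsed * 1000:.1f} ms")
```

Run on both OSes, the gap between the first and later iterations hints at whether caching (or an antivirus scan) is what's eating the time.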

Another common problem on Windows seems to be antivirus. Many of the available options seem to insist on checking every file on every invocation, even if I have compiled it from my own sources (I see this can be hard to judge) and even if it last checked it a few seconds ago.

It just makes sense to run a server OS that works well and is full-featured when there is no GUI.

You know, I was thinking this, but honestly how much overhead is the Windows GUI when no one is logged into it? I know it's not nothing, but I suspect it's not that much, is it?

Windows Server is available in a "core" edition without a GUI.

Not only that but pretty much everything on Windows Server is exposed to PowerShell and has been for some time.

And does it actually have well tested, well known, reliable functionality on everything you tend to use? Configuration of programs using easy to edit text files vs some system of ticking checkboxes in a GUI?

RegEdit, is there a text version for that? Imo there's no comparison to the Linux ecosystem.

Windows is GUI first.

> RegEdit, is there a text version for that?

Uhm, in Powershell:

    PS C:\Users\zb3> cd hklm:
    PS HKLM:\>
(you are now in the registry local machine hive)

It's just that I have a hard time believing that three or four years of development for administrating windows through text is equivalent to 30 years development for administrating Unix through a CLI.

In many ways it's much better. There is a consistent interface over everything. Whereas on my Unix boxes my ~/.ssh/config file looks completely different to my Apache config.

Also I get much more fine grained control over permissions in the registry, right down to individual keys. Try doing that when your config is in a text file.

The default fallback when you can't find it in the GUI is cmd? Windows 95 used to start from a command prompt. We've been administering Windows from the CLI for 30 years. But our CLI was object-oriented from the start. PowerShell is really amazing. Once you start falling out of object land into crap like ssh, you see the split, e.g. why it took so God damn long to port it. And I still have to plink into switches using a premade command file.

PowerShell has been around since 2006, not just the past 3 or 4 years. And there was a good deal of CLI-based administration / scripting before then, too.

And does it actually have well tested, well known, reliable functionality on everything you tend to use?

Yes, Microsoft made a big thing about this about five or six years ago, if I recall correctly. All the server products like Exchange or SQL Server treat PowerShell as a first-class client.

You can walk around the registry just as if you are traversing a directory tree.

If anything the GUI is becoming rather dumbed down in recent releases.

> RegEdit, is there a text version for that?

AFAIK, yes, it's at %WINDIR%\System32\reg.exe
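For example (Windows-only, so only a sketch; the MyApp key is made up for illustration):

```powershell
# reg.exe works from both cmd and PowerShell:
reg query "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion" /v ProductName

# The PowerShell registry provider exposes the same data as a drive:
Get-ItemProperty "HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion" -Name ProductName

# And .reg files are a plain-text export/import format for whole subtrees:
reg export "HKLM\SOFTWARE\MyApp" backup.reg
```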

So it exists, but is the usability and functionality nearly equivalent? And the thing is that it also involves third-party software tools.

Nearly equivalent to what? It can do everything that the GUI can, if that's what you mean. Some things are obviously going to be less convenient, but that's just the nature of CLI.

And what third party tools does that involve?

What I mean is that oftentimes on Windows, I use non-OS tools or applications which are not made with the expectation that you will want to script them or configure them via text interface.

Great, *nix has been available in that way for hmmm, maybe a few decades.

It's not the resources but the cognitive load: it's so high when you have to click little parts, and you can only interact as well as what the UI offers.

Apparently Windows is human-first, which isn't what servers are meant to be.

You don't have to click little parts, you can use Powershell :-)

It's a lot. I run Core for critical Windows infrastructure to cut costs and patch times, save on core count and instance sizing, and reduce attack footprint.

The GUI is big, and I run that when needed, RDS, userland stuff etc.

I don't mean as far as idle resource usage.

The advantage of Linux is when you want to accomplish tasks without a great internet connection, but the windows tools are not evolved enough as far as command line functionality.

This was once the case, but PowerShell has been a solid administration tool for quite some time now. OpenSSH Server also ships as part of Windows, which was the missing link (since WinRM is, charitably, awful). The remote administration experience of Windows is now certainly comparable to Linux, in my opinion.

Thanks, at this point I'm neither a sysadmin not a masochist so I haven't checked this out.

You seem to want to retain your bias.

You don't want to know how many RHEL servers are out-there running a gui...

As many as wannabe sys admins.

Windows Server Core (no GUI) has been the default since 2012.

I bought books such as The Linux Programming Interface to learn Linux so that I could apply for jobs. Later I discovered that when companies say Linux, they don't mean writing native applications on Linux; they are actually talking about Java applications on app servers such as JBoss. Does anyone still write native thick-client applications on Windows? That also seems rare these days.

I'm in a bit of an odd place here, but we develop C++ on Windows (both client and server software).

You'd think that C++ would be relatively portable (though not as portable as Java), but you'd be mistaken. So many things are not portable.

I actually was writing C and, lo and behold, MSFT has a 16-bit char as their default. lol tchar

TCHAR is a Win9x relic and hasn't really been something to be concerned with for well over a decade now. It took a while for MSDN to acknowledge that, but these days it even documents the A/W versions of each struct and function explicitly, and doesn't mention the generic aliases.

But wchar_t is still 16-bit, yes - it's baked into the ABI for good now. It's not even a "default", since you can't change it.

I am very interested in the book you mention, simply out of passion. How do you find it?

You sound very enthusiastic. I hope my response does not negatively impact your enthusiasm.

As soon as I started reading the book, I knew it was way beyond my depth. I was taken by all the news praising Linux and the developers who were changing the world for the better. I too wanted to be part of that story and contribute to FOSS. I was so naive and got deceived by my own magical thinking. I got laid off from my first job around that time. I looked for Linux jobs but soon realized that when normal companies put ads for dev jobs on the Linux platform, they were mostly for web-based Java applications running on JBoss or WebSphere. I was so dumb that I did not know this simple fact when I bought that book. I lacked mentors, and with my social anxiety did not have friends who could advise me. Anyway. After a year at the second job, I got laid off again. One day I came back home and my wife complained about all the fat books I strewed around for the umpteenth time. That was the last straw that broke my resolve. In a fit of rage and depression I threw all my books in the dumpster. I should have focused more on Java but I was too dumb to realize that C and C++ native applications were on the wane (I used to be a Windows developer) and Java-based web applications were on the rise. Now I work as a production app support engineer where my coding skills don't count for much. If there is an issue, I have to call different teams, like network or the dev team, and get the issue fixed. I weep sometimes at how my career has turned out. I realized I can't code anymore. When I sit to write code I am overcome by anxiety and can't complete the task for weeks. I am middle-aged now, in my mid-forties.

Don't let my experience dampen your spirit. Go on and change the world.

I am truly saddened to hear your account of things. I myself am trying to find out in my early 30s what piece of the computing puzzle I should reshape myself to be.

I am driven to go through this book (in fact, I made the purchase just last night) out of an urge to comprehend the full stack of computing. I can't say I have mastered any layer of the stack but familiarizing myself with the domain and limits of each gives me the confidence to trust my intuitions.

I was a mere designer before, turned front-end/back-end web developer, but my curiosity now drives me towards infrastructure. I don't even know if coding is what I will end up doing with my professional life but I can't imagine any of this knowledge will ever go to waste. Knowing the ins-and-outs of the infrastructure that runs the world can't possibly be useless knowledge.

However, one thing that troubles me about computing is how quickly I become fatigued sitting and staring into a screen. Although for now, I can do it for an unhealthy amount of time, I know deep down that my body or my eyes will not last if I keep this up. And for that, I don't know if my coding skills will count for very much in the future either. I don't quite know what it is you do as an app support engineer but it doesn't sound so terrible to me! I get a satisfaction out of simply knowing what needs to be done and directing the rightful minds even if I may not be the one doing it.

I don't know if you had ever had a "passion" for coding but I certainly hope you can find some delight in it (again). The wall of text is an abyss when the inertia of joyful productivity is not on your side. And when the implementation roadmap is long, the burden can be too heavy. I dare to suggest something like visual code-art (like Processing.js, or Flutter art) to change your mind about coding. I find that getting immediate feedback from small changes you make to your code minimizes that burden and can make coding fun again.

I appreciate your encouraging words. Thank you and best of luck to you.

Yes, that's the one. I have responded to Whytaka.

I see my question was rather daft.

No, it wasn't. I have responded to your previous question.

It was useful for me to understand Linux under the hood (dataflow in memory, syscalls, etc) to help pass SRE interviews. I'm sure (hope?) the knowledge will come handy at some point when debugging some nasty low level bugs.

I probably only read 15% of it to supplement my on-the-job Linux experience.

I’ve been reading that book recently - I recommend Windows Internals Part 1 as a good way to dive into the deeper parts of Windows :)

I found it funny that when you create a resource in Azure you can pick between Windows and Linux for the host, and Windows is the default option... like bro, I'm not gonna run my shit in Windows's heavily licensed walled garden. What if tomorrow I have to migrate my stuff somewhere else? Are you crazy?

P.S: I know there are legitimate reasons for running something on Windows but for any generic project that shouldn’t be the default at all. Linux is the de facto cloud OS.

You know you’re suggesting it’s crazy that Microsoft would suggest their own OS as the default, right?

Wouldn’t be crazy if Windows weren’t a licensed closed-source OS, but that’s exactly what it is. That’s the definition of vendor lock-in.

Maybe ask it a different way. Is it crazy for a bar that brews its own beer to prefer to serve it?

If the majority of customers prefers another beer they also sell? Yes, kind of. If the customers most likely wants the other beer, you're wasting his time by asking him whether he wants to try your special brew.

I don't believe Microsoft is attached to making Windows the default though. Who knows why it is, maybe because people that use the Wizard are likely playing around and will rather want a Windows system? My guess is they'll switch the default if the percentages shift even more and they see that most that manually set something up will change the default. In the end, they want to sell computing and anything that gets in the way, well, gets in the way.

There are tons of restaurants that have their own specialty drinks like their home made soda or root beer or their own alcoholic beers, but most customers pick Budweiser and Coca Cola.

It's not crazy for them to suggest their own home-made soda first, it's marketing. Because that specialty soda makes them more money and differentiates them from their competitors who are just selling off-the-shelf stuff.

It's not crazy to not want to be a commodity.

> It's not crazy to not want to be a commodity.

If you're trying to serve a mass market? I think it is. Wasting time (yours + the customer's) by making them go through and deny the less common options isn't a good idea if you're aiming for serving as many people as possible as efficiently as possible. This would be very different if it wasn't Azure but some local tech company that sits down with you to talk about what kind of server you need and want, what OS best fits your requirements etc, but that's a very different game from what Microsoft is playing with Azure.

With low-touch/self-service offerings, streamlining the process is the right thing to do. In the end, you have to ask yourself whether you want to earn a few bucks by also selling a licence to your OS, or whether you want more customers. Of course, it's not an actual issue in this case, because you'll usually not provision manually (but then again: why offer it if you don't expect to make money by doing it?)

They keep doing it because it works. Same reason restaurants tell you the specials when you first get to your table even though you’re going to end up ordering a burger anyway.

You can argue all you want about what they “should” be doing but they wouldn’t be doing it if it didn’t work.

I've been working with Linux and BSD for so long, I don't even know of any Microsoft-only cloud applications. I'm completely *nix bubbled. I cannot imagine cloud programming on windows, probably because I only write user-space native apps. Is there a Puppet or Ansible for windows? I'd be completely lost.

Yes, there are things like Puppet and Ansible for Windows. Two good examples are Puppet and Ansible.



There's also Desired State Configuration


There's also Group Policy.


There are probably a thousand different ways to automatically deploy and manage Windows servers. It's been a thing since at least NT.

Yes, it’s called Ansible

The default option for VMs is Linux. But for Docker it's Windows; go figure?

I've moved .Net Core apps freely between Linux and Windows.
