
The Sad History of the Microsoft Posix Subsystem (2010) - the_why_of_y
https://brianreiter.org/2010/08/24/the-sad-history-of-the-microsoft-posix-subsystem/
======
justincormack
Here is a more recent history of Interix from Stephen Walli, who was part of
it:

[https://medium.com/@stephenrwalli/running-linux-apps-on-
wind...](https://medium.com/@stephenrwalli/running-linux-apps-on-windows-and-
other-stupid-human-tricks-part-i-acbf5a474532)

[https://medium.com/@stephenrwalli/running-linux-apps-on-
wind...](https://medium.com/@stephenrwalli/running-linux-apps-on-windows-and-
other-stupid-human-tricks-part-ii-c244b2ee535)

------
toolslive
"Is Windows NT POSIX Compliant?" is a good read:
[https://books.google.be/books?id=t_HcO8cY91IC&pg=PA37&lpg=PA...](https://books.google.be/books?id=t_HcO8cY91IC&pg=PA37&lpg=PA37&dq=windows+NT+3.5+posix+certified&source=bl&ots=Px9DXJRLat&sig=6aTVf7_8y2dgO7J80qV4OCKdO_A&hl=en&sa=X&ved=0ahUKEwjgjL-6mYzLAhUiQpoKHatGCxgQ6AEINDAE#v=onepage&q=windows%20NT%203.5%20posix%20certified&f=false)

~~~
chris_wot
Bloody hell - they really were utterly unethical back in the day! Seriously,
somehow they were bypassing procurement rules - there had to have been either
seriously incompetent decision makers or some sort of graft going on at the
time.

~~~
dredmorbius
Look up the Orange Book certification story after your blood's stopped
boiling.

[https://gcn.com/Articles/1998/10/26/Former-Microsoft-
contrac...](https://gcn.com/Articles/1998/10/26/Former-Microsoft-contractor-
Ed-Curry-says-that-the-company-deliberately-misledgovernment-buyers.aspx?m=1)

------
adekok
One of the Interix guys gave a talk at my local Unix group in 1997 or so. He
said they had a support contract with Microsoft. Once Windows NT came out,
Microsoft stopped answering calls for about 6 months. Because NT was going to
win, right?

After 6 months, their technical contacts were sheepishly apologetic...

On a personal note, I did my main FreeRADIUS development on XP with Interix
for many years. It was... adequate as a Unix replacement. Not spectacular, but
adequate.

When that system died, I replaced it with an OSX system. Which was enormously
better.

~~~
ludamad
I don't quite understand: what caused the 6-month period of silence and then
the apology?

~~~
gvb
Microsoft burned their bridge with Interix because "NT was going to win,
right?" Six months later, when Microsoft figured out that NT still needed a
Posix compatibility layer, they had to (sheepishly) rebuild their bridge with
Interix.

------
chris_wot
This is interesting. So basically, my understanding is that the Posix
subsystem is just a user-mode portion of Windows that translates API calls
into Executive services through ntdll.dll.

Now this means that if you know the ntdll.dll interface, it remains stable. So
if you want to develop your own environmental subsystem, it should be
possible. After all, the Posix subsystem was purchased by Microsoft!

What I'm _really_ interested in is what the ReactOS guys are doing. Have they
implemented the same layering? If they have implemented ntdll.dll, then this
could actually mean that they could technically get in ahead and do what
Microsoft are doing right now.

For that matter, it starts to make me wonder whether someone could do what
Microsoft have done, but in reverse, on Linux! In other words, implement a
translation layer that translates ntdll.dll function calls into the
corresponding kernel syscalls, then work backwards implementing each of the
user-mode subsystems. Maybe WINE has done this already?

~~~
cptskippy
Up until Win7 the Win32 subsystem wasn't entirely in userland. In Vista they
started the MinWin kernel initiative to extricate it, but I don't know if
that's fully come to fruition in Windows 10.

If it has then you could theoretically write a WinNT interface for Linux and
run the Win32 userland on it. But that's pretty much asking Microsoft to sue
you into oblivion.

* According to Wikipedia, the MinWin project started after Server 2003 shipped.

~~~
chris_wot
Wouldn't Microsoft have sued the ReactOS guys then?

~~~
cptskippy
No, ReactOS is an implementation of the Win32 API, not the WinNT kernel.

There's a layer of separation between WinNT and Win32 that's basically an API.
In theory if you implemented this API, then you could run Microsoft's Win32
subsystem on your API without Microsoft's underlying WinNT kernel. That Win32
layer would be Microsoft code and they'd sue the heck out of you for running
it.

The WinNT kernel was originally designed so that subsystems would run on top
of it and those subsystems would provide the interfaces for applications.
Originally WinNT had Win32, OS/2 and POSIX subsystems that ran on top of it.
Over time the distinction between WinNT and Win32 eroded, while the OS/2
subsystem was canned and the POSIX one neglected.

Starting after Server 2003 they began to redefine the boundary between WinNT
and Win32. The primary reason was to allow for headless servers that didn't
have the overhead of the GUI (Win32) or other unnecessary functionality like
the printer systems.

~~~
chris_wot
That's entirely wrong. ReactOS implements the subsystems that allow drivers to
run and a whole bunch more besides.

If you don't believe me, then I refer you to the following:

[https://github.com/mirror/reactos/tree/master/reactos/dll/nt...](https://github.com/mirror/reactos/tree/master/reactos/dll/ntdll)

The Wiki itself states that:

"The ReactOS project reimplements a state-of-the-art and open NT-like
operating system based on the NT architecture. It comes with a WIN32
subsystem, NT driver compatibility and a handful of useful applications and
tools."

[https://www.reactos.org/wiki/ReactOS](https://www.reactos.org/wiki/ReactOS)

You might also want to review:

[https://www.reactos.org/wiki/Ntoskrnl.exe](https://www.reactos.org/wiki/Ntoskrnl.exe)

And also:

[https://www.reactos.org/wiki/ReactOS_Core](https://www.reactos.org/wiki/ReactOS_Core)

And here is the header for the kernel functions used in ntdll.h:

[https://github.com/mirror/reactos/blob/master/reactos/includ...](https://github.com/mirror/reactos/blob/master/reactos/include/ndk/kefuncs.h)

------
grennis
It seems this article badly needs to be updated in light of Ubuntu on Windows.
Here's a link from 2016 instead of 2010.
[https://insights.ubuntu.com/2016/03/30/ubuntu-on-windows-
the...](https://insights.ubuntu.com/2016/03/30/ubuntu-on-windows-the-ubuntu-
userspace-for-windows-developers/)

~~~
lmm
It's a timely reminder for anyone getting excited about Ubuntu on Windows that
Microsoft had a working system for doing that, only to slowly run it into the
ground. I'd have a lot more respect for the new system if they'd revived
Interix/SFU/SUA rather than releasing a different, incompatible replacement.

~~~
yuhong
I think the point is to allow Linux binaries to be used without a recompile,
which saves MS most of the cost of maintaining binaries for it.

~~~
chungy
Which is the key. Plenty of ISVs make Linux binaries, but nobody was
interested in Interix binaries.

~~~
WorldMaker
Indeed. I think you can see this in some of the history in this article:
Interix was "POSIX compatible", but essentially its own OS, like compiling for
Linux versus BSD. So someone had to maintain binary builds of GNU tools for
Interix and thus you ended up with the large "Tools" distribution of user
space binaries. Ultimately, "Tools" was its own Unix distribution that was
subtly incompatible with any other Unix distribution. Even today on Linux you
still see a lot of the headaches in the subtle binary incompatibilities across
Linux distributions.

The amazing thing with Ubuntu on Windows is that the user space is the same
Ubuntu distribution of user-space tools as on Linux. That lessens the
maintenance burden of the user space considerably: Canonical is already
actively maintaining that distribution and will continue to do so, there are
already a considerable number of users of it on Linux, and there is a
considerable ecosystem of third parties building for it. Those are exactly the
missing pieces that Interix never had, and they make this "Son of Interix"
that is Ubuntu on Windows much more interesting.

------
SFJulie
Maybe an explanation is that Dave Cutler, very much a VMS guy who did not like
Unix/POSIX at all, had an influence on the design of the Windows NT kernel?

I must admit I used SFU, and the IO was quite quirky (buffered IO called in
non-buffered mode).

But knowing that async IO and threading were radically different from POSIX,
they may have decided at some point that it was not possible to offer a
quirk-free POSIX API due to a conflict of construction, and they dropped it.

I mean, for a lot of programmers an IO is an IO and a process is a process...
but the kernel may think differently.

For the record, before I hear the Unix fanboys say that POSIX is superior:
PyParallel by Continuum exploits a parallel version of Python on Windows and
gets considerable gains from using exactly the heart of Dave Cutler's
architecture: multi-threading, async IO...
[https://speakerdeck.com/trent/parallelizing-the-python-
inter...](https://speakerdeck.com/trent/parallelizing-the-python-interpreter-
the-quest-for-true-multi-core-concurrency)

I do not say it is a panacea; I say it is worth looking at.

As you can guess, thinking that the same cause produces the same effect: if
the kernel still uses Cutler's architecture around multi-threading and async
IO, then I expect the Ubuntu runtime on Windows to have quirks around... async
IO and multi-threading.

------
rwmj
I think this is best described as a tale about using and writing proprietary
software. At any time, what you've written can be taken away and/or abandoned,
and there's usually nothing you can do about it.

~~~
_yosefk
Using an abandoned free software project is rarely going to be much fun,
either. In fact, using actively maintained free software projects is rarely
much fun, since many of them tend to often abandon old features and introduce
new incompatible ones, and you either have to keep using old versions of
everything or update everything at once, breaking a ton of things. (Of course
at times you're way better off with a free program than you'd be with a
proprietary alternative, but the reverse can be true just as much in other
cases.)

~~~
pjc50
_using actively maintained free software projects is rarely much fun, since
many of them tend to often abandon old features and introduce new incompatible
ones_

GNOME has a lot to answer for. Most other projects try hard to avoid this and
lean the other way, preserving ancient features at the cost of clarity.

~~~
_yosefk
I dunno, KDevelop 4, based on KDE's rather than Gnome's libraries, was nothing
like KDevelop 3 and couldn't do some of the old things, and there was no way
you could compile KDevelop 3 on an Ubuntu box bundled with KDevelop 4 and the
new KDE libraries unless you were really good at this shit. Eclipse, based on
the JVM, had a popular plugin that was broken, I think, in Juno or a bit
earlier, and whose author refused to update it for the new Eclipse because too
many of the things in Eclipse he depended on had changed. Upgrading Emacs
broke everyone's emacs-lisp hacks that they had copied from each other. And so
forth.

I think that in the land of Linux, the only things which remain backward
compatible are the kernel (if you want to run statically linked userland
binaries - not if you want to run driver binaries) and the GNU userspace
utilities like sh and grep. (Though sh is bash on some distros and dash on
others, breaking the very popular assumption that sh is in fact bash, and env
might reside in either /bin/env or /usr/bin/env, defeating its purpose of
letting you deal with the fact that #!/bin/tcsh will break on systems keeping
tcsh at /usr/bin. But a given distro will usually keep its shit where it last
put it, which I guess is better than it could have been.)
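The sh/dash and env quirks above are easy to check on a live system. A minimal
sketch, assuming a typical Linux box where /bin/sh is a symlink and GNU
readlink is available:

```shell
#!/bin/sh
# Probe what this system actually provides, since /bin/sh and the
# location of env vary by distro (treat the specific paths as assumptions).

# /bin/sh is often a symlink to bash or dash; resolve it if we can.
real_sh=$(readlink -f /bin/sh 2>/dev/null || echo /bin/sh)
echo "sh resolves to: $real_sh"

# env may live in /bin or /usr/bin; 'command -v' finds it via PATH,
# which is why '#!/usr/bin/env interp' is only as portable as env itself.
echo "env found at: $(command -v env)"

# A bashism like '[[ ... ]]' is a syntax error under dash, which is why
# scripts that start with '#!/bin/sh' should stick to POSIX syntax.
```

On Debian-derived systems the first line typically reports dash; on many
others, bash.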

------
yuhong
I should also mention this fiasco:

[http://archives.miloush.net/michkap/archive/2008/01/12/70624...](http://archives.miloush.net/michkap/archive/2008/01/12/7062458.html)

------
rbanffy
Since at least Windows XP, I used Cygwin to get a sane environment that ran
most of my code and tools. Under it, Windows was a reasonably adequate
workstation OS.

~~~
moonfern
Between 2001 and the arrival of Service Pack 2, it was a horror for sysadmins
and end users due to viruses and worms. The ignore-the-problem fix was a
resident, resource-hungry program called a "virus scanner". COM & co were
great: MS used it, the rest abused it. And XP felt fast too; it wasn't. 7 was
faster, but it felt the other way around.

~~~
clevernickname
>And XP felt fast too; it wasn't. 7 was faster, but it felt the other way around.

Desktop composition adds a frame or more of latency. If you disable it (sadly
not possible in Windows 8 and later), UI interaction feels noticeably more
responsive. But most people run Windows 7 with composition on, so for them it
makes sense that "XP felt faster".

------
gfody
> Around this time, the core development team was reformed in India rather
> than Redmond and some of the key Softway developers moved on to other
> projects like Monad (PowerShell) or left Microsoft.

This is indeed very sad. It seems like every project shipped overseas suffers
the same fate. When will we ever learn that you can't "leverage the salary
disparity"? You don't outsource your engineers when your core competency is
engineering.

~~~
jeffjose
This is not outsourcing. I'm an Indian and I know for a fact that Microsoft
India employs the very best talent. Yes, there are cost benefits to having
engineering done in India, but for companies such as Microsoft there shouldn't
be a quality difference. However, where you would see a quality difference is
if you try to get an app built for $50 with a company you found on a job
website. Those employ regular run-of-the-mill, copy-paste-from-Stack-Overflow
developers.

~~~
gfody
Offshoring then. Talent probably isn't even the problem. It could have more to
do with putting 8000 miles between the devs working on some project and the
people who care about that project. If you try to get an app built for $50
you're gonna have a bad time regardless of where you do it - that's not an
argument.

~~~
hirsin
> the devs working on some project and the people who care about that project.

Those two aren't mutually exclusive. Toyota employees in the US probably still
care about the cars they're working on. I can verify that the Finnish
employees at Microsoft do great work (e: and care about it), despite being
4700 miles away from Redmond.

~~~
gfody
I'm not trying to imply that offshore workers don't care about their work. If
you take an existing project and send it far away from everyone who was
involved with that project, that project will probably suffer, especially with
software.

------
PaulHoule
It's sort of a truism that the more expensive software is to buy, the more
expensive it is to own... (i.e. worse quality, less development on the part of
the vendor, etc.)

~~~
ethbro
I think the starting point of that could be more precisely identified as: the
fewer buyers for a given piece of software, the more expensive it is to own.

If you're the only customer running
OS/2-on-NT-using-Token-Ring-over-Carrier-Pigeon, you're not going to have a
good time when you find a bug or missing feature.

~~~
agumonkey
A variant on the network effect.

~~~
qubex
(Absence thereof.)

------
csours
Well that was indeed sad, including the Walli writeup linked by Justin.

Question: does the Window Station[1] fit into the NT subsystem framework, and
if so, how?

1.
[https://blogs.technet.microsoft.com/askperf/2007/07/24/sessi...](https://blogs.technet.microsoft.com/askperf/2007/07/24/sessions-
desktops-and-windows-stations/)

------
rubyfan
I still have a Softway Systems CD laying around somewhere. Wasn't quite the
same as Linux but was fun to play around with.

------
ullus12
details matter

[https://docs.freebsd.org/cgi/getmsg.cgi?fetch=156953+0+archi...](https://docs.freebsd.org/cgi/getmsg.cgi?fetch=156953+0+archive/1998/freebsd-
chat/19980823.freebsd-chat)

~~~
spb
[https://www.youtube.com/watch?v=9wWUc8BZgWE](https://www.youtube.com/watch?v=9wWUc8BZgWE)

------
wsfull
Here's my tl;dr:

1. The Interix/SUA subsystem was not developed by Microsoft. It was acquired
from a company called Softway. It was used internally to transition Hotmail
from FreeBSD to Windows. It is believed some important MS customers also made
use of Interix and possibly came to rely on it.

2. How to explain MS's seeming ambivalence toward a POSIX layer on top of
Windows? Idea: the Windows API is so complex (convoluted?) as to exclude
competition. See the Joel on Software reference. He marvels at Windows'
backwards compatibility: being able to run yesterday's software on today's
computers. Yet he also admits MS strategically developed software that would
not run on today's hardware, but only on tomorrow's. (Not intending to single
out MS, as I know other large companies in the software business did this
too.)

Complexity as a defensive strategy. Who would have guessed?

Many years ago, I gave up on Windows in favor of what I perceived as a
simpler, volunteer-run UNIX-like OS that was better suited to networking.

As it happens, unlike Windows, _all versions_ of this OS run reliably on most
_older hardware_. Although it was not why I switched at the time, I have come
to expect that by virtue of the UNIX-like OS, my applications will now run on
older _as well as_ current hardware. I rely on this compatibility.

Unlike Windows I can run the latest version of the OS on the older hardware.

Windows backwards compatibility is no doubt worthy of praise, however the
above mentioned compatibility with older hardware is more important to me than
having older software run reliably on a proprietary OS that constantly
requires newer hardware.

The 2004 reference Reiter cites on the "API War" suggests people buy computers
based on what applications they will be able to run.

Unlike the reference, I cannot pretend to know why others buy certain
computers. Personally, I buy computers based on what OS they will be able to
run. Traditionally, in the days of PC's and before so-called smartphones, if
you were a Windows user this was almost a non-issue. It was pre-installed
everywhere.

At least with respect to so-called smartphones it appears this has begun to
change. Maybe others are choosing to buy computers based on the OS the
computer can run? I don't know for sure.

As for the "developers, developers, developers" and availability of
applications idea, since switching to UNIX-like OS, being able to run any
applications I may need has been a given. In fact, I have come to rely on
applications that will only run on UNIX-like OS!

And now it seems MS is going to make running UNIX applications on Windows
easier. Why?

As with Interix, will the reasoning behind this successor POSIX layer remain a
mystery?

~~~
SSLy
>MS strategically developed software that would not run on today's hardware,
but only on tomorrow's

what do you mean by that?

~~~
wsfull
If you follow the "API war" hyperlink, it's under the heading "It's Not 1990".

When consumers are upgrading their hardware regularly as they were in the
1990's, then developers can disregard the notion of users "upgrading" their
software.

Instead they can just write applications targeting new hardware. It does not
have to run on older hardware.

The user will be compelled to upgrade the hardware and, in the case of
Windows, by default they get the new software. The example cited was Excel
versus Lotus123.

MS also benefited from hardware sales through agreements with the OEMs.

