After 6 months, their technical contacts were sheepishly apologetic...
On a personal note, I did my main FreeRADIUS development on XP with Interix for many years. It was... adequate as a Unix replacement. Not spectacular, but adequate.
When that system died, I replaced it with an OSX system. Which was enormously better.
Now this means that if you code against the ntdll.dll interface, it remains stable. So if you want to develop your own environment subsystem, it should be possible. After all, the POSIX subsystem was purchased by Microsoft!
What I'm really interested in is what the ReactOS guys are doing. Have they implemented the same layering? If they have implemented ntdll.dll, then they could technically get ahead and do what Microsoft are doing right now.
For that matter, it makes me wonder whether someone could do what Microsoft have done, but in reverse, on Linux! In other words, implement a translation layer that translates ntdll.dll function calls into the corresponding Linux kernel syscalls, then work backwards, implementing each of the user-mode subsystems. Maybe WINE has done this already?
If it has then you could theoretically write a WinNT interface for Linux and run the Win32 userland on it. But that's pretty much asking Microsoft to sue you into oblivion.
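To make the translation-layer idea above concrete, here is a deliberately over-simplified sketch in C. Everything in it is hypothetical: the `NtClose_shim` name, the handle-is-just-an-fd assumption, and the single error code. Real ntdll entry points deal in NTSTATUS codes, OBJECT_ATTRIBUTES, UNICODE_STRINGs, and a far richer per-process handle table.

```c
/* Hypothetical sketch: translating a simplified NtClose-style call
 * into the corresponding Linux syscall. Only illustrates the layering
 * idea, not the real ntdll.dll contract. */
#include <unistd.h>

typedef long NTSTATUS;
#define STATUS_SUCCESS        ((NTSTATUS)0x00000000L)
#define STATUS_INVALID_HANDLE ((NTSTATUS)0xC0000008L)

/* Assumption for this sketch: an NT handle is backed directly by a
 * Linux file descriptor. A real layer would keep a handle table. */
typedef int NT_HANDLE;

NTSTATUS NtClose_shim(NT_HANDLE h)
{
    if (close(h) == 0)
        return STATUS_SUCCESS;
    /* Crude errno -> NTSTATUS mapping; real code would be exhaustive. */
    return STATUS_INVALID_HANDLE;
}
```

Most of the real work would be exactly the parts elided here: the handle table, object namespaces, and a faithful errno-to-NTSTATUS mapping.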
* According to Wikipedia, the MinWin project started after Server 2003 shipped.
There's a layer of separation between WinNT and Win32 that's basically an API. In theory if you implemented this API, then you could run Microsoft's Win32 subsystem on your API without Microsoft's underlying WinNT kernel. That Win32 layer would be Microsoft code and they'd sue the heck out of you for running it.
The WinNT kernel was originally designed so that Subsystems would run on top of it and those Subsystems would provide the interfaces for applications. Originally WinNT had Win32, OS/2 and POSIX subsystems that ran on top of it. Over time the distinction between WinNT and Win32 eroded, while the OS/2 subsystem was canned and the POSIX one neglected.
Starting after Server 2003, they began to redefine the boundary between WinNT and Win32. The primary reason was to allow for headless servers that didn't have the overhead of the GUI (Win32) or other unnecessary functionality like the printing subsystems.
If you don't believe me, then I refer you to the following:
The Wiki itself states that:
"The ReactOS project reimplements a state-of-the-art and open NT-like operating system based on the NT architecture. It comes with a WIN32 subsystem, NT driver compatibility and a handful of useful applications and tools."
You might also want to review:
And here is the header for the kernel functions used in ntdll.h:
I thought that was what WINE did. WINE = Wine Is Not an Emulator.
EDIT Source: "Over time these [Interix] subsystems were retired. However, since the Windows NT Kernel was architected to allow new subsystem environments, we were able to use the initial investments made in this area and broaden them to develop the Windows Subsystem for Linux."
I'm not too familiar with operating systems as a subject, but is the separation between user and kernel mode similar to that between a high-level language and assembly? That is, the approach they took was to emulate the Linux kernel, which is sort of like a virtual machine. But I imagine emulating a kernel is harder, right? Because of all the stuff that goes on?
And in general, would kernel emulation be a performant approach for running userspace of any OS in any other OS?
It can come close to the host OS, but that requires a lot of work.
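One way to see why this is tractable at all: user space reaches the kernel only through a narrow syscall boundary, so "emulating the kernel" means intercepting that boundary rather than the whole machine. A minimal Linux-side illustration, using the raw `syscall()` wrapper (`SYS_getpid` is a real Linux syscall number):

```c
/* User space talks to the kernel only through numbered syscalls.
 * Both the libc wrapper getpid() and the raw invocation below cross
 * the same kernel entry point; a kernel "emulator" for foreign
 * binaries intercepts exactly this boundary. */
#include <unistd.h>
#include <sys/syscall.h>

long raw_getpid(void)
{
    /* Invoke the kernel directly, bypassing the libc wrapper. */
    return syscall(SYS_getpid);
}
```

`raw_getpid()` returns the same value as libc's `getpid()`, because both are just transports for the same kernel service, which is why a layer like WSL can sit underneath unmodified Linux binaries.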
The amazing thing about Ubuntu on Windows is that the user space is the same Ubuntu distribution of user-space tools as on Linux. That lessens the maintenance burden considerably: Canonical already actively maintains that distribution and will continue to do so, a considerable number of users already run it on Linux, and a considerable ecosystem of third parties builds for it. Those are the missing pieces that Interix never had, and they make this "Son of Interix" that is Ubuntu on Windows much more interesting.
Please revive the Subsystem for Unix Applications, Microsoft!
You own the technology. And it addresses quite a number of the issues that you are currently listing on GitHub against the Linux Subsystem. Including:
* Interix has pseudo-terminals. (https://news.ycombinator.com/item?id=11415843 https://wpdev.uservoice.com/forums/266908-command-prompt-con... https://github.com/Microsoft/BashOnWindows/issues/169 https://github.com/Microsoft/BashOnWindows/issues/85 https://github.com/Microsoft/BashOnWindows/issues/80)
* Interix has production code for terminal-style control sequence processing in Consoles. (https://github.com/Microsoft/BashOnWindows/issues/111 https://github.com/Microsoft/BashOnWindows/issues/243 https://github.com/Microsoft/BashOnWindows/issues/27)
* Interix has the mechanism for sending signals to Win32 processes. (https://news.ycombinator.com/item?id=11415872)
* Interix had an init and could spawn daemons. It could also run POSIX programs under the SCM. (https://news.ycombinator.com/item?id=11416376 https://github.com/Microsoft/BashOnWindows/issues/229 https://github.com/Microsoft/BashOnWindows/issues/206)
* The Interix build of the Z Shell runs just fine. (https://github.com/Microsoft/BashOnWindows/issues/91)
I must admit I used SFU, and the I/O was all quite quirky (buffered I/O called in non-buffered mode).
But knowing that async I/O and threading were radically different from POSIX, they may have decided at some point that it was not possible to offer a quirk-free POSIX API - a conflict by construction - and they dropped it.
I mean, for a lot of programmers an I/O is an I/O and a process is a process... but the kernel may think differently.
For the record, before I hear the Unix fanboys say that POSIX is superior: PyParallel by Continuum exploits a parallel version of Python on Windows and gains considerably from using exactly the heart of Dave Cutler's architecture: multi-threading, async I/O...
I do not say it is the panacea; I say it is worth looking at.
As you can guess, thinking that the same cause produces the same effect: if the kernel still uses Cutler's architecture around multi-threading and async I/O, then I expect the Ubuntu runtime on Windows to have quirks around... async I/O and multi-threading.
GNOME has a lot to answer for. Most other projects try hard to avoid this and lean the other way, preserving ancient features at the cost of clarity.
I think that in the land of Linux, the only things which remain backward compatible are the kernel (if you want to run statically linked userland binaries - not if you want to run driver binaries) and the GNU userspace utilities like sh and grep. (Though sh is bash on some distros and dash on others, breaking the very popular assumption that sh is in fact bash; and env might reside in either /bin/env or /usr/bin/env, defeating its purpose of letting you deal with the fact that #!/bin/tcsh will break on systems keeping tcsh at /usr/bin. But a given distro will usually keep its shit where it last put it, which I guess is better than it could have been.)
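The /bin/env vs /usr/bin/env point is easy to demonstrate with a small C sketch (the probed paths below are just illustrative examples):

```c
/* A hard-coded shebang like #!/bin/tcsh bets on one exact filesystem
 * location, while #!/usr/bin/env tcsh lets env search PATH instead.
 * This helper probes a single candidate location the way the kernel's
 * shebang handling does: exact path, or nothing. */
#include <unistd.h>

int interpreter_at(const char *path)
{
    /* X_OK: an executable file exists at precisely this path. */
    return access(path, X_OK) == 0;
}
```

On most modern distros `interpreter_at("/usr/bin/env")` holds (many now symlink /bin into /usr/bin anyway), but a script that hard-codes one of the two locations can only rely on that one - hence the portability headache described above.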
Actually reading the licence dialogue displayed by the SFUA/SUA installer was interesting: umpteen different iterations of mostly the same BSD licences, over and over, with the only differences being the copyright declarations.
Desktop composition adds a frame or more of latency. If you disable it (sadly not possible in Windows 8 and later), UI interaction feels noticeably more responsive. But most people run Windows 7 with composition on, so for them it makes sense that "XP felt faster".
Proper firewalls alone helped a lot.
This is indeed very sad. It seems like every project shipped overseas suffers the same fate. When will we ever learn that you can't "leverage the salary disparity" - you don't outsource your engineers when your core competency is engineering.
Those two aren't mutually exclusive. Toyota employees in the US probably still care about the cars they're working on. I can verify that the Finnish employees at Microsoft do great work (e: and care about it), despite being 4700 miles away from Redmond.
If you're the only customer running OS/2-on-NT-using-Token-Ring-over-Carrier-Pigeon, you're not going to have a good time when you find a bug or missing feature.
To name a few examples - there's no decent alternative to Microsoft Office (free or otherwise), and there are no decent open source alternatives to most CAD applications, to MATLAB (no, Octave doesn't count) & Simulink, LabVIEW, musical software, electronics design & manufacturing, etc. Essentially anywhere the problem domain is nontrivial and the potential audience is limited.
I don't know what Microsoft Office has to do with this.
Microsoft Office is a good example of software that a) has a very broad audience, b) pretty much every computer user has seen, and c) somehow nobody can make a decent free replacement of.
Question: does the Window Station fit into the NT subsystem framework, and if so, how?
1. The Interix/SUA subsystem was not developed by Microsoft. It was acquired from a company called Softway. It was used internally to transition Hotmail from FreeBSD to Windows. It is believed some important MS customers also made use of Interix and possibly came to rely on it.
2. How to explain MS's seeming ambivalence toward a POSIX layer on top of Windows? Idea: the Windows API is so complex (convoluted?) as to exclude competition. See the Joel on Software reference. He marvels at Windows' backwards compatibility - being able to run yesterday's software on today's computers. Yet he also admits MS strategically developed software that would not run on today's hardware, but only on tomorrow's. (Not intending to single out MS, as I know other large companies in the software business did this too.)
Complexity as a defensive strategy. Who would have guessed?
Many years ago, I gave up on Windows in favor of what I perceived as a simpler, volunteer-run UNIX-like OS that was better suited to networking.
As it happens, unlike Windows, _all versions_ of this OS run reliably on most older hardware. Although it was not why I switched at the time, I have come to expect that by virtue of the UNIX-like OS, my applications will now run on older as well as current hardware. I rely on this compatibility.
Unlike Windows I can run the latest version of the OS on the older hardware.
Windows backwards compatibility is no doubt worthy of praise, however the above mentioned compatibility with older hardware is more important to me than having older software run reliably on a proprietary OS that constantly requires newer hardware.
The 2004 reference Reiter cites on the "API War" suggests people buy computers based on what applications they will be able to run.
Unlike the reference, I cannot pretend to know why others buy certain computers. Personally, I buy computers based on what OS they will be able to run. Traditionally, in the days of PCs and before so-called smartphones, if you were a Windows user this was almost a non-issue. It was pre-installed everywhere.
At least with respect to so-called smartphones it appears this has begun to change. Maybe others are choosing to buy computers based on the OS the computer can run? I don't know for sure.
As for the "developers, developers, developers" and availability of applications idea, since switching to UNIX-like OS, being able to run any applications I may need has been a given. In fact, I have come to rely on applications that will only run on UNIX-like OS!
And now it seems MS is going to make running UNIX applications on Windows easier. Why?
As with Interix, will the reasoning behind this successor POSIX layer remain a mystery?
what do you mean by that?
When consumers are upgrading their hardware regularly as they were in the 1990's, then developers can disregard the notion of users "upgrading" their software.
Instead they can just write applications targeting new hardware. It does not have to run on older hardware.
The user will be compelled to upgrade the hardware and, in the case of Windows, by default they get the new software. The example cited was Excel versus Lotus 1-2-3.
MS also benefitted from hardware sales through agreements with the OEM's.