
It seems this article badly needs to be updated in light of Ubuntu on Windows. Here's a link from 2016 instead of 2010. https://insights.ubuntu.com/2016/03/30/ubuntu-on-windows-the...



IIUC the new Ubuntu-on-Windows stuff is a different take, reusing some of Interix but diverging substantially from the old implementation.

EDIT Source[1]: "Over time these [Interix] subsystems were retired. However, since the Windows NT Kernel was architected to allow new subsystem environments, we were able to use the initial investments made in this area and broaden them to develop the Windows Subsystem for Linux."

[1] https://blogs.msdn.microsoft.com/wsl/2016/04/22/windows-subs...


Great article!

I'm not too familiar with operating systems as a subject, but is the separation between user and kernel mode similar to that between a high-level language and assembly? I.e., the approach they took was to emulate the Linux kernel, which is sort of like a virtual machine. But I imagine emulating a kernel is harder, right? Because of all the stuff that goes on inside it?

And in general, would kernel emulation be a performant approach for running userspace of any OS in any other OS?


It's more like Wine: the code runs directly on the hardware, but there is an extra layer that emulates the system calls. Except that in this case, the emulation has a lot of supporting code in a kernel driver, something that Wine doesn't have. (Actually, for Win32 programs there is also such a layer, however that layer and the NT kernel were designed together from the start, so it can be pretty thin.)
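
To make that boundary concrete, here's a minimal C sketch (my own illustration, not WSL's actual code) of the ABI surface such a layer has to sit under: a Linux program issuing a raw syscall, bypassing libc entirely. That trap is exactly what WSL's kernel-side driver has to catch and map onto NT primitives.

    /* Minimal sketch: a Linux binary invoking the kernel directly.
       This raw-syscall ABI is what the WSL driver must intercept
       and translate into NT kernel calls. */
    #define _GNU_SOURCE
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/syscall.h>

    int main(void) {
        /* Bypass the libc wrapper and issue the syscall itself. */
        long pid = syscall(SYS_getpid);
        printf("getpid() via raw syscall: %ld\n", pid);
        return 0;
    }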


There's an interesting, fuzzy gray boundary between Ubuntu on Windows and "emulation". It's still built as an NT subsystem (like Interix was), and it's still the NT kernel ultimately in charge of everything. The difference seems to be the NT kernel implementing the generic POSIX standard versus the NT kernel implementing the specific system calls of the Linux kernel. So on one hand it is emulation, because there are specific real-world binary behaviors (and quirks and bugs) it's trying to replicate rather than standards from a specification document; but on the other hand, it still seems to be the NT kernel doing NT kernel things.
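
To illustrate the spec-versus-ABI distinction (a rough sketch of my own, assuming a Linux build environment): fork() is what a POSIX subsystem has to provide, while clone() is a Linux-specific syscall that real binaries, via glibc and pthreads, actually use, so a Linux-emulating subsystem has to replicate it, flags and all.

    /* fork() is in the POSIX spec; clone() is Linux-specific, yet
       it's what real-world binaries rely on, so binary compatibility
       means emulating it faithfully. */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <signal.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    static int child_fn(void *arg) {
        printf("child via clone(), pid %d\n", getpid());
        return 0;
    }

    int main(void) {
        pid_t p = fork();                     /* POSIX-spec path */
        if (p == 0) { printf("child via fork(), pid %d\n", getpid()); _exit(0); }
        waitpid(p, NULL, 0);

        char *stack = malloc(1 << 20);        /* clone() needs its own stack */
        pid_t c = clone(child_fn, stack + (1 << 20), SIGCHLD, NULL);
        waitpid(c, NULL, 0);                  /* Linux-ABI path */
        free(stack);
        return 0;
    }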


I guess the line is a bit blurry, especially when you factor in fast virtualization techniques that run most guest instructions directly on the hardware :)


Gotcha. Thanks for the explanation! Which would you say was harder, Wine or this? It seems like Wine should have been harder since the NT kernel was designed with multiple userspaces in mind...


Wine is basically reverse engineering a badly documented black box. On the other hand, even though the Linux source is available, Microsoft programmers probably aren't allowed to look at it to prevent their code from becoming a derivative GPL'ed work.


> And in general, would kernel emulation be a performant approach for running userspace of any OS in any other OS?

It can come close to native host-OS performance, but that requires a lot of work.
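
The usual way to see how close it comes is a syscall microbenchmark; here's a rough sketch (numbers purely illustrative) that times a cheap syscall in a tight loop, which is where per-call translation overhead would show up.

    /* Rough microbenchmark: average cost of a cheap syscall. Running
       this natively vs. under a syscall-translation layer exposes the
       per-call overhead. */
    #define _GNU_SOURCE
    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>
    #include <sys/syscall.h>

    int main(void) {
        const long iters = 1000000;
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (long i = 0; i < iters; i++)
            syscall(SYS_getpid);   /* raw call so libc can't cache the result */
        clock_gettime(CLOCK_MONOTONIC, &t1);
        double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
        printf("%.1f ns per getpid()\n", ns / iters);
        return 0;
    }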


Would this include stuff like printing to stderr being slower on Windows? If they get to write kernel drivers, there's a chance they could replicate the file-handling behavior of Linux.


I don't really know anything about this, but if stderr really is slower on Windows, I'd suppose that's a userland library thing; I don't see why it would be slower at the kernel level. So that problem might not even arise under the Linux emulation.
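
For what it's worth, one userland mechanism that would explain it (an assumption on my part, not a diagnosis): C runtimes leave stderr unbuffered, so every message becomes its own write to the console, and console writes on Windows are comparatively expensive. A program can paper over that itself:

    /* Hypothetical mitigation: stderr is unbuffered by default, so each
       fprintf() can become a separate console write. Line-buffering it
       amortizes the per-write cost without touching the kernel. */
    #include <stdio.h>

    static char buf[1 << 16];

    int main(void) {
        setvbuf(stderr, buf, _IOLBF, sizeof buf);  /* must precede other I/O on stderr */
        for (int i = 0; i < 1000; i++)
            fprintf(stderr, "line %d\n", i);
        return 0;
    }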


Great link, a must-read for anyone interested in how they did it.


It's a timely reminder for anyone getting excited about Ubuntu on Windows that Microsoft had a working system for doing that, only to slowly run it into the ground. I'd have a lot more respect for the new system if they'd revived Interix/SFU/SUA rather than releasing a different, incompatible replacement.


I think the point is to allow Linux binaries to be used without a recompile, which saves MS most of the cost of maintaining binaries for it.


Which is the key. Plenty of ISVs make Linux binaries, but nobody was interested in Interix binaries.


Indeed. I think you can see this in some of the history in this article: Interix was "POSIX compatible", but essentially its own OS, like compiling for Linux versus BSD. So someone had to maintain binary builds of GNU tools for Interix and thus you ended up with the large "Tools" distribution of user space binaries. Ultimately, "Tools" was its own Unix distribution that was subtly incompatible with any other Unix distribution. Even today on Linux you still see a lot of the headaches in the subtle binary incompatibilities across Linux distributions.

The amazing thing with Ubuntu on Windows is that user space is the same Ubuntu distribution of user-space tools as on Linux. That lessens the maintenance burden considerably: Canonical already actively maintains that distribution and will continue to, there is already a considerable number of users of it on Linux, and there is a considerable ecosystem of third parties building for it. Those are the missing pieces Interix never had, and they make this "Son of Interix" that is Ubuntu on Windows much more interesting.


Does anyone actually care about Linux binaries? A lot of Linux programs tend to be shipped as source.


Exactly, couldn't have said it better! MS has a history of decisions which were meant to strangle the competition, one way or the other. Interoperability was never one of their strong suits.


Would you actually call for such a revival?

* https://news.ycombinator.com/item?id=11446694


What do you mean "call for"? In the abstract, yes. I might put a bit of personal time into it. I might even pay $30 or so for it. But that's unlikely to move the needle for Microsoft.


Instead of the wistful subjunctive ("if they had revived Interix I would have ..."), something in the imperative mood:

Please revive the Subsystem for Unix Applications, Microsoft!

You own the technology. And it addresses quite a number of the issues that you are currently listing on GitHub against the Linux Subsystem. Including:

* Interix has pseudo-terminals (see the short pty sketch after this list). (https://news.ycombinator.com/item?id=11415843 https://wpdev.uservoice.com/forums/266908-command-prompt-con... https://github.com/Microsoft/BashOnWindows/issues/169 https://github.com/Microsoft/BashOnWindows/issues/85 https://github.com/Microsoft/BashOnWindows/issues/80)

* Interix has production code for terminal-style control sequence processing in Consoles. (https://github.com/Microsoft/BashOnWindows/issues/111 https://github.com/Microsoft/BashOnWindows/issues/243 https://github.com/Microsoft/BashOnWindows/issues/27)

* Interix has the mechanism for sending signals to Win32 processes. (https://news.ycombinator.com/item?id=11415872)

* Interix had an init and could spawn daemons. It could also run POSIX programs under the SCM. (https://news.ycombinator.com/item?id=11416376 https://github.com/Microsoft/BashOnWindows/issues/229 https://github.com/Microsoft/BashOnWindows/issues/206)

* The Interix build of the Z Shell runs just fine. (https://github.com/Microsoft/BashOnWindows/issues/91)
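
On the pseudo-terminal point above, a minimal sketch of the POSIX pty API in question (my own example; Interix implemented this surface, while the linked issues show early Bash-on-Windows did not):

    /* Open a pseudo-terminal master and report the slave device a
       shell, sshd, or script(1) would attach to. */
    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        int master = posix_openpt(O_RDWR | O_NOCTTY);
        if (master < 0 || grantpt(master) < 0 || unlockpt(master) < 0) {
            perror("pty setup");
            return 1;
        }
        printf("slave pty: %s\n", ptsname(master));
        return 0;
    }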


It's incompatible with its predecessor, but OTOH if a Linux binary works without recompilation, is that a problem?


Except this is an article about Interix. Correct me if I'm wrong, but Microsoft seems to have discarded this particular technology entirely and started over fresh with Ubuntu on Windows.



