
> I wish Windows/MS would abandon NT and just create a Linux distro. I don't know anyone who particularly likes NT and jamming multiple systems together seems like an awful idea.

I do. The NT kernel is pretty clean and well architected. (Yes, there are mistakes and cruft in it, but Unix has that in spades.) It's not "jamming multiple systems together"; an explicit design goal of the NT kernel was to support multiple userland APIs in a unified manner. Darwin is a much better example of a messy kernel, with Mach and FreeBSD mashed together in a way that neither was designed for.

It's the Win32 API that is the real mess. Having a better officially supported API to talk to the NT kernel can only be a good thing, from my point of view.




Well, large parts of the NT API are very close to the Win32 API for obvious reasons, and so are often in the realm of dozens of params and even crazier Ex functions. Internally there are redundancies that do not make much sense (like multiple versions of mutexes or spinlocks depending on which parts of kernel space use them, IIRC), and some whole-picture aspects of Windows make no sense at all given the architectural cost they induce (Winsock split in half between userspace and the obviously needed kernel support is completely, utterly crazy, beyond repair; it makes so little sense you want to go back in time and explain to the designer of that mess how stupid it is).

The initial approach of NT subsystems was absolutely insane (a hard dependency on an NT API core, so you can't do emulation with classic NT subsystems -- you're either limited to OSes with some technical similarities, like OS/2, or to very small communities when doing a new target, like the POSIX and SFU subsystems were). WSL makes complete sense, though, but it is maybe a little late to the party. Classic NT subsystems are of so little use that MS did not even use them for their own Metro and then UWP things, even though they would very much like to distinguish those from Win32 and make the world consider Win32 legacy. I've read the original paper motivating putting POSIX in an NT subsystem, and it contained no real strong point, only repeated incantations that this would be better in an NT subsystem and worse if done otherwise (for fork this is obvious, but the paper was not even focused on that), with none of the limitations I've described above ever considered.
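
To make the parameter-count point concrete, here is a minimal sketch (mine, not from the parent) of opening a single file through the native API. It assumes the SDK's winternl.h plus an ntdll import library, and defines inline two constants that normally live in the DDK headers:

    /* Sketch: one file open via the native API instead of CreateFileW. */
    #include <windows.h>
    #include <winternl.h>
    #include <stdio.h>
    #pragma comment(lib, "ntdll")

    #ifndef OBJ_CASE_INSENSITIVE
    #define OBJ_CASE_INSENSITIVE 0x00000040          /* from the DDK headers */
    #endif
    #define MY_FILE_OPEN 0x00000001                  /* FILE_OPEN (DDK) */
    #define MY_FILE_SYNC_IO_NONALERT 0x00000020      /* FILE_SYNCHRONOUS_IO_NONALERT (DDK) */

    int main(void)
    {
        UNICODE_STRING name;
        RtlInitUnicodeString(&name, L"\\??\\C:\\Windows\\win.ini");

        /* Length, RootDirectory, ObjectName, Attributes, SD, SQoS */
        OBJECT_ATTRIBUTES oa = { sizeof oa, NULL, &name, OBJ_CASE_INSENSITIVE, NULL, NULL };

        HANDLE h;
        IO_STATUS_BLOCK iosb;
        /* Eleven parameters to open one file; CreateFileW hides most of this. */
        NTSTATUS st = NtCreateFile(&h, FILE_GENERIC_READ, &oa, &iosb,
                                   NULL,                      /* AllocationSize */
                                   FILE_ATTRIBUTE_NORMAL,
                                   FILE_SHARE_READ,
                                   MY_FILE_OPEN,              /* CreateDisposition */
                                   MY_FILE_SYNC_IO_NONALERT,  /* CreateOptions */
                                   NULL, 0);                  /* EaBuffer, EaLength */
        printf("NtCreateFile -> 0x%08lx\n", (unsigned long)st);
        if (st == 0) CloseHandle(h);
        return 0;
    }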

Still considering the whole system, an unstable user-kernel interface has few advantages and tons of drawbacks. MS is extremely late to the chroot and then container party because of it (and let's remember that the core technology behind WSL emerged because they wanted to run a separate, chroot-like userspace alongside their OS in the first place, NOT because they wanted to run Linux binaries) -- so yet another point why classic NT subsystems are useless.

Back to core kernel stuff: the IRQL model is shit. It does not make any sense when you consider what really happens, and you can't really make use of the many arbitrary levels. It seems cute and clean and all of that, but the Linux approach of top and bottom halves plus kernel and user threads might look messy while actually being far more usable. Another point: now everybody uses multiprocessor computers, but back in the day the multiple HALs were also a false good idea. MS recognizes that now and only wants to handle ACPI computers, even on ARM; other OSes handle all kinds of computers... Cutler claimed not to like the "everything is a file" approach, but NT does basically the same thing with "everything is a handle". And soon enough you hit exactly the same conceptual limitations (just not in the same places), that not everything is actually the same, so that cute abstraction leaks soon enough (well, it does in any OS).

Taking a more results-oriented view, one of the things WSL makes clear is that file operations are very slow (just compare an identical file-heavy workload under WSL and then under a real Linux).
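
For instance, a trivial micro-benchmark along these lines (a sketch, not a rigorous benchmark; the file count is arbitrary) shows the gap when compiled and run under WSL and then on a native Linux install:

    /* Small-file churn: create, write, close, unlink N files and time it. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>

    int main(void)
    {
        enum { N = 10000 };
        char name[64];
        struct timespec t0, t1;

        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (int i = 0; i < N; i++) {
            snprintf(name, sizeof name, "bench_%d.tmp", i);
            int fd = open(name, O_CREAT | O_WRONLY | O_TRUNC, 0644);
            if (fd < 0) { perror("open"); return 1; }
            if (write(fd, "x", 1) != 1) { perror("write"); return 1; }
            close(fd);
            unlink(name);
        }
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
        printf("%d create/write/unlink cycles in %.3f s\n", N, secs);
        return 0;
    }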

So of course there are (probably) some good parts, like in any mainstream kernel, but there are also some quite dark corners. I am not an expert on every architectural aspect of NT, but I'm not a fan of the parts I know, and I strongly prefer the Linux way of doing the equivalent things.


> Cutler claimed not to like the "everything is a file" approach, but NT does basically the same thing with "everything is a handle". And soon enough you hit exactly the same conceptual limitations (just not in the same places), that not everything is actually the same, so that cute abstraction leaks soon enough (well, it does in any OS).

Explain? Pretty much the only thing you can do with a handle is to release it. That's very different from a file, which you can read, write, delete, modify, add metadata to, etc. Handles aren't even an abstraction over anything; they're just a resource management mechanism.


You are right, but those points are details. FDs under modern Unixes (esp. Linux, but probably others) serve exactly the same purpose (resource management). FDs where read/write don't apply simply don't implement them (same principle for other syscalls); similarly, if you try NtReadFile on an incompatible handle, you just get an error back. Both live in a single numbering space per process. NT makes heavy use of NtReadFile / NtWriteFile to communicate with drivers, even in quite core Windows components (Winsock and AFD). And NT handles do provide at least one abstraction (that I know of): they can be signaled, and waited on with WaitFor*Objects.
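
To illustrate that last point, a minimal user-mode sketch (mine): two unrelated kernel objects, one generic wait call, which is roughly what poll()/epoll across heterogeneous FDs gives you on Linux:

    /* Sketch: NT handles as generic waitable objects. */
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* Two different kinds of kernel object, both usable with the same wait API. */
        HANDLE ev = CreateEvent(NULL, FALSE, FALSE, NULL);     /* auto-reset event */
        HANDLE tm = CreateWaitableTimer(NULL, TRUE, NULL);     /* manual-reset timer */

        LARGE_INTEGER due;
        due.QuadPart = -10 * 1000 * 1000;   /* relative 1 second, in 100 ns units */
        SetWaitableTimer(tm, &due, 0, NULL, NULL, FALSE);

        HANDLE objs[2] = { ev, tm };
        DWORD r = WaitForMultipleObjects(2, objs, FALSE, INFINITE);
        printf("object %lu signaled\n", (unsigned long)(r - WAIT_OBJECT_0));

        CloseHandle(ev);
        CloseHandle(tm);
        return 0;
    }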

So the naming distinction is quite arbitrary.


> You are right, but those points are details.

Uh, no, they are very crucial details. For example, it means the difference between letting root delete /dev/null like any other "file" on Linux, versus an admin not being able to delete \Device\Null on Windows because it isn't a "file". The nonsense Linux lets you do because it treats everything like a "file" is the problem here. It's not a naming issue.


Linux has plenty of file descriptor types that do not correspond to a path, along with virtual file systems where files cannot be deleted...

Your example of device files is hardly universal, and the way it works is useful.
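
For example, descriptors returned by eventfd() or memfd_create() never correspond to any path at all, yet they are read, written, and polled like any other FD; a minimal sketch:

    /* Sketch: Linux FDs with no path behind them. */
    #define _GNU_SOURCE
    #include <stdio.h>
    #include <stdint.h>
    #include <sys/eventfd.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        /* A kernel counter object: no file, no path, just an FD. */
        int efd = eventfd(0, 0);
        uint64_t v = 1;
        write(efd, &v, sizeof v);   /* increment the counter */
        read(efd, &v, sizeof v);    /* read it back and reset it */

        /* An anonymous memory-backed "file": also no path. */
        int mfd = memfd_create("scratch", 0);
        write(mfd, "hello", 5);

        printf("eventfd=%d memfd=%d, counter read back: %llu\n",
               efd, mfd, (unsigned long long)v);
        close(efd);
        close(mfd);
        return 0;
    }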


And to give you another example, look at how many people bricked their computers because Linux decided EFI variables were files. You can blame the vendors all you want, but the reality is this would not have happened (and, mind you, it would have been INTUITIVE to every damn user) if the OS were sane and just let people use efibootmgr instead of treating every bit and its mother as files. Just because you have a gun doesn't mean you HAVE to try shooting yourself, you know? That holds even if the manufacturer was supposed to have put a safety lock on the trigger, by the way. Sometimes some things just don't make sense, if that makes sense.


How many people really did this, compared to e.g. Windows users attacked by CryptoLocker?


"The way it works is useful?"! When was the last time you found it useful to delete something like /dev/null via command line? And how many poor people do you think have screwed up their systems and had to reboot because they deleted not-really-files by accident? Do you think the two numbers are even comparable if the first one isn't outright zero?

It literally doesn't make any sense whatsoever for many of these devices to behave like physical files, e.g. be deletable or whatnot. Yes there is an exception to every nonsense like this, so yes, some devices do make sense as files, but you completely miss the point when you ignore the widespread nonsense and justify it with the exceptions.


Your complaint is with the semantics of the particular file. There's no reason why files in /dev need be deletable using unlink. That's an historical artifact, and one that's being rectified.

"Everything is a file" is about reducing most operations to 4 very abstract operations--open, read, write, and close. The latter three take handles, and it's only the former that takes a path. But you're conflating the details of the underlying filesystem implementation with the relevant abstraction--being a file implies that it's part of an easily addressable, hierarchical namespace. Being a file doesn't imply it needs to be deletable. unlink/remove is not part of the core abstraction. But they are hints that the abstraction is a little more leaky than people let on. Instantiating and destroying the addressable character of a file poses difficult questions regarding what the proper semantics should be, though historically they're deletable simply because using major/minor device nodes sitting atop the regular persistent storage filesystem was the simplest and most obvious implementation at the time.


Hm, I was thinking more about open FDs, not just special file entries on the FS. Well, I agree with you: it's a little weird and in some cases dangerous to have the char/block devices in the FS, and it has already been worked around multiple times in different ways, sometimes even with several different workarounds at once. NT is better on that point. But not once the "files" are open and you've got FDs.


> the IRQL model is shit. It does not make any sense when you consider what really happens,

On the contrary. It's only when one considers what really happens, especially in the local APIC world as opposed to the old 8259 world, that what the model actually is finally makes sense.

* http://homepage.ntlworld.com./jonathan.deboynepollard/FGA/ir...


I don't care about the 8259, and I don't see why anybody should care except PIC driver programmers. It's doubtful anybody designing NT cared, given that the very first architecture it was designed against was not a PC, and this is typically the kind of thing that goes through the HAL.

IRQL is a purely software abstraction, in the same way top/bottom halves and the various kernel and user threads are under Linux (hey, in some configurations the Linux kernel even transparently switches to threaded IRQs for the handlers; there is no close relationship with any interrupt controller at that point...). IRQL is shit because most of the arbitrary levels it provides cannot be used to distinguish anything continuously from an application point of view (application in the historical meaning, no "App" bullshit intended here), even in seemingly continuous areas (DIRQL), so there is no value in providing so many levels with nothing really distinguishing between them -- or, at some level transitions, too many completely different things changing at once. It's even highly misleading, to the point that the article you link is needed (and it does not even provide the whole picture). I see potential for misleading people with PIC knowledge, people used to real-time OSes (if you try to organize application priority based on IRQL, you will fail miserably), people with a background in other kernels -- well, pretty much everybody.
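
For contrast, a minimal sketch of the Linux top/bottom-half pattern referred to above (the IRQ number and cookie are placeholders; a real driver gets them from its bus):

    /* Sketch: "top half" acks the hardware, "bottom half" runs later in
     * process context via a workqueue. IRQ number and cookie are placeholders. */
    #include <linux/interrupt.h>
    #include <linux/module.h>
    #include <linux/workqueue.h>

    #define DEMO_IRQ 42   /* placeholder IRQ line */

    static void demo_bottom_half(struct work_struct *work)
    {
        /* Heavy lifting: may sleep, take mutexes, allocate with GFP_KERNEL, ... */
        pr_info("demo: bottom half ran\n");
    }
    static DECLARE_WORK(demo_work, demo_bottom_half);

    static irqreturn_t demo_top_half(int irq, void *dev_id)
    {
        /* Do as little as possible here; defer the rest. */
        schedule_work(&demo_work);
        return IRQ_HANDLED;
    }

    static int __init demo_init(void)
    {
        return request_irq(DEMO_IRQ, demo_top_half, IRQF_SHARED,
                           "demo", &demo_work /* placeholder cookie */);
    }

    static void __exit demo_exit(void)
    {
        free_irq(DEMO_IRQ, &demo_work);
        flush_work(&demo_work);
    }

    module_init(demo_init);
    module_exit(demo_exit);
    MODULE_LICENSE("GPL");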


> Having a better officially supported API to talk to the NT kernel can only be a good thing, from my point of view.

That's particularly interesting now that SQL Server has been ported to Linux. It would be funny if they ended up using the Linux subsystem on Windows too.

Although I suspect SQL Server already talks to the kernel directly.


No, they do have a sophisticated user-mode library, but it uses only public kernel APIs. That user-mode library also helped them migrate SQL Server to Linux relatively painlessly.


> Having a better officially supported API to talk to the NT kernel can only be a good thing, from my point of view.

This is what I am looking forward to with WinRT, which is why Rust should make it as easy as C++/CX and C# to use those APIs. :)


Well, I've personally seen Microsoft employees themselves complaining about the state of NT, saying it has "fallen behind Linux".

An old HN commenter (mrb) once wrote:

> There is not much discussion about Windows internals, not only because they are not shared, but also because quite frankly the Windows kernel evolves slower than the Linux kernel in terms of new algorithms implemented. For example it is almost certain that Microsoft never tested I/O schedulers, process schedulers, filesystem optimizations, TCP/IP stack tweaks for wireless networks, etc, as much as the Linux community did. One can tell just by seeing the sheer amount of intense competition and interest amongst Linux kernel developers to research all these areas.

> The net result of that is a generally acknowledged fact that Windows is slower than Linux when running complex workloads that push network/disk/cpu scheduling to its limit: https://news.ycombinator.com/item?id=3368771 A really concrete and technical example is the network throughput in Windows Vista, which is degraded when playing audio! https://blogs.technet.microsoft.com/markrussinovich/2007/08/...

> Note: my post may sound like I am freely bashing Windows, but I am not. This is the cold hard truth. Countless multi-platform developers will attest to this, me included. I can't even remember the number of times I have written a multi-platform program in C or Java that always runs slower on Windows than on Linux, across dozens of different versions of Windows and Linux. The last time I troubleshot a Windows performance issue, I found that the MFT of an NTFS filesystem was fragmented; I mention this because I am generally regarded as the one guy in the company who can troubleshoot any issue, yet I acknowledge I can almost never get Windows to perform as well as, or better than, Linux when there is a performance discrepancy in the first place.



