
A rant about some of the nice things in the NT kernel - ComputerGuru
https://neosmart.net/blog/2008/shipping-seven-is-a-fraud/comment-page-1/#comment-160264
======
mjg59
In the time since that comment was written (just under 5 years), Linux
replaced most graphics drivers with ones based on in-kernel modesetting and
the DRI2 interface. This was done without breaking applications. So, the
assertion that changing the video driver model would be disruptive is kind of
disproven by reality. I guess Linux has a liberal bias.

That's not to say that NT doesn't have benefits. Linux is still catching up
with implementing some features that Windows has had for some time (and
multiple GPU support is actually a great example of that), but so far there's
no real evidence that these disparities are because of architectural
differences.

Really, a worthwhile comparative analysis requires someone who has a deep
understanding of the kernels they're comparing. I'm pretty familiar with Linux
but know almost nothing about NT, so I'm a bad choice. But "Take the recent
Linux arguments about the HardLocks code that is giving Linux trouble with
multi-processor granularity"? That's not someone who knows Linux, otherwise
they'd be using words that I recognise. "You call BSD a kernel, it technically
is a set of APIs"? That's not someone who knows BSD either. This isn't an in-
depth analysis of benefits that one kernel has over another. It's a handwavy
justification of some NT design decisions without any reasoned comparison to
Linux design decisions in the same area.

I'd love to read an in-depth comparison of the benefits of NT over Linux. This
isn't it. Is there one?

~~~
quotemstr
When I first started at Microsoft, I once told a coworker something like,
"Hey, I've heard NT revolutionarily elegant, but so far, I've just seen
features that, while fairly nicely implemented, are universal across operating
systems." He replied by reminding me that other operating systems have caught
up, while NT was pretty close to its present design even in the early 90s ---
it was like a modern operating system had travelled through time back to 1992.
It _was_ revolutionary.

(I spoke to someone else who was utterly amazed that NT allowed driver
development without requiring that the whole kernel be relinked from object
files.)

~~~
lolcraft
IDK, from what I've just read NT looks like someone took Minix (released in
1992) and worked on it to make it useful. Plus adding a subsystem layer.

Frankly, I'm still more impressed by a monolithic kernel that did it better,
and earlier: Plan 9 (released in 1991). What use is it for NT to be
object-oriented if it can't share its objects over a network the way Plan 9
does?

~~~
pjmlp
> IDK, from what I've just read NT looks like someone took Minix (released in
> 1992) and worked on it to make it useful. Plus adding a subsystem layer.

VMS.

> Frankly, I'm still more impressed by a monolithic kernel that did it better,
> and earlier: Plan 9 (released in 1991). What use is it for NT to be
> object-oriented if it can't share its objects over a network the way Plan 9
> does?

Sadly, the UNIX crowd decided it was not worth adopting.

------
nailer
The article (not the comment linked to, which was quite informative) also has
this:

> Anyone that’s ever manually compiled a Linux kernel knows this. You can’t
> strip ext3 support from the kernel after it’s already built any more than
> you can add Reiser4 support to the kernel without re-building it.

Even 5 years ago (and at least 10 years ago) you could remove ext3 and add
another filesystem without rebuilding the kernel. I know Red Hat at least
included a helpful Makefile for precisely this purpose.

Rebuilding the entire kernel in order to compile a single kernel module is a
well-known habit of early Linux users, following advice from pre-2.x kernel
days, when loadable modules didn't exist, that seems to have stuck around in
the collective mind of the Internet.
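
For anyone who hasn't built one: a loadable module is just an object the
running kernel links in on demand; the kernel image itself is never relinked.
A minimal sketch, assuming the classic Linux module API (the build glue varies
by kernel version):

    #include <linux/init.h>
    #include <linux/module.h>
    #include <linux/kernel.h>

    /* Runs when the module is loaded with insmod/modprobe. */
    static int __init hello_init(void)
    {
        printk(KERN_INFO "hello: linked into the running kernel\n");
        return 0;
    }

    /* Runs when the module is removed with rmmod. */
    static void __exit hello_exit(void)
    {
        printk(KERN_INFO "hello: unloaded\n");
    }

    module_init(hello_init);
    module_exit(hello_exit);
    MODULE_LICENSE("GPL");

Filesystems like ext3 ship as exactly this kind of module on most distros,
which is why you can add or remove them without touching the kernel image.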

~~~
ithkuil
Before distributions switched to initramfs, you had to have at least one
filesystem compiled in, for bootstrap reasons, so that the kernel could mount
the initrd.

I don't remember exactly when initramfs was introduced, but I remember it was
a relatively long time before major distros switched to it. TFA is old enough
that this might have been the case back then.

~~~
nailer
10 years ago we still had mkinitrd in RH (source: I'm a programmer now, but in
2003 I worked for Red Hat as an instructor for these topics). Initial RAM
disks still had kernel modules in their gzipped filesystem and were
rebuildable without rebuilding the kernel.

------
aaronbrethorst
> “Soma”

It's weird to me that the original blog post put Soma's name in "scare
quotes." Soma is the Corporate VP of Developer Division at Microsoft, which
means he's in charge of—among other things—Visual Studio, .NET Framework,
ASP.NET, the now-dead Expression Studio, and I'm sure a few other things.

He goes by Soma because his full name is Sivaramakichenane Somasegar (really,
I looked it up in Headtrax once and remember it, for whatever reason, seven
years later). And, let's be honest, that is _really hard to spell._

~~~
quotemstr
I know a few mononymed developers myself. What's strange is that they're all
_awesome_.

------
United857
From a developer's perspective, the main problem facing Windows is not the
kernel itself -- despite common misconceptions to the contrary. For example,
OS X is built on a BSD which has its roots in '60s and '70s OS design, just
like the VMS roots of WinNT.

OS X didn't change the world by bringing some great new underlying
architecture to the table. In fact, their kernel and filesystem are arguably
getting long in the tooth. The value that OS X brought to the table was the
fantastic Carbon and Cocoa development platforms. And they have continued to
execute and iterate on these platforms, providing the "Core" series of APIs
(CoreGraphics, CoreAnimation, CoreAudio, etc.) to make certain HW services
more accessible.

There's very little cool stuff to be gained in the Windows world by developing
a new kernel from scratch. A quantum leap would not solve MS's problem. The
problem is the platform. What's really dead and bloated is the Win32
subsystem. The kernel doesn't need major tweaking. In fact, the NT kernel was
designed from the beginning such that it could easily run the old busted Win32
subsystem alongside a new subsystem without needing to resort to expensive
virtualization (as the original article mentions).

Unfortunately, the way Microsoft is built today, it has a fatal organizational
flaw that prevents it from creating the next great Windows platform. The
platform/dev tools team and the OS team are in completely different business
groups within the company. The platform team develops the wonderful .NET
platform for small/medium applications and server apps while the OS team keeps
trudging along with Win32. Managed languages have their place, but they have
yet to gain traction with any top-shelf, large-scale Windows client
application vendors (Adobe, even Microsoft Office itself, etc.). Major client
application development still relies on unmanaged APIs, and IMHO the Windows
unmanaged APIs are arguably the worst (viable) development platform available
today.

What Windows needs is a new subsystem/development platform to break with
Win32, providing simplified, extensible unmanaged application development,
with modern easy-to-use abstractions for hardware services such as graphics,
data, audio and networking.

This is starting to come to fruition with WinRT, but the inertia in large
scale apps is unbelievable.

------
coldtea
The neosmart.net post was totally ignorant.

And he dismissed "Shipping Seven" with hysterical handwaving and BS pedantic
arguments that were just plain wrong. He couldn't understand what SKUs were,
he thought Seven referred to Windows as merely a kernel, like what you build
on Linux, and he made other BS comments that only apply to monolithic kernels,
and only if you compile extensions in instead of loading them as modules...

------
ck2
Are there any old-schoolers around here who remember IBM's OS/2 and how it
could have changed the PC world completely?

<http://en.wikipedia.org/wiki/OS/2>

Its design was supposedly far better than NT's.

(it initially came on 50 5.25" disks; that was "fun" to install)

I think UPS was the largest user/developer.

You know Parallels, the company that made Virtuozzo (and OpenVZ)? Well, it was
initially formed to make virtual environments to run OS/2.

~~~
captaincrowbar
I did a lot of development on OS/2 applications in the mid 90s, mostly on the
Warp 3.0 version. Eventually we switched to Windows NT4 once it became clear
that OS/2 had had its fifteen minutes.

I quite liked it at the time, but to be honest I think the wistful "could have
changed the PC world completely" stuff I sometimes hear is just rose-tinted
nostalgia. It was a very good OS for its time; its main competition then was
Windows 95, and there was certainly no contest there. But the NT kernel was a
different matter, especially after it got a saner UI in NT4. There were no
huge advantages to one or the other there (with one exception, see below).
Compared to very different modern OSes such as OSX or Linux, OS/2 and NT4 were
close siblings.

OS/2's one real Achilles heel, which gave us endless trouble, was the
synchronous input queue, shared by all programs that had a GUI (including the
OS desktop). The upshot of this was that, if a user-facing program crashed, it
was very likely to freeze up the OS and require a hard reboot. When we
switched to NT4, the vast improvement in reliable uptime was a breath of fresh
air (if nothing to write home about by modern standards). I gather they
partially fixed this in OS/2 4.0, but by then the writing was on the wall.
OS/2 faded away before the rise of modern malware had really hit its stride,
but I suspect the SIQ problem would also have led to all sorts of security
issues. For example, look up "shatter attack"; that was bad enough on Windows,
but I'm pretty sure OS/2 would have been even more vulnerable to that sort of
technique.
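
For contrast, NT's fix shows up even in the most basic Win32 code. A minimal
sketch of the standard per-thread message pump (nothing OS/2-specific here;
it's just to illustrate the isolation):

    #include <windows.h>

    /* Standard Win32 message pump. NT demultiplexes raw input into a
       queue per GUI thread, so if a handler hangs inside
       DispatchMessage, only this thread's windows freeze; input keeps
       flowing to every other application. Under OS/2's synchronous
       input queue, unretrieved input held up the single system-wide
       queue, stalling the whole desktop. */
    void pump_messages(void)
    {
        MSG msg;
        while (GetMessage(&msg, NULL, 0, 0) > 0) {
            TranslateMessage(&msg);
            DispatchMessage(&msg);
        }
    }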

While I certainly wouldn't claim that son-of-NT's victory over OS/2 had
anything to do with its technical merits, I do think that, at least between
those two lines of development, the (slightly) better OS won.

~~~
chiph
Former OS/2 developer here (I still have an OS/2 t-shirt around somewhere).
While the single event queue was a problem, at the time the major competitor
was Windows 3.x, which also had cooperative eventing for the UI. NT changed
that for the better.

What OS/2 had was multi-threading. If you had to do some operation that might
take longer than 1/10th of a second, the guidance from IBM was to put it on
its own thread. So by necessity, OS/2 developers became expert
multi-threaders.
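
In rough Win32-style C, the rule of thumb looks like this (the function names
are made up for illustration, and OS/2 itself used DosCreateThread rather than
the Win32 C runtime call):

    #include <windows.h>
    #include <process.h>

    /* Hypothetical slow job, e.g. fetching a newsgroup's headers. */
    static unsigned __stdcall fetch_group(void *arg)
    {
        /* ... network I/O happens here, off the UI thread ... */
        return 0;
    }

    /* Called from a menu handler: anything that might take longer than
       a fraction of a second goes on its own thread, so the message
       loop keeps servicing the UI. */
    void on_fetch_clicked(void)
    {
        uintptr_t h = _beginthreadex(NULL, 0, fetch_group, NULL, 0, NULL);
        if (h)
            CloseHandle((HANDLE)h); /* fire-and-forget: drop our handle */
    }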

I think it was Stardock that had an excellent newsgroup reader for OS/2 -- it
was multithreaded, so you could queue-up several requests for your newsgroups
(alt.binaries.*, {ahem}) and the UI remained responsive and you could go do
other things, like fire up GoldenCompass for your Compuserve fix. ;)

~~~
mackwic
As a CS student, I must say that I'm very impressed: a kernel developed in the
era when while(true); was absolutely forbidden because of the lack of
preemption in Windows. It seems like a piece of history to me.

Would you be willing to tell us more about what your team did, and what the
goals and concerns of your system were?

~~~
chiph
Also - a book that I haven't seen mentioned in this thread yet is:

[http://www.amazon.com/Showstopper-Breakneck-Windows-
Generati...](http://www.amazon.com/Showstopper-Breakneck-Windows-Generation-
Microsoft/dp/0759285780/)

I haven't seen this edition, but if the comments are correct, try to find an
original from 1994 (it has a shiny cover with big red letters) to avoid the
printing errors.

It's written for a general audience, but still has some technical details in
it. It's more interesting from a business and personality standpoint - at that
level of software development, doing things like kicking holes in your office
walls becomes a little more acceptable (that would have gotten me fired at any
job I've held).

~~~
mackwic
Thanks for the feedback and the book. I'll try to find an original one.

------
eschaton
"[M]ulti GPUs may not have existed when XP was released"

The NT kernel had been shipping for about eight years by the time XP was
released. Windows NT 3.5 and earlier didn't have any kernel-level graphics
(though they did have kernel-level drivers); kernel-level GDI came in NT 4
(1996) for performance.

Windows NT was also used by some graphics workstation vendors like Intergraph
and SGI, who I presume supported multiple GPUs, as most workstation vendors
did.

Macs also had actual GPUs (not just frame buffers, but real QuickDraw
accelerators) and supported multiple GPUs in the late 1980s - on System 6!

All it really takes is a reasonable display abstraction model/API, and an
ability for that to interface reasonably with hardware drivers. Seems Linux
could've handled that too - and it wouldn't surprise me if it did, at least
theoretically.

I didn't follow Linux's initial development as 386BSD had already come out. I
do remember the first accelerated X11 for 386BSD supporting some card with a
number like 911. I don't recall that it required explicit kernel support,
other than perhaps a bunch of ioctl calls or something like that.

------
michielvoo
Given that the source code of the NT kernel is apparently available for
academic purposes, I wonder if there are university courses where the NT
kernel is used as the subject, and students are exposed to this code. Does
anyone have that experience?

~~~
quotemstr
At least one course uses NT.

[http://www2.cs.uregina.ca/~hamilton/courses/330/notes/memory...](http://www2.cs.uregina.ca/~hamilton/courses/330/notes/memory/page_replacement.html)

------
vxxzy
"Anyone that’s ever manually compiled a Linux kernel knows this. You can’t
strip ext3 support from the kernel after it’s already built any more than you
can add Reiser4 support to the kernel without re-building it."

Ummm.. Kernel modules anyone?

Or do I have something wrong here? Have I taken something out of context?

------
gnu8
This article is just wasted bytes since no one can work with the code or even
see it. The author may as well be talking about a fictional kernel. Closed
source isn't worthless because of its terrible code quality, closed source is
worthless because it is closed.

~~~
wazoox
The guy mentions that it's still possible for academics to get their hands on
NT sources. I suppose, however, that it's forbidden to modify or compile them.

At some point in time, NT (3.1) was apparently shipped in source form: there
was no binary distribution for the first 4- and 8-processor systems back then,
and you had to compile the kernel yourself on the target machine (after
installing the vanilla NT first).

~~~
hga
Xen 1.x development included a paravirtualized Windows XP using this program
([http://en.wikipedia.org/wiki/Xen#Microsoft_Windows_systems_a...](http://en.wikipedia.org/wiki/Xen#Microsoft_Windows_systems_as_guests)):

" _During the development of Xen 1.x, Microsoft Research, along with the
University of Cambridge Operating System group, developed a port of Windows XP
to Xen — made possible by Microsoft's Academic Licensing Program. The terms of
this license do not allow the publication of this port, although documentation
of the experience appears in the original Xen SOSP paper._ "

------
wfunction
> MinWin

MinWin (affecting Kernel32.dll) has nothing to do with the NT kernel.

~~~
quotemstr
You have no idea what you're talking about.

~~~
wfunction
> You have no idea what you're talking about.

Care to elaborate why, instead of just accusing?

FYI, MinWin is completely a user-mode library change; it has nothing to do
with the kernel. Even Wikipedia explains the misconception:
<https://en.wikipedia.org/wiki/MinWin#cite_ref-14>

On the other hand, there _is_ such a thing called "MinKernel", which is
related to the kernel itself: [http://cdn-
static.zdnet.com/i/r/story/70/00/013290/minwinmin...](http://cdn-
static.zdnet.com/i/r/story/70/00/013290/minwinminkernel-620x428.png)

~~~
quotemstr
I've already said too much. I just want to emphasize that the whole thing is
quite subtle, and it's hard to make sense of it given only public information.
What you should focus on is the documented ABI available on each OS and
platform since that's the real place where what we do affects what you do.

~~~
wfunction
> I just want to emphasize that the whole thing is quite subtle, and it's hard
> to make sense of it given only public information.

Well, _I_ made sense of it using only public information, and yet you claimed
"You have no idea what you're talking about".

Obviously that means _you_ must know better than me, so care to share where I
made a mistake?

~~~
meanguy
For starters, it's ntoskrnl.exe. Dig out the original Windows NT stuff from
the early 90s and you can infer Dave Cutler and team's original vision. The
fact that Microsoft made hundreds of billions of dollars off it, layering all
sorts of legacy chaos atop it, obscures the jewel at the core.

------
ulpis
The NT kernel represents all strings as UTF-16 internally AND in the syscall
API.

This single fact is enough to make it clear that it's a piece of shit.

~~~
nbevans
There's nothing wrong with UTF-16.

~~~
Dylan16807
Yes there is. It's variable-width without being backwards compatible with
ASCII. Even worse, it's variable-width but lets people assume it's
fixed-width. UCS-2 was okay. UTF-16 is a hack.
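
To make that concrete, a small sketch (the surrogate-pair values are standard
UTF-16; the counting loop just shows what UCS-2-era code gets wrong):

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        /* U+1F600 encodes in UTF-16 as the surrogate pair
           0xD83D 0xDE00: one code point, two 16-bit code units.
           Anything written under the old UCS-2 assumption of one unit
           per character miscounts (or worse, splits) it. */
        uint16_t s[] = { 0xD83D, 0xDE00, 0 };

        size_t units = 0;
        while (s[units])
            units++;
        printf("code units: %zu, code points: 1\n", units);
        return 0;
    }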

