Windows Kernel Architecture Internals (2010) [pdf] (microsoft.com)
105 points by doener on Dec 21, 2015 | 16 comments



Anyone know if there's work being done on a 7th edition of Windows Internals? I read Inside Windows 2000, which was totally awesome, but not that much changed at the architectural level between 2000 and Windows 7 (which is covered in the 6th edition), at least not to my untrained eye. Windows 8 and 10 might have some radically new stuff, judging from the range of devices they target and their efforts to speed things up on slower hardware (reversing the trend of ever more bloated Windows versions up to Vista).

Now I've not really used Windows much in the past 10 years. I switched to OSX and then to Ubuntu (when the desktop experience became more bearable, and the license issues with OSX more important to me). But reading about the NT kernel feels like reading about how a kernel should be designed. It really is that much cleaner in my opinion than anything I ever read on OSX or Linux. If they ever did open source it, I'd certainly join a subculture of before-it-got-hip Windows kernel lovers with XMonad as a window manager ;)


Windows Internals 7th Edition Book 1 is already available[0], and there will be 2 more books[1]. Russinovich and Solomon stepped down a while ago, according to that forum thread, but the new book lists them as authors because Catlin and Hanrahan built on the existing material.

[0] http://www.amazon.com/Windows-Internals-Book-User-Edition/dp...

[1] http://forum.sysinternals.com/when-is-the-7th-edition-book-f...


I have always heard that the Windows kernel is actually pretty well designed, in particular the way it handles concurrency and asynchronous I/O. Unfortunately I have not found (euphemism for: I have been lazy) an easily consumable description of how it differs from the Unix/Linux APIs for concurrency and asynchrony, in what ways it is better, with some code examples thrown in to illustrate the differences.

If anyone would be so kind to humor me with such a comparison or pointer...


Well, you have to be careful about what you're comparing. It would be rather unfair to compare the original UNIX design with a modern design like NT. UNIX did not have async I/O; in its original design every syscall blocked. I think Linux added async I/O quite recently (2.6?). I'm not really a kernel dev, but I have had to work on systems-level stuff quite often on both OSs, and from what I've seen NT has a much richer I/O model. NT's class/device/filter driver model makes writing driver code less of a chore IMHO. Also, async I/O is supported on almost all I/O devices; I believe it's only network and disk on Linux.
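
Not authoritative, just a minimal sketch (made-up filename, untuned buffer) of what overlapped I/O looks like from user mode on NT. The same ReadFile-plus-OVERLAPPED pattern works on files, pipes, and sockets, which is the "almost all devices" point:

    /* Minimal overlapped-read sketch for Win32 (illustrative only). */
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* FILE_FLAG_OVERLAPPED asks the kernel for asynchronous I/O. */
        HANDLE h = CreateFileA("example.txt", GENERIC_READ, FILE_SHARE_READ,
                               NULL, OPEN_EXISTING, FILE_FLAG_OVERLAPPED, NULL);
        if (h == INVALID_HANDLE_VALUE) return 1;

        char buf[4096];
        OVERLAPPED ov = {0};                       /* read at offset 0 */
        ov.hEvent = CreateEventA(NULL, TRUE, FALSE, NULL);

        /* Returns immediately; the kernel completes the read later. */
        if (!ReadFile(h, buf, sizeof buf, NULL, &ov) &&
            GetLastError() != ERROR_IO_PENDING) return 1;

        /* ... overlap other work here ... */

        DWORD bytes;
        GetOverlappedResult(h, &ov, &bytes, TRUE); /* block until done */
        printf("read %lu bytes\n", (unsigned long)bytes);

        CloseHandle(ov.hEvent);
        CloseHandle(h);
        return 0;
    }

The rough Linux counterparts would be POSIX aio or io_submit(2), which at the time were mostly limited to regular files and block devices.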


UNIX intentionally deviated from monolithic systems that had many useful features because those features couldn't run on its hardware. Then it kept that design even when hardware got better, with some features added back in band-aid ways and others cleanly. Windows NT's design came from the experience and strategies of OpenVMS's designer. No wonder a cleaner monolithic design came out of it. Took them a while to get the reliability near their ancestor's, though. Haha.

Meanwhile, QNX and Minix 3 took a more UNIX-philosophy approach than UNIX itself by decomposing the whole thing into communicating components. QNX did it fast and ultra-reliable, where Minix 3 is at least reliable. Both are easier to update thanks to the architecture, even live updates. They each did this with little manpower and few resources compared to what went into UNIX.

My direct comparison for UNIX is with them. It shows the UNIX architecture is fundamentally flawed in terms of flexibility, reliability, security, determinism, and ease of maintenance. I'd compare the Windows kernel with other monoliths on servers and desktops. Its usability and stability are better than most that are still around, with other attributes hit and miss (or comparable). UNIX desktops have gotten much better than before but still have too many WTF moments. And more dependency hell than I used to deal with on Windows. (sighs)


Dependency hell has very little to do with kernel design - it's entirely a user mode problem.

Neither Linux nor Windows NT in their modern iterations is a true monolithic kernel - e.g. Linux has FUSE for filesystems in user space plus loadable and unloadable kernel modules, while Windows has a user-mode driver framework which is increasingly used post-Vista.
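
To make the loadable-modules point concrete, the canonical hello-world module is tiny (a sketch of mine, not from the thread; builds against any recent kernel tree):

    /* hello.c - minimal loadable kernel module sketch */
    #include <linux/init.h>
    #include <linux/kernel.h>
    #include <linux/module.h>

    MODULE_LICENSE("GPL");

    static int __init hello_init(void)
    {
        pr_info("hello: loaded\n");   /* runs on insmod */
        return 0;
    }

    static void __exit hello_exit(void)
    {
        pr_info("hello: unloaded\n"); /* runs on rmmod */
    }

    module_init(hello_init);
    module_exit(hello_exit);

Functionality moves in and out of the running kernel without a reboot, which is exactly the property a strict monolith lacks.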

Neither QNX nor Minix is as feature-rich as Linux or Windows NT - almost all development on Linux and Windows is driver development, not core kernel work (e.g. scheduler, I/O system, memory manager).

Windows NT remains a more modern kernel (in terms of its internally consistent object model and basic kernel primitives) than either BSD or Linux, but whether it's absolutely better is a harder argument.
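
One way to see that consistent object model (my own hedged sketch, not anything from the thread): events, processes, threads, and semaphores all live behind handles in the same object manager, so a single wait primitive covers them all.

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* Two different object types: an event and a process. */
        HANDLE ev = CreateEventA(NULL, TRUE, FALSE, NULL);

        STARTUPINFOA si = { sizeof si };
        PROCESS_INFORMATION pi;
        char cmd[] = "cmd /c exit";   /* arbitrary short-lived child */
        if (!CreateProcessA(NULL, cmd, NULL, NULL, FALSE, 0,
                            NULL, NULL, &si, &pi)) return 1;

        /* The same call waits on both; the process signals when it exits. */
        HANDLE objs[2] = { ev, pi.hProcess };
        DWORD  which   = WaitForMultipleObjects(2, objs, FALSE, INFINITE);
        printf("object %lu signalled\n", (unsigned long)(which - WAIT_OBJECT_0));

        CloseHandle(pi.hThread);
        CloseHandle(pi.hProcess);
        CloseHandle(ev);
        return 0;
    }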

In general the argument regarding micro/monolithic kernels is old hat: in practice we can make extraordinarily reliable monolithic kernels, while also avoiding a lot of the pain associated with microkernels - e.g. minimising context switch time. The reliability, security, flexibility argument is largely made redundant by the man-power dropped into modern kernels - better to use Linux because although it's theoretically inferior it has 1000s of days of development thrown at it.

Kernel development also appears to be transitioning to using safety in compilers to improve reliability, but that's been a research topic for decades - the answer to 'can we build a sufficiently smart compiler' has been 'maybe' for a long time - though LLVM and Rust's compiler show a way forward.


"Dependency hell has very little to do with kernel design - it's entirely a user mode problem."

The comparison was desktop Windows to desktop Linux, where it matters. The Linux software managers work around this problem most of the time but can fail hard in ways that require a different interface and level of knowledge. With Windows, I usually just uninstalled and reinstalled a specific component. All done.

"Neither QNX or Minix are as feature full as Linux or Windows NT - almost all development on Linux and Windows is in driver development, not core kernel work (e.g. scheduler, IO system, memory manager)."

"n general the argument regarding micro/monolithic kernels is old hat in practice we can make extra-ordinarily reliable monolithic kernels, while we can also avoid a lot of the pain associated with microkernels - e.g. minimising context switch time. "

Again, within my claim, this has less relevance. The work that went into QNX and Minix led to systems with better attributes (esp. reliability & maintainability) than UNIX's at a similar level of labor. That was almost totally due to their architecture: variants of microkernel design. Improvements to microkernel work from CompSci & industry killed a lot of their disadvantages. The kludgey ways of emulating such things in monoliths take a ton of work to get right and still don't go as far as the real thing. So it's a worthwhile comparison.

You can band-aid up bad architectures all day until they seem to get by well. Doesn't change the fact that building on good architectures gets better ROI in terms of attributes they provide.

"Windows NT remains a more modern kernel (in terms of internal consistent object model, basic kernel primitives) than either BSD or Linux, but whether it's absolutely better is a harder argument."

The only way I can see to make that assessment is to look at how easily amateurs produce readable, maintainable, consistent, efficient code with each. Not sure how that would turn out.

"The reliability, security, flexibility argument is largely made redundant by man-power dropped into modern kernels - better use Linux because although it's theoretically inferior it has 1000s of days of development thrown at it."

I've lost too much data to Linux desktops, the mainstream ones, to ever believe that nonsense. I'd have switched over to a more resilient architecture the second I had the chance, had one been developed. Unfortunately, clean-slating an OS is a ton of work even with a good starting point: Solaris 10 cost almost $200 million. It's mainly economics that keeps them going: it's cheaper and easier to maintain backward compatibility and use good-enough features, with band-aids for availability & reliability. I use a Linux desktop (with plenty of backups!) for the same reason.

Note that this doesn't counter the technical arguments against using such architectures at all. New, clean-slate projects should use what got better results. Projects focused on the practical... something usable now... should build on the cleanest prior OSs.

"Kernel development also appears to be transitioning to using safety in compilers to improve reliability, but that's been a research topic for decades - the answer to 'can we build a sufficiently smart compiler' has been 'maybe' for a long time - though LLVM and Rust's compiler show a way forwards."

It was more or less answered with things like Ada, where common issues were countered by language and compiler design. It was even used in high-assurance OSs like the Army Secure OS (ASOS). Tools like Cyclone, SPARK, Astrée, SVA-OS with SAFECode, and SoftBound + CETS take it even further to kill even more issues. Hell, even Burroughs in 1961 caught out-of-bounds, stack, and invalid-argument issues at the hardware level, with acceptable performance, for an OS written in a high-level ALGOL variant. It's not a lack of evidence that better language, architecture, or hardware design leads to more robustness: it's a lack of adoption by the mainstream, like always. They'll ignore most developments, no matter how little work or performance overhead is involved, if it's not the norm.

As you said, LLVM and Rust are bringing it back into people's consciousness. A Very Good Thing. They're already experimenting with verified compilation, safe OSs, etc. Maybe they'll rediscover some of that 60s or 80s wisdom. And maybe, probably, learn great new things along the way.


Well, the thing is, the UNIX philosophy was not originally intended to guide kernel design decisions. It was meant more for tools that reside in the user environment and affect end-user scenarios. The interface layer exposed to the end user was the primary concern.


It's a fair point. The alternative was the design strategy shown in Dijkstra's THE and Hansen's concurrency-safe OSs: tiny kernel, small modules, clean interfaces, a type/memory-safe language as much as possible, and straightforward linking, on both the user and kernel sides. That still makes more sense than UNIX, even though the UNIX philosophy for tools was a step up from the minicomputer OSs of the time.


Windows Internals is the canonical resource.

(I bet you would enjoy the concurrency and I/O sections from The Be Book and The Design and Implementation of the FreeBSD Operating System too.)


Specifically as described here:

https://users.cs.jmu.edu/abzugcx/Public/Student-Produced-Ter...

The benaphores were a brilliant idea. Plus, the general threading approach avoided issues with direct pointer manipulation. Add the microkernel design, memory protection, and threading all the way down, and you get quite a unique kernel with incredible performance on old hardware. I'd be curious what a vanilla port of the BeOS kernel to modern, multicore hardware would do vs. a typical OS in fair benchmarks. We'd also include a little note that only one does it relatively safely. ;)
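
For anyone who hasn't met the term: a benaphore fronts a kernel semaphore with an atomic counter so the uncontended path never enters the kernel. A minimal reconstruction in C11 (a POSIX semaphore standing in for BeOS's acquire_sem/release_sem):

    #include <stdatomic.h>
    #include <semaphore.h>

    typedef struct {
        atomic_int count; /* how many threads want the lock */
        sem_t      sem;   /* kernel object, touched only under contention */
    } benaphore;

    void benaphore_init(benaphore *b)
    {
        atomic_init(&b->count, 0);
        sem_init(&b->sem, 0, 0);  /* starts at 0 so a contender blocks */
    }

    void benaphore_lock(benaphore *b)
    {
        /* Previous value > 0 means someone already holds the lock. */
        if (atomic_fetch_add(&b->count, 1) > 0)
            sem_wait(&b->sem);    /* slow path: block in the kernel */
    }

    void benaphore_unlock(benaphore *b)
    {
        /* Previous value > 1 means at least one thread is waiting. */
        if (atomic_fetch_sub(&b->count, 1) > 1)
            sem_post(&b->sem);    /* wake one waiter */
    }

On mid-90s hardware, skipping the syscall on every uncontended acquire was a real win.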


Is The Be Book available in PDF or ebook format anywhere?


I believe you can find something about BeOS here:

https://www.haiku-os.org/legacy-docs/bebook/index.html


If you're into NT kernel internals, I recommend an excellent book "What Makes It Page?". It describes the VMM in way too much detail (in a good way).

It's a great combination of theory and practice - it describes concepts and then shows them on the live kernel. For example, the author introduces the concept of cache coloring and then shows in WinDbg the actual variable in the process data structure where the color is stored. (A toy sketch of the coloring idea follows the link below.)

http://www.amazon.com/What-Makes-It-Page-Windows/dp/14791142...
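
Not the book's code, just a toy sketch of the cache-coloring idea it walks through (NCOLORS is invented; real systems derive it from cache geometry):

    #include <stdio.h>

    /* A page's color is its physical frame number modulo the number of
       cache "bins"; an allocator that cycles through colors keeps
       neighboring pages out of the same cache sets. */
    #define NCOLORS 64

    static unsigned page_color(unsigned long pfn)
    {
        return (unsigned)(pfn % NCOLORS);
    }

    int main(void)
    {
        for (unsigned long pfn = 0x1000; pfn < 0x1004; pfn++)
            printf("PFN %#lx -> color %u\n", pfn, page_color(pfn));
        return 0;
    }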


Phenomenal book, highly recommended.


For more about this, have a look at "Windows Internals" (6th edition is the latest, I believe) by Mark Russinovich, David Solomon and Alex Ionescu. It's an awesome book.



