
Windows NT Kernel Contributor Explains Why Performance is Behind Other OS - mrb
http://blog.zorinaq.com/?e=74
======
kevingadd
It's nice to see that internal developers feel the same way about XNA that
external developers (who used to build XNA games, or still build XNA games)
do.

From the outside I always assumed the constant flood of new, half-baked
features instead of fixes and improvements to old ones was caused by interns
and junior devs looking for glory - sad to hear that's actually partly true. I
always considered frameworks like WPF (or Flex, for that matter) 'intern code'
- not that interns necessarily wrote them, but they reek of not-experienced-
enough engineers trying to solve problems by writing a bunch of new code,
instead of fixing existing code.

It really is too bad, though. There are parts of the NT kernel (and even the
Win32 API) that I consider a joy to use - I love IOCP, despite its warts, and
APIs like MsgWaitForMultipleObjects are great tools for building higher-level
primitives.
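
For anyone who hasn't used it, the classic pattern is blocking on a handle while still pumping UI messages. A minimal sketch (error handling omitted):

    #include <windows.h>

    /* Block on a handle without freezing the UI: wake for messages too. */
    void wait_but_stay_responsive(HANDLE h)
    {
        for (;;) {
            DWORD r = MsgWaitForMultipleObjects(1, &h, FALSE, INFINITE,
                                                QS_ALLINPUT);
            if (r == WAIT_OBJECT_0)
                break;                 /* our handle signaled */
            /* r == WAIT_OBJECT_0 + 1: drain the message queue, wait again */
            MSG msg;
            while (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE)) {
                TranslateMessage(&msg);
                DispatchMessage(&msg);
            }
        }
    }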

Plus, say what you want about GDI (there's a lot wrong with it at this point),
but it's still a surprisingly efficient and flexible way to do 2D rendering,
despite the fact that parts of it date back to before Windows 3.1. Some really
smart people did some really good API design over time over at Microsoft...

~~~
quotemstr
Actually, I think one of NT's largest advantages over POSIX systems is process
management: yes, the venerable CreateProcess API.

See, in Windows, processes are first class kernel objects. You have handles
(read: file descriptors) that refer to them. Processes have POSIX-style PIDs
too, but you don't use a PID to manipulate a process the way you would with
kill(2): you use a PID to _open a handle_ to a process, then you manipulate
the process using the handle.
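
Concretely, the kill(2) equivalent looks like this (a minimal sketch, error handling omitted):

    #include <windows.h>

    /* PID -> handle, then manipulate the process via the handle. */
    void kill_by_pid(DWORD pid)
    {
        HANDLE h = OpenProcess(SYNCHRONIZE | PROCESS_TERMINATE, FALSE, pid);
        if (h != NULL) {
            TerminateProcess(h, 1);  /* like kill(pid, SIGKILL), but race-free */
            CloseHandle(h);          /* only now may the PID be recycled */
        }
    }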

This approach, at a stroke, solves all the wait, wait3, wait4, SIGCHLD, etc.
problems that plague Unixish systems to this day. (Oh, and while you have a
handle to a process open, its process ID won't be re-used.)

It's as if we live in a better, alternate universe where fork(2) returns a
file descriptor.

You can wait on process handles (the handle becomes signaled and the wait
completes when the process exits). You can perform this waiting using the same
functions you use to wait on anything else, and you can use
WaitForMultipleObjects as a kind of super-select to wait on anything.

If you want to wait on a socket, a process, and a global mutex and wake up
when any of these things becomes available, you can do that. The Unix APIs for
doing the same thing are a mess. Don't even get me started on SysV IPC.
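
A minimal sketch of what this buys you (error handling omitted; notepad.exe is just a stand-in child, and the event stands for whatever else you happen to be waiting on):

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        STARTUPINFOW si = { sizeof(si) };
        PROCESS_INFORMATION pi;
        wchar_t cmd[] = L"notepad.exe";   /* stand-in child process */

        if (!CreateProcessW(NULL, cmd, NULL, NULL, FALSE, 0, NULL, NULL,
                            &si, &pi))
            return 1;

        /* Some other thing we care about, e.g. a shutdown signal. */
        HANDLE ev = CreateEventW(NULL, TRUE, FALSE, NULL);
        HANDLE handles[2] = { pi.hProcess, ev };

        /* Wakes when EITHER the child exits or the event fires. No SIGCHLD,
           no wait()/waitpid() dance, and the PID can't be reused meanwhile. */
        DWORD r = WaitForMultipleObjects(2, handles, FALSE, INFINITE);
        if (r == WAIT_OBJECT_0) {
            DWORD code;
            GetExitCodeProcess(pi.hProcess, &code);
            printf("child exited with %lu\n", code);
        }

        CloseHandle(ev);
        CloseHandle(pi.hThread);
        CloseHandle(pi.hProcess);
        return 0;
    }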

Another thing I really like about NT is job objects
([http://msdn.microsoft.com/en-
us/library/windows/desktop/ms68...](http://msdn.microsoft.com/en-
us/library/windows/desktop/ms684161%28v=vs.85%29.aspx)). They're a bit like
cgroups, but a bit simpler (IMHO) to set up and use.

You can apply memory use, scheduling, UI, and other restrictions to processes
in job objects. Most conveniently of all, you can arrange for the OS to kill
everything in a job object if the last handle to that job dies --- the closest
Linux has is PR_SET_PDEATHSIG, which needs to be set up individually for each
child and which doesn't work for setuid children.

(Oh, and you can arrange for job objects to send notifications to IO
completion ports.)
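
The kill-on-close setup is only a few calls. A sketch (error handling omitted; hChild is assumed to be a process handle from CreateProcess):

    #include <windows.h>

    /* Put a child in a job that dies with us: when the last handle to the
       job closes (e.g. because this process exits), the kernel kills
       everything inside the job. */
    void put_in_kill_on_close_job(HANDLE hChild)
    {
        HANDLE job = CreateJobObjectW(NULL, NULL);

        JOBOBJECT_EXTENDED_LIMIT_INFORMATION info = { 0 };
        info.BasicLimitInformation.LimitFlags =
            JOB_OBJECT_LIMIT_KILL_ON_JOB_CLOSE;
        SetInformationJobObject(job, JobObjectExtendedLimitInformation,
                                &info, sizeof(info));

        AssignProcessToJobObject(job, hChild);
        /* Deliberately never CloseHandle(job) here: keeping it open ties
           the children's lifetime to ours. */
    }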

Yes, Windows gets a lot wrong, but it gets a _lot_ right.

~~~
huhtenberg
> _WaitForMultipleObjects as a kind of super-select to wait on anything_

Except for the 64-handle limit, which makes it largely useless for anything
involving server applications, where the number of handles grows with the
number of clients. So then you'd spawn "worker" threads, each handling just 64
handles. And that's exactly where your code starts to go sour: you are forced
to add fluff whose sole purpose is to work around API limitations. What
good is the super-select if I need cruft to actually use it in the app?
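
To make the fluff concrete, here's the shape of the workaround (a sketch; the marshalling back to one real event loop is the part that rots):

    #include <windows.h>

    #define BATCH MAXIMUM_WAIT_OBJECTS   /* i.e. 64 */

    /* One of these threads per 64 handles: scaffolding that exists only
       to route around the API limit. */
    static DWORD WINAPI waiter(LPVOID arg)
    {
        HANDLE *batch = (HANDLE *)arg;   /* exactly BATCH handles (assumed) */
        for (;;) {
            DWORD r = WaitForMultipleObjects(BATCH, batch, FALSE, INFINITE);
            if (r < WAIT_OBJECT_0 + BATCH) {
                HANDLE signaled = batch[r - WAIT_OBJECT_0];
                (void)signaled;  /* ...marshal it back to the event loop... */
            }
        }
    }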

And don't get me started on the clusterfuck of the API that is IOCP.

> _Yes, Windows gets a lot wrong, but it gets a_ lot _right._

Oh, no, it doesn't.

Specifically, what Windows does _not_ get is that it's other developers that
are using its services and APIs, not just the MSSQL team, which can peek at the
source code and cook up a magic combination of function arguments that does
_not_ return undocumented 0x8Fuck0ff. I have coded extensively for both Linux
and Windows, including drivers and various kernel components, and while
Windows may get things right _at the conceptual level_, using what those
concepts end up implemented as is an inferior and painful experience.

~~~
acqq
As far as I can see, you've had contact with the Win API but, judging by your
complaints, you haven't understood it well enough. Moreover, why do you mention
"MySQL" as the team with access to the kernel sources?

~~~
huhtenberg
You see wrong. And I meant MSSQL, of course. Fixed.

~~~
acqq
Can you then please give a technical reason for insisting on waiting on more
than 64 threads in a single API call, when having that many threads is bad for
performance anyway and IO Completion Ports could be used instead?

~~~
ambrop7
He probably doesn't have 64 threads, but he may have many more than 64 files
or sockets. After all, the response was to the statement that
"WaitForMultipleObjects as a kind of super-select to wait on anything". It
clearly isn't, with this arbitrary limitation.

The problem with Windows is that there isn't _any_ such super-select, unlike
on Linux (see epoll, signalfd, timerfd...). You should take a look at my SO
answer which lists _six_ different ways to use sockets in Windows:
[http://stackoverflow.com/questions/11830839/when-using-
iocp-...](http://stackoverflow.com/questions/11830839/when-using-iocp-should-
i-set-wsaoverlappeds-hevent-to-null-or-to-a-valid-handl/11831450#11831450)

~~~
acqq
Whatever he has more of, WaitForMultipleObjects is not the optimal way to
handle it on Windows, so it's still a complaint that Win32's round holes are a
poor fit for his square pegs. Choosing tools first and then complaining that
they don't fit the goal is typical of beginners.

~~~
huhtenberg
Do tell me, oh, enlightened one, what I should be using if I have 10K idle
network connections and I can't really use IOCP, because I need to keep things
reasonably portable. That's not even considering that IOCP is a great example
of premature optimization at the cost of clarity and simplicity. It tries to
solve a problem that I don't have, and it doesn't offer a simpler solution that
would just do. It is not a "hole" for my square peg, it's a freaking snowflake.

And expecting a modern OS to have an uncomplicated O(1) API for monitoring
sockets is clearly too much to ask.

~~~
ambrop7
It's possible to abstract IOCP and non-blocking sockets into a single
interface with reasonable efficiency. For example:

- To send data, the user calls Socket::send(const char *data, size_t length).
This starts sending the given data; the buffer must remain available while
data is being sent. The IOCP implementation will initiate I/O using WSASend()
or similar.

- When the send operation is complete, the socket implementation calls a Done()
callback of the user, and from that point on, the user is allowed to call
Socket::send() again. With a proper implementation, send() can be called
directly from the callback.

This interface is very easy to implement on Linux. But on Windows there's a
complication, because it's non-trivial to just stop an IOCP I/O operation that
is in progress, in case you decide you don't need the socket any more and want
to get rid of it NOW. If you just forget about it, there could be a crash when
you release your buffer but Windows is still using it. A simple solution is to
CancelIo() the socket and do blocking GetQueuedCompletionStatus()s until you
get an event indicating the completion of the pending I/O operation - taking
care to queue up any unrelated results you may have gotten, for later
processing. However, a more efficient solution is to use reference counting or
similar with your buffers, so they only get released when Windows is done with
them.
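
In rough code, the simple solution looks something like this (a sketch; my_ov and defer_completion are stand-ins for your own bookkeeping, and note that CancelIo() only cancels I/O issued by the calling thread):

    #include <winsock2.h>
    #include <windows.h>

    /* Hypothetical hook that queues unrelated completions for later. */
    void defer_completion(BOOL ok, DWORD bytes, ULONG_PTR key, OVERLAPPED *ov);

    /* Abort our pending op on s, then drain the port until its completion
       arrives; only then is the buffer safe to release. */
    void cancel_and_drain(SOCKET s, HANDLE port, OVERLAPPED *my_ov)
    {
        CancelIo((HANDLE)s);
        for (;;) {
            DWORD bytes; ULONG_PTR key; OVERLAPPED *ov;
            BOOL ok = GetQueuedCompletionStatus(port, &bytes, &key, &ov,
                                                INFINITE);
            if (ov == my_ov)
                break;               /* canceled or completed: we're done */
            defer_completion(ok, bytes, key, ov);
        }
    }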

I didn't say it's easy, but it's possible.

~~~
huhtenberg
Yes, I know the abstraction is possible. In fact, I ended up implementing a
nearly complete BSD socket API emulation on top of IOCP. Just don't lose sight
of the context, which is _Windows does a_ lot _of things right_, and my point
is that it doesn't.

------
jsolson
> Another reason for the quality gap is that we've been having trouble
> keeping talented people. Google and other large Seattle-area companies keep
> poaching our best, most experienced developers, and we hire youths straight
> from college to replace them.

I will say all of the ex-Microsoft folks I've encountered at Google Seattle
have been fantastic.

On a related note, it's stupidly easy to get code accepted by another team at
Google.

Also we're hiring.

~~~
tracker1
Too bad you guys don't seem to give a second glance to someone without any
formal education, despite working on systems for government, training, banking
security and airline industries.

~~~
ajross
To be fair, and in context: neither does Microsoft.

~~~
tracker1
I've been approached a number of times by MS... once fairly aggressively.
Never a word or response from Google though.

------
quotemstr
I actually _enjoy_ working at Microsoft --- I'm in Phone, working on telemetry
and various other things --- and I've met a ton of very smart people.
I've also made the cross-team (and cross-org) contributions that the OP says
are nearly impossible (just today, even). While the OP makes a few good points
(some teams are kinda reluctant to take patches), I think he's grossly
exaggerating the problems, and the level of vitriol really isn't called for
either.

He's also slightly off-base on some of the technical criticisms: there are
often good reasons for doing things a certain way, and these reasons aren't
always immediately apparent. Besides, change _does_ happen: Arun Kishan (who
is smarter than I'll ever be) broke the dispatcher lock a while ago (see
[http://channel9.msdn.com/shows/Going+Deep/Arun-Kishan-
Farewe...](http://channel9.msdn.com/shows/Going+Deep/Arun-Kishan-Farewell-to-
the-Windows-Kernel-Dispatcher-Lock/)) when nobody thought it could be done.

~~~
quotemstr
By the way: there's actually quite a lot of information available on Windows
Internals. In fact, there's a whole book, Windows Internals
([http://technet.microsoft.com/en-
us/sysinternals/bb963901.asp...](http://technet.microsoft.com/en-
us/sysinternals/bb963901.aspx)) on exactly how the NT kernel and related
system components work. The book is highly recommended reading and demystifies
a lot of the odder parts of our API surface. We use it internally all the
time, and reading it, you'll repeatedly say to yourself, "Ah, so _that's_ why
Foo does Bar and not Qux! Yeah, that's a good reason. I didn't think of it
that way."

------
kyllo
_Look at recent Microsoft releases: we don't fix old features, but accrete new
ones. New features help much more at review time than improvements to old
ones.

(That's literally the explanation for PowerShell. Many of us wanted to improve
cmd.exe, but couldn't.)_

Ahh, I was wondering about that. So, I guess I'll just keep using cygwin.

On another note, I recently asked a friend who works at Microsoft how work is
going. His reply: "Well, it's calibration time, so lots of sucking up to the
boss." Must be hard to get much actual work done when you're worried about
that all the time.

~~~
manojlds
PowerShell is so damn good, probably because it was not an improvement of cmd.
Give credit where it is due.

~~~
shadowmint
I've always found it difficult to read and impossible to debug.

It's useful, sure, but nice? I'm not convinced.

How about an empty recycle bin command? Why, certainly, you can do that:

    
    
        $Shell = New-Object -ComObject Shell.Application
        $RecBin = $Shell.Namespace(0xA)
        $RecBin.Items() | %{Remove-Item $_.Path -Recurse -Confirm:$false}
    

Teh F33243ck. Readable much? I sure hope your powershell scripts come with
comments; the ones we've inherited sure didn't and they're black magic.

~~~
lake99
Over here in Linux-land, I used to have PS-envy whenever I thought about
PowerShell. You cured me!

Though I don't use Trash/Recycle myself, to contrast, I like the simplicity of

    
    
        rm -rf ~/.local/share/Trash/info ~/.local/share/Trash/files

~~~
manojlds
Commands like Remove-Item have familiar aliases like rm by default. If you look
at code in tools like Psake, it will be very verbose, because the intention is
to have readable code. But you sure can use the aliases for everyday use.

------
tzs
It's possible he deleted his post because he realized that he included
something in it that could identify him, which could be bad for his career.

It might have been better to rewrite his post in your own words and take out
some of the unnecessary detail, rather than repost it verbatim.

~~~
logn
I'd agree. I know the OP is trying to spread knowledge and this is a great
read, but I think this was slightly rude, not to mention questionable fair
use.

~~~
dangrossman
There's no "fair use" argument at all. Copying an entire work, then
distributing it with no transformative change or criticism, is inexcusable
infringement should the author want to enforce his rights.

~~~
robryan
Although in this case I think it would be hard for this author to enforce
his/her rights and still remain completely anonymous?

~~~
iso8859-1
which just makes it even more wrong...

------
pg
"His post has been deleted! Why the censorship?"

We didn't delete it. It was deleted by whoever posted it.

~~~
jamesaguilar
No delete button on the internet v125786123. Sidebar: do deleted comments
appear with showdead on, or just ones that are deleted on HN's side?

~~~
hollerith
>do [deleted-by-author] comments appear with showdead on[?]

No, they do not. Specifically, I have showdead on, and when trying to view the
comment in question, I see only "[deleted]".

------
softbuilder
>We fill headcount with nine-to-five-with-kids types,

then

>We occasionally get good people anyway

Uh... nine-to-five-with-kids type here. Thanks for the stereotyping... from
your safe corporate nest.

Otherwise an insightful post.

~~~
temphn
If your first priority is your kids, your first priority is not your work.
That's fine, it's a choice, but there is a frequent claim that the massive
incremental time demand of kids makes one so much more time efficient at work
that it more-than-compensates. Insofar as one is the same human being, with
the same energy reservoirs and time management skills as before your child was
born, this is unlikely.

I understand the reason this fiction is maintained: people with kids need the
job even more than people without, and have an interest in denouncing people
who claim kids make you less productive. There's also the second order effect
in which "with kids" is correlated with "older".

The net of it though is that devs with kids tend to assign work a lower
priority, to take fewer risks, to need more money, and to be older and hence
less familiar with new technologies (and too busy to learn in their free
time).

Society doesn't have a good answer for this situation yet. In times past,
technology didn't move so fast that experience was mostly obsolete (and hence
useless) in a decade's time. A 40 year old farmer with kids in 1713 would
probably have much to teach a young whippersnapper. The same isn't true for a
40 year old programmer with kids in 2013.

~~~
corresation
_Insofar as one is the same human being, with the same energy reservoirs and
time management skills as before your child was born, this is unlikely._

Your whole post drips with resentment-fueled bigotry and ignorance, making me
wonder if you're fighting for attention at your workplace or whatever, feeling
unloved. Here's an internet hug. Hugz.

Many years back I -- a new graduate employee -- was chatting with my boss, who
was a part owner of the company/president. He had five kids, or maybe even
six. He asked me when I was thinking of having children (it was still too
early for me, but just as a conversational thing), and my honest answer was
that I didn't know how he could afford it.

He then told me an Arabic parable, or something like it, in which each child
comes with a bag of money.

That seemed counterintuitive to me, but my life has proven it out. I now have
four children, and I would wager good money that I know more current
technologies, in much more depth, than you do.

If you have the capacity, having children has a _profound_ ability to make you
focus: While I am the same intellectual being, like the vast majority of
developers I was absolutely pissing time away before children, and I doubt I
passed even 5% productivity. Slashdot was the Reddit of the time, and doing
asinine, meaningless implementations for days on end was just a normal day.
And I _know_ this is the case for most developers.

Now I don't have time for the bullshit. I focus specifically on the things
that yield success, in the most efficient manner possible. I'm still only
maybe 15% productive (still piss away a lot of time), but the result is my own
company, a lot of success, etc.

~~~
lttlrck
This.

It's hard to remember life before kids, but one thing I know for sure: I wasted
a huge amount of time.

I work far more effectively at 39 than I did in my 20s. And my ability to
focus took a massive boost after becoming a father.

~~~
etha
But how much of that is due to nearly two decades of experience and how much
is due to having kids?

------
dietrichepp
I think there are two complementary parts that Linux gets right here. One is
as cited in the article, that you get personal glory for improvements you make
to the kernel even if they are fairly small. The other part is that, for Linux,
there is someone who will say "no" to a patch. Linus will certainly do it,
and other trusted devs will do it too. Linus is perhaps even famous for
telling people when they are wrong.

I've seen a number of open-source projects where you get all the personal
glory for your additions but there is nobody who takes the responsibility to
tell people "no". These projects, almost universally, turn into bloated messes
over time. Open-sourced computer games seem to fall down this path more
easily, since everyone and her little dog too has ideas about features to add
to games.

~~~
philsnow
> Open-sourced computer games seem to fall down this path more easily, since
> everyone and her little dog too has ideas about features to add to games.

I just listened to an episode of Roguelike Radio from the mid-30s (maybe the
one with Red Rogue or Thomas Biskup) and Andrew / Keith (IIRC) go on and on
about how all the "major" roguelikes are just accretions of features.

This rings very true to me; I grew up playing Nethack [0], and Slash (and then
Slash'Em) just got silly with all the things they were adding. Light sabers?
In my roguelike? Come on, fhgwgads. Nethack at least has a mysterious Dev
Team who ostensibly are the guardians of quality... but they _did_ bring
Sokoban back from Slash'EM into vanilla.

[0] since Nethack was my first roguelike, I didn't realize until lately just
how crappy it is in many ways, most notably that it's very difficult to win
before you've figured out all the sources of instadeath (whether you've
figured them out by spoilers (written or source-diving) or through (really
frickin hard-won) experience).

~~~
p0nce
Try brogue, it's a breath of fresh air.

~~~
philsnow
I <3 the crap out of brogue; I'm currently playing through some roguelikes
that I've never looked at before [0], and I keep coming back to brogue.

In particular, brogue's food clock is excellent in that if I forget about it
too long and go back to my natural "explore everywhere, kill everything"
tendency, the food clock _will_ kill me.

[0] something that jumped out at me is just how much modern roguelike
development is windows-first (or even windows-only), and how much of it is
distributed primarily as runnable binaries vs source. I had to bend way over
backwards to get PrincessRL running: I ended up installing a 32-bit ubuntu
userspace virtualbox instance so that mono could run the prepackaged binary.

------
pjmlp
The typical scenario you will find in any big corporation developing software.

I started to understand better how Microsoft works, after getting into the
Fortune 500 enterprise world.

Many of the bad things geeks associate with Microsoft are actually present in
any development unit of big clunky Fortune 500 companies.

------
chamanbuga
What's wrong with 9-5? Can't you be passionate about what you work on, excel
in your career, yet stick to dedicating ~50% of your waking time to your job?

~~~
pjmlp
It is an American thing. In the rest of the world we can happily do 9-5
without feeling bad for it.

~~~
chamanbuga
It's amusing watching Americans boast about working past the norm 9-5 hours
and calling the rest of the world lazy. My response, "What are you proud
about? After working twice as hard, you have the same quality of life as
friends in Canada, and statistically speaking you'll drop dead sooner because
of stress."

That always wipes the smug grin off their faces.

~~~
illuminate
"That always wipes the smug grin of their faces."

While I agree with you, nothing could wipe the smug grin off these persons'
faces. They are beyond reality, and I'd be much happier with America if
something so simple could give the "socialists are universally lazy" rabble
whenever labor rights are mentioned in any sense.

~~~
pjmlp
Yes, here in the socialist Germany I can enjoy my free time after 4:30 pm and
if my boss wants me to stay longer I get paid overtime.

------
bitwize
_We can't touch named pipes. Let's add %INTERNAL_NOTIFICATION_SYSTEM%! And
let's make it inconsistent with virtually every other named NT primitive._

Linux does this motherfucking bullshit too. "Oh, systemd is piss slow? We'll
bung d-bus straight into the kernel, right alongside the over 9000 other IPC
mechanisms. Everybody uses d-bus these days, it's an Essential System
Component. What? Systemd is a crap idea to start with? People like you should
be committed."

~~~
ajross
Considering linux actually hasn't done this yet, that seems like a weak
argument. Are there better examples?

------
kabdib
I keep coming here, writing a bunch of stuff, then removing it.

The author isn't far off. He's over the top on some things, but on the whole
it's a good take on some of the reasons Microsoft culture is toxic.

My take:

1. The review system needs to be fixed. They have to stop losing good
engineers.

2. They have to somehow get rid of a bunch of toxic people. Many of these are
in management. It's hard.

3. They have to stop treating Windows as "the thing we put in all the other
things".

Windows is a fine OS; it could be better, and the OP points out a lot of
reasons why it isn't. But it's not a great fit for embedded systems, nor game
consoles, nor anything where you're paying for the hardware.

But I keep coming back to the review system rewarding the wrong people,
causing good people to leave. The underlying backstabbing and cutthroat
culture needs to go; it's hurting the bottom line, and I'm surprised the board
has been willing to let it happen.

------
dchest
Ah, so the symlink plan is true:
[https://gist.github.com/800407/3588729d57abb7731985c99c492a2...](https://gist.github.com/800407/3588729d57abb7731985c99c492a2a08459d7b2f)

------
pdknsk
> These junior developers also have a tendency to make improvements to the
> system by implementing brand-new features instead of improving old ones.

That's exactly the problem that's plaguing Google Chrome right now, although
probably for a different reason, as many senior developers still seem to be on
board. Google keeps adding new features at a high pace and doesn't care what
breaks in the process. The amount of unfixed (albeit mostly minor) bugs is
huge.

------
brudgers
Backward compatibility is an OS performance metric. Maybe sales is too.
Microsoft has to think long and hard about any kernel change. In some irony,
Microsoft doesn't own the Windows code, and any individual can own the Linux
kernel - i.e. Windows lacks forks.

That Microsoft discourages individual junior developers from cowboying is a
point in their favor. Optimization for its own sake is not what benefits their
users - real research does.

------
dottrap
_We just can't be fucked to implement C11 support_

I thought Microsoft refusing to update their decades-behind, ancient C
compiler was just to piss me off and make life difficult for cross-platform
developers who need to work in C. Interesting to see this applies to their
own employees too.

~~~
bitwize
I believe the official policy is that C++ is the upgrade path from C, and C is
therefore deprecated.

~~~
pherz
I think a lot of people who care about C support are already aware of Herb
Sutter's stance on the matter. But knowing why he made his decision doesn't
really change anything; it's not like C++ never crossed their minds before. And
classing it up by calling it official policy doesn't make the pill go down any
easier.

Microsoft's position in the market is already big enough that they'll never
really be proven wrong on C unless they relent. But they're also not large
enough to kill it outright.

~~~
dottrap
Yeah, Herb Sutter can GFHS. We use C for good reasons. C++ is not an upgrade,
it is a completely different language.

I just remembered Microsoft wrote this scathing document: C++ for Kernel Mode
Drivers: Pros and Cons [http://msdn.microsoft.com/en-
us/library/windows/hardware/gg4...](http://msdn.microsoft.com/en-
us/library/windows/hardware/gg487420.aspx)

I didn't find an actual pro. Makes sense that a Microsoft Kernel developer
would also be frustrated by their lack of modern C support.

------
chetanahuja
tl;dr : corporatism and careerism. It's the death of creativity and
productivity. No large organization is immune to it. Never will be. (and yes,
that includes Google... it's just much smaller and newer than Microsoft is
right now)

~~~
yuhong
michaelochurch has talked about this before, including the fundamental flaws
behind it.

~~~
randomfool
He's also a vindictive windbag who is extremely disgruntled about his
shortcomings at Google.

~~~
yekko
And he's totally correct.

------
api
I've even seen this mentality in startups. It does have some business
rationale, provided you are thinking short term and focused only on near-term
goals.

One of the reasons businesses have trouble really innovating is that it's
_hard_ in a business to work on long-term things when markets are very short
sighted. Only mega-corps, monopolies, and governments can usually do that...
or hobbyists / lifestyle businesses who are more casual about hard business
demands.

That being said, MS is surely cash-rich enough to think long term. So this
doesn't apply as much here.

I've also found that of all things _optimization_ almost gets you looked down
upon in most teams -- even young ones. "Premature optimization is the root of
all evil," and all that, which is usually misinterpreted as "optimization is
naive and a waste of time." It's seen as indicative of an amateur or someone
who isn't goal-focused. If you comment "optimized X" in a commit, you're
likely to get mocked or reprimanded.

In reality, "premature optimization is the root of all evil" is advice given
to new programmers so they don't waste time dinking around with micro-
optimizations instead of thinking about algorithms, data structures, and
higher order reasoning. (Or worse, muddying their code up to make it "fast.")
_Good_ optimization is actually a high-skill thing. It requires deep knowledge
of internals, ability to really comprehend profiling, and precisely the kind
of higher-order algorithmic reasoning you want in good developers. Most good
optimizations are algorithmic improvements, not micro-optimizations. Even good
micro-optimization requires deep knowledge-- like understanding how pipelines
and branch prediction and caches work. To micro-optimize well you've got to
understand soup-to-nuts everything that happens when your code is compiled and
run.

Personally I think speed is really important. As a customer I know that slow
sites, slow apps, and slow server code can be a reason for me to stop using a
product. Even if the speed difference doesn't impact things much, a faster
"smoother" piece of code will convey a sense of quality. Slow code that
kerchunks around "feels" inferior, like I can see the awful mess it must be
inside. It's sort of like how luxury car engines are expected to "purr."

An example: before I learned it and realized what an innovative paradigm shift
it was, speed is what sold me on git. The first time I did a git merge on a
huge project I was like "whoa, it's done already?" SVN would have been
kerchunking forever. It wasn't that the speed mattered that much. It was that
the speed communicated to me "this thing is the product of a very good
programmer who took their craft very seriously as they wrote it." It told me
to expect quality.

Another example: I tried Google Drive, but uninstalled it after a day. It used
too much CPU. In this case it actually mattered -- on a laptop this shortens
battery life and my battery life noticeably declined. This was a while ago,
but I have not been motivated to try it again. The slowness told me "this was
a quick hack, not a priority." I use DropBox because their client barely uses
the CPU at all, even when I modify a lot of files. Google Drive gives me more
storage, but I'm not content to sacrifice an hour of battery life for that.

(Side note: on mobile devices, CPU efficiency has a much more rigid cost
function. Each cycle costs battery.)

Speed is a stealth attribute too. Customers will almost never bring it up in a
survey or a focus group unless it impacts their business. So it never becomes
a business priority.

Edit: relevant: <http://ubiquity.acm.org/article.cfm?id=1513451>

~~~
columbo
> In reality, "premature optimization is the root of all evil" is advice given
> to new programmers so they don't waste time dinking around with micro-
> optimizations instead of thinking about algorithms, data structures, and
> higher order reasoning. (Or worse, muddying their code up to make it
> "fast.")

It's also for _experienced_ programmers who dink around with macro-
optimizations. For example, designing an entire application to be
serializable-multi-threaded-contract-based when there's only a handful of
calls going through the system. Or creating an abstract-database-driven-xml-
based UI framework to automate the creation of tabular data when you have
under a dozen tables in the application.

 _premature optimization is the root of all evil_ is a really _really_
important mindset, and I agree it doesn't mean "don't optimize", though many
developers seem to take it that way.

X = how many transactions your business does today

Y = how many transactions your business needs to do in order to survive

Y/X = the factor the current application needs to scale by in order to simply
_survive_. This is the number where people start receiving paychecks.

(Y/X)×4 = how far the current application needs to scale in order to grow.
(E.g., 1,000 transactions today and 5,000 needed to survive gives Y/X = 5, so
you target 20× today's load.)

The goal should be to build an application that can just barely reach (Y/X)×4 -
this means building unit tests that test the application under a load of
(Y/X)×4 and optimizing for (Y/X)×4.

Spending time trying to reach (Y/X)×20 or (Y/X)×100 is what I'd call premature
optimization.

Disclaimer: (Y/X)×4 is not a real data point that I know of, just something I
pulled out as an example; anyone who knows of actual metrics used, please feel
free to correct.

~~~
brudgers
The canonical version is Alan J. Perlis Epigram 21:

    
    
       'Optimization hinders evolution.'
    

If you have a black box, then optimize the fuck out of it. The Windows kernel
is not a black box.

~~~
api
I think you missed half the argument. Windows is _noticeably_ slower than
Linux or Mac on the same hardware. Isn't that a problem?

And if optimization always hinders evolution, boy should Windows be
evolving... I mean... the NT kernel should have smashed through all kinds of
antiquated paradigms by now. It should be doing memory deduplication, disk
deduplication, fast JIT compilation of binaries for alternative architectures.
It should support live process migration between machines, joining of systems
together efficiently to form larger super-systems, a better permission model
obviating the need to rely completely on virtualization for true privilege
isolation in enterprise environments. It should have truly efficient network
filesystems supporting disconnected operation, sharding, etc.

Oh wait... it's stuck in the 90s... never mind. _And_ it's slow.

Linux, which optimizes a lot, has at least some of the things I mentioned
above.

"Premature optimization is the root of all evil" is a deeply nuanced statement
that is nearly always misunderstood. "Optimization hinders evolution" is
probably likewise. They're quotes cherry-picked out of context from the minds
of great craftsmen who deeply understand their craft, and alone I do not
believe they carry the full context required to comprehend what they really
mean. I think they have more to do with maintaining clarity and focusing on
higher-order reasoning than they do with whether or not to try to make things
run faster. (And like I said, the most effective optimizations are usually
higher-order conceptual algorithmic improvements.)

~~~
rogerbinns
> Windows is noticeably slower than Linux or Mac on the same hardware. Isn't
> that a problem?

Windows isn't slower at running Windows apps, fitting into a Windows
infrastructure (Active Directory, management tools, Exchange etc), using
Windows device drivers, working with NTFS volumes and their features,
backwards compatibility, printers etc.

It all comes down to what the goals of the users (or purchasers) are, and I
doubt anyone buys Windows because of "performance" in the sense being talked
about. But they do care about the "performance" of items mentioned in my
previous paragraph.

Linux has been able to optimise because of being open source and vehemently
ignoring closed source. Open source means for example that the Linux USB stack
could be optimised and all affected code due to API changes could be updated.
This and other topics are covered really well in Greg Kroah-Hartman's OLS 2006
keynote - <http://www.kroah.com/log/linux/ols_2006_keynote.html> - see "Linux
USB Code" about halfway down for that specific example.

------
marshray
Disclosure: I'm a new guy at Microsoft, not related to Kernel or Windows
stuff.

So where does he back up the claim that NT kernel performance is, in fact,
"behind other OS"?

------
gizmo686
Slightly off topic, but is there any way we can actually confirm that "the
SHA-1 hash of revision #102 of [redacted] is [redacted]"?

EDIT: redacted the information. Still, when that information was present, how
would anyone have been able to confirm it?

~~~
marshray
From what I've heard (unofficially) the source is typically watermarked to
identify the sources of any leaks. He may have given himself away with the
SHA-1.

~~~
gizmo686
I doubt it. Not only would that involve generating a different watermark for
each person with access to the code, but in doing so you also raise
complications for source control. Additionally, it is almost impossible for
developers not to notice if supposedly identical files are not identical, not
to mention the fact that implementing this type of system without the conscious
cooperation of the developers seems impossible.

------
giulivo
Given the topic of the article I understand my question could be seen as OT
but I'd like to dig more into the following:

> _The NT kernel is still much better than Linux in some ways --- you guys be
> trippin' with your overcommit-by-default MM nonsense_

What's wrong with the overcommit-by-default behaviour? Or how/why is the NT
kernel supposed to do better?

~~~
asveikau
I don't know what the author was thinking of specifically, but the OOM killer
seems like a bit of a hack to me. I can think of situations where I'd rather
have an allocation fail upfront predictably than be randomly killed by the
system.

A lot of people seem to think you're screwed when you're out of memory and so
think killing the process is acceptable, but I've worked on code bases where
it's actually handled somewhat gracefully. (Although, on NT, if you run out of
nonpaged pool you start to get weird, random I/O failures.)

~~~
giulivo
but again, if it's just the default behaviour we're debating, one can simply
disable overcommit:

vm.overcommit_memory = 2

see [https://www.kernel.org/doc/Documentation/vm/overcommit-
accou...](https://www.kernel.org/doc/Documentation/vm/overcommit-accounting)

~~~
asveikau
And that was exactly the criticism, wasn't it? "Overcommit-by-default
nonsense". I didn't write the original, just trying to parse it. (And with
that aspect I kind of agree, it's a goofy behavior.)

I guess there's another, somewhat social aspect of making that the default:
everyone else coding on the platform assumes the behavior is the other way, so
I suspect when you set that flag to off, your other software starts crashing
on null pointer dereferences because no one thinks malloc can fail.
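
i.e. the textbook check turns into theater. A small C sketch of the difference (the 1 TiB figure is just illustrative):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        /* With overcommit on (the Linux default), this malloc can "succeed"
           without any backing memory; the OOM killer shows up later, on
           first touch. With vm.overcommit_memory = 2 it can fail right
           here, where we can actually handle it. */
        size_t huge = (size_t)1 << 40;          /* 1 TiB */
        char *p = malloc(huge);
        if (p == NULL) {
            fprintf(stderr, "allocation failed upfront -- recoverable\n");
            return 1;
        }
        memset(p, 1, huge);  /* touching the pages is what can get us killed */
        free(p);
        return 0;
    }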

------
termie
UTF-16 everywhere doesn't help either. M\0o\0r\0e\0 \0d\0a\0t\0a.\0

~~~
quotemstr
You do realize that NT is from 1989 and that UTF-8 is from 1992, right?

~~~
termie
Yeah. Which makes it even more irritating... MS is still off in the double-wide
stix 11 years later and still promoting UTF-16LE in userland. From
[http://msdn.microsoft.com/en-
us/library/dd374081%28VS.85%29....](http://msdn.microsoft.com/en-
us/library/dd374081%28VS.85%29.aspx)

"Unicode-enabled functions are described in Conventions for Function
Prototypes. These functions use UTF-16 (wide character) encoding, which is the
most common encoding of Unicode and the one used for native Unicode encoding
on Windows operating systems. Each code value is 16 bits wide .. New Windows
applications should use UTF-16 as their internal data representation."
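
The overhead is easy to measure (a C sketch; note that wchar_t is 2 bytes on Windows but 4 on most Unixes):

    #include <stdio.h>

    int main(void)
    {
        /* The \0-after-every-ASCII-byte effect, in numbers: */
        printf("%zu\n", sizeof("More data"));   /* 10: 9 chars + NUL       */
        printf("%zu\n", sizeof(L"More data"));  /* 20 on Windows: UTF-16   */
        return 0;
    }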

~~~
jeltz
I seriously doubt that UTF-16 still is the most common encoding. The web is
mostly UTF-8 and so are most smartphones.

~~~
sratner
JavaScript strings are UCS-2 or UTF-16.

------
rjdagost
There is a great quote in the post: "Incremental improvements just annoy
people and are, at best, neutral for your career."

Having worked for several different Fortune 500 companies, this has been my
experience many times. I have worked on products where I made a large number
of incremental improvements that never got incorporated into the released
product. Taken on their own the improvements don't change things much, but in
the aggregate they make a significantly better product. Why didn't most of my
incremental improvements get incorporated into the product? Every change, no
matter how minor, is perceived by management to be a highly risky proposition.
Product managers won't get promoted because the product is 4% better, but they
could potentially get fired if that same incremental improvement had an
unforeseen negative consequence.

Once the short term sales mindset gets power, hit products are viewed as cows
to be milked and not as things that need continued maintenance and
improvement. As an engineer in such an enterprise you watch helplessly as your
competitors slowly but steadily pass you, and then someday upper management
wonders why the magic money machine broke down. Some large enterprises avoid
this fate but they seem to be the exception.

------
js4all
The problems with the NT kernel are nothing new. When I suggested that Windows
use a BSD kernel to overcome its current limitations, I got lots of
disagreement. Microsoft developers seem to be happy with the current state and
have no vision of a totally new architecture.

<https://news.ycombinator.com/item?id=2841934>

~~~
United857
People are praising BSD like it was all hot and new. OS X is built on a BSD
which has its roots in '60s and '70s OS design, just like the VMS roots of
WinNT.

OS X didn't change the world by bringing some great new underlying
architecture to the table. In fact, their kernel and filesystem are arguably
getting long in the tooth. The value that OS X brought to the table was the
fantastic Carbon and Cocoa development platforms. And they have continued to
execute and iterate on these platforms, providing the "Core" series of APIs
(CoreGraphics, CoreAnimation, CoreAudio, etc.) to make certain HW services
more accessible.

There's very little cool stuff to be gained in the windows world by developing
a new kernel from scratch. A quantum leap would not solve MS's problem. The
problem is the platform. What's really dead and bloated is the Win32
subsystem. The kernel doesn't need major tweaking. In fact, the NT kernel was
designed from the beginning such that it could easily run the old busted Win32
subsystem alongside a new subsystem without needing to resort to expensive
virtualization.

Unfortunately, the way Microsoft is built today, it has a fatal organizational
flaw that prevents creating the next great Windows platform. The platform/dev
tools team and the OS team are in completely different business groups within
the company. The platform team develops the wonderful .NET platform for
small/medium applications and server apps while the OS team keeps crudging
along with Win32. Managed languages have their place, but they have yet to
gain traction for any top shelf large-scale windows client application vendors
(Adobe, even Microsoft Office itself, etc.) Major client application
development still relies on unmanaged APIs, and IMHO the Windows unmanaged
APIs are arguably the worst (viable) development platform available today.

What Windows needs is a new subsystem/development platform to break with
Win32, providing simplified, extensible _unmanaged_ application development,
with modern easy-to-use abstractions for hardware services such as graphics,
data, audio and networking. This is starting to come to fruition with WinRT,
but the inertia in large scale apps is unbelievable.

------
drorweiss
As an outsider, it looks like MSFT does not grasp that kernel performance is a
super important feature of Windows. They add so many bells & whistles to the
UI, but the inside sucks release after release.

~~~
glhaynes
As someone else posted in this same thread: _is_ modern Windows substantially
slower than the OSes it competes with (in anything other than a few uncommon
niches)? If it's more than a few percentage points slower for anything but
rare scenarios, I'd be surprised.

------
Zigurd
The description sounds accurate compared to what I heard when I was in contact
with Microsoft insiders. That was about 15 years ago. What's surprising to me
is that it is still this way.

A lot of the justification for being exceedingly conservative about
performance improvements was based on the difficulty of doing testing that is
sufficient to leave no doubt that changes don't cause regressions nor any side
effects bearing on compatibility.

I'm surprised that hasn't changed, or that the value equation about possibly
breaking customer software hasn't changed. By now this approach must be a very
entrenched culture.

------
cpncrunch
I think the main performance problem with NT is its scheduler. Compared to
unix-type schedulers, it doesn't give very good interactive performance when
there are cpu-bound processes. There have been a lot of hacks over the years
(like throwing cpu-starved processes a bone when they haven't had any action
in 4 seconds), but unix/linux just has a better designed scheduler that gives
better performance in most situations.

------
mariuolo
I suspect another aspect is that Linux developers are less worried about
keeping the API/ABI stable.

~~~
gizmo686
The Linux kernel developers take backwards compatibility very seriously. Here
is a tame excerpt from Linus on the subject: "Seriously. Binary compatibility
is _so_ important that I do not want to have anything to do with kernel
developers who don't understand that importance. If you continue to pooh-pooh
the issue, you only show yourself to be unreliable. Don't do it." [1]

[1]<https://lkml.org/lkml/2012/3/8/495>

~~~
slacka
Why is it that I can run my Radeon x1900 under Windows 7/8 with Windows Vista
drivers? Runs StarCraft2 and a bunch of newer FPS games just fine.

Linux on the other hand is a disaster. The kernel devs are so determined to
break binary compatibility, I haven't been able to run with ATI's proprietary
binary drivers for years. While AMD was a good open source citizen and
released the specs, the open source drivers for my card are useless for
anything other than 2D.

~~~
lmz
Linus is talking about userspace binary compatibility. Your graphics drivers
are relying on kernel module binary compatibility, which is not guaranteed.

~~~
slacka
Yes, but without working drivers for my system, what good is user space binary
compatibility? MS has maintained compatibility for their drivers, something
the Linux kernel developers seem to care little about. Which was exactly the
point of the original poster.

------
Mo1oKo
"The rot has already set in"

Great words for concluding a video game intro

------
wfunction
At some point Microsoft might want to consider expanding its kernel team into
Silicon Valley.

------
PaulHoule
I support conservative engineering for file systems.

NTFS is highly reliable and Microsoft wants to build on that success with
ReFS. In years of working with NTFS I've never had a serious filesystem wreck
(although I did have one that was never quite the same after I made a few
hundred million small files)

The NTFS story isn't that different from the ext(2|3|4) story on Linux. ext-N
filesystems are boring and not particularly fast, but they're reliable. I've
tried other filesystems on Linux but have often had wrecks or noticed data
corruption.

I like boring filesystems.

------
concision
I'm a recent college graduate starting on a Windows Core team in the next few
months.

Whee. Talk about a downer to start my day.

~~~
NamTaf
Look at it this way: you're lined up to work in a company with far more
capability and potential than almost every other western organisation. When
you're compared to the likes of Google and Apple as examples of where you
might be lacking, you're doing good.

Besides, even if everything is as bad as they say and you absolutely cannot
work in that environment, interning is the perfect time to discover that
without such significant commitments. It will also prove to be a valuable lesson
that you can then take to other companies.

Chin up, I'd love to work even at Microsoft.

------
gdonelli
"our good people keep retiring or moving to other large technology companies,
and there are few new people achieving the level of technical virtuosity
needed to replace the people who leave."

In the post-PC era, kernels are not as hot as they were in 2000.

------
crookedbeer
Isn't Windows NT used only in large corporations that are afraid to upgrade
their servers?

~~~
ksherlock
The Windows NT _Kernel_ is used in Windows NT, 2000, XP, Vista, 7, 8, etc.
(Versus the DOS kernel used in DOS, 95, 98, and Me)

~~~
yuhong
Yea, that is another pet peeve of mine. Windows 2000 is technically NT 5.0
(and was called that during the beta). XP is NT 5.1 etc...

------
corresation
Is performance behind other OSes? Serious question, because empirically I'm not
aware of any particular performance lag.

------
outside1234
Windows is dying just like DOS.

Azure is the new Windows.

~~~
illuminate
I don't know what this is even supposed to mean.

------
millerc
Sad post IMO, which will unfortunately and unduly tarnish Microsoft's
reputation. I would suppose it was written by a developer who has just become
senior enough to see some of what's going on, but has not yet realized it's
everywhere like that. One day, he'll understand that those challenges too need
to be managed.

Growing pains, all around.

