The whole openness of the culture.
The net result is the generally acknowledged fact that Windows is slower than Linux when running complex workloads that push network/disk/CPU scheduling to the limit: https://news.ycombinator.com/item?id=3368771 A really concrete and technical example is network throughput in Windows Vista, which degrades while audio is playing! http://blogs.technet.com/b/markrussinovich/archive/2007/08/2...
Note: my post may sound like I am freely bashing Windows, but I am not. This is the cold hard truth. Countless multi-platform developers will attest to this, me included. I can't even remember the number of times I have written a multi-platform program in C or Java that runs slower on Windows than on Linux, across dozens of different versions of each. The last time I troubleshot a Windows performance issue, I found that the MFT of an NTFS filesystem was badly fragmented. I mention this because I am generally regarded as the one guy in the company who can troubleshoot any issue, yet I acknowledge I can almost never get Windows to perform as well as, or better than, Linux when there is a performance discrepancy in the first place.
Take a look at these benchmarks that were posted here recently:
There isn't a single comment in that whole thread about how outrageously bad EC2 performance is. Meanwhile I'd bet that most HN startups run on EC2, heroku or other virtualized cloud platforms. And how many are using dog slow interpreted languages like python or ruby? It looks to me like people around here are quite willing to take very large performance hits for the sake of convenience.
I find Windows to be a small performance hit for the sake of convenience.
I used to use Windows exclusively until about 4 years ago. Up to the time I switched, I occasionally tested a few Linux distros, and repeatedly came to the conclusion that drivers were still an issue and that Linux wasn't ready for the average user's desktop.
Not so anymore, since about 2009. From then on, the only device I encountered without a Linux driver (there are more out there, just not widespread enough to matter) was an HP printer. Which is why I stopped using Windows altogether.
My experience since then? Windows is a royal PITA to use and maintain. Linux, with KDE as the desktop environment, isn't just faster, it's way friendlier for users. One example, which to me is huge: on Windows, you need to run the updater of every f..ing software provider from which you have purchased an app. On Linux, there's just one updater for everything. Another one: on Windows, even with all the discount programs for students and others, you have to spend thousands of dollars to get equivalents of all the apps you get for free when you select "developer workstation" in a Linux installer.
Agreed, games are the only thing that Linux doesn't yet cover as well as Windows - both development and play. However, wanna bet that Linux will tip the balance in this area too, in at most five years?
And here: https://news.ycombinator.com/item?id=5689731 :)
Exactly the same has happened - lots of new hires. Bad management. Really silly review process. Features are valued over fixing things.
There's no mentorship process for said new hires. This has obvious flow-on effects.
The old timers don't get promoted into management but end up fixing more and more bugs (because they're the ones that know stuff well enough to fix said bugs.) They get frustrated and leave, or they just give up and take a pay check.
The management values "time to deliver" over any kind of balance with "fix existing issues", "make things better", "fix long standing issues", "do code/architecture review to try and improve things over the long run."
They're going to get spanked if anyone shows up in their marketspace. Oh wait, but they're transitioning to an IP licensing company instead of "make things people buy." So even if they end up delivering crap, their revenue comes from licensing patents and IP rather than making things. Oops.
Thank god there's a startup mentality in the bay area.
The links that you posted in support of your claim are irrelevant IMO.
Compiling has nothing whatsoever to do with Windows internals. You're comparing Visual Studio, a full-fledged IDE with dozens of extra threads doing source indexing, code completion/help indexing, and plenty of other things that gcc does not do. To make a fair comparison you would have to compare just cl.exe against gcc with a bunch of makefiles (yes, you can have makefiles on Windows too).
Then your "really concrete and technical" example is actually a bug in Windows Vista that was fixed around 6 years ago.
And your claim about MFT fragmentation sounds bizarre, to be honest. Since Vista, the OS internally runs a scheduled task that performs a partial defrag to take care of it. I'm not sure what went wrong in your case.
I'm not saying you imagined the slowness; I believe you experienced what you said. So let's test your theory. Since you can write code that runs slower only on Windows, give us some C code that runs horribly on Windows.
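Not that I expect it to settle anything, but here's the kind of minimal, compiler-agnostic harness I'd use to actually test such a claim: it times heavy small-file metadata churn, one of the workloads most often reported as slower on NTFS than on ext4. This is a sketch, not a rigorous benchmark; the file count and names are arbitrary, and you'd want repeated runs with caches warmed.

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Times NFILES create/write/close/delete cycles in `dir` and returns
 * elapsed wall-clock seconds. Compile the same source with cl.exe and
 * with gcc so the comparison is compiler-to-compiler, not IDE-to-gcc. */
#define NFILES 1000   /* arbitrary; raise it for a longer, steadier run */

double bench_small_files(const char *dir)
{
    char path[512];
    struct timespec t0, t1;
    timespec_get(&t0, TIME_UTC);              /* C11 wall-clock stamp */
    for (int i = 0; i < NFILES; i++) {
        snprintf(path, sizeof path, "%s/bench_%d.tmp", dir, i);
        FILE *f = fopen(path, "w");
        if (!f) { perror("fopen"); exit(1); }
        fputs("hello\n", f);
        fclose(f);
    }
    for (int i = 0; i < NFILES; i++) {        /* clean up after ourselves */
        snprintf(path, sizeof path, "%s/bench_%d.tmp", dir, i);
        remove(path);
    }
    timespec_get(&t1, TIME_UTC);
    return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
}
```

Run it in an empty directory on both systems; if the Windows number is consistently a multiple of the Linux number on the same hardware, that's data rather than anecdote.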
What you did is try to move the goal posts.
I guess the only solution is to tag my comments "Hey this is a casual conversational comment" or else people read too much into the wording.
Changes in Windows 8 were smaller, but it did for example improve the efficiency of memory allocation:
Windows 7 and 8 both have lower system requirements than Vista while offering more features. That fact was widely advertised and acknowledged. Sure, not everything was improved, but it's not true that MS never fixes things for better performance. They simply have different business priorities, such as supporting tablets in Windows 8 rather than supporting 1000 Mbps.
One could argue a better decision would've been to pick Core 2 and/or 3, but there's nothing to guide the scheduler to make this decision.
But it's not that "DPCs" are a design flaw. It's the way that Windows drivers have been encouraged by Microsoft to be written. You'll see most Windows drivers have an ISR and DPC. If you look at IOKit (Mac's driver framework), almost all drivers run at the equivalent of Passive Level (IRQL 0) outside of the ISR/DPC path -- the OS makes sure of that.
Because Windows driver devs are encouraged to write ISR/DPC code, and because this code runs at high IRQL, bugs and inefficiencies in drivers show a much larger performance degradation. And when you buy a shitty $0.01 OEM NIC, and you have to do MAC filtering, L2 layering, checksum validation, and packet reconstruction in your DPC, plus there's no MSI and/or interrupt coalescing, you're kind of f*cked as a Windows driver.
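To make that per-packet work concrete: this is the standard 16-bit ones'-complement Internet checksum (RFC 1071), in plain portable C rather than any real driver's code. With no checksum offload on the NIC, the host driver ends up running a loop like this for every received packet, and on Windows that typically happens inside the DPC at DISPATCH_LEVEL, where it delays everything else queued on that core.

```c
#include <stddef.h>
#include <stdint.h>

/* RFC 1071 ones'-complement checksum over `len` bytes.
 * Sums 16-bit big-endian words, folds the carries back in,
 * and returns the complement. */
uint16_t inet_checksum(const uint8_t *data, size_t len)
{
    uint32_t sum = 0;
    while (len > 1) {
        sum += (uint32_t)data[0] << 8 | data[1];
        data += 2;
        len -= 2;
    }
    if (len == 1)                 /* odd trailing byte, zero-padded */
        sum += (uint32_t)data[0] << 8;
    while (sum >> 16)             /* fold carries into the low 16 bits */
        sum = (sum & 0xFFFF) + (sum >> 16);
    return (uint16_t)~sum;
}
```

Cheap silicon that can't do this (or the L2 filtering) in hardware pushes the cost onto the host CPU, which is exactly the degradation being described.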
Also, w.r.t. your other point, Threaded DPCs do exist (since Vista) which run at PASSIVE_LEVEL.
As for Threaded DPCs, not only does nobody use them in real life, but they MAY run at PASSIVE. The system still reserves the right to run them at DPC level.
Really the only way out of the mess is Passive Level Interrupts in Win8... Which likely nobody outside of ARM ecosystem partners will use.
Though, as a user of badly written drivers, I'm totally f*cked. It's too bad the OS design does not let the user control any aspect of this (well, apart from MaximumDpcQueueDepth).
- Typical driver dev: Knows nothing about DPC Importance Levels, and sticks with medium (default): IPIs are not sent to idle cores, so device experiences huge latencies as the DPC targeted to core 7 never, ever, gets delivered.
- Typical driver dev 2: Hears about this problem and learns that High/MediumHigh Importance DPCs cause an IPI to be delivered even to idle cores: wakes up every core in your system round-robin as part of his attempt to spread/reduce latencies, killing your battery life and causing IPI pollution.
Now I hear you saying: "But Alex, why not always target the DPC only to non-idle cores?". Yeah, if only the scheduler gave you that kind of information in any sort of reliable way.
Really, this is clearly the job of the OS. As it stands now, targeting DPCs on your own is a "f*cked if you do, f*cked if you don't" proposition.
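A toy model of the dilemma, with invented names and deliberately simplified rules (not the real NT dispatcher): a medium-importance DPC sends no IPI to an idle target core, so it just sits queued, while a high-importance one always wakes its target at a power cost.

```c
#include <stdbool.h>

/* Simplified stand-ins for DPC importance levels and per-core state. */
enum importance { DPC_MEDIUM, DPC_HIGH };

struct core { bool idle; int wakeups; };

/* Returns true if the DPC runs promptly on `target`.
 * Medium importance to an idle core: no IPI, so the DPC waits until
 * the core wakes for some unrelated reason (typical dev #1's latency).
 * High importance: the IPI wakes the core unconditionally, burning
 * battery (typical dev #2's round-robin spray). */
bool deliver_dpc(struct core *target, enum importance imp)
{
    if (target->idle) {
        if (imp == DPC_MEDIUM)
            return false;         /* queued, not delivered: huge latency */
        target->idle = false;     /* IPI wakes the core... */
        target->wakeups++;        /* ...at a battery-life cost */
    }
    return true;
}
```

Either branch loses something, which is the whole point: only the scheduler knows which cores are worth waking, and it doesn't share.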
You do get a few more variables you can play with as a user, but changing them will usually cause worse problems than it solves. Many drivers take dependencies on the default settings :/
Honestly... I've spent countless hours hunting down bad drivers to fix audio stutter and other crap on my gaming PC. I've finally got DPC latency under 10 microseconds and I'm not touching a thing :)
It was just a simple example of something that could change for the better in terms of performance, but that probably won't because it's a significant amount of code churn with little $$$ benefit.
I was honestly surprised that a core-balancing algorithm was added in Windows 7, but that was done at the last second (RC2) and by a very senior (while young) dev that had a lot of balls and respect. Sadly he was thanked by being promoted into management, as is usual with Microsoft.
The challenge of course is priority inversion, which is to say that low priority thread A gets started on an otherwise idle processor and then higher priority thread B wants to run, except there isn't a 'tick' where the processor periodically checks that the highest priority thread is running. Now your low priority thread keeps running in preference to the high priority thread.
You can finesse some of that by interrupting on thread state changes (which has its own issues, since sometimes threads have to run to know they want to change their state), but you're still stuck somewhere between ticks-while-threads-sleep and fully tickless. Not surprisingly, it's kind of like building asynchronous hardware in hardware description languages.
I'm glad to see the experiments continue.
Clever system but very very very difficult to maintain.
(I suppose I sort of don't get why this is a big deal - theoretically, I mean. Of course changing a system as widely-used as Linux is probably a challenge itself.)
As for checking for something happening, do you need to poll for this? I don't think it's necessary in the general case. Perhaps there is some broken hardware that doesn't support interrupts, but I have a hard time believing that's very common. Aside from that, nothing ever happens except by request, and (if structured right - not that I'm suggesting this is automatically easy) polling is unnecessary.
You still need a timer, I think, but it doesn't have to be a fixed frequency. It gets set to the time slice period when each new thread is scheduled, and if it fires before the thread has given up its time slice then a new thread is selected. (Ideally this behaviour should be relatively rare.)
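Sketching that as code, as a toy state machine rather than anything out of a real kernel: arm a one-shot timer when a thread is dispatched, cancel it if the thread yields first, and only preempt if the timer actually fires.

```c
#include <stdbool.h>

/* One-shot time-slice timer: armed per dispatch, never periodic. */
struct slice_timer { bool armed; long expires_at; };

void dispatch_thread(struct slice_timer *t, long now, long slice)
{
    t->armed = true;              /* arm for one slice, not a fixed tick */
    t->expires_at = now + slice;
}

void thread_yielded(struct slice_timer *t)
{
    t->armed = false;             /* gave up the slice early: cancel */
}

/* Called from the timer interrupt path; true means "pick a new thread". */
bool timer_fired(struct slice_timer *t, long now)
{
    if (t->armed && now >= t->expires_at) {
        t->armed = false;
        return true;              /* slice exhausted: preempt */
    }
    return false;
}
```

In the well-behaved case the cancel path runs and the timer never fires, which matches the "ideally this behaviour should be relatively rare" point.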
However, this also means that the kernel is (in some ways) optimized for more server/super computer type stuff which prefers throughput over latency, at the expense of desktop/smartphone use, which prefers low latency over throughput.
Of course, in many cases there is an option between approaches (either at compile or run time). In the case of the scheduler, however, they decided not to support multiple implementations in the mainline. This leaves us with the highly complicated implementation suited to high-end computers, and not the simpler Brain Fuck Scheduler (BFS), which shows minor improvements in desktop use.
Of course, using BFS is only a patch away, and several desktop distributions do use it.
There was a periodic timer process on the Prime that ran at the highest priority. It was used for things like waking up driver processes periodically, in case an IO device was hung and didn't interrupt like it should have, and for waking up processes that were sleeping until a certain time had passed (the sleep call).
I think it was advanced for its time. If you want to read more about it, here's a link to the Prime System Architecture Guide:
And! Yes! Finally. Our 'isolated' cores are finally ours. Bare metal. No more jitter. Thank you for everyone who put that together. Your efforts are really appreciated!
On my system, maybe that 1% improvement doesn't mean very much.
But when you add up all the systems in the world that are running Linux, and think about how much electricity is used by them or how many personal experiences they are mediating, it really adds up into something worthwhile.
The curious question is: at what point does it become not worthwhile? 1% is maybe worthwhile. But 0.5%? 0.01%?
There is always someone who will want to do it if only to show they can. You only need to care about "worthwhile" if you're balancing it against other concerns.
Obviously people are generally going to be motivated to smash the larger ones first. But Linux doesn't run like a centralized project where developers are directed on what to prioritize.
The more cores/machines you have, the more this savings means. The threshold "is it worth it?" percentage depends on how many machines saved is worth an engineer's time to do the optimization.
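That break-even point is simple arithmetic. A sketch, with every number a made-up placeholder:

```c
/* How many machines must benefit from a small efficiency win before it
 * repays the one-time engineering cost? Inputs: dollars saved per
 * machine per year, and the cost of the engineer's optimization work. */
long break_even_machines(double saved_per_machine_year,
                         double engineer_cost)
{
    long machines = (long)(engineer_cost / saved_per_machine_year);
    /* round up: a fraction of a machine doesn't pay any bills */
    if ((double)machines * saved_per_machine_year < engineer_cost)
        machines++;
    return machines;
}
```

For example, at $3 saved per machine per year (say 1% of a $300 power-and-cooling bill), a $30,000 engineering effort breaks even at 10,000 machine-years; a fleet the size of a large cloud provider's recoups it almost immediately.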
The "subscriber link" mechanism allows an LWN.net subscriber to generate a
special URL for a subscription-only article. That URL can then be given to
others, who will be able to access the article regardless of whether they
are subscribed. This feature is made available as a service to LWN
subscribers, and in the hope that they will use it to spread the word about
their favorite LWN articles.
If this feature is abused, it will hurt LWN's subscription revenues and
defeat the whole point. Subscriber links may go away if that comes about.
Emphasis mine. Can you make a case for it being "abusive" in the context of the full text? I take it to mean generating an excessive number of links, not posting one link to a wide audience.
Certainly I don't want large amounts of our content to be distributed this way, but an occasional posting that puts an LWN article at #1 on HN is going to do us far more good than harm.
(That said, I do appreciate your concern!)
I think that this may convince me that I need to move it up higher on my list of priorities!
I read the kernel page and front page religiously, and read bits of the others.
I can't imagine how many things I've learned following the kernel's development. But the mailing lists are HUGE and thus hard to follow. LWN makes keeping up possible.
Separately, there's pretty much no better linux news source, and imo subscription is worthwhile.
Anyway, I've heard rumors that Windows 8 has this, but I can't find a good source on that (admittedly, I'm spoiled with GNU/Linux where it does not take much work to find this stuff).
I just pieced this together, so take it with a grain of salt. Following the e-mail at , I guessed that Frederic Weisbecker was running this project. Googling him brought me to his GitHub profile, which indicates he is with Red Hat. Anyone who actually knows what is going on, please confirm or correct this.