
Why is Windows so slow? - kristianp
http://games.greggman.com/game/why-is-windows-so-slow/
======
portman
I'm disappointed HN! There is a lot of pontificating, but not much science
here.

It takes all of 2 minutes to try this experiment yourself (plus ~8 minutes for
the download).

1\. Download chromium: <http://chromium-browser-source.commondatastorage.googleapis.com/chromium_tarball.html>

2\. Unzip to a directory

3\. Create this batch file in the src directory; I called mine "test.bat":

    
    
        echo start: %time% >> timing.txt
        dir /s > list.txt
        echo end: %time% >> timing.txt
    

4\. Run test.bat from a command prompt, twice.

Paste your output in this thread. Here is mine:

    
    
        start: 12:00:41.30 
        end: 12:00:41.94 
        start: 12:00:50.66 
        end: 12:00:51.31 
        

First pass: 640ms; Second pass: 650ms

I can't replicate the OP's claim of 40000ms directory seek, even though I have
WORSE hardware. Would be interested in other people's results. Like I said, it
only takes 2 minutes.
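
For the Linux side of the comparison, the rough equivalent would be something like this (a sketch, not part of the steps above; 'ls -R' only approximates what 'dir /s' does):

    
    
        # run from the same src directory, twice, to get a warm-cache number
        time ls -R > list.txt
    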

~~~
coverband
Unfortunate that this comment doesn't get enough attention in this thread... A
single person's experience will always be subjective (even with the provided
technical detail). If it's not consistently repeatable, it can only be used as
an anecdote.

~~~
ajross
I'm not sure what your point is. The post you're replying to is no less
anecdotal. Obviously there are complicated performance interactions afoot
here. In the OP's opinion (and mine) Windows is littered with these kinds of
booby traps. Things usually work fast ... until they don't, and you have to
dig hard to figure out why.

For the most part, Linux just doesn't do that. Obviously there are performance
bugs, but they get discussed and squashed pretty quickly, and the process is
transparent. On the occasions I'm really having trouble, I can generally just
google to find a discussion on the kernel list about the problem. On Windows,
we're all just stuck in this opaque cloud of odd behavior.

~~~
barrkel
If something is slow with respect to the OS, you can often break out procmon
and examine the stacks of the poorly performing requests; or if something is
stalled, you can do the same with procexp. With dbghelp configured correctly,
you get symbol resolution in the OS libraries, so you can see what's going on.
Worst comes to worst, you can step through the disassembly with a debugger,
but it's not often required.
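
Getting dbghelp configured is mostly a matter of pointing it at Microsoft's public symbol server, something like this (a sketch; the cache directory is arbitrary):

    
    
        rem used by procmon/procexp/windbg for OS symbol resolution
        set _NT_SYMBOL_PATH=srv*C:\symbols*http://msdl.microsoft.com/download/symbols
    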

When I have problems with Linux, I tend to have to fall back to strace or an
equivalent, and I find it harder to figure out what's going on. On Solaris, if
I can find the right incantations to use with dtrace, I can see where the
problems are, but it's easy to get information overload.

My point is, how opaque your perspective is depends on your familiarity with
the system. I have less trouble diagnosing holdups on Windows than I do on
other systems. That's because I've been doing it for a long time.

------
hristov
Interestingly enough Joel Spolsky mentioned something related to the directory
listing problem more than 10 years ago. See:

<http://www.joelonsoftware.com/articles/fog0000000319.html>

In Joel's opinion it is an algorithm problem. He thinks that there is an
O(n^2) algorithm in there somewhere causing trouble. And since one does not
notice the O(n^2) unless there are hundreds of files in a directory it has not
been fixed.

I believe that is probably the problem with Windows in general. Perhaps there
are a lot of bad algorithms hidden in the enormous and incredibly complex
Windows code base and they are not getting fixed because Microsoft has not
devoted resources to fixing them.

Linux, on the other hand, benefits from the "many eyes" phenomenon of open source: when anyone smart enough notices slowness in Linux, they can simply look in the code and find and remove any obviously slow algorithms. I am not sure all open source software benefits from this, but if any open source software does, it must certainly be Linux, as it is one of the most widely used and discussed pieces of open source software.

Now, this is total guesswork on my part, but it seems the most logical conclusion. And by the way, I am dual-booting Windows and Linux and keep noticing all kinds of weird slowness in Windows. Windows keeps writing to disk
all the time even though my 6 GB of RAM should be sufficient, while in Linux I
barely hear the sound of the hard drive.

~~~
barrkel
I don't think there's an O(n^2) algorithm in there. I just created a directory
with 100,000 entries. Listing it (from Cygwin, no less, using 'time ls | wc')
takes 185 milliseconds. The directory is on a plain-jane 7.2k 1TB drive,
though of course it's hot in cache from having been created. 'dir > nul', mind
you, is quite a bit slower, at over a second.

~~~
prewett
You only did one test, so you have no idea what the complexity curve is. Do at least three tests, with 1,000, 10,000 and 100,000 entries, and graph the results. Three tests is still pretty skimpy for figuring out what the curve is, so do tests at 10 different sizes.
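
Something like this batch sketch would generate the data points (names like testN and fN.txt are arbitrary; compute the start/end deltas yourself):

    
    
        @echo off
        setlocal enabledelayedexpansion
        rem create directories of increasing size, then time a listing of each
        for %%n in (1000 10000 100000) do (
            mkdir test%%n
            for /L %%i in (1,1,%%n) do @type nul > test%%n\f%%i.txt
            echo n=%%n start !time!
            dir test%%n > nul
            echo n=%%n end !time!
        )
    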

Also, Joel's complaint was about the Windows Explorer GUI (specifically,
opening a large recycle bin takes hours). Cygwin `ls` is using a completely
different code path. Your experiment does suggest that Joel's problem is in
the GUI code, though, and not the NTFS filesystem code.

~~~
barrkel
Oh, the OS treeview is dreadful, everyone who's seriously coded on Windows
knows that.

As to actual complexity curve (which, knowing what I do about NTFS, I'm fairly
sure is O(n log n)), I don't really care about it; since it hasn't shown up in
a serious way at n=100000, it's unlikely to realistically affect anyone badly.
Even if 1 million files (in a single directory!) took 18.5 seconds, it
wouldn't be pathological. Other limits, like disk bandwidth and FS cache size, seem like they'd kick in sooner.

------
tankenmate
The problem, as I understand it, is that Windows's file metadata cache is broken. I remember reading a posting by Linus about this many years ago, but I can't find it at the moment.

According to this document
(<http://i-web.i.u-tokyo.ac.jp/edu/training/ss/lecture/new-documents/Lectures/15-CacheManager/CacheManager.ppt>) it would appear that
directory entries have one extra level of indirection and share space with the
page cache and hence can be pathologically evicted if you read in a large
number of files; compiling/reading lots of files for example.

On Linux however the directory entry cache is a separate entity and is less
likely to be evicted under readahead memory pressure. It should also be noted that Linus has spent a largish amount of effort making sure that the
directory entry cache is fast. Linux's inode cache has similar resistance to
page cache memory pressure. Obviously if you have real memory pressure from
user pages then things will slow down considerably.
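
On the Linux side you can actually watch those caches, which makes this easy to test (a sketch; the drop_caches write needs root):

    
    
        # watch the dentry and inode slab caches
        grep -E 'dentry|inode_cache' /proc/slabinfo
        # drop dentries+inodes to compare cold vs. warm directory walks
        echo 2 > /proc/sys/vm/drop_caches
    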

I suspect that if Windows implemented a similar system, with a file metadata cache separate from the rest of the page cache, it would similarly speed up.

Edit: I should note that this probably wouldn't affect linking as much as it would
affect git performance; git is heavily reliant on a speedy and reliable
directory entry cache.

------
evmar
I don't know this poster, but I am pretty familiar with the problem he's
encountering, as I am the person most responsible for the Chrome build for
Linux.

I (and others) have put a lot of effort into making the Linux Chrome build
fast. Some examples are multiple new implementations of the build system (<http://neugierig.org/software/chromium/notes/2011/02/ninja.html>), experimentation with the gold linker (e.g. measuring and adjusting the still off-by-default thread flags, <https://groups.google.com/a/chromium.org/group/chromium-dev/browse_thread/thread/281527606915bb36/>) as well as digging into bugs in
it, and other underdocumented things like 'thin' ar archives.

But it's also true that people who are more of Windows wizards than I am a
Linux apprentice have worked on Chrome's Windows build. If you asked me the
original question, I'd say the underlying problem is that on Windows all you
have is what Microsoft gives you and you can't typically do better than that.
For example, migrating the Chrome build off of Visual Studio would be a large
undertaking, large enough that it's rarely considered. (Another way of
phrasing this is it's the IDE problem: you get all of the IDE or you get
nothing.)

When addressing the poor Windows performance, people first bought SSDs,
something that never even occurred to me ("your system has enough RAM that the
kernel cache of the file system should be in memory anyway!"). But for
whatever reason, on the Linux side some Googlers saw fit to rewrite the
Linux linker to make it twice as fast (this effort predated Chrome), and all
Linux developers now get to benefit from that. Perhaps the difference is that
when people write awesome tools for Windows or Mac they try to sell them
rather than give them away.

Including new versions of Visual Studio, for that matter. I know that Chrome
(and Firefox) use older versions of the Visual Studio suite (for technical
reasons I don't quite understand, though I know people on the Chrome side have
talked with Microsoft about the problems we've had with newer versions), and
perhaps newer versions are better in some of these metrics.

But with all of that said, as best as I can tell Windows really is just really
slow for file system operations, which especially kills file-system-heavy
operations like recursive directory listings and git, even when you turn off
all the AV crap. I don't know why; every time I look deeply into Windows I get
more afraid (<http://neugierig.org/software/chromium/notes/2011/08/windows-hookers.html>).

~~~
fleitz
You can build almost any Visual Studio project without using Visual Studio at all. Visual Studio project files are also MSBuild files. I've set up lots of build machines sans Visual Studio; projects build just fine without it.

MSBuild does suck in that there is little implicit parallelism, but you can hack around it. I have a feeling that the Windows build slowness probably comes from that lack of parallelism in MSBuild.
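
For what it's worth, the hack is mostly just the /m switch (the solution name here is hypothetical):

    
    
        rem /m (aka /maxcpucount) builds independent projects in parallel
        msbuild my_solution.sln /m:4 /p:Configuration=Release
    

For C++ projects there is also the compiler's /MP switch (the MultiProcessorCompilation property), which, as far as I know, parallelizes compilation within a single project, and the two compose.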

As for directory listings, it may help to turn off atime and, if it's a laptop, to enable write caching to main memory. I'm not quite sure why Windows file system calls are so slow; I do know that NTFS supports a lot of neat features that are lacking on ext file systems, like auditing.

As for the bug mentioned, it's perfectly simple to load the wrong version of libc on Linux, or to hook kernel calls the wrong way. People hook calls on Windows because the kernel is not modifiable and has a strict ABI; that's a disadvantage if you want to modify the behavior of Win32 / kernel functions, but a huge advantage if you want to write, say, graphics drivers and have them work after a system update.

Microsoft doesn't recommend hooking Win32 calls for the exact reasons outlined in the bug: if you do it wrong, you screw stuff up. On the other hand, rubyists seem to love the idea that you can change what a function does at any time. I think they call it 'dynamic programming'. I can make lots of things crash on Linux by patching ld.so.conf so that a malware version of libc is loaded. I'd hardly blame the design of Windows when malware has been installed.

Every OS/kernel involves design trade-offs; not every trade-off will be productive for a given use case.

~~~
hkarthik
Regarding MSBuild, the biggest problem I had with it is that if you built
projects with Visual Studio, using most of the standard tooling for adding
references and dependencies, you'd often be left with a project that built
fine with Visual Studio, but had errors with MSBuild.

The reverse, incidentally, was usually okay. If you could build it with
MSBuild, it usually worked in Visual Studio unless you used a lot of custom
tasks to move files around.

I personally believe the fact that Visual Studio is all but required to build
on Windows is one of the single most common reasons you don't see much OSS
that is Windows friendly aside from those that are Java based.

~~~
shadowfox
> I personally believe the fact that Visual Studio is all but required to
> build on Windows is one of the single most common reasons you don't see much
> OSS that is Windows friendly aside from those that are Java based

You don't necessarily have to use VS to develop on Windows. MinGW works quite well for a lot of cross-platform things: it is gcc, and it works with GNU make.

My experience with porting OSS between Windows and Linux (both ways) has been
that very few developers take the time out to encapsulate OS specific routines
in a way that allows easy(ier) porting. You end up having to #ifdef a bunch of
stuff in order to avoid a full rewrite.

This is not a claim that porting is trivial. You do run into subtle and not-so-subtle issues anyway. But careful design can help a lot. Then again, this requires that you start out with portability in mind.

~~~
helmut_hed
I like to make multi-platform code, and I do it with CMake, Boost, and Qt. My
target platforms are Linux/g++ and Visual Studio (not mingw). It usually works
OK after a little tweaking, but you have to maintain discipline on whichever
system you're coding on, and not use nonportable pragmas etc.

------
shin_lao
NTFS is a slower file system; that's probably the main reason why. Also, console I/O is much better on Linux than on Windows.

Our software builds every day on FreeBSD, Linux and Windows, on servers that are identical.

The Windows build takes 14 minutes. The FreeBSD and Linux builds take 10 minutes (they run at almost identical speed).

Checkout is more than twice as slow on Windows (we use git).

Debug build time is comparable: 5 minutes for Windows, 4 minutes 35 seconds on Linux.

Release build time is almost 7 minutes on Windows and half that on Linux.

VS compiles more slowly than gcc, but overall it's a better compiler. It handles static variables better and is not as demanding about typenames as gcc is. Also, gcc is extremely demanding in terms of memory. gcc is a 64-bit executable, while Visual Studio is still a 32-bit executable. We hope Microsoft will fix that in Visual Studio 2011.

It's easier to parallelize gmake than Visual Studio, which also explains the better Linux build time. Visual Studio has some weird "double level" multithreading which ends up being less efficient than just running the make steps in parallel as you go through your makefile.
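
With gmake it's just (a sketch):

    
    
        # one job per core; nproc is from coreutils
        make -j$(nproc)
    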

However, our tests run at comparable speed on Linux and Windows, and Windows builds the archive ten times faster than Linux does.

~~~
omellet
There's a 64-bit version of the MSVC toolchain and MSBuild, so if you build
outside of Visual Studio you won't be so constrained. This is how we do our
builds here at work (a mix of C# and C++). We still edit code in VS, but local
builds and continuous integration are done entirely using MSBuild. As of
VS2010, C++ project files are MSBuild projects, and no longer need to use
VCBuild.exe.

~~~
shin_lao
I didn't know about the 64-bit toolchain, where can you get it?

~~~
omellet
It's an optional install package when you install Visual Studio.

~~~
shin_lao
We have it installed; do you have any references on how to use it from a build process?

~~~
omellet
If you run the Visual Studio x64 tools command prompt from the start menu, it
will set up the environment to have the 64-bit toolchain in your path.
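
In a build script you can do what that shortcut does by calling vcvarsall.bat yourself (a sketch; the VS 2010 path is assumed, adjust for your version):

    
    
        rem "amd64" selects the native 64-bit tools, "x86_amd64" the cross tools
        call "%ProgramFiles(x86)%\Microsoft Visual Studio 10.0\VC\vcvarsall.bat" amd64
        msbuild my_solution.sln /m
    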

~~~
shin_lao
Thanks, but we've been struggling to build using the amd64 toolchain from Visual Studio.

------
etfb
Someone posted the question on StackOverflow and it got closed as "not
constructive". Is there a way to browse the "not constructive" questions on
SO? They seem to be all the best ones.

~~~
Mithrandir
Ref: <http://stackoverflow.com/questions/6916011/how-do-i-get-windows-to-go-as-fast-as-linux-for-compiling-c> (By the author)

Top Answer:

Unless a hardcore windows systems hacker comes along, you're not going to get
more than partisan comments (which I won't do) and speculation (which is what
I'm going to try).

1\. File system - You should try the same operations (including the dir) on the same filesystem. I came across this, which benchmarks a few filesystems for various parameters.

2\. Caching. I once tried to run a compilation on Linux on a ramdisk and found that it was slower than running it on disk, thanks to the way the kernel takes care of caching. This is a solid selling point for Linux and might be the reason why the performance is so different.

3\. Bad dependency specifications on Windows. Maybe the chromium dependency specifications for Windows are not as correct as those for Linux. This might result in unnecessary compilations when you make a small change. You might be able to validate this by using the same compiler toolchain on Windows.

~~~
buster
Why should he run the ls command on NTFS rather than a native file system? All in all it was a "Windows vs. Linux" test and not a filesystem test. Testing the same filesystem wouldn't make sense here.

~~~
jharsman
Presumably to find out whether the difference lies with the filesystem or
somewhere else?

If Linux is still much faster, even with the same filesystem, you have
eliminated one variable.

~~~
buster
Doing some profiling and system/kernel-level analysis would be much saner, imo. What's the sense in measuring how some non-native filesystem behaves? In the end you'll be benchmarking fuse-ntfs vs. in-kernel ext4 and concluding that it's slower... I say, profile some code and see how much time is spent in filesystem calls.

------
blinkingled
1) Windows FS operations are slower than Linux's in general, but when you add 'realtime' antivirus on top it gets worse.

2) Linux forks significantly faster than anything else I know. For something like Chromium the compiler is forked a bazillion times, and so are the linker and nmake and so on.

3) Linux, the kernel, is heavily optimized for building stuff, as that's what the kernel developers do day in and day out - there are threads on LKML that I can't be bothered to dig out right now, but a lot of effort goes into optimizing for the kernel-build workload - maybe that helps.

4) Linker - the stock one is the slower one, but it did not do the more costly optimizations until now, so it might still be faster than the MS linker simply by doing less; the MS linker does incremental linking, WPO and what not. Gold is even faster, and I may be wrong, but I don't think it does what the MS linker does either.

5) Layers - I don't know if Cygwin tools are involved, but they add their own slowness.

------
prewett
I suspect it has something to do with NTFS updating access times by default. Every time you do anything with a file, its access time gets updated (not the modification time, the access time). I don't have Windows to test on, but you could try the suggestions [1][2] below.

[1] <http://msdn.microsoft.com/en-us/library/ms940846(v=winembedded.5).aspx>

[2] <http://oreilly.com/pub/a/windows/2005/02/08/NTFS_Hacks.html> (#8)
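
The registry tweak in [1] has a one-line equivalent these days (a sketch; needs an elevated prompt and, as far as I know, a reboot to fully take effect):

    
    
        rem turn off last-access-time updates on NTFS
        fsutil behavior set disablelastaccess 1
        rem check the current setting
        fsutil behavior query disablelastaccess
    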

~~~
shintoist
But most Linux distros do this by default as well. You have to mount with
noatime to get rid of it.

~~~
wazoox
In fact, nowadays most (all?) Linux filesystems use relatime by default, which carries most of the advantages of both atime and noatime. See <http://kerneltrap.org/node/14148>
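
You can also switch a mounted filesystem over without rebooting (a sketch; the mount point is assumed):

    
    
        # see what /home is currently mounted with
        mount | grep ' /home '
        # remount with noatime on the fly
        mount -o remount,noatime /home
    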

I don't know if something similar exists under Windows (I suppose it doesn't).

------
ervvynlwwe
The author doesn't mention whether he is using Cygwin git or msys git. msys is faster. But even with msys, UAC virtualization is a common cause of slowness with git: <http://stackoverflow.com/questions/2835775/msysgit-bash-is-horrendously-slow-in-windows-7>

More details here: <http://code.google.com/p/msysgit/issues/detail?id=320>

~~~
obtu
Sure, Cygwin does a bit of extra wrapping. The official site and just about everybody else link to msysgit, however.

------
johnx123-up
FWIW, try disabling your AV

~~~
einhverfr
I have found Windows a bit slower, but never to the extent the author suggests. But then, I have generally tested on clean systems without antivirus. So I suspect that this is a huge factor, especially because it intercepts all FS calls and checks them. That's really not what you want when compiling code.

------
barrkel
Probably a big reason he's seeing slowdowns in incremental builds with MSVC is link-time code generation. What seems to be link time is actually code generation time; it's delayed so that interprocedural optimizations can be run. This kills off a lot of the benefit of incremental building - you're basically only saving parsing and type analysis - and redoing a lot of code generation work for every modification.
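
The difference is visible in the flags (a sketch with hypothetical file names; /GL defers code generation to link time, where /LTCG performs it):

    
    
        cl /c /O2 /GL foo.cpp
        link /LTCG foo.obj
        rem for faster incremental dev builds, drop /GL and /LTCG:
        cl /c /O2 foo.cpp
        link /INCREMENTAL foo.obj
    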

NTFS also fragments very badly when free space is fragmented. If you don't liberally use SetFilePointer / SetEndOfFile to preallocate, it's very common for large files created incrementally to end up with thousands, or tens of thousands, of fragments. Lookup (rather than listing) on massive directories can be fairly good, though - B-trees are used behind the scenes - presuming that the backing storage is not fragmented; again, not a trivial assumption without continuously running a semi-decent defragmenter like Diskeeper.

------
markokocic
I'm not sure if it is related, but the fact is that file system operations on Windows are much slower than on Linux. I remember that copying a large ISO image from one Windows partition to another Windows partition using Total Commander under Wine on Linux was faster than doing it directly on Windows.

I also remember that I was able to write a file copy utility in assembly, as a homework assignment, that was a couple of times faster than the Windows/DOS copy command.

The only two reasons I can think of that explain this are: 1 - no one cares about Windows filesystem performance; 2 - someone decided that it shouldn't be too fast.

~~~
m_for_monkey
In Total Commander you can configure the buffer sizes used while copying. Maybe in your homework you chose the right buffer size too (and, of course, asm is fast but hard to write; I'm sure you didn't bother too much with error checking and other "small" problems).

Moreover, the optimal buffer size is different for small and large files; maybe Windows is not optimized for large files like a DVD image.

~~~
markokocic
As for the Total Commander example, that was an out-of-the-box experience, without any tweaks. I just wanted to point out that even when using an emulation layer to access Windows's native filesystem type, Linux was significantly faster at file system operations.

As for my homework "copy" command, I know that it is not a full replacement for the Windows file copy command, but if the copy operation takes >10 min, all those checks and additional tasks shouldn't make an IO-bound operation take a couple of times longer than what some student implemented as homework.

------
niyazpk
Here is a link from the comments:

NTFS Performance Hacks -
<http://oreilly.com/pub/a/windows/2005/02/08/NTFS_Hacks.html>

~~~
yread
Not sure about the other things, but this

 _The default cluster size on NTFS volumes is 4K, which is fine if your files
are typically small and generally remain the same size. But if your files are
generally much larger or tend to grow over time as applications modify them,
try increasing the cluster size on your drives to 16K or even 32K to
compensate. That will reduce the amount of space you are wasting on your
drives and will allow files to open slightly faster._

is wrong. When you increase cluster size you will definitely not "reduce the amount of space you are wasting": a 100-byte file will still occupy a whole 16KB cluster (so you will waste 15.9KB on it instead of 3.9KB with 4KB clusters).

Also, I would be very careful about taking advice like that from an article which is 6 years old (from before the introduction of Win7 or XP SP3!).

~~~
maaku
That's not the point he's trying to make. Smaller cluster sizes lead to larger amounts of file-system metadata keeping track of where those clusters are laid out on disk, as well as the overhead of generating, accessing, and updating those data structures.

~~~
muyuu
If files are typically bigger than 16KB, 16KB clusters can potentially save space vs. 4KB clusters by needing smaller cluster indices. Not sure if that's the case for NTFS, though.

------
false
Git under Cygwin is so painfully slow, and gets exponentially slower as the number of tracked files grows. Even an SSD can't fully smooth out the difference :(

------
frooxie
What I can't wrap my head around is the amazingly slow file search (I'm using
Vista). Searching for a filename I know exists in a small directory (say, 100
files) often leads to Windows searching for several minutes and then NOT
FINDING THE FILE. How can that happen when Windows is able to list the
contents of the directory (including the file I'm looking for) instantly?

~~~
Lagged2Death
I am also frustrated by the indexed searching in Windows. This should have
been a brilliant signature feature, and it's just execrable.

If you look at the indexing options in the Vista control panel and click the
"Advanced" button, you'll find a dialog box with a "File Types" tab. This
horrible dialog may show you (it did for me) that some file types (i.e.,
filename extensions) are deliberately excluded from indexing. For some reason.
You know, because you may not want to find certain things when you look for
them. I guess.

You'll also find the world's worst interface for specifying what kinds of file
should be indexed by content. But never mind.

If searching by filename and/or path is all you're after, check out
Everything:

<http://www.voidtools.com/>

If you're not using Windows as an Administrator (and you shouldn't be),
Everything won't seem very polished. But it is terrifyingly fast, and it's
baffling that Microsoft's built-in search is this bad if something like
Everything is possible.

------
tintin
Maybe this has something to do with the file indexer? Two years ago I heard a lot of XP users complain that Windows was suddenly getting very slow. After some digging around I noticed that an update had turned the file indexer on by default. Since then I always turn it off (in the properties of your disk) and shut down the service (Indexing Service).

------
jcromartie
Don't forget forking.

To benchmark the maximum shell script performance (in terms of calls to other
tools per second), try this micro-benchmark:

    
    
        while true; do date; done | uniq -c
    

Unix shells under Windows (with Cygwin, etc.) run about 25 times _slower_ than on OS X.

~~~
Mavrik
Well, Unix shells under Windows don't even implement forking, so this test is kinda meaningless, isn't it?

It's like racing cars where one of the cars has its wheels taken off.

~~~
koenigdavidmj
Cygwin has fork. The problem is that Cygwin's fork has to copy the entire
address space into the new process, whereas Linux uses copy-on-write to make
forking much faster (you only need to copy the page tables).
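
A crude way to see the difference (a sketch; numbers vary wildly between Cygwin and native Linux):

    
    
        # /bin/true does almost nothing, so this mostly measures
        # process-spawn (fork+exec) cost
        time sh -c 'for i in $(seq 1 1000); do /bin/true; done'
    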

------
malkia
A possible faster way to read directories in one (okay, at most a few) kernel calls is to use GetFileInformationByHandleEx.

Here is an example:

<https://gist.github.com/1487388>

------
idspispopd
Separate point:

While Photoshop isn't on Linux, there are plenty of replacements for it unless he's doing print work, which I don't think is the case; and Photoshop isn't the beginning and end for print anyway. (Actually, TBH, Photoshop is pretty shit for pixel work.)

Also, Maya is available for Linux; Autodesk just doesn't offer a free trial like they do on Windows/Mac OS. (Including the 2012 edition.)

No offence intended to the 3dsmax crew, as it has its merits, but a sufficiently competent Maya user won't find much use for 3dsmax.

------
chris_gogreen
Hard drive swap file usage: my Windows machines always have a huge swap file going, while my Linux and OS X machines almost never do.

------
peterwwillis
I'm pretty sure the "dir" takes longer than "ls" because "dir /s c:\list.txt"
sorts the entire c:\ drive before looking for "list.txt". "ls -R c:\list.txt"
first checks if "list.txt" exists, and fails if it doesn't. Just take out the
"list.txt" and run both commands again.

------
nikcub
This is a comparison of file systems, not operating systems.

------
WayneDB
I run Windows 7 in Boot Camp every day and it easily outperforms OS X on the
same exact hardware for most common tasks (browsing files, the web, starting
up apps, etc).

The Windows desktop GUI system is more stable than anything else out there
(meaning that it's not going to change drastically AND that it's a solid piece
of software that just works) and it's as flexible as I need it to be, so
that's why I stick with Windows. With virtual machines, WinSCP, Cygwin and
other similar utilities, I have all the access to *nix that I need.

~~~
r00fus
> The Windows desktop GUI system is more stable than anything else out there
> (meaning that it's not going to change drastically AND that it's a solid
> piece of software that just works)

So you assume that Windows 8's Metro mode won't really catch on? Also, comparing OS X to Windows over the past 10 years, it's Windows that has changed more drastically - so both future and past evidence point to the contrary...

~~~
WayneDB
How has Windows changed more drastically than OS X since 2001? To me, drastic means that something very basic has changed and/or compatibility has been lost. (Things like resizable windows, full-screen apps, broken Finder plugin compatibility, changes to Exposé, the addition of Mission Control, etc.)

I don't assume, know or care to know anything about Windows Metro. It's not
replacing the desktop system that I use.

------
jstclair
Wow, so if you want to test _file system_ speeds, you do it by listing files? I know this is just an example, but perhaps it has something to do with the speed of the _terminal_?

There are a plethora of disk benchmarking tools - I doubt that they
consistently show 40x differences.

Hooves -> horses, and all that.

~~~
shabble
Maybe I'm misunderstanding (or he's since changed his post), but:

    
    
        dir /s > c:\list.txt
    

is redirecting it into a file. Where does the speed of the terminal affect that (in
any significant fashion)? I know what you're getting at - tar --verbose can
slow things down for me by sometimes a factor of 2 (for huge tarballs), but I
don't think it's an issue in this situation.

~~~
jstclair
Yes, I saw the redirection. But "dir" is a built-in command in the Windows shell; is the speed of that command a benchmark-able number? Is the point to compare the speed of "ls" vs. "dir", or the underlying OS/file-systems (i.e., POSIX vs. Win32, ext3 vs. NTFS)? If someone tells me that "dir" is slow, I'd agree -- but that, in itself, doesn't imply that the _filesystem_ is slow.

~~~
shabble
True, I agree it's not necessarily a good way to test the filesystem, but it's
only the shell that's being hit, nothing really to do with the terminal.

I pointed it out mainly because terminals _can_ have a significant impact on
performance, because dumping millions of lines a second isn't their intended
purpose,[1] whilst the shell can be reasonably expected to do that.

Having it entirely as a shell built-in is possibly actually better than the equivalent '/bin/ls > somefile', since it doesn't need to context switch back and forth as the stdout buffer fills up and the shell has to write it.

[1] I recall there being a Gentoo-related thread about why "Gentoo-TV" --
having the output of gcc scroll past as your background with a transparent
xterm -- was actually slowing down package builds significantly.

------
vmmenon
"Why is Windows so slow? I’m a fan of Windows, specifically Windows 7."

????

------
brudgers
Overall, the author's argument is somewhat dependent on the premise that Windows 7 should be optimized for edge cases such as compiling code written for multiplatform implementation (e.g. Chrome), rather than for the managed-code model around which Microsoft's development of Windows has been centered for many years.

If one were optimizing Windows performance, none of the specific areas used as
examples would receive much attention given user demographics. What percentage
of Windows users use the command line, much less compile C programs, never
mind using "cmd" shells to do so?

Windows command line gurus will be using PowerShell these days, not the legacy-encumbered "cmd" - otherwise they are not gurus.

------
iradik
Hmm... I've had the opposite experience on an Atom netbook. I tried Windows Vista and Ubuntu on it. The Windows netbook worked great, while Ubuntu would crash regularly; Ubuntu on the netbook was unusable. Now, probably I did something wrong? But I just installed the latest version with default settings. Anyway, I returned the netbook.

~~~
alextingle
Were you running compiles on it??

~~~
iradik
No I was running a web browser and a chat program.

