1. Windows was never designed as a server-side operating system in the first place. Microsoft started pushing it on the server side once they discovered the Internet had huge commercial potential for selling backend machines. Therefore, for a developer, most ways of getting things done programmatically on a Windows machine have to go through some set of GUIs. This sucks from a programmer's perspective; programming is all about customizability.
2. The command line on Windows sucks; apart from adding and removing files/directories and running commands, anything else is just a pain. The UNIX command line (bash) is a complete interpreter in itself.
3. The UNIX operating system is more than an OS; it's a complete programming ecosystem. The concept of everything being a file or a process is just so elegant. You can endlessly leverage native tools like sed/bash/awk/cut/tr/perl and other text-processing utilities to solve almost any problem with a combination of text files and processes. That is not easily possible on Windows; heck, using those tools on Windows is a big pain, since they are often ported with limitations.
4. Debugging is a breeze. Checking logs is a breeze. Text-processing utilities and endlessly configurable tools, glued together with pipes, make system administration very easy. This is crucial for system administrators, who often need quick solutions without a programmer's help when they get paged at 2 in the morning.
5. Many other development features, like inter-process communication with tools such as D-Bus, sockets, et al., are vastly superior on UNIX compared to Windows.
6. Many programming languages (Perl/Ruby/C) were developed entirely with UNIX in mind, so they natively work very well on UNIX.
7. Vast resources of troubleshooting and maintenance knowledge are available for UNIX, which makes things easier for newbies.
8. Unix is open source and freely available, and it will be around for a long time. The people who supply it do so out of passion and pure volunteer effort, for fun and because they like it. Windows can be killed off at any time for profit.
9. Vendor lock-in problems. I don't understand why I should have to use MS-specific software everywhere. I can't scale horizontally due to cost issues. Also, apart from .NET, developing for any other technology on Windows sucks.
10. Lack of multiuser login. Servers need many people to log in and work at the same time, for testing and development reasons. Servers are much more than deployment-only boxes.
11. GUI overhead. Why should I spend my computing resources on the OS and its GUI when I should actually be spending them on my applications?
12. The registry is a pain on Windows; I don't have to worry about those hassles on UNIX.
The list goes on and on...
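To make points 3 and 4 concrete, here is a minimal sketch of the pipes-and-text-files style (the log lines and file path are fabricated for illustration): count which client IPs are producing HTTP 500s in an Apache-style access log, using nothing but standard tools chained with pipes.

```shell
#!/bin/sh
# Fabricated Apache-style access log for illustration.
cat > /tmp/access.log <<'EOF'
10.0.0.1 - - [01/Jan/2024:00:00:01 +0000] "GET / HTTP/1.1" 200 512
10.0.0.2 - - [01/Jan/2024:00:00:02 +0000] "GET /a HTTP/1.1" 500 0
10.0.0.2 - - [01/Jan/2024:00:00:03 +0000] "GET /b HTTP/1.1" 500 0
10.0.0.3 - - [01/Jan/2024:00:00:04 +0000] "GET /c HTTP/1.1" 500 0
EOF
# Field 9 is the status code; count 500s per client, busiest first.
awk '$9 == 500 { print $1 }' /tmp/access.log | sort | uniq -c | sort -rn
```

The same pattern (filter, project a field, sort, count) covers a surprising amount of the 2-a.m. troubleshooting the comment describes.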
"Therefore, for a developer, most ways of getting things done programmatically on a Windows machine have to go through some set of GUIs."
This alone is enough to discard anything you say about this topic. You obviously have no idea what you're talking about. Everything in Windows is programmable, through a standard object model, and the facilities to put them into any program are standardized, too.
"That is not easily possible on Windows; heck, using those tools on Windows is a big pain."
Windows != Unix. If you are a bricklayer and you get into gardening, would you complain that your concrete mixer doesn't work well for shoveling a garden? Unix tools on Windows is a crutch for people who refuse to adjust to the environment they're in (or as a band aid for a quick and dirty port of Unix functionality).
"tools such as D-Bus, sockets, et al., are vastly superior on UNIX compared to Windows."
Windows != Unix. The concurrent-tasks model in Windows is based on threads, not process spawning. Don't take your Unix prejudices with you when you write software for Windows. Are you seriously suggesting there are no working IPC mechanisms in Windows? There is a vast amount of functionality for it, and on a much deeper level than just 'pipe text from one process to the next' (i.e., a proper object model that can be used to share code written in several languages, and with which you can pass objects and not just text).
"Lack of multiuser login,"
WTF are you talking about? Have you ever seen a Windows box since Windows 95?
"The registry is a pain on Windows; I don't have to worry about those hassles on UNIX."
What? Are you saying you prefer 25 different file formats, spread out in non-standard ways, without a standardized layout? Or are you saying that editing Apache config files with sed and awk is a good idea? If so, you're clearly off your rocker. Of course you can hack together something that 'mostly works', but at least with the registry you have a standard format, standardized cross-language APIs, and a (more or less) standard organization of data.
Now, I'm not defending the implementation of the registry; it has outlived its design. But being against the idea is lunacy. Why do you think the Gnome guys realized in the early 2000s that they needed something similar?
UNIX is about being generic. Yes, it means Apache and Varnish have different config file formats. But it also means that I already have the tools I need to automate my configuration so I don't have to care.
(Yes, Windows is programmable. But when you have to start compiling software to automate your deployment, it becomes an engineering task of its own. Compare this to a quick command-line oneliner, and you'll see why people prefer UNIX. Engineering is about knowing how much you need to get something accomplished. Sometimes you do need to write highly advanced configuration software. But other times, you don't. Windows doesn't give you that choice.)
Secondly, it's only true in the simplest cases that you can edit config files easily. For one, all config formats are different, from the bizarre (Sendmail) to the fairly sensible (Apache), and each one requires separate tools/scripts. For another, most of them are quite hard to automate: most config formats ignore whitespace, for example, and writing a robust 'parser' in bash/sed/awk is a major pain and something you can never quite get right (this is what I alluded to in my previous post). I don't see how you can say 'I already have most of the tools': you need to learn the syntax and then write a complete program to parse the files. For example, you need something of a state machine to parse/edit Apache VirtualHost directives. You need to write a complete editor from scratch each time.
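A sketch of the kind of "state machine" being described, under heavy simplifying assumptions (one flat file, no Include handling, no nested sections; the file path and hostnames are made up): track whether you are inside the target VirtualHost block and inject a directive before it closes.

```shell
#!/bin/sh
# Simplified, fabricated Apache-style config; real ones can span Includes.
cat > /tmp/httpd.conf <<'EOF'
<VirtualHost *:80>
    ServerName example.com
</VirtualHost>
<VirtualHost *:80>
    ServerName other.com
</VirtualHost>
EOF
# Minimal state machine: "inv" = currently inside a VirtualHost block,
# "target" = that block's ServerName matched. Emit the new directive
# just before the matching block closes; print everything else as-is.
awk '
/<VirtualHost/                    { inv = 1 }
inv && /ServerName example\.com/  { target = 1 }
/<\/VirtualHost>/                 { if (target) print "    Options +Indexes"
                                    inv = 0; target = 0 }
                                  { print }
' /tmp/httpd.conf
```

Note that this only prints the modified config to stdout; editing in place, handling the add-if-missing case, and following Include files is exactly where the pain the comment describes begins.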
I'm not sure what you mean by the last line. Just as with a properly set up make environment, you can compile a whole Visual Studio project with a single command from the command line. There is no way to do a bunch of things 'automatically' on Linux either (compile, run tests, deploy, whatever); you still need to code them into your makefiles/deployment scripts.
(I've written software on and admin'ed Linux for coming on 15 years and I've written Windows software for over 10 - I have quite a bit of experience with both. They both have good and bad sides, and I run my personal servers on Linux myself. That said, the arguments used here against Windows are plain false and reek of Slashdot-style fanboyism).
The idea of diffing a registry dump fills my heart with horror.
> First, all config formats are different - from the bizarre (Sendmail) to fairly sensible (Apache), but each one requires separate tools/scripts.
I am quite happy editing them with vi or emacs (when available). I also like joe a lot - it reminds me of WordStar.
> you need to learn the syntax and then write a complete program to parse the files.
In about 10 years of Unix, I never had to build anything like this. And, when I wanted to parse my own config files, I always had libraries to do it ready.
> For example, you need somewhat of a state machine to parse/edit Apache VirtualHost directives. You need to write a complete editor from scratch each time.
I think you may be approaching the problem from the wrong angle. Are you trying to build a GUI tool to edit Apache configuration files?
Sure, so am I (well, except for Sendmail configs). But we were talking about programmatic editing here.
"And, when I wanted to parse my own config files, I always had libraries to do it ready."
Really? How do you, in bash, write a script to change, or if necessary add, an 'IndexAllowed' directive to a certain specific VirtualHost? Mind you, Apache config files can Include other files (and many distros ship with default config files that use this).
"Are you trying to build a GUI tool to edit Apache configuration files?"
I'm not building anything; I was just using this as an example of things you'd want to script, for example in the context of a web hosting provider who wants to automate the creation of new customer setups. (Yes, I realize there are many ways to attack this specific problem, but most of them are very specific to Apache and would have to be re-engineered for each new problem.)
I am not sure that's a good idea. Just generating the files from a CMDB and placing them on the servers seems the simplest approach; it's what I do. It has the nice side effect that anything a sysadmin did directly and manually on the server, bypassing the config database (something that shouldn't really be done), gets wiped out as soon as possible.
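A minimal sketch of that generate-and-place approach, with fabricated paths and a flat text file standing in for the CMDB export: each customer's vhost is rendered from a template, so regenerating the files deliberately wipes out any manual drift.

```shell
#!/bin/sh
# Flat file standing in for the CMDB export (fabricated customers).
cat > /tmp/customers.txt <<'EOF'
alice example.org
bob example.net
EOF
mkdir -p /tmp/vhosts.d
# Render one vhost file per customer from a template; rerunning this
# regenerates everything, overwriting manual edits made on the box.
while read user domain; do
  cat > "/tmp/vhosts.d/$user.conf" <<EOF
<VirtualHost *:80>
    ServerName $domain
    DocumentRoot /srv/www/$user
</VirtualHost>
EOF
done < /tmp/customers.txt
ls /tmp/vhosts.d
```

The design point is that the files on disk are treated as build output, never as the source of truth, which sidesteps the parse-and-patch problem entirely.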
> in bash
Almost every Unix out there has Python, Perl and Ruby already installed. You don't need to use bash unless you really want it.
Why is it so difficult for people to make a point without making personal remarks?
Seriously, if you claim that a drawback of Windows is that it doesn't allow multi-user login, then it's hard not to ask "WTF are you talking about? Really, what are you talking about? Are you stuck in some circa-1995 reference frame? That doesn't even make sense!"
So I'm not sure that it was a personal remark. It may have just been honest lack of comprehension.
2. This may be true for good old cmd.exe, but have you tried PowerShell? I've been playing with PowerShell 2 on Windows 7 and found that it leaves little to be desired. It is self-documenting (à la Emacs). It can be extended using .NET. You can pipe entire objects instead of unstructured text streams. Coming from a strong UNIX background, I /am/ impressed, and I actually think it is way better than a POSIX-compatible shell. I even wrote a couple of scripts to post-configure my Windows 7 installation, similar to what I do on Linux with Puppet.
7. This is true even for Windows, in my experience. Every time I do a Google search for troubleshooting I am directed to Microsoft's Knowledge Base or the (free) MSDN website.
11. AFAIK, you can uninstall the GUI component on Windows Server 2008 (you will be left with a heavily stripped down GUI, without the usual graphical shell)
12. It may be a pain but at least it is a consistent way to store configuration settings and it is widely adopted as such. Compare it with the plethora of different configuration file formats used on a typical Linux/Unix workstation (Mac OS X being the exception since they seem to consistently use XML-based property lists almost everywhere). Each system has its strengths and weaknesses but I wouldn't call the Windows Registry "a mess".
I don't comment on your other points, either because I don't have enough first-hand experience with them (1, 4, 5, 9, 10) or because I partially agree with you (3, 6, 8).
Anyway, it seems that you're coming from a strong UNIX mindset and trying to forcefully shoehorn it onto Windows (3rd and 4th points), along with (my guess) a lack of experience in certain areas of Windows administration (2nd point).
As for me: I was really impressed by Windows 7. Some things I really miss are a decent window manager (but this is true of every commercial OS I've ever tried) and a good software management solution (either a decent, standardized package manager, an AppDir mechanism like OS X's, or both).
You know there is a fallacy there, don't you? It's perfectly possible to ignore something for decades and still fall in love with it later. I've seen lots of Windows and Linux fanboys fall for OSX and become very annoying in the process.
I can believe you are no Windows fanboy without you presenting credentials.
> [...] I can believe you are no Windows fanboy without you presenting credentials.
You're right. Sometimes I believe it is better to point out where I am coming from (especially when replying to "anti-something" posts) to avoid being seen as a fanboy. I always try to be as unbiased as possible. Maybe I'm just being overly considerate.
I come from a backend, server-based mindset and find it difficult to shoehorn the concepts there onto a desktop operating system, for no justifiable reason. I still see no reason why I must use Windows on my backend.
I fail to see the need to endlessly shoehorn Windows into all my backend tasks (PowerShell included) when I can get all of that in a vanilla Linux installation.
If you say Windows is a good desktop operating system, you are correct in your own right. But there is really no comparison between UNIX and Windows on the server end.
Along similar lines, there are many things in Windows (like sharing a directory on the network, or printer settings) that are far easier to use than on Linux.
That's not exactly correct. Windows NT was designed to compete against Unix in the desktop workstation and non-dedicated server market. It was designed by a team formed mostly by DEC alumni. I call it "the bastard child of VMS" for a reason.
> 8. Unix is open source, its freely available.
Linux and BSD are, but OSX and Solaris are only partially open source and AIX and HP-UX are very proprietary.
> 10. Lack of multiuser login,
I believe Windows servers can currently host more than one user session. I used this with NT TSE, and I don't think the feature has been removed since the late '90s (when I used it). Any limit may be some idiotic license restriction.
> 11. GUI overhead
Windows' GUI is rather primitive. I can't imagine the resources it consumes are relevant these days. I have seen more sophisticated stuff on Symbian phones.
Yes. Windows XP and Server 2003 Standard allow one local login plus two Terminal Server (remote desktop) sessions. Windows Server 2003 Enterprise and Datacenter Editions allow more (but nobody ever bought those flavors because Standard was so much cheaper.) I'm not familiar with Windows 2008 Server but the policies are probably similar.
The limit of two remote sessions in the regular editions is arbitrary, but was chosen mostly because of RAM constraints; the windowing environment for each user, plus the tasks they're likely to run, will consume a few hundred MB of RAM or more. The advanced editions support RAM beyond the 4 GB 32-bit limit.