Ok then, show me the server farms...
I'm not even really talking about HPC, just the massive datacentres that run everyone's lives, almost all of which run monolithic kernels. I doubt the thousands of engineers who work on such systems consider the "huge monolithic kernel" "undebuggable". And I don't see examples of microkernel OSes that can cut it in these circumstances.
Even on a mobile device, you don't really want to waste battery on the extra context switches that microkernel message passing requires.
Microkernels have their place, but believing that everyone who chooses not to use them is just clearly a dumbass is bullshit dogma.
(As an aside, I'll grant that even a high-throughput microkernel seems likely, to me, to have lower throughput than a more tightly-coupled monolithic kernel. That's just one of the architectural trade-offs involved here.)
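To make the IPC-cost point concrete, here's a rough sketch (Python, chosen arbitrarily, and only a crude analogy) comparing a plain in-process call, which is roughly what a subsystem call costs inside a monolithic kernel, with a pipe round-trip through the kernel, a stand-in for the per-message cost a microkernel pays for message passing:

```python
import os
import time

def timed_ns(fn, n=50_000):
    """Average wall-clock time per call, in nanoseconds."""
    t0 = time.perf_counter()
    for _ in range(n):
        fn()
    return (time.perf_counter() - t0) / n * 1e9

# Baseline: an in-process call (monolithic analogue: a subsystem
# call is just a function call, no kernel crossing).
def noop():
    pass

# IPC analogue: each "call" is a write + read through a kernel pipe.
# A real microkernel's IPC path is far more optimized than this, but
# it still crosses the kernel boundary per message.
r, w = os.pipe()
def pipe_roundtrip():
    os.write(w, b"x")
    os.read(r, 1)

call_ns = timed_ns(noop)
ipc_ns = timed_ns(pipe_roundtrip)
print(f"function call: {call_ns:.0f} ns, pipe round-trip: {ipc_ns:.0f} ns")
```

The absolute numbers are machine- and interpreter-dependent and a pipe is not QNX-style synchronous IPC, but the gap between the two lines is the kind of overhead being argued about.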
As I see it, there are technical (e.g. hardware drivers, precompiled proprietary binaries) and social (e.g. relative lack of QNX expertise = $$, proprietary licensing) reasons for many people to choose one of the more popular OSes, running monolithic kernels.
I can't say what's technically superior, but even if QNX were, nobody's a dumbass for choosing something else -- and I don't think the fellow you're replying to was saying so. There are, of course, reasons and trade-offs.
An OS's adoption is a social thing, and proves nothing technical about it. If it wasn't for licensing (a social problem), BSD might have taken off, and Linux been comparatively marginalized.
Just sharing my perspective here.
You're getting all red in the face while using some really dubious arguments to back yourself up here.