I used Linux for more than a decade before switching to OpenBSD precisely because Linux developers believe in features to the point where how well they're implemented is no longer relevant.
The arrogant, know-it-all kids that we so lovingly nurtured in the Linux community grew up to be its leading developers today. It shows.
Edit: I was hesitant to write this because it always leads to very unproductive results, but what the hell, I'll bite.
Systemd was the last straw for me, not because something something Unix philosophy (after all, OpenVMS disdained the Unix philosophy, and it worked better than Linux ever has) but because it's so bug-ridden that its interface's simplicity and elegance are next to useless.
Maintaining a non-trivial network & filesystem setup (e.g. I have a few dozen Qemu VMs, because writing software for embedded systems is like that) became a nightmare. It broke with every other update. Great if you're doing devops and this is expected and part of your job, terrible if you're doing actual development and you want an infrastructure that doesn't break between compiles.
I ragequit one afternoon, put FreeBSD on my workstation and OpenBSD on my laptop. I have not touched anything in my configuration in almost a year now and it works flawlessly. I don't think I've had it work for a whole month without having to fiddle with whatever the fuck broke in systemd, somethingsomethingkit or God knows what other thing bolted on top of the system via DBus. I can write code in peace now and that's all I want.
These are all great technologies. Systemd in particular was something I enthusiastically used at first, precisely because after Solaris' SMF -- XML-based as it is -- even OpenRC seemed like a step back to me. But, ya know, I'd actually want it to work.
I don't think it's a simple problem, and I don't think all the blame should be laid on Freedesktop.org, where a lot of good projects originated. I do think a lot could be solved by developers being a little more modest.
Thus you could go from bare kernel to CLI to GUI in a layered manner (and fall back to a lower layer when a higher one had issues).
With Dbus etc the CLI has been sidelined. Now you have a bunch of daemons that talk kernel at one end and dbus at the other.
Never mind that they tried to push a variant of dbus (kdbus) directly into the kernel. And after that failed, they are now cooking up another take that is yet again about putting some kind of DE RPC/IPC right in the kernel.
Unfortunately, there is a lot of weird interaction between all these processes. It's often badly (or not at all) documented, and what plugs where is extremely unclear. It's very messy, and it doesn't stand still long enough for someone to fix it. They just pile half-done stuff over more half-done stuff.
It's really unfortunate because the Linux kernel is, pragmatically, probably the best there is. It may not excel in specific niches (e.g. security), but overall it does a lot of things better than, or at least about as well as, the BSDs -- and it runs on systems where not even NetBSD boots.
One problem with message passing as such is that messages are like function calls, but you can't put a breakpoint into the system and see a call stack!
If we call a function f on object a, which in turn calls b.g(), a breakpoint in b.g() shows a call stack: something called a.f(), which then called b.g(). If we instead send a message f to a, which then sends a message g to b, a breakpoint in b on receipt of message g tells us diddly squat! The g message came in for some reason, evidently from a. Why did a do that? Who knows; a has gone off and is doing something else.
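You can see this in a few lines of toy code (all names here are made up for illustration): with a direct call, the interpreter's stack records the caller; with a queue-based message, the handler's stack only shows the dispatch loop, and the sender's frame is long gone.

```python
import queue
import traceback

def direct_g():
    # With direct calls, the stack still contains direct_f when we stop here.
    return [frame.name for frame in traceback.extract_stack()]

def direct_f():
    return direct_g()

def message_demo():
    mailbox = queue.Queue()

    def a_f():
        # 'a' posts a message and immediately returns; its frame disappears.
        mailbox.put(("g", "payload"))

    def b_handle(msg):
        # By the time 'b' handles the message, the stack shows only the
        # dispatch loop -- no trace of a_f, the actual cause of this call.
        return [frame.name for frame in traceback.extract_stack()]

    a_f()
    result = None
    while not mailbox.empty():
        result = b_handle(mailbox.get())
    return result

call_stack = direct_f()
msg_stack = message_demo()
print("direct call stack:", call_stack)     # includes 'direct_f'
print("message handler stack:", msg_stack)  # 'a_f' is nowhere to be seen
```

The same blindness applies to a debugger breakpoint: in the message-passing case there is simply no frame left to inspect that explains *why* the message was sent.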
OpenVMS's days of glory more or less coincided with the Unix wars. Unix was brilliantly hacker-friendly, but a lot of basic things that we now take for granted in Linux -- virtual memory, high-speed I/O and networking -- were clunky and unstandardized. Others (like Files-11, VMS's excellent filesystem) were pretty much nowhere to be found on Unices (or, if they were, they were proprietary and very, very expensive). A Unix system included bits and pieces hacked together by people from vastly different institutions (often universities), and a lot of the design of the upper layers was pretty much ad hoc.
OpenVMS had been a commercial project from the very beginning. It had a very well documented design and very sound engineering principles behind it. I think my favourite feature is (well, technically, I guess, was) the DLM (Distributed Lock Manager), which let you coordinate concurrent access to resources (such as, but not only, files) across a clustered system. I.e. you could acquire locks on remote resources -- pretty mind-blowing at the time. You can see how it was used here: http://www3.sympatico.ca/n.rieck/docs/openvms_notes_DLM.html .
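The core idea is small enough to sketch. The DLM had six lock modes (NL, CR, CW, PR, PW, EX) and a compatibility matrix: a request is granted only if its mode is compatible with every lock already held on that resource. Below is a toy, single-node sketch of that matrix -- the real DLM was distributed across a cluster, and `ToyLockManager` and the resource names are purely hypothetical, nothing like the actual $ENQ interface.

```python
# Standard DLM lock modes, weakest to strongest:
# NL = null, CR = concurrent read, CW = concurrent write,
# PR = protected read, PW = protected write, EX = exclusive.
# COMPATIBLE[requested] = set of held modes it can coexist with.
COMPATIBLE = {
    "NL": {"NL", "CR", "CW", "PR", "PW", "EX"},
    "CR": {"NL", "CR", "CW", "PR", "PW"},
    "CW": {"NL", "CR", "CW"},
    "PR": {"NL", "CR", "PR"},
    "PW": {"NL", "CR"},
    "EX": {"NL"},
}

class ToyLockManager:
    """Grants a lock only if its mode is compatible with all held locks."""
    def __init__(self):
        self.held = {}  # resource name -> list of granted modes

    def request(self, resource, mode):
        granted = self.held.setdefault(resource, [])
        if all(h in COMPATIBLE[mode] for h in granted):
            granted.append(mode)
            return True   # granted immediately
        return False      # a real DLM would queue the request instead

dlm = ToyLockManager()
r1 = dlm.request("disk$data:report.txt", "PR")  # first reader: granted
r2 = dlm.request("disk$data:report.txt", "PR")  # readers share: granted
r3 = dlm.request("disk$data:report.txt", "EX")  # writer conflicts: refused
```

The intermediate modes are what made it interesting: PW, for instance, let the holder write while still allowing CR readers, which is how cluster-wide caches could stay coherent without everyone taking EX.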
Also, the VAX hardware it ran on rocked. The whole thing was as stable and sturdy as we used to boast about Linux in comparison to Windows 98, except at a time when many Unices crashed if you did the wrong thing over NFS.