A general point: the changes described here happened over the course of something like 15 years. So the article seems to be making a "stuff keeps changing!" point... but we are talking about 15 years. Think about the changes to hardware, the Internet, etc. over that time. And by most indicators the Linux desktop has moved much too slowly compared to, say, Windows, Mac, Android, and iOS.
Some examples of errors:
"So the Gnome developers wanted to reduce the complexity of their protocol as well and started working on a protocol which was supposed to join the advantages of DCOP and CORBA. The result was called the Desktop Bus (dbus) protocol. Instead of complete remote objects it just offers remote interfaces with functions that can be called."
This is false on several levels. dbus was mostly a kind of cleanup of DCOP for general use, with no intent to "join the advantages of CORBA" which were essentially none. I can make no sense of "instead of objects it offers interfaces" - it has both objects and interfaces, and pretty much can implement the same kind of API that DCOP does (I believe KDE even did that). Basically this paragraph doesn't mean anything I can relate to the actual technology.
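To make the "it has both objects and interfaces" point concrete: a D-Bus service exposes object paths, and each object implements one or more named interfaces containing callable methods. Here is a minimal sketch that parses a hypothetical introspection document (the `node`/`interface`/`method` XML shape is the real D-Bus introspection format, but the service and method names below are made up for illustration):

```python
import xml.etree.ElementTree as ET

# Hypothetical introspection XML for an imaginary service. The element
# names (node, interface, method) follow the D-Bus introspection schema;
# the object path and interface names are invented for this example.
INTROSPECTION = """
<node name="/org/example/Player">
  <interface name="org.example.Player">
    <method name="Play"/>
    <method name="Pause"/>
  </interface>
  <interface name="org.freedesktop.DBus.Introspectable">
    <method name="Introspect"/>
  </interface>
</node>
"""

def interfaces_of(xml_text):
    """Return {interface_name: [method_names]} for one object path."""
    root = ET.fromstring(xml_text)
    return {
        iface.get("name"): [m.get("name") for m in iface.findall("method")]
        for iface in root.findall("interface")
    }

# One object (/org/example/Player), two interfaces, each with methods:
# objects and interfaces coexist, much as they did in DCOP-style APIs.
ifaces = interfaces_of(INTROSPECTION)
print(sorted(ifaces))
print(ifaces["org.example.Player"])
```

The point of the sketch is just structural: the object path is the "remote object," and the interfaces hang off it, so "instead of objects it offers interfaces" describes neither half of the model.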
"APIs to abstract the uses of OSS, esound and ALSA: gstreamer for Gnome and Phonon for KDE"
This is wrong. GStreamer is for making graphs of elements, where elements are decoders, encoders, effects, filters, etc. and can be both audio and video. There is one kind of element ("sound sink") that does abstract sound output, as you would imagine. There are some other elements that use sound APIs too. But GStreamer is not the same thing as a sound API like ALSA, in any way shape or form. It's for building multimedia _apps_, sort of a media toolkit.
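The "graph of elements" model can be sketched without GStreamer itself. This is a toy Python model (none of these class names or element names are real GStreamer API; a real pipeline uses pads, caps negotiation, and so on) just to show the shape of the design: decoders, filters, and sinks linked into a chain, where a sound sink is only one element type among many:

```python
class Element:
    """Toy stand-in for a pipeline element (demuxer, decoder, sink, ...)."""
    def __init__(self, name, kind):
        self.name, self.kind, self.downstream = name, kind, None

    def link(self, other):
        """Link this element to the next one; returns it to allow chaining."""
        self.downstream = other
        return other

def describe(head):
    """Walk the linked elements and render the graph, gst-launch style."""
    parts, el = [], head
    while el is not None:
        parts.append(f"{el.name}({el.kind})")
        el = el.downstream
    return " ! ".join(parts)

# A decode chain ending in a sound sink: the audio-output abstraction is
# just the last hop in a larger media graph, not the framework's purpose.
src = Element("oggdemux", "demuxer")
src.link(Element("vorbisdec", "decoder")).link(Element("alsasink", "sink"))
print(describe(src))
```

The takeaway is that equating this to "an API to abstract OSS/esound/ALSA" describes one leaf of the graph, not the framework.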
Moreover, the main reason to replace the older tech here (OSS, esound) was just that it didn't work very well and didn't support a lot of the things sound cards do. It's not like keeping that old stuff was an option, since it could barely play beeps.
"it is no longer possible to run the system without a graphical user interface"
I'm just not sure what planet that's on. There sure are a lot of headless Linux servers out there in the world, and it's pretty obvious that the large Linux distributions care about this intensely.
Re: NetworkManager, if it's somehow needed when headless and not configurable headless, that would be considered a bug by all involved. Just a matter of tracking down the details and reporting them if they have not been. All the Linuxes aspire to (and in my experience do) support headless operation.
"they don't implement the original X11 protocol directly and rely on so-called window manager hints."
This sentence is total word salad. X11 has had window manager hints for two decades. What's new is "extended window manager hints," which are some new hints in the same spirit... in order to do new things. They don't "wrap" anything, so "directly" is just gibberish. Kind of like how CSS 2.0 isn't the same as CSS 1.0, you know? This complaint is equivalent to bitching because you can't use IE5 on the modern web anymore. The protocols are documented, and you have to use an implementation of something from within the last 5 years. The extended window manager hints range from 6 to 10 years old, so that's how old the stuff we're complaining about actually is.
An almost exact translation of this claim to the web is: "they don't implement the original CSS 1.0 directly and rely on so-called CSS 2.0 properties" ... see how that makes no sense?
"Writing X11 programs with xcb and proper RPC APIs like SUNRPC or Thrift should be more than good enough."
This 100% misunderstands why dbus is used. The first goal of dbus is not to send a message from process A to process B; it's to keep track of processes (help A find B, have them each know when the other goes away). The messaging is important but in many ways secondary.
Overall, the article doesn't understand the big picture of why all this new stuff was needed. I think there's one big reason: dynamic change. The old ways of doing things almost all involve editing a text file and then restarting all affected applications. But to implement the UIs that people expect (as you'd find on iOS, Android, Windows, Mac), everything has to be "live"; you change a setting in the dialog, and the whole system immediately picks up on the change. You unplug a cable, everything notices right away. etc. The daemons are because so many pieces of dynamically-updated live state are relevant to more than one process or application. That's why you have a "swarm of little daemons" design. And guess what: some other OS's have the same design.
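The "live state" design described above can be sketched as a tiny publish/subscribe model (purely illustrative, not any real daemon's API): one process owns a piece of shared state, and every interested client is notified the instant it changes, instead of each client re-reading a config file after a restart:

```python
class SettingsDaemon:
    """Toy model of a daemon owning live state that many clients watch."""
    def __init__(self):
        self._settings = {}
        self._subscribers = []

    def subscribe(self, callback):
        # A real system would register the watcher over IPC
        # (e.g. subscribing to a D-Bus signal), not in-process.
        self._subscribers.append(callback)

    def set(self, key, value):
        self._settings[key] = value
        for cb in self._subscribers:  # every watcher sees the change immediately
            cb(key, value)

daemon = SettingsDaemon()
seen = []
daemon.subscribe(lambda k, v: seen.append((k, v)))
daemon.set("wifi.enabled", False)  # e.g. the user toggles a switch in a dialog
print(seen)
```

Contrast this with the edit-a-text-file-and-restart model: there, the state only has one reader at one moment, so no daemon is needed; the moment multiple live processes must agree on changing state, something has to own and broadcast it.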
That's (at least one of) the major problems being solved. And the author here gives no indication he knows it exists, let alone his proposed alternative approach.
I sort of get the inspiration for the article: Linux has been trying to keep up with modern UI expectations without having enough staffing for that really, and certainly regressions have been introduced and there have been bugs and things that could have been better. On the 6-month distribution release cycles, users are going to see some of that stuff. It's software, people. And it's understaffed open source software to boot. So yeah, legitimate frustration, shit changes, sometimes it breaks. I get it.
But there's no need to wrap that frustration up in pseudo-knowledge as if it were a technical problem, or say inane things about getting back to the "unix way"; if someone could show up and make the desktop UI stuff behave well with the "unix way" they would have done it. Or maybe they did do it, and the critics understand neither the problem requirements nor the "unix way." Just saying.
"However, the Linux incarnation of OSS was a particularly simplicistic one which only supported one sound channel at the same time and only very rudimentary mixing."
That's incorrect. The sound channel limitation depended on the hardware you had installed. So did the mixing capabilities. If the hardware supported it, OSS exposed the additional capabilities.
Those of us with SoundBlaster cards remember very well why we liked using them on Linux (because, unlike most cards, they supported multiple applications outputting audio simultaneously).
Modern linux distributions are doing many things in a way that is, at best, surprising and, at worst, undebuggable.
"Any sufficiently advanced technology is indistinguishable from magic" comes to mind quickly.
Working with Linux in the 90s was surely not as easy as it is today, and that's probably for the better. But I also find myself longing for the old days at times. Examples are NetworkManager (such as the lack of bridge support in the version on my laptop; no clue whether it's been fixed upstream, I'm using distro packages) and certain hald/dbus automagic things. And no, I won't go into details, and that can be held against me, but there are frustrations and annoyances, surely partly to be blamed on me and partly on the software. Coming from that, I feel for the author.
Then again I'm also glad I don't have to wade knee-deep into config files every time I want to change something. :)
Everyone wants a simple system... as long as it has just this one thing that they need... and this one other thing...
This author seems to feel there was some way in which the software could do everything it does and there would be no downsides... you know, here and there in some detail it's probably true that the tradeoff is wrong. But that's just saying "all software could be better" or "all software has bugs" or something - true, but not an actionable insight.
I get the guy's frustration. But you know, there's no need to wrap the emotion up in non-factual hypotheses about source code that one is not familiar with.
Software sucks. We all know it. Using your imagination to diagnose why isn't going to get anyone anywhere ;-)
There probably are some improvements possible if we all go look at the source and get the real info.
The commercial version of OSS supported software-based mixing when the hardware didn't support it.
Same as the version of OSS found in Solaris today.
Here is an example from your comment that leads me to this observation; you write (first quoting the author):
""Writing X11 programs with xcb and proper RPC APIs like SUNRPC or Thrift should be more than good enough."
This 100% misunderstands why dbus is used. The first goal of dbus is not to send a message from process A to process B; it's to keep track of processes (help A find B, have them each know when the other goes away). The messaging is important but in many ways secondary."
I would suggest that it points out a 100% misunderstanding of remote procedure calls. Your defense of dbus asserts that its primary function is to 'keep track of processes', which would suggest its name should be 'dlocate' or 'dmonitor'; but it's 'dbus' because most of the traffic on it is like a 'bus', where data goes from point A to point B.
The original author points out that all of the 'features' of dbus which are not directly related to interprocess communication could have been implemented on top of the existing architecture. People have done that; they called them 'location brokers' back in the 80s. And what the folks who invented dbus missed was all of the research about what makes for good network protocols, like Andy Birrell's seminal paper on RPCs or work done at Sun, Xerox, Apollo, and elsewhere.
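A 'location broker' in the sense meant here can be sketched in a few lines (a toy in-process model; no relation to any real broker's API, and the service names are invented): a registry that maps well-known names to endpoints, so client A can find server B without hardcoding where it lives, and can notice when B goes away:

```python
class LocationBroker:
    """Toy name -> endpoint registry, in the spirit of 1980s location brokers."""
    def __init__(self):
        self._registry = {}

    def register(self, name, endpoint):
        """A service announces where it can be reached."""
        self._registry[name] = endpoint

    def unregister(self, name):
        """The broker forgets a service that has gone away."""
        self._registry.pop(name, None)

    def lookup(self, name):
        """A client asks where a named service lives; None if absent."""
        return self._registry.get(name)

broker = LocationBroker()
broker.register("org.example.Clock", ("localhost", 4242))  # hypothetical name
print(broker.lookup("org.example.Clock"))
broker.unregister("org.example.Clock")
print(broker.lookup("org.example.Clock"))
```

Whether one calls this a broker layered on plain RPC or a bus with name ownership built in, the registry-plus-lifecycle piece is the part both sides of this thread agree has to exist somewhere.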
"Overall, the article doesn't understand the big picture of why all this new stuff was needed."
But that wasn't what the article was about at all; it was asking why all the substructure gets reinvented every time. The author rants about how Linux's tendency to constantly recreate every wheel is hugely destructive and wasteful.
The real problem, which is not mentioned explicitly but which I suspect is at the root of this entire rant, is that it is infinitely easier to create in Linux than it is to fix. When there is a problem with the way desktop events get delivered, you can either fix the broken system or you can invent an entirely new one. Too often, for reasons which are neither well reasoned nor well supported, people create. I see three reasons for that:
1) It is hard to have two smart, outspoken, and opinionated people work on the same piece of code.
2) If you can choose between "the person who fixed Y" or the "the person who created Z" on your resume, inevitably people lean to the latter.
3) When all you want is feature 'X', which should notionally come from system 'Y', it takes less work to create a new system Z which does all the things you personally need from Y and has X as a new feature than it does to understand all of the users of Y and what they need, and then incorporate X into that.
And let's close with this bit; you wrote:
"if someone could show up and make the desktop UI stuff behave well with the "unix way" they would have done it."
They have: Motif and SunTools were both such then; Windows and MacOS are examples today. I think you could successfully argue that Linux is on the brink of proving that Free Software is a fundamentally broken model of software development, and use the window system as the exemplar of that argument. The closest counterexample we have is Canonical which, as we all know, is well loved by all folks who work on Free Software.
The linked rant boils down to 'Linux sucks because nobody can be bothered to work with somebody else's code', which is of course an exaggeration (but what are rants if not emotional expressions of frustration through hyperbolic rhetoric?). If you cannot see the danger that poses to its livelihood, then yes, you are by definition clueless.
"It is hard to have two smart, outspoken, and opinionated people work on the same piece of code."
But all the stuff discussed here - dbus, gstreamer, EWMH, GNOME, etc. - has had dozens (e.g. EWMH, dbus), hundreds (e.g. gstreamer) or even thousands (e.g. GNOME, Fedora, Ubuntu) of contributors. And that's not counting all the people that build on top of those things, it's only counting the ones who contribute to them directly.
"it is infinitely easier to create in Linux than it is to fix"
I've always found in open source that it's harder to find people to create, than to fix. I mean yeah, there's a background noise of a thousand 1-person projects being born and dying every day. But the big projects with momentum are full of dedicated people primarily interested in incremental change.
Most of the technologies we're talking about here are in the range of 6-12 years old, with no significant overhaul or replacement in that time. For perspective, Firefox (as "Phoenix") appeared 9 years ago, and Mac OS 10.0 is 10 years old. It feels tough to argue that Linux is moving faster than Apple, Microsoft, Google, web tech, etc. It's relatively stable as OS's go.
Sure, Solaris and IRIX are (were?) even older and there was prior art on all sorts of fronts. If you'd like to argue that the original Linux desktop efforts should have copied more from those: you're probably right on some of the specifics. It's easy to say this or that could be slightly better if you look at a huge piece of software like a full Linux distribution. What counts is the software that exists, not the software we all coulda woulda shoulda written.
There were a few hundred people who worked on or around Linux desktop IPC back then, and I think zero argued that SUNRPC was a good option. Maybe it was, and someone could have shown up to prove it in code. They did not. Instead, a number of other systems were coded and tried (MICO, ORBit, DCOP, IPC-over-X11, even SOAP), and in the end dbus caught on as a working solution. By that time everyone had taken a lot of hard knocks and knew what problems they were trying to solve. All the solutions people tried worked fine for sending a message; that was not what differentiated these approaches. The problems to solve included things like how to cross boundaries between systemwide daemons and the user session; how to discover, activate, and track other apps and daemons; licensing issues; a least-common-denominator implementation that all the projects were willing to use; the security model; etc. At some point dbus cleaned up everybody's ad hoc hacks and experiments, and now Linux is pretty uniform about using it and has been for years. Is it perfect? Not at all. It was just the first thing to be good enough, and it stuck.
If someone comes along and does something legitimately better and worth switching to, then I'm sure Linux will do so, and take a lot of heat for it too.
"So rather than point out how wrong he is, ask 'what is he trying to say?' and deal with that."
Well, I think he's trying to say what he says, which is "please don't write software which requires any of the Gnome/KDE and DBus API. Writing X11 programs with xcb and proper RPC APIs like SUNRPC or Thrift should be more than good enough."
This is nonsense.
The idea to use raw xcb rather than GTK or Qt or HTML: come on. You'd spend months getting to the point where you had crappy buttons and scrollbars working. Replicating user-expected and mandated functionality provided by the toolkits is a multi-year task to do _poorly_. You'd never, ever finish writing your app (and it'd suck, too).
On the IPC front: you'd be adding yet another way to do it and thus more complexity. It's fine to say SUNRPC should have been chosen in 2001, but it wasn't, and rewriting hundreds of apps today is nuts. Whatever your dbus annoyances, you could solve them in one place and fix the whole system.
More importantly, most of the newfangled (= 6-12 years old) crazy ideas that this post complains about, exist for some good reasons that the author of the post doesn't seem to be aware of. You could certainly build a system _involving_ SUNRPC or Thrift that would work. But you'd have to innovate on top with an understanding of the problem space. And what's the end-user benefit of that, at this point in time?
I'd argue it's a big old zero.
But if someone shows that there's enough benefit, I hope a new idea wins on the merits (and the running code).
Doesn't make any sense either; NetworkManager is a service that can be stopped like any other one.