That was a thought-provoking article. Something that interests me is how software systems become more complex over time, gradually deviating from the vision the original developers had. I believe the modern Linux ecosystem (Linux kernel, systemd, dbus, Wayland, GNOME/KDE, etc.) is almost unrecognizable from the Unix ecosystem of 30 years ago, let alone the quintessential classic v6 and v7 Unix systems from Bell Labs. But it’s not just Linux. I think any system that isn’t guided by a “keeper of the vision” eventually becomes huge and complex, especially if it gets widespread use. For example, C++23 is far more than “C with classes.” Common Lisp is quite a beast compared to LISP 1.5. Think of how many features are in Microsoft Office, which has been developed for decades.
I think there’s value in a system that technical users could understand from top to bottom. This is what attracts me to small systems like Minix, Scheme, and Standard ML, just to name a few. However, I’m curious about how the complexity of big systems can be tamed. A system that is hard to reason about even with the source code being accessible is more expensive to maintain and to modify.
> However, I’m curious about how the complexity of big systems can be tamed. A system that is hard to reason about even with the source code being accessible is more expensive to maintain and to modify.
By removing the misfeatures that make layers of abstraction seem necessary. Linux has already gone through this several times with things like the transition from HAL to udev, or the potential simplifications falling out of well-written copy-on-write file systems (gefs is one I particularly have my eye on here). The main things which dishearten me in this regard are the complexity of GPUs (and thus their supporting infrastructure in software), and what is needed to support the OS-within-an-OS that is the modern web browser.
I kinda agree, especially in terms of commercial involvement. There's just too much steering done in Linux by big companies. I think it's lost the grassroots aspect. It's all big business now. And general opinion seems to side with it; I've read a lot of comments siding with Red Hat's narrative of CentOS (and recent spinoff) users as 'freeloaders'. Protecting someone's revenue stream was never the idea behind Linux.
Some people might say Linux has grown up but for me it's off-putting. I don't want my OS to be for everyone because my wishes are pretty different than most people's.
I moved to FreeBSD for my desktop and I'm really happy with it. It's simple and has some unique strengths like the ports collection, jails, and ZFS as a true first-class citizen. Of course, to each their own, and I'm glad for some Linux-driven initiatives like KDE. I'm happy some people like Linux and I still use it here and there too, like for premade Docker containers.
>Some people might say Linux has grown up but for me it's off-putting. I don't want my OS to be for everyone because my wishes are pretty different than most people's.
Linux CAN be for everyone; that's precisely why so many different distros exist, to focus on different groups of users who have different needs and preferences.
It would be nice if ZFS was a true first-class citizen in Linux-land though.
I'm with Andrew Tanenbaum, who said, “It is sometimes said [Unix] Version 7 was not only an improvement over all its predecessors, but also over all its successors.” I got my start with V6, and I understood it thoroughly. There are large parts of my current daily driver, Debian Bookworm, that I only vaguely understand. Now nobody is trying to prevent me from understanding them (hello, Microsoft and Apple!), but there is no actual roadmap to the system.
So, yes, Debian is far more complicated than I need or want. I have pondered on building my personal Linux system, using say Alpine or Void Linux, but that is work that seems unnecessary to me. Better off just to use Debian as it is.
And, by the way, this bloat isn't just due to the distros. I build Emacs from source, and you should see the bizarre dependency set it needs, many of which have nothing to do with anything I use Emacs for. I have both GTK and Qt libraries on my systems. TeX Live's distribution media would fill up 20 IBM 2314 disk storage devices (circa 1970), each containing 8 washing-machine-sized disk drives (plus a spare). Install an application, and you might find yourself installing (different versions of) Perl, Python, and/or Ruby. It goes on and on.
I have felt for a long time that dependency management is one of the big unsolved problems of software engineering. It's not surprising that the resulting systems have the appearance (and texture) of big balls of mud.
Well, none that I've seen. Some of them make perfect sense (e.g., image processing libraries, etc), while others show up as recursive dependencies.
In my experience, people don't write up lists of dependencies, especially since some are recursive. Instead, they look at the available tools and libraries, determining which are useful for their purposes.
I'm not picking on Emacs (though its earlier icon as an overflowing kitchen sink gives a clue). This is a phenomenon of lots of software. If I want to use Asciidoctor-pdf, I must install Ruby, for example. The very fact that we have a wide variety of tools, languages, and libraries guarantees this sort of explosion. I'm as guilty as anyone of this, as my preferred language is Scheme. So if you want my programs, you have to install a compatible Scheme system.
My real point here is that, given our understanding of dependencies, this sort of explosion is inevitable. So if I had a perfect, svelte Linux system, it would start picking up cruft almost from the moment it was created. That's just where we're at.
Maybe you don't know everything that happens in your gnu linux system, and I think that's okay (to some degree).
But the difference from Windows and macOS is that you can know what's going on.
By looking at the documentation and man pages, looking at the configuration files, and maybe even reading source code. If you really want to know what exactly happens, you can find out on a FOSS Linux system.
With closed-source systems like Windows and macOS, you'll probably never know 100% of what's going on.
With Linux you are also free to create an installation from scratch where you know exactly what every piece is, because you wrote every configuration file for it, etc. This is a lot of work, but otherwise you will need to use "magic" programs (as the author described them), which do most of the heavy lifting for you.
> Maybe you don't know everything that happens in your gnu linux system, and I think that's okay (to some degree).
I think a lot of this is because there's been an effort to provide more QoL functionality out of the box in distros, and a lot of the features are inherently complex things with complex code. You can understand it, but understanding each individual piece takes a lot more time than it used to, because each piece is solving for a wider use case.
The audio stack overhaul is a good example of one that's positive. PipeWire is actually game changing in terms of UX and how well _it just works_ to the point that there's no longer any kludging of asoundrc or any other configs and praying that an audio device you want to use works as expected.
There seems to be a tendency (for better or for worse) in Linux communities to want to deal with these inherently complex things by ignoring the complexity and taking the stance that building your own solution to those problems as they arise is better, but it really feels like a gatekeeping stance. Maybe it's a worse is better situation, or there's some level in the middle, but the current state of Linux is far better than it's ever been imo.
>You just aren’t going to get the same quality documentation.
In my observation over several decades, documentation in general has gotten really bad, or non-existent, all across industry. It's everywhere: no one wants to write stuff any more.
Sometimes I feel like this author, but then I consider it may just be me getting older and resisting learning something new. I've never become a SysV init nor systemd expert, but over time I have gotten to a point where I don't think systemd is out to make my life difficult, though I do wish creating a service was easier, since I never ever remember how to do it or where the documentation on the options is.
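For my own future reference as much as anything, the bare-bones unit I always end up re-deriving looks roughly like this (the unit name and ExecStart path are made up; `man systemd.service` and `man systemd.unit` cover the rest of the options):

```
# hypothetical service; adjust the name and ExecStart to taste
sudo tee /etc/systemd/system/myapp.service > /dev/null <<'EOF'
[Unit]
Description=My example background service
After=network.target

[Service]
ExecStart=/usr/local/bin/myapp --serve
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

sudo systemctl daemon-reload        # pick up the new unit file
sudo systemctl enable --now myapp   # start it now and at every boot
journalctl -u myapp -f              # follow its logs
```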
But, I also think that modern OSs have somewhat outgrown their users. I'm not sure I actually need a multi-user OS, nor secure signing of various things. Though if I were to use Haiku and get what I asked for, maybe I'll get exploited and regret it somehow. Yet the likelihood seems so low versus the complexity involved to protect me.
Anyway, for those who think Linux is growing to have more than needed, you'll always have the distros that refuse to change, like Slackware. But, I think saying the old linux was better is nostalgia, since Slackware really isn't fun to use.
I find FreeBSD pleasant for server use. But using it for the desktop, I probably would not enjoy.
As for slackware, yes :^)
I'm glad you're having fun. But I think most people would give up at the lack of good package management before getting into further shortcomings.
Slackware was one of the first Linux distros with nice package management, which so happened to be similar to a herein-unnamed BSD approach that works well :-)
An interesting read, I'm sympathetic to this viewpoint but I also feel like it's kind of an inevitable trend.
> Granted there are other Linux distributions like Gentoo, Alpine, Void, NixOS, etc. that are still conservative in some of these regards... However, these are not the popular ones, thus not the ones where the most development effort goes into.
Has this ever not been true of hardcore/transparent "Linux"? Yes Ubuntu is trying to be a free Windows-like, yes Gnome is pretty locked down. But alternatives still exist, it's just they're unpopular (and specifically for the reasons the author prefers them). Yes the big names get most of the development attention, but in terms of overall hours contributed are the smaller names actually getting less as an absolute number? Or just a percentage?
> In fact, the overall dumbed-down and closed-up -- sorry I meant to say "optimized for the enjoyment of our customers" -- Apple OSX seems to be a better alternative...
This seems to be meant as snark, but isn't this kind of the core tradeoff driving this whole dynamic? Sure simple and transparent software can be fun and useful, but much of the modern computing stack is simply too complicated to be genuinely transparent. Simplifying the bootloader won't make WebRTC simpler for an amateur to understand.
We would all do well to remember we make things for users. The system is complex so that the user doesn't have to deal with it, and if we want Linux to get more than its current (pathetic) desktop market share, this is the way things will go. You can lead a horse to water, but you can't make it drink, and you can't make users care about this stuff when they just want to do their taxes, manage their businesses, talk to their friends, and look at their photos.
>and you can't make users care about this stuff when they just want to do their taxes, manage their businesses, talk to their friends, and look at their photos.
The thing is, you can easily do all that stuff right now on Linux, even if you're a casual computer user. Unlike 15 years ago, ALL that stuff is now done inside a web browser, so the OS is really nothing more than a platform to run Chrome (or better yet, Firefox). Any idiot who knows how to use a PC can start a web browser, and from there it's no different whether you're on MacOS, Windows, or Linux.
And Linux has the huge advantage that it won't suddenly do a forced update and reboot your computer while you're doing a presentation at work.
I agree, for most people the system is a bootloader for Chrome. But this article is about one guy's disappointment he can't hold the whole system in his head anymore, and I'm just saying the user never cared about that in the first place.
But also, if you've only used one operating system, there hasn't been any need to generalize what you've learned from that system interaction wise. We had to learn a bunch of computing conventions. Eventually you learn things like "most command line tools have -v,--verbose". Grandpa doesn't think computers work this way, he thinks Windows works this way and how should he know how Linux works.
'What games are like for someone who doesn't play games' is a pretty good video about a similar thing. It's easy for you, you know computing vernacular.
Grandpa doesn't need to use command-line tools; he certainly doesn't on Windows, even though he's old enough that he might have used MS-DOS or Windows 95 and had to use command lines then. Grandpa just starts his PC, clicks on the Chrome (or Edge, etc.) icon, and uses the web. It's really no different for a modern Linux distro like Ubuntu.
Admittedly, as a macOS lifer moving to Linux, I was a bit confused, or at least couldn't relate to the criticisms, specifically those around systemd (I'm on OpenRC), along with the other random overhead and cruft.
Then got to the part where they mentioned this is targeting more mainstream distros and thought “ah so this is why I chose gentoo.” I never considered any of the mentioned mainstream distros because I just assumed given the popularity, there’s more fuckery. Less granular control == more “user friendly” which seems like a natural byproduct of something becoming more popular.
I certainly don’t have the historical perspective of the author, and one can debate the merits of the myriad running processes, but I just wanted to share the perspective of someone entirely green to Linux: what was outlined is what I had already assumed.
>The main reason I've switched to Linux was because it was an operating system that was so transparent, so (relatively) simple to debug and experiment with…
>However, [Free/Open BSD] have shortcomings of their own, mainly being so conservative that they've almost been left behind by the overall open-source community.
I feel like the author doesn't understand that these are connected…
I wish! It's got a long way to go still before it's as robust and easy to use as Mac and Windows. Systemd has certainly been the biggest improvement in that direction for a while though.
It's a modern Linux, closely follows upstream projects, and when you wonder how to set up some network configuration or something else you just open the script in question and it's mostly obvious.
It's a small community so hasn't got the same amount of eyeballs and might not always be as quick with security fixes as the bigger projects, but it might be a bit of stale, old, yet somehow fresh air for the author.
I am using Linux rn because my Windows computer is busted. On Windows, I was using WSL for development. It was amazing to have a good dev environment as well as a real computer at the same time!
I can't even use my large MSI 1440p monitor with this computer because Linux (also perhaps because of the actual gfx card too, to be fair). I had to manually install Discord's update yesterday and now there are 2 Discord apps on my computer because Linux (and only one works).
From the thread it seems you are on Ubuntu. There are basically two package managers out of the box. One, the traditional combo of dpkg with apt on top, is deeply embedded into the system. You can install a Discord .deb package using `$ sudo dpkg -i ~/Downloads/discord-0.0.28.deb` or whatever, because it is not in the operating system's repositories. If it were, it would be `$ sudo apt install discord` or something along those lines.
The Snap packaging system also includes a layer of software that can sandbox the packages it installs. The idea is that a single package is less distro-specific and also is limited in the damage it can do. Ubuntu is the main user of Snap. Many other distributions, especially Fedora and the like, seem to lean more towards Flatpak, which is a different take on what Snap does. Yes, it is complicated, but the idea is to increase the security and portability of software packages for Linux. To manage Snap packages from the command line, you can use the `snap` command.
Yeah, I think I may have got the second installation working via the sudo dpkg route, but I'm not sure. I know I definitely tried it as part of troubleshooting the issue. If that is what I did to eventually get Discord working again, then it's strange that I now have two installations, because the download of the .deb file is what the Discord client wanted me to do in the first place.
That being said, I probably initially installed it via Snap--but then, shouldn't it have auto-updated? Yet the Snap store version is still stuck on v 0.28, and the Discord client is what insisted I download the .deb for 0.29. :shrug:
I do not think either of the above routes is too onerous or complicated, but I think the situation is made more complicated by the multiple options. If I am understanding things correctly, APT is a third option in addition to Snap and downloading a .deb directly--actually, there is a fourth, which is a variety of things we can lump together under "execute an installation script".
Apt is the default thing that manages dpkg for you, does dependency resolution, etc.; dpkg is quite low-level actually. Snap and Flatpak and others are en vogue now; they are basically a more app-store-like approach to application installation, permissions, and more. So apt/dpkg (or, if you are on Fedora, dnf/yum/rpm) are the traditional approaches, and Snap/Flatpak is a very different approach altogether.
I would probably just uninstall the Discord from Snap and keep using the .deb/dpkg approach.
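If it helps, the cleanup could look something like this (the .deb filename is whatever Discord's site gave you; I'm assuming the Snap copy is literally named `discord`):

```
snap list | grep -i discord                 # confirm which copy came from Snap
dpkg -l | grep -i discord                   # ...and which came from a .deb
sudo snap remove discord                    # drop the Snap copy
sudo apt install ~/Downloads/discord-*.deb  # (re)install the .deb; apt resolves its dependencies
```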
Fam, you probably installed Discord through the package manager or the AUR and then when it said there was an update you said "I'll figure it out" and downloaded the tarball that option offers and built it.
You should remove them by doing `sudo pacman -Rns discord`, followed by trying `yay -R discord`.
Then install it from either the AUR with yay or Arch repo with pacman.
I don't remember how I first installed it, but for the update, I downloaded the .deb file they offered by default, couldn't do anything with it. Don't remember how I solved the problem.
Edit: actually, I think I installed it via Snap Store. And the current Snap Store version is .28, but the new update is .29. So the .28 client is just in a loop of "there is a new update and I am out of date, so now I am just a modal that downloads the packages at these URLs". Not sure where my other installation came from, if I somehow did a fresh install or what.
I don't really get why there isn't just one obvious reliable and centralized way to install software on this thing. My Slack, VSCode, and other apps auto-update without my having to manually do anything, which, IMO, is how it should be by default.
I'm on Ubuntu--is there benefit to using Pacman on there? My understanding is that people do.
> I don't really get why there isn't just one obvious reliable and centralized way to install software on this thing. My Slack, VSCode, and other apps auto-update without my having to manually do anything, which, IMO, is how it should be by default.
The shame is Linux used to have this, and old-fashioned and geeky distributions still do. In fact, it used to be one of the big advantages of Linux over Windows and OS X: you installed everything from the repos, and everything auto-updated from the repos.
Yeah I know, I'm kind of expecting more from Linux in that respect.
But I'm also just genuinely wondering if people more experienced in software engineering would agree that it is better, ideally, to have one clear, right way of installing software.
I don't think there is anything in life for which "one clear, right way" ever works.
For software at least I have very different requirements for
- the kernel on my work machine, coreutils, base system, etc (I want system updates and good stability and one single version installed on my system in a central location, can be read-only)
- development toolchains (I want multiple versions side by side in standard folders, e.g. multiple ABIs, cross toolchains, etc.)
- productivity apps (calendar, mail, etc). (I want one version in a central location with possibility of rollback)
- end-user apps, video, drawing & music apps, etc. (I want multiple versions side by side, installed in local folders, since a very basic practice when doing artistic work on a computer is to pin the artwork to a specific version of the software)
>I can't even use my large MSI 1440p monitor with this computer because Linux (also perhaps because of the actual gfx card too, to be fair)
This part makes no sense. Linux works with any monitor; monitors connect via DP/HDMI and don't need device drivers. I have two 1440p monitors on my system. If you're having a problem here, it's undoubtedly the graphics card, and Linux has long had issues with drivers there (mainly Nvidia), but even here most Nvidia users say it works fine for them.
My experience with 1440p monitors on linux is terrible, and I'm being generous.
Starting with the obvious, DPI detection is mostly non-existent, and it seems the default font hinting/sub-pixel settings aren't actually good for high-DPI monitors - so you either get shitty fonts and graphics with good screen real estate, or an FHD experience with acceptable quality. And then you have to choose - do you want color management or different DPI settings per monitor? Because you can't have both - Wayland doesn't seem to have proper color management yet, and X/Xorg doesn't have different scaling settings per monitor.
Did I mention Wayland supports different dpi settings per monitor? Well sometimes it gets confused, and doesn't work well. Getting my kubuntu (I know, running kde doesn't help) to work with both my FHD and QHD monitor in an acceptable dpi setting took several hours, and forced me to switch from XOrg to wayland. Now instead of a robust desktop, I have a machine that needs to be rebooted every week because it starts forgetting to update screen regions - imagine a youtube video playing, but you only see the first frame.
I definitely do not recommend mixing monitor resolutions; I tried that on my work PC and it was a disaster. So for both work and home, I have dual 27" monitors with the same resolution (4k at work, QHD at home). The one at home works fine, but the one at work needed manual setting of the DPI, so there do seem to be problems with DPI detection as you say.
Seems to work ok for me with a 16:10 Lenovo T14 Gen3, which is 1920x1200 connected using a Thunderbolt Docking station to a LG 43" 4K screen and a portrait 1200x1920 Dell 24" display. All of that with Debian 12 and Gnome. Yes, there is no scaling in this configuration. With my previous T470 I used a 27" 4K screen for some time in otherwise the same configuration with 1.5x scaling. I had to turn some Gnome setting on to make it available but it worked.
I am on Wayland since at least 4-5 years back. The setup was a bit funky those 4-5 years back, sometimes it wouldn't come back up after suspend/ resume with the same configuration or when hot-plugging the docking station. But that is mostly a thing of the past and the setup overall is quite stable.
I still think the best experience was during the Dell Latitude E6420/E6430 days when used with the proprietary docking station. That worked every single time even when I, at that time experimented with XMonad. Yes, I haven't used fractional scaling then and the laptops were much bulkier (however also completely quiet when idling, which isn't the case with the Lenovos). Good times.
Overall, I am very happy with the setup, and Debian has given me no unwanted surprises even though I have used Debian Testing for almost 10 years now. My tasks and interests don't seem to require using Windows or macOS in a way I couldn't work around. That definitely helps, but I also really feel that the operating system does not bother me and I can get my work done.

I also don't suffer from all the things I remember from using Windows, or that I still see other people deal with to this day. Even a clean installation of Windows on proper hardware like the ThinkPad T470 with just Intel components is "an experience". The touchpad does not work nearly as well out of the box as it does on Linux. When you install the official drivers for it, a window pops up with some inexplicable error and no useful information. And having an uptime of more than 30 days on Windows personal computers seems to be pushing the envelope nowadays. I am not an uptime masochist, but rebooting really isn't something that bothers me on Linux; I do it occasionally for a newer kernel when it suits me.
> I can't even use my large MSI 1440p monitor with this computer because Linux
Ridiculous. Is it also Ford's fault I can't use an F-150 because I live on a mountain without roads? You may have purchased hardware whose maker has explicitly chosen to make life difficult for anything but Windows; that is hardly "because Linux". Nobody is even asking THEM to make drivers (although that would be nice); everyone would settle for "tell us how to use the hardware", or at the very least "don't actively prevent us from making drivers".
And why can you not use the monitor?
Your Discord problems are almost certainly caused by you doing it wrong. Linux is not a magic solver of problems; you know your way around Windows, but if you invested the same time into learning Linux as you did Windows, you'd know how to do things there as well.
I think this comment exemplifies the attitude that leads Linux to be this mostly niche thing that even most devs don't want to deal with.
> Your Discord problems are almost certainly caused by you doing it wrong. Linux is not a magic solver of problems; you know your way around Windows, but if you invested the same time into learning Linux as you did Windows, you'd know how to do things there as well.
No bro, it requires zero investment of time to "know your way around windows" for basic stuff like installing consumer programs--that's the thing. Not sure how you're missing that. I don't think I've ever installed anything incorrectly on Windows. I'm not sure that's even possible.
The issue is not that I expect Linux to be a "magic solver of problems"--it's that there are a lot of problems it has which are ridiculous in the first place and don't exist on other operating systems.
> No bro, it requires zero investment of time to "know your way around windows" for basic stuff like installing consumer programs--that's the thing. Not sure how you're missing that. I don't think I've ever installed anything incorrectly on Windows. I'm not sure that's even possible.
Hmm, is that why there's SOOOO much stuff around "customer support"?
It's funny how the "issues" affecting Linux (mostly that it isn't EXACTLY like what they're used to, Windows) are huge blocker issues, but Windows gets a free pass for everything.
Linux isn't ready for the desktop because of tiny things, but Windows is, despite issues unimaginably bigger.
Hell, when people mostly said "Linux isn't ready for the desktop" in the early 2000s, Windows couldn't be installed on modern computers without creating a custom floppy disk with drivers for your SATA drive. Yet somehow it was ready for the desktop.
Cannot agree more. I've been using Linux since Slackware in the '90s, but even now Linux for everyday use can be so frustrating. Currently using Ubuntu 22.10, because 23.04 has a problem with multiple-window management. But 22.10 on my PC has mono sound, while it's stereo on 23.04. Steam works fantastically on 23.04, but I have problems logging in to some games on 22.10. I have problems using CUDA on 23.04, but it works 100% on 22.10, and on and on it goes. I did go back to 22.04, but there were other problems then, so I decided 22.10 will be it for now. As so many people have said before, Linux is 95% there; it is just the last 5% that is so frustrating.
I was commenting on this just last week. Gaming on Linux has exploded since the Steam Deck was released (just as all Linux users were hoping it would).
All the effort which went into Proton to get games working on Steam Deck has done wonders for the desktop.
It used to be that you could play AAA games up to about 2 years prior (same story as macOS today), but now you can play almost anything on desktop Linux. I even run a HiDPI / 5K monitor and it all works now, at least on Arch.
The only notable exception at the moment (truly, a surprise) is Starfield not working on day 1. But the reasons net down to GPU drivers - and progress is already being made.
In 1989, I became a Unix user because I was in college. I had been a DOS user, not much of an Apple fan, Commodore lover with a VIC-20 and C=64, and even an Atari BASIC user before that. I knew systems and I knew architecture, and I easily adapted to the CLI.
I was immediately entranced by the simplicity and elegance of Unix. I got a sense of gigantic systems humming beneath (or above) the terminal room, because this was an OS capable of scaling massively. Yet it allowed the end-user to piece together simple building blocks, in shell scripts, pipelines, and C or Pascal programs.
So I grew and adapted to Unix, and I lived through the heady proprietary days with Sun Microsystems, HP, and DEC. Then I saw that bubble burst as Linux and the Wintel architecture supplanted them and dominated the market. I used all sorts of GUIs: OpenWindows, CDE, Cygwin, etc.
I ran Unix or Linux at home between 1991-2022. I loved to tinker! If a gearhead always has his hotrod up on blocks in the driveway, I always had the cover off my computer and I was poking it in some fashion. I loved to play with software and configuration, and play sysadmin to my home lab. Until a few years ago, this was OK.
However, my days of tinkering came to an end. My days of wanting to self-host services ended. I became even more of an end-user than a Power User, and as of 3 years ago, I needed systems that work and are supported. I would tinker no longer.
So I said farewell to Linux. Now don't get me wrong. I fully support Linux in all its forms, from Android to the data center to hyperscale supercomputers. I just don't find a space for Linux or Unix at home anymore. Thanks for all the lovely years.
If you ever read Windows Internals, you'll see that overall Windows is surprisingly simple; the hidden proprietary things like encryption/authentication/ads etc. run on top of everything else.
Linux and OS X are likewise still not complex at all. What has changed the most over the years are firmware/hardware, drivers, and security-related items like the addition of sensors and ICs for encryption, which are indeed very opaque by nature.
That was painful. Dude clearly hates systemd despite not wanting to be labeled a systemd hater. Wake me up when Linux decides not to work on systems without a TPM and we can talk then.
I believe Lennart explained things well enough - being _end-user_ targeted means much more dynamic things vs servers targeted.
> Hardware and Software Change Dynamically
> Modern systems (especially general purpose OS) are highly dynamic in their configuration and use: they are mobile, different applications are started and stopped, different hardware added and removed again. An init system that is responsible for maintaining services needs to listen to hardware and software changes. It needs to dynamically start (and sometimes stop) services as they are needed to run a program or enable some hardware.
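As a concrete (if contrived) illustration of that last point, systemd can tie a service to a device unit, so plugging in the hardware starts the service and unplugging it stops it. A rough sketch, with a made-up device path and service name:

```
$ cat /etc/systemd/system/serial-logger.service   # hypothetical unit
[Unit]
Description=Log whatever shows up on the USB serial adapter
BindsTo=dev-ttyUSB0.device
After=dev-ttyUSB0.device

[Service]
ExecStart=/usr/local/bin/serial-logger /dev/ttyUSB0

[Install]
WantedBy=dev-ttyUSB0.device

$ sudo systemctl enable serial-logger   # from now on, plugging the adapter in starts it
```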
Complex systems are less reliable, so, the author is right that by increasing complexity, Linux is becoming unstable like Windows... prior to Windows 11, which is great, to be honest.
I agree with everything the author said. I could have written that. But having said that, I’d add this.
I’m astounded at the customization that’s still possible. Gnome-tweaks still works on Ubuntu 23.04. I moved the activities bar to the bottom and merged it with the dock (some extension). I changed the fonts to Helvetica everywhere in the system UI — just gnome-tweaks.
Only annoyance was that I had to create a symlink to make the Firefox snap see the fonts I had installed.
It’s pretty cool that hidpi even works. I now think things like dbus are intrinsic to the problem of IPC for GUIs.
——
I can see why snap exists. It’s worth it to make it hard for a random ‘curl | sudo bash’ to break important programs.
But yes it’s ridiculous that I can’t update Firefox snap while it’s still open. And that it won’t auto update if I just close Firefox after getting the pop up about needing an update and wait a few seconds. I’m optimistic that will happen.
Even in 22.04 I could entirely disable the dock using the same tools to disable third-party extensions.
——
I’m a huge systemd hater. But I recently noticed it has something to run an arbitrary one-off program as a unit with all of the isolation/logging facilities that provides. It’s pretty great from a tech perspective.
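I believe the tool in question is `systemd-run`; a rough example of the kind of thing it can do (the job name and command here are made up):

```
# run a one-off job as a transient unit, with a resource limit and journal logging
systemd-run --user --unit=bigjob -p MemoryMax=1G make -C ~/src/project
journalctl --user -u bigjob -f   # its output lands in the journal like any other unit

# or get an interactive shell inside a throwaway unit
systemd-run --user --pty bash
```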
So I’ve softened my criticism somewhat. :)
——
It’s amazing how compatible the various distributions still are.
I’ve concluded these problems are not easy. The complexity in modern Linux is probably fine within a constant factor (analogy with time complexity — I’ll edit to clarify).
If we didn’t have multiple daemons etc we would have a single giant “unified” system like systemd for gui also — like windows.
—-
Finally, I think if we really want a simple gui stack we need to (rough sketch after the list):
1. Get rid of graphical login and go back to VT login + startx.
2. User switching by alt-ctrl-f2 + startx.
3. Screen lock performed by actually logging out using the gui equivalent of tmux. (Jwz’s post linked here was educational.)
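For items 1 and 2, the moving parts really are this small (the window manager is just an example):

```
# ~/.xinitrc (read by startx); pick whatever WM you like
exec openbox-session

# first user: log in on tty1, then
startx

# second user: Ctrl-Alt-F2, log in on tty2, then
startx -- :1    # explicit second display number
```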
——
Thanks for reading so far. And THANK YOU Ubuntu and Red Hat for keeping the flame still burning.
PS: one final pet peeve. Why the heck does Ubuntu think it’s ok to half-ass booting other OS’s? Was super impressed that Oracle Linux (RHEL clone) actually listed all my OS’s and booted all of them after install. My hat is off to you.
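For what it’s worth, I suspect this is mostly GRUB 2.06 disabling os-prober by default rather than anything uniquely Ubuntu; the usual workaround is:

```
echo 'GRUB_DISABLE_OS_PROBER=false' | sudo tee -a /etc/default/grub
sudo update-grub   # re-scan the disks for other OSes and rebuild the boot menu
```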
> But yes it’s ridiculous that I can’t update Firefox snap while it’s still open.
Agreed, it really could use some kind of staging installation that it can quickly switch to.
> And that it won’t auto update if I just close Firefox after getting the pop up about needing an update and wait a few seconds. I’m optimistic that will happen.
This does seem to happen now. I'll get a notification that something has an update ready, I'll close it, and a few moments later a notification that the update has been installed and I can click that notification to relaunch it.
This is all very adjacent to the suckless philosophy: https://suckless.org/philosophy/. I sometimes wish we had a reset button to figure out where we went wrong; I’m not a fan of systemd, but it seems like the slide started even before that.
Nah, I know what you mean, but the author actually has a real point and explains it well, even if it's a bit fuzzy, and the article also links to other good reads. It is not just "I don't understand how the Linux boot process works", and that framing doesn't do the complaint and the concerns justice at all.
It is still very far from Windows though; that really is a bad tagline.
The funny thing is, the boot processes on Windows and Linux are remarkably similar, and have always been.
On Windows, firmware loads the bootloader, the bootloader loads ntoskrnl (the NT kernel), ntoskrnl starts the first user process smss.exe, smss starts winlogon.exe, winlogon starts the security subsystem lsass.exe, logs the user on, and starts the shell, explorer.exe.
On Linux, bootloader loads vmlinuz (kernel), vmlinuz starts first user process (init), init starts display manager (e.g. gdm), gdm logs user on and starts another copy of the X server and launches the user's favorite window manager.
These have essentially stayed the same since the first NT 3.1 release and Red Hat 0.9. I'm not familiar with Mac OS X or other Unices, but I imagine they aren't that different. Modern desktop OS design has essentially converged to the point where there is no fundamental difference between any modern desktop OSes, because after all they are all trying to solve more or less the same set of problems.

When a user inserts a flash drive he expects it to be mounted automatically (you can't honestly expect most end users to type in a sudo command to access their flash drive, can you), and this in fact requires careful coordination from a ton of system components to implement securely, because mounting a filesystem is a privileged operation (you certainly don't want the end user to be able to unmount the root filesystem, but he should be able to unmount his flash drive). This is why things like polkit are necessary.

People in Linux land love to complain about dbus --- oh, it's turning Linux into the big evil M$ Windowz --- but how do you propose to solve the problem of two processes having to send objects to each other? With plan9's 9P? How about authentication? How do I make sure the process I'm talking to is really the process I think it is? If you try to solve all these problems, you end up with something that's essentially no different from dbus.
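To make the dbus point less abstract: because everything speaks the same bus protocol, you can poke at it from a shell. Assuming a notification daemon is running on the session bus, this calls the exact same interface every chat client and mail notifier uses:

```
gdbus call --session \
  --dest org.freedesktop.Notifications \
  --object-path /org/freedesktop/Notifications \
  --method org.freedesktop.Notifications.Notify \
  demo 0 dialog-information "Hello from D-Bus" \
  "any app can call this same interface" '[]' '{}' 5000
```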
> The funny thing is, the boot processes on Windows and Linux are remarkably similar, and have always been.
>On Windows, firmware loads the bootloader, the bootloader loads ntoskrnl (the NT kernel), ntoskrnl starts the first user process smss.exe, smss starts winlogon.exe, winlogon starts the security subsystem lsass.exe, logs the user on, and starts the shell, explorer.exe
Any idea where I can find more about this?
I made a blog post about this; my start-up is different. (Talking about Windows.)
This is what is moving me away from Linux. It's been a good run, but the more it becomes like Windows and such, the less appealing and useful it is to me. If I wanted to be using those types of operating systems, I'd just use them directly.
dbus, systemd: these become building blocks because they allow the software on top to be split into smaller, replaceable, and inspectable components. wayland offers the same thing in contrast to X11. the side effect of having two-thousand 5k-LoC userspace projects instead of two-hundred 50k-LoC projects -- the side effect of _making a system more inspectable_ -- is that the complexities are made more visible (the interfaces/boundaries are more plentiful).
> Granted there are other Linux distributions like Gentoo, Alpine, Void, NixOS, etc. that are still conservative in some of these regards... However, these are not the popular ones, thus not the ones where the most development effort goes into.
NixOS has _by far_ the most software packaged and the most packaging activity. moreover, postmarketOS is likely the most widely-deployed distro for FLOSS mobile phones, and it's a downstream of Alpine. so i just disagree with this statement. but back to the proliferation of userspace services/interfaces, because IMO that's what's enabled these frontiers:
any project can ship a systemd service and/or a dbus file declaring which interfaces it implements. now your OS doesn't have to do anything special to connect the different components. your chat client announces "i'd like to speak to an <org.freedesktop.Notifications> implementation please", dbus says "oh, i've got a file here saying that dunst can do that for you, let me launch its service", and 100ms later a chat bubble notification hits your display.
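that "file" is nothing exotic, by the way -- on my machine dunst's looks roughly like the snippet below (the exact filename and contents will vary by distro and version):

```
$ cat /usr/share/dbus-1/services/org.knopwob.dunst.service
[D-BUS Service]
Name=org.freedesktop.Notifications
Exec=/usr/bin/dunst
SystemdService=dunst.service
```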
now you hear about a new notification handler, "SwayNotificationCenter". uninstall dunst, install SwayNC, and now the notifications have fancy inline replies, a panel that lets you mute specific applications, which you can open from the tray icon, and so on. actually, it's new enough software that your OS doesn't package it. so you write a PKGBUILD, or APKBUILD, or nix file for it. because the systemd/dbus descriptions are shipped by the package, your package script is all of 10 LoC. easy enough to open a PR against your distro for that, it takes all of 5 minutes for the maintainers to review something this small and standardized.
now that it's in a distro, it gets more traction. your distro (or a relative of it) decides they'd like to make SwayNC the default notification daemon for their users. but they want it to be consistent with the rest of the desktop, so they'll have to theme it a bit. peeking at the dependencies, they see it uses gtk, so they launch it with `GTK_DEBUG=interactive swaync` to view the document tree and modify the component styles until it fits in. SwayNC didn't have to do anything special to do that, maybe their devs didn't even know that feature existed -- they got it for free just by using a standard toolkit.
now i hope the author might understand why dbus/systemd/wayland are appealing to distros and packagers. there's a long chain between upstream/developer and downstream/user. i figure the author spends more time near the latter than the former. perhaps someday they'll experience the full cycle: maybe they'll find no music player meets their needs and decide to author their own, watch as it gets adopted into different downstreams without any action on their part, enjoy unexpected bug fixes and improvements that their users send back upstream, and walk away with a broader understanding of the different tradeoffs at play here.
It's all gotten too complicated; clearly the direction of Linux is controlled by big companies who are mainly interested in the datacenter. I use OpenBSD because it's still Unix; it's not trying to become something it's not, or change itself to suit whoever pumps in the money.