These days, as many others have pointed out, if you choose the right hardware, everything just works.
At my company, we are currently in the middle of upgrading aging Windows 10 desktops in the Customer Service department to Ubuntu LTS. So far the feedback from the CS agents has been universally positive. Ubuntu runs faster on the existing hardware, and that's about all they notice. Chrome is still Chrome, and that's what they use for all the CS apps, including VoIP calling.
I use Linux myself, but when people I know try it out, they immediately leave after encountering problems with audio, DPI scaling, etc.
But here's the thing: Linux has won. The desktop is a dying market. Linux is literally the most used operating system on phones and in the data center, which is where it counts.
The DE has gotten much better, transitively, from all the work that's gone into those other use cases. But we still have a gamer market that is incredibly proprietary, holding up the middle finger to those of us who think proprietary drivers and codecs are evil relics of a less open world.
If you really want to see Linux on the DE, boycott NVIDIA, or write them a letter.
Actually, tablets are a dying market. Laptops are a declining market. Fewer desktops are being sold, but they're lasting longer because there's no reason to replace them.
And your 3 year old hardware probably will work a lot better than it does under Windows ;-)
What makes this such a victory? The fact that Android uses a Linux kernel is totally meaningless to the overwhelming majority of Android users. The bootloaders on their phones are almost always locked down and even if that weren't the case, if their kernels were swapped out with something BSD derived how many people would actually notice?
What is the objective with promoting Linux? Adoption for the sake of adoption? Getting the Linux kernel onto the largest number of CPUs, just to make that number go higher and higher? Is pursuit of some number the real goal, or is the goal actually something to do with user empowerment and liberation/freedom? A billion tivoized smartphones with linux kernels certainly optimizes for that number of CPUs, but from a user liberation/freedom standpoint Linux/Android is virtually moot.
That Android gives the user more freedom than iOS is completely incidental to what kernel it uses. Android could be a platform open to third party app stores and sideloading that ran on a BSD kernel or even something completely different. To the average Android user, their Android phone is actually less open than Microsoft Windows, which also allows third party 'app stores' (e.g. Steam) and 'sideloading' (so normalized on Windows that it doesn't even use a special term like that.) Whether a system will be open to third party developers without any gatekeeper is a matter of company politics, not something that's emergent from what kernel is chosen.
So yeah, pop open the champagne. Linux won and I'm going to need a lot of champagne to enjoy this hollow victory.
Google's implementation is derived from Apache Harmony. Is the same true for Apache Harmony?
In neither case is it 100% compatible with the full standard Java API and JVM bytecode.
Partially is the keyword here.
It "counts" for whom? It sure doesn't count for me as an end user who wants to run some bloody programs on my laptop.
Also, whether it's Linux or anything else on the phone is totally irrelevant to the end users. The actual layers the users see are the Google stuff and the Google Java APIs, etc. By porting these they could -- and will -- move Android to Fuchsia tomorrow and nobody will even notice.
When we wanted Linux to "win" in the 90s it was the whole desktop Linux (or, if that was there at the time, a mobile phone Linux), complete with desktop environment, userland, etc.
Not as a backend that might or might not be Linux, for all the users care.
Amazon could rebuild AWS on top of hypervisors, BSD, or what have you, and none of those users would notice, unless they were doing some native FFI.
> BSD or what have you and none of those users would notice
FaaS and CaaS don't negate the fact that there is a kernel API you're interfacing with. It absolutely makes a difference. Claiming that kernels are completely interchangeable is extremely naive, but then again, your first statement about hypervisors is so absolutely, confusingly incorrect that it's difficult to take what you say seriously.
>FaaS and CaaS doesn't negate that there is a kernel API you're interfacing with
Only insofar as it can't itself be emulated...
I was watching a video on YouTube (can't find it now) about gaming on Linux, and it suggested adding the PPA and installing the latest drivers. (This was after I refreshed my desktop to Ubuntu Budgie.)
So I did that and installed the latest Nvidia drivers. Tried out some Steam games and went to bed. The next day I couldn't boot into the desktop: I got stuck in a loop and couldn't remove the drivers to get into the desktop at all. It completely bunged my computer. Apparently a lot of people had similar issues.
I have NEVER had a plug and play solution for display from my laptop. I do expect to be able to plug in an HDMI cable and be able to give a presentation. And I expect to be able to do this with an nvidia card. But they have always failed me and I don't understand why this problem still exists (I would actually be interested if someone knows).
This one: https://newegg.com/products/N82E16814150794
Not really. The problem is NVIDIA doesn't give a crap about good user experience. The AMDGPU driver is pretty good and open-source to boot.
You wouldn't be super happy if you had just bought that PC, but you'd understand, though. Another example: if you buy a car and it doesn't have Android or Apple integration, you'd think, "damn, I should have researched it."
Unfortunately, Nvidia is so dominant (and rightly so, from a price-per-performance standpoint).
Also, though: would someone on Ubuntu or a point-release distro (which is the majority of Linux end users) ever get this new firmware until they reinstalled to the next OS version? Since hardware driver support seems so tied to the kernel version, I would think not, unless they manually updated the kernel. Not only is that scary on a distro meant to run with a specific kernel, it's something they have to go out of their way to do. It also means they'd miss out on DE improvements and other hardware/protocol support by as much as a year or more, depending on their upgrade cadence. TL;DR: rolling release provides the best Windows-like experience, and an Arch-based distro that emphasizes usability and mitigation of system-breaking updates is what best fills that niche, so Manjaro master race.
I'm not sure about the price-per-performance. I just bought a new system with a Vega 64, and if you take into consideration I could use FreeSync instead of paying an extra $200 for GSync in a monitor, it works out better.
On Windows sure (I assume), but that goes out the window when you use Linux. On Linux with an Nvidia card you're paying for more downtime/breakage, much worse performance with the FOSS drivers, incompatibility with wayland, etc.
You know... If the thing doesn't work, price doesn't matter that much. I prefer something more reliable that's not as fast (I've been using Intel GPUs for many years now)
And rightly so, like you said. That's why I can't take the other user's advice and just shell out money to AMD.
And while I do love Manjaro, I still have this problem. In fact I get no detection of my HDMI port. When I had Ubuntu I could (sometimes) get it to display if I restarted the computer with the HDMI plugged in.
I'm also not sure why the Linux devs don't take this more seriously (or at least I haven't seen them do so). Linux is in such a good state now that it is easier to convert people. But this problem prevents a LOT of people from switching, and rightly so.
As for "NVIDIA sucks and doesn't play nice": there needs to be a better argument than that. Like, why? There are so many people developing on Linux with their cards. They are dominant. Most GPU programmers use CUDA, and a significant number of ML researchers are using Linux boxes. It doesn't make sense (to me) to just say "f you" to all those developers. Nvidia doesn't have a motive to push people to Windows.
One thing that doesn't seem to work is hardware acceleration for Youtube videos in Firefox. Playback in a window is fine, fullscreen isn't. Another is that dual-link DVI did not work, so I had to change to using DisplayPort instead, though that issue may have been fixed in the current driver, I haven't checked. For me that's fine, I wanted to switch to DisplayPort anyway, to use the audio output on my monitor.
Those are incredibly minor issues compared to the woes of using the proprietary Nvidia driver. Not to mention the sordid history of graphics drivers on Windows.
KDE Neon, which is based on Ubuntu LTS.
So what happens if/when Google switches to Fuchsia on their phones?
This would just cause even more fragmentation in the Android ecosystem.
Apple's PPC to x86 transition was successful, as was their Carbon to Cocoa transition. If you want device driver specific examples, Microsoft's Win9x to XP transition was successful, if lengthy, as was its transition to WDDM for video drivers.
SoC and other hardware vendors go where the money is; they may grumble and drag their feet but they will make the transition if it's required of them.
I doubt very much that SoC/smartphone vendors would bother to contribute to microG instead.
Android might run on the Linux kernel, but that is a minor detail to userspace, and it might be on the roadmap to be replaced by Zircon.
So no, it hasn't won anything besides the server room.
That is more due to how OEMs lock out non-Windows OSes from running on their hardware. If users were given a choice, the "Windows tax" alone would be enough to convince people to give Linux a try, particularly in the low-end segment.
* Android is technically Linux, but smartphones are typically locked in.
* All apple products are notoriously locked in.
- Linux is an implementation detail in Android: it isn't exposed to userspace (it's not part of the official Java/NDK APIs), Treble made it even less upstream-like, and who knows, it might even be replaced by Fuchsia's Zircon
- Apple products being locked in doesn't have anything to do with laptops and 2-in-1 devices being the future desktops
Screen rotation, pen support, virtual keyboard, they're all there and get the job done. The only thing that doesn't happen out of the box is that my keyboard remains activated when I switch to tablet mode.
I'm having way more issues with proper scaling on my 4K screen than I do with the 2-in-1 support.
Since phones are completely locked down, and Linux is the symbol of software freedom, I'm not sure about that.
Besides, companies still need desktops (or laptops) to work. Nobody is going to do accounting on Android or iOS. And I'm certainly not going to dev on anything other than Ubuntu.
Do you have any stats on this? As far as I know Windows and Linux are very competitive on the server and depending on the analysis Windows comes ahead.
I assume this is true for most other cloud providers. But that doesn't take into account all the companies running racks of Windows servers in their basements.
I'm not sure that is the win OP was looking for.
I have a Dell XPS15, specifically chosen because the XPS line is supposed to work well with Linux. Numerous problems, all related to drivers. But the main problem is that every time Windows updates it wipes out GRUB.
I figured that my problem is that every laptop I've bought has been designed for a non-Linux OS. So I've ordered a Purism laptop, which should arrive any day now. Hopefully actually buying a laptop designed and built to run Linux will provide a better experience.
It really does have an OS X-level premium feel. The only bit I am missing is fractional DPI scaling, which is apparently on the way, but turning on big text in accessibility mode works well enough for now.
Distros are all either Ubuntu LTS or Linux Mint with default DEs. I am pretty sure one fellow switches to a tiling WM some of the time, but I forget which.
Can't promise projectors will work, but I would expect a Thunderbolt-to-HDMI adapter to work just as well as the DisplayPort one does.
From my experience, this issue comes from using an MBR for Linux while Windows uses EFI. I've never seen Windows mess with another EFI bootloader on the ESP. The issue is made worse by programs like UNetbootin being terrible at using EFI.
The only thing that doesn't work 100% is the fingerprint reader; apparently you need to set up your fingerprint on Windows first.
I am swearing much less now that I don't use Windows so much (even though Windows was in a VM).
If your problem is dual-booting Windows, then why would you dual boot? For games, get a gamer PC. For work, use a VM.
It seems a little harsh to blame Linux for Windows killing your boot ;-)
I'm travelling at the moment, so a desktop games machine is not an option. And I have to test stuff on Windows. But when the Purism turns up I'll relegate the XPS to games machine duty ;)
From the moment I bought a Dell Developer Edition that was built specifically for compatibility, the experience has been fantastic.
IMHO, Ubuntu is particularly bad about this. All I can say is what I use everyday with no problems.
So, no. It does not have "the best" hardware support and no distro has "by far" more hardware support than all other distros.
Disagree. At least traditionally package choice and configuration by distro maintainers made a huge difference, as proven by the fact that problems could be solved by just fixing a config file or adding a package from the standard repo.
The issue isn't even remotely just poor hardware support.
At the kernel level and close to it, the areas that consistently give trouble are video/GPU support and audio support. GPUs are hard, but there's no excuse for the mess in audio persisting for a decade. And while video/GPU support is genuinely tough, the current situation, where you have a choice of five different Nvidia drivers for the same board, all with different bugs, is not good.
As the author points out, regression failures are a big problem. The sheer bloat of Linux has made it unmaintainable. And who wants that job? Big chunks of important code are abandonware.
That's not right. We all know support for some hardware is spotty and we all have learned to avoid that. My laptops tend to use Intel GPUs, for instance, because I want to work on them, not fix them.
I'm eyeing that new Lenovo thingie with an epaper keyboard, but I know it'll run Windows and probably never be upgraded because nobody will write the drivers to keep that thing alive past Windows 12.
> you have a choice of five different NVidia drivers for the same board, all with different bugs
Stop buying NVidia hardware. They actively sabotage Linux development. AMD is much better in that regard. Buy AMD instead (https://www.phoronix.com/scan.php?page=news_item&px=AMD-Hiri...).
> The sheer bloat of Linux has made it unmaintainable.
Nope. It's still moving forward and it's still quite reliable. All my workloads run on it (except my pets that run on FreeBSD and OpenIndiana because I get a kick out of managing different OSs).
> Big chunks of important code are abandonware.
There is a process to move obsolete codebases out of the kernel. That's why you can't use one of those half-IDE CD-ROMs that came with "multimedia kits" of the early 90's.
That's denial right there.
I'm a Linux guy. I post this from a Linux distro.
But realistically, we do have a huge amount of technical debt, and less and less incentive to work on it.
Case in point: every time we touch something to improve it, we break things for a year or two. PulseAudio? Took four years to be stable. systemd? Three years at least. NetworkManager crashed for six good years, and still can't work decently with sleep mode.
We manage to provide features because the Linux kernel devs are incredibly competent. They also limited the bloat to a manageable stack on their side. But around that, it's the Wild West.
Though non-Intel graphics are still shit on it, AFAIK.
Honestly, it seems to me like what you're complaining about is the nature of FLOSS development i.e. we do it in the open and collect feedback from users rather than spending billions we don't have on focus groups.
Also, I remember the early PA days, and it certainly did not take four years to be usable, but it did take Ubuntu about that long to get it right. That's, however, a problem of holding up Ubuntu as the Linux distro, which is honestly a whole separate rant I could get into.
As for systemd and 3 years, I am honestly not sure what you're talking about. I've been on it since 2012 and it has been mostly smooth sailing since the beginning.
Sadly if you work with Deep Learning buying AMD is a surefire way to make most of the work published by others unusable without serious effort.
I made that mistake...
If people want linux on the desktop to offer a more polished experience the only way about it is for everyone to put their money where their mouth is.
If "Linux" were a company we would be justified in demanding a fully finished product before buying but free software is a resource that already benefits billions even if they only interact with it via android, or web services.
> As the author points out, regression failures are a big problem. The sheer bloat of Linux has made it unmaintainable. And who wants that job? Big chunks of important code are abandonware.
This seems to be unsupported supposition that we are supposed to take as received wisdom.
Lack of hardware support happens mostly on the other end of the spectrum, with very high-end graphics cards no kernel developer has ever seen from up close and multi-card setups that are essentially unique. I feel sorry for the people who need that but unless hardware manufacturers start to properly document their stuff, make it available to Linux kernel developers and START PAYING MONEY to have drivers developed (like vendors do with Windows) it'll not improve.
You mean people with economic pressure to be profitable?
As a Linux user, I know I represent a fraction of the market. I'm grateful when people invest in us, because I know it's a great move in principle, but it's not always one money-wise.
Building hardware is HARD. Selling is HARD.
Being dismissive of the people who aren't able to provide Linux support is not going to win them over to our cause.
Selling more hardware is usually good for a hardware manufacturer. So is having fewer people returning their equipment because it doesn't work.
> not being able to provide linux support
Most kernel developers would be ecstatic just with proper documentation of the hardware being sold.
It all depends on the ROI. In hardware, economies of scale are at play, and they don't go well with niches.
> Most kernel developers would be ecstatic just with proper documentation of the hardware being sold.
I agree. That's mostly a matter of culture. Many companies won't publish docs because they are afraid of competition, piracy, or looking stupid.
If people so desire a 'unified experience', you have Windows/macOS. I came to Linux as a refugee from these and am tired of the attempts to pull it in the same direction. Not everything has to be the same, in fact that's a terrible world to live in.
I am also tired of people saying things on Linux are "broken"; they aren't any more broken than on macOS/Windows. Granted, you may have to get compatible hardware, which is only fair considering that's what you're doing when purchasing a Win/macOS machine, and yet on Linux there's somehow this grand expectation that any random crap HW should just work. You don't expect anything not designed with macOS in mind to work there, so why Linux?
I use macOS at work and experience not-so-rare kernel panics. There was also the bug in Premiere blowing up speakers on the MBP, a bug that allowed logging in without a password, APFS logging the encryption password in plain text, etc. Yet somehow no one is as strict about that as they are in insisting that Linux somehow "doesn't work", even as I sit here, having been productive on it for over a decade.
Just look at Windows land, where they do have one implementation to rule them all, more or less (Windows 10). Everyone who dislikes its telemetry or almost touch-only interface is either stuck with Windows 7, which won't be an option for much longer, or is stuck venting against Microsoft and grumpily installing hacks like ClassicShell to make things a little more bearable.
When Gnome 3 came up, everyone who liked the new direction kept using it, everyone else moved to Cinnamon or Mate (or XFCE, KDE...)
It's not just about competition, it's about being able to pursue different visions and different objectives.
Users only see this in terms of choice, but there's a great deal of value about it for developers as well.
I find the lack of options to customize GNOME irrelevant - I'm way past the day I cared about the wallpaper or the icons or the colors of the window chrome. I pay attention to what's inside the window, not its border.
The other day I fired up a Solaris 10 VM so I could give Wikipedia a proper Solaris/CDE screenshot and I was surprised it's actually still usable - the terminals are responsive and the rest, oh well... You don't want to use a GUI to copy files, do you?
The latest updates show there is still work left to do in this area.
... is this true? As a long time desktop Linux end user, whatever these lower level issues are, they've not been visible to me in at least a decade.
I think many of the points raised in the article affect people making desktop software for Linux rather than end-users of desktop Linux. It seems like a global list of issues for the entire desktop Linux ecosystem - which is totally valid but I think a more accurate title of the article might be "Why developing desktop software on Linux sucks" or "Why creating a desktop Linux distribution sucks" because I think my desktop Linux setup rocks!
Some features I'd like:
- Being able to open the context menu for the current folder, even if there are enough files to fill the view, without going up a level
- Being able to jump to files/folders in the current directory by name without opening search results
- Being able to add functionality to the context menu
You can: the Windows context-menu key / Shift+F10 works. If you have something selected, deselect it with Ctrl+Space first.
Some items in the context menu have shortcuts of their own (new folder: Ctrl+Shift+N, file/folder properties: Alt+Enter or Ctrl+I, rename: F2, etc).
> Being able to jump to files/folders in the current directory by name without opening search results
Not a solution, but a workaround: disable recursive search, and treat search as filtered down list. (I consider this one annoying too).
> Being able to add functionality to the context menu
Extensions can add menu items into context menu; for example, syncthing-gtk does exactly that.
I use Linux in a VM for very hobbyist level embedded development (think Arduino and the like). Driver problems are non-existent, all of the technical problems are non-issues in this environment. The problems that I see are all to do with the lack of a common set of services for building a GUI application.
Why do my text editor and Arduino IDE use different file pickers? It's because my text editor uses the KDE API, but the Arduino IDE uses something else. GIMP uses yet a different file picker from the other two. LibreOffice uses yet another file picker, that's similar to Kate's but slightly different. I'm sure that installing Atom and VS Code would introduce me to two more file pickers.
The reason for this is that each of these programs uses a different GUI toolkit and, as a result, has a different concept of what a file picker needs to look like. Some of them don't even agree on which order the Open and Cancel buttons should be.
Network transparency is another thing that suffers from this. On Windows, you can basically use a UNC path (\\server\share\path\to\file.txt) almost anywhere because the entire system from the file picker all the way down to the file APIs knows about UNC paths. In Linux, KDE apps do this one way, GNOME apps do it a different way, and command line tools need you to somehow mount the target server before you can even think about it. I last seriously used Windows about 14 years ago and I still miss this greatly.
None of these are insurmountable problems, but it needs someone to make a decision about the one true way to do things.
Things don't really work this way in free and open source development. There is no one person to make decisions, consensus is reached when the quality of something raises "above the bar" and actually improves things for all involved parties. If someone wants there to be an über-library that serves everyone's use case then it's up to them to go and do the work to build that.
And it has been getting better in this regard. For example KDE and GNOME used to have their own IPC, multimedia & audio mixing backends, but now both have converged on DBus, GStreamer and PulseAudio, in part because these were intentionally built to be flexible low-level solutions. I'm sure there are more examples of this too but those are the first that come to mind.
With the assumption that the goal is for "vi //server/share/file.txt" to work the same as "notepad.exe \\server\share\file.txt" does on Windows, here are my thoughts.
First off, notepad.exe doesn't really care about the fact that it's a UNC path. It just opens the file with CreateFile (either CreateFileW or CreateFileA).
There would need to be replacements for the libc file functions. These could be a shim in front of libc, or baked right into libc. Note, there's a LOT more needed than "just" new file functions - any functions that do anything with paths need to be looked at. Shells would likely need some changes to work properly, though it's not like the Windows shell can truly do much with UNC paths - copying files to/from works, but you can't cd into them.
How does it ask for credentials? If it's via DBus, a desktop environment can provide the authentication prompts, but what about a pure-command-line system? Maybe the transport is just SSH and relies on the existing public-key authentication? But what if you're just doing a one-off thing and don't want to set that up? Using SSH is probably a decent idea since it's got authentication, security, and a file transfer protocol already built in.
On top of all of this, when you open //server/share/file.txt for writing, what does that actually mean? Is there a file descriptor? How does that work with the kernel? Does libc now manage all file descriptors with only a subset corresponding to kernel file descriptors? Could a pure user-space solution fake this well enough to actually work? Would this need to be a FUSE filesystem along with some daemon to automatically unmount the remote servers when the mount is no longer needed? Would it be something like the automounter, just a lot better? Does a kernel need changes for any of this to work?
This is one of those things that touches so many layers and potentially interacts with so many parts of the system, potentially all the way down to the kernel.
My guess, and I don't actually think this will happen, is that Apple will do something like this on Mac OS X and have a reasonable mapping to the BSD world underneath, then someone in the Linux community will come along and do something similar in a way that's better suited for Linux. As a parallel, Apple came out with launchd in 2005 to replace init scripts, systemd made an appearance in 2010 - both do very similar jobs, with launchd tailored to the needs of MacOS and systemd tailored to the needs of Linux. Maybe something similar could happen with UNC-like file sharing.
For the more complicated stuff it can be done but not everything is available via a simple GUI. GNOME and KDE have their own virtual filesystem layers in userspace, GVfs and KIO, I don't know what KIO does but GVfs supports a bunch of network backends and has a FUSE driver that can mount its own virtual filesystems and expose them to outside applications. So the features are there but I don't think they are well-presented right now, maybe someone can prove me wrong though.
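To make that concrete, here is a minimal sketch of how the GVfs FUSE bridge exposes a GVfs mount to ordinary applications. The server name `server` and share name `share` are made up, and the `gio mount` step assumes gvfs and a reachable SMB server, so it is shown as a comment; only the path construction runs as-is:

```shell
# Mount through GVfs (GIO-aware apps see it natively; needs a real server):
#   gio mount smb://server/share
# The FUSE bridge then exposes the same mount to any program under
# /run/user/<uid>/gvfs, with a path derived mechanically from the URI:
printf '/run/user/%s/gvfs/smb-share:server=%s,share=%s\n' "$(id -u)" server share
# ...so even a plain editor can open the remote file, e.g.:
#   vi "/run/user/$(id -u)/gvfs/smb-share:server=server,share=share/file.txt"
```

This is the closest current equivalent to the UNC experience discussed upthread, though the path format is hardly something a user would type by hand.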
It would have been nice if the kernel had better support for fine-grained control over filesystems like HURD or Plan 9 do. But instead it was decided that it was better to handle those things with userspace daemons, so that's where we are now.
Being able to mount a CIFS filesystem is fine, but it's not the same thing. In Windows, you can basically use a UNC path anywhere because CreateFile knows how to deal with it. The point is that you don't need to mount the remote filesystem (the Windows-equivalent being mapping a network drive).
What I'm really looking for is the user experience, not the underlying protocol. On Windows, I can just go "notepad.exe \\server\share\file.txt" and edit the file, on Linux I need to either use a KDE application or go through the ceremony of mounting the remote filesystem. It's the fact that the feature is silo'd into GNOME and KDE (and the fact that it doesn't even exist on Mac OS, but that's another issue) that bugs me.
I don't know why I didn't remember this earlier, but I actually explored this a number of years ago and came up with two things that are close, but not quite there:
First was to use a systemd automount unit, but I didn't really get anywhere with it. From the looks of it you have to know all the possible things you could want to automount, it can't do wildcards. Being able to do some kind of pattern matching on the requested path and translate that into a mount command would go a long way to making this work.
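A concrete illustration of that limitation: systemd wants one statically named pair of units per mountpoint (the unit file name must encode the path, so `/mnt/server` needs `mnt-server.mount`). Something like the following hypothetical pair (the export `server:/share` and all paths are made up) mounts the share lazily on first access, but there is no way to express "any host, any export":

```ini
# /etc/systemd/system/mnt-server.mount  (hypothetical example)
[Unit]
Description=Example NFS share (server:/share)

[Mount]
What=server:/share
Where=/mnt/server
Type=nfs

# /etc/systemd/system/mnt-server.automount
[Unit]
Description=Automount /mnt/server on first access

[Automount]
Where=/mnt/server
TimeoutIdleSec=600

[Install]
WantedBy=multi-user.target
```

After `systemctl enable --now mnt-server.automount`, touching `/mnt/server` triggers the mount, but every additional server still needs its own unit pair.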
I also explored the good old automounter, but it has a lot of the limitations that systemd's does. It does have the advantage of supporting host maps, which gets me a bit closer to what I'm looking for. The unfortunate thing that remains is that this is NFS instead of a modern protocol. If this were somehow backended on sshfs, I suspect it would be quite useful. Of course, sshfs is missing the concept of shares but that's not a showstopper by any means. Authentication becomes a problem since the automounter probably can't ask the user for a password, and may not even know which user is requesting the mount.
I have no idea how well either will work in practice. Modern Linux on the desktop is a very different environment than the one the automounter and NFS were built for. The systemd automounter looks like it serves a very specific purpose and can't currently do what I want.
Maybe all we really need is a modernized automounter and/or some extra features in systemd's automounter. These could lead to "vi /net/server/share/file.txt" working as expected which, quite honestly, is basically the same as what I suggested earlier.
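The path translation itself is the trivial part. A hypothetical shell helper (`unc_to_net` is not an existing tool; `/net` is just the classic autofs hosts-map prefix) shows the mapping:

```shell
# Map a UNC-style path (//server/share/file) onto the autofs "-hosts"
# layout (/net/server/share/file). Illustrative only.
unc_to_net() {
  rest="${1#//}"              # drop the leading //
  printf '/net/%s\n' "$rest"
}
unc_to_net //server/share/path/to/file.txt
# prints /net/server/share/path/to/file.txt
```

The hard parts remain everything underneath: triggering the mount, authentication, and lifecycle of the mounted filesystem.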
What limitations affect you?
(At home, I have Linux running on an HP MicroServer as my NAS; it exports filesystems via NFS. Other machines run autofs with the hosts map, so for example my wife's desktop - and mine for that matter - auto-mounts NFS shares on demand, and she can open any file directly in any application by accessing /net/$hostname/$path).
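For reference, the setup described above is a single line in the autofs master map (assuming autofs is installed and the servers export over NFS):

```
# /etc/auto.master -- the built-in "-hosts" map: exports from any
# reachable NFS server appear under /net/<hostname>/<export> on demand
/net  -hosts  --timeout=60
```

After restarting autofs, `ls /net/somehost` is enough to trigger the mount of that host's exports.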
NFSv4 is pretty modern ...
I believe this should also work for CIFS, if the server side supports Unix extensions (to do user mapping on a single connection), but I haven't had time to try it in my limited time at home.
> Authentication becomes a problem since the automounter probably can't ask the user for a password, and may not even know which user is requesting the mount.
If you have Kerberos setup, NFSv4 does the right thing ...
If you don't have Kerberos setup, then you're probably ok with just normal NFS user mapping.
The last time I tried it was years ago, so I can't remember what limitations I found. If I get a chance to do this in the near future I'll report back.
For the dbus/polkit authentication prompts, I've seen it work on the command line but have no idea how it works. If anyone wants to donate, I'll spend a day and half a bottle of good whiskey and come out with a blog post.
Having just checked, both Atom & VS Code use the GTK/Gnome file picker. LibreOffice is also using the GTK/Gnome file picker.
How about NFS in Linux? It's a lot more transparent than SMB will ever be in Windows. SMB is and will remain a Windows feature.
In Windows, you can basically use a UNC path anywhere because CreateFile knows how to deal with it. The point is that you don't need to mount the remote filesystem (the Windows-equivalent being mapping a network drive).
I just use NFS and autofs. Sure, it's a few seconds more effort to set it up, but it's a once-off cost.
Both KDE and GNOME have user interface guidelines. If Software doesn't follow it, well...
It makes me sad. Years ago programs written for either operating system tended to follow the UI standards pretty well, with the main exception being games. Microsoft started to try new things with Office, so if you wanted to see where the standard was going you just had to look at where Office was.
If your motive is to sell lots of copies to lots of people and make lots of money from it, investing energy into UI refinement makes total sense.
If you just have to scratch an itch, maybe it doesn't.
I’m sure they will both use the same one.
I don’t like Electron because it’s slow and consumes too much RAM, but I have to admit they do have that unified API. It’s quite high-level, and relatively stable because there’s only a single implementation.
I've been using Linux on the desktop for nearly 20 years and I have to say it's fantastic, despite the occasional headache, which seems to come far less frequently than with other major desktop operating systems.
IMO Linux on the desktop is in a remarkably fantastic state. There are A LOT of really great distros and software that just works. My daily driver is a 7-year-old Chromebook with xubuntu 18.04, and I do Java development on this thing!!!
It doesn't work for people who:
- need a locally installed copy of MS Office or other Windows only software
- IT admins that stopped learning a long time ago
As others point out, for some of us using Windows or Mac is a hassle. They're slow (30% longer compile cycles, don't even get me started on git), missing important customizability, no built-in universal package management, etc.
Funnily enough the main reason is probably that compiling the Linux kernel is one of the most filesystem taxing workloads and also one of those kernel developers care most about.
And for me, I find I have way more trouble doing things on Windows/Mac than Linux. I think it's really more a way of thinking about how a "desktop OS" is supposed to work. People coming from Windows expect things to work the same, and that's just not the case.
Likewise, when I unluckily find myself on some closed-source box, _very little works how I expect_. And man is troubleshooting harder, because there are so many "surprises."
My point is, I think blaming the operating system is not the answer - users need to adjust their expectations and open their mind a little.
This is a somewhat poor analogy, but it's sort of like a Chinese citizen (closed-source user) becoming a citizen of a democracy (open-source user). The government is going to work differently, and you can't claim democracy is broken just because it's so different from authoritarianism.
I like that analogy (even though I am still looking for that pure democracy/open source government).
In Linux you have the freedom to do almost anything with the system, but you have to know what you are doing, as the system usually does not stop you when you are about to do something stupid.
Windows makes me mad when it tries to manage me. Like "Yes, I really want to use this computer without a firewall or antivirus, because it is not connected to the internet and never will be, because it serves another purpose."
To do this you need to mess with obscure registry settings; the default behavior of Windows is to enforce it, and nowadays updates too, because most users don't know or care what they are doing and are used to being told what to do.
So I believe it is good that I can do anything with my system, but everybody started as a newb once, so a more beginner-friendly version could be helpful.
But Linux's main problem is hardware support, and fixing a broken audio/graphics/wifi driver is something that can drive away even very experienced people. (It drove me to ChromeOS for my laptop.)
I definitely do agree that "onboarding" could be improved. How I dunno. To me at least, it seems like I hear a lot of success stories from the tails of the spectrum - power users and developers on one side / the complete opposite on the other. And then for everybody in the middle, there's no other way to put it than it's almost a shit show:
On the software side there a million and a half different ways to do everything, and often an insane amount of "noise"/outdated info that needs filtering through to find what's relevant to your specific needs. Even at the lowest levels of the stack there is no "the one way", and I think all that uncertainty (especially from the beginner perspective) can make it feel like climbing a mountain.
Hardware, as you mention, is tricky if you don't know what to look for (and why would most people). At least from a longtime Linux user's perspective, it's incredible how much better things have gotten (since the 2.2 days in my case). But there's a ways yet to go, and it's by far the roughest where it's the most visible (ie the trendy bleeding edge). Part of that is just the nature of "lag" in open source development between code getting written, released, and finally showing up in your distro. That cycle can sometimes take 6 or 8 months, especially for hardware :(
Not that this helps users with existing hardware, but
* definitely always google before you buy (model name + "linux" and read the first page or two of results)
* stick with a non-high-DPI resolution screen
* WiFi, I've had the best luck with Qualcomm/Atheros, Intel, and Realtek (in that order)
* Graphics, get AMD. NVidia cards can work well enough with their proprietary driver, but the out-of-the-box experience is crap. Intel works great too, as long as you don't need it for anything heavy.
* Audio, for me the last time I had trouble was with one of the earlier Sound Blaster Audigy cards. Have stuck with onboard codecs since and honestly never had a problem.
I actually thought about that analogy for a while before, but rather used anarchy/libertarian vs. authoritarian/dictatorship ...
(basically the same point, only more radical)
"On the software side there a million and a half different ways to do everything, and often an insane amount of "noise"/outdated info that needs filtering through to find what's relevant to your specific needs. Even at the lowest levels of the stack there is no "the one way", and I think all that uncertainty (especially from the beginner perspective) can make it feel like climbing a mountain."
Yes. Even for simple things like a screenshot, there are a million ways. Not a problem in itself, but I came from Windows, where this is just the "Print Screen" key, and I did not think there could be a reason to do it differently - yet on some distros there is. I ran into it a few times: "Print Screen" did not work, so I googled:
You want to do a screenshot? No problem, just install this via terminal, or this, or type in those commands and there you go..
WTF? I just want a screenshot? How is this not standard?
Now this seems to be mostly solved; on XFCE it even asks me what to do with the screenshot I just took after I hit Print Screen (save, view, ...). (Oh, and in general, I really love XFCE.)
"Not that this helps users with existing hardware, but
* definitely always google before you buy (model name + "linux" and read the first page or two of results)"
This is not for newbs either. Newbs do not know the difference between a GPU and a CPU.
And they certainly do not order single components to put together their PC.
Newbs need a company that does that for them: compose a PC/laptop from components that are supported and work well together, which is what Purism does.
But suddenly we are not on the mass market anymore ... and we see the price difference.
So the problem remains complicated, with no easy solution.
Sorry, this is impossible to do once you tried HiDPI. The difference is overwhelming; I consider non-HiDPI screens an obsolete technology like CRTs.
I do use Linux desktops; thankfully, the HiDPI support is much better these days than it was even 2 years ago. Both Gnome and KDE work relatively fine.
Glad to hear support is getting better! I guess I'd be most worried about non-GTK/QT apps
The only thing that ever crashes is the web browser. I keep my computers no less than five years and they run just as fast as new, usually physically failing rather than becoming computationally incapable due to anti-virus and bloat slowing a system down.
The main thing a non-tech savvy user needs to worry about when considering Linux is to generally understand that hardware support lags a bit. So they should do a few web searches on the make/model of hardware they want to use + 'Linux support' before they dive in. If they don't see page after page of glowing success stories, they have their answer and should steer clear of that configuration for the time being. If they do see lots of success stories, read a few of them to see if their eyes glaze over at what is written or if it seems pretty straight-forward and they can follow what's being said. No tech savvy required.
Many, many years ago a decision regarding Windows' software management was made, and ever since, updates take sodding ages, sometimes require multiple reboots, and are generally unpleasant. One day that will be fixed - it is not normal.
If you want someone to select the hardware for Linux compatibility, consider buying hardware that comes with Linux.
So, the crap that sucked in 1998 is the same stuff that sucks today. Inconsistent clipboards; graphics driver/X11 support; multi-monitor support and debugging/positioning issues; poorly documented or improperly configured out-of-the-box network management, firewall, etc. tools; boot loader failures (and, more significantly, recovery); inconsistencies between Qt, GNOME, and KDE apps; graphics subsystem freezes; PulseAudio or whatever sound system of the month suddenly failing one day; DVD playback; mdadm failures; drive partitioning and resizing difficulties; filesystem corruption (sure, it's disk-based, but Windows is less prone to it than the older Linux filesystem types; ZFS, XFS, etc. are nicer).
If you want to compile an application, run a server, etc., *nix beats out Windows any day. If you want to be able to install this great new Linux thing you heard about on your existing computer, surf the web, manage your photo collection, hook up your scanner to copy in those old pictures of your kids, set up your Nvidia card and play some games on Steam, or find and install the latest or a specific version of an app without hitting the command line and typing things out like madison, then Linux is not the desktop for you.
Edit: and don't get me started on high-end Xeon and Intel chip support/SpeedStep handling, or version upgrades running successfully.
This could be a long rant, so I'll keep it short... but someday I'm just going to rip the concept of users out of Linux and see what it looks like. Oh no, you say, malware will get you! Unlikely. Malware running as my user can fuck over my life just as easily as malware running as root. So why even pretend that that's a good isolation model? It doesn't prevent any attacks.
(As for how Linux in 2019 is doing... I recently switched back to Ubuntu for a desktop. Whenever I lock the screen and have DPMS enabled, it forgets that I have two monitors and that I want 200% DPI scaling when it wakes back up. What? Back in my day you had to hard-code the resolution and monitor configuration in the X11R6 config and there was no way to change it without restarting the X server. May I please have those days back? At least once it started working, it kept working.)
Mobile OSs got this right: on a personal device, the permissions model should be applied to the applications.
True for home users; not necessarily true for corporate users-- where computers are IT-managed (i.e. "don't let end users fuck them up") and may be shared (which is highly situational-- the degree to which computers are shared varies highly from company to company, or even deployment to deployment).
Heck, it's not even unheard of to end up with multiple "simultaneous" users on a single-seat desktop machine-- every major OS these days supports some form of fast user switching, which will leave one user's programs running while another user's physically sitting at the machine.
We managed to share home desktop computers in the 90s without significant problems, even though the OSs we used didn't support multiple user accounts at all. And there's no reason you need user accounts to accomplish what you're describing. You can still have profiles (preferences, application configs, etc), and you can encrypt them with a passphrase if you have any reason not to trust others using the same device.
> Heck, it's not even unheard of to end up with multiple "simultaneous" users on a single-seat desktop machine
A vanishingly small use case inside an already vanishingly small use case.
I believe there is now an option (or maybe it's the default) in Windows to run IE under a hypervisor, to totally isolate it from the local machine. This is moving in the direction of providing something useful.
Though to be fair protecting the OS doesn't make much sense to me. I guess it's nice to guarantee that your computer will boot no matter what you do to it, but again, that is not the problem people are actually facing.
The corporate threat model involves things like protecting people from getting an email that says "click this OAuth button to give this malware access to your email". None of the critical software is running on a user's workstation, so whatever is going on there doesn't matter.
I cannot agree. If I want to run another browser instance, I cannot do that on a mobile system. Maybe it would be possible with some support from the browser's developers, but with a multiuser system I need no support from developers: I can create a new user and run another browser instance that will believe it is the only one running.
It is not just browsers. I can easily experiment with program configs, for example. Something doesn't work, and I want to check whether it's due to some application configs or its installed plugins. All I need is to create one more user and start a program instance as that user.
Android relies on SELinux, and SELinux allows much more than the old user system, but SELinux is a much bigger headache when you try to use it in a way Google did not intend. So in reality SELinux on Android allows me to do nothing; I cannot even run an app that requires access to a contact list without allowing it to access the contact list. It would be nice to create one more user on Android with an empty contact list and run that program as that user. Moreover, I'd like to create a user with a faked contact list, faked browser history, all the private information faked, and run most apps as that user.
The multiuser model is a settled, simple, and transparent model that just works. You have the abstraction of a user and the abstraction of file access rights, and that is all you need.
Namespaces have no such simple model. How can I run a program as another user but give it some special rights to access this git repo in my home directory? Do I need to write a special C program for that? Or maybe existing tools can already be configured with some obscure XML file? I do not know, and maybe I'm mistaken, but knowing the general laws of Linux software development, I guess that the best software I can find for it is a complex, overengineered corporate tool with bells and whistles, and the easiest way to use namespaces in my case is to write a C program. (If I'm mistaken, please correct me, at least by stating my mistake aloud, or better yet, point me to some docs.)
And here we come to the real issue. To write a good C program for my tasks, I need to start thinking as a software designer, to invent a new, simple model of process separation that lets me solve 90% of my tasks with ease and the rest with some headaches, but where everything is possible. The only way to do it in a week is to refuse to think and replicate the multiuser model on top of namespaces. But I need no replica of the multiuser model, because I have an existing one. What is the point of discarding the multiuser model just to move to another implementation of the multiuser model?
The only good thing I see is no longer needing a root shell to create or delete users and groups. It would be nice, but I'm not ready to spend a week writing a C program, and then an unknown amount of time maintaining it, just to stop using su/sudo for such tasks.
So, I can agree that the multiuser model is bad for a Linux desktop. But we have no real alternative. And the mobile OS approach is the worst. It reminds me of DOS, where you work with one process at a time, where you cannot run two copies of a program, where everything is done in a single global namespace, and any process can do anything it wants. The only choice you have is "to run the program or not".
In Linux terminology, launch the program in a new mount namespace with a rw-bind mount to your home directory. You can do this with firejail, bubblewrap, or minijail easily and without a config file.
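As a concrete sketch of that with bubblewrap (the flags below are real bwrap options; the repo path is just an example):

```shell
# Read-only view of the root filesystem, an empty tmpfs over $HOME,
# and only the one repo bind-mounted read-write into the sandbox.
bwrap --ro-bind / / \
      --tmpfs "$HOME" \
      --bind "$HOME/src/myrepo" "$HOME/src/myrepo" \
      --unshare-all \
      git -C "$HOME/src/myrepo" status
```

The sandboxed git process sees an otherwise empty home directory, which answers the "special rights to access this one git repo" question without inventing a new user.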
> "We managed to share home desktop computers in the 90s without significant problems, even though the OSs we used didn't support multiple user accounts at all."
That's not true. The Windows 9x series had user accounts (with no security between them). This was beneficial to users because computers were expensive (and still are to most people), so personal computers very often weren't personal. Having separate accounts, even without security, allowed individual users to configure the system to their personal preference and helped with file organization.
Your hypothesis that XP somehow forced the concept of separate accounts on regular home users because NT was used on servers is just bizarre. People who wanted separate accounts were doing it on 98, and people who didn't simply ignored it and all shared one account. The UX for different family members sharing a single computer by signing into it existed before the NT kernel was in use around the home. The implementation changed when Windows went to NT, but the UX did not. And given that the UX of separate accounts was already appreciated by users, the more robust implementation made possible by NT was a no brainer.
>Mobile OSs got this right: on a personal device,
When it comes to a PC, "personal" is a misnomer. Failure to understand that is the root of your confusion. You are presumably at a place in life where your computer is your computer, not shared with others, like your cell phone. But when it comes to PCs, that perspective is a privileged one. It's evidently not important to you that numerous people be able to use your computer, but it is important to others. The UX of the device that lives in your pocket needs to be different from the UX of a device that sits in the middle of your living room for the whole family to use, like a television.
Alright, maybe that's a regional thing or something. I knew of no one who did that.
Regardless, even you admit that it was not about security, so why have user accounts? Simply being able to change the profile is sufficient.
> The UX of the device that lives in your pocket needs to be different from the UX of a device that sits in the middle of your living room for the whole family to use, like a television.
I still contend that this is an incredibly tiny use-case today, precisely because mobile devices have largely supplanted the role the 'family computer' used to serve. More importantly, that use case can be served without user accounts.
Huh? Those 134 vulnerabilities were found because people can see the code. If it were closed source, they would probably still be there today.
Linux is administered over ssh, therefore administrators don't know how to check, and therefore they don't bother to update systems because "they're afraid that something will break." C'mon.
But could designing good desktops need more than just good code?
Good kernels successfully run code.
Good desktops successfully help users. I guess different goals require different designs?
Edit: to clarify, I didn't mean desktops don't require well-designed software. I just had in mind that a desktop also has to take human psychology and human limitations into account.
Linux is powering servers and high-performance computing because it is good at these things: mostly static hardware configuration set up once during system installation, high performance, modularity, and the ability to inspect a deeply running system if you are an expert. It ticks all the boxes for these specific environments.
On the desktop, not so much. For example, the concept of device files is hindering use cases that should "just work". When I plug in USB headphones, a new audio device is created. Fine. But I need to enter the device file name or ALSA device string into half a dozen programs to use it. All I would want is to have the audio rerouted automatically. PulseAudio was touted as the solution to that problem, but at what cost? We're now literally stacking audio systems on top of audio systems and sacrificing to arcane gods to have it work.
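To be fair, once that PulseAudio layer is in place the rerouting no longer touches device strings at all (pactl is PulseAudio's CLI; the sink name and stream index below are examples that vary per machine):

```shell
# List output devices, pick the USB headset, make it the default,
# and move an already-playing stream over to it.
pactl list short sinks
pactl set-default-sink alsa_output.usb-headset    # example sink name
pactl list short sink-inputs                      # find the stream index
pactl move-sink-input 42 alsa_output.usb-headset  # 42 = example index
```

Whether that convenience justifies the extra daemon in the stack is exactly the trade-off being complained about.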
When I plug in a USB drive, I now have to look up its device file name in order to mount it manually. The software stack required to automount it from a desktop environment is atrociously complex, because it requires root privileges to mount a device not listed in /etc/fstab with a user flag. And because any number of drives can be connected in any possible order, no entries in fstab can be made.
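The current answer to this on most desktops is the udisks2 daemon, which is precisely the complex stack described above; from the command line it at least looks simple (udisksctl ships with udisks2; the device name is an example):

```shell
# Mounts under /run/media/$USER/<label> without root or an fstab entry;
# polkit does the privilege check behind the scenes.
udisksctl mount -b /dev/sdb1
```

So the "just works" path exists, but only by piling a D-Bus daemon and a policy framework on top of the device-file model.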
This clash of UNIX-like concepts and modern user expectations is what is holding Linux back. The underpinnings are not bad. They were just designed for a different task.
So, yes, you can build a user friendly OS. Yes, it can have a clean design. But it won't be called Linux anymore.
It did not stop OSX from achieving exactly what you're talking about, and if you look closely, its core is very explicit about its Unix underpinnings.
In my opinion it's not a "Unix" problem, but a bazaar/scale problem. In the bazaar world, ideally multiple competing solutions would pop up, ideas would be merged and which one or a few would end up top. The problem is that to implement even a single a good desktop system - and I mean top to bottom, not just a DE - would require a staggering amount of resources under a single unified goal and vision. The Linux desktop market is simply not big enough to support even one, never mind multiple competing systems. In the server space, Linux is absolutely massive, and doesn't have this problem.
I disagree with the phrasing. I'd say that good desktops enable users. One of my big peeves about Linux Desktop culture is that they see users as beneath them. They want to "help" users by wrapping them in straight jackets to keep them from hurting themselves and shining a laser pointer on the wall to entertain them. Have a problem with the Linux Desktop? Well, "normal users" don't do whatever it is you're trying to do, and you're not a C graybeard or you'd fix it yourself, so you don't exist according to their model of the universe.
A good desktop needs to run code successfully too; otherwise every small bug can make a big, annoying glitch.
I guess it is more a question of optimizing.
The hardcore Linux user mainly uses the terminal and a text editor and kind of despises GUIs. Linux seems to be optimized for them, as they are the most active ones using it.
And this use case works perfectly.
GUIs are mostly a thing that was added because of the "newbs" but do not get used much by the core - so they suck, since the core group are the ones who know how to fix things.
This was the situation when I started to explore the linux world and some things changed, but not much.
But Linux's main problems are hardware issues, partly because of Linux's enforced open-source nature, which wants drivers to be open source and included in the kernel. Traditional industry does not like that approach. And given the small market share of the Linux desktop ... they don't really have to.
Like if you want to install new software, you usually don't get an .exe or .dmg. If you are lucky, the developer or some fans took the time to package it. Then you can do ‘apt-get’, ‘yum’, or ‘pacman’. However, packages go stale and sometimes don't match the original author's intent. You can also build from source, but it takes time and you have to know a bit of CLI. It never felt like true freedom to me. More like whatever the community feels makes sense under whatever distribution's weird dictatorship. Just a feeling, and I still love and support Linux.
I can understand trying to resolve dependencies can be a royal pain, especially trying to find the distro's specific naming conventions, like libpq-dev vs postgresql-libs.
As far as solutions goes, this problem is solved. They just need to be better known by software distributors.
AppImage can do that too, since it is a lot less over-engineered, but sadly very few developers use AppImage, and even distributions like Nitriux that claim to support AppImages don't display icons for them. This could be trivially solved with a standard for embedding icons in ELF, but the unix world hates the very concept of a program that isn't spread all over the file hierarchy and isn't full of hard-coded paths, so it'll never happen.
But, it works, especially for the use case of the "I need the latest and greatest versions of two packages".
> AppImage can do that too, since it is a lot less over-engineered, but sadly very few developers use AppImage, and even distributions like Nitriux that claim to support AppImages don't display icons for them.
So, use one of the other 20 distros where AppImages just work, perfectly, out-the-box.
The OS should give you only some essential end user applications (a bare-bones text editor, a terminal, a browser) and then the user should get their specific use applications (DAW, 3D modelling software, game engine, etc) from the application developer, not the OS maker.
It's just less convenient.
How did you come to this conclusion? Did you request the package be included in the distro you use? If so, and it wasn't provided, provide the bug report/feature request link ...
And it isn't insecure. If I trust the developer and get the software from them, it's just as good as trusting a repo maintained by random internets who have been known to not only not keep software in the repo up to date, but actually introduce vulnerabilities that weren't there before!
I think you just haven't used it enough to understand the advantages. If you don't need the lastest-and-greatest released last week versions, the package system is much more efficient that individually downloading hundreds of packages.
While Mac OS X has homebrew, it is still deficient in my opinion compared to most distros (because casks don't get upgraded by default).
> Like if you want to install a new software, you usually don’t get .exe or .dmg.
Why would I want either? Both have lots of issues. For example, by default:
* No auto-update
* Duplication of libraries and other files I already have
* Spotty updates (e.g. who can be sure whether all libraries used have been patched in the latest version you have?)
> If you are lucky, the developer or some fans took the time to package it.
This is the case for > 99% of the software I use, even obscure stuff. For the other < 1%, I package it for the distro I use, and submit it so it is available by default in future releases.
These days, many popular packages also provide .appimage files (similar to .app files on Mac OS X) or publish Flatpak's (including Slack, Spotify, VS Code, Skype etc. etc.), and these can be used on any recent distro.
> Then you can do ‘apt-get’ ‘yum’ or ‘pacman’. However, packages got stalled and sometimes don’t match the original author intent.
Sometimes the original author doesn't know best ... in most cases packagers upstream their changes or discuss them with upstream.
> You can also build from sources, but it takes time and you have to know a bit of CLI. It never felt true freedom to me. But more like whatever the community feels make sense for whatever distribution weird dictatorship. Just a feeling and I still love and support Linux.
It seems like you never exercised your ability to vote, and think that everyone else is dictating to you ...
If I wanted to not get any work done I would be using Windows instead.
However, one strength they all share is: "you usually don't get .exe or .dmg"! Absolutely! Apps are integrated and not simply add-ons as they are in Windows or Apple land. When I want to install say libreoffice or wireshark I simply ask the system to install them. I absolutely do not browse the internet and download something, extract it and run some "installer". When I update my system, all apps and the OS are updated in one go.
My system is curated for me, end to end, to a greater or lesser extent. When I update, all my system is updated - OS, apps and all.
I don't think yours is (whatever it is).
Can you please edit out personal swipes from your comments here? This one would be fine without that first sentence.
c.f. apt-get upgrade or similar, which takes a few seconds.
Except when the package isn't included, or it isn't the version you needed, because then you have to spend 40 minutes trying to install all the dependencies and building the package from source, instead of the 3 minutes it would have taken to install an .exe on Windows.
No, you take 2 minutes to install it using flatpak, or download a .appimage file.
And if neither of those are available, you spend that 40 minutes packaging the software, and submitting it to the distro you use for inclusion.
(If it was already packaged, but not new enough, that's a ~5 min job to do the update for your distro)
Ignoring feature enhancements and bug fixes for a moment, do you really think it improbable that there are security issues in a piece of software whose entire job is to sit on the network and record everything that it sees and then translate and interpret it?
Except for when the distro "gods" don't bother updating a package for years, so you always get the "stable" 5-year old version.
Yes, you can install software using tarballs, but it's not usable for 90% of users, and not because of the distribution model, but because of the lack of standardization in a good, easy to use application-installing API.
Tarball installs are pretty much the state of Windows app installations. You have to find the bloody things, download them each time, and hope you have found the right one and not a trojaned one. Each one needs its own update routine and will not be updated when the rest of the system is updated.
The Windows and Apple software distribution model is archaic compared to all Linux/BSD etc distros.
Except that they're hard to install for a novice user.
>You have to find the bloody things, download them each time and hope you have found the right one and not a trojaned one. Each one needs its own update routine and will not be updated when the rest of the system is updated.
None of those are problems for me, the actual problems are: installing all the dependencies and then executing the make commands. Both of those could be solved with a well designed API.
>The Windows and Apple software distribution model is archaic compared to all Linux/BSD etc distros.
The Windows and Apple software distribution model works for all my use cases; the Linux model doesn't (e.g. if I want many versions of the same package, or a package that isn't included in the repos, etc.).
Also, from a philosophical and aesthetic point of view, the Windows & Mac distribution model is better: you get your OS from the OS developer, and you get your specific-use applications from the developer of said application. You are not dependent on a single entity that supposedly knows what you need better than you do.
This right here is what Linux Desktop doesn't seem to understand. The whole culture is ingrained with the attitude that they do, in fact, know what you need better than you.
You don't. Yes, but if you don't take advantage of the freedom, that's not freedom's fault.
> Yes, you can install software using tarballs, but it's not usable for 90% of users, and not because of the distribution model, but because of the lack of standardization in a good, easy to use application-installing API.
See Flatpak. Go look at what software is available at https://flathub.org/apps , but both GNOME and KDE have application managers that can install from Flatpak repos (configured to use Flathub by default), and possibly the distro's native package manager as well (via PackageKit).
Or you can also use AppImage files.
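From the command line, the Flatpak route is roughly this (real flatpak subcommands; the app ID is just an example from Flathub):

```shell
# One-time: add the Flathub remote, then install and run an app.
flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
flatpak install flathub org.gimp.GIMP
flatpak run org.gimp.GIMP

# All installed Flatpaks update in one go, independently of the distro:
flatpak update
```

This sidesteps the "distro gods" problem upthread: the app comes from whoever publishes it on Flathub, at whatever version they publish, on any recent distro.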
I think a lot of this is learned, or conditioned, behavior ... there are a great many UNIX packages that could very easily (and very nicely and conveniently) exist as statically linked, single-file executables.
This is how a lot of software distribution worked in the 90s - for say, some Solaris tool or whatever. Granted, the distribution mechanism (some .edu FTP site somewhere) was totally insecure, but the packaging mechanism was great.