Linux Problems on the Desktop (2018) (altervista.org)
287 points by iron0013 27 days ago | 391 comments



I have been a daily linux desktop user since 1997 and I will freely admit that part of the enjoyment for me was getting everything working and setting up my perfect desktop environment.

These days, as many others have pointed out, choose the right hardware and everything just works.

At my company, we are currently in the middle of upgrading aging Windows 10 desktops in the Customer Service department to Ubuntu LTS. So far the feedback is universally positive from the CS agents. Ubuntu runs faster on the existing hardware and that's about all they notice. Chrome is still chrome and that's what they use for all the CS apps, including voip calling.


What I get out of this is that while each of these issues may be individually dismissed, they together make for a very "non-premium" experience.

I use Linux myself, but when people I know try it out, they immediately leave after encountering problems with audio, DPI scaling, etc.


I have found it's super dependent on what hardware you get, especially on laptops. I always pick hardware that works perfectly with Linux and I am left with a very premium experience, but when using Linux on bad hardware like MacBooks with Broadcom wifi, everything just doesn't work right.


Yep. Exactly this. If you get a laptop that a GNOME developer uses, chances are you’ll run into maybe two issues a year. But try to get GNOME to run on two 4K displays and an RTX.

But here’s the thing. Linux has won. Desktop is a dying market. Linux is literally the most used operating system on phone and in the data center, which is where it counts.

The DE has gotten much better transitively from all the work that’s gone into those other use cases. But we still have a gamer market that is incredibly proprietary holding up the middle finger to those of us who think proprietary drivers and codecs are evil relics of a less open world.

If you really want to see Linux on the DE, boycott NVIDIA, or write them a letter.


Desktop is a dying market.

Actually, tablets are a dying market.[1] Laptops are a declining market. Fewer desktops are being sold, but they're lasting longer because there's no reason to replace them.

[1] https://www.statista.com/statistics/272595/global-shipments-...


This seems right. Apart from store PoS systems I haven't seen a tablet in years and my desktop is 5 years old without any signs of needing replacement. I think it might last another 5 years.


So, as we move forward, all these desktops will end up having stable and mature support. Drivers are often lacking for bleeding edge components, but stuff that was released 3 or 5 years ago tends to be very solid unless it's something obscure nobody has ever seen.


Microsoft doesn't want you to have stable and mature support. They want you using the latest everything and paying for it each month. Microsoft pushed hard to force users to upgrade to Windows 10.[1] Even today, many enterprise users refuse to convert. They don't want Microsoft's new rent, rather than own, software. They don't want Microsoft looking into their machines. They don't want to store corporate data in Microsoft's cloud. But they will convert, like good sheep.

[1] https://www.forbes.com/sites/tonybradley/2016/01/22/resistan...


Come to Linux. We have cake.

And your 3 year old hardware probably will work a lot better than it does under Windows ;-)


I'm on Linux. Xorg crashes on every login, and the crash reporter crashes complaining the dump file is too short.


> Linux is literally the most used operating system on phone

What makes this such a victory? The fact that Android uses a Linux kernel is totally meaningless to the overwhelming majority of Android users. The bootloaders on their phones are almost always locked down and even if that weren't the case, if their kernels were swapped out with something BSD derived how many people would actually notice?

What is the objective with promoting Linux? Adoption for the sake of adoption? Getting the Linux kernel onto the largest number of CPUs, just to make that number go higher and higher? Is pursuit of some number the real goal, or is the goal actually something to do with user empowerment and liberation/freedom? A billion tivoized smartphones with linux kernels certainly optimizes for that number of CPUs, but from a user liberation/freedom standpoint Linux/Android is virtually moot.

That Android gives the user more freedom than iOS is completely incidental to what kernel it uses. Android could be a platform open to third party app stores and sideloading that ran on a BSD kernel or even something completely different. To the average Android user, their Android phone is actually less open than Microsoft Windows, which also allows third party 'app stores' (e.g. Steam) and 'sideloading' (so normalized on Windows that it doesn't even use a special term like that.) Whether a system will be open to third party developers without any gatekeeper is a matter of company politics, not something that's emergent from what kernel is chosen.

So yeah, pop open the champagne. Linux won and I'm going to need a lot of champagne to enjoy this hollow victory.


People saying "Linux won" because of Android would never say "Java won" which is every bit as true.


A large majority of Java developers wouldn't say "Java won" regarding Android, because it is in fact Google's J++, not compatible with a large number of random packages from Maven Central.


>Google's J++, not compatible with a large number of random packages from Maven Central.

Google's implementation is derived from Apache Harmony. Is the same true for Apache Harmony?


Google's implementation was partially derived from Apache Harmony, nowadays it is partially derived from OpenJDK.

In neither case is it 100% compatible with the full standard Java API and JVM bytecode.

Partially is the keyword here.


>But here’s the thing. Linux has won. Desktop is a dying market. Linux is literally the most used operating system on phone and in the data center, which is where it counts.

It "counts" for whom? It sure doesn't count for me as an end user who wants to run some bloody programs on my laptop.

Also, whether it's Linux or anything else on the phone is totally irrelevant to the end users. The actual layers the users see are the Google stuff and the Google Java etc APIs. By porting these they could -- and will -- move Android to Fuchsia tomorrow and nobody will even notice.

When we wanted Linux to "win" in the 90s it was the whole desktop Linux (or, if that was there at the time, a mobile phone Linux), complete with desktop environment, userland, etc.

Not as a backend that might or might not be Linux, for all the users care.


And even as a backend it is irrelevant for the large majority of cloud/serverless users using managed languages.

Amazon could rebuild AWS on top of hypervisors, BSD or what have you and none of those users would notice, unless they would be doing some native FFI.


> Amazon could rebuild AWS on top of hypervisors

Um, what?

> BSD or what have you and none of those users would notice

FaaS and CaaS don't negate that there is a kernel API you're interfacing with. It absolutely makes a difference. Claiming that kernels are completely interchangeable is extremely naive, but then again, your first statement about hypervisors is so absolutely, confusingly incorrect that it's difficult to take what you say seriously.


Or, you know, it was a slip or a bad syntax, and you took the least charitable interpretation possible ("so absolutely, confusingly incorrect") and then took that to further extremes "that it's difficult to take what you say seriously".

>FaaS and CaaS doesn't negate that there is a kernel API you're interfacing with

Only inasmuch as it can't itself be emulated...


If you prefer the full description: Type 1 hypervisors with unikernels.


Of course, how could I have not inferred unikernel from hypervisor, how silly of me.


It is, isn't it?


> If you really want to see Linux on the DE, boycott NVIDIA, or write them a letter.

I was watching a video on YouTube (can't find it now) about gaming on Linux, and it suggested updating the PPA and installing the latest drivers. (This was after I refreshed my desktop to Ubuntu Budgie.)

So I did that and installed the latest Nvidia drivers. Tried out some Steam games, and went to bed. The next day I couldn't boot into the desktop, got stuck in a loop, and couldn't remove the drivers or get into the desktop at all. It completely bunged my computer. Apparently a lot of people had similar issues.


That's just Nvidia. I had the same in Windows two years ago. Windows update automatically installed the latest Nvidia drivers, and after a reboot the screen was black after logging in. I assumed something got messed up (because Windows) so reinstalled Windows. It worked fine for a couple of hours, then I rebooted and had the same thing. Apparently that version of the driver was buggy with my hardware (GTX 1070), so I had to go into safe mode, downgrade and stop Windows automatically upgrading it.


This is literally my only (big) complaint about Linux. Video drivers just SUCK.

I have NEVER had a plug and play solution for display from my laptop. I do expect to be able to plug in an HDMI cable and be able to give a presentation. And I expect to be able to do this with an nvidia card. But they have always failed me and I don't understand why this problem still exists (I would actually be interested if someone knows).


About a month ago, I bought an RX 580 [1] and it worked plug-and-play (zero setup) on Fedora and NixOS. On the Windows side of things, I needed to go to the AMD website and download drivers in order to get out of reduced resolution.

[1] This one: https://newegg.com/products/N82E16814150794


> This is literally my only (big) complaint about Linux. Video drivers just SUCK.

Not really. The problem is NVIDIA doesn't give a crap about good user experience. The AMDGPU driver is pretty good and open-source to boot.


It's an Nvidia problem. Nvidia's proprietary drivers suck and they're nasty to anybody who tries to develop open source drivers for their cards. AMD and Intel GPUs have been flawless plug-and-play for years now. AMD plays nice with linux developers and you should give them, not Nvidia, your money.


Exactly. If a wireless card didn’t work in Linux you’d hardly say Linux is the problem. You’d say don’t use that card.

You wouldn’t be super happy if you just bought that PC, but you’d understand, though. Another example: if you buy a car and it doesn’t have Android or Apple integration, you’d be like “damn, I should have researched it”.


I was surprised when I saw the newest Vega firmwares show up in my /lib/firmware/amdgpu/ with a kernel update only 6 days after the release of the last AMD GPU -- and on Manjaro, where updates are supposedly delayed a little. AMD touted "day one Linux support", which I don't know if they fulfilled, but they're certainly more committed to Linux than Nvidia.

Unfortunately, Nvidia's so dominant (and rightly-so from a price-per-performance unit standpoint).

Also, though: would someone on Ubuntu or a point-release distro (which is the majority of Linux end users) ever have these new firmwares until they reinstalled to the next OS version? Since hardware driver support seems so tied to the kernel version, I would think not, unless they manually updated the kernel (which is not only scary on a distro meant to run with a specific kernel, but also something they have to go out of their way to do). That also means they'd miss out on DE improvements and other hardware/protocol support by as much as a year or more, depending on their upgrade cadence. TL;DR: rolling release provides the best Windows-like experience, and an Arch-based distro that emphasizes usability and mitigation of updates that break the system is what best fills that niche, so Manjaro master race.
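For anyone curious what their own install actually ships: the firmware blobs live under /lib/firmware/amdgpu/ (as mentioned above) and the running kernel decides which of them get used. A rough way to eyeball both, assuming the conventional /lib/firmware layout:

    #!/usr/bin/env python3
    # List the amdgpu firmware blobs installed on this machine and the
    # kernel currently running. Assumes the conventional /lib/firmware
    # layout; adjust the path if your distro differs.
    import os
    import platform

    fw_dir = "/lib/firmware/amdgpu"

    print("Running kernel:", platform.release())

    if os.path.isdir(fw_dir):
        blobs = sorted(os.listdir(fw_dir))
        print(len(blobs), "firmware files in", fw_dir)
        # Example filter: show anything Vega-related.
        for name in blobs:
            if "vega" in name.lower():
                print("  ", name)
    else:
        print("No", fw_dir, "directory found")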


>Unfortunately, Nvidia's so dominant (and rightly-so from a price-per-performance unit standpoint).

I'm not sure about the price-per-performance. I just bought a new system with a Vega 64, and if you take into consideration I could use FreeSync instead of paying an extra $200 for GSync in a monitor, it works out better.


Pascal and newer Nvidia cards support Freesync as of driver version 417.71.


Oh yeah, they announced that a few weeks after my new PC arrived.


> price-per-performance unit standpoint

On Windows sure (I assume), but that goes out the window when you use Linux. On Linux with an Nvidia card you're paying for more downtime/breakage, much worse performance with the FOSS drivers, incompatibility with wayland, etc.


> price-per-performance unit standpoint

You know... If the thing doesn't work, price doesn't matter that much. I prefer something more reliable that's not as fast (I've been using Intel GPUs for many years now)


So that being said, I may not be able to use the HDMI port, but CUDA works great.


You can install a beefy "GPU" to do number crunching and still use the built-in Intel graphics ;-)


The problem isn't the display, it is an __external__ display.


> Nvidia's so dominant

And rightly so, like you said. That's why I can't take the other user's advice and just shell out money to AMD.

And while I do love Manjaro, I still have this problem. In fact I get no detection of my HDMI port. When I had Ubuntu I could (sometimes) get it to display if I restarted the computer with the HDMI plugged in.

I'm also not sure why the Linux devs don't take this more seriously (or at least it doesn't seem to me that they do). Linux is in such a good state now that it is easier to convert people. But this problem prevents A LOT of people from switching, and rightly so.

As for "NVIDIA sucks and doesn't play nice": there needs to be a better argument than that. Like, why? There are so many people developing on Linux with their cards. They are dominant. Most GPU programmers use CUDA and a significant number of ML researchers are using Linux boxes. It doesn't make sense (to me) to just say f you to all those developers. Nvidia doesn't have a motive to push people to Windows.


I was really hoping the Radeon 7 was going to be better than it was. Because I want to move to AMD to avoid the Nvidia driver issues on linux.


I'm using an RX 560 on Linux[¤], I added a PPA to get the newest stable driver and Mesa, and overall it works fine, with a couple of minor annoyances that may be fixed as I'm writing this.

One thing that doesn't seem to work is hardware acceleration for Youtube videos in Firefox. Playback in a window is fine, fullscreen isn't. Another is that dual-link DVI did not work, so I had to change to using DisplayPort instead, though that issue may have been fixed in the current driver, I haven't checked. For me that's fine, I wanted to switch to DisplayPort anyway, to use the audio output on my monitor.

Those are incredibly minor issues compared to the woes of using the proprietary Nvidia driver. Not to mention the sordid history of graphics drivers on Windows.

[¤] KDE Neon, which is based on Ubuntu LTS.


TFA mentions video issues as Nvidia/AMD's fault. To a reader, it might seem they are equally bad.


While it says that, it doesn’t actually link to anything about the modern amdgpu driver - developed officially by AMD and now part of the vanilla kernel - and the majority of the complaints are regarding nvidia.


> Linux is literally the most used operating system on phone...

So what happens if/when Google switches to Fuchsia on their phones?


At this stage, I’d say I’ll believe it when I see it. I don’t doubt that Google has the capacity, but replacing a kernel that works is fraught with peril, and it could very well be that Fuchsia remains yet another Google research project.


I think the point is just that the Linux kernel is irrelevant to the Android operating environment. It could run on anything Google wants it to and may do so in the future.


From what I've heard, Fuchsia is a "senior engineer retention project".


Then check the Android repo; they are actively adding Fuchsia support.


What does it mean to add Fuchsia support to the Android repo?


It means enabling the Android userspace to run on top of Zircon instead of Linux.


They can't. I mean, they can on their phones. But that's a tiny percentage of the total Android HW. All the big SoC vendors would have to get on board and rewrite the millions of lines of their kernel code on top of Fuchsia. Android is also not just phones, but also tablets, TV boxes, etc.

This would just cause even more fragmentation in the Android ecosystem.


> All the big SoC vendors would have to get on board and rewrite the millions of lines of their kernel code on top of Fuchsia.

Apple's PPC to x86 transition was successful, as was their Carbon to Cocoa transition. If you want device driver specific examples, Microsoft's Win9x to XP transition was successful, if lengthy, as was its transition to WDDM for video drivers.

SoC and other hardware vendors go where the money is; they may grumble and drag their feet but they will make the transition if it's required of them.


Google Play Services access contract.


Threats don't magically make the port a cheap or reasonable proposition to the SoC/smartphone vendors.


It is a business decision, take or leave it.

I doubt very much that SoC/smartphone vendors would bother to contribute to microG instead.


Laptops and 2-in-1 devices are the future desktop, and Linux has hardly won there.

Android might run on the Linux kernel, but that is a minor detail to userspace, and it might be on the roadmap to be replaced by Zircon.

So no, it hasn't won anything besides the server room.


> Laptops and 2-in-1 devices are the future desktop, and Linux has hardly won there.

That is more due to how OEMs lock out non-windows OSes from running on their hardware. If users were given a choice, the "windows tax" alone would be enough to convince people to give linux a try, particularly in the low-end segment.


iOS and Android are non-windows OSes.


I'm sure you are aware that:

* Android is technically linux, but typically smartphones are locked in.

* All apple products are notoriously locked in.


I'm sure you are aware that:

- Linux is an implementation detail in Android; it isn't exposed to userspace (it's not part of the official Java/NDK APIs), Treble made it even less upstream-like, and who knows, it might even be replaced by Fuchsia's Zircon

- Apple products being locked in doesn't have anything to do with laptops and 2-in-1 devices being the future desktops


As someone who owns a 2-in-1 (Lenovo Yoga 900 series), it works almost perfectly out-of-the-box.

Screen rotation, pen support, virtual keyboard, they're all there and get the job done. The only thing that doesn't happen out of the box is that my keyboard remains activated when I switch to tablet mode.

I'm having way more issues with proper scaling on my 4K screen than I do with the 2-in-1 support.


> Linux is literally the most used operating system on phone and in the data center, which is where it counts.

Since phones are completely locked down, and Linux is the symbol of freedom in software, I'm not sure about that.

Besides, companies still need desktops (or laptops) to work. Nobody is going to do accounting on Android or iOS. And I'm certainly not going to dev on anything other than Ubuntu.


> and in the data center, which is where it counts.

Do you have any stats on this? As far as I know Windows and Linux are very competitive on the server and depending on the analysis Windows comes ahead.


At least on Azure, Linux is leading as the top OS:

https://www.zdnet.com/article/linux-now-dominates-azure/

I assume this is true for most other cloud providers. But that doesn't take into account all the companies running racks of Windows servers in their basements.


From that URL, in September it was about half Linux, "but sometimes slightly over half of Azure VMs are Linux."

I'm not sure that is the win OP was looking for.


Keep in mind that this is Azure, the cloud provider with almost certainly the most Windows installs. They try to push all their existing corporate clients to the cloud, and from what I've seen in enterprise environments, they're very successful at that. I do not expect AWS or Google to have anywhere near Azure's numbers of Windows VMs deployed... I wouldn't be surprised if their hosted Windows VMs were a rounding error compared to MS/Azure's install base.


I have a MacBook that is malfunctioning under MacOS (it's running at 60-80% CPU usage when idle, known problem, no fix available). I tried installing Linux on it, couldn't get it working at all.

I have a Dell XPS15, specifically chosen because the XPS line is supposed to work well with Linux. Numerous problems, all related to drivers. But the main problem is that every time Windows updates it wipes out GRUB.

I figured that my problem is that every laptop I've bought has been designed for a non-Linux OS. So I've ordered a Purism laptop, which should arrive any day now. Hopefully actually buying a laptop designed and built to run Linux will provide a better experience.


I'm using a dell XPS 13 (one of the new ones) and it works wonderfully. I don't run windows on it so I don't have any issues with my bootloader and I saw advice on the internet to get one with no nvidia gpu so I got one with intel graphics.

It really does have an OS X-level premium feel. The only bit I am missing is fractional DPI scaling, which is apparently on the way, but turning on big text in accessibility mode works well enough for now.


Can you plug it into a 4K display or all the shitty projector type things?


At work most of our developers run Linux. The hardware mixture is Dell (XPS 13, XPS 15), some Thinkpads and two who use System76. Several of us use 4K displays, and I can't remember the last time anyone had trouble with a projector. Maybe the older XPS 15 might have, once upon a time. It has an Nvidia GPU and so I want to believe it has been flaky at least once, but I cannot recall a discrete incident. All the rest have the standard Intel graphics and everything's peachy.

Distros are all either Ubuntu LTS or Linux Mint with default DEs. I am pretty sure one fellow switches to a tiling WM some of the time, but I forget which.


My XPS13 is plugged into a 4K display (thunderbolt -> displayport adapter) right now. Works fine.

Can’t promise projectors will work, but I would expect a thunderbolt -> HDMI adapter will work just as well as the Displayport one does.


>But the main problem is that every time Windows updates it wipes out GRUB.

From my experience, this issue comes from trying to use MBR booting on Linux while Windows uses EFI. I've never seen Windows mess with another EFI bootloader on the ESP. This issue is made worse by programs like unetbootin being terrible at handling EFI.
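A quick way to see which camp a given install falls into: the kernel exposes /sys/firmware/efi only when the system was booted in EFI mode, so its absence means a legacy BIOS/MBR boot. A small sketch:

    #!/usr/bin/env python3
    # Quick check: was this Linux system booted via UEFI?
    # /sys/firmware/efi exists only when the kernel was booted in EFI
    # mode; if it's missing, this is a legacy BIOS/MBR boot.
    import os

    if os.path.isdir("/sys/firmware/efi"):
        print("Booted via UEFI; GRUB should sit on the ESP alongside "
              "the Windows Boot Manager entry.")
    else:
        print("Legacy BIOS/MBR boot; mixing this with an EFI Windows "
              "install on the same disk invites the GRUB-gets-wiped issue.")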


ThinkPads usually just work with Linux. I bought a second hand T470s last year, and nearly everything works without any issues under Linux. Even my TB3 dock (some Chinese unbranded device) just worked when I plugged it in.

The only thing that doesn't work 100% is the fingerprint reader; apparently you need to set up your fingerprint on Windows first.


The XPS 15 doesn't have a Linux "developer edition", but it still plays quite well with Linux (even if it's less "plug and play" than the 13 model). It comes with Nvidia Optimus, which is still not supported on Linux (thanks to Nvidia), so you may want to either turn off the dGPU or use Bumblebee. The HDMI port is wired to the Intel GPU, so it should just work. In the future, just to be sure, I advise you to check the ArchWiki: the most popular laptops have a page there reporting all the working and non-working stuff, along with any work-arounds.


I have an XPS15 (9570 i9 with Ubuntu 18.04.01) and I have hardly run into issues and certainly not serious ones.

I am swearing much less now I don't use Windows so much (even though Windows was in a VM).

If your problem is dual-booting Windows, then why would you dual boot? For games, get a gamer PC. For work, use a VM.

It seems a little harsh to blame Linux for Windows killing your boot ;-)


Oh I'm not blaming Linux - the problem is definitely manufacturers not testing their machines for Linux. I'm especially mad at Dell because the XPS line is supposedly "Linux friendly" and yet not. Last time I buy a Dell.

I'm travelling at the moment, so a desktop games machine is not an option. And I have to test stuff on Windows. But when the Purism turns up I'll relegate the XPS to games machine duty ;)


Yep, agreed. Whenever I tried installing on old hardware I had lying around, it was always very quirky.

From the moment I bought a Dell Developer Edition that was built specifically for compatibility, the experience has been fantastic.


If you don't want to have to think about it, the answer for laptops is to buy a thinkpad.


I can remember plenty of "premium" experiences with Windows, too.


Try OpenSuse, it has the best hardware support by far.


This is another issue. Have a problem? Don't worry, someone will helpfully tell you to try another distro. Have a problem with that one too? Don't worry, someone will helpfully tell you to try another distro.


Yeah, and there is no dearth of low-quality instructions on Linux.

IMHO, Ubuntu is particularly bad about this. All I can say is what I use every day with no problems.


Why would it? Hardware support boils down to new kernels, Mesa stacks, etc., and how many non-free drivers the distro wants to include. openSUSE (Leap) is not particularly cutting edge and is more restricted concerning non-free components than many other distros (e.g. why does it need Packman?).

So, no. It does not have "the best" hardware support and no distro has "by far" more hardware support than all other distros.


> Why would it? Hardware support boils down to new kernels, mesa stacks etc and how many non-free drivers the distro wants to include.

Disagree. At least traditionally package choice and configuration by distro maintainers made a huge difference, as proven by the fact that problems could be solved by just fixing a config file or adding a package from the standard repo.


Did you read the comment you were replying to, or TFA?

The issue isn't even remotely just poor hardware support.


You're wrong; a good 70% of the article is about hardware.


He's right, of course. The Linux community has been in denial about this for years.

At the kernel level and close to it, the areas that consistently give trouble are video/GPU support and audio support. GPUs are hard, but there's no excuse for the mess in audio persisting for a decade. Video/GPU support is tough, but the current situation, where you have a choice of five different NVidia drivers for the same board, all with different bugs, is not good.

As the author points out, regression failures are a big problem. The sheer bloat of Linux has made it unmaintainable. And who wants that job? Big chunks of important code are abandonware.


> The Linux community has been in denial about this for years.

That's not right. We all know support for some hardware is spotty and we all have learned to avoid that. My laptops tend to use Intel GPUs, for instance, because I want to work on them, not fix them.

I'm eyeing that new Lenovo thingie with an epaper keyboard, but I know it'll run Windows and probably never be upgraded because nobody will write the drivers to keep that thing alive past Windows 12.

> you have a choice of five different NVidia drivers for the same board, all with different bugs

Stop buying NVidia hardware. They actively sabotage Linux development. AMD is much better in that regard. Buy AMD instead (https://www.phoronix.com/scan.php?page=news_item&px=AMD-Hiri...).

> The sheer bloat of Linux has made it unmaintainable.

Nope. It's still moving forward and it's still quite reliable. All my workloads run on it (except my pets that run on FreeBSD and OpenIndiana because I get a kick out of managing different OSs).

> Big chunks of important code are abandonware.

There is a process to move obsolete codebases out of the kernel. That's why you can't use one of those half-IDE CD-ROMs that came with "multimedia kits" of the early 90's.


> Nope. It's still moving forward and it's still quite reliable. All my workloads run on it (except my pets that run on FreeBSD and OpenIndiana because I get a kick out of managing different OSs).

That's the denial there.

I'm a Linux guy. I post this from a linux distrib.

But realistically, we do have a huge amount of technical debt, and less and less incentive to work on them.

Case in point: every time we touch something to improve it, we break things for a year or two. PulseAudio? Took 4 years to be stable. Systemd? 3 years at least. NetworkManager crashed for a good 6 years, and still can't work decently with sleep mode.

We manage to provide features because the Linux kernel devs are incredibly competent. They also limited the bloat to a manageable stack on their side. But around that, it's the Wild West.


If complexity's got you down, you might have better luck with OpenBSD. Coming from Linux, you'll be amazed by how simple it can be and how much of it just works.

Though non-intel graphics are still shit on it afaik.


Linux already has usability issues because it's a niche. I'm not going to use a niche of a niche.


> Case in point: every time we touch something to improve it, we break things for one year or two. Pulse audio ? Took 4 years to be stable.

Honestly, it seems to me like what you're complaining about is the nature of FLOSS development i.e. we do it in the open and collect feedback from users rather than spending billions we don't have on focus groups.

Also, I remember early PA days and it certainly did not take 4 years to be usable, but it did take Ubuntu about that time to get it right. That's, however, a problem of holding up Ubuntu as the Linux distro, which is honestly a whole separate rant I could get into.

As for systemd and 3 years, I am honestly not sure what you're talking about. I've been on it since 2012 and it has been mostly smooth sailing since the beginning.


>Stop buying NVidia hardware. They actively sabotage Linux development. AMD is much better in that regard. Buy AMD instead.

Sadly if you work with Deep Learning buying AMD is a surefire way to make most of the work published by others unusable without serious effort.


Is the number crunching side as terrible as the visualization side?


OpenCL is very limited versus CUDA in terms of language support, tooling and libraries.


> AMD is much better in that regard. Buy AMD instead

I made that mistake... https://bugzilla.redhat.com/show_bug.cgi?id=1562530


Sorry to hear that. Is returning the machine an option?


People aren't in denial. People are by and large aware of the issues an average person experiences when using Linux, and of the resources (or lack thereof) available to reverse engineer an entire universe of unfriendly hardware vendors' work so that Joe Blow can take his $399 Walmart special and install Linux on it.

If people want Linux on the desktop to offer a more polished experience, the only way to get it is for everyone to put their money where their mouth is.

If "Linux" were a company we would be justified in demanding a fully finished product before buying but free software is a resource that already benefits billions even if they only interact with it via android, or web services.

"As the author points out, regression failures are a big problem. The sheer bloat of Linux has made it unmaintainable. And who wants that job? Big chunks of important code are abandonware."

This seems to be unsupported supposition that we are supposed to take as received wisdom.


Sorry, but chances are the $399 Walmart special uses an Intel chip with integrated GPU and, because of that, will work out of the box. Perfectly.

Lack of hardware support happens mostly on the other end of the spectrum, with very high-end graphics cards no kernel developer has ever seen from up close and multi-card setups that are essentially unique. I feel sorry for the people who need that but unless hardware manufacturers start to properly document their stuff, make it available to Linux kernel developers and START PAYING MONEY to have drivers developed (like vendors do with Windows) it'll not improve.


> unfriendly hardware vendors

You mean people with economic pressure to be profitable?

As a Linux user, I know I represent a fraction of the market. I'm grateful when people invest in us, because I know it's a great move in principle, but it's not always one money-wise.

Building hardware is HARD. Selling is HARD.

Being dismissive of the people who aren't able to provide Linux support is not going to win them over to our cause.


"Unfriendly" means assholes who ship hardware that neither follows standards nor bothers to ship with documentation. I'm not dismissing them, nor will we win them over to any side, because they aren't even part of the conversation; they only converse with OEMs buying millions of units.


> You mean people with economical pressure to be profitable ?

Selling more hardware is usually good for a hardware manufacturer. So is having fewer people returning their equipment because it doesn't work.

> not being able to provide linux support

Most kernel developers would be ecstatic just with proper documentation of the hardware being sold.


> Selling more hardware is usually good for a hardware manufacturer. So is having fewer people returning their equipment because it doesn't work.

It all depends on the ROI. In hardware, economies of scale are at play, and they don't go well with niches.

> Most kernel developers would be ecstatic just with proper documentation of the hardware being sold.

I agree. That's mostly a matter of culture. Many companies won't publish docs, because they are either afraid of competition, pirating, or looking stupid.


The nature of open source development is that there are duplicates. The distribution usually chooses the better option from the alternatives and makes it the default. In the GPU driver issue you are raising, what is the problem? Fedora comes with nouveau out of the box and you can install an alternative very easily. You may have a gripe, but proliferation of alternatives is an unusual one: there are multiple browsers, music players, video players, file editors and even multiple DE options. I guarantee that if there were only one music player option there would be a lot of unhappy people and a new project started immediately.


I've been using Linux on the desktop since Mandrake was a thing; if there's one thing I'm tired of it's having the "choice" of five broken implementations of an app rather than just one that works (the reverse is also annoying, when one broken monolith crowds out five working predecessors).


As someone who has also used Linux since Mandrake, I am tired of people telling me there should be just one desktop, file manager, etc. There needs to be competition in FLOSS too; nowadays I use both desktop environments, depending on the hardware, and mix and match components. For example, I prefer Okular even on GNOME.

If people so desire a 'unified experience', you have Windows/macOS. I came to Linux as a refugee from these and am tired of the attempts to pull it in the same direction. Not everything has to be the same, in fact that's a terrible world to live in.

I am also tired of people saying things on Linux are "broken"; they aren't any more broken than on macOS/Windows. Granted, you may have to get compatible hardware, which is only fair considering that's what you're doing when purchasing a Win/macOS machine, and yet on Linux there's somehow this grand expectation that any random crap HW should just work. You don't expect anything not designed with macOS in mind to work there, so why Linux?

I use macOS at work and experience not-so-rare kernel panics. There were also the bug in Premiere blowing up speakers on the MBP, being allowed to log in without a password, APFS logging the encryption password in plain text, etc. Yet somehow nobody is as strict about that as they are about insisting Linux somehow 'doesn't work', even as I am sitting here having been productive on it for over a decade.


I've been using Linux since about the same time, and while I find the five-broken-implementations thing rather exhausting at times, I am more than thankful for it. It's one of the reasons why Linux (and FOSS software) is so damn useful.

Just look at Windows land, where they do have one implementation to rule them all, more or less (Windows 10). Everyone who dislikes its telemetry or almost touch-only interface is either stuck with Windows 7, which won't be an option for much longer, or is stuck venting against Microsoft and grumpily installing hacks like ClassicShell to make things a little more bearable.

When Gnome 3 came up, everyone who liked the new direction kept using it, everyone else moved to Cinnamon or Mate (or XFCE, KDE...)

It's not just about competition, it's about being able to pursue different visions and different objectives.

Users only see this in terms of choice, but there's a great deal of value about it for developers as well.


I really want to know how GNOME 3 even saw the light of day, and why distros like Fedora even went with it. Unusable.


I'm using Gnome shell since the early days and, so far, I'm very happy with it. But, then, I also use macOS on my Macs and I'm very happy with them too. All of them run everything I throw at them just fine.

I find the lack of options to customize GNOME irrelevant - I'm way past the day I cared about the wallpaper or the icons or the colors of the window chrome. I pay attention to what's inside the window, not its border.

The other day I fired up a Solaris 10 VM so I could give Wikipedia a proper Solaris/CDE screenshot and I was surprised it's actually still usable - the terminals are responsive and the rest, oh well... You don't want to use a GUI to copy files, do you?


Easy: Red Hat used to pay the salaries of a big chunk of GTK/GNOME developers.


GNOME 3 is also far from universally deemed unusable, (I use it daily with great success), despite what the propaganda would have you believe.


I tried to use GNOME 3 after Ubuntu dropped Unity, and it never performed like Unity, hence I am now using XFCE.

The latest updates show there is still work left to do in this area:

https://discourse.ubuntu.com/t/monday-25th-february-2019/995...


Pretty much this, I'd say. I know people who like its design and find it very comfortable. I'm definitely not one of them but fortunately no one heeded the calls to just build one desktop environment to rule them all :).


> At the kernel level and close to it, the areas that consistently give trouble are video/GPU support and audio support.

... is this true? As a long time desktop Linux end user, whatever these lower level issues are, they've not been visible to me in at least a decade.


I've been using Ubuntu LTS versions since 12.04 on Thinkpad T and X series laptops and I'm a very happy camper - out of the box Ubuntu doesn't suck for me, it "just works". I moved from OS X on latest Apple laptops to make my daily job (interaction design + web development) more productive (e.g. workstation running the same OS as servers, tooling etc) but now it's my preferred OS + hardware combo from a end-user perspective. I have to switch back to an Apple machine for testing and pairing with co-workers at least once a week and between the new Apple laptop keyboard, the random reboots (awaking from sleep), shitty web font rendering and intermittent errors relating to Apple ID, I don't miss it. I really loved OS X quite a few years ago but between the latest hardware (don't get me started about cords/dongles needed for a 2018 Macbook Air) and growing list of OS X quirks I'm always happy to return to Ubuntu 18.04 on my Thinkpad T450s.

I think many of the points raised in the article affect people making desktop software for Linux rather than end-users of desktop Linux. It seems like a global list of issues for the entire desktop Linux ecosystem - which is totally valid but I think a more accurate title of the article might be "Why developing desktop software on Linux sucks" or "Why creating a desktop Linux distribution sucks" because I think my desktop Linux setup rocks!


Just an aside: at a previous job, I worked with devs that used Macs, while I had a Linux VM on a Windows laptop (and we deployed to Linux). Numerous times, I found bugs in coworkers' code because they ignored case sensitivity in filenames. Yes, OS X is BSD and Unix based, but by default the file system is case-insensitive, like Windows, and apparently if you make it case sensitive, you can break a lot of popular Mac software.
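The difference is easy to demonstrate: on a default (case-insensitive) APFS/HFS+ volume the sketch below ends up with one file, on ext4 it ends up with two. The filenames here are arbitrary, purely for illustration.

    #!/usr/bin/env python3
    # Illustrates case-(in)sensitive filesystem behaviour. On macOS's
    # default case-insensitive volume, "Readme.txt" and "readme.txt" name
    # the same file; on a typical Linux filesystem they are distinct.
    import os
    import tempfile

    with tempfile.TemporaryDirectory() as d:
        for name in ("Readme.txt", "readme.txt"):
            with open(os.path.join(d, name), "w") as f:
                f.write(name + "\n")

        entries = sorted(os.listdir(d))
        print("Directory entries:", entries)
        if len(entries) == 1:
            print("Case-insensitive: the second write reused the first file")
        else:
            print("Case-sensitive: two separate files")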


What file explorer do you use? Nautilus pisses me off. I'm 100x more productive with Explorer on Windows.

Some features I'd like:

- Being able to open the context menu for the current folder, even if there are enough files to fill the view, without going up a level

- Being able to jump to files/folders in the current directory by name without opening search results

- Being able to add functionality to the context menu


> Being able to open the context menu for the current folder, even if there are enough files to fill the view, without going up a level

You can, the windows context menu key/Shift-f10 work. If you have something selected, deselect it with Ctrl-Space before.

Some items in the context menu have shortcuts of their own (new folder: Ctrl+Shift+N, file/folder properties: Alt+Enter or Ctrl+I, rename: F2, etc).

> Being able to jump to files/folders in the current directory by name without opening search results

Not a solution, but a workaround: disable recursive search, and treat search as a filtered-down list. (I consider this one annoying too.)

> Being able to add functionality to the context menu

Extensions can add menu items into context menu; for example, syncthing-gtk does exactly that.
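For the "add functionality to the context menu" wish specifically: Nautilus extensions can be written in Python via the nautilus-python bindings and dropped into ~/.local/share/nautilus-python/extensions/. Below is a minimal sketch of a menu item that copies the selected file paths; note that the exact get_file_items signature has changed between nautilus-python releases (older versions also receive a window argument), and xclip here is just one arbitrary clipboard helper.

    # Sketch of a Nautilus context-menu extension (nautilus-python).
    # Save under ~/.local/share/nautilus-python/extensions/ and restart
    # Nautilus. Treat the get_file_items signature as version-dependent.
    import subprocess

    from gi.repository import GObject, Nautilus


    class CopyPathExtension(GObject.GObject, Nautilus.MenuProvider):
        def get_file_items(self, files):
            item = Nautilus.MenuItem(
                name="CopyPathExtension::copy_path",
                label="Copy full path",
                tip="Copy the selected paths to the clipboard",
            )
            item.connect("activate", self._copy_paths, files)
            return [item]

        def _copy_paths(self, menu_item, files):
            # get_path() is None for non-local files; skip those.
            paths = [f.get_location().get_path() for f in files]
            text = "\n".join(p for p in paths if p)
            subprocess.run(["xclip", "-selection", "clipboard"],
                           input=text.encode(), check=False)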


Try Dolphin.


Yep ^


Thunar and SpaceFM work pretty well for me. PCManFM is also another good option. They're all fairly light-weight.


I wish that there would be one unified API for creating desktop programs on Linux. Right now it's somewhat coalesced on GTK/GNOME and Qt/KDE, though there are a number of others out there.

I use Linux in a VM for very hobbyist level embedded development (think Arduino and the like). Driver problems are non-existent, all of the technical problems are non-issues in this environment. The problems that I see are all to do with the lack of a common set of services for building a GUI application.

Why do my text editor and Arduino IDE use different file pickers? It's because my text editor uses the KDE API, but the Arduino IDE uses something else. GIMP uses yet a different file picker from the other two. LibreOffice uses yet another file picker, that's similar to Kate's but slightly different. I'm sure that installing Atom and VS Code would introduce me to two more file pickers.

The reason for this is that each of these programs uses a different GUI toolkit and, as a result, has a different concept of what a file picker needs to look like. Some of them don't even agree on which order the Open and Cancel buttons should be.

Network transparency is another thing that suffers from this. On Windows, you can basically use a UNC path (\\server\share\path\to\file.txt) almost anywhere because the entire system from the file picker all the way down to the file APIs knows about UNC paths. In Linux, KDE apps do this one way, GNOME apps do it a different way, and command line tools need you to somehow mount the target server before you can even think about it. I last seriously used Windows about 14 years ago and I still miss this greatly.

None of these are insurmountable problems, but it needs someone to make a decision about the one true way to do things.


>someone to make a decision about the one true way to do things.

Things don't really work this way in free and open source development. There is no one person to make decisions, consensus is reached when the quality of something raises "above the bar" and actually improves things for all involved parties. If someone wants there to be an über-library that serves everyone's use case then it's up to them to go and do the work to build that.

And it has been getting better in this regard. For example KDE and GNOME used to have their own IPC, multimedia & audio mixing backends, but now both have converged on DBus, GStreamer and PulseAudio, in part because these were intentionally built to be flexible low-level solutions. I'm sure there are more examples of this too but those are the first that come to mind.


You're absolutely right. I wonder if something like DBus and PulseAudio could happen with my UNC pain point.

With the assumption that the goal is for "vi //server/share/file.txt" to work the same as "notepad.exe \\server\share\file.txt" does on Windows, here are my thoughts.

First off, notepad.exe doesn't really care about the fact that it's a UNC path. It just opens the file with CreateFile (either CreateFileW or CreateFileA).

There would need to be replacements for the libc file functions. These could be a shim in front of libc, or baked right into libc. Note, there's a LOT more needed than "just" new file functions - any functions that do anything with paths need to be looked at. Shells would likely need some changes to work properly, though it's not like the Windows shell can truly do much with UNC paths - copying files to/from works, but you can't cd into them.

How does it ask for credentials? If it's via DBus, a desktop environment can provide the authentication prompts, but what about a pure-commandline system? Maybe the transport is just SSH and relies on the existing public key authentication? But what if you're just doing a one-off thing and don't want to set that up? Using SSH is probably a decent idea since it's got authentication, security, and a file transfer protocol already built in.

On top of all of this, when you open //server/share/file.txt for writing, what does that actually mean? Is there a file descriptor? How does that work with the kernel? Does libc now manage all file descriptors with only a subset corresponding to kernel file descriptors? Could a pure user-space solution fake this well enough to actually work? Would this need to be a FUSE filesystem along with some daemon to automatically unmount the remote servers when the mount is no longer needed? Would it be something like the automounter, just a lot better? Does a kernel need changes for any of this to work?

This is one of those things that touches so many layers and potentially interacts with so many parts of the system, potentially all the way down to the kernel.

My guess, and I don't actually think this will happen, is that Apple will do something like this on Mac OS X and have a reasonable mapping to the BSD world underneath, then someone in the Linux community will come along and do something similar in a way that's better suited for Linux. As a parallel, Apple came out with launchd in 2005 to replace init scripts, systemd made an appearance in 2010 - both do very similar jobs, with launchd tailored to the needs of MacOS and systemd tailored to the needs of Linux. Maybe something similar could happen with UNC-like file sharing.


All that has been doable for quite some time, you could mount SMB shares like that with smbfs since early releases of Samba, and later with the CIFS fs driver. You do need root to mount things that way, so it isn't ideal.

For the more complicated stuff it can be done but not everything is available via a simple GUI. GNOME and KDE have their own virtual filesystem layers in userspace, GVfs and KIO, I don't know what KIO does but GVfs supports a bunch of network backends and has a FUSE driver that can mount its own virtual filesystems and expose them to outside applications. So the features are there but I don't think they are well-presented right now, maybe someone can prove me wrong though.

It would have been nice if the kernel had better support for fine-grained control over filesystems like HURD or Plan 9 do. But instead it was decided that it was better to handle those things with userspace daemons, so that's where we are now.


These aren't the same thing though. The GNOME and KDE VFS layers only apply for applications written for those APIs. It's not a universal thing.

Being able to mount a CIFS filesystem is fine, but it's not the same thing. In Windows, you can basically use a UNC path anywhere because CreateFile knows how to deal with it. The point is that you don't need to mount the remote filesystem (the Windows-equivalent being mapping a network drive).

What I'm really looking for is the user experience, not the underlying protocol. On Windows, I can just go "notepad.exe \\server\share\file.txt" and edit the file, on Linux I need to either use a KDE application or go through the ceremony of mounting the remote filesystem. It's the fact that the feature is silo'd into GNOME and KDE (and the fact that it doesn't even exist on Mac OS, but that's another issue) that bugs me.


There is currently no kernel interface that I know of to do that, and I don't think it would be too hard to hook into an open() on an invalid path and try to do something (mount a network fs, call out to GVfs or KIO, etc), but I can tell you you will meet resistance if you try to because things like "//stuff" and "smb://stuff" are already valid local file paths in Linux. So I leave it up to you to figure out how to do this without breaking things.


Yeah, this is definitely not an easy problem to solve given the design of Linux.

I don't know why I didn't remember this earlier, but I actually explored this a number of years ago and came up with two things that are close, but not quite there:

First was to use a systemd automount unit[0], but I didn't really get anywhere with it. From the looks of it you have to know all the possible things you could want to automount, it can't do wildcards. Being able to do some kind of pattern matching on the requested path and translate that into a mount command would go a long way to making this work.

I also explored the good old automounter[1][2], but it has a lot of the limitations that systemd's does. It does have the advantage of supporting host maps, which gets me a bit closer to what I'm looking for. The unfortunate thing that remains is that this is NFS instead of a modern protocol. If this were somehow backended on sshfs, I suspect it would be quite useful. Of course, sshfs is missing the concept of shares but that's not a showstopper by any means. Authentication becomes a problem since the automounter probably can't ask the user for a password, and may not even know which user is requesting the mount.

I have no idea how well either will work in practice. Modern Linux on the desktop is a very different environment than the one the automounter and NFS were built for. The systemd automounter looks like it serves a very specific purpose and can't currently do what I want.

Maybe all we really need is a modernized automounter and/or some extra features in systemd's automounter. These could lead to "vi /net/server/share/file.txt" working as expected, which, quite honestly, is basically the same as what I suggested earlier.

[0] https://www.freedesktop.org/software/systemd/man/systemd.aut...

[1] https://linux.die.net/man/8/automount

[2] https://linux.die.net/man/5/auto.master


> I also explored the good old automounter[1][2], but it has a lot of the limitations that systemd's does. It does have the advantage of supporting host maps, which gets me a bit closer to what I'm looking for. The unfortunate thing that remains is that this is NFS instead of a modern protocol.

What limitations affect you?

(At home, I have linux running on an HP MicroServer as my NAS, it exports filessytems via NFS. Other machines run autofs with the hosts map, so for example my wife's desktop - and mine for that matter - auto-mounts NFS shares on-demand and she can open any file directly in any application by accessing /net/$hostname/$path).

NFSv4 is pretty modern ...

I believe this should also work for CIFS, if the server-side supports unix extensions (to do user mapping on a single connection), but I haven't had time to try it in the past day in my limited time at home.

> Authentication becomes a problem since the automounter probably can't ask the user for a password, and may not even know which user is requesting the mount.

If you have Kerberos setup, NFSv4 does the right thing ...

If you don't have Kerberos setup, then you're probably ok with just normal NFS user mapping.


Interesting, I'll have to give automount another look.

The last time I tried it was years ago, so I can't remember what limitations I found. If I get a chance to do this in the near future I'll report back.


gvfs does some of what you ask. I guess you could trick open() with LD_PRELOAD.

For the dbus/polkit authentication prompts, I've seen it work on the command line but have no idea how it works. If anyone wants to donate, I'll spend a day and half a bottle of good whiskey and come out with a blog post.
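To make "gvfs does some of what you ask" concrete: once gvfs has mounted a share (gio mount handles the mounting and the credential prompt), it also exposes it through its FUSE bridge under the user's runtime directory, so plain local-path-only tools can open the file. A rough Python sketch; the smb-share:server=...,share=... naming is what gvfs happens to use for SMB today, so treat the path format as an implementation detail and check /run/user/<uid>/gvfs on your own machine first:

    #!/usr/bin/env python3
    # Sketch: open a \\server\share\path style location from any program
    # by letting gvfs mount it and then going through its FUSE bridge.
    # The FUSE directory naming is a gvfs implementation detail; verify
    # it locally before relying on it.
    import os
    import subprocess


    def smb_to_local_path(server, share, relpath):
        uri = "smb://{}/{}".format(server, share)
        # Ask gvfs to mount the share; prompts for credentials if needed.
        # check=False so an "already mounted" error doesn't blow up.
        subprocess.run(["gio", "mount", uri], check=False)

        runtime_dir = os.environ.get("XDG_RUNTIME_DIR",
                                     "/run/user/{}".format(os.getuid()))
        fuse_dir = os.path.join(
            runtime_dir, "gvfs",
            "smb-share:server={},share={}".format(server, share))
        return os.path.join(fuse_dir, relpath)


    if __name__ == "__main__":
        # Hypothetical server/share, for illustration only.
        path = smb_to_local_path("fileserver", "docs", "notes/file.txt")
        with open(path) as f:
            print(f.read())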


I'm sure that installing Atom and VS Code would introduce me to two more file pickers.

Having just checked, both Atom & VS Code use the GTK/Gnome file picker. LibreOffice is also using the GTK/Gnome file picker.


To be fair, the Arduino IDE is java swing, so it doesn't look quite right on any platform.


"On Windows, you can basically use a UNC path"

How about NFS in Linux? It's a lot more transparent than SMB will ever be in Windows. SMB is and will remain a Windows feature.


These aren't the same thing.

In Windows, you can basically use a UNC path anywhere because CreateFile knows how to deal with it. The point is that you don't need to mount the remote filesystem (the Windows-equivalent being mapping a network drive).


I don't mount remote filesystems, autofs does it for me whenever I browse /net/$hostname


> Network transparency is another thing that suffers from this. On Windows, you can basically use a UNC path (\\server\share\path\to\file.txt) almost anywhere because the entire system from the file picker all the way down to the file APIs knows about UNC paths. In Linux, KDE apps do this one way, GNOME apps do it a different way, and command line tools need you to somehow mount the target server before you can even think about it. I last seriously used Windows about 14 years ago and I still miss this greatly.

I just use NFS and autofs. Sure, it's a few seconds more effort to set it up, but it's a once-off cost.


Windows doesn't have that kind of unique standard either.

Both KDE and GNOME have user interface guidelines. If software doesn't follow them, well...


In recent years, Windows has become a mess in the UI space. Mac OS X has fared a little better, but it's also becoming a mess.

It makes me sad. Years ago programs written for either operating system tended to follow the UI standards pretty well, with the main exception being games. Microsoft started to try new things with Office, so if you wanted to see where the standard was going you just had to look at where Office was.


I'd say this is a sign of more and more people developing software, and with ever more diverse motives.

If your motive is to sell lots of copies to lots of people and make lots of money to it, investing energy into UI refinement makes total sense.

If you just have to scratch an itch, maybe it doesn't.


> I'm sure that installing Atom and VS Code would introduce me to two more file pickers.

I’m sure they will both use the same one.

I don’t like Electron because it’s slow and consumes too much RAM, but I have to admit they do have that unified API. It’s quite high-level, and relatively stable because there’s only a single implementation.


The Arduino IDE is written in Java with Swing; it looks weird on every OS.


How about a list of all the things that really work great?

I've been using Linux on the desktop for nearly 20 years and I'll have to say it's fantastic, despite the occasional headache, which seems to come at a far lower frequency than on other major desktop operating systems.


I agree; I read the present list as "people with those concerns should not be using Linux". For example, if you require MS-only software (Photoshop) then stay on Windows. I mean, it's just not practical for every software vendor to support multiple OSs.

IMO Linux on the desktop is in a remarkably fantastic state. There’s A LOT of really great distros and software that just works. My daily driver is 7 year old Chromebook with xubuntu 18.04, I do java development on this thing!!!


I'm guessing you're relatively tech savvy and enjoy troubleshooting (within reason). Most people aren't, of course.


I've said for years that Linux works for grandmothers, and for me (and a bunch of others).

It doesn't work for people who:

- need a locally installed copy of MS Office or other Windows only software

- IT admins that stopped learning a long time ago

- etc

As others point out, for some of us using Windows or Mac is a hassle. They're slow (30% longer compile cycles, don't even get me started on git), missing important customizability, no built-in universal package management, etc.


how is git faster on linux?


Windows has abysmal performance in its equivalent of the VFS layer, i.e. path handling, directory listings, and so on.

Funnily enough, the main reason is probably that compiling the Linux kernel is one of the most filesystem-taxing workloads and also one that kernel developers care most about.


And it is worth mentioning if you've never used git on Linux, the difference versus Windows really is night and day.


Not the person you're replying to, but I fall in the same boat as them as far as Linux experience goes.

And for me, I find I have way more trouble doing things on Windows/Mac than Linux. I think it's really more a way of thinking about how a "desktop OS" is supposed to work. People coming from Windows expect things to work the same, and that's just not the case.

Likewise, when I unluckily find myself on some closed-source box, _very little works how I expect_. And man is troubleshooting harder, because there are so many "surprises."

My point is, I think blaming the operating system is not the answer - users need to adjust their expectations and open their mind a little.

This is a somewhat poor analogy, but it's sort of like a Chinese citizen (closed-source user) becoming a citizen of a democracy (open-source user). The government is going to work differently, and you can't claim democracy is broken just because it's so different from authoritarianism.


"This is a somewhat poor analogy, but it's sort of like a Chinese citizen (closed-source user) becoming a citizen of a democracy"

I like that analogy (even though I am still looking for that pure democracy/open source government). In Linux you have the freedom to do almost anything with the system, but you have to know what you are doing, as the system usually does not stop you when you are about to do something stupid. Windows makes me mad when it tries to manage me. Like "Yes, I really want to use this computer without a firewall or antivirus, because it is not connected to the internet and never will be, because it serves another purpose." To do this you need to mess with obscure registry settings; the default behavior of Windows is to enforce it, and nowadays also updates, because most users don't know or care what they are doing and are used to being told what to do.

So I believe it is good that I can do anything with my system, but everybody started as a newb once, so a more beginner-friendly version could be helpful.

But Linux's main problem is hardware support, and fixing a broken audio/graphics/wifi driver is something that can drive away very experienced people. (It drove me to ChromeOS for my laptop.)


I like your expansion upon it :)

I definitely do agree that "onboarding" could be improved. How I dunno. To me at least, it seems like I hear a lot of success stories from the tails of the spectrum - power users and developers on one side / the complete opposite on the other. And then for everybody in the middle, there's no other way to put it than it's almost a shit show:

On the software side there are a million and a half different ways to do everything, and often an insane amount of "noise"/outdated info that needs filtering through to find what's relevant to your specific needs. Even at the lowest levels of the stack there is no "the one way", and I think all that uncertainty (especially from the beginner perspective) can make it feel like climbing a mountain.

Hardware, as you mention, is tricky if you don't know what to look for (and why would most people?). At least from a longtime Linux user's perspective, it's incredible how much better things have gotten (since the 2.2 days in my case). But there's a ways yet to go, and it's by far the roughest where it's the most visible (i.e. the trendy bleeding edge). Part of that is just the nature of "lag" in open source development between code getting written, released, and finally showing up in your distro. That cycle can sometimes take 6 or 8 months, especially for hardware :(

Not that this helps users with existing hardware, but

* definitely always google before you buy (model name + "linux" and read the first page or two of results)

* stick with a non-high-DPI resolution screen

* WiFi, I've had the best luck with Qualcomm/Atheros, Intel, and Realtek (in that order)

* Graphics, get AMD. NVidia cards can work well enough with their proprietary driver, but the out-of-the-box experience is crap. Intel works great too, as long as you don't need it for anything heavy.

* Audio, for me the last time I had trouble was with one of the earlier Sound Blaster Audigy cards. Have stuck with onboard codecs since and honestly never had a problem.


"I like your expansion upon it :)"

I actually thought about that analogy for a while before, but rather used anarchy/libertarian vs. authorian/dictatorship ...

(basically the same point, only more radical)

Anyway:

"On the software side there a million and a half different ways to do everything, and often an insane amount of "noise"/outdated info that needs filtering through to find what's relevant to your specific needs. Even at the lowest levels of the stack there is no "the one way", and I think all that uncertainty (especially from the beginner perspective) can make it feel like climbing a mountain."

Yes. Even for simple things like a screenshot, there are a million ways. Not a problem in itself, but I came from Windows, where this is just the Print Screen key, and I did not think there could be a reason to do it differently - yet on some distros there is. I ran into it a few times: Print Screen did not work, so I googled it. You want a screenshot? No problem, just install this via the terminal, or this, or type in those commands and there you go... WTF? I just want a screenshot. How is this not standard?

Now this seems to be mostly solved; on XFCE it even asks me what to do with the screenshot I just took after I hit Print Screen (save, view, ...). Oh, and in general, I really love XFCE.
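(For the record, a few of the million ways - all real tools, but which ones you actually have depends on what your distro ships:)

    xfce4-screenshooter              # XFCE's own tool, what Print Screen triggers there
    gnome-screenshot -a              # GNOME, select an area
    scrot shot.png                   # tiny CLI screenshotter
    import -window root shot.png     # ImageMagick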

But unfortunately:

"Not that this helps users with existing hardware, but

* definitely always google before you buy (model name + "linux" and read the first page or two of results)"

this is not for newbs either. Newbs do not know the difference between GPU and CPU. And they certainly do not order single components to put together their PC.

Newbs need a company that does that for them: compose a PC/laptop from components that are supported and work well together, which is what Purism does. But suddenly we are not on the mass market anymore ... and we see the price difference.

So the problem remains complicated, with no easy solution.


> stick with a non-high-DPI resolution screen

Sorry, this is impossible to do once you've tried HiDPI. The difference is overwhelming; I consider non-HiDPI screens an obsolete technology, like CRTs.

I do use Linux desktops; thankfully, the HiDPI support is much better these days than it was even 2 years ago. Both Gnome and KDE work relatively fine.


Hmm my eyes must suck haha (I'm also not sure where "HiDPI" starts - 1920x1080/14" and 4K/43" keeps me happy enough)

Glad to hear support is getting better! I guess I'd be most worried about non-GTK/QT apps


As long as you don't connect a 1080p monitor to your laptop ;)


Still waiting for that improved AMD experience to give me back the hardware video acceleration on my Brazos APU.


Aye that sucks. Regressions are the worst, knowing it used to work. I had to look up that platform, knew it was getting old... and primarily for netbooks... I don't think I'd hold my breath for it getting fixed by AMD at least :(


Which means that installing Windows 10 with the DirectX 11 drivers from Asus is the only way to get it back, ergo "Linux Problems on the Desktop".


I'm tech-savvy, but I use Linux because I'm tired of troubleshooting constant Windows problems. Maybe the OS-X world is better? But I've had the same DE for over fifteen years and nobody's tried to make it touch-friendly or stick ads in it.

The only thing that ever crashes is the web browser. I keep my computers no less than five years and they run just as fast as new, usually physically failing rather than becoming computationally incapable due to anti-virus and bloat slowing a system down.


Sure, it helps to be tech savvy when installing any operating system. I'm not sure that I'd agree that most of us enjoy troubleshooting, it just goes with the territory... always has even on Windows and Mac. The tech savvy and troubleshooting tend to come into play with non-standard / niche / unsupported hardware configurations and/or running bleeding edge / complex configurations. Most desktop and laptop configurations aren't that.

The main thing a non-tech savvy user needs to worry about when considering Linux is to generally understand that hardware support lags a bit. So they should do a few web searches on the make/model of hardware they want to use + 'Linux support' before they dive in. If they don't see page after page of glowing success stories, they have their answer and should steer clear of that configuration for the time being. If they do see lots of success stories, read a few of them to see if their eyes glaze over at what is written or if it seems pretty straight-forward and they can follow what's being said. No tech savvy required.


I'd counter that many of us tech-savvy folk still don't like Linux, and not just because it is different, but because it has significant problems that make it more of a headache for our workflows than other OSes. I wish that weren't the case. I'd love to be using an open OS, but Linux's ways of doing things don't mesh with how I work.


And then there are those of us that really can't work on a Windows machine because it just doesn't mesh with how we work. With Windows Subsystem for Linux I can finally at least get some work done on Windows but honestly it still feels like swimming with my arms tied behind my back.


I have a friend who was complaining about Windows 10 so I set him up with Ubuntu. He's about as dumb as it gets with computers and he has never had an issue with Linux. The few things he's installed have been in the software store and he clicks the apt update once a week. He's much happier and less frustrated with Linux vs Windows.


Cool beans mate. I think you are describing someone actively updating their system because it isn't too onerous, as opposed to because it's a good idea (risk vs. reward).

Many, many years ago a decision regarding Windows' software management was made, and ever since, updates take sodding ages, sometimes require multiple reboots, and are generally unpleasant. One day that will be fixed - it is not normal.


So, in essence, he is happy because you are his IT guy? Wouldn't that be the kind of thing your parent post is talking about?


My wife has no idea what her laptop is running (Arch Linux as it turns out).


I assume they're not comparing their own issues with Linux vs. the issues of less tech-savvy people with other OSes, but vs. their own experiences with other OSes.


Hard to enjoy the parts that work great when you have a corrupted GRUB boot loader, nonfunctional networking, and graphics driver issues.


Corrupted GRUB sounds like a hard drive failure. Regarding NIC and graphics drivers, it is possible to find hardware with good support; it is probably impossible for volunteers to fix every issue with closed-source or proprietary software/hardware. Consider choosing accordingly.

If you want someone to select the hardware for Linux compatibility, consider buying hardware that comes with Linux.


None of these have ever happened to me, and I started with slackware. (Maybe once around y2k a nic wasn’t supported but after waiting six months it was.)


Corrupted GRUB happened to me 7 times, so maybe we shouldn't throw around anecdotes?


Windows dual boot specifically likes to corrupt it, or has in the past. I've periodically checked in on *nix desktops over the last 20 years: Red Hat, CentOS, Fedora, SUSE, SLED, Mandrake, Slackware, Gentoo, Gentoo on an SGI O2, Debian, Ubuntu, Mint, OpenBSD, FreeBSD, DSL, Puppy Linux, Solaris, OpenSolaris, IRIX, Arch, OS X, the venerable hackintosh, and, last off the top of my head, BeOS, which was somewhat POSIX compatible if not a *nix.

So, the crap that sucked in 1998 is the same stuff that sucks today: inconsistent clipboards; graphics driver/X11 support; multi-monitor support and debugging/positioning issues; poorly documented or improperly configured out-of-the-box network management, firewall, etc. tools; boot loader failures and (more significantly) recovery; inconsistencies between Qt, GNOME, and KDE apps; graphics subsystem freezes; Pulse or whatever sound system of the month suddenly failing one day; DVD playback; mdadm failures; drive partitioning and resizing difficulties; file system corruption (sure, it's disk-based, but Windows is less prone to it than the older Linux filesystem types; ZFS, XFS, etc. are nicer).

If you want to compile an application, run a server, etc., *nix beats out Windows any day. If you want to be able to install this great new Linux thing you heard about on your existing computer, surf the web, manage your photo collection, hook up your scanner to copy in those old pictures of your kids, set up your Nvidia card and play some games on Steam, or find and install the latest or a specific version of an app without hitting the command line and typing things out like madison, then Linux is not the desktop for you.

Edit: and don't get me started on high-end Xeon and Intel chip support/SpeedStep handling, or version upgrades running successfully.


Sounds like there’s more to the story.


Note that the same author also lists all the problems with Windows 10 (http://itvision.altervista.org/why-windows-10-sucks.html), so at least he's being even-handed. Think carefully about the pros and cons, and then choose one (or maybe both, if you are willing to dual-boot or use a VM).


Who would want to read a list of things that are already working well, except maybe somebody already considering switching to Linux?


Users will always pay more attention to what they miss out on by using a lesser-known alternative than to what they gain.


I feel like a lot of the usability quirks that Linux has are trying to shoehorn a multi-user system into a single-user context. For example, there is so much work done (and even complaining in favor of doing that work in this article) to make it possible for multiple people to sit at the same computer. Nobody does that! Most people have more than one computer! Why do people spend their free time on that use case?

This could be a long rant, so I'll keep it short... but someday I'm just going to rip the concept of users out of Linux and see what it looks like. Oh no, you say, malware will get you! Unlikely. Malware running as my user can fuck over my life just as easily as malware running as root. So why even pretend that that's a good isolation model? It doesn't prevent any attacks.

(As for how Linux in 2019 is doing... I recently switched back to Ubuntu for a desktop. Whenever I lock the screen and have DPMS enabled, it forgets that I have two monitors and that I want 200% DPI scaling when it wakes back up. What? Back in my day you had to hard-code the resolution and monitor configuration in the X11R6 config and there was no way to change it without restarting the X server. May I please have those days back? At least once it started working, it kept working.)


I dunno, I would like to be able to have a restricted account to give to my daughter. Same for mobile devices - she's too young to have her own, but I would like to give her my iPad to run YT Kids, for example, and only that, so some kind of multi-user-ness is needed, just a different one. I'd say SELinux tries to achieve that, but it is far from usable.


In the 90s, when our desktops didn't have user accounts, we used third party software to lock people into restricted environments. It wasn't perfect, but neither are user accounts.


That is orthogonal to Linux's user system, though. That model says things like "your daughter can have 25 file descriptors". It is not useful for this use case.


Right? The only reason user accounts exist at all in Desktop OSs is that all of them today were originally server OSs. Placing restrictions on user accounts is only useful for protecting the system from users, which is a valid concern on a network with shared resources but worse than useless on a personal desktop.

Mobile OSs got this right: on a personal device, the permissions model should be applied to the applications.


> Placing restrictions on user accounts is only useful for protecting the system from users, which is a valid concern on a network with shared resources but worse than useless on a personal desktop.

True for home users; not necessarily true for corporate users-- where computers are IT-managed (i.e. "don't let end users fuck them up") and may be shared (which is highly situational-- the degree to which computers are shared varies highly from company to company, or even deployment to deployment).

Heck, it's not even unheard of to end up with multiple "simultaneous" users on a single-seat desktop machine-- every major OS these days supports some form of fast user switching, which will leave one user's programs running while another user's physically sitting at the machine.


A few things, since I work in IT: we don't give a damn about your workstation. It's a fungible resource. Reimaging is easy and relatively quick. We even let you have local admin because who cares. Users are not prone to playing around with settings they don't understand, in my experience. If someone was constantly needing their workstation reimaged we'd probably just fire them for being incompetent. Ideally, the OS would be completely separate from the applications and configuration and be immutable, and that would go a long way towards eliminating those kinds of problems.

We managed to share home desktop computers in the 90s without significant problems, even though the OSs we used didn't support multiple user accounts at all. And there's no reason you need user accounts to accomplish what you're describing. You can still have profiles (preferences, application configs, etc), and you can encrypt them with a passphrase if you have any reason not to trust others using the same device.

> Heck, it's not even unheard of to end up with multiple "simultaneous" users on a single-seat desktop machine

A vanishingly small use case inside an already vanishingly small use case.


Not everyone lives in a first world country. I live in Central Asia, and $200 a month is a decent salary here. And it is normal to share a computer between different family members.


Again, we did that in the 90s here in the west all the time and user accounts weren't necessary and their introduction did not really solve anything.


I still don't think users are the right model. For example, look at how Amazon allows multiple users to access one of their shared computers (it's a service called EC2). They don't use Linux users. You get root and the other people using that machine are protected from you.

I believe there is now an option (or maybe it's the default) in Windows to run IE under a hypervisor, to totally isolate it from the local machine. This is moving in the direction of providing something useful.

Though to be fair protecting the OS doesn't make much sense to me. I guess it's nice to guarantee that your computer will boot no matter what you do to it, but again, that is not the problem people are actually facing.

The corporate threat model involves things like protecting people from getting an email that says "click this OAuth button to give this malware access to your email". None of the critical software is running on a user's workstation, so whatever is going on there doesn't matter.


> Mobile OSs got this right: on a personal device, the permissions model should be applied to the applications.

I cannot agree. If I want to run another browser instance, I cannot do that on a mobile system. Maybe it would be possible with some support from the browser's developers, but with a multi-user system I need no support from developers; I can create a new user and run another browser that believes it is the only browser instance running.

It is not just browsers. I can easily experiment with program configs, for example. Something doesn't work, and I want to check whether it's due to the application's config or its installed plugins? All I need is to create one more user and start a program instance as that user.
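(Concretely, on X11 that can look something like the following - a rough sketch, the user name is made up, and on Wayland you'd need a different way to hand over the display:)

    sudo useradd -m testprofile                    # throwaway account
    xhost +SI:localuser:testprofile                # let that local user draw on my X display
    sudo -u testprofile -H env DISPLAY=$DISPLAY firefox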

Android relies on SELinux, and SELinux allows much more than the old user system, but SELinux is a much bigger headache when you are trying to use it in a way Google did not intend. So in reality SELinux on Android lets me do nothing; I cannot even run an app that requires access to the contact list without actually allowing it to access the contact list. It would be nice to create one more user on Android with an empty contact list and run that program as that user. Moreover, I'd like to create a user with a faked contact list, faked browser history, all the private information faked, and run most of the apps as that user.


Nothing you're describing is a problem with the model, just the implementation. All of those things are easily solvable by being able to choose how the program sees the world. You can do it today with the various namespaces in Linux.


Yes, I agree, it is a problem of implementation. But it is nevertheless a problem. I haven't tried namespaces myself, but I'm pretty sure it would be harder for me to get them to work, even once I became familiar with namespaces.

The multi-user model is a settled, simple, and transparent model that just works. You have the abstraction of a user and the abstraction of file access rights, and that is all you need.

Namespaces have no such simple model. How can I run a program as another user but give it some special rights to access this git repo in my home directory? Do I need to write a special C program for that? Or can existing tools already be configured with some obscure XML file? I do not know, and maybe I'm mistaken, but knowing the general laws of Linux software development, I'd guess that the best software I can find for it is a complex, overengineered corporate tool with bells and whistles, and the easiest way to use namespaces in my case is to write a C program. (If I'm mistaken, please correct me, at least by stating my mistake aloud, or better, point me to the docs.)

And here we come to the real issue. To write a good C program for my tasks, I need to start thinking as a software designer and invent a new, simple model of process separation that lets me solve 90% of my tasks with ease and the rest with some headaches, but where everything remains possible. The only way to do that in a week is to refuse to think and to replicate the multi-user model on top of namespaces. But I need no replica of the multi-user model, because I have an existing one. What is the point of discarding the multi-user model just to move to another implementation of the multi-user model?

The only good thing I see is no longer needing to log into a root shell to create or delete users and groups. It would be nice, but I'm not ready to spend a week writing a C program and then an unknown amount of time maintaining it, just to stop using su/sudo for such tasks.

So, I can agree that the multi-user model is bad for a Linux desktop. But we have no real alternative. And the mobile OS approach is the worst. It reminds me of DOS, where you can work with one process at a time, where you cannot run two copies of a program, where everything happens in a single global namespace, and any process can do anything it wants. The only choice you have is "to run the program or not to run it".


> namespaces have no such a simple model. How can I run program from other user but give it some special rights to access this git-repo in my home directory?

In Linux terminology, launch the program in a new mount namespace with a rw-bind mount to your home directory. You can do this with firejail, bubblewrap, or minijail easily and without a config file.
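(A rough sketch with bubblewrap; untested, and the repo path is obviously made up:)

    bwrap --ro-bind / / \
          --dev /dev --proc /proc \
          --tmpfs "$HOME" \
          --bind "$HOME/projects/some-repo" "$HOME/projects/some-repo" \
          --unshare-pid \
          git -C "$HOME/projects/some-repo" status
    # read-only view of the system, an empty tmpfs over $HOME,
    # and read-write access to that one repo only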


>The only reason user accounts exist at all in Desktop OSs is that all of them today were originally server OSs.

> "We managed to share home desktop computers in the 90s without significant problems, even though the OSs we used didn't support multiple user accounts at all."

That's not true. The Windows 9x series had user accounts (with no security between them). This was beneficial to users because computers were expensive (and still are to most people), so personal computers very often weren't personal. Having separate accounts, even without security, allowed individual users to configure the system to their personal preference and helped with file organization.

https://0x0.st/z-aG.png


Those aren't OS-level accounts, they have no (local) permissions system applied to them, they're just profiles. Regardless, I can assure you that basically nobody used them in the 90s.


I can assure you, they got a lot of use in the 90s, and they were used for exactly the same reason people still make separate accounts for their family members on their computers today. It's not about security. It's about keeping separate preferences.

Your hypothesis that XP somehow forced the concept of separate accounts on regular home users because NT was used on servers is just bizarre. People who wanted separate accounts were doing it on 98, and people who didn't simply ignored it and all shared one account. The UX for different family members sharing a single computer by signing into it existed before the NT kernel was in use around the home. The implementation changed when Windows went to NT, but the UX did not. And given that the UX of separate accounts was already appreciated by users, the more robust implementation made possible by NT was a no brainer.

>Mobile OSs got this right: on a personal device,

When it comes to a PC, "personal" is a misnomer. Failure to understand that is the root of your confusion. You are presumably at a place in life where your computer is your computer, not shared with others, like your cell phone. But when it comes to PCs, that perspective is a privileged one. It's evidently not important to you that numerous people be able to use your computer, but it is important to others. The UX of the device that lives in your pocket needs to be different from the UX of a device that sits in the middle of your living room for the whole family to use, like a television.


> I can assure you, they got a lot of use in the 90s

Alright, maybe that's a regional thing or something. I knew of no one who did that.

Regardless, even you admit that it was not about security, so why have user accounts? Simply being able to change the profile is sufficient.

> The UX of the device that lives in your pocket needs to be different from the UX of a device that sits in the middle of your living room for the whole family to use, like a television.

I still contend that this is an incredibly tiny use-case today, precisely because mobile devices have largely supplanted the role the 'family computer' used to serve. More importantly, that use case can be served without user accounts.


You got downvoted, but I mostly agree. I think mandatory strict SELinux rules, containerizing all user programs, or similar approaches are now more important for the threat vectors we face on the desktop today. I just discovered Apple's feature of requiring you to press a couple of keys to authenticate a new keyboard. Doing that on GNU/Linux would be very hard.


That dialog (the "your keyboard cannot be identified" dialog) is about layout detection, not "authentication". IIRC, it's also skippable (but I haven't seen it in a long time; I've mostly switched over to Apple keyboards and they don't trigger that dialog).


Ah, thanks. Either way, a feature much more difficult in the Linux world, unfortunately.


>Year 2015 welcomed us with 134 vulnerabilities in one package alone: WebKitGTK+ WSA-2015-0002. I'm not implying that Linux is worse than Windows/MacOS proprietary/closed software - I'm just saying that the mantra that open source is more secure by definition because everyone can read the code is apparently totally wrong.

Huh? Those 134 vulnerabilities were found because people can see the code. If it were closed source, they would probably still be there today.


Closed source software is not opaque, people study how it works all the time.


I get that some people don't like linux, but some of these examples are just ridiculous.

Linux is administered by SSH, therefore administrators don't know how to check, so therefore they don't bother to update systems because "they're afraid that something will break." C'mon.


That's unfortunately true.


Linux as the open-sourced work of brilliant software developers wouldn't power most servers if it sucked.

But could designing good desktops need more than just good code?

Good kernels successfully run code. Good desktops successfully help users. I guess different goals require different designs?

Edit: to clarify, I didn't mean desktops don't require well-designed software. I just had in mind that a desktop also has to take human psychology and human limitations into account.


Wait, you are conflating some things. A software design is good when it allows all of its parts to be elegant and meet the requirements. In no way does that say that a desktop OS is required to be designed badly.

Linux is powering servers and high-performance computing because it is good at these things: mostly static hardware configuration set up once during system installation, high performance, modularity, and the ability to deeply inspect a running system if you are an expert. It ticks all the boxes for these specific environments.

On the desktop, not so much. For example, the concept of device files is hindering use cases that should "just work". When I plug in USB headphones, a new audio device is created. Fine. But I need to enter the device file name or ALSA device string into half a dozen programs to use it. All I would want is to have the audio rerouted automatically. PulseAudio was touted as the solution to that problem, but at what cost? We're now literally stacking audio systems on top of audio systems and sacrificing to arcane gods to have it work.

When I plug in a USB drive, I now have to look up its device file name in order to mount it manually. The software stack required to automount it from a desktop environment is atrociously complex, because it requires root privileges to mount a device not listed in /etc/fstab with a user flag. And because any number of drives can be connected in any possible order, no entries in fstab can be made.
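(The manual route today, for reference - this assumes udisks2 is installed, which is what most desktop environments use under the hood:)

    lsblk -f                        # find the new device, say /dev/sdb1
    udisksctl mount -b /dev/sdb1    # typically mounts under /run/media/$USER without sudo
    udisksctl unmount -b /dev/sdb1  # before yanking it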

This clash of UNIX-like concepts and modern user expectations is what is holding Linux back. The underpinnings are not bad. They were just designed for a different task.

So, yes, you can build a user friendly OS. Yes, it can have a clean design. But it won't be called Linux anymore.


> This clash of UNIX-like concepts and modern user expectations is what is holding Linux back.

It did not stop OS X from achieving exactly what you're talking about, and if you look closely, its core is very explicit about its Unix underpinnings.

In my opinion it's not a "Unix" problem, but a bazaar/scale problem. In the bazaar world, ideally multiple competing solutions would pop up, ideas would be merged, and one or a few would end up on top. The problem is that implementing even a single good desktop system - and I mean top to bottom, not just a DE - would require a staggering amount of resources under a single unified goal and vision. The Linux desktop market is simply not big enough to support even one, never mind multiple competing systems. In the server space, Linux is absolutely massive and doesn't have this problem.


Just curious, but I had the same issues; it turns out I hadn't installed gvfs and the associated handlers for mounting/unmounting drives. Installing it resolved most of my USB connection issues.


I know that there are solutions (I use KDE, so it works differently there). The point I am trying to make is that UNIX device files were never designed to deal with the dynamic hardware configuration we see today, especially on laptops with peripherals getting plugged in and yanked periodically. And some of the solutions, like gvfs, are overly complex user-space workarounds. A system that accounts for these dynamic usage patterns would have to look different. But it does by no means have to be ugly code. In fact, it would probably be much simpler and more elegant than the current Linux desktop user space.


> Good desktops successfully help users.

I disagree with the phrasing. I'd say that good desktops enable users. One of my big peeves about Linux Desktop culture is that they see users as beneath them. They want to "help" users by wrapping them in straitjackets to keep them from hurting themselves and shining a laser pointer on the wall to entertain them. Have a problem with the Linux Desktop? Well, "normal users" don't do whatever it is you're trying to do, and you're not a C graybeard or you'd fix it yourself, so you don't exist according to their model of the universe.


Sorry, I actually failed to find a better word than "help"; I meant: to offer the best experience to its users. And we humans have wildly different skills and expectations. I still think that user-facing software design needs more psychology on top of good code.


"Good kernels successfully run code. Good desktops successfully help users. I guess different goals require different designs?"

A good desktop needs to run code successfully, otherwise every small bug can become a big, annoying glitch.

I guess it is more a question of optimizing.

The hardcore Linux user mainly uses the terminal and a text editor and kind of despises GUIs. Linux seems to be optimized for them, as they are the most active ones using it. And this use case works perfectly.

GUIs are mostly a thing that was added because of the "newbs" but do not get used so much by the core - so they suck, as the core group are the ones who know how to fix things. This was the situation when I started to explore the Linux world, and some things have changed, but not much.

But Linux's main problems are hardware issues, partly because of Linux's enforced open-source nature, which wants drivers to be open source and included in the kernel. And traditional industry does not like that approach. And given the small market share of the Linux desktop ... they don't really have to.


Good desktops need more than good kernel code.


The only thing about Linux I never really like is the package system.

Like if you want to install new software, you usually don't get a .exe or .dmg. If you are lucky, the developer or some fans took the time to package it. Then you can do 'apt-get', 'yum', or 'pacman'. However, packages go stale and sometimes don't match the original author's intent. You can also build from source, but it takes time and you have to know a bit of CLI. It never felt like true freedom to me. But more like whatever the community feels makes sense for whatever distribution's weird dictatorship. Just a feeling, and I still love and support Linux.


I have the complete opposite experience. Everything I use is nicely packaged in Debian/Ubuntu. At most I need to add a new package source and then I get automatic updates. On Windows and OS X you used to need to manually install a bunch of packages from a bunch of locations, with very dubious security. No wonder they've both implemented app stores.


So strange, as I'm the polar opposite. I vastly prefer having everything update at once and being able to check versions, etc. I use Homebrew and Chocolatey when I'm on macOS or Windows respectively to get that experience back.

I can understand that trying to resolve dependencies can be a royal pain, especially trying to find the distro's specific naming conventions, like libpq-dev vs postgresql-libs.
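(Roughly what that looks like - real tools, commands from memory:)

    brew update && brew upgrade     # macOS, Homebrew
    choco upgrade all -y            # Windows, Chocolatey (elevated shell)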


Between Flatpak and AppImages we have good solutions for both people who prefer package managers as well as people who prefer a single file that can be run on double click.

As far as solutions go, this problem is solved. They just need to be better known by software distributors.
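For example, with Flatpak (Flathub's repo URL and LibreOffice's app ID as published there):

    flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
    flatpak install flathub org.libreoffice.LibreOffice
    flatpak run org.libreoffice.LibreOffice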


Flatpak is over-engineered garbage. You still can't even install things on different disks without setting up an entirely new 'installation' or whatever they call it. And you still need a repo. Meanwhile I can trivially make most Windows software work from a USB drive I can carry around between computers.

AppImage can do that too, since it is a lot less over-engineered, but sadly very few developers use AppImage, and even distributions like Nitrux that claim to support AppImages don't display icons for them. This could be trivially solved with a standard for embedding icons in ELF, but the unix world hates the very concept of a program that isn't spread all over the file hierarchy and isn't full of hard-coded paths, so it'll never happen.


I'm surprised to find that you have good luck with that; last time I looked, it was a nightmare for anything that didn't explicitly support portable mode because everything expects to save stuff into the registry.


There are several tools to help with that, like the PortableApps.com launcher, JauntePE, or even Cameyo. Programs that just use config files can usually be dealt with without those kinds of wrappers by changing an environment variable. But if you don't mind not bringing the config with you (and it often isn't an issue in my experience), you can usually just drop the folder on the drive and you're done.


> Flatpak is over-engineered garbage.

But, it works, especially for the use case of the "I need the latest and greatest versions of two packages".

> AppImage can do that too, since it is a lot less over-engineered, but sadly very few developers use AppImage and it even distributions like Nitriux that claim to support AppImages don't display icons for them.

So, use one of the other 20 distros where AppImages just work, perfectly, out of the box.


Ignoring the issue with the oft-used fallback of distro roulette as a Linux evangelist defense mechanism, it isn't that they don't work, it's that their icons aren't displayed. There is not a single distro, including the ones that actually supposedly embrace AppImage, which displays AppImage icons correctly.


I agree with you.

The OS should give you only some essential end user applications (a bare-bones text editor, a terminal, a browser) and then the user should get their specific use applications (DAW, 3D modelling software, game engine, etc) from the application developer, not the OS maker.


We appear to be a dying breed. People who don't like walled gardens and middlemen between them and the people who make the tools they use.


There's not necessarily walls in the package manager model, in most cases it's possible to install binaries outside of the package manager.

It's just less convenient.


The same is true of the other walled gardens.


I fail to see how the package management in a Linux distribution could be considered a walled garden. Commonly it's cited as the exact opposite of that (see the table here: [1]). What type of freedom are you missing in something like Debian, where you can edit and recompile the core components of your system at a whim, the system even supporting you through source packages, and where anything outside the package management is just a "git clone" and a compilation away? I really don't know what more anyone could ask for. Certainly we don't want to go back to the 90s way of downloading unsigned .exe files from some random guy's web site and then going hunting for matching DLLs?

[1] https://en.wikipedia.org/wiki/Closed_platform


The freedom to not have to jump through ridiculous hoops like compiling from source just to use an application the distro didn't deem worthy of inclusion in their repo.


> to use an application the distro didn't deem worthy of inclusion in their repo.

How did you come to this conclusion? Did you request the package be included in the distro you use? If so, and it wasn't provided, provide the bug report/feature request link ...


My point is I shouldn't have to beg some third party to include it. The developer made it, I want to run it, there's no need for anyone else to be involved in this process.


So .exe files it is? You can do that on Linux as well, by the way, it's just a horribly insecure way to distribute software.


I keep hearing that, and it is of course possible, but the point is that hardly anyone does it. That's what makes it meaningless that it can be done.

And it isn't insecure. If I trust the developer and get the software from them, it's just as good as trusting a repo maintained by random internets who have been known to not only not keep software in the repo up to date, but actually introduce vulnerabilities that weren't there before!


> The only thing about Linux I never really like is the package system.

I think you just haven't used it enough to understand the advantages. If you don't need the latest-and-greatest released-last-week versions, the package system is much more efficient than individually downloading hundreds of packages.

While Mac OS X has homebrew, it is still deficient in my opinion compared to most distros (because casks don't get upgraded by default).

> Like if you want to install new software, you usually don't get a .exe or .dmg.

Why would I want either? Both have lots of issues. For example, by default:

* No auto-update

* Duplication of libraries and other files I already have

* Spotty updates (e.g. who can be sure whether all libraries used have been patched by the latest version you have)?

> If you are lucky, the developer or some fans took the time to package it.

This is the case for > 99% of the software I use, even obscure stuff. For the other < 1%, I package it for the distro I use, and submit it so it is available by default in future releases.

These days, many popular packages also provide .appimage files (similar to .app files on Mac OS X) or publish Flatpak's (including Slack, Spotify, VS Code, Skype etc. etc.), and these can be used on any recent distro.

> Then you can do 'apt-get', 'yum', or 'pacman'. However, packages go stale and sometimes don't match the original author's intent.

Sometimes the original author doesn't know best ... in most cases packagers upstream their changes or discuss them with upstream.

> You can also build from source, but it takes time and you have to know a bit of CLI. It never felt like true freedom to me. But more like whatever the community feels makes sense for whatever distribution's weird dictatorship. Just a feeling, and I still love and support Linux.

It seems like you never exercised your ability to vote, and think that everyone else is dictating to you ...


Here is what makes Linux work well for me: only run LTS versions. Never build your own kernel. Never install anything that isn't in the package manager.


IS good soviet citizen, never uses software not approved by party.


>Never install anything that isn't in the package manager.

If I wanted to not get any work done I would be using Windows instead.


You clearly have never used Linux in any form. There are multiple package managers, granted.

However, one strength they all share is: "you usually don't get .exe or .dmg"! Absolutely! Apps are integrated and not simply add-ons as they are in Windows or Apple land. When I want to install say libreoffice or wireshark I simply ask the system to install them. I absolutely do not browse the internet and download something, extract it and run some "installer". When I update my system, all apps and the OS are updated in one go.

My system is curated for me, end to end, to a greater or lesser extent. When I update, all my system is updated - OS, apps and all.

I don't think yours is (whatever it is).


> You clearly have never used Linux in any form.

Can you please edit out personal swipes from your comments here? This one would be fine without that first sentence.

https://news.ycombinator.com/newsguidelines.html


If you need the latest version of Wireshark, it's getting complicated. And when you have a setup with Wireshark that works, why update it? I am dubious that Wireshark can be an attack vector, so security updates won't be useful. Installing the latest versions is very straightforward on OS X or Windows: you just go to the website and download it. Plus you get the original binaries, not a version doctored to fit whatever launcher and log organization is currently trending at Ubuntu HQ. Don't get me wrong, I still love my Ubuntus. But this "we need to rewrite all software to fit our distribution" approach is not a strength. I think this line of thinking is kind of shared by Linus himself.


Going to dozens or even hundreds of different websites to download the latest versions of software and then manually installing them all isn’t fast at all.

c.f. apt-get upgrade or similar, which takes a few seconds.


>c.f. apt-get upgrade or similar, which takes a few seconds.

Except when the package isn't included, or it isn't the version you needed, because then you have to spend 40 minutes trying to install all the dependencies and building the package from source, instead of the 3 minutes it would have taken to install an .exe on Windows.


> because then you have to spend 40 minutes trying to install all the dependencies and building the package from source, instead of the 3 minutes it would have taken to install an .exe on Windows.

No, you take 2 minutes to install it using flatpak, or download a .appimage file.

And if neither of those are available, you spend that 40 minutes packaging the software, and submitting it to the distro you use for inclusion.

(If it was already packaged, but not new enough, that's a ~5 min job to do the update for your distro)


> And when you have a setup with Wireshark that works why update it? I am dubious that Wireshark can be an attack vector so security updates won’t be useful.

Ignoring feature enhancements and bug fixes for a moment, do you really think it improbable that there are security issues in a piece of software whose entire job is to sit on the network and record everything that it sees and then translate and interpret it?


If an evil app is already communicating on the web and is already on your computer, what more can it gain?


Because the "evil app" is just a specially crafted JPG that your browser opened up that triggers some edge case in the old unpatched version of Wireshark that is inspecting traffic.


Wireshark (usually) runs in promiscuous mode; it'd be "an evil app is already able to send something visible from your network interface" - which to be fair, might help for internal uses. It does usually mean that anything on your local network can attack you.


You have a very limited imagination. Wireshark is regularly attacked -- it parses random data it gets off the network, therefore it can be attacked using deliberately malformed packets. See https://www.wireshark.org/security/ for more info.


Every single one of these vulnerabilities is a crash or an infinite loop. The worst thing you can do - if I am running a 5-year-old Wireshark - is make it crash while I am visiting one of your web services. And you probably can already do that in newer versions if you dig enough.


> When I update, all my system is updated - OS, apps and all.

Except for when the distro "gods" don't bother updating a package for years, so you always get the "stable" 5-year old version.


And when the distro decides you have to give up some software because of a conflict it can't handle.


Most websites also have RPMs and .deb packages.


Please give me an example. The Linux kernel is about a week old on this laptop. The current version of Windows Server is called 2019; the last one was 2016 (rofl).


But you have to live under the wing of your distro maintainers, instead of installing whatever you want from any third party. Isn't Linux supposed to be about freedom?

Yes, you can install software using tarballs, but it's not usable for 90% of users, and not because of the distribution model, but because of the lack of standardization around a good, easy-to-use application-installing API.


I get the base OS along with a very long list of apps with any distro. For example Arch, Gentoo, Debian, Ubuntu, Fedora, SUSE, etc. - all will curate a lot of apps for me. If I want to go off piste I download the code and crack on. I have several compilers to choose from - LLVM, GCC, et al. - out of the box.

Tarball installs are pretty much the state of Windows app installations. You have to find the bloody things, download them each time, and hope you have found the right one and not a trojaned one. Each one needs its own update routine and will not be updated when the rest of the system is updated.

The Windows and Apple software distribution model is archaic compared to all Linux/BSD etc distros.


>tarball installs is pretty much close to the state of Windows app installations.

Except that they're hard to install for a novice user.

>You have to find the bloody things, download them each time and hope you have found the right one and not a trojaned one. Each one needs its own update routine and will not be updated when the rest of the system is updated.

None of those are problems for me; the actual problems are installing all the dependencies and then executing the make commands. Both of those could be solved with a well-designed API.

>The Windows and Apple software distribution model is archaic compared to all Linux/BSD etc distros.

The Windows and Apple software distribution model works for all my use cases; the Linux model doesn't (e.g. if I want many versions of the same package, if I want a package that isn't included in the repos, etc.).

Also, from a philosophical and aesthetic point of view, the Windows & Mac distribution model is better: you get your OS from the OS developer, and you get your specific-use applications from the developer of said application. You are not dependent on a single entity that supposedly knows what you need better than you.


> You are not dependent on a single entity that supposedly knows what you need better than you.

This right here is what Linux Desktop doesn't seem to understand. The whole culture is ingrained with the attitude that they do, in fact, know what you need better than you.


> But you have to live under the wing of your distro mantainers, instead of installing whatever you want from any third party. Isn't Linux supposed to be about freedom?

You don't. Yes, but if you don't take advantage of the freedom, that's not freedom's fault.

> Yes, you can install software using tarballs, but it's not usable for 90% of users, and not because of the distribution model, but because of the lack of standardization in a good, easy to use application-installing API.

See Flatpak. Go look at what software is available at https://flathub.org/apps . Both GNOME and KDE have application managers that can install from Flatpak repos (configured to use Flathub by default), and possibly from the distro's native package manager as well (via PackageKit).

Or you can also use AppImage files.


"you usually don't get .exe or .dmg"! Absolutely! Apps are integrated and not simply add-ons as they are in Windows or Apple land. When I want to install say libreoffice or wireshark I simply ask the system to install them. I absolutely do not browse the internet and download something, extract it and run some "installer". When I update my system, all apps and the OS are updated in one go."

I think a lot of this is learned, or conditioned, behavior ... there are a great many UNIX packages that could very easily (and very nicely and conveniently) exist as statically linked, single-file executables.

This is how a lot of software distribution worked in the 90s - for, say, some Solaris tool or whatever. Granted, the distribution mechanism (some .edu FTP site somewhere) was totally insecure, but the packaging mechanism was great.
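(A sketch of what that looks like today, assuming your toolchain ships static libraries - e.g. glibc's static libs or musl-gcc; the file name is made up:)

    gcc -static -O2 -o mytool mytool.c
    file mytool     # reports "statically linked" - copy the one file anywhere and run it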

