Debian KDE: Right Linux distribution for professional digital painting in 2024 (davidrevoy.com)
306 points by abhinavk 30 days ago | 195 comments



Wayland is flawless for what it claims to do. The issue is that you can't replace X.Org with Wayland alone; you can only use Wayland combined with other software to replace X.Org. This is the biggest issue with Wayland: "Wayland is a replacement for the X11 window system protocol,"[0] but you can't actually replace X11 with it. What they should have done is make sure all the features that X11 had were supported by Wayland. By this I mean user-facing features like copy-paste and color management. (And if someone thinks copy-paste is insecure or whatever, let there be a flag to disable it or something.) But instead they wrote a specification that, when implemented, replaces 90% (let's say) of X11. And since there are multiple implementations of the Wayland server, you cannot rely on the other 10% being present on any given server. If they had not written a specification and had just released X12, or if they had sanctioned only one implementation, or if they had added more features to the specification, then there would be no problem. They still have room to add more features to the specification, which I hope they do.
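You can see the fragmentation for yourself: wayland-info (from the wayland-utils package, if your distro ships it) lists the protocol interfaces the running compositor advertises, and the list differs between compositors:

  # list the globals (protocol interfaces) your compositor advertises
  wayland-info | grep "interface:"

Run that under Mutter and under Sway and diff the output; the unreliable 10% is roughly what differs.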

[0] https://wayland.freedesktop.org/


> copy-paste

When I was talking about switching to Wayland with a colleague who had already moved over at the time, he tried to convince me that having no copy-paste is better for security.

And I, using a password manager that generates unspeakably complex passwords everywhere, was kind of stunned by that short-sightedness.

If you don't want copy and paste, use Qubes OS. That's the philosophy matching the rest of the architecture of the OS.

But as long as you have a binary named "[" on your system, don't try to convince people that you are doing it in the name of security. There is far more rotten fish to throw away before that.

The only thing that recently convinced me to give wayland another go is hyprland. Seeing all kinds of nice WM concepts being implemented in it, maybe hyprland is "the interoperability specification" we needed all along.


On the other hand, I think it'd make total sense to implement an optional mode that blocks programs' ability to read the clipboard until the user explicitly approves them. I understand why some people might not want that (which is why I'd like it to be a setting), but it's always felt a little weird that any desktop program can grab the clipboard at will.

Things like making the clipboard “intelligent” might help too. On macOS there’s a bit of this when copying passwords from the system password manager, where the clipboard is cleared either after paste or after some short period of time to reduce chances of grabby programs pulling it.
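On Wayland you can approximate that by hand today with wl-clipboard; a rough sketch (the 30-second window and $SECRET are placeholders):

  # copy a secret, then wipe the clipboard after 30 seconds
  printf '%s' "$SECRET" | wl-copy
  ( sleep 30; wl-copy --clear ) &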


The X11 primary selection buffer is an even better variant of that though. It allows single-shot copy&paste (meaning only one application can grab it) from the password manager to the target application and it tells the password manager the name of the application that grabbed it.

I think it shouldn't be too hard to hack a dialog into password managers to confirm that the destination is correct before replying to the data request. But even without that, one at least notices when a malicious/wrong application grabs the password.


I feel like there's some other reasonable middle ground too. If you don't want your clipboard to always be cleared, it seems sensible to e.g. only allow the currently active window to read the clipboard.


Yeah, or show a popup with the name of the window that just grabbed your paste buffer.


It makes a huge amount of sense to control what programs can snoop on each other. There won't be a lot of people against the idea in principle. Eventually the Wayland ecosystem will be strictly better than X because at least there is some amount of control over that sort of thing.

The problem that Wayland had is that it took the stance of "just no!", and that means Wayland is now almost an adult (15 years, so a little time to go) and has only just arrived at a place where a Wayland compositor is about as usable as an X11-based system (thanks, PipeWire). Wayland was a mess for so long that I think even in Debian Bullseye swaywm didn't support screen sharing in Zoom (or any other program). For 2022 that was a remarkable gap in capability. I'm all aboard for the Wayland transition because it is a definite improvement, but Wayland's "security" debacle is a legendary case study in design gone wrong.


It's not just security, there's a bit of an over-obsession with generalised future-proofing: useful optional proposed protocols have been shot down because they 'assume windows will be laid out on a screen with xy coordinates'. It's a recipe for committee bikeshedding to run rampant, especially with multiple very opinionated parties involved, any of which can torpedo even an 'accepted' extension by just not implementing it. It's a nightmare of fragmentation.


Preach it! I'm always dumbfounded by the insanity on MS Windows (and nowadays Gnome, too!) that disallows copy-pasting into the UAC prompt. Do they want me to use weak passwords? I don't get it.


What's wrong with having a symlink called '[' for 'test'?


A programming language should not be parsed, let alone lexed, at the filesystem layer. That is what a REPL or an interpreter is for.

Counter question: would you have all C++ macros from libboost as symlinks on the filesystem, too? How would that work? /usr/bin/C++/[ and /usr/bin/bash/[ maybe? What's the difference in operator behavior, given that macros exist?

If you're not seeing what I'm getting at: these types of hacks were the reason for a lot of fuckups and a lot of CVEs in the past, directly or indirectly. I'd even encourage every distro to use busybox at this point, just to have a centralized entry point for your shell scripts.


I have come to love shells with little special syntax, namely fish, so I generally write

  if test -e "$path"
rather than

  if [ -e "$path" ]
in any shell. I'm not particularly a fan of special syntax, let alone external commands pretending to be special syntax.

But restricting commands to specific character sets also seems a bit cheesy to me. It seems more natural that a character is a character. I can see the other side here, though.

I'm curious about those CVEs. Surely any script involving the hijacking of `[` to obfuscate malicious code could also just as well have hijacked `test` (wherever `test` is not a shell built-in), right?

> I'd even encourage every distro to use busybox at this point, just for having a centralized entrypoint for your shell scripts.

For operating system scripts, I guess that's fine. But administrators who are writing their own scripts should choose a shell that's actually pleasant to read and write, and ship an appropriate runtime, including external commands. You can do this today in a portable way with Nix or Guix and a bit of footwork, just as well with bash as with Python.

If you're writing scripts in something other than your daily login shell, you might as well use a non-shell programming language, because you're now losing the advantage of writing in a language that you practice constantly. And BusyBox is not a pleasant daily login shell, so you shouldn't use it for that, either.


> Surely any script involving the hijacking of `[`

It's not about hijacking any particular binary; it's about mutation of state, and having to keep the same parsing state across multiple binaries, which is very bad practice (apart from the fact that USR1/2 and stdout/stdin were not even remotely made for that).

Keep in mind that the architecture we are speaking about was implemented when an OS had less than 20 binaries overall.

Now we have thousands of binaries with absurd levels of complexity, and nobody really knows what's going on anymore.


This still seems very handwavey to me with respect to the actual vulnerabilities. And the only 'split parsing' going on is external commands parsing arguments passed to them. `test` doesn't parse bash or any other shell. `[` is not special, either; it doesn't have to communicate anything back to the invoking shell about how to parse the rest of the program!

Are you categorically against shell scripting (the invocation of external binaries as commands), then?


I'm confused, as every desktop environment and tiling window manager I've tried on Wayland has had copy-paste. There has never been a point in my Wayland experience where I had to even think about this.

You're speaking as though there is someone out there operating without that capability, when my experience has been the opposite. If that was the case you'd hear about it in every single "Wayland is bad" video, instead of the usual focus on nvidia (which I also have no issues with, even using Prime).


Some background here:

https://wiki.gnome.org/Initiatives/Wayland/PrimarySelection

My understanding is that PrimarySelection is not supported by Wayland out of the box. It has to be implemented optionally by Wayland servers. See https://github.com/swaywm/sway/issues/1012 .

Also, IIRC you still can't copy-paste between VMs on Wayland (although that could be outdated by now).
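For what it's worth, where a compositor does implement the primary-selection protocol, wl-clipboard can talk to it:

  # read the middle-click (primary) selection, if the compositor supports it
  wl-paste --primary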


It was a problem in the early days. Some clipboard interop bugs between wayland-native and xwayland apps persisted for quite a while after too, but those have also been solved some time ago now.


> I'm confused, as every desktop environment and tiling window manager I've tried on Wayland has had copy-paste

what's the Wayland equivalent of xclip that works on every WM? e.g. `uuidgen | xclip -selection clipboard`


wl-copy
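(from the wl-clipboard package, so the equivalent of the example above would be:

  uuidgen | wl-copy

and wl-paste to read it back)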


There is (was?) a KeePassXC issue where the auto-fill shortcut wouldn't work due to Wayland limitations... That was one time I remember Wayland not working for my usual flow.


And here we are; these are my two cents and my experience. I have used Linux for years, and I liked the idea of defending something free and open. I absolutely love KDE and its personalization ability; it's just exactly what I want.

Yet, I’m back on Windows, I like well defined screen, I have a 4K another one of lower resolution, to handle that well, I need Wayland, heck KDE just sometimes has less features or less solved bug on X11 now.

And at night I just want to watch one or two things before sleeping; I just need to turn my screen toward my bed and choose one of the suggestions of platforms like YouTube or Netflix with my mouse. Yet sometimes I like to type something; it's usually short, so I just use an on-screen keyboard program, and I'm still waiting to find one that works on Wayland.

This is so dumb. It's crazy to me that we used to flee all the dysfunctions of Windows for the sane Linux ecosystem, only to find that things are starting to look a bit absurd on that side too. And when we look at the discourse of the Wayland devs, they speak exactly like a corporation would.

Add to that a few package management breakages. I think this is also a weak point of the ecosystem: it tends to break, to upgrade everything when you only need one thing, and it's too complex to troubleshoot. We usually think we have the superior technological platform with the Linux kernel, but package management is one of the weak points IMO.

All of that pushed me away from Linux for now. When I encounter something that looks infuriating on Windows, I just remind myself that I might see something like it on Linux too.



For work, Windows is not it for me anymore; it's a constant and distracting cesspool of ads for Microsoft services. In contrast, KDE wins here, by far, and not just because it's not distracting.

As for night time, I don't want or need my computer; I'll grab a tablet and consume all the content I want.

As for package management, let me know when I can type a single command on Windows and have my entire system and applications up-to-date.


I can only agree. Windows with its forced restarts, ads, and little bugs and lags everywhere makes it a less suited platform than Linux for work.

I can only lament that it’s kind of the opposite for personal usage.


> Yet sometimes I like to type something; it's usually short, so I just use an on-screen keyboard program, and I'm still waiting to find one that works on Wayland.

KDE Connect does: https://kdeconnect.kde.org/

You can turn your phone into a trackpad/remote keyboard, as long as your desktop is running the host software. Works just fine on Wayland, in my experience.

Overall I think people are right to blame Wayland for asking developers to reinvent the wheel, but wrong to defend the first-draft nature of x11. In a truly reductive sense, Linux never really had a desktop that worked. It was a desktop server that got hacked into usability by a lot of contributors, who ended up building a big unusable monolith. Naturally, a solution that is neither big nor monolithic is bound to make people angry.

But, I think we're past the point of lamenting x11's death. It was meant to be this way; Microsoft built Desktop Window Manager and Apple rolled out Quartz, and sticking with x11 just didn't make competitive sense. Wayland's "big problem" is that it asks desktop developers to go the extra mile, and I don't really think that's a bad thing to ask. Linux didn't need taller, more fragile software stacks; it needs more thoughtful integration and actual diversity in implementation. It's not a coincidence that modern products like the Steam Deck practically rely on Wayland to deliver such a customized experience.


Thanks for the tip, I’ve already used KDE connect even for iOS-Windows communication, it’s just great.

I think people fall back to X11 because it's the only thing that worked for a broad set of features. Sure, Wayland is making progress, but it seems there are few resources targeting it, not just implementation but even standardizing some things, and we're 15 years in. The Linux kernel itself had already reached its infamous 2.6 version at a similar age.

Sure, proprietary desktops have kind of shown the way. On one side, it's good to have standards clear/clean enough that they can be easily implemented; on the other side, it can be especially resource-intensive to redevelop the things that are the core differentiators of the desktop environment you might be developing.

I do wonder and worry whether we will ever again see something like Compiz Fusion, the diagonal-screen trick shown this year, or anything else like that. Designing standard APIs and inter-app communication while keeping things customizable is in no way easy, and I feel like Wayland has absolutely not figured out how to articulate all of those things.


It's going to be a pain-point for a while. I invite you to look on the bright side, though; the past 10 years of Wayland was as bad as it will ever get. We live in wonderful times, where Nvidia/Wayland setups are actually stable; this is stuff people thought would never get fixed 10 years ago, but now we're starting to see the light at the end of the tunnel. There's still work to do, but I think we're passing the point where Wayland has more features than it lacks.

x11 has a place in my heart, I loved many of its apps (shoutouts to xsnow) and cherished the wildly bloated featureset. But damn, it was broken. MacOS had a pretty terrible compositor for a while, but once you booted up Quartz with double-buffered V-sync (imagine, back in 2005) you would already know x11 was finished. Wayland was the inevitably long-winded response from the open source community, and while it languished for a long time it's finally quite usable.

Nobody is going to stop you from using x11, or maintaining it yourself if it comes down to it. The philosophy of the matter is decided, though; smaller featuresets are more secure and easier to implement. Especially since the advent of smartphones, I feel like the idea of an x11-native desktop metaphor has been nonsense. Yes, the GNOME pundits push this point pretty extremely, but there's a kernel of truth to it. We really do need more flexible desktop architectures if we want Linux to be a commercial-quality product. x11 is holding it back.


I don't get why parts of the Linux community are so resistant to embracing AppImages and providing first-class integration for them. Decent desktop integration isn't that hard.

As a user I want to download the thing and run it. AppImages provide that.

As an app developer I want to create one single package that works everywhere that users can just download. AppImages provide that.

Snap and Flatpaks are solving problems that don't need solving for most people. Shitty sandboxing that doesn't even work and makes my app slower? I don't want it.

Most software is best installed by the native package manager. For the few exceptions, AppImages are perfect.


Personally I dislike AppImages, Snap, Flatpak, Docker, etc. for one main reason:

If an app is so hard to distribute in any other way, that to me is a red flag that the app is not up to my quality standards or otherwise violates my sensibilities in some way. On my Linux desktop, I am very much in the camp of "that which exists without my knowledge exists without my consent".

(I fully recognize that I am an extreme outlier in general, and perhaps a slight outlier among Linux users. Just offering one perspective; I make no claims that this is the correct perspective for most Linux users.)


> If an app is so hard to distribute in any other way, that to me is a red flag that the app is not up to my quality standards or otherwise violates my sensibilities in some way.

I have run across the occasional application that violates both my quality standards and sensibilities, yet I find indispensable. An example here is the e-reader software KOReader. It makes zero sense as a desktop application since it is designed to run on dedicated tablet hardware with e-ink screens. It is not packaged by many distributions, likely because few people would be interested in maintaining such a package.

So why would I want to use it on a desktop? Because the breadth and depth of features are unmatched. In my case, I am willing to put up with a quirky black and white touch based interface[1] in order to have access to those features on my laptop. So I use the AppImage.

While I dislike the mentioned distribution formats for the reasons you mentioned, some software is so wonderful[2] that its warts should be ignored.

[1] It does have keyboard controls. In verifying a couple of details for this post, I also discovered that it can be controlled from a gamepad. Connecting my laptop to a television and sitting back to read a book with only a gamepad in hand is something that I am going to have to try one day.

[2] And sometimes that wonderfulness extends beyond features. The Lua-based portions of KOReader are sufficiently clear that I was able to create a profile for an e-reader (tablet) that is so new to the market that it isn't yet supported in the release version (the e-reader is only about a month old, while release versions of the software come out every four to eight weeks).


Yep, that is super reasonable. I also will use them if that is my only option. So in that sense I guess I am thankful they exist; I just don't want them to become the default or even mainstream.


"Quality standards" like using whatever old version of library the distro provides. And yes it's a madhouse both for app developers and for distro developers

This problem is exacerbated by things like the Canonical interview process where your hiring is intrinsically tied to having drank the whole koolaid and thinking it's the best thing around

We need a linux distro made by people who hate linux. People who buy no excuses. Maybe then things will work


What is stopping you?


$$$ and knowing that desktop distros are basically a loss leader, and basically nobody managed to profit from them (not even RH).

Android only managed to get where it got by keeping the kernel and ditching most of linux userspace


Mint seems to make solid money.

But if you want other people to do your desired work for free you might have to work on your messaging.


Do they? They rely on sponsorship https://linuxmint.com/sponsors.php

Not sure about the full financials and I'm not even sure they divulge it


AppImage is a good distribution format, but IMO it is not comparable to your system's package manager, or Flatpak for that matter. For starters, when you download an AppImage, you are just getting the binary. Documentation, desktop and service files, and update tracking are all things missing from a vanilla AppImage deployment that your system package manager always provides (Flatpak and Snap only handle some of those, sometimes).

The missing piece is perhaps some sort of AppImage installer which can be registered as the handler for the AppImage filetype. When run, it could read metadata (packaged as part of the AppImage?) and generate the support files. Ideally it would also maintain a database of changes to be rolled back on uninstall, and provide a PackageKit or AppStream provider to manage updates with your DE's store.

Now, none of that addresses dependency duplication, but that's clearly not in scope for AppImage.
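For a taste of what such an installer would automate, the integration can already be done by hand; type-2 AppImages can extract their own contents, so a rough sketch (file names illustrative):

  # unpack the AppImage's embedded files into ./squashfs-root
  chmod +x MyApp.AppImage
  ./MyApp.AppImage --appimage-extract
  # install the desktop entry so the app shows up in menus
  cp squashfs-root/*.desktop ~/.local/share/applications/
  # (you would still need to fix the Exec= line to point at the AppImage itself)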


I believe there is a tool called appimagelauncher that does just that.


How big of a problem is dependency duplication on 1TB drives?


The issue is memory. Every library in an AppImage has a unique memory space, so you have a bunch of copies of sometimes very large libraries sitting in memory, rather than one copy mmapped from disk with its pages shared all over the place.


Linux has had support for deduplicating identical pages for a long time now (KSM, Kernel Samepage Merging, if I'm remembering the name right), so essentially this duplication is a solved problem.


That's only the case if the loaded libraries are identical. It won't work with slightly different versions of the same library (unless the differences are small and replacement-only, so the pages remain aligned between versions), and that case is very unlikely to be solvable.


The parent comment doesn't talk about different versions and I wasn't either.


For different minor versions and builds of libraries?


This is basic paging and CoW (Copy on Write) behaviour. I agree, it's mostly a non-issue


Could be big, depending on how much room you give to /. All my Linux life, I have allocated about 50GB to the root partition and it's been adequate, leaving enough room for my data (on a 512GB drive). Now I install one flatpak and I start getting low disk space warnings.
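If anyone else hits that, it's worth checking how much of the space is runtimes nothing references anymore:

  du -sh /var/lib/flatpak ~/.local/share/flatpak   # system vs. user installs
  flatpak uninstall --unused                       # drop orphaned runtimes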


That's also a big reason why I prefer appimages.

ossia score's AppImage is 100 megabytes: https://github.com/ossia/score/releases/tag/v3.2.0

Inside, there's:

- Qt 6 (core, widgets, gui, network, qml, qtquick, serial port, websockets and a few others) and all its dependencies excluding xcb (so freetype, harfbuzz, etc. which I build with fairly more recent versions than many distros provide)

- ffmpeg 6

- libllvm & libclang

- the faust compiler (https://faust.grame.fr)

- many random protocol & hardware bindings / implementations and their dependencies (sdl)

- portaudio

- ysfx

with Flatpak I'd be looking at telling my users to install a couple GB (which is not acceptable, I was already getting comments that "60 MB are too much" when it was 60 MB a few years ago).


How do you manage software with large assets? A single game can take 10-100GB of storage.


My steam library is on another drive (I have multiple O(TBs) large spinning-rust drives on the desktop). The nvme is strictly the base system + apt packages + build tools + /home/.


I also shun the snap bullshit. But TBH I haven't divided my disks into more than one partition (except from /boot and EFI stuff) for many many years now.


In addition to memory, there's the ability to patch a libz buffer overflow once and be reasonably sure you don't have any stale vulnerable copies still in use.


> I don't get why parts of the Linux community are so resistant to embracing AppImages and providing first-class integration for them. Decent desktop integration isn't that hard.

> As a user I want to download the thing and run it. AppImages provide that.

Because Linux userspace libraries aren't designed to handle long term binary compatibility. There is no guarantee that a simple package upgrade or a simple distro version upgrade will not break things.

There is no guarantee that an Appimage will continue working 3 months later. If it relies on web communication, there is no guarantee that the application will stay secure since you have to bundle core system libraries like OpenSSL with your application (unlike Windows and MacOS).

I will even go as far as to say that GNU stuff especially is made specifically to make reasonable (IMO at least 5 years) binary-only, i.e. closed-source-friendly, software distribution hard.

It is the culture adopted by all middle-layer libraries and desktop environments too. The only supported form is source, and every piece of software in the Linux world assumes it is built as part of an entire distro.

That's why Snap and Flatpak actually install a common standardized base distro on top of your distro or why Docker in its current form exists (basically packaging and running entire distro filesystems).

The only way to get around it is basically recreating and re-engineering the entire Linux userspace as we know it. Nobody wants to do that.

Creating long-term stable APIs that allow tweaking is very difficult and requires lots of experience in designing complex software. Even then you fail here and there and are forced to support multiple legacy APIs. Nobody will do that unless they are both very intelligent and paid well (at the same level as Apple, Microsoft, or Android engineers). It is not fun and rewarding most of the time.


> That's why Snap and Flatpak actually install a common standardized base distro on top of your distro or why Docker in its current form exists (basically packaging and running entire distro filesystems).

And neither method really works for the desktop use case, because one expects things to actually integrate with the desktop, and that often requires IPC, not just dynamic libraries. So if you bundle an entire filesystem with all libraries, you've made things WORSE. Accessibility & IBus will break, almost guaranteed, every other release...


> There is no guarantee that an Appimage will continue working 3 months later.

that's somewhat of an exaggeration. Here's an AppImage I built 7 years ago which still runs on my up-to-date Arch Linux. If you follow the AppImage guide it will work pretty much without issue.

https://github.com/ossia/score/releases/tag/v1.0.0-b32


> Because Linux userspace libraries aren't designed to handle long term binary compatibility.

The kernel, however, is. So how about statically linking (almost) all the things?
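For the subset of software where it works, that's already practical; a minimal sketch with musl (the musl-gcc wrapper ships with musl's dev package on most distros):

  # build a fully static binary against musl instead of glibc
  musl-gcc -static -O2 hello.c -o hello
  file hello    # reports "statically linked"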


You cannot at the moment with GNU stuff, since glibc relies on a plugin system to load things like DNS and user management at runtime, because that enables stuff like LDAP. OpenSSL also relies on the dynamic library infrastructure.

The OpenGL drivers are similarly dynamically loaded by libglvnd at runtime. Otherwise universal graphics drivers won't work; it would be going back to the bad old days of booting into black monitors when changing GPUs and then trying to figure out drivers in the TTY.

High-performance data transfers like real-time audio, graphics buffers, camera data etc. still have to use the lowest level possible, i.e. shared memory. Dynamic libraries really help in having simpler APIs for those.

And then there is the update problem. If all programs are statically linked, a single update will easily reach gigabytes per upgrade for each CVE etc., and the distro maintainers have to be extremely careful not to miss upgrading a dependency.


I likewise love AppImages and wish more projects used them, but I also love Flatpak. The downside of Flatpak is the overhead it takes to learn what it does, how it works, and how to manage them. If you already know container stuff like docker/podman then it isn't too bad, but it's a non-zero cost and friction.

I think most people don't like AppImages mostly because they don't provide any sandboxing. I think that's a silly reason myself, but I'm also an old, and us olds aren't terrified of using our computers like the youngs seem to be ;-). Though, other OSes are getting sandboxing for applications and Linux needs to not get left behind, so I'm glad it's being solved.

I think fragmentation is also a (valid) reason people dislike AppImage. There's nothing wrong with AppImage specifically, but its existence harms adoption of Flatpak by making it easy for people not to use Flatpak. Personally I see them occupying different niches. I use AppImages for things like Kdenlive, Logseq, Upscayl, and UHK Agent. Those could all be Flatpaks, but developer effort matters. If devs provide an AppImage build I think we should be praising them from the rooftops for caring about Linux!

Another reason is that it clashes with immutable OSes like Fedora Silverblue or SteamOS that are heavily container-based.

What I'd love to see is a tool that takes an App Image and automatically builds it into a Flatpak (possibly with predefined metadata). Flathub could easily be populated this way so that it's easy for developers/distributors to package and ship, but also Flatpak is the standard.


Unpopular opinion maybe, but I think sandboxing and app packaging/distribution should be entirely separate components so the user can mix and match the two freely. Combining both into a single “solution” makes for inconsistency and trouble.


Flatpak uses bubblewrap, and you can use that separately (directly or through ex. bubblejail).
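A minimal sketch of driving bubblewrap directly (paths illustrative; bwrap has many more options):

  # run a shell that sees a read-only /usr, a fresh /tmp, and no network
  bwrap --ro-bind /usr /usr \
        --symlink usr/lib64 /lib64 --symlink usr/bin /bin \
        --proc /proc --dev /dev --tmpfs /tmp \
        --unshare-net \
        bash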


> Though, other OSes are getting sandboxing for applications and Linux needs to not get left behind, so I'm glad it's being solved.

it should have been very telling to the Linux community that Microsoft, with all its mighty billions of dollars, wasn't able to force sandboxing on users with WinRT and had to backtrack and allow Win32 apps again. Hell, people moved to Linux because MS tried that.


That's not because of the sandboxing, that's because WinRT required UWP. Microsoft since introduced Win32 sandboxing, which is how the Microsoft Store, most "inbox" apps on Windows, and Game Pass games work.

But sandboxing isn't a requirement on Linux; it's an optional distribution method. When you look at a console like the Steam Deck, that system practically requires you to use Flatpaks for third-party software because the OS is an immutable image. If you install anything through pacman, it will be wiped on the next OS upgrade.


> Though, other OSes are getting sandboxing for applications and Linux needs to not get left behind, so I'm glad it's being solved.

It is an unquestionable mistake to conflate packaging/distribution and runtime sandboxing into a single solution. These are different problems that fundamentally have nothing to do with each other. There are great solutions for sandboxing that don't care how an application is installed, and solutions for packaging that don't force a specific sandboxing solution on you. This failure to separate concerns is one of the primary disadvantages of Flatpak.


> This failure to separate concerns is one of the primary disadvantages of Flatpak.

I would agree with you if there wasn't already RPM, debs, etc. But also given how flatpak is implemented on top of containers, it's basically "sandboxed by default" and any unsandboxing involves exposing/mounting things in from the host into the container. You can "unsandbox" any flatpak you want using flatseal (or if you know how, flatpak directly), so I think flatpak is actually pretty good in this regard.


> I would agree with you if there wasn't already RPM, debs, etc.

No, these are purely packaging solutions that do not conflate their purpose with sandboxing.

> But also given how flatpak is implemented on top of containers,

Yes, this is the problem I'm referring to.


Greetings fellow UHK user, the best keyboard.


Truly, it has forever made me dislike all other keyboards :-)

The layers (especially the mouse layer) are so genius I can't believe they aren't more common


The Linux community is self-selected and highly opinionated on everything.

Concerning AppImages: I have no interest at all in them, and although Flatpak still has its share of problems, basically all important communities have adopted it (except Ubuntu).

Flatpak integrates in your system, has sandboxes, has automatic updates, shares dependencies etc.

Is Flatpak perfect or running w/o problems? Certainly not.

But IMHO AppImages add nothing over Flatpak, and they lack the unified infrastructure and integration into package managers etc.

We all would benefit from agreeing on one standard, and by now it looks like Flatpak is that one standard. So, I don't want to download random AppImages from the internet, I want a certified Flatpak which integrates with my system.

(Having been a member of the Linux community for longer than most people using Linux today have been alive: of course, inevitably, once Flatpak is working and established, it will be replaced by some broken other solution. :-P)


IMO AppImage fell short by not requiring update support in all AppImages. I don't want to have to monitor random sites for updates to my applications, so after trying to use them for a time I moved on. Currently I'm experimenting with both flox and Flatpaks, as they both handle this aspect reasonably.


> I don't get why

Loudly declaring your ignorance is not an opinion. If you really cared, you would go look at the large volume of well-thought-out complaints. But you don't.


> I don't get why parts of the Linux community are so resistant to embracing AppImages and providing first-class integration for them. Decent desktop integration isn't that hard.

AppImages are good for certain use cases. Recreational vehicles are also good for certain use cases. But just as I don't live in an RV parked in my garage when I'm at home, I don't use AppImages as my primary way of running locally installed software on my main system.

> As an app developer I want to create one single package that works everywhere that users can just download.

You should just release source tarballs (or static binaries if you're not FOSS), and let the distros do their job of packaging and distributing the software. AppImages are fine to maintain in parallel for specific use cases, but I just want normal software binaries running in the OS environment I've already set up to my liking, not embedded in someone else's encapsulated runtime environment that's inconsistent in dozens of ways from my system config.

> Most software is best installed by the native package manager. For the few exceptions, AppImages are perfect.

Exactly. But static builds are far superior to AppImage.


As a user I don't want to deal with updating all my software individually. AppImages also provide that. I would be perfectly fine with AppImage format packages being shipped through my package manager, but I don't want to download isolated software from the web when I can avoid it.

FWIW my package manager of choice is GNU Guix, which solves all of the same problems as snap and flatpak in a much more elegant way.


While I'd love to see good tooling for AppImages appear, it's just not made for it.

The fundamental problem is that AppImages are literally just an archive with a bunch of files in them, including libraries and other expected system files. These files have to be selected by the developer. It's really hard to tell which libraries can be expected to exist on every distribution, which libraries you can bundle and which ones you absolutely cannot due to dependence on some system component you can't bundle either, or things like mesa/graphics drivers. There's tools to help developers with this, "linuxdeploy" is one, but they're not perfect. Every single AppImage tutorial will tell you: Test the AppImage on every distribution you intend this to run on.

At the end of the day, this situation burdens both the developer (have I tested every distribution my users will use? both LTS and non-LTS?) as well as the user (what is this weird error? why isn't it working on my system?), and even if this all somehow works, newer versions of distributions are not guaranteed to work.

Flatpak, for all its bells and whistles, at least provides one universal guarantee: Whatever the developer tests, is exactly what the user will experience. I think this is a problem that needed solving for many people.

...it hurts a lot to say this as a longtime flatpak avoider, still always prefer distribution packaging, but I've come to accept that there's a genuine utility to flatpak, if only to cover for letting users test different versions of software easily, and similar situations that distributions just cannot facilitate no matter how fancy their package manager.


Flatpak can also fail to work on some distros.


What's the update story for AppImages? AFAIU you have to download updates by hand?


Yes. Things that are critical to update regularly should be handled by the package manager.

I use AppImages when I want a specific version of a specific app. I never want them to be automatically updated, because doing so might introduce breaking changes that could mess with my workflow. Imagine complex software like Blender or Krita updating automatically while you're working on a specific project, possibly even breaking your saves. Absolute horror.


Updating AppImages is solved in a few ways: https://docs.appimage.org/packaging-guide/optional/updates.h...
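e.g. with the AppImageUpdate tooling, if the packager embedded update information (app name illustrative):

  # applies a binary-delta update using the AppImage's embedded update info
  appimageupdatetool ./MyApp.AppImage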


There aren't centralized solutions that I know of, but there are projects such as AppImageLauncher[1] that can provide some automated management of that. It's not perfect but it is helpful.

[1]: https://github.com/TheAssassin/AppImageLauncher


My Arduino AppImage self-updates - so I know it's possible


>> Snap and Flatpaks are solving problems that don't need solving for most people. Shitty sandboxing that doesn't even work and makes my app slower? I don't want it.

After using Ubuntu for years, this change made myself and many others I know switch. I've been on MX for the past two years and love it.


what is MX?
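

https://mxlinux.org/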



That website conveys very little useful information for understanding what makes this distro unique. I don't know what antiX or Mepis are. Why would I choose this over a more well known and presumably better supported distro?


It has more information on its front page about the distro than the Debian website has about Debian. I guess Debian shouldn't be used either.

Also, antiX has a hyperlink; clicking it would have answered your question as to what antiX is.


> As a user I want to download the thing and run it. AppImages provide that.

Yes. And no.

About 70-80% of AppImages I downloaded in the past didn't even start. 100% of my installed Flatpaks have been running fine on any distro I tried. I would love a world where Flatpak was redundant, but it very clearly isn't.


I'm curious, could you try this one and tell me if it starts? So far it works in all the mainstream distros I could try, but if there's someone out there who cannot open it on an OS less than a decade old, I want to make sure I can fix that: https://github.com/ossia/score/releases/download/v3.2.0/ossi...


AppImages don't solve the discoverability or upgrade problem.

I'm on an immutable distribution (Fedora Kinoite) so installing native packages is discouraged. I have everything running in Flatpaks, even Steam and all games. I haven't experienced any performance impact. I think Snaps do something weird with squashfs, but Flatpaks just set up and manage symlinks to dependencies via ostree.


I disagree with you in that only system software should be installed by the native package manager. Everything else should be AppImages.


No thanks. Everything should be installed via the operating system package manager.


What is "system software"?


The stuff that gets your desktop started up into a usable state. Same as on any OS, macOS or Windows. The software that is expected to be there on a given version of the OS, that other software may depend on being at some particular (major, at least) version until there’s a new version of the OS.

The stuff that updates when you update the OS.

On macOS, the software that’s not an optional package manageable through the App Store (even if installed by default) or managed by Homebrew or what have you.

You know, the system software.


This is a distinction that should NOT exist. Like on phones, you end up with "software" that is just a glorified web browser, doesn't integrate with the rest of your system, and cannot access your hardware to its full extent.

I.e., if I want to set up a script that makes LibreOffice trigger phone calls through a Bluetooth modem, I should be able to. Otherwise it isn't really a computer. These "system" vs. "non-system" splits almost always end up down this slippery slope, and avoiding it is one of the reasons I enjoy desktop Linux for all its brokenness.


It’s very nice when it exists because you can do whatever you need with user-facing software without risking system stability. Long-term stable versions of basic software, including gui libs, also provides a reliable target for software deployment.


That's the problem. I don't know.

I've made up half a dozen definitions for it before asking, and none of them were a good guide for deciding about the software on my machine. Yours seems to focus on the DE components, which is both way too restrictive (why are `ls` or `test` out?) and way too inclusive (my DE installs with an Earth renderer and a graphing calculator; my workplace's DE installs with CandyCrush).


A universal definition isn’t needed to apply the concept, in the same way the Internet can’t agree on some fixed universal definition for what a sandwich is, yet this doesn’t impair assembling a BLT.

It is in fact applied by organizations, and they manage fine, so lack of a universal definition isn’t a hindrance.

If you want to guarantee certain versions of ls or test are available for the duration of the supported life of an OS, yeah, they’re part of the base system. This kind of arrangement is very nice for both users and software vendors. The base-system instability of an Arch or a Gentoo (rolling release), or the ancient productivity-software packages of a Debian, aren’t the only options—lockstep-release stable base system and rolling release user packages are an option.


>>> Snap and Flatpaks are solving problems that don't need solving for most people. Shitty sandboxing that doesn't even work and makes my app slower? I don't want it.

To me that also raises the issue of Snap/Flatpak and Wayland both handling part of sandboxing/capabilities. Ultimately, if you want to control how your apps can run, you have to resort to lower-level primitives, and you're handling very different things like file access (Snap/Flatpak) and visual elements (Wayland), and maybe going back to the Linux kernel to handle all of that.


AppImages are nice as a user, but it seems the lead dev is intentionally not adding support for Wayland (besides some other odd choices), so it feels like a dead-end technology


In what way is AppImage interacting with the display server in its own right such that it would need to be specifically updated to support Wayland?


I don't know enough about that to tell you something specific


Literally this. One thing both Apple and Windows have that Linux does not is that every piece of software has an easy universal package for its respective platform. On Linux, if it's even in your main package manager, it's whatever was "stable" in 2022 or whenever the last major version of Debian/Ubuntu came out, and that's all you get. It's annoying to no end. I now just download an AppImage every chance I get.


Unless something has drastically changed since I switched to Linux around 10 years ago, Windows did not at all have a "universal package". Instead, installing software meant manually downloading an installer from the vendor's website and then manually interacting with that installer through a GUI. Windows installers come in a variety of different (even custom written) formats which essentially make it impossible to automate package management in a universal way.


Back then Windows had already blessed Windows Installer packages (.msi), which could be installed unattended from the CLI. But, to be fair, many companies still preferred to use other installer tools, including sometimes Microsoft themselves (e.g. ClickOnce).

But there have been improvements. The winget CLI can now install and update several well-known applications, and even upgrade them when you first downloaded them some other way. There's also the MSIX package format, which is much closer to distro packages or mobile apps, can auto-update without a central repository, and supports sandboxing.
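For instance:

  winget upgrade --all

gets you reasonably close to a distro-style full upgrade, at least for everything winget knows about.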

Nowadays, even if still a bit hacky, I definitely consider Windows packaging less painful than the mess that is Linux distribution. Packages in distro repositories are regularly outdated, and every other package is installed misconfigured or has been patched in unclear ways by the distro maintainers; then there's Flatpak, which makes for gigantic installs with clunky sandboxing, or AppImages.


my biggest point being that Linux package managers carry out-of-date packages, unless you want to use Arch Linux or similar.


Stop using Debian-based distros; this is a deliberate choice on their part. Use a rolling release like Fedora.


Use Debian Testing. This is the _rolling_ release from the Debian family.

And don't be fooled by the name of the release: "Testing". The name "Testing" exists due to the orthodox approach of the Debian community that only Debian stable may be called "stable".


Testing is not really a rolling release and is not guaranteed to get security updates in a timely manner so it's certainly not for everyone.


Testing is a rolling release most of the time, as is Sid:

Sid [rolling] -> Testing [rolling] -> Stable [point release]


This is only true until testing gets frozen. After a freeze gets lifted again, many transitions might occur. I use it myself but it's wrong to call it a rolling release that everyone should use.


testing is absolutely not a rolling release. Sure, it gets new packages faster than stable, which can make it look like a rolling release, but it gets frozen before a stable release, and suddenly it's not a rolling release.


So to describe it precisely: Debian Testing is a rolling release the majority of the time. When Stable is about to be released, Testing gets frozen for a while, but then it becomes a rolling release again. Debian Stable gets released about once every two years, so about once every two years Debian Testing is not a rolling release.


And ignore the orthodox Debianeers who say "nobody should be using testing".


Fedora isn't more rolling than Debian. That is, both have a rolling branch (testing vs. Rawhide), but otherwise neither is rolling.


Then point your /etc/apt/sources.list to 'testing'.
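i.e. something along these lines (mirror URL illustrative):

  deb http://deb.debian.org/debian testing main

followed by an apt update && apt full-upgrade.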


@giancarlostoro

This is the solution. Use the following web page to compare package versions from Debian Testing and Fedora 40: https://distrowatch.com/dwres.php?firstlist=debian&secondlis...


... sorry, this sounds too good to be true.

Usually it works like this (I'm forced to use macOS at work and Windows from time to time for special software): one installs some software from trusted websites (VSCode, VLC, Firefox, etc.) on macOS/Windows, and then, when one just wants to get some work done: update pop-ups, yay. They annoy the hell out of me, especially because there is no unified system on macOS/Windows. Break my flow, wait for the download, privilege escalation, installation, and starting again. Thank you very much. This happens multiple times a week, especially for packages like VSCode, VLC, Office, Outlook, Firefox, KeepassX, Calibre... packages which you don't want to be outdated, ever. It is f*cking ridiculous that I have to take care of this BS in 2024.

On my Linux boxes I login and work. Updates have been silently downloaded and installed for the packages and/or flatpaks and everything is up to date, no annoying update-popups which break my flow and I know that I have the latest version of all software especially security sensitive packages.

At the end, you can pick your poison.

Having integrated/working package management with silent updates is one of my killer features of Linux/Flatpak. I want to set it up one time (automatically) and never have to think about or deal with it again.


When combined with Flatseal as an easy to use privilege granting system, it really is hard to beat.


Although I like AppImages more compared with Flatpak and Snap, I think they are all worse than the other packaging methods.

With a NixOS configuration I can install almost all the software I use in a single command.

Even on Windows I mostly use Scoop/Chocolatey.


Interesting guide. I wasn't aware Wayland still had such shortcomings even for graphics professionals.

> It's now up to each desktop environment project (e.g. GNOME or KDE Plasma) to develop their own full featured GUI for tablet configuration.

That doesn't sound ideal.


Everything is outsourced to the compositor. It's insane. Same reason there's no xbindkeys for Wayland - every compositor needs to implement their own version of it. Every compositor needs their own screen reader and accessibility software as well. All in the name of preventing mythical attack surface (people write exploits to give themselves arbitrary code execution, not for the ability to read window contents or whatever Wayland is supposed to prevent).


Isn't screen reader support implemented using AT-SPI over D-Bus? So I would expect it to be independent of the window system.

On https://www.freedesktop.org/wiki/Accessibility/AT-SPI2/ it even says:

> Wayland

> Works just the same :D

Apart from that while it's true that the compositor has to do everything, some of the interfaces seem to be shared (standardized? I don't know enough about Wayland development tbh) across different compositors: https://wayland.app/protocols/


That part is not so much about security: it would be entirely possible to make a secure standard protocol for such things, it's just there's enough highly opinionated people involved that no-one can agree and they each do their own thing.


Wayland also doesn't have a video capture interface as part of the standard, e.g. for remote-desktop-style applications (you might even use one as part of a video conference). Instead, that's left up to the compositors, and therefore it's potentially not supported, or every compositor could have its own standard.

https://wayland.freedesktop.org/faq.html#heading_toc_j_8


Many compositors use the xdg portal standard. Go figure.


Yup, most common multi-platform remote control tools (AnyDesk, Teamviewer, etc) simply don't work if you're running Wayland. Major PITA if you're doing some form of tech support...


I had to switch back to X11 because both nVidia and nouveau would produce around 5 fps on my A2000 RTX on multiple distros. This is just for basic usage, not even painting-related.


"Way-not-gonna-land", as someone said some years ago. :-P I've now been using Wayland for my work for two years, and it has been amazing. But then, I rarely do any graphics work.


I would like to move back to Linux (specifically NixOS), but I've held back because support on the T2 Macs is pretty bad still, and Apple is still providing updates to them for now.

When I tried installing Linux on there, I had a ton of trouble getting Wayland to work in any capacity (it was very glitchy when I tried), and I had to go back to X, before nuking the Linux install and going back to macOS.


did you have any problems specifically with X? It's worked flawlessly for my entire adult life, and a decade ago they made it so magical I didn't even have to write a config anymore. I think they will have to pry X from my cold dead silicon at this point.


No, not really, I just wanted to play with the shiny new Wayland stuff. Also I like the Sway window manager, I think more than i3? Obviously they're both pretty comparable.


>did you have any problems specifically with X?

Every recent time I would daily drive Linux (Ubuntu and Antix) I had screen tearing out of the box. Not really an acceptable feature.


that's interesting, what graphics card were you using? was it some fancy monitor? I have always used pretty standard stuff and never had any problems.


>what graphics card were you using? was it some fancy monitor?

3 year old Lenovo laptop with Intel Integrated plus Nvidia dedicated on Ubuntu connected to external HDMI and Displayport standard 1080p monitors and built in laptop monitor, and an old PC with old dedicated Nvidia on AntiX(Debian) connected to old 17" VGA LCD monitor. Nothing fancy. Screen tearing on all of them.


were you using the nvidia proprietary drivers?


I think Nouveau on both, or whatever came out of the box during install. I tried to install the Nvidia proprietary driver but couldn't due to some MOK signing issues.

Anyway, the out-of-the-box experience is shit on this front. Screen tearing shouldn't be a thing anymore. It's not like those GPUs were some exotic prototype hardware nobody else has, to need hours of reading forums and tinkering with .conf files to get rid of screen tearing.


1) Nvidia's just bad on Linux because Nvidia doesn't give a shit about open source support, or about doing things the way the open source folks do. And while the Nouveau driver is an amazing technical feat, it's also amazing any time it actually works, given that it's totally reverse-engineered with effectively zero help from Nvidia.

2) If I were using a batteries-included distro,

  Section "Device"
    Identifier "AMD"
    Driver "amdgpu"
    Option "TearFree" "on"
    Option "VariableRefresh" "on"
  EndSection
would have been dumped into '/etc/X11/xorg.conf.d' for me. But because I'm using Gentoo, I had to do it myself. Not a big burden.


absolutely love the Linux culture of "don't screw it up: true":

> Option "TearFree" "on"

reminds me of a PDF viewer that at some point had an option to "respect permissions" or something; you could turn it off and ignore the limits on the PDF that the author set.


> reminds me of a PDF viewer that at some point had an option to "respect permissions" or something

Oh yeah, I always say "Don't respect" on PDF readers that have that setting. It's a very nice setting.

In regards to "TearFree" not being on by default, I expect that there's a good reason for it... maybe it adds some amount of latency (whether on hardware I don't have, or so little latency that I don't notice it), or maybe it just flat-out doesn't work right on some hardware that I happen to not have.

Regardless, deciding whether or not to set those settings is the job of one's distro.


ah fair enough, I've never bothered with secure boot, my laptop is from 2012. nvidia is usually a pain, but I've never really had a problem once the proprietary driver was installed. I thought Ubuntu made that easier.


Just go to GitHub and search issues for 'Wayland support'.

I think it is more probable that anyone will have a problem on Wayland than on X.


As much as the "Wayland is now ready since it's the default on big distros" meme goes, it's not really ready if you're having issues with it.

Ready means everything is flawless to macOS/Windows levels, where you never have to think about display server compatibility, and that's where X11 is still king despite the performance shortcomings. At least it always just works™, and compatibility beats performance most days.


That feels like a high bar for FOSS (maybe for any software...), but given how long Wayland has been getting worked on, you'd hope it's at least usable for most (but not digital painters, apparently).


Keep in mind Wayland has been worked on since 2008, when I was still in school. I think some young HN readers hadn't even been born yet.

I could excuse it if it had been 4 years old or so, but it's old enough already it can drink and smoke in Europe.


Wayland is now way older than Xorg was when they decided that it needed re-inventing.


Wayland was first released in 2008, so it's 15 years old.

Xorg was a fork of XFree86, which was released in 1991. So that codebase was 17 years old when Wayland was first released. I don't think the date of the fork is what's relevant here.

I could also argue that you should be comparing the Wayland protocol (2008) vs the X11 protocol (1987), or even older versions of X (1984).


Fun fact, GTA Vice City is now further away (22 years) from its 2002 launch date than it was from its story setting of 1986 (16 years).

If that game were to be launched today and feature a setting in the same time delta, it would be set in 2008 when movies like The Dark Knight or the first Iron Man launched.

How time flies.


I remember the nineties when 30 years ago was the 60s. now 30 years ago is my flipping childhood.


So when you were a child, a 1969 Mustang was a "classic car", but now an equivalent classic car would be a fourth-gen Honda Civic that your local weed dealer drives.


Maybe more a Camaro IROC-Z?


thank goodness I don't know what any of that means, I was never into cars!


A very lovely read. Well done to the author for taking the time to give such a thorough, generous description of their workflow and the whole process. I don't do any graphics stuff, so it was very interesting to imagine such a different world. Cool!


I work with Krita and digital tablets and made the switch to Arch-based distros in good part to address the problem of Krita being very much behind in official repositories, or of using the AppImage with the problem he mentions. With regard to updates, this has been surprisingly smooth; the main problem has been with software development tools (the PostgreSQL and Python major updates, for example, require a few hours every time to fix everything, and occasionally things like LaTeX also break and require maintenance).

I did not make the switch to Wayland; I actually tried installing the packages as recommended in the wiki, but despite that, launching a session just fails and sends me back to sddm, so I haven't tried much more. Going by his guide, I won't try again for a long time.


At this point I have only one app in XWayland (IntelliJ) and they are working on native Wayland support. I hope soon we cross the point where everyone is better off on Wayland.

I'm quite glad it works for me now, and I feel bad for everyone whose use case is not yet catered for.


I just got a Framework 16, installed Ubuntu 24.04 on it, and have been going through this exact struggle after avoiding it for many years. The screen DPI is just high enough that it's uncomfortable for me to use at 100% scale, so I set it to 125%, but any non-Wayland application would get artificially resized and look blurry.

There's a pending Mutter MR[1] to allow XWayland applications to handle the scaling themselves (which IntelliJ IDEs can do) but it hasn't been merged yet, and I'm probably never gonna see it on 24.04 anyway. Apparently KDE already supports this, but the Gnome folks have been reluctant to adopt the same approach.

I ended up going back to 100% scaling, increasing the system font size, and then setting `GDK_DPI_SCALE=1.25` in `/etc/environment` to tell all GTK programs to increase their scale. It's not perfect, but most things are the right size. I definitely would not have wanted to switch to Wayland any sooner than this though. Transitional periods are such a pain.

[1]: https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/3567
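
In case it helps anyone, the line itself is just the following (GDK_DPI_SCALE only affects GTK apps, so Qt and others need their own knobs):

  # /etc/environment -- read at login; bumps GTK font DPI by 1.25x,
  # so text (and most widgets sized off it) renders larger
  GDK_DPI_SCALE=1.25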



I saw that, very excited for it and hope it lands soon!


The new 13 laptop has a resolution that allows for 200% scale exactly!


Dang, now I want one! I already have two of the 13s (11th and 13th gen Intel) plus a new 16 with GPU (which I absolutely adore, btw), though, and can't possibly justify such extravagance.

Love that they're fixing the screen though. It's fine for me since I solved it the first time, but it's hard for most people at first. I just enabled "Large Text" in Gnome's accessibility menu and that gets it pretty good. If you want to really hone it in though, enable fractional scaling[1] and then tweak the scaling factor.

What I do after a fresh install (with Fedora, but it should work with most any new-ish Gnome) is run the command in [1].

[1]: `gsettings set org.gnome.mutter experimental-features "['scale-monitor-framebuffer']"`


You can buy just the screen :)


You can set an environment variable for electron applications: https://git.sr.ht/~tristan957/dotfiles/tree/master/item/syst...
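
For Wayland specifically, I believe the relevant one is this (ELECTRON_OZONE_PLATFORM_HINT landed in Electron 28, so older apps just ignore it):

  # /etc/environment -- ask Electron apps to pick Wayland when available
  ELECTRON_OZONE_PLATFORM_HINT=auto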


I'd hate to tell you to switch DEs, but I ran into this same issue with my Framework 13. After using GNOME for almost 20 years it got me to try KDE which supports that feature natively. Now none of my applications are blurry or require tweaking.

I hadn't really used KDE much before that, but KDE 6 is a solid release.


Very nice cold shower for all the Wayland zealots.

I am generally pro-Wayland where it works, but against having Wayland shoved down my throat and being ordered to abandon my routines, being told they were no good. And this is what is happening right now with a lot of "enlightened" people.


X11 is still there if you need to use it.


Why didn't the author use OpenTabletDriver? It would have fixed most of the issues he had with graphics tablets. It works on Wayland too.

https://opentabletdriver.net/


Hey, I know this author! Great chap, works with a lot of open source art projects to keep them up to snuff.


https://www.peppercarrot.com/ author of an open-source webcomic


Can confirm from personal experience. It has been years, but Deevad was a joy to work with. He will discover your niche project, test your betas, give feedback after trying to work around the quirks and spending time actually using it. He will contribute when possible and promote your software if it gets the job done. I'm glad he keeps doing this over all the years and projects.


Arch user here, and this approach has served me well. Just ignore the kernel stuff and only update userland/libs etc.:

  pacman -Syu --ignore linux,linux-headers,linux-api-headers
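
If you'd rather make the hold permanent than pass the flag every time (with the usual caveat that Arch doesn't officially support partial upgrades):

  # /etc/pacman.conf
  IgnorePkg = linux linux-headers linux-api-headers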


Why not use the lts kernel if you want to stay on a version?


Sounds like these desktop folks need to band together and re/implement/think the donut of missing functionality around wayland and get it standardized. Ship a lib metapackage called "wl-plus" perhaps. Then everyone could move on confidently.

What is the freedesktop group doing? I thought this was their area.


Do people just not realize that you are supposed to resize images? You don't just upload a 2560x1440 image, then have the client shrink it down to 930x523, unless you can take the server-side hit, and don't care about the time spent by the person waiting for the image to load.
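
Generating a smaller rendition is a one-liner, e.g. with ImageMagick (filenames hypothetical; 930 matches the display width above):

  # downscale to 930 px wide; height follows the aspect ratio
  convert painting-2560x1440.png -resize 930x painting-930.webp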


I feel like the file size of an image that's the dimensions of a standard 1:1 pixel density desktop wallpaper on a site talking about digital painting is simply not the thing to be upset about.


I've been using Debian since it existed. Still happy.


TL;DR

Us Wayland skeptics were always right.

Again -- I know I'm old, but it still boggles my mind how accepting LINUX people became of breaking backward compatibility in this situation. Like, the opposite of that was our whole thing.


It doesn't matter who was right. It doesn't matter how wrong or broken Wayland is.

It does matter who is willing to work on code. And no one is willing to do the work on the X11 codebase that it needs.

That's it. Plain and simple.

Forking X11 and assuming Vulkan while throwing out everything else is probably pretty doable and would result in a 90% codebase reduction. However, nobody is willing to sling the code to make it happen.


Current Xorg has Glamor, using OpenGL to accelerate the 2D ops of X11. There is also Zink, OpenGL-over-Vulkan for the rare case of Vulkan without native OpenGL.

OTOH, there are people willing to add support for old hardware that probably can't run Vulkan, or even OpenGL, and sometimes they manage to upstream that code. With your "bsder" nick, the following example seems relevant.

https://blog.netbsd.org/tnf/entry/x_org_on_netbsd_the [search "PowerPC"]

Some old X11 coders still work on it, or make suggestions to improve it.

https://lists.x.org/archives/xorg/2024-April/061644.html

https://keithp.com/x-ideas/

But such initiatives lack the "new shiny bling" factor, which seems to be a must nowadays to be a "success". It's the curse of "mostly done, mostly maintenance" projects in a world where breaking ground looks better in a CV than polishing and oiling the machinery others crafted.


The problem, I guess, is that Xorg, a project nobody is working on, still works better than a project at full throttle, full of zealots, evangelists, programmers, designers, thinkers, philosophers.

That says a lot about the state of those two projects.


All that really says about these projects is what's already plainly obvious: X11 is an old project, and Wayland hasn't caught up yet on every minor thing you might want it to do. X11 has more than 40 years of development behind it; it isn't easy to catch up to all the features that have been built into and on top of X in the about 8 years (at most) Wayland has been getting the majority of attention.


It's not about coders. It's about the Wayland spec itself missing features that the maintainers of the spec are bikeshedding over, for example color management and HDR.

>8 years (at most) Wayland has been getting the majority of attention.

Color management on Wayland has been discussed for over 10 years by now.[0] In fact this proposal contains code to implement color management! So it is not about code. It is about bikeshedding the spec.

[0] https://lists.freedesktop.org/archives/wayland-devel/2014-Ma...


> "you might want it to do"

This is my problem with Wayland zealots. First and foremost I want it to display windows on my laptop and my desktop and not freeze my computer on start. Then, I would like to have more than 10 frames per second when it finally starts. Is that too much to ask? Wayland doesn't deliver on these two points on my setup. Should I change my hardware to match your hardware?

Wayland development has already been in progress for 15 years, and it still fails to provide basic screen-displaying capabilities. I have nothing against the technology, just against the people running the show. I think Canonical made a mistake cancelling Mir.


On the contrary, who's willing to work on it isn't what matters either. What matters is whether the code actually does what people need it to do. Now you might very reasonably assume that having devs willing to work on Wayland would lead to that, and that nobody wanting to work on Xorg would cause it to stop working, but in reality that's just not true; Wayland has been preferred by the devs for 16 years and in 2024 is still behind Xorg in meaningful ways. They're finally close, but as someone who keeps looking for ways to make Wayland work it just isn't there.

> Forking X11 and assuming Vulkan while throwing out everything else is probably pretty doable and would result in a 90% codebase reduction. However, nobody is willing to sling the code to make it happen.

The easy way is to move to just using rootful XWayland, which lets you use the fancy new backend while keeping everything else the same. IMHO they really should have implemented that first, moved everyone to the new backend, and then, if they really had to push pure Wayland, done that after.
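
Rootful Xwayland is already usable by hand today, something like this (the -geometry/-decorate flags need Xwayland >= 23.1):

  # run an X server as an ordinary window on the Wayland desktop
  Xwayland :12 -geometry 1600x900 -decorate &
  DISPLAY=:12 openbox &   # then run any X11 WM/apps inside it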


Right. So I'm probably unique in that I'm a very long-time Linux user who doesn't do much coding myself.

And as such, the Wayland project is the ONLY one that ends up reliably generating the usual e.g. "WELL YOU'RE NOT ALLOWED TO HAVE AN OPINION BECAUSE YOU DONT WORK ON IT YOURSELF" type of retorts.

They're really weird because -- on one hand they're technically true. But on the other, they don't really explain why SO MUCH OTHER free/open source software doesn't have the extreme backwards compatibility breakage that this does.

Why do we get the free magic of people making great backward compatible code with other projects and so painfully not with this one? To me, THATS the interesting question, and cutting it off with WELL, YOURE NOT DOING IT SO YOUR OPINION DOESNT COUNT just is weirdly unhelpful.


> Why do we get the free magic of people making great backward compatible code with other projects and so painfully not with this one? To me, THATS the interesting question, and cutting it off with WELL, YOURE NOT DOING IT SO YOUR OPINION DOESNT COUNT just is weirdly unhelpful.

I can invert your question to: "Why is it that so many people complain so loudly about the X11/Wayland thing and yet so few step in to sling code?"

These projects are relatively old. People know what is broken in X11/Wayland; pointing that out doesn't help very much. What isn't always obvious is how to fix what is broken.

What's left to do is hard. The easy stuff has been done. The grindy but straightforward stuff has also mostly been done. So, what's left is the stuff that is both a lot of work and not at all straightforward.

Given that, what value is there in anyone that isn't bringing code?

If the problem needs a mass of work, the only thing valuable to be brought to the table is code. Alternatively, if somehow a brilliant insight got missed by a ton of experienced people, the only thing that can break the logjam and demonstrate superiority is code.

I'm no fan of Wayland and think it is very much the wrong direction. However, if I'm not willing to commit time, people or money to fix the situation, how much weight does my opinion really carry at the end of the day?


You literally just did nothing but further my point.


> Why do we get the free magic of people making great backward compatible code with other projects and so painfully not with this one?

And I explained what the difference is.

These projects are old and difficult. Most open source projects are neither.

There are very similar problems with open source domain-specific tools, for example (Musescore, GIMP, FreeCAD, KiCad before CERN decided to fund it, etc.).

Musescore has a nice video about how hard fixing it was and how hard it was to get to Version 3 and then trying to get it to Version 4 and how lucky they sometimes had to get: https://www.youtube.com/watch?v=Qct6LKbneKQ


Again, though, the weirdness exists; you just mentioned a bunch of niche things (presuming that KiCad is niche, I've never heard of it, so probably)

X/Wayland ain't like that AT ALL. It's something very central to the OS, which just tends to "get taken care of."

Really, my gut is that this is about a slow subconscious shift in culture; in the beginning there were the hippies who shared everything and now GNOME et al want to be 20 devs in a trenchcoat pretending to be Steve Jobs.


> On the contrary, who's willing to work on it isn't what matters either. What matters is whether the code actually does what people need it to do.

That code doesn't write itself. It needs someone motivated to write it.

As someone who reverse engineered a video card and wrote an entire X11 driver almost 3 decades ago, I get this painfully. If I wanted it done, I had to do it myself.

Both X11 and Wayland have some fundamentally wrong architectural choices. The problem is that both X11 and Wayland are sufficiently "good enough" that the activation energy to get traction above either of them is huge. It's a gigantic pile of grunt work until you get to the fun stuff, and nobody who isn't paid money by a company can muster the motivation to overcome it. And people paid by companies are going to solve the problems they are being paid to solve and not much else.

Open source, in general, is having a problem with being stuck in "local minima". Lots of things are good enough to gain sufficient social traction that replacing them with something better is ferociously difficult. (package managers, git, dynamic libraries ... I can go on and on and on ...)

It's part of the reason why I don't try to interfere with the "Rewrite It In Rust(tm)" brigade even when I think it's a silly idea. It is now a bigger problem to corral the motivation to do something in open source than gaining the technical skill to do it.


The good old X11, without Vulkan, works well enough for me, despite all the shortcomings.

You are right that we can't force the Wayland devs to work on X11 -- all I ask is that they don't push their new work as the default.


That's basically the same story with everything that pottery guy touches: push something that looks better 90% of the time; promise to fix the remaining 10% later (with a bunch of fanbois yelling about why that 10% doesn't matter); push the 90% product as the default; declare success; and let the remaining 10% sit unfixed for 10+ years.


[flagged]


>I've only been using this operating system for two weeks...I have already been productive with it and from my experience this is a really polished system.

If it works, it works. Artists have a different set of needs, and the thread mentioned in the article showed stability > features.

https://discuss.kde.org/t/fedora-kde-40-plans-to-completely-...


Don't confuse the Linux distro term "stable" with stability. They are unrelated.


I want that on a shirt


One of the things that Debian-based distros have over RPM-based distros, to me, is installation. Even the worst one I've used (Debian stable) is better than any RPM one; I've tried openSUSE and Mandriva. Also, there's something about the RPM distros I've tried: they don't feel like they are mine to control.


I don't know why you're comparing them based on package formats as if that makes the distributions similar. Mandriva was also discontinued 12 years ago.


I wasn't comparing package formats at all. I only mentioned RPM to generalize the type of distro.

It was definitely more recent than 12 years ago. Maybe it wasn't Mandriva, but it was a KDE-based distro. I prefer KDE, and many of the recommended ones are RPM-based, so I recently tried several of them, but their installers were far more limited than Linux Mint/Ubuntu/MX, and the resulting systems didn't feel like something I had much control over.


Also, Debian has backports, so the 'non-supported hardware' babble is bullshit. You can have the latest Mesa and kernel from backports while keeping the rest stable.
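
For anyone who hasn't used them, enabling backports is roughly this (Debian 12 "bookworm" shown; adjust the codename):

  echo 'deb http://deb.debian.org/debian bookworm-backports main' | sudo tee /etc/apt/sources.list.d/backports.list
  sudo apt update
  sudo apt install -t bookworm-backports linux-image-amd64 mesa-vulkan-drivers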


I haven't heard that about Debian. But I am no daily Debian user.

Out of curiosity: what would you recommend, then, for someone who doesn't want to be ignorant or complacent?


This will introduce its own problems, though; for example, X11 is insecure.



