
> I don't get why parts of the Linux community are so resistant to embracing AppImages and providing first-class integration for them. Decent desktop integration isn't that hard.

> As a user I want to download the thing and run it. AppImages provide that.

Because Linux userspace libraries aren't designed to handle long-term binary compatibility. There is no guarantee that a simple package upgrade or a simple distro version upgrade won't break things.
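
To make that concrete, here's a minimal sketch (plain C, nothing AppImage-specific) of the gap that bites: the glibc a binary was compiled against vs. the one it finds at runtime. glibc symbols are versioned (e.g. memcpy@GLIBC_2.14), so a binary built on a newer distro than the host often refuses to start at all:

    #include <stdio.h>
    #include <gnu/libc-version.h>

    int main(void) {
        /* version baked in at compile time */
        printf("compiled against glibc %d.%d\n", __GLIBC__, __GLIBC_MINOR__);
        /* version actually loaded at runtime */
        printf("running on glibc %s\n", gnu_get_libc_version());
        return 0;
    }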

There is no guarantee that an AppImage will continue working 3 months later. If it relies on network communication, there is no guarantee the application will stay secure either, since you have to bundle core system libraries like OpenSSL with your application (unlike on Windows and macOS).
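
To illustrate the OpenSSL point, compare the version string baked in at build time with the one actually loaded at runtime (a sketch; needs OpenSSL >= 1.1.0, link with -lcrypto). With a bundled copy the first line is frozen forever, while the system copy keeps receiving CVE fixes:

    #include <stdio.h>
    #include <openssl/crypto.h>
    #include <openssl/opensslv.h>

    int main(void) {
        /* frozen at build time: this is what a bundled copy pins you to */
        printf("built against: %s\n", OPENSSL_VERSION_TEXT);
        /* whatever is loaded now: the system copy keeps moving */
        printf("running with : %s\n", OpenSSL_version(OPENSSL_VERSION));
        return 0;
    }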

I will even go so far as to say that GNU software in particular is designed specifically to make reasonably long-lived (imo at least 5 years) binary-only, i.e. closed-source-friendly, distribution hard.

That culture has been adopted by all the middle-layer libraries and desktop environments too. The only supported form of distribution is source, and every piece of software in the Linux world assumes it is built as part of an entire distro.

That's why Snap and Flatpak actually install a common, standardized base distro on top of your distro, and why Docker in its current form exists (basically packaging and running entire distro filesystems).

The only way to get around it is basically recreating and re-engineering the entire Linux userspace as we know it. Nobody wants to do that.

Creating long-term-stable APIs that still allow tweaking is very difficult and requires a lot of experience in designing complex software. Even then you fail here and there and are forced to support multiple legacy APIs. Nobody will do that unless they are both very intelligent and paid well (at the same level as Apple, Microsoft, or Android engineers). It is not fun or rewarding most of the time.
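
For the curious, this is roughly how glibc manages to keep multiple legacy APIs behind one name: GNU symbol versioning. A minimal sketch with a made-up library (the mylib names and version nodes are illustrative, not a real project):

    /* mylib.c, built as a shared library with a version script:
     *   gcc -shared -fPIC -Wl,--version-script=mylib.map mylib.c -o libmylib.so
     * mylib.map:
     *   MYLIB_1.0 { global: frobnicate; local: *; };
     *   MYLIB_2.0 { global: frobnicate; } MYLIB_1.0;
     */

    /* old semantics, kept alive for binaries linked against 1.0 */
    int frobnicate_v1(int x) { return x + 1; }

    /* new semantics for everything linked from now on */
    int frobnicate_v2(int x) { return x * 2; }

    /* '@' marks a compat-only version, '@@' the default for new links */
    __asm__(".symver frobnicate_v1, frobnicate@MYLIB_1.0");
    __asm__(".symver frobnicate_v2, frobnicate@@MYLIB_2.0");

Old binaries keep resolving frobnicate@MYLIB_1.0 and never notice the change, and every version node added this way is one more API the maintainer can never delete.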




> That's why Snap and Flatpak actually install a common, standardized base distro on top of your distro, and why Docker in its current form exists (basically packaging and running entire distro filesystems).

And neither method really works for the desktop use case, because one expects things to actually integrate with the desktop, and that often requires IPC, not just dynamic libraries. So if you bundle an entire filesystem with all the libraries, you've made things WORSE. Accessibility & IBus are almost guaranteed to break every other release...
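
A sketch of what "integration requires IPC" means in practice: firing a desktop notification via GLib's GDBus (error handling trimmed). No amount of bundled libraries helps here, because the org.freedesktop.Notifications service on the other end belongs to the host desktop, not the bundle:

    #include <gio/gio.h>

    int main(void) {
        /* the session bus is the host's, not the bundle's */
        GDBusConnection *bus = g_bus_get_sync(G_BUS_TYPE_SESSION, NULL, NULL);

        g_dbus_connection_call_sync(
            bus,
            "org.freedesktop.Notifications",   /* bus name  */
            "/org/freedesktop/Notifications",  /* object    */
            "org.freedesktop.Notifications",   /* interface */
            "Notify",
            g_variant_new("(susss@as@a{sv}i)",
                          "demo-app", 0u, "", "Hello",
                          "delivered over host IPC",
                          g_variant_new_array(G_VARIANT_TYPE_STRING, NULL, 0),
                          g_variant_new_array(G_VARIANT_TYPE("{sv}"), NULL, 0),
                          -1),
            NULL, G_DBUS_CALL_FLAGS_NONE, -1, NULL, NULL);

        g_object_unref(bus);
        return 0;
    }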


> There is no guarantee that an AppImage will continue working 3 months later.

That's somewhat of an exaggeration. Here's an AppImage I built 7 years ago which still runs on my up-to-date Arch Linux. If you follow the AppImage guide it will work pretty much without issue.

https://github.com/ossia/score/releases/tag/v1.0.0-b32


> Because Linux userspace libraries aren't designed to handle long-term binary compatibility.

The kernel, however, is. So how about statically linking (almost) all the things?
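
The kernel side of that bargain really does hold. This sketch compiles with gcc -static into a binary with zero runtime .so dependencies, and since the syscall ABI is stable it keeps running across distro upgrades:

    #include <unistd.h>
    #include <sys/syscall.h>

    int main(void) {
        static const char msg[] = "hello from a fully static binary\n";
        /* talk straight to the stable kernel ABI: write(2) on stdout */
        syscall(SYS_write, 1, msg, sizeof msg - 1);
        return 0;
    }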


You cannot at the moment with GNU stuff, since glibc relies on a plugin system (NSS) to load things like DNS resolution, user management, etc. at runtime; that is what enables things like LDAP. OpenSSL also relies on the dynamic library infrastructure.
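
You can watch the NSS escape hatch in action: build the snippet below with gcc -static and the linker itself warns that getaddrinfo will still need, at runtime, the shared libraries of exactly the glibc version used for linking, because lookups are routed through /etc/nsswitch.conf into dlopen()ed libnss_*.so plugins:

    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <netdb.h>

    int main(void) {
        struct addrinfo hints, *res;
        memset(&hints, 0, sizeof hints);
        hints.ai_family = AF_UNSPEC;

        /* even in a "static" binary this consults nsswitch.conf and
         * dlopen()s the matching NSS plugin (files, dns, ldap, ...) */
        int err = getaddrinfo("example.org", "443", &hints, &res);
        if (err) {
            fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(err));
            return 1;
        }
        freeaddrinfo(res);
        puts("resolved");
        return 0;
    }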

The OpenGL drivers are similarly loaded dynamically by libglvnd at runtime; otherwise universal graphics drivers wouldn't work. It would be going back to the bad old days of booting into a black screen after changing GPUs and then trying to figure out drivers from the TTY.
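
That indirection is easy to observe: on a libglvnd system, libGL.so.1 is only a dispatcher, and the vendor driver behind it is picked at runtime. A sketch (link with -ldl):

    #include <stdio.h>
    #include <dlfcn.h>

    int main(void) {
        /* load the vendor-neutral dispatcher, not a concrete driver */
        void *libgl = dlopen("libGL.so.1", RTLD_NOW);
        if (!libgl) {
            fprintf(stderr, "dlopen: %s\n", dlerror());
            return 1;
        }

        /* the dispatcher resolves entry points to whichever vendor
         * driver (Mesa, NVIDIA, ...) matches the active GPU/display */
        void *fn = dlsym(libgl, "glXGetProcAddress");
        printf("dispatcher loaded, glXGetProcAddress at %p\n", fn);

        dlclose(libgl);
        return 0;
    }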

High-performance data transfers like real-time audio, graphics buffers, camera data, etc. still have to use the lowest-level mechanism possible, i.e. shared memory. Dynamic libraries really help in providing simpler APIs on top of that.
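
The pattern in question, sketched with POSIX shared memory (the object name is made up; real systems like PipeWire pass the fd over a socket instead of using a well-known name; link with -lrt on older glibc):

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void) {
        const size_t size = 1 << 20;   /* a 1 MiB buffer region */

        /* producer creates the region; the consumer opens the same
         * name (or, more commonly, receives the fd over IPC) */
        int fd = shm_open("/demo-frames", O_CREAT | O_RDWR, 0600);
        if (fd < 0) { perror("shm_open"); return 1; }
        if (ftruncate(fd, size) < 0) { perror("ftruncate"); return 1; }

        /* both sides mmap it: writes appear in the consumer's view
         * with no copies and no syscall per sample or frame */
        void *buf = mmap(NULL, size, PROT_READ | PROT_WRITE,
                         MAP_SHARED, fd, 0);
        if (buf == MAP_FAILED) { perror("mmap"); return 1; }

        memcpy(buf, "frame data...", 13);

        munmap(buf, size);
        close(fd);
        shm_unlink("/demo-frames");
        return 0;
    }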

And then there is the update problem. If all programs are statically linked, each CVE fix will easily turn an upgrade into gigabytes of downloads, and the distro maintainers have to be extremely careful not to miss rebuilding anything that depends on the patched library.
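
Rough, purely illustrative arithmetic: a desktop install easily carries ~2,000 packages, and if even a quarter of them statically linked OpenSSL, one libssl CVE would mean rebuilding and re-downloading ~500 binaries at tens of megabytes each (several gigabytes) versus one ~5 MB shared library today.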



