
Linux apps that run anywhere - devaroop
https://appimage.org/
======
Pneumaticat
FYI, AppImages do _not_ run anywhere. There are a lot of issues with them in
NixOS, since NixOS is all about having explicitly-linked dependencies, and
AppImages still often have implicit dependencies that aren't in the image
itself, since they are assumed to exist on the host system.

See
[https://github.com/NixOS/nixpkgs/pull/51060](https://github.com/NixOS/nixpkgs/pull/51060)
for an example.
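For illustration, here's roughly how those implicit dependencies surface (a sketch; the AppImage name is hypothetical, and `--appimage-extract` assumes the type-2 AppImage runtime):

```shell
# Unpack the payload next to the AppImage (creates ./squashfs-root)
./SomeApp.AppImage --appimage-extract

# List the shared libraries the bundled binary expects from the host.
# Any line ending in "not found" is an implicit dependency the image
# assumed would exist system-wide -- exactly what breaks on NixOS,
# where the loader has no global /usr/lib or /lib to search.
ldd squashfs-root/usr/bin/someapp
```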

~~~
tigrezno
But who really uses NixOS? 0.001% of Linux users?

~~~
nyanloutre
How did you come up with that number?

------
im_down_w_otp
One of the issues I've had with things distributed as Snaps or Flatpaks is
that they tend not to pick up the settings and preferences that I've
configured on my workstation.

For example, if I've set up themes, fonts, scaling factors, custom keyboard
shortcuts, etc., they tend not to be available/utilized by the applications
which are distributed in these bundling formats. For the most part, that's why
I avoid using them.

I assume, but don't actually know, that this is also true of AppImage. Can
anybody who knows better confirm, or deny, that?

~~~
vhhn
I was surprised when the VSCode snap told me that I should start using a
different snap to install new versions of VSCode and that my settings would be
kept. And it did. So there is some way to respect user settings across snaps,
at least.

~~~
Lorkki
The official VS Code snap is unconfined, i.e. it only uses snap for package
management, without the sandbox. This means it can access the settings stored
in your home directory just the same as if it were installed through apt.

Snaps with strict confinement can also ask for home directory access as a
specific permission.

------
earenndil
It makes me really sad that this is necessary. Unix _has_ a concept of shared
libraries. And somehow it managed to get ruined so irrevocably that there's no
going back. This—this was a solved problem! It really was. It was solved, and
then we unsolved it when we decided that 'move fast and break things' was more
important than ABI stability. And now shared libraries are completely useless.
I struggle to name a single C library that's still useful as a system-wide
shared library nowadays. Libc/libm doesn't really count because it's part of the
language, and libz is small enough that everyone who needs it statically links
it. Everything else is neither ubiquitous nor stable enough to be used. HOW
DID THIS HAPPEN?

~~~
vlovich123
Private developer wants to distribute binary + shared dependency libs. On
Windows they package it into an installer which unpacks it into the target
destination & everything works. On macOS the user gets a folder that acts
like a file, within which everything is stored. Additionally there are
reliable releases, so something targeting a minimum of macOS 10.14 has a
reliable way to
specify that in the toolchain & know that the prerequisite runtimes are there
(Windows is a bit less elegant here but still manageable).

On Linux you have to provide RPMs for Redhat, DEB files for Debian-variants,
??? for Gentoo users. Moreover, your dependencies have to be managed in a
totally bizarre way & you need special launchers to put your shared libraries
elsewhere & add them to the path to avoid making assumptions about whether or
not the user has the right prerequisites. Or you run your own apt/yum/etc
servers to host your packages & play nice within the ecosystem. Additionally,
some do periodic releases, some do rolling releases. Considering how small a
population Linux is, it's more headache than it's worth to target for
commercial shops that are cross-platform, as most of their customers are. That
also isn't getting into the mess that 32-bit vs 64-bit is on Linux.

Finally, the big advantage is that the release is done by the author. No more
package maintainers providing questionable maintenance across a bunch of
distros.

~~~
bubblethink
>Finally, the big advantage is that the release is done by the author

I see that as the biggest net win of the current system, although it may seem
inefficient or bureaucratic. I do not trust the authors. The only modicum of
sanity and trust comes from the fact that Debian/Fedora maintainers are
actually on your (the user's) side and have strong rules and guidelines about
everything. Desktop Linux doesn't have meaningful isolation or sandboxing
that malicious apps can't circumvent. It's only now that we are seeing some
efforts in this direction. Still, it's quite far from something like Android,
where you can quite safely run arbitrary applications.

~~~
oblio
With your logic, you've already lost.

If you don't trust the developer of an application you already run, you're
screwed in any scenario.

Yours is not a realistic threat model.

~~~
flukus
Developers don't deserve that trust.

It's not just the threat model; developers are increasingly focusing on fast
iteration and annoying users with constant and often unwanted updates,
something Debian saves users from. Very few users care about always having the
latest features and bugs or want to become beta testers. Not to mention the
privacy shitshow from developers wanting telemetry, or for more nefarious
reasons.

Software repositories like Debian's and the Apple App Store are great because
they put a layer between the developer and the users, removing the need for a
1-1 trust calculation with every developer.

~~~
oblio
Distributions in their current form are almost harmful. I like what they do,
conceptually, but that model you're describing should only apply for the base
system. I want Firefox to update ASAP, I want VLC to update ASAP.

The distribution model should only apply to libraries and base tools. And even
those should be versioned so they can coexist easily and I'm easily able to
install any app, from the ones that want GTK1 to the ones that want GTKLatest.

~~~
flukus
> I want Firefox to update ASAP

Firefox is the perfect example of why I hate user facing apps updating
constantly. They're always adding random features, breaking plugins (still
don't have vertical tabs working properly) and shifting the UI around. It was
much better back when they had stable releases.

> The distribution model should only apply to libraries and base tools.

As long as nothing breaks, it doesn't worry me how many times libc is updated;
it's the user-facing changes that interrupt me that I want to avoid.

> And even those should be versioned so they can coexist easily and I'm easily
> able to install any app, from the ones that want GTK1 to the ones that want
> GTKLatest.

If they can't commit to stable releases and non-breaking API then they aren't
going to commit to maintaining the 15 versions of GTK that you'd end up with
on your system, that's the worst of every world.

------
hardwaresofton
Video introduction to AppImage (linked on the AppImage website):
[https://www.youtube.com/watch?v=mVVP77jC8Fc](https://www.youtube.com/watch?v=mVVP77jC8Fc)

Also, a side point -- is it wrong to want people to spend more energy on
building fat binaries? To me they are the ultimate in portability (by
definition), and investing in projects like musl libc, distributions like
Alpine, and languages like Go and Rust that build portable static binaries is
so much more accessible to me as a developer.

All the approaches to portable apps seem to just be hacking around the problem
but I wonder if we should instead be pouring energy into making fully static
binaries easier to build, _then_ trying to optimize them to get them smaller.

~~~
pknopf
What if another heartbleed happens? Wouldn't it be better to update a single
shared library?

~~~
mickael-kerjean
In theory. In practice it's much faster to push a fix to your users with a
brand new fat binary than having to figure out the mess that is distributing
your software on every possible Linux distribution (obligatory XKCD:
[https://xkcd.com/927/](https://xkcd.com/927/)).

Also, shared libraries assume your software will work with a different
version of a library, which is quite a bold assumption that may or may not be
true depending on the phase of the moon.

~~~
Conan_Kudo
> In theory. In practice it's much faster to push a fix to your users with
> a brand new fat binary than having to figure out the mess that is
> distributing your software on every possible Linux distribution (obligatory
> XKCD: [https://xkcd.com/927/](https://xkcd.com/927/)).

Electron seems to have disproved this. There are many Electron-based
applications that are broken with glibc >= 2.28 even though a fixed version of
Electron has been out for nearly a year.

Fat binaries (or fat binpacks) are a failure.

~~~
hardwaresofton
Would you mind explaining more about this? I'm not sure I completely
understand what you mean -- glibc is basically impossible to statically link,
so it stays dynamically linked. It's part of the reason why "static" builds
don't really exist on Debian and many other distributions. Correct me if I'm
wrong, but glibc just isn't portable -- this is why I mentioned having to go
into Alpine & build with musl libc. Seems like the Electron project has chosen
not to support it[0].

Another aspect worth considering is the software logistics/delivery problem --
it absolutely _would_ be great to have dynamically linked software updates if:

1) Your software could always be guaranteed to get the exact version it
expects

2) It wasn't hard to distribute the software (AKA X > 5 providers are hard to
package for)

Assuming I'm not completely misunderstanding your point, if the Electron-based
applications you're discussing were truly fat binaries, _nothing_ could break
them, outside of CPU-architecture-level incompatibility.

BTW, there are some systems like Nix & Guix that have solved #1 -- it's
extremely easy to ensure that your program gets the exact version of some
dependency.

[0]:
[https://github.com/electron/electron/issues/9662](https://github.com/electron/electron/issues/9662)

------
giancarlostoro
You know if even Linus Torvalds likes it, you've done something right. I love
the idea of solving the current approaches to installing / maintaining
software on Linux. This is one approach I do like, but I still appreciate
maintaining packages through a package manager. I would love to see a best of
both worlds, packages like deb that can both run directly and be managed
through the package manager, depending on how you choose to run them.

~~~
stubish
The quote doesn't say he liked it. The quote says it is 'just very cool'.
Maybe I've dealt with too many out of context book blurbs and sound bites in
my time, but a single dubious endorsement like that is worse than no
endorsements.

~~~
progval
Full quote:
[https://web.archive.org/web/20170914030116/https://plus.goog...](https://web.archive.org/web/20170914030116/https://plus.google.com/+LinusTorvalds/posts/WyrATKUnmrS)

~~~
stubish
Would suggest "This is just very cool [...] works very well" to avoid people
like me reading between the lines ;)

------
znpy
Every app packaging its own shared library and runtime because someone didn't
want to deal with packaging software.

I foresee someone in five years complaining about how much RAM linux on the
desktop uses. And somebody else blaming it on shared libraries not actually
being shared because all the "apps" load their own snowflake library. So
multiple copies of glibc, multiple copies of gtk, multiple copies of
everything.

RIP RAM AND WALLET.

------
shmerl
Don't forget about trade-offs. Bundled packages like this have worse security
than the distro-packaged method, where dependencies get patches and fixes,
because most developers won't ever bother patching their bundled
dependencies.

So know what you are paying with.

------
voodootrucker
What people use it for at the top, what it does in the middle, and how it
works at the bottom, buried in a video - typical modern sites (except this one
at least has the video).

What I would like to see:

1. Problem statement
2. How this solves it
3. Usage guide
4. Source code link
5. No appeal to authority of who's using it

~~~
satori99
> and how it works at the bottom, buried in a video

A video that plays for _12 minutes_ before it explains the bit about using an
ELF header which mounts a disk image payload using FUSE.
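For the impatient, the mechanism can be seen directly from the command line (a sketch; the file name is hypothetical and the flags assume the type-2 AppImage runtime):

```shell
# An AppImage is a plain ELF executable with a squashfs image appended
head -c 4 Foo.AppImage | od -An -tx1   # 7f 45 4c 46 = the ELF magic

# The embedded runtime can mount or unpack its own payload
./Foo.AppImage --appimage-mount      # FUSE-mounts the squashfs, prints the mountpoint
./Foo.AppImage --appimage-extract    # or unpacks it to ./squashfs-root instead
```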

------
opan
I'd rather see people use Guix or Nix when their native package manager
doesn't have something.

------
dorfsmay
Anybody have strong feelings about AppImage vs Snap vs Flatpak (and any other
similar ones)?

~~~
sandov
I like the concept of AppImage much more than Snap and Flatpak.

I fully embrace the idea of decentralized distribution of applications, as
opposed to the way package managers work (a central repository maintained by
the distro).

I believe the operating system should only be concerned with the _base_
software and present a sane interface so that the user can then install the
specific programs they need; the OS should not care about how or where the
user gets those programs.

AppImage is the only project I know that respects that idea. Snap and Flatpak
are centralized AFAIK (or are unnecessarily hard to use in a decentralized
manner).

~~~
deviantfero
How is this different from Windows? I think this is good for dependency-heavy
apps, such as Krita, but you should still try to keep things as centralized as
possible; it makes updates easier and painless.

~~~
sandov
It's not fundamentally different.

The practical difference is that the ecosystem of Linux applications is
composed almost entirely of open source software. Consequently, installing
something you downloaded from the web is much less dangerous than installing a
closed source program on Windows, provided that you trust the website.

I agree that the centralized scheme is easier to use in 80% of cases, i.e.
when:

(1) The package you want is in the repos, and ... (2) The version of the
package you want is in the repos.

But, when those 2 conditions are not met, installing software is usually
harder than on Windows. Additionally, I don't like the very nature of
centralized things, even if they are managed by _the good guys_.

~~~
stewbrew
Unless somebody else built the app from source and reproduced exactly the
same binaries, there is no guarantee that the binaries you download were
actually built from the source you're looking at. Open source per se doesn't
magically imply any benefits wrt security. Things look different if the
binaries were built on a central & trusted platform or by trusted packers.

~~~
sandov
> Things look different if the binaries were built on a central & trusted
> platform or by trusted packers.

How so? I believe the same principle applies to centralized distribution. How
do I know the packer didn't change the code? The same way I trust repo
maintainers, I can trust application developers, or any other third party.

And reproducible builds are possible both in decentralized and centralized
modalities of distribution. Aren't they?

------
TBF-RnD
Come on guys, I see a lot of negativity here, but we can have our cake and
eat it too. Look at the following imaginary but likely scenario: a promising
coder creates an app in C using CMake or whatever. For a veteran Linux user,
using git and compiling is not a problem. Our up-and-coming coder wants to
reach a wider audience, however, so instead of creating X packages he provides
an AppImage. All of a sudden, all the Fedora, Debian and Ubuntu folks etc. can
run it without the hassle.

Now imagine the project turns out to be a silver bullet for some really
important problem. What will happen? The maintainers for the bigger distros
will simply download the code, and there will be maintainers who step up and
maintain the software for the repos.

Voilà, the best of both worlds.

... and if the project doesn't become a huge mainstream success, users can
still get it via source or AppImage.

As far as commercial projects are concerned, they will operate according to
different dynamics. But who cares; we want open source solutions for our Linux
systems anyway.

------
Lowkeyloki
What I'm disappointed about with AppImage is that it doesn't deal with
differing architectures as far as I can tell. Which would really help me right
now as I had to send my laptop to Dell for repairs and I'm now using just my
Android phone and my Raspberry Pi 3B+ as a desktop. And compiling stuff on the
Pi is SO SLOW!

------
sprash
If you want portable apps, just make one fully statically compiled binary.

This is the worst of all worlds: no security updates for the bundled
libraries, and none of the performance gain that comes with static linking.

~~~
zurn
The static linking ship sailed years ago for glibc-based apps (no static
linking support).

~~~
sprash
Always use musl for static linking. As a bonus you might get an even smaller
binary than a dynamically linked glibc binary.

~~~
cesarb
Keep in mind, however, that AFAIK musl won't respect /etc/nsswitch.conf, so
if, for instance, the machine is configured to look up users over LDAP, a
statically linked musl program won't be able to correctly look up users.

~~~
beatgammit
There are certainly cases where you need the features of glibc over musl, but
those are pretty rare IMO, and you can always implement a missing piece
yourself if using musl saves you enough in maintenance overhead.

------
baroffoos
What is the difference between appimage and just a binary marked as
executable?

~~~
osrec
It's supposed to contain all its dependencies.

------
zurn
Does it work on Android? For CLI apps running under adb, at least?

------
hpaavola
"To run an AppImage, simply:

Make it executable

$ chmod a+x Subsurface*.AppImage

and run!

$ ./Subsurface*.AppImage

That was easy, wasn't it?"

No. How about clicking/double-clicking the app icon in your menu/desktop/the
folder you downloaded it into?

~~~
IloveHN84
You need .desktop files in /usr/share/applications
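For reference, a minimal launcher entry looks something like this (the name, Exec path, and icon are illustrative; a per-user entry can also go in ~/.local/share/applications):

```ini
[Desktop Entry]
Type=Application
Name=Subsurface
Comment=Example entry pointing at an AppImage
Exec=/home/user/Applications/Subsurface.AppImage
Icon=subsurface
Terminal=false
Categories=Utility;
```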

~~~
hpaavola
I, as a user, do not need any files in /usr/share/applications. The system
might. And I don't want to deal with those.

All other major operating systems (can) work like this: download something >
click it > it works. No need to launch a terminal, set executable permissions
and type the name of the file.

------
jhoh
This website just makes me angry:

- Fixed social media share buttons that cover the text on mobile

- It auto-translates to German even though my system language is set to
English (it seems they use geolocation for this, which is bad practice)

- Center-aligned text that is annoying to read

------
Annatar
"No need to install". This is every system administrator's nightmare: users
running arbitrary executables, bypassing operating system packaging. Come time
to upgrade or reinstall, what can happen? If there are security updates needed
for the program, what could happen, since this is statically linked?

This is the pinnacle of destructive laziness and amateurism in IT: as a
developer, it is one's job to master every operating system packaging format
for the target platforms one develops for. OS packaging is a tool invented for
developers, not a tool meant to be subverted at every turn and opportunity
like this.

~~~
etaioinshrdlu
Literally the reason why PCs won over mainframes. Users want to run the
software they want to run.

If you don't trust your users to run software on your server you probably
shouldn't let them on your server in the first place... or else contain and
isolate them with a VM or similar.

Multi-user operating systems feel like they're going the way of the dodo, to
me... That is, actual multi-user systems, not user accounts for system
services.

~~~
Annatar
What do you think powers the Internet that you're connected to right now?
Multiuser systems, running applications under different logins. Even the
much-hated systemd brought physical multiuser computing back to GNU/Linux.

~~~
etaioinshrdlu
Yeah, but the user accounts are at the application level, not the OS level.

I doubt there is a top Internet company around that makes a Unix account for
each web user. That would be an antipattern...

~~~
Annatar
Search for "free shell accounts". You might be surprised.

~~~
etaioinshrdlu
It kind of reminds me of shared hosting providers without root access. Sure,
they exist, but they really have been overtaken in a big way by virtual
private servers... That's what I mean when I say they seem to be going out of
style.

