AppImage: Linux apps that run anywhere (appimage.org)
357 points by b169118 on Feb 27, 2016 | 233 comments

Seems unnecessarily complicated.

A script like this gets you 95% of the way there:

  mkdir AppDir
  mkdir AppDir/bin
  mkdir AppDir/data
  cp $INSTALLDIR/app AppDir/bin
  cp -r $INSTALLDIR/data AppDir
  cp `ldd AppDir/bin/app | grep -o '\W/[^ ]*'` AppDir/bin
  cat << "EOF" > AppDir/app
  SCRIPT_PATH=$(dirname $(readlink -f $0))
  $SCRIPT_PATH/bin/ld-*.so.2 --library-path $SCRIPT_PATH/bin $SCRIPT_PATH/bin/app $*
  EOF

(Sometimes I wonder if people make a big mystery of Linux app distribution on purpose, to discourage distribution outside of proper, secure channels.)

Yes, except that you might want to be a bit more selective in which parts you use from the base system and which parts you want to bundle - some people might not want to have e.g. glibc in each app bundle. This is where it starts to become a little more complicated than the above, see https://github.com/probonopd/AppImageKit/wiki/Creating-AppIm...

If you don't have glibc in each app bundle, it won't run everywhere. Different distros have different versions of glibc and it's very easy to end up in a situation where you use some symbol that's not defined in the system one.

Which is why you want to build on a system that is using an older version of glibc than the systems you want to run your software on. Assuming that glibc doesn't break backward compatibility, which it really shouldn't (and in practice, rarely ever did). Or use https://github.com/probonopd/AppImageKit/tree/master/LibcWra...

This makes a mockery of shared libraries. The technical debt will accrue and run unfathomably deep. Security failures due to out-of-date bundled libraries are simply the most obvious. The insoluble mystery bugs of mismatched dependencies will plague application developers who choose this distribution strategy.

Yay for hyperbole ;)

Shared libraries are most useful when matched within a specific distribution's package/version chain.

Tying a third-party distributed package to a given shared library, on the other hand, is less helpful than just supplying the expected, tested, and supported library upfront while still letting knowledgeable operators do what they will on their chosen platform.

No technical debt is accrued if the libs are truly interchangeable anyway, but especially if they are not, this stands a better shot at fixing it.

Your suggestion amounts to advocacy of tight coupling, which massively dilutes the long-term benefit of shared libraries and leads developers under pressure of expediency into a tangled thicket of anti-patterns. This is an illustration of how technical debt is accrued, and I've seen it umpteen times, never to the good.

If you don't have Linux in each app bundle, it won't run everywhere. Linux breaks userspace ABI sometimes.

I think you meant "$@" rather than $*

(The quotes in "$@" are important.)
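To see the difference concretely, here is a minimal sketch (the helper function names are made up for the demo):

```shell
# count_args reports how many arguments it received.
count_args() { echo "$#"; }

with_star() { count_args $*; }    # unquoted $*: re-split on whitespace
with_at()   { count_args "$@"; }  # quoted "$@": boundaries preserved

with_star "one arg" "two words here"   # prints 5
with_at   "one arg" "two words here"   # prints 2
```

So with $*, an app handed a filename containing spaces would see it arrive as several separate arguments.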

Also, your script will fall over if any of the paths have spaces in them, such as INSTALLDIR.

You should use lowercase for variable names like SCRIPT_PATH. Uppercase is for exported variables.

readlink -f $0 breaks on OS X, whose readlink has no -f flag. To work everywhere you must try readlink -f $0 and fall back to plain readlink $0.
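If all you actually need is the directory the script lives in, a sketch that avoids readlink entirely works on both systems (with the caveat that it does not resolve a symlink pointing at the script itself):

```shell
# Resolve the directory containing this script in plain POSIX sh;
# no GNU readlink -f required, so it also works on OS X.
# (Does not chase a symlink pointing at the script file itself.)
script_dir=$(CDPATH= cd -- "$(dirname -- "$0")" && pwd -P)
echo "$script_dir"
```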

Shell scripting isn't easy. It took a year to understand these nuances to the point that they're second-nature, and I'm still discovering new ones. Anything that improves this situation would be a welcome change in my opinion.

EDIT: what matters here is that the heredoc delimiter is quoted at all: both cat <<'EOF' and cat <<"EOF" suppress interpolation, while an unquoted cat <<EOF will interpolate variables.
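For the record, POSIX treats any quoting of the delimiter (single or double) as suppressing expansion; a quick check in sh:

```shell
x=world

# Unquoted delimiter: the shell expands $x inside the heredoc.
unquoted=$(cat <<EOF
hello $x
EOF
)

# Quoted delimiter -- single OR double quotes -- is taken literally.
single=$(cat <<'EOF'
hello $x
EOF
)
double=$(cat <<"EOF"
hello $x
EOF
)

echo "$unquoted"   # hello world
echo "$single"     # hello $x
echo "$double"     # hello $x
```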

Some commands that can help with nuances of shell:

Check syntax:

sh -n myscript

Check for bash-specific syntax:

checkbashisms myscript

Check for various gotchas:

shellcheck myscript

Check for style:

bashate myscript

Check for badness:

grep -rE '(wget|curl).\|( sudo)? *(ba|z|)sh' myscript

To elaborate, the GP should have put quotes around command arguments that included variables. For example:

  cp $INSTALLDIR/app AppDir/bin
Should be:

  cp "$INSTALLDIR/app" AppDir/bin
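A minimal reproduction of the failure (the paths under /tmp are made up for the demo):

```shell
INSTALLDIR="/tmp/my install dir"        # a path containing spaces
mkdir -p "$INSTALLDIR" /tmp/AppDir/bin
touch "$INSTALLDIR/app"

# Unquoted: the expansion splits into /tmp/my, install, dir/app,
# none of which exist, so cp fails.
cp $INSTALLDIR/app /tmp/AppDir/bin 2>/dev/null || echo "unquoted cp failed"

# Quoted: the path survives as a single argument.
cp "$INSTALLDIR/app" /tmp/AppDir/bin && echo "quoted cp worked"
```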

> Shell scripting isn't easy. It took a year to understand

That might make it seem too hard for some people. It's really not that bad. These particular mistakes were beginner's mistakes.

But it is tricky! I usually try to tell people to avoid writing shell scripts until you've at least read the "Bash Beginners Guide" or something similar. It's comparably short (remember, you're learning a new language and an arcane one at that) but at least lets you avoid the easy traps. Also, indent and comment. Surprisingly many people seem to think hygiene is somehow less important outside their main language.

On a side note, what's up with not reading manuals these days? When I am tasked with something new, no matter how trivial, my first step is to at least skim the manual to get a feel for what the problem domain looks like and how you are supposed to wield the tool I am about to use. Turns out nobody does that anymore. Zero out of over a dozen people in my closest (java, web) team have read the documentation for the product we build on. I find it unprofessional, but I realize I'm the strange one and it's probably just me getting older.

> On a side note, what's up with not reading manuals these days?

The difficulty of finding manuals, and when you do, they're badly written in such a way that they assume all sorts of contextual knowledge based around the sorts of problems the author of the software was trying to solve in 1986 or whatever year. Manual writing is at least 1000x more difficult than bash scripting, because you have no way of knowing up front how the target "machine" is going to interpret your "script".

And of course that's all assuming a manual even exists at all.

Everything seems complicated when you don't properly understand the problem.

That's a really interesting thought, because everything also seems simple when you don't properly understand the problem.

("Let Apple open the damn back door, let the FBI know what's on the phone, then close it again. This isn't difficult." — Piers Morgan)

You two are discussing separate sides of the same coin. Or, to clarify:

People with a naive understanding of a problem will always feel that the solutions designed by people who do understand it are overly complex.

People with a deeper understanding of a problem will always feel that the solutions designed by people who don't understand it are overly simplistic.

I've successfully refactored way too much code that was needlessly complex to agree. Better understanding of a problem frequently leads to simpler solutions. It doesn't always lead to more complexity.

Whether people who are bad at programming, or at least inexperienced, write overly complicated code or not has got nothing to do with what I said.

Except that there's word-choice correlation.

Don't steal my code, please. Don't bundle my open library with your proprietary app without my written permission.

That depends on your target user. I can't imagine asking my parents to run your script (even if it was packaged up as a single file), but I'd like to think that they could run something like AppImage without too much problem.

What happens if the app forks a child process? Chrome, for example, starts nacl_helper and chrome-sandbox. Is there a straightforward way to get those children to inherit the dynamic loader and library path?
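One hedged partial answer: instead of invoking the bundled ld-*.so explicitly, export LD_LIBRARY_PATH, since fork/exec'd children inherit the environment. A sketch (the AppDir path is hypothetical, and setuid helpers such as chrome-sandbox scrub LD_LIBRARY_PATH, so this is not a complete solution):

```shell
# Build the library path a launcher would export; any process the
# app later fork/execs inherits it through the environment.
launcher_env() {
    appdir=$1
    LD_LIBRARY_PATH="$appdir/bin${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
    export LD_LIBRARY_PATH
    echo "$LD_LIBRARY_PATH"
    # exec "$appdir/bin/app" "$@"    # a real launcher would end here
}

launcher_env /opt/AppDir
```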

> A script like this gets you 95% of the way there

Ohhh, but that five percent!

Ohh, yeah, so you force everybody to understand the command line when something does not work as expected.

95% of the way means it is not finished. When I buy a car, the seller had better not say that the car works 95% of the time.

Installing something in millions of computers is hard. We do it. Being able to install something on one computer with a person that is a computer programmer with 20 years experience with the command line is completely different from making it to just work for millions of people.

Apple actually makes it work for millions of people in a very simple and elegant way. As a geek I could use homebrew or any other system in Linux if I want, but Apple's system works great for most people.

What this could and should lead to is this: a separation between system packages and user applications, preferably with two different managers.

What we have now are mostly system package managers; you want them to be stable and secure, and having the latest features might not be necessary.

But we see more and more often that that distribution channel doesn't work well with applications: you end up with old, buggy, insecure applications because the distribution just couldn't keep up with the upstream update cycle. Why not have a cross-distribution application manager which distributes AppImages? That way, application distribution is an effort concentrated over all the distributions, possibly benefiting the entire community.

Not everyone has root on their Linux box, and even if you do, you ought to be able to install applications as yourself without sudo'ing anything. Right now most Linux applications guide you to a deb/rpm which installs as root (and who knows if you can trust it).

TBH this is just the start of making applications more secure, they ought to be sandboxed too, so one application can't read the data from another application by default (similar to how phones work).

There's also Nix, which I think does a good job of isolation in user space. https://nixos.org/nix/

You can make it slightly harder to infect the rest of the system, but that leaves unchanged the factual truth that the moment you install malware you are hosed.

Wouldn't this raise the issue of application updates? If the app bundles its own libraries, who will take care of updating them when a security issue is detected? That's the point of enforcing shared libraries on Unix systems.

You actually cannot make malicious applications safe; you can, however, make nonmalicious applications inconvenient.

I believe Ubuntu has made some progress in this area with Snap packages.

If you don't share common dependencies you're doomed to have security hazards.

Consider it this way: if you install package FOO with its dependencies A, B and C, and FOO's developer thinks it's "feature complete", the package will never be updated, but every time a vulnerability is disclosed in its dependencies you'll be bitten. For _every_ program maintained this way.

My argument is that you'd have cross-distribution appimage maintainers who would press a button and rerun the build script with the new library if it's needed.

So appimage becomes yet another distro? (with the somewhat unique property that you can overlay it over other distros)

More like app ecosystem.

I really think that although this might be a boon to packagers, it results in an inferior system wherein people end up with bloated, buggy packages.

I don't want to deal with 2 different package managers and new users are apt to be confused by the idea that they get packages from 2 different places.

Yea, I agree. I personally think that this is a huge step backwards.

Look at Android and MacOS. You have these self contained packaging things that contain all their dependencies. You have the same dependencies in each package. You have a TON of wasted space with packages that have the exact same built-in libraries and jar files.

Especially with Android, they could have made a real package dependency system, with slotted installs of all the jar/jar-versions from the official maven repos. When you install an app, it could install all its dependent jars. You'd still be able to do multiple versions of each of those jars, probably easier than it is with standard .so/.dll libraries.

Projects like this are a huge step backwards in terms of DRY principles and general package management.

Is space really an issue in most non-mobile user devices nowadays? Consider the tradeoffs: a bit of space vs. packaging nightmare which results in old, buggy and insecure packages. This was posted a few days back on hn: https://statuscode.ch/2016/02/distribution-packages-consider...

I am not sure if the whole jar package dependency management would work under ART.

It wastes memory too, having multiple copies of the same shared libraries loaded.

Memory spent on executables isn't a bottleneck even on mobile. You have to get into embedded territory for this to become an issue.

I really hate that people have this idea. It's why people create apps which consist of a built-in web server plus their own copy of Chrome that connects to said server.

In the end something that ought to have taken 10mb of space and 20mb of ram requires 100mb of space and half a gig of ram and still manages to be slow on a new system at operations that ought to have been fast on a ten year old machine.

Down with the Javascript generation.

I've been spending a lot of time in the embedded area lately (sub-200MHz, 32kB-RAM automotive systems, firmware and FPGAs) and I agree with you. But there are always tradeoffs. In this case you're trading some memory (which is cheap nowadays) for up-to-date, secure, usable apps, which seems to be a pretty complex thing to do. Nobody has solved it so far. The people using node-webkit are offloading the library compatibility issues to the people who built Chromium and are provided with a consistent container to run their application in.

Or in other words, they use a poor man's Java vm, except bigger and more fragile.

To be honest, chrome-"native"-packaged-webapps, or whatever you want to call them, feel a whole lot better to use than any Java-based GUI app I have ever used, including JetBrains products.

I hate using any of these chrome-"native"-packaged-webapps, though. I just hate Java-based GUI apps more.

The software that we build tends to gain layers. In theory, each layer adds new functionality that couldn't be done on a lower layer. The problem comes when someone unnecessarily implements lower functionality in an upper layer (like reading local files through a web server). It happens because systems are too complex now for a specific dev to know every layer, and new hardware is fast enough that it doesn't matter that you're taking an order of magnitude more resources than you actually need to do the task.

Not so. This matters at scale too. Multiplied by ten thousand nodes, a 50MB wasted RSS becomes half a terabyte of unnecessarily procured silicon. I don't want to pay that tax.

Android and MacOS each have a lot more users than desktop Linux, a lot more applications packaged, and applications get updated in a more timely fashion. Obviously that's not all down to a difference in packaging, but it does seem like the inefficiency of duplicated dependencies isn't all that big a problem.

Android also runs on phones that still mostly have less storage and less RAM than our desktops, so you'd expect saving space to be even more important for Android.

Well, it's the system that is composed of packages. The app images would be distributed through an app center. Most users wouldn't have to touch the system packages in their daily use case.

> preferably with two different managers.

That's overcomplicating things. The "user" manager would have to figure out where the specific distribution is storing this or that lib. Doing it reliably across even a small subset of distributions (say, Ubuntu, Debian, Fedora and RedHat) and a small subset of their releases, would be very challenging.

It would make much more sense to add a "user mode" option for the likes of apt-get, whereby it does not need sudo and it will install the specified package in ~/bin, ~/usr etc, symlinking necessary libraries. That doesn't seem too hard to pull off, in theory.
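There is no official apt-get user mode, but the effect can be approximated with two commands that do exist: apt-get download (fetches a .deb without installing) and dpkg -x (unpacks it anywhere). A hedged sketch; the package name is a placeholder, and the network-touching steps are left as comments so the snippet runs anywhere:

```shell
# Sketch of a no-root "user mode" install under $HOME.
prefix="${HOME:-/tmp}/.local"
mkdir -p "$prefix/bin" "$prefix/lib"

# apt-get download somepackage          # fetch the .deb, no sudo needed
# dpkg -x somepackage_*.deb "$prefix"   # unpack it under $HOME
# PATH="$prefix/usr/bin:$PATH"          # then put it on PATH

echo "$prefix"
```

Library symlinking, as the parent suggests, is the part neither command solves for you.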

> It would make much more sense to add a "user mode" option for the likes of apt-get, whereby it does not need sudo and it will install the specified package in ~/bin, ~/usr etc, symlinking necessary libraries.

And then any random piece of malware running in your browser can in principle hose all your applications. There's a very good reason that ordinary users do not have write access to application binaries on Linux.

Any proposal for untrusted user-level software installation will require, to be legitimate, a well-implemented MAC solution to the problem. Android actually demonstrates this fairly well using SELinux and app permissions.

New software could present its access requests to the user, to be approved or not. Unapproved or undocumented accesses are also displayed as notifications, where declining the running apps access becomes a permanent rule.

Of course, to standardize such a system across distros, either you need an abstraction layer access management syntax, or you have to get everyone to agree to one of SELinux / Grsec / Tomoyo / Apparmor.

Really though I think the obvious answer is that ten years from now we have something like xdg-app shipping generic usage filters and mandating userspace MAC on new programs, along with an xdg -> distro specific library mapping layer from something like appstream.

It would hose only your "usermode" apps, which would be a small subset -- likely smaller, unpopular apps that lag in the distro repository. The system as a whole would remain intact, so you can log off and clean up the mess as root. What hack would rely on some specific app being deployed in user mode anyway? To do what, steal user files it already has access to?

It's obvious that "usermode" should be the exception and not the norm, but from a security standpoint it's exactly the same as compiling and installing with custom prefixes.

> It would hose only your "usermode" apps, which would be a small subset

Until more and more user apps start to get out of sync with the "system" versions of things, and therefore need to be installed with the "user" option in order to work.

> What hack would rely on some specific app being deployed in user-mode anyway?

A hack that doesn't care about specific apps but just wants to compromise whatever it can. Like, you know, a virus.

> To do what, steal user files it already has access to?

And send them to the Internet, without your knowledge. Like viruses already do--only it's a lot harder to get them to run on Linux. At least, it is now.

We use HTML5 clipboard and webcrypto and frequently get support requests from Linux users with outdated versions of supposedly 'evergreen' browsers.

That's hardly related to a package manager. You can for example run Debian unstable to get more up-to-date stuff or mix Debian Jessie with backports. Same package manager, different sources.

I think the whole point of OP is to have completely separate installs (e.g. system apps installed in one root and user apps in another). If you go installing latest apps from different sources/repos, you're bound to face conflicts with the system packages. E.g. while installing packages with CPAN or pip, it is really easy to cause conflicts with the system Perl or Python, respectively. Right now there are lots of varying solutions to address this such as virtualenvs and docker images. These are fine for developers, but generally not acceptable to consumers.

It would be nice to have a safe consistent experience, a generic user-mode package manager.

FWIW, it's generally a bad idea to mix in testing and unstable in a stable system (or even just testing). My recommendation is to run stable along with backports (if needed) -- and use something like schroot[1] for running persistent (separate) sessions under testing, unstable etc.

Schroot essentially wraps up debootstrap and chroot along with mounting /proc etc., and (optionally) bind-mounting /home (which among other things gives access to X11 session cookies, for sharing one Xorg display -- so one could run firefox, gimp or mplayer from an unstable chroot on a stable system). Overall, with SSD prices dropping as sizes are increasing -- it might be better to simply use VMs (one of the nice (potential) things about rkt -- the ability to run a "container" under kvm, turning it into a VM).

[1] https://wiki.debian.org/Schroot

What I'm suggesting is a decoupling between system package managers and application image managers (which should run across distributions/versions).

Edit: thinking about it, this should also be the perfect opportunity for a cross-distribution app store/center.

Thanks for mentioning this. I have been running unstable for many years and when people ask I simply tell them that it's like running latest Ubuntu except i can stay with my favorite Debian.

Yea, I never understood why there's no separation between these two. The fact that apt-get needs the root password only makes sense if apt-get is meant for system packages.

User applications should not require root privileges to install.

Installers regularly install packages to system directories because it would be idiotic on a multi-user system for each user to install different versions of Firefox to their home dir.

It also serves to keep unprivileged users from installing software.

You are free to install software to your home directory, but it shouldn't be the default behavior.

The overwhelming use case for desktop users is there's usually one user on the system.

Which is not a good reason to design anything based on that assumption: it's extremely easy for a single user to use a multi-user system, but not the reverse.

I believe the core problem here (that led to containerization, application images and alike) is that correct packaging for most distros is hard.

There are tools like fpm or even checkinstall that can build simple good-enough-but-not-really packages, but I think maintaining "proper" Debian packaging requires some pretty arcane knowledge that's spread around various pieces of documentation (and, maybe I'm just stupid, but also a lot of trial and error).

Completely agree with you in regards to Debian packaging. I googled around and found at least 5 different official guides (on the debian wiki) all using slightly different approaches. I tried 3 of them before giving up, as none of them seemed to work.

Oh? The basics are not that hard. dh_make will do most of the work for you.

The basics may not be that hard, but the multitude of overlapping tools and frameworks that sit on top of them and try to make things easier is seriously confusing and offputting. For instance, there are build tools called dpkg-buildpackage, debuild, git-buildpackage, pbuilder, cowbuilder, and doubtless some others I've forgotten about. Different guides will recommend different ones.

Once you've made a package, you either try to get it into Debian, or put it in your own repository, both of which come with additional challenges.

I've done some Debian packaging before - both to go into Debian and to put in PPAs. I've given it up: the effort was too much and the rewards too little.

Sad thing is that this is 100% a problem related to the tools for building Debian packages (same goes for RPM). The tools can be replaced without sacrificing binary compatibility at all.

If any aspiring hackers are around, I suggest taking a look at Arch and its makepkg/PKGBUILD tools. Pretty much a simple shell script to define a package, and a uniform tool to build it.

Debian packaging basics aren't hard once you know them, but starting from scratch, it's hard to collate the information. There's lots of stale information out there (including the debian wiki) and disagreement about how to do it - debuild? dh_make? dpkg directly? tar/ar manually?

Similarly, if you want to package up something without a Makefile, you may as well go home. My first packaging attempt was basically to install a tarball at a location (in-house use only), and it wasn't clear that I had to write a Makefile to do this first. And then you get introduced to the bizarre world of make with all of its idiosyncratic rules and behaviours.

Then you get to play with all the dpkg control files, and if you're using debuild, you have to avoid the lintchecker - because you can override lint rules... but the lint profile then disables overrides for some useful ones (the "install to /opt" rule gives me lots of spam). So now you need to learn about the lint-checker so you can do your own profile (or just disable it).

Then, as takluyver says above, you get to add it to a repo so systems can access it...

Once you know all this stuff, it seems pretty simple, but getting over that hump is difficult.

One part of it is that every damn package manager out there (outside of perhaps Nix/Guix and Gobolinux) is hung up on having one canonical version of every lib package.

Meaning that the package manager balks at having lib 1.0 and 1.1 installed at the same time, even though ld can handle it just fine, unless you play sleight of hand with the package names (meaning that you get something like lib0 and lib1 for 1.0 and 1.1 respectively).

This in turn leads to a bunch of busy work regarding backporting fixes, maintaining manager specific patches, and recompiling whole dependency trees on minor updates.

There's nothing fundamental that means that has to be the case; in fact, Debian does (in some cases) ship multiple library versions in a stable release.

The primary reason this is usually not done is the work required: it means you need to support two versions in stable, two (or more?) versions in unstable, &c. This quickly becomes a great deal of work :)

Err, their solution is the one I lay out in the second paragraph, where foo 1.0 becomes foo0 1.0, and foo 1.1 becomes foo1 1.1, simply to get around package manager limitations.

That is where the extra workload is coming from, the need to juggle effectively two package trees, and the patches for each.

If instead the manager was properly able to handle multiple versions, they could all be named foo, and the manager would be able to traverse the tree and see if foo 1.0 could be discarded or not based on the dependencies listed in other packages.

You get something of this nature in Nix (though they take it one step further) and Gobolinux, by using the directory tree as the final arbiter.

On Gobolinux you have /Programs/name/version, thus installing foo 1.0 would end up in /Programs/foo/1.0, while foo 1.1 ends up in /Programs/foo/1.1.

Then as long as you have one or more programs installed that need 1.0, you can leave it in place, and remove it when no longer needed.

For the sake of compatibility, Gobolinux also has a /lib (along with the rest of the FHS) that contains symlinks back to the individual lib files in /Programs, using SONAME to its full potential.
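A toy reconstruction of that layout, rooted under /tmp so it is harmless to run (real Gobolinux uses /Programs and /lib at the root, and the library names and versions here are invented):

```shell
root=/tmp/gobodemo
mkdir -p "$root/Programs/foo/1.0/lib" "$root/Programs/foo/1.1/lib" "$root/lib"
touch "$root/Programs/foo/1.0/lib/libfoo.so.1.0"
touch "$root/Programs/foo/1.1/lib/libfoo.so.1.1"

# FHS-compatible view: /lib holds symlinks back into the versioned
# tree, one per SONAME, so the loader can find both versions.
ln -sf ../Programs/foo/1.0/lib/libfoo.so.1.0 "$root/lib/libfoo.so.1.0"
ln -sf ../Programs/foo/1.1/lib/libfoo.so.1.1 "$root/lib/libfoo.so.1.1"

# Dropping the old version is just removing its directory and link:
# rm -r "$root/Programs/foo/1.0" "$root/lib/libfoo.so.1.0"
ls "$root/lib"
```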

I think lwf was talking about the human cost of having two versions. That doesn't go away when they are both named foo instead of foo0 and foo1.

Ranking package managers by difficulty of creating packages (1-10, lowest to highest):

1. tarballs (slackware)

2. Ports (FreeBSD)

3. Portage (Gentoo, Calculate Linux)

4. pacman (Arch, Archbang, KaOS, Manjaro, Antergos)

5. building from source (Linux From Scratch, Most old installs eventually turn into this)

6. rpm (Redhat, Mandriva, Scientific Linux, UnitedLinux)

7. deb (Debian, Ubuntu, Mint/LMDE, Hannah Montana Linux)

Why Debian and Redhat-derivatives are the most popular, I'll never understand. I have _way_ less headaches maintaining custom Calculate Linux chromebooks than I ever did with fleets of Redhat or Ubuntu laptops.

The web of dependencies for both build and execution can be maddening. This is why apt-get and yum have won: they take care of finding and downloading those 3246536245 libraries which are absolutely essential for each stupid two-liner application out there. And when it comes to large software (which you probably don't manage on a chromebook), good luck downloading and compiling several GBs of KDE or GNOME source code with the right options for your hardware.

running/building a full plasma 5.5 KDE 5.x current. Full install including most office apps (Libreoffice, etc), a bunch of emulators/games (free and proprietary - steam included), and other apps.

Intel haswell/sandy bridge chromebooks with custom hard drives. I think you're greatly overestimating how difficult it is to maintain all of this on Portage. I invite you to try a distribution that isn't Debian or Redhat-based. You might never go back.

A comment like this is even better if you mention what distro you were using and maybe drop a link to a guide showing how easy Portage is to use. Then, people might experience what you describe.

I already mentioned in the great-grandparent that I was doing this with Gentoo and Calculate Linux (which are interchangeable).

I try not to specifically invite people to "install gentoo", thanks to /g/. They'll find it on their own. There's a lot of learning before quickpkg makes an install take 3-5 minutes.

Gotcha. But thanks to /g/? Didn't know that existed so how would I have found it on my own? And "lots of learning" despite you having way less headaches? I'm a little confused as some of your answers inspire more questions.

/g/, the technology troll board of 4chan, has a meme of "install gentoo" whenever someone asks "which is the best distribution to install for a new user?" Those users usually come back about a week later hating everyone for saddling them with such a difficult OS. Calculate Linux has effectively removed most of this barrier, though, making Gentoo really easy to maintain for newbies.

The Gentoo Handbook is an amazing tool for learning Linux. By the end, you have an expert's understanding of how Linux works, and how to install it without a GUI (or even a package manager). I learned it during the "Stage 1 era", when the install started with bootstrapping, then building your compiler before installing the rest of your system. Nowadays, you partition disks, format, chroot in, set timezone and encoding, untar the stage 3 tarball, emerge --sync, build your kernel (which is the hardest part), install a bootloader, install wpa_supplicant, and you're done. It's about 30% more difficult than an Arch install, because you probably will screw up your first kernel configuration and make a nonbooting kernel, booting to a black screen. But unlike other OSes and distros, when it breaks in Gentoo, it's probably your fault.

I have way fewer headaches, because chromebooks are braindead to deploy this way: flash a BIOS, and then install the binaries once booted off a LiveUSB. Quickpkg allows you to make binaries of your existing system (built for Sandy Bridge, until I decommission the C710's, then I'll build for Haswell) with all your custom flags already set, so VLC and other apps are far more robust than their Ubuntu/Redhat versions. Gentoo is also the only distro other than GalliumOS that actually has working Elan touchpads for many chromebooks (thanks to Hugh Greenburg of GalliumOS's patches that I'm maintaining for newer kernels), as the patches have not been included in Arch or other distributions' kernels.

I run a Poettering-free install (JACK instead of PA, wpa_gui instead of NetworkManager, OpenRC instead of systemd), so having one master install I can re-deploy in the amount of time it takes to brew coffee is pretty handy, especially considering de-poetterizing a new Debian/RHEL/Fedora/Arch install is painful, if not nigh-impossible, and at the very least time-consuming.

re /g/

Haha. That is pretty evil given my experiences starting with command line Linux back in the day. Far as learning, I've heard that before about Gentoo. Thought about doing it at some point. Right now, kind of mixed between learning BSD or Linux as the security enhancements I create might need to temporarily stay proprietary to support their development. BSD vs GPL you know. I mean, I'd give back bug-fixes or feature developments where possible but differentiator has to be secretish.

"But unlike Other OS's and distros, when it breaks in Gentoo, it's probably your fault."

I'm avoiding that right now but fine with the concept. Reminds me of Mentor's Hacker Manifesto we used to quote to justify such responsibility.

"so having one master install I can re-deploy in the amount of time it takes to brew coffee is pretty handy"

That is handy. I've always especially liked how the source-based distros were customized to one's own hardware. That drastically increases the initial deployment time but significantly reduces many issues along the way.

There are two solutions for reducing deployment/build time. First is to use distcc on all machines (so when idle, they contribute to the building of any packages). Second is to build one master image, quickpkg it, and then deploy the binaries. I use a combination of both; all binaries are compiled for the Sandy Bridge architecture, so I get most of the benefits (though Haswell/Broadwell gets faster VLC/ffmpeg if I recompile), and they build at night when nobody's around.
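The quickpkg half of that flow, roughly (Gentoo-only commands, so this is a non-runnable sketch; the package name is just an example):

```shell
# On the master install: snapshot an already-built package as a binpkg
#   quickpkg --include-config=y media-video/vlc
# Binary packages land in $PKGDIR; copy that directory to the target,
# or serve it over HTTP as a binhost.
#
# On a freshly flashed machine: install from binpkgs, no compiling
#   emerge --usepkgonly -av media-video/vlc
```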

Why hasn't Gentoo made the same changes Calculate Linux has?

Because it would pigeonhole the distro.

Calculate basically takes Gentoo and precompiles it with certain defaults.

This means that the Calculate maintainers have made certain choices for the user, while Gentoo proper would have left them up to the user.

Why would "building from source" be a difficult way to create a package? ISTM that if you have any software at all, it's already building from source? I understand that building from source is considered difficult for users, but here you seem to be talking about maintainers. Or do you mean setting up $#%&ing autotools? In that case I agree.

The dependency hell from building from source is slightly less of a purgatory than that coming from anything above it. That includes the headaches of autotools.

I dunno, GoboLinux is pretty much LFS with an additional layer of tools. And more often than not, the problem is that of developers doing a crap job of actually documenting their dependencies (or hardcoding paths and/or filenames).

Perhaps it's slightly subjective, since I would place FreeBSD ports in between rpm and deb in terms of difficulty.

Debian packaging isn't too awful, though I don't like how many files are involved (I prefer a single spec file, plus patches, as found in RPM). But, apt repository management is truly terrible. The documentation is laughably bad and disjointed, and it points to several different tools and processes that may or may not work together; my Debian/Ubuntu repo generation script is a ridiculous mishmash of stuff that kinda sorta works if I squint and don't look at it too closely (and a lot of the repo is manually created and maintained because I couldn't figure out any tools to automate it).

Contrast that with creating yum/dnf repos: run 'createrepo' on the directory of RPMs, and you're done! Signing RPMs is much better documented, as well. There were like three or four (conflicting and mutually incompatible) processes that come up when you google signing debian packages and repositories, and I still don't know with certainty what the "right way" is.

I don't know why Debian never got this part of things right; apt/deb has been around for about two decades, you'd think someone would have looked at createrepo and said, "Oh, hey, we should do something like that so it's easier for people to distribute packages for Debian/Ubuntu."
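The contrast in commands, roughly (reprepro is just one of the several deb-side tools that kinda sorta work, and this is a sketch, not a complete working setup):

```shell
# yum/dnf side: one command over a directory of RPMs
#   createrepo /srv/repo/rpms
#
# deb side with reprepro, after hand-writing conf/distributions:
#   reprepro -b /srv/repo/deb includedeb stable mypkg_1.0_amd64.deb
```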

All that said: I'm rarely a fan of "new package managers". I like having one standard package manager on my system (I don't really care whether it is apt or yum/dnf), and I strongly believe everyone ought to be using the system standard package manager across all of their systems even for custom software. I don't like the proliferation of language-specific package managers, in particular. npm, gems, crates, PyPI, eggs, pear, pecl, composer, etc. Shit's gotten ridiculous. Even on my own systems, I can't keep up with what's installed from where. And the problem has gotten much worse over the years; I used to package all my own stuff as RPMs, and all of the CPAN or Python dependencies, as well, using automated tools for building RPMs from Perl or Python libraries (the old cpanflute2 and the distutils RPM target; there are better ways in both cases, now, I think, for those languages). But, now that I have a much more diverse set of systems with more languages and more dependencies, that's become more difficult. And, it's also become the norm for people to document only installation with oddball package managers (npm, in particular, is the way everyone documents everything these days), with no good mechanism to package for RPM or deb.

I dunno, I think we're going down a risky path with all these "package managers" that only work for one little niche and I believe security issues will be the major negative side effect. I mean, I have great tooling to let me know when my RPMs or debs are out of date across my couple dozen servers; it's much harder to know when packages installed via all those other systems are out of date; it becomes a manual process in some cases, and it's also risky because dependencies are not always very reliable in those systems. I do a lot of, "Will this break some other thing when I upgrade this library?" Which I don't worry about with RPM updates from CentOS (because of the compatibility assurances of that system), or from my own company repos (because we do our own testing and understand the parts we're distributing).

In short: Yes, it's hard to package and distribute for RPM or deb. But, not as hard as dealing with a half dozen different package managers, containers, and a wide variety of other things being updated via a wide variety of other mechanisms. The former is hard but quantifiable and manageable. The latter is a nightmare for security and reliability.

From an application developer point of view, distribution package managers are niche: apt or rpm only works for a specific group of users on those Linux distros, whereas npm/pip/gem will work for all their users, including Mac and Windows users, who often outnumber Linux users.

> But, apt repository management is truly terrible.

Aptly is a relative newcomer to the scene that makes repo creation/management much easier. It's actively developed.


Is this of help to you? https://github.com/spotify/debify

On my Linux Mint 17.3:

1. Downloaded the app, opened Nemo and double clicked on the app. Another Nemo window popped up and the app didn't start. Opened a terminal and checked permissions to find that it wasn't executable. The point of the ease-of-use is kind of lost, as a user would be puzzled and give up.

2. How do I uninstall the app? Is it as simple as deleting the file? What if doing that leaves orphan files (that I don't know about) that double-clicking on the appimage file could have created? The website doesn't even mention how to uninstall.

PS: regarding #2 above, I found that right clicking on the Mint Menu entry for the app shows Uninstall option. Clicking on Uninstall removes the entry from Mint Menu. The .appimage file needs to be deleted manually separately (which kinda makes sense). I just hope it hasn't left orphan files.

As to 1): you downloaded the app (.appimage) and double clicked on it. Why? Nowhere in any documentation does it say that .appimages are executable out of the box. In fact, it's a security feature that they aren't. In all places that I have seen .appimages available, there is a statement that says the user has to go into permissions and make the file executable before it will run. Makes sense, since most malware is made to run without user intervention. If an app doesn't need you to run it, then it's the perfect platform for malware/viruses.
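In terminal terms, that permissions step is just this (the filename is made up, and the dummy `touch` only stands in for the download so the snippet is self-contained):

```shell
# Stand-in for a freshly downloaded AppImage (filename is illustrative):
touch Some_App-x86_64.AppImage
# The "go into permissions and make it executable" step:
chmod +x Some_App-x86_64.AppImage
test -x Some_App-x86_64.AppImage && echo "executable, ready to run"
# ./Some_App-x86_64.AppImage   # then launch it as usual
```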

> Nowhere in any documentation does it say that .appimages are executable out of the box.

Sure, but literally the first thing on the landing page is...

> As a user, I want to download an application from the original author, and run it on my Linux desktop system just like I would do with a Windows or Mac application.

On my Windows box, if I download an exe, Windows won't let me run it until I go and "unblock" it. So it seems the experience is sort of similar.

> Nowhere in any documentation does it say that .appimages are executable out of the box.

In fact, just above the download button for the GP's download, it tells you to make it executable before running.

   What if doing that leaves orphan files (that I don't know about) that double-clicking on the appimage file could have created? 
I'd imagine this is true for all package systems like yum etc. How can this problem be solved?

With APT you can run apt-get purge to delete most configuration files and such, if it is packaged properly.

That's still a pretty big "if".

Not really - in my experience it works quite well.

So every time there is a glibc / OpenSSL / any other security update, we will have to update all AppImage programs as well?

With the new "statically link all the things" trend from go and rust, that's coming anyway.

Rust has always been capable of dynamically linking, and I believe that Go is gaining support for dynamic linking sometime in the future as well.

Follow-up question: is there a built-in app update mechanism? If not, this isn't really a replacement for package systems.

AppImageUpdate lets you update AppImages in a decentralized way using information embedded in the AppImage itself. No central repository is involved. This enables upstream application projects to release AppImages that can be updated easily. Since AppImageKit uses delta updates, the downloads are very small and efficient.


Yes. But it's still better, since your distribution's package manager can just pull directly from upstream rather than rebuilding everything itself.

Edit: Come to think of it, considering how many applications out there haven't had any updates in years... that might not be such a good idea.

As near as I can tell, yes.

Must application binaries statically link libc and SSL when making programs for packaging into an AppImage?

I haven't dug far enough into this specific project to know if it's static or dynamic linking, but that just doesn't matter.

Each app has its own copy of libssl etc. embedded into the prepackaged "binary" which is executed. That's enough to know it's going to lead to all sorts of suffering when you actually try to rid yourself of $CVE of the month.

You can use either static or dynamic linking. An AppImage is really just an ISO container wrapping around your binaries.

So, yeah. As I said, in this context it just doesn't matter whether you're statically or dynamically linked against glibc: every single AppImage published before Feb 15th or so requires an update.

What percent of AppImages in the wild have shipped an update? Of those, how many have updated previous stable releases rather than just the latest version? I suspect very few.

[EDIT - Typos]

Regardless of dynamic or static linking, there's the fact that "users don't upgrade", so you've lost either way.

Maybe not, but since "Every AppImage contains an app and all the files the app needs to run," even if you were dynamically linking, you'd be linking against a lib contained in the AppImage. So each app would still have its own glibc that would have to be updated.

"The AppImage needs to include all libraries and other dependencies that are not part of all of the base systems that the AppImage is intended to run on" [ https://github.com/probonopd/AppImageKit/wiki/Creating-AppIm... ]

Do you have any specific problem with that?

The problem with that should be obvious. Package managers can, right now, update the library that multiple applications use without updating the applications/executables too. One update rather than hundreds or thousands.

You could also just not update, but then you'll have massive security holes in your computer.

There's a reason Linux adopted the shared library model.
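A toy illustration of the scale difference (empty dummy files in a made-up layout): three bundled apps means three private copies of the "same" library, each a separate update, versus the single shared copy a package manager patches once.

```shell
# Three pretend apps, each bundling its own private copy of "libssl":
mkdir -p /tmp/bundle-demo/foo/lib /tmp/bundle-demo/bar/lib /tmp/bundle-demo/baz/lib
for d in /tmp/bundle-demo/*/lib; do
  touch "$d/libssl.so.1.0.0"
done
# Every bundled copy is a separate thing to patch after a CVE:
find /tmp/bundle-demo -name 'libssl.so*' | wc -l   # 3 copies, 3 updates
```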

I'm not hrtghrth3, so I can't speak for him... but...

Yes, I have a problem with that. I trust that Debian/Ubuntu/RHEL will ship a new package for every critical CVE promptly, without forcing me to upgrade to the latest upstream version. I have zero faith upstream maintainers will do the same - which leaves me with two choices:

1) Pretend there is no CVE

2) Use the latest app version, bringing with it all new bugs, workarounds, incompatibilities and so forth - and hey, the latest version might not even have the fixed code in it.

> I trust that Debian/Ubuntu/RHEL will ship a new package for every critical CVE promptly, without forcing me to upgrade to the latest upstream version.

There was an article just a couple of weeks ago pointing out that distributions frequently don't fix security issues:


No doubt it's better for high-profile applications, but there are far more applications that people want to use than distros have the resources to issue security updates for.

Sure, you're boned if the security flaw is in the custom protocol handler of some random package of which you are one of 37 total users. Typically, however, security code does not reside in such packages. They typically link to popular libraries for e.g. TLS support. Those popular libraries are kept current by reputable distros. You shouldn't be using low-volume packages that implement their own security, anyway.

One of the packages in the article I linked above was Wordpress, which is not exactly low volume. I work on the Jupyter/IPython notebook - we found a security bug last summer, got a CVE (CVE-2015-5607), issued patches... and the version in Debian unstable hasn't been fixed, seven months later.

If distros can't do security maintenance for the software people use, claiming that they shouldn't use that software at all is no kind of an answer.

I didn't even know that Debian has a WordPress package. I always downloaded it from here https://wordpress.org/download/ and applied updates as soon as my WP servers asked me to. I don't trust Debian maintainers to be faster than that.

As a comparison: WordPress is at 4.4.2, Debian 8 has it at 4.1, and Debian 7 at 3.6.1. You can't use those; you must use the one from the developers.

Same thing, for example, with Ruby: Debian 8 is stuck with 2.1.5 (November 2014), which is not even the latest one of the 2.1 series. The current version of the language is 2.3. And Node.js? 0.10.29 (June 2014). Again, you can't use the packaged versions and you must go to the developers, maybe through rvm, nvm or other language-specific package managers. I really don't understand why they bother packaging those languages. They should have tiny packages that explicitly tell you that you are on your own and redirect to the appropriate package manager for the language.

If one can't build a server app from source then one shouldn't be connecting it to the internet all by oneself. There are any number of companies who are happy to host WordPress for reasonable fees. Keeping software updated is hard work, and those who do it for free will keep their own schedules. I'm sure it's frustrating to maintain a package that's "in the middle": too popular to just say "install with pip", not enough volunteers to maintain the Debian package in-house. Perhaps that means that Debian is too accepting of packages. I don't think it's an indictment of the idea of a package manager. I also doubt that lots more free labor will appear now that there is yet another new zipfile format to keep maintained.

If building it from source is acceptable, why would using packages direct from the maintainers not be? For anything in an interpreted language, there's little difference anyway.

I agree that Debian is too accepting of packages. We don't recommend that anyone installs Jupyter/IPython with apt, and I wouldn't mind if Debian removed it from the repos. But if there are all these packages that shouldn't be distributed through Debian, we need some other way for users to get them conveniently. We often use language package managers like pip, but they're not great for desktop applications - they can't easily set up a launcher entry, for instance. That's where something like AppImage comes in.

I think it's perfectly acceptable for project maintainers to provide rpms, debs, ppas, etc. as long as those artifacts are organized in largely the same way that distro maintainers would organize them. (I doubt any distro maintainers would disagree.) I don't think it's a good idea to pack lots of already-distro-packaged libraries into those artifacts. Building for many distro formats is a pain, but I doubt that building this new format in a universal way will be easy either. If you have a different AppImage for each distro/version combo, is that an improvement?

For the packages you've mentioned, live active projects with maintainers who can produce this distribution format but can't deal with apt, maybe the giant ball of wax is okay. I won't be installing anything this way, but clearly I'm not the target user. It seems certain, however, that in future we'll be hearing about lots of vulnerabilities attributed to AppImages linking in old library versions.

Having re-transitioned last year back to the land of Free *nixen after a long stint on OS X, I marvel at how awesome package management is, and wonder how I ever gave it up. Even where I'm now forced to use OS X (work), I try to brew / brew cask everything.

The Windows / Mac distribution model feels more like a necessity on a closed system than something that is ideal.

EDIT: I realize that comment sounds more critical than what I intended. While I have a preference for package management, others clearly don't. Also, I might be inclined to use something like this for fast-moving big apps where I want the newest features. I could see trying Guile-Emacs or QGIS this way.

What OS X at least gets right is that OS-bundled software is immutable, or intended to be. If you want a recent version of e.g. Python, you install it separately and add it to your $PATH instead of changing the base. Then any app that depends on Python can continue using the system version.
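That PATH-over-base approach is easy to sketch (all paths here are illustrative, and the stub "interpreter" below is a fake so the example is self-contained; a real install would live somewhere like /usr/local):

```shell
# Fake a separately installed interpreter rather than touching the base system:
mkdir -p /tmp/opt-python/bin
printf '#!/bin/sh\necho "separately installed python"\n' > /tmp/opt-python/bin/python
chmod +x /tmp/opt-python/bin/python

# Prepend it to PATH; the OS-bundled copy stays untouched, and anything
# that depends on it can still find it at its absolute path.
export PATH="/tmp/opt-python/bin:$PATH"
command -v python   # -> /tmp/opt-python/bin/python
```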

I am profoundly uninterested in a third-party package manager that does not provide any sandboxing features. xdg-app has them, but AppImage does not mention sandboxing on its web site or in its README.

Well, first of all AppImageKit is not a package manager. For a comparison with xdg-app and other systems, see https://github.com/probonopd/AppImageKit/wiki/Similar-projec...

As for sandboxing, this is definitely an area which we would like to add to AppImageKit, e.g., see https://github.com/probonopd/AppImageKit/issues/77 - thoughts and pull requests welcome!

Why are you not using containers for sandboxing?

Probably just because I haven't had the time to investigate them yet. Can containers be used without the need for root access? Pull requests welcome.

I agree, sandboxing is a must-have feature. xdg-app has a system of runtimes, so it can run on any distro as long as you have installed the runtime needed by the app. Also, GNOME and Papyros will have it, while KDE seems interested in it too, so it should have quite a lot of support.

xdg-app looks good, but I think we're still at least a couple of years away from the point where a developer can use xdg-app to reach the majority of desktop Linux users. That's a long time in software development terms.

You're surely right, but to me the important aspect is not only to make a package that runs everywhere but also to bring the mobile security scheme to the desktop, i.e. have apps that are sandboxed and run on top of a well-defined runtime with well-defined APIs. In that sense xdg-app has a bigger mission than AppImage, so maybe we can't really compare them.

You can ship xdg-app + your app in the meantime? Consider it a 'polyfill'. Some best practices around this would be great.

I too love package management like the rest of the people on this thread, but the existence of this project confirms that we have a problem: distribution is pretty hard.

Linus Torvalds addresses some core issues in his "DebConf 14: QA with Linus Torvalds" talk starting around 5:40 https://youtu.be/5PmHRSeA2c8?t=5m40s

Tools like fpm help a lot. Developers have a lot of options now too, where they can install Python modules and Ruby gems as regular users, to their ~/.local directories. It's not ideal, but it helps.

And despite all the things I hate about systemd, it does make packaging easier by providing a unified initialization script format. There have been projects that have tried to do just that piece as a drop-in replacement (uselessd), but they have gone unmaintained.

This is very similar to the approach that we took in our InstallBuilder cross-platform installers (http://installbuilder.birock.com), embedding a filesystem in the executable that gets mounted at runtime. If you do it right, it can support a wide variety of Linux distributions and significantly decrease the amount of pain end users and app developers experience. Those who disagree with this approach and believe "this is not the Linux way" should take a look at the referenced Linus Torvalds video.

Just a heads up, your link is pointing to the birock.com domain instead of bitrock.com.

This is the correct URL: http://installbuilder.bitrock.com

I find it interesting that, in "post-Snowden" 2016, the web page with details about a mechanism to produce fat executables (and links to a demo app) is not protected with SSL. Certificates were cheap before... Now they are free, thanks to Let's Encrypt.

There's really no excuse for this.

Well, the code is on https://github.com and my AppImages are on https://bintray.com/probono/AppImages - once https://github.com/isaacs/github/issues/156 is closed, appimage.org, too, can go https.

The cost of the certificate is a very small part of the overall cost of a proper SSL/TLS implementation.

If you don't want to exclude older browsers, you need a dedicated IP address, or you need a system to manage putting multiple names on one certificate. Let's Encrypt is a great option for multi-SAN certificates, as long as you don't care about Windows XP users.

If you have any kind of redundancy, doing perfect forward secrecy gets much harder. The open source approaches to scaling TLS along with PFS are bleeding edge, poorly documented, and may involve writing some code.

I agree, TLS everywhere is a worthy goal. But I think it's easy to underestimate how complicated it can get, especially at scale.

> If you don't want to exclude older browsers, you need a dedicated IP address, or you need a system to manage putting multiple names on one certificate. Let's Encrypt is a great option for multi-SAN certificates, as long as you don't care about Windows XP users.

This is a website for AppImage. I doubt they're targeting XP users.

> If you have any kind of redundancy, doing perfect forward secrecy gets much harder. The open source approaches to scaling TLS along with PFS are bleeding edge, poorly documented, and may involve writing some code.

This is simply a brochure website, so this does not apply.

For more complex applications or websites, there is a certain degree of engineering required to support HTTPS-by-default. But in today's world it is a necessity.

Windows XP users ought to be shown a fullscreen banner ad warning them to upgrade.

One of the best things about Linux systems is having this huge pool of apps that you update centrally that are guaranteed to work on your system.

This is a backwards step, if anything.

Well, this is what Linus had to say about packaging: "I've seen this firsthand with the other project I've been involved with, which is my divelog application. We make binaries for Windows and OS X. We basically don't make binaries for Linux. Why? Because binaries for Linux desktop applications is a major f*ing pain in the ass. Right. You don't make binaries for Linux. You make binaries for Fedora 19, Fedora 20, maybe there's even like RHEL 5 from ten years ago, you make binaries for debian stable, or actually you don't make binaries for debian stable because debian stable has libraries that are so old that anything that was built in the last century doesn't work. But you might make binaries for debian... whatever the codename is for unstable. And even that is a major pain because (...) debian has those rules that you are supposed to use shared libraries." (August 29, 2014 DebConf Q&A with Linus Torvalds)

This attitude is exactly why Linux has such a horrible, horrible packaging experience. Building cross-distro packages is awful. Teaching users how to install packages is awful. Installing custom apt sources or repos to get newer versions of packages is awful. And we're getting stuck up on some inconsequential thing like a library will be duplicated. Guess what, OSX has been doing it successfully for years.

The Linux community is missing the forest for the trees.

I'm confused and couldn't find an answer searching for "xdg-app" in AppImage's website/github. Is this a whole alternative to xdg-app, or a layer on top of it?

If it's indeed an alternative, why would anyone choose this rather than xdg-app? xdg-app being backed by fedora/freedesktop/gnome folks might mean more traction and maintenance, doesn't it?

EDIT: okay, saw https://github.com/probonopd/AppImageKit/wiki/Similar-projec... mentioned below by the author, sorry for the noise.

From an application developer POV: AppImage looks like something I can use to package an application today. xdg-app looks like what I might want to use in a few years.

Why not just use a package manager? If the one in your distro sucks, find a better distro. (Arch Linux would be a good example.) The rule of thumb with OSes and implementing features is to avoid reinventing the wheel.

Ports-like tools work very, very well on Linux. Binary distribution works fine too. Also, the shared libraries of each application don't have to be (and usually shouldn't be!) bundled with the application.

Finally, I'm concerned about the MIT licensing, with all the GPL code floating around in that repository.

Did you read the intro paragraph on the website?

"As an application author, I want to provide packages for Linux desktop systems, without the need to get it 'into' a distribution and without having to build for gazillions of different distributions."

So you are a developer. You don't want to build gazillions of packages, but you want to target lots of users. Suppose you build a package for one distribution. Good luck convincing all your users to switch to that distribution.

I guess you are using Arch. Suppose the developer's opinion is that Gentoo's package manager is better, so he only bothers making a Gentoo package. He then tells you to switch to Gentoo if you want to use his app. Will you seriously do that?

I am sorry, but if YOU, the author, will ship MY GPLed library with YOUR application, then YOU are FORCED to open YOUR code under MY terms. Understand?

Don't steal my code, please. Don't bundle it with your proprietary app without my written permission.

What does the GPL or using libraries have anything to do with the topic discussed?

Bundling a GPL library with your application, even as a shared object file, is not allowed unless your application is also under the GPL.

Furthermore, the application seems to include parts of Linux and certain other GPL'd code files while trying to maintain an MIT license.

This is rife with licensing concerns, only one of the many concerns I have with it.

I get the source, I follow the instructions for building and installing it (usually 3 commands; one to configure, one to compile, one to install). That's usually enough to get it working on just about any distro.

If I'm feeling especially up for it, instead of running the install step immediately, I throw together a PKGBUILD so I can use Arch's package manager to manage it (like this one: https://aur.archlinux.org/cgit/aur.git/tree/PKGBUILD?h=therm...). Gentoo has a really similar system, so I could just about copy the dev's script wholesale. Then instead of running the commands myself, I run "makepkg && sudo pacman -U *.pkg.tar.xz".
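A minimal PKGBUILD of that shape looks something like this (every name, version, and checksum here is a placeholder):

```shell
pkgname=someapp
pkgver=1.0
pkgrel=1
arch=('x86_64')
source=("someapp-$pkgver.tar.gz")
md5sums=('SKIP')

build() {
  cd "someapp-$pkgver"
  ./configure --prefix=/usr   # the same steps as a manual install...
  make
}

package() {
  cd "someapp-$pkgver"
  make DESTDIR="$pkgdir" install   # ...but installed into the package dir, not /
}
```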

Oh, you mean for "regular users". Yeah, if you want to support those, you need to do some legwork. Regular users who have no intention of learning how to use Linux usually aren't (and shouldn't be!) using Linux. Mac OS and Windows have support teams and wide adoption. Why use Linux if you don't have to?

Please note that I'm not trying to sound elitist. But why would you use an operating system that has a steeper learning curve as a regular user?

I'm actually happy that Linux lacks a central concept of package manager because it encourages experimentation. Everyone gets to figure out what kind of package ecosystem they want to see most. We get to choose instead of just getting whatever comes with the computer.

This is missing the point. Linux distributions do not lack package management options, they lack stable and sane APIs for developers to work against. No amount of static linking and binary bundles can fix that.

The Linux Kernel interface does an excellent job of remaining stable. There's a surprisingly small set of interfaces that the kernel actually exposes to userland, and in the words of Linus, "WE DO NOT BREAK USERSPACE!".

I don't see the lack of stable APIs elsewhere as an actual problem. The biggest problem is that it makes life harder for proprietary software developers - it's pretty much mandatory for them to setup automated CI & release processes if they want their software to actually be usable.

There's a huge upside to not caring about a huge stable userspace API: it's much easier to continue to evolve if you just stop caring about backwards compatibility. And often, backwards compatibility means giving up your future to hold on to your past. The difficulty of getting anything done grows exponentially more complicated, often with few significant tangible benefits.

What exactly was achieved in the backwards compatibility nightmare that is Gnome 3, for example?



All things UI in Linux distributions go through so much insane thrashing that very very few application developers want to bother. There are no tangible benefits to UI thrashing; people use systems for software, not window chrome!

This is because the developer community, aside from some long-time system hackers like Linus who know what's at stake (most of whom are Linus's age or older), do not think in terms of building software that will last and adapt to changes. They think in terms of "old and busted" and "new hotness" and that their job is to migrate everyone off the old and busted and force the community to adopt new hotness. Hence the churn in GNOME, hence systemd.

Desktop Linux is definitely still maturing...

I'm not up on the latest in Gnome-land and don't know the background on the issue you linked, but it looks like the benefit is that they get to stop maintaining a bunch of code for a UI feature they've decided isn't actually a good idea. That will make it easier for them to continue to build reliable software, as there will be less legacy code to consider when writing future code.

The UI churn you talk about may be because the "window manager" concept is probably fundamentally flawed. Creating a coherent and sensible UI when you have to target a whole suite of different window managers which may use entirely different UI paradigms is... probably a nigh-unsolvable problem which might not even be worth working on.

The only time you run into a WM issue is when you try to ape the OSX singular menubar.

As long as you go with the Windows thinking of having the menubar inside the relevant window, your UI will work across just about every WM out there.

I really don't see why the UI world is so hell-bent on moving away from the menubar. That is, unless they are stuck on turning every UI into a touch UI...

> Desktop Linux is definitely still maturing...

You're such an optimist.


c 2003

Yup, and it seems we forgot about static linking, then broke it, and now are reinventing it in new and convoluted ways.

I do think Linux could make smarter use of static linking in places. Weird, esoteric and tiny libraries with unstable APIs make terrible shared libraries and are a constant source of irritation.

Static linking has its place, and so do shared libraries.

This is what powers [1]PortableLinuxGames, and it often comes in pretty handy.

[1] http://www.portablelinuxgames.org/

So this attempts to solve the problem of shipping and running desktop applications in a distribution agnostic way.

It cannot actually do this, however, because in order for an application to actually run, you need to ensure that its dependencies (in particular shared libraries) are installed on the OS in a version that's compatible with the application, and the documentation essentially hand-waves the problem away:

  Gather suitable binaries of all dependencies that
  are not part of the base operating systems you
  are targeting. For example, if you are targeting
  Ubuntu, Fedora, and openSUSE, then you need to
  gather all libraries and other dependencies that
  your app requires to run that are not part of
  Ubuntu, Fedora, and openSUSE.
So what does an application author do if the application requires OpenSSL, which exists in multiple ABI incompatible versions in different versions of distros?
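The dynamic loader matches exact sonames, so an app linked against (say) libssl.so.1.0.0 simply won't load on a distro that only ships libssl.so.1.1. You can see which versioned sonames a binary demands with ldd (using /bin/ls as a stand-in for the hypothetical app):

```shell
# List the versioned sonames a binary was linked against.
# /bin/ls stands in for your application here.
ldd /bin/ls | awk '$2 == "=>" {print $1}'
```

Any soname in that list that isn't shipped by a target distro, in exactly that version, has to be bundled or the app won't start there.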

xdg-app actually solves that problem with its Runtimes - you create an xdg-app application by building it against a SDK that corresponds to a particular Runtime, which won't be updated in ABI breaking ways. Application authors know exactly which libraries they can rely on and which they have to bundle.

The problem boils down to shared library version number compatibility. The idea of shared libraries is to reduce storage space (disk and memory) at a small processing overhead. Note that this storage constraint becomes less critical these days.

Another benefit of shared libraries is security fixes and this becomes more and more important.

The only solution I see is that distros must preserve the role of shared library managers and support cohabitation of many library versions. The shared libraries have been designed to support this cohabitation. Any app that doesn't support this cohabitation should be fixed or rejected.

Shared libraries should have version, release, and patch numbers. Apps should only depend on the version and release numbers; the patch number is for bug fixes.
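The cohabitation described above is exactly what versioned sonames provide; a sketch with dummy files (library name hypothetical) shows the convention:

```shell
# The soname encodes the ABI version; each app resolves the symlink for
# the ABI it was built against, so incompatible releases coexist.
# (Dummy files; real libraries would be actual ELF objects.)
mkdir -p /tmp/soname-demo
cd /tmp/soname-demo
touch libfoo.so.1.0.5 libfoo.so.2.0.1   # two ABI-incompatible releases
ln -sf libfoo.so.1.0.5 libfoo.so.1      # resolved by apps built against ABI 1
ln -sf libfoo.so.2.0.1 libfoo.so.2      # resolved by apps built against ABI 2
ls -l libfoo.so.*
```

Patch releases just retarget the symlink (libfoo.so.1 -> libfoo.so.1.0.6), so every app on the system picks up the bug fix without relinking.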

Users should have specific permissions to add new software, which can be enforced by write access to the shared library directory. Users should not be allowed to install crappy and insecure software/libraries on computers shared with other users.

Issues others mentioned are the bundling of libs as well as the lack of sandboxing. But I think the developer experience of xdg-app is superior, especially if you just want to make a small change to an app without needing a day to set up the dev environment (and maybe even another distribution) just for compiling the stuff.

So... I have a project on my plate which is basically a firmware updater for a project based around the ESP8266 (see http://magicshifter.net/ if you're interested).

My question to anyone who knows the answer: would it be possible to write an app (in Linux, of course) which accesses /dev/tty.FORESP8266 and writes/reads raw blocks, such that it could be bundled into an AppImage and run on, basically, OSX and Windows as a Linux app, and still have the raw I/O access it needs to perform a firmware update?

If so, I'm willing to expend the time to learn how to use it and build this app .. it seems to me to be a more interesting approach than using, for example, Qt to build a cross-platform serial i/o app ..

Bear in mind AppImages are not Java; they run only on Linux, not on OSX or Windows.

Anyway, I'm using Qt5+QtSerialPort myself inside AppImages, and it works beautifully.
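For a sense of how little machinery raw serial access needs: nothing about AppImage packaging restricts access to device nodes, so even plain shell can poke one. A hedged sketch (device path, baud rate, and bytes are all hypothetical; normal permissions, e.g. dialout group membership, still apply):

```shell
# Raw serial I/O from a script; no toolkit required.
DEV=/dev/ttyUSB0                 # hypothetical ESP8266 serial device
stty -F "$DEV" 115200 raw -echo  # raw mode, no line-discipline mangling
printf '\300' > "$DEV"           # write a raw 0xC0 byte to the device
head -c 16 "$DEV" | od -An -tx1  # read and hex-dump a short response
```

A real flasher would of course implement the bootloader's framing protocol on top of this, but the device access itself is just ordinary file I/O.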

"As a user, I want to download an application from the original author, and run it on my Linux desktop system just like I would do with a Windows or Mac application."

what? no, man... this is the win9x "freeware" application install.exe, with my machine filled with crap

If you install an untrustworthy application, it's game over no matter which packaging model is involved. That doesn't mean it's not worth the effort to make sure you can easily install trusted apps directly from the author.

What if you want to install the latest version of VLC?

linux repositories, whatever distro you choose, are great because of that: I don't trust $developerfoo or $startupbar, I just trust my distro packager.

BTW: normally I don't care if I'm not using the latest version of $program. Do you REALLY need to always run the latest tip of git of every program you use? Oh, and who says the latest version of $program is even packaged?

Sometimes you just need the latest version. KiCAD has recently gone through a development sprint after being dormant for years. My distro offered an ancient version without the new features. After trying to resolve source-build dependencies for an hour, I just ran the latest Windows binary using Wine, and had no issues using it. It was a bizarre experience.

I wish we could kill this idea that the Windows/Macintosh model of running around the web finding random binaries to install is a good one. Every time I have to use a Mac or Windows machine and do this, I find it a major chore, and also get pretty firmly freaked out by the number of spoof application homepages with trojaned installer "bundlings" flying around the web. It could be an easy mistake for a less savvy user to make.

As for "it just works", I do wonder how long the rest of you have spent trying to get Postgres and psycopg2 reliably working together on a Mac. (Yeah, Postgres.app "just works"...)

It's a one-command, ten-second install on my Debian machine.

You shouldn't run "random binaries", but binaries from the original author of the software. E.g., Scribus from https://www.scribus.net/ or Subsurface from https://subsurface-divelog.org/. If you don't trust the original application author, then you had better not run the software at all.

For the average user this relies on the original author having the resources to be the top hit on google.

As long as you do proper code signing and reproducible builds, your security concerns go away. https://defuse.ca/triangle-of-secure-code-delivery.htm
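Of the properties in that link, integrity is the cheapest for a user to check. A minimal sketch, with hypothetical filenames and a stand-in "download" (a detached signature, e.g. gpg --verify, would additionally prove authorship):

```shell
# Minimal integrity check: compare a download against a checksum obtained
# over a trusted channel before making it executable.
cd /tmp
printf 'pretend this is an AppImage\n' > MyApp.AppImage  # stand-in download
sha256sum MyApp.AppImage > SHA256SUMS   # normally published by the author
sha256sum -c SHA256SUMS                 # fails if the bytes were tampered with
chmod +x MyApp.AppImage                 # only now make it executable
```

This only helps if the checksum travels over a channel the attacker doesn't control, which is why signatures and reproducible builds are the stronger legs of that triangle.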

I used to be interested in this kind of thing but it turns out lxc containers are lightweight and do the isolation thing well so let's use them!

Just need something in the file manager to recognize container images and run them like an app.

What about the stereotypical non-technical relative, whom you've moved to Linux (because all they do is web and email and messaging) to reduce your family support effort? Is it reasonable to expect them to run a container and run things in it? It seems like more support load, at least.

This "drop this in and run it" seems a lot more promising for that use case. Similar to freezing a python app, I suppose.

It all depends on a simple interface, no reason containers couldn't be run on (double) click, or put in the main menu, etc.

This is pretty exciting to me. At my job we create a Linux desktop app using NW.js, which basically has Ubuntu 12.04 as a minimum requirement. Unfortunately almost all of our customers use RHEL/CentOS 6.x, which dates from 2010. I've gotten NW.js compiled in CentOS 6.x (and submitted patches to it as well as Chromium, which it is based on), but Chromium is a famously moving target and the latest betas will take even more work to backport. Hopefully this will solve this problem.

Isn't this what Docker's original design goal was?

> "As a user, I want to download an application from the original author, and run it on my Linux desktop system just like I would do with a Windows or Mac application."

Has anyone ever said such a thing? My guess would be the exact opposite: users like the comfort of installing via the OS package manager, rather than hunting for binaries on the internet.

This seems like it does fill a need, but what I'd really rather avoid is the Windows scenario, where each and every single app has its own auto-update manager, in addition to the system update manager. That just becomes a redundant mess very easily.

I have to wonder if the Subsurface demo was key to achieving the endorsement from Torvalds.

How would this relate to AppC / RunC?

Why not take the opportunity and package app in a way that isolates them like SubgraphOS is trying to achieve? :)

This uses a FUSE filesystem to intercept and rewrite paths. Anyone know the performance hit of doing this?

Depending on the scenario, an app packaged as an AppImage may launch as fast as, or sometimes even faster than, an installed app (due to the compression). In most cases, there will not be a noticeable difference for normal desktop applications. Since an AppImage is also a valid ISO, you can loop-mount it and copy its contents wherever you like, and do a comparison.
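The loop-mount inspection mentioned above, spelled out (filename hypothetical; loop mounts need root):

```shell
# A type-1 AppImage is a valid ISO 9660 image, so its contents can be
# inspected and benchmarked without ever executing it.
mkdir -p /tmp/appimage-mnt
sudo mount -o loop,ro MyApp.AppImage /tmp/appimage-mnt
ls /tmp/appimage-mnt           # AppRun, icon, bundled binaries and libs
sudo umount /tmp/appimage-mnt
```

Copying the mounted contents out and launching the app from the plain directory gives a direct comparison against the FUSE-backed launch.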

So it's shar that cleans up after itself with LD_LIBRARY_PATH set for you?

Do you guys remember those "Portable" Windows executables?

I do - they worked fine for me. Did they give you a hard time? I ask because of the quote marks around portable.

Some better than others. I was able to grab a copy of portable Office (Office 2007?) that ran fantastically on x86. I also went through the trouble of sandboxing every one I ran, due to the unknown packing mechanics... Cool stuff nonetheless.

Yup and they were fantastic! Mac OS X apps can work in much the same way. It's nice being able to download something and know it'll "just work".

Obviously it's terrible for getting security patches. Not entirely sure what to do about that. It would be nice to make package management more consistent across distros but I digress.

a.k.a. how Windows executables more or less used to work before the crawling horror that is the registry.

Special support for "portability" is an indicator of a design flaw in your OS.

Oh exploitable!

Ugh, please don't fuck with my scrollbar.

It even breaks the trackpad back gesture. This is ridiculous.

Agreed. I took a look at the website, tried to scroll, closed the window right away due to it messing with the scrollbar.

I just made a pull request for fixing it.

Thanks, looks like it was merged.

Congratulations, you just have invented Windows ;)

is it safe?

Of course not. Hehehee. Stuff like this makes people like me happy, because of all the exploits.

As a user, I want to download an application from the original author, and run it on my Linux desktop system just like I would do with a Windows or Mac application.

Please pull over. I want off this ride.

Why the hell are we regressing to shipping around hackily built binaries?

Because users and developers both want to be able to download and use new versions of software at a release cadence that makes sense for that application. The "every application gets the same release cadence no matter what" approach only appeals to people making distros.

This. Distro packaged applications are almost always woefully out of date, because they're frozen before the distro is released, and then you use the stable release of the distro for months or years.

But even if distros adopted some kind of half-rolling-release model so applications could be updated quicker, developers either have to manage N different packaging systems (apt, rpm, etc.) or rely on other people to deal with packaging for different distros, and lose control of a crucial part of their user experience.

Then there's dependencies. Linux distros go to one extreme (no bundling), while the popular approach to developing applications is to bundle most libraries so that you know exactly what version you're working with. There are arguments for both approaches, but it's another source of friction between developers and distros.

Distro packaging works for computational infrastructure that's widely used and not changing much, but it's a poor fit for what users and developers expect for actively developed end-user applications.

Shameless gentoo plug: gentoo does rolling releases. There's still stable v unstable but it's per-package. I've run everything unstable for years (except the kernel and default gcc which I update only infrequently). Most packages get updated unstable ebuilds within days of release, probably because with nice upstream choices, a new ebuild can be made with a change on a single line. I've even made such an ebuild myself with no experience when I needed today's version.

There's even a way to make an ebuild for nightlies, although you can get build breaks with those.

People making distros, as well as anyone who has been bitten by unreproducible binaries generated by hand-crafted/manually driven build processes.

When I install software from a distro, it's been vetted and I can safely assume the software has a sane reproducible build process, or at a minimum, has had a sane reproducible build process added by the package maintainer.

If a new standard wants to solve that process, I'm all for it, but I demand reproducibility (or somebody to hold financially responsible, in the case of closed source) in my software builds.

Nobody except weird Linux nerds and developers cares about any of the things you listed.

Normal people just want to click a link and have their app install and run.

I don't know why you are being downvoted. Normal people just want to download the software, install it, and run it. As a developer, sometimes I wish for the same, instead of hunting down dependencies.

Yes, but the logical extension of that argument is that people don't care about food safety because they don't know what E. coli is.

And that is true.

So, the problem is not package managers per se, but the velocity of updates? Fix that instead? (see Arch Linux).

What if I prefer a predictable release cycle for the base OS, but still need the latest LibreOffice/VLC/_____ for one reason or another?

I like the approach taken by e.g. Nginx and MariaDB, where I add additional vendor repositories, but this workflow is probably neither user-friendly enough for my mom, nor do all vendors have the resources to maintain several repositories for different distributions and their respective versions.

Use an OS with backports, like Debian. For example, if you took Debian Jessie, you could pull LibreOffice 5 from backports. VLC is already almost up-to-date (2.2.1).
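Concretely, the backports flow looks something like this (Jessie-era commands; the repository line may differ by mirror):

```shell
# Enable Debian backports, then install a newer LibreOffice from it
# while the rest of the system stays on stable.
echo 'deb http://http.debian.net/debian jessie-backports main' |
  sudo tee /etc/apt/sources.list.d/backports.list
sudo apt-get update
sudo apt-get -t jessie-backports install libreoffice
```

Backported packages stay off by default; the `-t` flag opts a single package into the newer release, so you don't drag the whole system toward unstable.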

That's one use case. Here's another: I am a user who wants stable secure software above all else. When faced with the need for a version of software that is not yet provided by the distribution, I download, verify, build, and test the software from source.

The "every application gets the same release cadence no matter what" approach appeals to me, and I'm a normal user.

Nothing in using the distro's packaging formats forces you to release only at the same time as the distro.

While of course you are technically correct, think about it from the grandma perspective. Grandma uses $app, which requires Wine, but her grandson set her up on Debian stable. She can't run the latest Wine, which $app needs, without updating the system to unstable.

I could be erroneously attributing the intended use case, but I believe this isn't designed for people who know what they're doing; it's for people who don't know, like grandma.

I have some programs where it's OK if they are bleeding edge and unstable, to get the most features. For other programs that's not OK. And if those two categories have a shared dependency (at different versions), that becomes an issue.

I would say that most things should use the old model, but there are scenarios where this could be preferred.

Then use one of the many available PPAs? If the package is so unique that no one maintains a PPA, then chances are that no one maintains an AppImage either. In that case, build from source. (Pro tip: when building from source, install using GNU stow so you can easily uninstall as well.)
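The stow tip above, sketched out (assumes GNU stow is installed; the package name and paths are hypothetical):

```shell
# Keep each source build in its own tree, then symlink it into /usr/local.
./configure --prefix=/usr/local/stow/myapp-1.2
make && sudo make install
cd /usr/local/stow
sudo stow myapp-1.2      # symlink the tree into /usr/local/{bin,lib,share,...}
sudo stow -D myapp-1.2   # later: remove exactly those symlinks (clean uninstall)
```

Because stow only ever creates and removes symlinks pointing into the per-package tree, uninstalling never leaves stray files behind in /usr/local.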

But as a developer you don't want to build a gazillion packages to be able to reach many users.

Building from source? Please try to imagine this from the point of view of users who aren't so technically expert as you are. They just want to, for example, run the latest version of VLC.

Yes this is a challenge for developers. I don't think the answer is yet another package format.

I may be wrong, but isn't Ubuntu/Canonical "snappy" supposed to solve this problem?


It's possible that they don't like the potential for walled gardens like those being erected in windows or mac-land. I don't like a bunch of binaries either, but I also don't like the app-store-ification of desktops.

Ya know, I've thought for a long time that the idea of app-store-ification of OS's was lifted directly from distro repos. The depth of use is drastically different, but the core concept is the same.

Something like this is probably more useful in the Linux desktop advocacy areas and not so much for hackers. Sometimes suspension of disbelief should be applied to projects like this, as its target market is not hackers; instead, it's helpful in converting desktop users.

You know, I realized I DO like the optional app-storeification of desktops, when it's in the open source world and there's no DRM or account tie-in. The Ubuntu software center is great (but I usually just use apt-get). I guess what I meant was the "I'm Apple and you should only be allowed to install the software I want you to have - you know - for your safety!" trend which seems to be taking hold. The warnings presented are alarmist, and the hacks like using functions meant for assistive devices for Steam are an indication of the garden walls closing in.

That being said, I totally get why Apple, MS, et al are doing this. Unsavvy users installing random binaries causes no shortage of headaches or damage to your reputation. I just worry about the long term implications.

Good. Get off this ride. I'm asking why the hell we are still stuck with the current packaging schemes.

From the website:

> Just download the application, make it executable, and run!

You want me to download some mystery program and run it with all my privileges? No ... that isn't going to happen...

Presumably, you'd only do this if you knew what it was and you wanted it.

TL;DR: Let's take the broken app model that lets people download and run buggy, virus-infected programs on Windows and OS X, and bring it to Linux!

Every model ever devised lets people run buggy programs. If I couldn't run buggy programs, I wouldn't be able to run any programs at all.

> Every model ever devised lets people run buggy programs.

There's a big difference between:

(1) A user being able to run buggy programs if someone is able to hack the secure distribution chain or if someone is able to get the user to give root permissions (or any permissions other than those of their ordinary user account) to a piece of malware;


(2) A user being able to run buggy programs because the default app model on their system dumps all app binaries into their ordinary user account's data area, with that user having write access to all the files, so any random piece of malware running as that user can hose them.
