A script like this gets you 95% of the way there:
cp $INSTALLDIR/app AppDir/bin
cp -r $INSTALLDIR/data AppDir
cp `ldd AppDir/bin/app | grep -o '\W/[^ ]*'` AppDir/bin
cat << "EOF" > AppDir/app
SCRIPT_PATH=$(dirname $(readlink -f $0))
$SCRIPT_PATH/bin/ld-*.so.2 --library-path $SCRIPT_PATH/bin $SCRIPT_PATH/bin/app $*
EOF
chmod +x AppDir/app
Shared libraries are most useful when matched within a specific distribution's package/version chain.
Tying a third-party distributed package to a given shared library, on the other hand, is less helpful than just supplying the expected, tested, and supported library upfront while still letting knowledgeable operators do what they will on their chosen platform.
If the libs are truly interchangeable, no technical debt is accrued anyway; and especially if they are not, bundling stands a better shot at actually working.
(The quotes in "$@" are important.)
Also, your script will fall over if any of the paths have spaces in them, such as INSTALLDIR.
You should use lowercase for variable names like SCRIPT_PATH. Uppercase is for exported variables.
readlink -f $0 breaks on OS X, whose readlink has no -f. To work everywhere you have to try readlink -f $0 and fall back to plain readlink $0.
Shell scripting isn't easy. It took a year to understand these nuances to the point that they're second-nature, and I'm still discovering new ones. Anything that improves this situation would be a welcome change in my opinion.
EDIT: quoting the here-doc delimiter (cat <<'EOF' or cat <<"EOF") is necessary here; an unquoted cat <<EOF would interpolate the variables at packaging time.
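Putting those fixes together, a sketch of the same packaging script; the lowercase variable names are mine, and it still relies on GNU readlink -f, so the OS X caveat above still applies:

#!/bin/bash
set -e
mkdir -p AppDir/bin
cp "$installdir/app" AppDir/bin
cp -r "$installdir/data" AppDir
# copy the shared libraries the binary links against next to it
cp $(ldd AppDir/bin/app | grep -o '\W/[^ ]*') AppDir/bin
# the wrapper: quote the heredoc delimiter so nothing expands at packaging time
cat << 'EOF' > AppDir/app
#!/bin/sh
script_path=$(dirname "$(readlink -f "$0")")
exec "$script_path"/bin/ld-*.so.2 --library-path "$script_path/bin" "$script_path/bin/app" "$@"
EOF
chmod +x AppDir/app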
Check for syntax errors:
sh -n myscript
Check for bash-specific syntax:
checkbashisms myscript
Check for various gotchas:
shellcheck myscript
Check for style:
bashate myscript
Check for badness:
grep -rE '(wget|curl).*\|( sudo)? *(ba|z|)sh' myscript
cp $INSTALLDIR/app AppDir/bin
should instead be
cp "$INSTALLDIR/app" AppDir/bin
That might make it seem too hard for some people. It's really not that bad. These particular mistakes were beginner's mistakes.
But it is tricky! I usually try to tell people to avoid writing shell scripts until they've at least read the "Bash Beginners Guide" or something similar. It's comparatively short (remember, you're learning a new language, and an arcane one at that), but it at least lets you avoid the easy traps. Also, indent and comment. Surprisingly many people seem to think hygiene is somehow less important outside their main language.
On a side note, what's up with not reading manuals these days? When I am tasked with something new, no matter how trivial, my first step is to at least skim the manual to get a feel for what the problem domain looks like and how I am supposed to wield the tool I am about to use. Turns out nobody does that anymore. Zero out of over a dozen people on my closest (Java, web) team have read the documentation for the product we build on. I find it unprofessional, but I realize I'm the strange one and it's probably just me getting older.
Part of it is the difficulty of finding manuals, and when you do, they're badly written in a way that assumes all sorts of contextual knowledge based on the sorts of problems the author of the software was trying to solve in 1986 or whatever year. Manual writing is at least 1000x more difficult than bash scripting, because you have no way of knowing up front how the target "machine" is going to interpret your "script".
And of course that's all assuming a manual even exists at all.
("Let Apple open the damn back door, let the FBI know what's on the phone, then close it again. This isn't difficult." — Piers Morgan)
People with a naive understanding of a problem will always feel that those who understand it more deeply design solutions that are overly complex.
People with a deeper understanding of a problem will always feel that those with a naive understanding design solutions that are overly simplistic.
Except that there's word-choice correlation.
Ohhh, but that five percent!
95% of the way means it is not finished. When I buy a car, the seller had better not say that the car works 95% of the time.
Installing something on millions of computers is hard. We do it. Being able to install something on one computer, when the person doing it is a computer programmer with 20 years of command-line experience, is completely different from making it just work for millions of people.
Apple actually makes it work for millions of people in a very simple and elegant way. As a geek I could use homebrew or any other system in Linux if I want, but Apple's system works great for most people.
What we have now are mostly system package managers; you want those to be stable and secure, and having the latest features might not be necessary.
But we see more and more often that that distribution channel doesn't work well with applications: you end up with old, buggy, insecure applications because the distribution just couldn't keep up with the upstream update cycle. Why not have a cross-distribution application manager which distributes AppImages? That way, application distribution is an effort concentrated over all the distributions, possibly benefiting the entire community.
TBH this is just the start of making applications more secure, they ought to be sandboxed too, so one application can't read the data from another application by default (similar to how phones work).
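For what it's worth, you can already approximate that per-app isolation today by wrapping the binary in an existing sandboxing tool; a rough sketch with firejail (the AppImage filename is just a placeholder):

# private throwaway home directory, no network access
firejail --appimage --private --net=none ./SomeApp.AppImage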
Consider it this way: if you install package FOO with its dependencies A, B and C, and FOO's developer thinks it's "feature complete", the package will never be updated, but every time a vulnerability is disclosed in one of its dependencies you'll be bitten. For _every_ program maintained this way.
I don't want to deal with 2 different package managers and new users are apt to be confused by the idea that they get packages from 2 different places.
Look at Android and MacOS. You have these self contained packaging things that contain all their dependencies. You have the same dependencies in each package. You have a TON of wasted space with packages that have the exact same built-in libraries and jar files.
Especially with Android, they could have made a real package dependency system, with slotted installs of all the jar/jar-versions from the official maven repos. When you install an app, it could install all its dependent jars. You'd still be able to have multiple versions of each of those jars, probably more easily than with standard .so/.dll libraries.
Projects like this are a huge step backwards in terms of DRY principles and general package management.
I am not sure if the whole jar package dependency management would work under ART.
In the end something that ought to have taken 10mb of space and 20mb of ram requires 100mb of space and half a gig of ram and still manages to be slow on a new system at operations that ought to have been fast on a ten year old machine.
I hate to use any of these Chrome-"native"-packaged webapps, though. I just hate Java-based GUI apps more.
Android also runs on phones that still mostly have less storage and less RAM than our desktops, so you'd expect saving space to be even more important for Android.
That's overcomplicating things. The "user" manager would have to figure out where the specific distribution is storing this or that lib. Doing it reliably across even a small subset of distributions (say, Ubuntu, Debian, Fedora and RedHat) and a small subset of their releases, would be very challenging.
It would make much more sense to add a "user mode" option for the likes of apt-get, whereby it does not need sudo and it will install the specified package in ~/bin, ~/usr etc, symlinking necessary libraries. That doesn't seem too hard to pull off, in theory.
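Something close to that already works today, albeit without any dependency resolution; a rough sketch (package name and prefix are placeholders):

# fetch the .deb without installing it system-wide (no sudo needed)
apt-get download somepackage
# unpack it under the home directory instead of /
dpkg -x somepackage_*.deb ~/.local
# make the unpacked binaries and libraries visible
export PATH="$HOME/.local/usr/bin:$PATH"
export LD_LIBRARY_PATH="$HOME/.local/usr/lib:$LD_LIBRARY_PATH"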
And then any random piece of malware running in your browser can in principle hose all your applications. There's a very good reason that ordinary users do not have write access to application binaries on Linux.
New software could present its access requests to the user, to be approved or not. Unapproved or undocumented accesses are also displayed as notifications, where declining the running app's access becomes a permanent rule.
Of course, to standardize such a system across distros, either you need an abstraction layer access management syntax, or you have to get everyone to agree to one of SELinux / Grsec / Tomoyo / Apparmor.
Really though I think the obvious answer is that ten years from now we have something like xdg-app shipping generic usage filters and mandating userspace MAC on new programs, along with an xdg -> distro specific library mapping layer from something like appstream.
It's obvious that "usermode" should be the exception and not the norm, but from a security standpoint it's exactly the same as compiling and installing with custom prefixes.
Until more and more user apps start to get out of sync with the "system" versions of things, and therefore need to be installed with the "user" option in order to work.
> What hack would rely on some specific app being deployed in user-mode anyway?
A hack that doesn't care about specific apps but just wants to compromise whatever it can. Like, you know, a virus.
> To do what, steal user files it already has access to?
And send them to the Internet, without your knowledge. Like viruses already do--only it's a lot harder to get them to run on Linux. At least, it is now.
It would be nice to have a safe consistent experience, a generic user-mode package manager.
Schroot essentially wraps up debootstrap and chroot along with mounting /proc etc, and (optionally) bind-mounting /home (which among other things gives access to x11 session cookies, for sharing one xorg display -- so one could run firefox, gimp or mplayer from an unstable-chroot on a stable system). Overall, with SSD prices dropping as sizes are increasing -- it might be better to simply use vms (one of the nice (potential) things about rkt -- the ability to run a "container" under kvm, turning it into a vm).
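For reference, the basic recipe is short; a sketch of an unstable chroot on a stable host (paths and user name are placeholders):

# create the chroot
sudo debootstrap unstable /srv/chroot/unstable http://deb.debian.org/debian
# describe it to schroot
sudo tee /etc/schroot/chroot.d/unstable <<'EOF'
[unstable]
type=directory
directory=/srv/chroot/unstable
users=youruser
EOF
# run a single program from inside it (the default profile bind-mounts /home)
schroot -c unstable -- firefox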
Edit: thinking about it, this should also be the perfect opportunity for a cross-distribution app store/center.
User applications should not require root privileges to install.
It also serves to keep unprivileged users from installing software.
You are free to install software to your home but it shouldn't be the default behavior
There are tools like fpm or even checkinstall that can build simple good-enough-but-not-really packages, but I think maintaining a "proper" Debian packaging requires some pretty arcane knowledge that's spread around various pieces of documentation (and maybe I'm just stupid, but also a lot of trial-and-error).
Once you've made a package, you either try to get it into Debian, or put it in your own repository, both of which come with additional challenges.
I've done some Debian packaging before - both to go into Debian and to put in PPAs. I've given it up: the effort was too much and the rewards too little.
If any aspiring hackers are around, I suggest taking a look at Arch and its makepkg/PKGBUILD tools. Pretty much a simple shell script to define a package, and a uniform tool to build it.
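To give an idea of how small it is, here is a minimal PKGBUILD for a hypothetical make-style project (names, URL and checksum are placeholders):

pkgname=myapp
pkgver=1.0
pkgrel=1
pkgdesc="Example package"
arch=('x86_64')
url="https://example.com/myapp"
license=('MIT')
source=("https://example.com/myapp-$pkgver.tar.gz")
sha256sums=('SKIP')

build() {
  cd "$pkgname-$pkgver"
  make
}

package() {
  cd "$pkgname-$pkgver"
  make DESTDIR="$pkgdir" install
}

Running makepkg in that directory produces a package that pacman -U can install.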
Similarly, if you want to package up something without a Makefile, you may as well go home. My first packaging attempt was basically to install a tarball at a location (in-house use only), and it wasn't clear that I had to make a Makefile to do this first. And then you get introduced to the bizarre world of make with all of its idiosyncratic rules and behaviours.
Then you get to play with all the dpkg control files, and if you're using debuild, you have to avoid the lintchecker - because you can override lint rules... but the lint profile then disables overrides for some useful ones (the "install to /opt" rule gives me lots of spam). So now you need to learn about the lint-checker so you can do your own profile (or just disable it).
Then, as takluyver says above, you get to add it to a repo so systems can access it...
Once you know all this stuff, it seems pretty simple, but getting over that hump is difficult.
Meaning that the package manager balks at having lib 1.0 and 1.1 installed at the same time, even though ld can handle it just fine, unless you play sleight of hand with the package names (meaning that you get something like lib0 and lib1 for 1.0 and 1.1 respectively).
This in turn leads to a bunch of busy work regarding backporting fixes, maintaining manager specific patches, and recompiling whole dependency trees on minor updates.
The primary reason this is usually not done is the work required: it means you need to support two versions in stable, two (or more?) versions in unstable, &c. This quickly becomes a great deal of work :)
That is where the extra workload is coming from, the need to juggle effectively two package trees, and the patches for each.
If instead the manager was properly able to handle multiple versions, they could all be named foo, and the manager would be able to traverse the tree and see if foo 1.0 could be discarded or not based on the dependencies listed in other packages.
You get something of this nature in Nix (though they take it one step further) and Gobolinux, by using the directory tree as the final arbiter.
On Gobolinux you have /Programs/name/version, thus installing foo 1.0 would end up in /Programs/foo/1.0, while foo 1.1 ends up in /Programs/foo/1.1.
Then as long as you have one or more programs installed that need 1.0, you can leave it in place, and remove it when no longer needed.
For the sake of compatibility, Gobolinux also has a /lib (along with the rest of the FHS) that contains symlinks back to the individual lib files in /Programs, using SONAME to its full potential.
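To illustrate (names invented, not the actual GoboLinux tree contents):

/Programs/Foo/1.0/lib/libfoo.so.1
/Programs/Foo/1.1/lib/libfoo.so.2
/lib/libfoo.so.1 -> /Programs/Foo/1.0/lib/libfoo.so.1
/lib/libfoo.so.2 -> /Programs/Foo/1.1/lib/libfoo.so.2

Programs linked against either SONAME keep working, and /Programs/Foo/1.0 can be removed once nothing resolves libfoo.so.1 anymore.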
1. tarballs (slackware)
2. Ports (FreeBSD)
3. Portage (Gentoo, Calculate Linux)
4. pacman (Arch, Archbang, KaOS, Manjaro, Antergos)
5. building from source (Linux From Scratch, Most old installs eventually turn into this)
6. rpm (Redhat, Mandriva, Scientific Linux, UnitedLinux)
7. deb (Debian, Ubuntu, Mint/LMDE, Hannah Montana Linux)
Why Debian and Redhat-derivatives are the most popular, I'll never understand. I have _way_ less headaches maintaining custom Calculate Linux chromebooks than I ever did with fleets of Redhat or Ubuntu laptops.
Intel haswell/sandy bridge chromebooks with custom hard drives. I think you're greatly overestimating how difficult it is to maintain all of this on Portage. I invite you to try a distribution that isn't Debian or Redhat-based. You might never go back.
I try not to specifically invite people to "install gentoo" thanks to /g/. they'll find it on their own. There's a lot of learning before quickpkg makes an install take 3-5 minutes.
The Gentoo Handbook is an amazing tool to learn Linux. By the end, you have an expert's understanding of how Linux works, and how to install it without a GUI (or even a package manager). I learned it during the "Stage 1 era", when the install started with bootstrapping, then building your compiler before installing the rest of your system. Nowadays, you partition disks, format, chroot in, set timezone and encoding, untar the stage 3 tarball, emerge --sync, build your kernel (which is the hardest part), install a bootloader, install wpa_supplicant, and you're done. It's about 30% more difficult than an Arch install, because you probably will screw up your first kernel configuration and make a nonbooting kernel, booting to a black screen. But unlike Other OS's and distros, when it breaks in Gentoo, it's probably your fault.
I have way less headaches, because chromebooks are braindead to deploy this way. flash a bios, and then install the binaries once booted off a LiveUSB. Quickpkg allows you to make binaries of your existing system (built for Sandy Bridge, until I decommission the C710's, then I'll build for haswell) with all your custom flags already set, so VLC and other apps are far more robust than their Ubuntu/Redhat versions. Gentoo is also the only distro other than GalliumOS that actually has working Elan touchpads for many chromebooks (thanks to Hugh Greenburg of GalliumOS's patches that I'm maintaining for newer kernels), as the patches have not been included in Arch or other distributions's kernels.
I run a Poettering-free install (JACK instead of PA, wpa_gui instead of NetworkManager, OpenRC instead of systemd), so having one master install I can re-deploy in the amount of time it takes to brew coffee is pretty handy, especially considering de-poetterizing a new Debian/RHEL/Fedora/Arch install is painful, if not nigh-impossible, and at the very least time-consuming.
Haha. That is pretty evil given my experiences starting with command line Linux back in the day. Far as learning, I've heard that before about Gentoo. Thought about doing it at some point. Right now, kind of mixed between learning BSD or Linux as the security enhancements I create might need to temporarily stay proprietary to support their development. BSD vs GPL you know. I mean, I'd give back bug-fixes or feature developments where possible but differentiator has to be secretish.
"But unlike Other OS's and distros, when it breaks in Gentoo, it's probably your fault."
I'm avoiding that right now but fine with the concept. Reminds me of Mentor's Hacker Manifesto we used to quote to justify such responsibility.
"so having one master install I can re-deploy in the amount of time it takes to brew coffee is pretty handy"
That is handy. I've always especially liked how the source-based distros were customized to one's own hardware. That drastically increases the initial deployment time but significantly reduces many issues along the way.
Calculate basically takes Gentoo and precompiles it with certain defaults.
This means that the Calculate maintainers have made certain choices for the user, while Gentoo proper would have left them up to the user.
Contrast that with creating yum/dnf repos: run 'createrepo' on the directory of RPMs, and you're done! Signing RPMs is much better documented, as well. There were like three or four (conflicting and mutually incompatible) processes that come up when you google signing debian packages and repositories, and I still don't know with certainty what the "right way" is.
I don't know why Debian never got this part of things right; apt/deb has been around for about two decades, you'd think someone would have looked at createrepo and said, "Oh, hey, we should do something like that so it's easier for people to distribute packages for Debian/Ubuntu."
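For anyone who hasn't seen it, the whole workflow is roughly this (paths and URL are placeholders; in practice you'd also sign the packages and set gpgcheck=1):

# generate (or regenerate) the repo metadata from a directory of RPMs
createrepo /srv/repo/x86_64
# clients only need a .repo file pointing at it
cat > /etc/yum.repos.d/internal.repo <<'EOF'
[internal]
name=Internal packages
baseurl=https://repo.example.com/x86_64
enabled=1
gpgcheck=0
EOF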
All that said: I'm rarely a fan of "new package managers". I like having one standard package manager on my system (I don't really care whether it is apt or yum/dnf), and I strongly believe everyone ought to be using the system standard package manager across all of their systems even for custom software. I don't like the proliferation of language specific package managers, in particular. npm, gems, crates, PyPI, eggs, pear, pecl, composer, etc. Shit's gotten ridiculous. Even on my own systems, I can't keep up with what's installed from where. And the problem has gotten much worse over the years; I used to package all my own stuff as RPMs, and all of the CPAN or Python dependencies, as well, using automated tools for building RPMs from Perl or Python libraries (the old cpanflute2 and the dist_utils RPM target; there are better ways in both cases, now, I think, for those languages). But, now that I have a much more diverse set of systems with more languages and more dependencies, that's become more difficult. And, it's also become the norm for people to document only installation with oddball package managers (npm, in particular, is the way everyone documents everything these days), with no good mechanism to package for RPM or deb.
I dunno, I think we're going down a risky path with all these "package managers" that only work for one little niche and I believe security issues will be the major negative side effect. I mean, I have great tooling to let me know when my RPMs or debs are out of date across my couple dozen servers; it's much harder to know when packages installed via all those other systems are out of date; it becomes a manual process in some cases, and it's also risky because dependencies are not always very reliable in those systems. I do a lot of, "Will this break some other thing when I upgrade this library?" Which I don't worry about with RPM updates from CentOS (because of the compatibility assurances of that system), or from my own company repos (because we do our own testing and understand the parts we're distributing).
In short: Yes, it's hard to package and distribute for RPM or deb. But, not as hard as dealing with a half dozen different package managers, containers, and a wide variety of other things being updated via a wide variety of other mechanisms. The former is hard but quantifiable and manageable. The latter is a nightmare for security and reliability.
Aptly is a relative newcomer to the scene that makes repo creation/management much easier. It's actively developed.
1. Downloaded the app, opened Nemo and double-clicked on the app. Another Nemo window popped up and the app didn't start. Opened a terminal and checked permissions to find that it wasn't executable. The point of the ease-of-use is kind of lost, as a user would be puzzled and give up.
2. How do I uninstall the app? Is it as simple as deleting the file? What if doing that leaves orphan files (that I don't know about) that double-clicking on the appimage file could have created? The website doesn't mention how to uninstall.
PS: regarding #2 above, I found that right clicking on the Mint Menu entry for the app shows Uninstall option. Clicking on Uninstall removes the entry from Mint Menu. The .appimage file needs to be deleted manually separately (which kinda makes sense). I just hope it hasn't left orphan files.
Sure, but literally the first thing on the landing page is...
> As a user, I want to download an application from the original author, and run it on my Linux desktop system just like I would do with a Windows or Mac application.
In fact, just above the download button for the GP's download, it tells you to make it executable before running.
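Concretely, the two steps from a terminal are just (the filename is whatever you downloaded):

chmod +x ./Some_Application.AppImage
./Some_Application.AppImage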
> What if doing that leaves orphan files (that I don't know about) that double-clicking on the appimage file could have created?
Edit: Come to think of it, considering how many applications out there haven't had any updates in years... that might not be such a good idea.
Each app has its own copy of libssl etc embedded into the prepackaged "binary" which is executed. That's enough to know it's going to lead to all sorts of suffering when you actually try and rid yourself of $CVE of the month.
What percentage of AppImages in the wild have shipped an update? Of those, how many have updated previous stable releases rather than just the latest version? I suspect very few.
[EDIT - Typos]
You could also just not update, but then you'll have massive security holes in your computer.
There's a reason Linux adopted the shared library model.
Yes, I have a problem with that. I trust that Debian/Ubuntu/RHEL will ship a new package for every critical CVE promptly, without forcing me to upgrade to the latest upstream version. I have zero faith that upstream maintainers will do the same, which leaves me with two choices:
1) Pretend there is no CVE
2) Use the latest app version, bringing with it all new bugs, workarounds, incompatibilities and so forth - and hey, the latest version might not even have the fixed code in it.
There was an article just a couple of weeks ago pointing out that distributions frequently don't fix security issues:
No doubt it's better for high-profile applications, but there are far more applications that people want to use than distros have the resources to issue security updates for.
If distros can't do security maintenance for the software people use, claiming that they shouldn't use that software at all is no kind of an answer.
As a comparison: WordPress is at 4.4.2, Debian 8 has it at 4.1 and Debian 7 at 3.6.1. You can't use those, you must use the one from the developers.
Same thing for example with Ruby: Debian 8 is stuck with 2.1.5 (November 2014), which is not even the latest one of the 2.1 series. The current version of the language is 2.3. And Node.js? 0.10.29 (June 2014). Again, you can't use the packaged versions and you must go to the developers, maybe through rvm, nvm or other language specific package managers. I really don't understand why they bother packaging those languages. They should have tiny packages that explicitly tell that you are on your own and redirect to the appropriate package manager for the language.
I agree that Debian is too accepting of packages. We don't recommend that anyone installs Jupyter/IPython with apt, and I wouldn't mind if Debian removed it from the repos. But if there are all these packages that shouldn't be distributed through Debian, we need some other way for users to get them conveniently. We often use language package managers like pip, but they're not great for desktop applications - they can't easily set up a launcher entry, for instance. That's where something like AppImage comes in.
For the packages you've mentioned, live active projects with maintainers who can produce this distribution format but can't deal with apt, maybe the giant ball of wax is okay. I won't be installing anything this way, but clearly I'm not the target user. It seems certain, however, that in future we'll be hearing about lots of vulnerabilities attributed to AppImages linking in old library versions.
The Windows / Mac distribution model feels more like a necessity on a closed system than something that is ideal.
EDIT: I realize that comment sounds more critical than what I intended. While I have a preference for package management, others clearly don't. Also, I might be inclined to use something like this for fast-moving big apps where I want the newest features. I could see trying Guile-Emacs or QGIS this way.
As for sandboxing, this is definitely an area which we would like to add to AppImageKit, e.g., see https://github.com/probonopd/AppImageKit/issues/77 - thoughts and pull requests welcome!
And despite all the things I hate about systemd, it does make packaging easier by providing a unified way to write initialization scripts. There have been projects that tried to do just that piece as a drop-in replacement (uselessd) but they have gone unmaintained.
This is the correct URL: http://installbuilder.bitrock.com
There's really no excuse for this.
If you don't want to exclude older browsers, you need a dedicated IP address, or you need a system to manage putting multiple names on one certificate. Let's Encrypt is a great option for multi-SAN certificates, as long as you don't care about Windows XP users.
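With Let's Encrypt's certbot client, for instance, putting several names on one certificate is a single invocation (domains and webroot path are placeholders):

certbot certonly --webroot -w /var/www/example -d example.com -d www.example.com -d blog.example.com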
If you have any kind of redundancy, doing perfect forward secrecy gets much harder. The open source approaches to scaling TLS along with PFS are bleeding edge, poorly documented, and may involve writing some code.
I agree, TLS everywhere is a worthy goal. But I think it's easy to underestimate how complicated it can get, especially at scale.
This is a website for AppImage. I doubt they're targeting XP users.
> If you have any kind of redundancy, doing perfect forward secrecy gets much harder. The open source approaches to scaling TLS along with PFS are bleeding edge, poorly documented, and may involve writing some code.
This is simply a brochure website, so this does not apply.
For more complex applications or websites, there is a certain degree of engineering required to support HTTPS-by-default. But in today's world it is a necessity.
This is a backwards step, if anything.
The Linux community is missing the forest for the trees.
If it's indeed an alternative, why would anyone choose this rather than xdg-app? xdg-app being backed by fedora/freedesktop/gnome folks might mean more traction and maintenance, doesn't it?
EDIT: okay, saw https://github.com/probonopd/AppImageKit/wiki/Similar-projec... mentioned below by the author, sorry for the noise.
Ports-like tools work very, very well on Linux. Binary distribution works fine too. Also, the shared libraries of each application don't have to be (and usually shouldn't be!) bundled with the application.
Finally, I'm concerned about the MIT licensing, with all the GPL code floating around in that repository.
"As an application author, I want to provide packages for Linux desktop systems, without the need to get it 'into' a distribution and without having to build for gazillions of different distributions."
So you are a developer. You don't want to build gazillions of packages, but you want to target lots of users. Suppose you build a package for one distribution. Good luck convincing all your users to switch to that distribution.
I guess you are using Arch. Suppose the developer's opinion is that Gentoo's package manager is better, so he only bothers making a Gentoo package. He then tells you to switch to Gentoo if you want to use his app. Will you seriously do that?
Don't steal my code, please. Don't bundle it with your proprietary app without my written permission.
Furthermore, the application seems to include parts of Linux and certain other GPL'd code files while trying to maintain an MIT license.
This is rife with licensing concerns, only one of the many concerns I have with it.
If I'm feeling especially up for it, instead of running the install step immediately, I throw together a PKGBUILD so I can use Arch's package manager to manage it (like this one: https://aur.archlinux.org/cgit/aur.git/tree/PKGBUILD?h=therm...). Gentoo has a really similar system, so I could just about copy the dev's script wholesale. Then instead of running the commands myself, I run "makepkg && sudo pacman -U *.pkg.tar.xz".
Oh, you mean for "regular users". Yeah, if you want to support those, you need to do some legwork. Regular users who have no intention of learning how to use Linux usually aren't (and shouldn't be!) using Linux. Mac OS and Windows have support teams and wide adoption. Why use Linux if you don't have to?
Please note that I'm not trying to sound elitist. But why would you use an operating system that has a steeper learning curve as a regular user?
I'm actually happy that Linux lacks a central concept of package manager because it encourages experimentation. Everyone gets to figure out what kind of package ecosystem they want to see most. We get to choose instead of just getting whatever comes with the computer.
I don't see the lack of stable APIs elsewhere as an actual problem. The biggest problem is that it makes life harder for proprietary software developers - it's pretty much mandatory for them to setup automated CI & release processes if they want their software to actually be usable.
There's a huge upside to not caring about a huge stable userspace API: it's much easier to continue to evolve if you just stop caring about backwards compatibility. And often, backwards compatibility means giving up your future to hold on to your past. The difficulty of getting anything done grows exponentially more complicated, often with few significant tangible benefits.
All things UI in Linux distributions go through so much insane thrashing that very very few application developers want to bother. There are no tangible benefits to UI thrashing; people use systems for software, not window chrome!
I'm not up on the latest in Gnome-land and don't know the background on the issue you linked, but it looks like the benefit is that they get to stop maintaining a bunch of code for a UI feature they've decided isn't actually a good idea. That will make it easier for them to continue to build reliable software, as there will be less legacy code to consider when writing future code.
The UI churn you talk about may be because the "window manager" concept is probably fundamentally flawed. Creating a coherent and sensible UI when you have to target a whole suite of different window managers which may use entirely different UI paradigms is... probably a nigh-unsolvable problem which might not even be worth working on.
As long as you go with the Windows thinking of having the menubar inside the relevant window, your UI will work across just about every WM out there.
I really don't see why the UI world is so hell bent on moving away from the menubar. That is unless they are stuck up on turning every UI into a touch UI...
You're such an optimist.
I do think Linux could make smarter use of static linking in places. Weird, esoteric and tiny libraries with unstable APIs make terrible shared libraries and are a constant source of irritation.
Static linking has its place, and so do shared libraries.
It cannot actually do this, however, because in order for an application to actually run you need to ensure that its dependencies (in particular shared libraries) are installed on the OS in a version that's compatible with the application, and the documentation essentially hand-waves the problem away:
Gather suitable binaries of all dependencies that
are not part of the base operating systems you
are targeting. For example, if you are targeting
Ubuntu, Fedora, and openSUSE, then you need to
gather all libraries and other dependencies that
your app requires to run that are not part of
Ubuntu, Fedora, and openSUSE.
xdg-app actually solves that problem with its Runtimes - you create an xdg-app application by building it against a SDK that corresponds to a particular Runtime, which won't be updated in ABI breaking ways. Application authors know exactly which libraries they can rely on and which they have to bundle.
Another benefit of shared libraries is security fixes and this becomes more and more important.
The only solution I see is that distros must preserve the role of shared library managers and support cohabitation of many library versions. The shared libraries have been designed to support this cohabitation. Any app that doesn't support this cohabitation should be fixed or rejected.
Shared libraries should have a version release and patch number. The apps should only depend on version and release number. The patch number is for bug fixes.
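That is roughly what the existing SONAME convention encodes; a sketch (library name and versions are made up):

# the ABI version lives in the SONAME, the patch level only in the real file name
gcc -shared -fPIC -Wl,-soname,libfoo.so.1 -o libfoo.so.1.2.3 foo.c
ln -s libfoo.so.1.2.3 libfoo.so.1   # what applications load at run time
ln -s libfoo.so.1 libfoo.so         # what -lfoo finds at build time

Bug fixes bump 1.2.3 to 1.2.4 without relinking anything; an ABI break bumps the SONAME to libfoo.so.2, which can then be installed alongside the old one.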
Users should have specific permissions to add new software, which can be enforced by write access to the shared library directory. Users should not be allowed to install crappy and insecure software/libraries on computers shared with other users.
My question to anyone who knows the answer: would it be possible to write an app (in Linux of course) which accesses /dev/tty.FORESP8266 and writes/reads raw blocks, such that it could be bundled into an AppImage and run on, basically, OSX and Windows as a Linux app - and still have the raw I/O access it needs to perform a firmware update?
If so, I'm willing to expend the time to learn how to use it and build this app .. it seems to me to be a more interesting approach than using, for example, Qt to build a cross-platform serial i/o app ..
Anyway, I'm using Qt5+QtSerialPort myself inside AppImages, and it works beautifully.
what? no, man... this is the win9x "freeware" application install.exe all over again, filling my machine with crap
What if you want to install the latest version of VLC?
BTW: normally I don't care if I don't use the latest latest version of $program. Do you REALLY need to always get the latest tip of git of every program you use?
oh, and who said the latest version of $program is packaged?
As for "it just works", I do wonder how long the rest of you have spent trying to get Postgres and psycopg2 reliably working together on a Mac. (Yeah, Postgres.app "just works"...)
It's a one-command, ten-second install on my Debian machine.
Just need something in the file manager to recognize container images and run them like an app.
This "drop this in and run it" seems a lot more promising for that use case. Similar to freezing a python app, I suppose.
Has anyone ever said such a thing? My guess would be the exact opposite: users like the comfort of installing via the OS package manager, rather than hunting for binaries on the internet.
Why not take the opportunity and package app in a way that isolates them like SubgraphOS is trying to achieve? :)
Obviously it's terrible for getting security patches. Not entirely sure what to do about that. It would be nice to make package management more consistent across distros but I digress.
Special support for "portability" is an indicator of a design flaw in your OS.
Please pull over. I want off this ride.
Why the hell are we regressing to shipping around hackily built binaries?
But even if distros adopted some kind of half-rolling-release model so applications could be updated quicker, developers either have to manage N different packaging systems (apt, rpm, etc.) or rely on other people to deal with packaging for different distros, and lose control of a crucial part of their user experience.
Then there's dependencies. Linux distros go to one extreme (no bundling), while the popular approach to developing applications is to bundle most libraries so that you know exactly what version you're working with. There are arguments for both approaches, but it's another source of friction between developers and distros.
Distro packaging works for computational infrastructure that's widely used and not changing much, but it's a poor fit for what users and developers expect for actively developed end-user applications.
There's even a way to make an ebuild for nightlies, although you can get build breaks with those.
When I install software from a distro, it's been vetted and I can safely assume the software has a sane reproducible build process, or at a minimum, has had a sane reproducible build process added by the package maintainer.
If a new standard wants to solve that process, I'm all for it, but I demand reproducibility (or somebody to hold financially responsible, in the case of closed source) in my software builds.
Normal people just want to click a link and have their app install and run.
And that is true.
I like the approach taken by e.g. Nginx and MariaDB, where I add additional vendor repositories, but this workflow is probably neither user-friendly enough for my mom, nor have all vendors the resources to maintain several repositories for different distributions and their respective versions.
The "every application gets the same release cadence no matter what" approach appeals to me, and I'm a normal user.
I could be erroneously attributing the intended use case, but I believe this isn't designed for people who know what they're doing; it's for people who don't know, like grandma.
I would say that most things should use the old model, but there are scenarios where this could be preferred.
Building from source? Please try to imagine this from the point of view of users who aren't so technically expert as you are. They just want to, for example, run the latest version of VLC.
Something like this is probably more useful in the Linux desktop advocacy areas and not so much for hackers. Sometimes suspension of disbelief should be applied to projects like this: its target market is not hackers, and it's instead helpful in converting desktop users.
That being said, I totally get why Apple, MS, et al are doing this. Unsavvy users installing random binaries causes no shortage of headaches or damage to your reputation. I just worry about the long term implications.
>Just download the application from the website, make it executable, and run!
You want me to download some mystery program and run it with all my privileges? No ... that isn't going to happen...
There's a big difference between:
(1) A user being able to run buggy programs if someone is able to hack the secure distribution chain or if someone is able to get the user to give root permissions (or any permissions other than those of their ordinary user account) to a piece of malware;
(2) A user being able to run buggy programs because the default app model on their system dumps all app binaries into their ordinary user account's data area, with that user having write access to all the files, so any random piece of malware running as that user can hose them.