Single binary executable packages (volution.ro)
177 points by feross on March 2, 2022 | 180 comments



We don't have single binary releases, but we do have single folder releases that are guaranteed to work out of the box on any given Windows server. For me, the number of actual files is not of consequence. Just being able to run something and know it's going to run is the key bit.

The new Self-Contained Deployments feature in .NET made our lives infinitely easier as a B2B software vendor. We no longer have to have sweaty-hands conversations with the customer about which .NET Framework versions are installed or even allowed (?!) in their environments. I literally click my mouse... 4 times in Visual Studio and have a zip file containing everything we need to send to the customer.

We also moved to SQLite recently, so that one zip file is literally the entire kingdom. You could grab a blank VM from Azure and our product would startup and run on it without any extra bullshit.


There's something nice about the ergonomics of a single file compared to a single folder though. A single file is trivial to distribute as-is; you don't even need to put it in an archive. You can put multiple single-file apps in a directory on your PATH, no futzing with symlinks etc.

There are certainly tradeoffs involved and it's not always the right decision, but these days I usually try to publish software as a single file until I can't.


NeXTSTEP and (through its lineage) macOS got it right. You put your executable and data files into a directory with a specific structure and then to the user it operates as if it's a single file. You get the benefits of a single folder distribution with the ergonomics of a single file distribution.


What happens if you try to, say, send that executable as an e-mail attachment?


Well, first of all, you don't do that, because what email server anywhere is going to accept mail with an executable attached, without thinking it's some kind of attack?

But in general, you put it in an archive.

(Yes, that's inconvenient; but when you're coming from an OS where you already had to do that — e.g. macOS 9, where you had to use StuffIt Expander to distribute executables in order to ensure they retain their resource forks — switching from StuffIt to zip was a wash.)

These OSes (NeXTStep, macOS) usually integrate some support for mounting disk images, though, and so executables packaged for distribution are usually packaged by putting them in a disk-image. To further decrease the friction (i.e. avoid having to say "copy this thing out of this directory into your /Applications dir"), they usually make it so that if you run the app from inside the image, it copies itself to your /Applications dir, and then relaunches from there. Sometimes it even unmounts the image when it does that.

Given that most people get programs from websites, though, there are also special techniques built into web browsers to make downloading executables on these OSes more transparent and less requiring the abstraction of a disk-image. E.g. Apple's https://en.wikipedia.org/wiki/.XIP format, where any .XIP downloaded in Safari will be automatically+transparently integrity-verified and unpacked, with the result being that it seems like you just "downloaded a folder" somehow.


> There's something nice about the ergonomics of a single file compared to a single folder though

Agreed. We are going to move to proper single file (or as close as we can get) as soon as any sort of meaningful AOT compilation is available:

https://github.com/dotnet/runtimelab/tree/feature/NativeAOT

https://github.com/dotnet/runtimelab/issues/248

There is a "PublishSingleFile" option, but that is just a zip file in disguise.


> There is a "PublishSingleFile" option, but that is just a zip file in disguise.

I don't quite agree. These days (as of .NET 5), if you publish a single-file app all .NET libraries live in the executable and do not need to be unpacked at runtime. Native libraries still need to get unpacked at runtime (if your application uses any; many don't) but I haven't found that to be a big deal in practice.


> and do not need to be unpacked at runtime

Thanks for the correction/clarification. I had not realized this was a thing now. Will give this option another look.


Yep, that is good enough. What makes Linux such a horrible experience for many desktop users is that so much of the software is built with a server mindset, with a vast array of dynamic linking that often does not work. If the software is an up-to-date part of your particular distro's package manager, great. But when it isn't, it's a total nightmare.

And your end user desktop application just doesn't need to be built that way!


I'm surprised that this is now possible with .NET. It's amazing what Microsoft can achieve in 20 years if they put their hearts and souls into it.


I'm reminded of Joel Spolsky's request for this:

https://www.joelonsoftware.com/2004/01/28/please-sir-may-i-h...


For people who aren't in the know about modern dotnet, you can also do single file deployment on top of the self contained deployment. There's even a ReadyToRun thing that precompiles things that would otherwise be JIT compiled.
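
For reference, the command-line shape of all that is roughly the following (a sketch; the runtime identifier is a placeholder, and the property names are the usual ones as of .NET 6):

    dotnet publish -c Release -r linux-x64 --self-contained true \
        -p:PublishSingleFile=true -p:PublishReadyToRun=true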

Experimenting with Uno framework, I was able to build a Skia Linux GUI that built to only 2 files, the program and a skia library.


Is this the same approach as portable apps?


The convenience of binary executables is pretty clear in the DOS and Windows ecosystems, where games and native utilities have been distributed this way for decades. I have Windows binaries I've been carrying around for twenty years that still work on Windows 10, and old DOS programs run just fine in DOSBox. A key characteristic of such programs is that they include most of the libraries they use in the installation, with only a few common dependencies like the C runtime or Win32 left to the rest of the system. FFMPEG is a great example.


Well, you can do the same on Linux. The Linux kernel system call ABI doesn't usually break support for old software, and as long as you have a statically linked binary you can run even software that was compiled targeting Linux 2.4 (if I recall correctly).
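
(A quick sanity check that a binary really is fully static; a minimal sketch, assuming a trivial hello.c:)

    gcc -static -O2 hello.c -o hello
    ldd ./hello     # glibc's ldd prints "not a dynamic executable"
    file ./hello    # reports "statically linked"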

The fact is, that is not what's usually done in the Linux ecosystem: if you have access to the source code, what is the advantage? It's simple enough to just recompile the software targeting the release of the operating system you are using. By dynamically linking system libraries you get a smaller executable and save RAM, since they are shared (not only on disk, but also in memory; that is what shared means!)


Well, the problem is that it's not simple to recompile. There are a number of different build systems, dev libraries you might not have installed (and that might no longer exist), warnings that turned into errors in your version of the compiler, and all sorts of other nastiness.

I dread every time I need to install a (older) package from source. I think around a quarter of the time it turns into a multiple hour adventure of frustration, only to discover the library it was missing is called $NAME-dev on my distro...


I have been on Linux, personally and for my whole professional life, for 15 years. And I have the exact same feeling when I have to compile something.

The waste of time when compiling anything is staggering. And more often than not I give up in failure after 2 hours, because one of the recursive dependencies is impossible to build, or I got tired of git cloning or wget'ing 50 different things.


As far as I know gcc doesn't turn warnings into errors from one release to another. In fact the default is that no warning is considered an error, unless you use options to force a different behavior. This is something that Apple seems to do (the last time I used a Mac I saw that Apple's compiler, which is a proprietary build of clang, has some warnings treated as errors by default), but I have never seen it in any Linux distribution.

For code, you can compile by specifying the correct C standard. Unless the build system is badly written and doesn't pass the correct -std=cXY to gcc, of course (but that's trivial to fix).

Regarding libraries, that is a real problem. Because to compile an old piece of software you should use old versions of the libraries, and they are usually not easy to get (and then it's a pain to compile them, and to set the correct environment variables to make the software you are building link against those libraries and not the ones in the system).

I'm always able to compile software, even old software, on my system, and it has never taken me more than a couple of minutes. But I recognize that I'm a pretty experienced Linux user, so sure, for the average user having a binary package is easier.


I'm guessing that they're talking about -Werror. (Which should never be specified in the build config you distribute, but is fine for tightly controlled environments like your DEV and CI environments.)
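
If you do hit a project that ships with -Werror baked into its defaults, demoting the errors back to warnings at configure time is usually enough to get past it (a sketch for an autotools-style build; whether it wins depends on where the build system appends its own flags):

    ./configure CFLAGS="-O2 -Wno-error"
    make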


Can't agree more. Especially when the source code you want to build has a lot of build dependencies, and they have version conflicts with your environment.

Sometimes it ends with: XXX is missing, required version mismatch, install the correct version, your computer crashes.


> ...only to discover the library it was missing is called $NAME-dev on my distro...

You lost me at this part: if the dependency is easy to install and the issue is you just don't know that all of the headers are in -dev packages this seems like the easiest problem to never have again. Really: right up until there I was rooting you on.


That's really not a great idea on Linux. A ton of stuff is handled in userspace. There's really no better way to distribute Linux apps than source code.


Solvespace (CAD program) comes as a single executable for windows: https://solvespace.com/index.pl

I dread the day we have to put together an installer for it. If we ever decide to use GTK on all platforms we'll have to since the GTK developers don't seem to think static linking is a useful thing for them to support.


GTK on Windows is one of those things that irk me to no end. I've installed several tools that come with the entirety of GTK (and even a dbus instance that tries to start on boot) in the program files folder for that specific program.

The GTK folks oppose static compilation, and that's a perfectly fine point of view because they can support their opinions quite well. However, without an official "Microsoft visual c++ runtime" equivalent, distribution of GTK apps is just so annoying on Windows! You can get the entire GTK library suite just fine on most Linux distros but when it comes to non-Linux platforms, you're left to your own devices, and that means every platform comes up with its own solution, incompatible with the rest.


solvespace is fantastic tech, yes


Meanwhile, Apple has gone through two deprecations of CPU architectures: first PowerPC and then Intel. Macs are nice, but I wish they had the same commitment to backwards compatibility.


Well, 3 actually, you missed the 68000 architecture.

I'm not sure it really matters, in the end. How often would people want to run PowerPC-era Mac software on modern hardware natively if they could? The HN audience will include a disproportionate number of people who do, so you guys aren't included ;-)

I haven't used a Mac in several years but it seems like the whole Apple mindset is more appliance like, that if you bought old software you just keep running it on the original old hardware, and given the problems with Microsoft's eternal backwards compatibility approach (applications and the OS bundling forever ancient code and keeping ancient interfaces around forever, and ending up with a disproportionate number of API calls numbered foo2, foo3, foo4ex), I'm not sure either way is strictly superior to the other. There are trade-offs with both.


There are some universes where old software or hardware is still heavily used or valuable. One area where I dabble is electronic music. Sure there are lots of new things, but a 50 year old guitar is still a viable instrument, so why not a 30 year old synthesizer? So in this universe, some people go to great lengths to maintain software that supports these instruments. I have a Yamaha VL1. Released in the mid 90s, it was Yamaha’s flagship “tech preview” for waveguide physical modeling software. You could play the instrument and tweak some very basic parameters without a computer, but if you want to actually edit the modeling, you need the “expert editor” which requires MacOS System 7 and a direct serial MIDI port or Apple's long-gone “midi manager” software. I keep an old 68k laptop around for that purpose alone.

Korg released an audio DSP playground in a PCI card called the OasysPCI which never got OSX drivers, so I have a MDD PPC Mac to run that. There are probably better things running natively today, but instruments are things that shouldn’t be obsoleted, since they all have their own sound. Most of the rest of the ancient software I run is run under Wine (which worked quite well up to Mojave, but became difficult when Apple killed 32 bit support) because Microsoft has done better at retaining backwards compatibility. So now I have a laptop permanently stuck on Mojave.


> How often would people want to run PowerPC-era Mac software on modern hardware natively if they could?

I think some creative people, e.g. writers, might like to be able to continue to run their favorite old word processors if they could. Every so often you'll hear about a writer who's still using Wordstar or WordPerfect or some other old software because they know it inside out and it does what they need.


Oh, definitely. I'm just wondering how many Mac people would want to run said software on new hardware, as opposed to using their old hardware.

Of course, using old hardware has plenty of disadvantages (the increasing scarcity over time as parts become impossible to acquire) but even now you can find power macs for cheap. Your average ebay listing is pretty ridiculous but I snatched a G4 ibook there 2 years back for $14 USD, essentially in like new condition.


Not PowerPC, but 32-bit programs are no longer supported since Catalina, when they decided to no longer ship 32-bit libraries. There's software on the Mac that's 4-5 years old and cannot be run.

And it's not an academic concern: a large percentage of somewhat older Mac-native Steam games don't even start.


> I'm not sure it really matters, in the end. How often would people want to run PowerPC-era Mac software on modern hardware natively if they could?

It includes much more recent software too. It seems like half the mac games on Steam are 32-bit binaries published before October 2019 that no longer run on modern versions of macOS.


4 if you count 6502 (Apple II).


Hell, iOS will refuse to run binaries compiled as little as two years ago


Microsoft does it because they cater to the enterprise far more than Apple. From that perspective I'm happy with the direction Apple takes.


You missed deprecating and dropping support for x86-32 binaries, then moving from x86-64 to ARM (with Rosetta 2 for now).


That decision looked a little bit more understandable once they announced the transition to ARM.


For me, it's close to 30 years, as some small utilities I wrote when I first started with Win32 I still use regularly. Most of them are below 100KB and link to MSVCRT, the system C library.


I've argued this stance once before, but I feel it needs reiterating on this topic: I for one am 100% convinced that whatever the (FOSS) software future has in store for the world MUST have strong support for dynamic linking.

If you look at something like `apt-cache show podman | grep ^Built-Using:` (Output: https://paste.debian.net/plain/1225449) on Debian 11, you will see why. Imagine a few of those components shared between tens of packages, and security problems discovered in some. It's got to be any package maintainer's worst nightmare. "Traditional" distros run by volunteers and with the participation of small-ish companies will not be able to cope with that kind of need for rebuild churn - and yet they will live amongst us for many years, if not decades, to come.


I've argued the opposite stance numerous times, because shared dependencies are far rarer than they might appear to be while requiring custom packaging for different distros is a burden I don't want to bear as a package author. I want to have one build for all distros, not custom packaging for every one that has a slightly different definition of how software is supposed to be packaged.

In my experience, non-developer users don't care about package managers (they want it to be invisible) but do care about one click install. The only way I've found to satisfy those users is to distribute applications outside the distros' package managers and statically link everything as much as possible.

It saves me and my users time and money. Package management with inverted dependency trees is user and developer hostile, and it's why shit like podman and docker have to exist in the first place.


Creating packages for various distros is not your job as a software developer, and it never was intended to be your job in the first place. At least part of this stance is based on a misunderstanding of how Linux package manager style software distribution works.

You most likely don't have experience creating and maintaining packages for every distro that exists, expecting you to be able to do this would be silly. Typically the users or developers of the countless different distros will be who handles packaging your software though, not yourself!

As a developer you just need to provide the source code, use a standardized build system such as make, cmake, meson, cargo, etc, and a list of libraries that your software depends on. If you do these things, creating a distro specific package for your software will be totally trivial for whoever feels like doing it and contributing to their distro's repo. Most distros have decades worth of tooling accumulated to make packaging software that uses standard build systems absolutely trivial!

Optionally, you can provide a flatpak image (or similar) for users who want to circumvent the system package manager.


Yes, it is my job as a software developer to distribute software to my users. Having distributions pick-up my software and distribute it themselves is a plus, but until my application has thousands of users that won't happen.

And in order to reach that "thousands of users" my users do need to be able to actually use my application (open source or not); this means either I or them has to do all that work...

So yes, one needs to "have experience in creating and maintaining packages for every distribution that exists"; or as I've proposed, just skip all this hassle and provide single binary executables for the platforms (not distributions) that one supports.


I haven't tried submitting any programs I wrote to Linux distributions, so I don't know how easy it is to find people in Debian, Fedora, and possibly OpenSUSE to package an app with no users yet, or to write your own OpenSUSE or Ubuntu PPA or Arch AUR package to distribute your app. Nonetheless I hear horror stories like https://lwn.net/Articles/884301/ saying that new packages have been waiting for up to 11 months to be reviewed. (Right now, https://ftp-master.debian.org/new.html has 55 packages down from 208, and 9 of the 55 packages have been waiting for over a month to be reviewed.)


You seem to have misunderstood what I was saying. I never said it's not your job to distribute your software to your users in general; I said it is not your job to distribute via distro-specific packages and repos. Even if you wanted to do this, you can't, since you don't have the necessary permissions to contribute code to most or even any of the repos in question.

I am not against providing a static binary or image (or whatever) by the way, this is probably the best thing that you can do!

I was just trying to clear up the misunderstanding about distro specific packages and who creates and maintains them since most people don't have much experience with the process.

It's also worth noting that this isn't really a one-way-or-the-other deal; you can provide a portable method and people can package your software eventually. I think the two styles work very well with each other.


This is divorced from reality. You're right, I do not have the experience to package it on every distro. I do however get bug reports that "$app does not work on $distro" that I have to fix as the software developer of $app, because "just wait for another software developer that understands $distro to package it for you" is not an acceptable workaround for users.

Even something that seems innocuous like, "oh sorry it's not available for that by default, here's a tarball" is too much friction for user applications.

Flatpak is the closest to a solution we have, but it has its own issues. It's infinitely superior to "just use the standard build tool like $(N different build tools) and list your dependencies, then pretend it's someone else's problem to solve!"


Maybe we can also mention that semver is more of a rough guideline than anything else. You don't really know if an X+Y that runs fine will really continue to run with X+Y' unless you have tests that cover that workload and you've actually run them.


First, can this not be reduced to an automation and compute problem? A new version of a library with a bug fix or security fix is released, so rebuild a bunch of packages. Where's the issue? Statically linked binaries can be produced from dynamic libraries, so the "rebuild a bunch of packages" can be further reduced to "re-link a bunch of packages". Optimize bandwidth consumption by building locally and validating against a reproducible build transparency log.

Second, shouldn't the majority of the time rebuilding packages be dedicated to testing each application with the new library? Would you just skip that step if you used dynamic libraries?


If traditional distros don’t scale then perhaps we really should be looking at alternative approaches? We need systems that serve the users ahead of the maintainers.


At the moment, given that the majority of open-source software is hosted on GitHub, perhaps GitHub releases (as in downloads from the releases tab) might be one such possible "alternative software repository". It's only missing an official "installer"; but there are a few unofficial alternatives like `wget` and `curl`. :)

----

I'm not endorsing GitHub, I'm quite neutral about it, however at the moment it does offer a "standardized" experience. So much so, that whenever I see a project announced on HN and the link takes me to a landing page, I immediately look for a GitHub link and switch to that.


Well said. Dynamically linked packages managed by a package manager and created and distributed by maintainers is the Linux Way of doing things for multiple very good reasons.

In fact, the article brilliantly describes one of them:

> Modern software distribution: AKA "actually free with no-ads apps store" of the Linux and BSD worlds.

That's it. That's the advantage summed up in a single description that wasn't even intended as a description of the advantages of the current way of doing things!

The article has a list of "cons", but it utterly fails to consider the possibility that this overwhelming advantage of what it calls "modern software distribution" is simply and inescapably tied to the maintainer-oriented approach that currently underlies it!

Attempts to create modes of distribution that are even slightly different from maintainer-distribution (think Google Play) suck ass. For all its faults, F-droid is a vastly better platform because it insists that the software must be built and distributed by an F-droid maintainer. Stepping even further away into the realm of developer built opaque binaries is begging for chaos and misery.

As you say, it's ultimately a security concern [1]. The article claims that a change is necessary because of "increased usage of languages and run-times that don't fit the current build model", "some of these new projects have large numbers ... of dependencies", but these are themselves problems with modern software development. Even putting aside distribution, bloated nodejs dependency trees create security vulnerabilities. The inability to develop software without pinning exact versions of your dependencies (which then need to be manually upgraded by the developer) creates security vulnerabilities and fragility. These are problems, not good reasons for changing our current way of distributing software!

I'm convinced that there are some (like me) willing to die on this particular hill. Come what may, even if half of the developers out there switch to Go and only ship static binaries, we're going to continue working on and using traditional Linux distributions with maintainer controlled software. (To be clear: Go and static builds are warranted in many cases. For example, closed-source rarely updated software like games, and software that is "deployed" rather than installed. But these are not the base case for Linux distributions.)

Further reading: http://kmkeen.com/maintainers-matter/

[1] https://blogs.gentoo.org/mgorny/2021/02/19/the-modern-packag...


> Attempts to create modes of distribution that are even slightly different from maintainer-distribution (think Google Play) suck ass.

I'm not sure Google Play is a great comparison here. Package managers like Homebrew and Scoop are probably better ones; when I just want the latest version of a CLI tool, they make that experience much better on macOS and Windows than on Linux (I know Homebrew sorta supports Linux, but it's still early days).


As I see it, part of the drive behind tools like Scoop is to overcome the limitations of the binary-shipping strategy common to Windows developers. They are successful at this, I agree, but only partially successful. They come from the tradition of programs like Ninite, which were explicitly built as ways to make the binary approach suck less than it did before.

I see the success of these programs as essentially stemming from the insertion of user interests in the form of a maintainer-like process. Sure, they're still working with the binaries, but the actual process of installing and managing these binaries is controlled by users, for users: https://github.com/ScoopInstaller/Main/tree/master/bucket

This means that you get moderation and in many cases modification to the behavior of the program. In a freeware environment like Windows that's full of shitware, at the very least you can in many cases strip out the ads. That's absolutely not nothing, but at the end of the day it comes from a group of user-maintainers stepping up and saying to developers that no, you cannot simply do whatever you want on my system with your software. That's ... sort of the whole point of a software distribution, in the Linux world!

When I want the latest version of a CLI tool on Linux, I simply `pacman -S package`. That's it; one command. I don't see how it could be any simpler or better than that, and on top of that I'm getting the benefits of moderation and integration with the rest of my system. Perhaps you are emphasizing latest version here, and hinting that you don't get that on Linux distros? That depends entirely on the distro; a software distribution is (roughly) a collection of user interests. An Arch user wants (and gets) the latest versions of all upstream software. A Debian user does not want this or see constant updating to the latest version as an advantage, so that's not what they get.


> An Arch user wants (and gets) the latest versions of all upstream software.

My understanding was Arch versions often lag behind that of the latest binaries published on GitHub (based on only mucking around with Arch once); however when I checked the AUR just now everything I use was up to date. Cool.

> A Debian user does not want this or see constant updating to the latest version as an advantage, so that's not what they get.

I'm not sure I agree. You can value Debian's stability and also want to install the latest versions of some tools; that's where I'm at, and package managers that just download statically linked binaries work for me nicely.


> Rust -- say goodbye to cross-compiling, but if you stay away from OS provided libraries, you are kind of covered;

What? Rust can cross compile perfectly fine.

> C / C++ -- certainly possible, but you'll need on retainer a UNIX graybeard, or at least an autotools-foo master;

If you write C and don't know how to cross compile your code you shouldn't be writing it.

> Python -- you're already covered, just read the zipapp documentation;

Python actually can't cross build (reliably).

> Java -- possible, but you'll need a startup script to just call java -jar some-tool.jar; (also not a good fit for short-lived tools, mainly due to startup times;)

Use GraalVM.


> If you write C and don't know how to cross compile your code you shouldn't be writing it.

This statement is the programmer equivalent of manspreading.


I'll admit you have a point.

I didn't mean to come off as elitist, but I have never met anyone who didn't know how to cross compile / statically link a C executable whom I'd trust to write a C program without big obvious bugs. If you haven't, that means you haven't spent enough time actually coding to be able to feel safe about what you're writing. Which is fine. But then use a language you know, or spend more time learning about your tools before complaining. C is simple; everything around it is difficult.


I've written C on and off for decades and have not once needed to cross compile. Many of the best game developers spend their lives using C without ever setting up cross compilation. It just seems like an absurd yardstick.


Last time I tried to cross compile to RISC-V with Rust it just didn't work at all. I even know how linkers work and tried my best with custom linker scripts and arguments, but the toolchain would not build for rv64gc. Perhaps they have fixed the issues now, though.

I would like to add that Zig is extremely cross-compilable. You can even use it to cross compile C/C++ projects.

And Nim is fairly easy to cross compile. Just output C or C++ and cross compile that. I don't know why people are saying that cross compiling C or C++ is so miserable. Yes, it's not always a one-liner, but that depends on the toolchain. For example, there is crosstool-ng. Also Linux distros tend to have cross compilers for many architectures right on the box these days.

Here is how to cross-compile to 64-bit RISC-V on my Ubuntu machine:

    sudo apt install gcc-10-riscv64-linux-gnu g++-10-riscv64-linux-gnu

Debugging RISC-V and a heap of other architectures:

    sudo apt install gdb-multiarch
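
And actually using it, plus qemu-user to run the result on the x86 host (a sketch, assuming a trivial hello.c; depending on the distro you may also need the libc6-dev-riscv64-cross package):

    sudo apt install qemu-user
    riscv64-linux-gnu-gcc-10 -static -O2 hello.c -o hello-rv64
    qemu-riscv64 ./hello-rv64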


> Last time I tried to cross compile RISC-V with Rust it just didn't work at all.

YMMV; my experience was actually the opposite, approximately a year ago. I was really surprised to be able to build an executable for RISC-V (that I then copied to a VM, and ran) just by adding 4 lines to Cargo's config.toml.


Never done RISC-V, but for ARM this is trivial: one just has to rustup the right target stdlib, and then pass --target to cargo, and done. Well, at least it's that easy if what you're trying to compile doesn't have C dependencies. For C dependencies, there is cross <https://github.com/cross-rs/cross>, which I have had good experiences with.
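
For the record, the whole dance for e.g. 64-bit ARM looks roughly like this on a Debian-ish host (a sketch; the environment variable is just an ad-hoc way of pointing cargo at the cross linker):

    rustup target add aarch64-unknown-linux-gnu
    sudo apt install gcc-aarch64-linux-gnu
    CARGO_TARGET_AARCH64_UNKNOWN_LINUX_GNU_LINKER=aarch64-linux-gnu-gcc \
        cargo build --release --target aarch64-unknown-linux-gnu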


> If you write C and don't know how to cross compile your code you shouldn't be writing it.

Cross-compiling in C is always miserable to set up, especially around the area of libraries. And it gets worse if one of the compilers is from a different vendor.


I don't think people realize just how much work tools like crosstool NG do to take away the pain of cross compiling. I remember trying to cross compile stuff in the early 2000s before all that tooling existed and.... oh boy, it was nightmare fuel. A good docker container setup makes it even easier these days.


I think clang (https://clang.llvm.org/docs/CrossCompilation.html) even makes crosstool largely irrelevant these days. Just specify your desired triplet on the command-line and off you go cross-compiling (mostly) effortlessly without having to build an entirely new toolchain. It can even masquerade as the MSVC compiler if you so desire.
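
Roughly like this (a sketch; the sysroot path is a placeholder that you still have to populate with the target's headers and libraries, which remains the annoying part):

    clang --target=aarch64-linux-gnu --sysroot=/path/to/aarch64-sysroot \
        -fuse-ld=lld -O2 hello.c -o hello-aarch64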


> Python -- you're already covered, just read the zipapp documentation;

pyinstaller --onefile will create a single-file executable with an embedded Python runtime and libraries (no need to install Python): https://pyinstaller.readthedocs.io/en/stable/usage.html


I came here to say this but instead I am upvoting this. Pyinstaller is awesome.

Another thing to add: if you run pyinstaller in docker with an old base image and copy the executable out, you can make a pyinstaller executable that will run on a vast range of machines due to runtime backward compatibility. I have single-executable Python apps that run fine on Red Hat, Debian, Ubuntu and SUSE.
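
Something along these lines (a sketch; app.py is a placeholder, and the Debian buster base image is just an example of an "old enough" glibc):

    docker run --rm -v "$PWD":/src -w /src python:3.9-slim-buster sh -c \
        "apt-get update && apt-get install -y binutils && \
         pip install pyinstaller && pyinstaller --onefile app.py"
    # dist/app ends up linked against the older glibc from the buster image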


By cross-compiling one usually understands compiling for the *same OS* but different architecture. What about cross-compiling for a *different OS*?

Can I *easily* compile a non-trivial C/C++ application from say Linux to OSX or OpenBSD, for simplicity say all of them are x86 64bit? (Bonus point if the application in question happens to use LMDB or another non-trivial library that plugs intimately into the OS API.)

About Rust, I've once tried to compile from Linux to OSX a quite trivial application (basically a glorified `md5sum`), and the experience wasn't pleasant, and it involved lots of "internet searching" and basically stumbling around... (I don't remember if I succeeded, but I definitely wouldn't try it again.)


> By cross-compiling one usually understands compiling for the same OS but different architecture.

I don't even consider that to rise to the level of "cross compiling".

Getting started with emscripten to target WASM for C and C++ is rather a chore of dependency wrangling IME. Targeting WASM from Rust, OTOH, is trivial. Targeting windows from linux with Rust is also quite straightforward, as has been experimenting with targeting consoles or Android from Windows.

Targeting a MIPS32 OpenDingux target from Windows was much more of a chore. The toolchain with libs, headers, etc. that I used is just a *.tar.bz2 that expects to be extracted to /opt/gcw0-toolchain of a linux distro specifically, and embedded absolute paths all over the place make changing that difficult. I do resort to WSL on Windows, basically only because of those embedded paths: https://github.com/MaulingMonkey/rust-opendingux-test

Acquiring the appropriate libs and headers to link/compile against for cross compiling is always an adventure, but Rust isn't making things any worse IME.


> Use GraalVM.

GraalVM can't cross-compile. If you want to compile it for a particular platform, it must be compiled on that platform.


That Rust comment does sound weird. It's one of the easiest to compile across platforms. All you need is the Rust toolchain and maybe openssl sometimes.


The easiest to cross compile would be golang; it’s as easy as setting 1-2 environment variables. Last I used Rust it required toolchain setup, which, if you’re not familiar with the process, can be quite difficult.
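
For pure-Go code it really is just this (output names are placeholders; cgo brings the toolchain problem right back):

    GOOS=linux   GOARCH=arm64 go build -o myapp-linux-arm64 .
    GOOS=windows GOARCH=amd64 go build -o myapp-windows-amd64.exe .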


I tried cross-compiling a Rust project from an ARM macOS host to an ARM Linux target; compiling the source files was still fine, but during the linking stage it threw lots and lots of errors. And it’s a pure Rust project without any system library dependencies. It seems that the system linker also needs to support the target platform, or cross compiling won’t work.


Rust has the option of using a different linker for each target platform.

But of course, your linker must support it. I don't know where to get a linker for Linux ARM in macOS. On Debian it's apt install.
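
For reference, the per-target linker goes into .cargo/config.toml, something like this (the aarch64 triple is just an example; the cross linker binary itself still has to come from somewhere, which is the hard part on macOS):

    # .cargo/config.toml
    [target.aarch64-unknown-linux-gnu]
    linker = "aarch64-linux-gnu-gcc"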


> NodeJS -- technically, I know it's possible; but I can almost bet my right arm that given today's ecosystem it's not feasible

Caxa - Package Node.js applications into executable binaries - https://github.com/leafac/caxa


pkg is another one: https://github.com/vercel/pkg

It can cross-compile too.


What's your experience with GraalVM? In mine, the need to painstakingly track every single reflection use by getting the stack traces at runtime (and never be quite sure if you got all of them) pretty much killed it for me


Have you tried the Tracing Agent? https://www.graalvm.org/22.0/reference-manual/native-image/A...

It will track all uses of reflection during a program's execution. You can then use the resultant config as the starting point for the reflect-config.json.
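
The usual flow is to run the normal JVM build once with the agent attached and let it dump the config (myapp.jar and the output directory are placeholders; that directory happens to be where native-image picks the config up automatically in Maven-style projects):

    java -agentlib:native-image-agent=config-output-dir=src/main/resources/META-INF/native-image \
        -jar myapp.jar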


Oh, that's nice. I was using it through Quarkus a while ago, not sure if I missed it or it wasn't available yet. That seems like a good starting point, although I assume it can't assure no runtime errors in production

Edit: it seems to me that GraalVM would benefit from a repository of classes and corresponding reflection configs. Like DefinitelyTyped from the Typescript world


>> If you write C and don't know how to cross compile your code you shouldn't be writing it.

It would be great if GCC supported all target architectures out of the box, but what would be needed for multiple OS/platform support?


Simultaneously arguing that Flatpak isn't the future because it relies on runtimes that may duplicate resources, and then suggesting that you should statically link in all your dependencies (creating far more duplication than Flatpak runtimes) seems… odd?


In practice I'm not sure there would be much duplication. Even for tools using the same libraries they are very often using different versions.


In practice, the deduplication is pretty good. I have 78 Flatpaks installed (apps and runtimes), and here's my dedup stats (calculated from https://gist.github.com/powpingdone/001a46aa7db190b9c935f71c...):

  ===========================================
  no dedupe: 16.0 GB (17228518418 B)
  dedupe:    11.7 GB (12594871766 B)
  singlelet: 8.1 GB (8677649129 B)
  orphan:    1.8 GB (1889889290 B)

  ===========================================
  deduplicated size ratio: 73.10
  singlelet space usage:   68.90
  singlelet file ratio:    63.56
  orphan space usage:      15.01
  orphan file ratio:       2.63


> Languages suitable for single binary executables

> C/C++: certainly possible, but you'll need on retainer a UNIX graybeard, or at least an autotools-foo master

I don't understand this. The "C/C++ is black magic!!" meme just never seems to go away.

They both support static linking and always have. It mostly comes down to whether your dependencies will work with it, and setting the appropriate compiler flags. In other words, not terribly different from any other language that supports it.


Creating a portable single-executable in C/C++ on Linux is not particularly easy.

The dominant libc for Linux, glibc, says "static linking is highly discouraged", see [1] among many other sources for the subtle problems it causes.

There are solutions like [2], [3], and [4] but they all have downsides and don't compare at all to e.g., the go (or even rust) single-executable story.

---

[1] https://stackoverflow.com/questions/57476533/why-is-statical...

[2] Link glibc anyway and try to avoid the pitfalls.

[3] Link glibc dynamically but everything else static, and build your exe using an ancient toolchain (or better, a new toolchain built for an ancient libc) so that it works on most distros.

[4] Give up on glibc and statically link an alternative libc like musl.
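
For what it's worth, option [4] is close to a one-liner for plain C on Debian/Ubuntu (a sketch; app.c is a placeholder, and C++ or anything with heavy library dependencies usually means building inside an Alpine container instead):

    sudo apt install musl-tools
    musl-gcc -static -O2 app.c -o app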


Or just dynamic link to libc and static link the rest as much as you can. If a binary dynamically links to libc, but is otherwise statically linked, all good no?


Yes, that was option [3]. The problem is that glibc symbols are versioned and update fairly regularly, so your binary likely won't run on anything with a much older glibc, and glibc versions vary a lot across distros (e.g., some are much slower moving so use very old glibc).

To work around this you need to try to build on as old a distro as possible, so you use only old symbols... but you don't want to build with an ancient compiler, since it might not support the language version or compiler flags you need, so you first build a new toolchain on the old distro against the old libc, or find a docker container from someone who has already gone through this pain.

Of course, this also means you can't use new libc features: libc is more than just the C runtime, it's also the shim layer between userspace and the kernel, so you might miss out on newer system calls (or have to write the wrappers manually, etc).

It gets even worse for C++.
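
(As an aside, a quick way to see exactly which glibc versions a dynamically linked binary will demand at runtime; myapp is a placeholder:)

    objdump -T ./myapp | grep -o 'GLIBC_[0-9.]*' | sort -Vu | tail -3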


Pretty good. Glibc does update occasionally. I've hit issues in the past trying to run old proprietary binaries that required libc.so.5

But yeah that's pretty rare tbh.


The major library version number (e.g., 5 vs 6) isn't enough: the symbols even within libc.so.6 are versioned, and some may update with every glibc release, so in general you may have problems building against a new libc 6 and then running on a system with an old libc 6 (you'll get a runtime error from the dynamic loader complaining about a missing versioned symbol).


Yay for "semantic versioning" lol.


*Edit: I was wrong. Just checked go 1.7 and it made a static binary; no dynamic links. Don't know when that changed.

> [3] Link glibc dynamically but everything else static

Go does this. Zig links musl [4] by default but gives you the option to link glibc. I don't know about Rust.


Rust links glibc dynamically, and can use musl for some platforms. By default most (all?) of the musl targets are linked statically, but they should be able to use dynamic linking for it as well.


The good news is that it has been a long time since glibc had breaking changes, and the kernel's formal compatibility policy reinforces this, making future breaking changes unlikely.


It depends what you mean by breaking changes, right? Do you mean they try to avoid breaking the ABI in a forwards-compatible way? That is, building against a new glibc and then running against an old glibc should work?

I don't think so: they only make it ABI backwards compatible (compile against old glibc, run against new) and API compatible (recompile your source against different versions should work).

I just checked my glibc and it had > 200 symbols versioned 2.34, which is the current glibc version. If you use any of those you can't run against an older glibc.


> Do you mean they try to avoid breaking ABI in a forwards compatible way?

Ok, I can see what I have written that could read this way, but the context is completely about backward compatibility. Forward compatibility isn't a concept that appears often in software.


It's not about how you wrote it, it's that the "single executable" concept needs both directions of compatibility if you dynamically link libc on any old distro.

E.g., you might use a relatively recent distro in your GitHub action (or whatever) to build this binary, but then anyone running CentOS or whatever can't run it because they ship an older libc version and required symbols aren't available.


If libc is backwards compatible, you immediately have a compatibility matrix that says where your code will run. Quite like you have for any kind of release on any environment.

You seem to want some way to have your code run on any platform whatever features you use in it. That you can't have. Of course if you want to support CentOS 6, you have to avoid any new symbol.


The problem isn't new symbols, it's that existing symbols can get the new version depending on what version you compile it against. On my glibc ~200 symbols have the newest version (2.34) but these are largely all methods that have been around forever. A binary built here won't run in most places.


> The "C/C++ is black magic!!" meme just never seems to go away

It's true, though. Proof of that is how many tools are made basically because no one wants to deal with Makefiles and autotools. I'd argue that even Docker fits this category of tools made to hide C/C++ builds away


> It's true, though. Proof of that is how many tools are made basically because no one wants to deal with Makefiles and autotools.

no one wants to deal with Makefiles and autotools because as of 2022 they don't support spaces in paths and the most used desktop OS on the planet happens to have spaces in the default path for installing software


IIRC Microsoft specifically added the space in the default path so that developers know they need to handle file paths with spaces


Yeah you just...link to static libraries. Which have usually already been built and installed alongside the shared libraries.

Also why would you be using autotools in 2022? CMake has been the de facto standard for about a decade now.


Came here to say this; anyone still using autohell in 2022 is either a stubborn greybeard or seriously masochistic. Building static or shared libs/programs with CMake is treated simply as an option when calling the linker. I personally don't know anyone sane who wants to deal with the mental gymnastics needed by libtool. And don't get me started on the stupid permissions games played by 'make distcheck' where a simple git archive would do just fine.


Using CMake is like trying to do one-handed pushups to setup your build. Use https://mesonbuild.com/ and make your life comfortable as a couch potato.


One big problem is the global symbol namespace when statically linking. If I'm not mistaken, Rust and Go are able to work around this by mangling their symbols. But it's way too easy for a C or C++ library to pollute the global namespace. Granted, on Linux, the symbol namespace is also global when dynamically linking, but that's not the case on Windows; I'm not sure about Darwin.

Combine that with the multiplicity of build systems, and it's easier to just use dynamic linking. Go and Rust solve that by having a standard build system.


Hmm. I wrote C and C++ for 20 years and while I haven't used either in some time, I don't remember this ever being a problem, except when trying to port Unix code that expected errno to be global to Windows (but that's the opposite problem of the one you're talking about).


Darwin also uses hierarchical namespaces by default; on Linux you can do this to a limited extent with dlmopen(), but not as much as you may want (e.g., to link in a library linked against another libc -- but this is unique to Linux, most platforms only have the single libc).


It’s probably more realistic to have a single folder executable like .app “files” on OSX.

If the app has assets it gets a little crazy quite quickly.


There actually are/were two standards for this in Linux: GNUstep Application Bundles[0] and ROX-Filer AppDirs. Currently AppImage is trying to keep the dream alive, but it bundles the files in a squashfs image attached to the executable.

[0] Which of course came from NeXT Application Bundles, which is also where OSX Application Bundles came from.


AppImage does pretty much that out of the box.


AppImages are so good. They 'just work'. They're super easy to build and distribute too. I think more Linux distributions should embrace them as the de-facto app standard to replace snap and flatpak.


BeOS/Haiku appends assets to the end of the ELF file with a ToC and some easy-to-use APIs for accessing them.

I always wish that’d taken off.

I guess AppImage isn’t dissimilar in that it appends a squashfs image to be mounted with FUSE.


Windows also does this. Linux doesn't, preferring to KISS because with demand paging there's really no reason you can't just put them in your rodata section.


I agree 99% with this article, but the part about packaging C++ is wrong. I'd say use "vcpkg" (yes, it's Microsoft but also works on Linux) and then CMake. CLion is a nice IDE for that kind of stuff and it's reasonably easy to setup cross-compiling or GitHub pipelines with it.
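
For anyone curious, the wiring is essentially just pointing CMake at vcpkg's toolchain file, roughly like this (VCPKG_ROOT being wherever vcpkg is checked out):

    cmake -B build -S . \
        -DCMAKE_TOOLCHAIN_FILE="$VCPKG_ROOT/scripts/buildsystems/vcpkg.cmake"
    cmake --build build --config Release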

The result:

C++ source pushed to git => GitHub pipelines => freshly compiled binaries for all platforms

That said, I have also re-written some things in Go just for its ease of packaging and cross-compilation. The fact that libraries "just work" is like a Zen experience to me. Also, "go generate" really makes it easy to merge small resource files into your one-file binary app. BTW, I come from python and Ruby, where "WTF did mkmf.rb crash?" is a daily thing...


What about a fully self contained folder? No need to statically link everything, on Windows and on Linux you can dynamically link to local files in a provided folder. Plus the folder provides a place to store local/temp files as well. This is how I deploy software on Windows and Linux, and I find it saves a lot of headaches.
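
On Linux the usual trick for making such a folder relocatable is an $ORIGIN-relative rpath, so the executable finds its bundled libraries wherever the folder is copied (a sketch; libfoo is a placeholder):

    gcc main.c -o myapp -L./lib -lfoo -Wl,-rpath,'$ORIGIN/lib'
    # at runtime the loader resolves libfoo.so relative to the executable's own directory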


Lack of compression (or at the mercy of the file system) tends to be an issue. That is why people choose to use squashfs and then have a writable overlayfs on top of it.


> if possible, especially important for Linux with its glibc variation across distributions, try static linking; works flawlessly with Go (including cross-compiling), somewhat with Rust / C / C++, and most-likely it's already fulfilled with interpreted languages like Python / Erlang / Deno / Java;

it's not possible for GUI apps as you would have to link against the OpenGL drivers statically (and thus your software would only work on one brand of graphics card)


The author doesn't seem to understand linux containers.

> docker run just-another-2g-vm-image-running-a-fully-fledged-os-with-all-the-systemd-glory-masked-as-a-container;

Docker doesn't run a "vm" image and it definitely doesn't run another kernel (the kernel isn't included in docker images because they're designed to run on the host kernel). It's closer to a chrooted process (if you just consider the process) than anything else... combined with some resource isolation.

> that certainly won't lose any data when you purge it by mistake

If you don't understand what you are doing, you can "purge data by mistake" doing other things as well.

Docker images come with ALL the pros listed in the article and fewer cons. For example, they are isolated from the get-go by design, so there are fewer security concerns than running a binary on your system as your user without namespace isolation (namespace isolation is what's happening with docker). You can choose which files and devices a process running in docker has access to.

The download size may be bigger than a statically linked single executable... but if you are running a lot of different processes in docker, they may share the underlying layers which may mean that you end up using less disk space overall.


> The author doesn't seem to understand linux containers.

> Docker doesn't run a "vm" image and it definitely doesn't run another kernel [...]

Yes I do understand them!

In fact, in a former life (i.e. during the long forgotten time when Linux containers were just a novelty and LXC was seen as the "future") I was a big proponent of containers.

What I was not a big supporter of is containers that, instead of being, as you well say, "closer to chrooted processes", are almost stand-alone VM images lacking a kernel. In fact Fly.io does exactly this -- they take a Docker container, add a Linux kernel, and run everything in a Firecracker "micro" VM.

----

> that certainly won't lose any data when you purge it by mistake

> If you don't understand what you are doing, you can "purge data by mistake" doing other things as well.

Yes, butter-fingers are a thing... But with Docker it's so easy to delete a container by mistake (one that, although it shouldn't hold state, does); on the other hand, if you are about to issue `rm -R -f .` while you are inside `/var/lib/mysql`, you do have quite a few opportunities to realize that perhaps you are doing something stupid (you have to explicitly write `rm`, add `-R`, and manually type `/var/lib/mysql`)...

----

> The download size may be bigger than a statically linked single executable... but if you are running a lot of different processes in docker, they may share the underlying layers which may mean that you end up using less disk space overall.

OK, say that for "server applications" (that happen to use tens of micro-services) this makes sense.

But how about a "tool application", say a static-site builder; does it now make sense to use Docker to run this tool?


I would think if you depend on data stored in a container being persistent, it's an indication you really should be mounting a volume to the container and persisting the data there. Then container restarts won't matter. Best practices are generally for containers to be "cattle" instead of "pets", and data persistence usually has a different solution.
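
For instance, something along these lines (image, names, and password are purely illustrative):

    docker volume create mysql-data
    docker run -d --name db -e MYSQL_ROOT_PASSWORD=changeme \
        -v mysql-data:/var/lib/mysql mysql:8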

Regarding using a container for distributing static site generators, does anyone really do that? I think you may be building up a straw man here, I haven't seen anyone recommend this workflow. Could you elaborate on how this relates to the parent comment's mention of using less space when running many containers? What the parent comment mentions is, in fact, highly relevant to some of the main use-cases of containers, which is running many containers of the same service on one or many hosts. Space can be saved there with layer caching.


> Regarding using a container for distributing static site generators, does anyone really do that?

Is Sphinx a good enough example? <https://www.sphinx-doc.org/en/master/usage/installation.html...>

----

> Could you elaborate on how this relates to the parent comment's mention of using less space when running many containers?

Could you elaborate on how "running many containers" relates to the topic of distributing and running *a single tool* (which was the context of the article)?


> Could you elaborate on how "running many containers" relates to the topic of distributing and running a single tool (which was the context of the article)?

YOU may be distributing just a single tool but your users most likely use the computer for more than one tool. If they use many tools under docker, it is likely that many of those tools share the same underlying layers.


> instead they are almost stand-alone VM images lacking a kernel

How are docker images even close to a VM image? Docker image is closer to shipping your application in a zip file with all the dependencies than it is to a VM image. In a VM image, you need a "minimal OS" which has to include the kernel, drivers, required OS binaries + everything that would be in a container.

Docker processes are literally running natively on the host machine, making Linux system calls just like any native executable would... except they only see the resources you want them to see. Docker processes don't need the docker daemon to be running. In contrast, if you were to run a VM, you'd have to have at least a minimal OS kernel with all the required OS processes and drivers, and the application on top of that... not to mention the hypervisor itself.

> But with Docker it's so easy to delete a container by mistake

You do have to write "docker rm" to remove a container. It doesn't just disappear. The only way I can imagine someone losing data is if they think they are writing to the host filesystem when inside a container (without using volumes as one should) and delete the container thinking that their data exists in the user filesystem.

> But how about a "tool application", say a static-site builder; does it now make sense to use Docker to run this tool?

This is how you can run a "tool application":

    docker run --pull always --network none --rm -it -v $(pwd):/src klakegg/hugo:latest

The above will only give the hugo process access to the current working directory; it has no access to your filesystem outside of that, not to mention it has no access to your devices, your network, or many other things a host process would have. It makes sense to run it under docker just for that. Plus, you can be sure that it will never have any library incompatibility because you updated something else, even if something basic like glibc comes with breaking changes, and it will always run the up-to-date version (if that's not what you want, remove `--pull always` or use a fixed version tag instead of "latest").

You also never have to worry about what programming language your tools are written in or how they are distributed, and you don't have to rely on the original author distributing docker files (in the case of hugo, for example, the docker images are maintained by a third party). As long as they have one way to build and run it, you can create docker images yourself and make them available for others too.


> In a VM image, you need a "minimal OS" which has to include the kernel, drivers, required OS binaries + everything that would be in a container.

A docker image and a VM image can be the same.

For example, say you have a simple web server that just listens on port 443 and does its thing; if you compile it statically, and it doesn't need any other files, you don't even need a VM image.

You can simply create an initramfs that contains that executable (nothing else, no drivers, no configuration files, no distribution, nothing).

Just boot the Linux kernel (granted, a custom build with the modules the hypervisor actually needs compiled in as built-ins), use kernel arguments to either statically or dynamically configure the network stack, and instruct the kernel to run the server executable instead of `init`. Your server doesn't even know it's running in an "empty" VM.

(Just read about how AWS Lambda implements everything by using Firecracker.)
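A rough sketch of that setup (paths, the kernel image, and the qemu invocation are illustrative; a real deployment would tune the kernel config and network arguments):

    # pack just the static server binary into an initramfs -- no distro, no extra files
    mkdir -p initramfs && cp server initramfs/server
    (cd initramfs && find . | cpio -o -H newc | gzip > ../initramfs.gz)

    # boot a kernel with it; rdinit= makes the server PID 1 instead of init
    qemu-system-x86_64 -kernel bzImage -initrd initramfs.gz \
      -append "console=ttyS0 ip=dhcp rdinit=/server" -nographic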


What about things like man pages or other offline documentation, shell completions, init service files (ex. Systemd units), .desktop files, mime type files, etc.?


Good point!

I'll add to that READMEs, LICENSEs, SBOMs (Software Bill of Materials), example configuration files, etc. How do you supply all those files when all one gets is a single binary executable?

Simple! Bundle everything in the executable.

As a bonus, because the tool outputs these files, it can now generate them dynamically. For example, instead of a bland configuration file with all the possible integrations commented out, it could either try to auto-detect where it's running and what's available, or present the user with a question-answer session to fill in the details.

----

For example, a pet project of mine <https://github.com/volution/z-run>:

    z-run --readme        # shows the README with `less` (if on TTY) or to `stdout`
    z-run --readme-html   # for the HTML version to be opened in `lynx`
    z-run --manual        # or --manual-man or --manual-html
    z-run --sbom          # or --sbom-json or --sbom-html

It even gives you the source code:

    z-run --sources-cpio | cpio -t

----

So, does your tool need a `.desktop` file? Just create a flag for that.

Or, if there are too many such extra files that need to be placed in various locations, just provide an `--extras-cpio` and dump them as an archive. Or, if placing them requires some work, provide an `--extras-install`, but before doing anything that needs `sudo`, kindly ask the user for permission.

Granted all this requires some extra work, and increases the bulkiness of the executable, but:

* all that extra code can be extracted into a common library; (I intend to do that for my software;)

* if all these are compressed, especially being text-only, they are a fraction of the final executable;

----

I am especially proud of the `--sources-cpio` option. Is something broken with a particular version of the tool that you rely on? Great: instead of bumbling around GitHub to find the particular commit that was used to build this particular version, I can just get the sources from the tool itself and use those. All I need is the build tools, which in the case of Go is just another `.tar.gz`.
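Roughly, the recovery workflow looks like this (a sketch; the exact build command depends on the project's layout):

    mkdir tool-src && cd tool-src
    z-run --sources-cpio | cpio -idmv   # unpack the embedded source tree
    go build ./...                      # rebuild using nothing but the Go toolchain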


[I'm the author of the article in question.]

I want to highlight a few ideas that perhaps I didn't cover very well in the article:

* static linking -- the concept of static linking appears a total of 7 times in the article: once in the summary, once qualified with "if possible", three times as comments on a few example projects that do happen to employ it, and once discussing Go's advantage; the article is not about static linking!

* dynamic linking -- it is completely fine, just don't depend on non-mainstream libraries, and don't require the version released yesterday, as most stable distributions won't have it (for the next couple of years);

* packaging and dependency minimalism -- as I've noted on Lobsters (<https://lobste.rs/s/adr60v/single_binary_executable_packages...>), the article is mainly about the simplicity of building, packaging, publishing, deploying; try to read it both from the point of view of the developer and the user;


I enjoyed the article, thanks.

I think you could add an entry to the "Languages suitable for single binary executables" section; these days, .NET (C#, F#) single-file applications work very well. This is a fairly recent development (single-file support has been good for ~1.5 years now) but I've been very satisfied. https://docs.microsoft.com/en-us/dotnet/core/deploying/singl...



"Java -- possible, but you'll need a startup script to just call java -jar some-tool.jar; (also not a good fit for short-lived tools, mainly due to startup times;)"

Technologies to look at:

* Warp Packer -- https://github.com/dgiagio/warp/

* Liberica Native Image Kit -- https://bell-sw.com/pages/liberica-native-image-kit/

Warp Packer bundles my JavaFX desktop application, KeenWrite, into single binary executable files:

* https://github.com/DaveJarvis/keenwrite/releases/download/2.... (Linux)

* https://github.com/DaveJarvis/keenwrite/releases/download/2.... (Windows)

* https://github.com/DaveJarvis/keenwrite/releases/download/2.... (JVM)

The start-up time for the first launch of the .bin or .exe is slow because it unpacks to a user directory. Subsequent starts are fast enough, even when running from the command-line as a short-lived task. Here's the script that creates the self-contained executable files:

https://github.com/DaveJarvis/keenwrite/blob/master/installe...

To create a release for all three files, I run a single shell script from a build machine:

https://github.com/DaveJarvis/keenwrite/blob/master/release....

I could probably generate a binary for MacOS, but not enough people have asked.


1. Not every library can be statically linked. Graphics drivers come to mind, but most libraries for the linux desktop expect to be dynamically linked (e.g. wlroots, libinput). So not only is there a technical limitation, but a practical one as well, considering that all of these projects would have to support it. These projects rely on dynamic linking as a way to support system-wide configuration.

2. Being able to patch dependencies is important. This isn't so much a feature to help developers, but one to help users. You could technically do this without dynamic linking, but you would have to make it easy to recompile a project with an updated dependency (which most package managers don't). Not being able to patch a package's dependencies makes the developer accountable for your system administration issues, which is not scalable.


Not mentioned yet seems to be C#, which has the .NET Core "single file" packaging option. You need to read the small print on that, though, as it (a) unpacks into a temp dir at runtime and (b) last time I tried, it required a couple of .so files alongside the executable, missing the point entirely.


This is a little out of date; the .NET single-file situation is pretty good as of .NET 5 (which came out in 2020).

These days, if you publish a single-file app all .NET libraries live in the executable and do not need to be unpacked at runtime. Native libraries still need to get unpacked at runtime (if your application uses any; many don't), but you just set the IncludeNativeLibrariesForSelfExtract build property and everything else happens automatically for you.
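For anyone who wants to try it, a minimal publish invocation looks roughly like this (the project name and runtime identifier are placeholders; `PublishSingleFile` and `IncludeNativeLibrariesForSelfExtract` are the build properties discussed above):

    dotnet publish MyTool.csproj -c Release -r linux-x64 --self-contained true \
      -p:PublishSingleFile=true \
      -p:IncludeNativeLibrariesForSelfExtract=true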

I'm working on getting IncludeNativeLibrariesForSelfExtract set by default, upvote here if you think that's a good idea: https://github.com/dotnet/sdk/issues/24181


I wholly agree, although single directory releases are also acceptable.

Last shop I worked at was ... well, they were of an age and came from an era when installation meant "drop some files in a directory, set up an icon on a desktop, what do you mean 'dependencies' and 'framework' and 'installer'?" I did my best to accommodate, because that seems more reasonable to me, but it certainly wasn't simple.

I do not dislike the concept of the Registry; it is an interesting tradeoff against the skillion dialects of .ini files and other configuration. But beyond that, it feels like installation of software is akin to a drop of ink in water -- you can't really get it back out again.


I found myself wanting this for pandoc while trying to install it to a docker container. It would be nice if there was just a single binary I could download, but it seemed like all the installation methods used package managers or massively complex install steps?


What are the golang- packages about in OpenSUSE? Go doesn't support dynamic linking (except for non-Go pieces like glibc, or via cgo). Do they contain source code? Or pre-built .a's? Both would have a hard time not conflicting with Go's own build tooling, right?


I've just installed `golang-github-burntsushi-toml` that should be the well known Go TOML library; asking `rpm -q -l` shows mainly `*.go` files, thus I assume they are mainly used as build dependencies?


Hmm… That makes sense. Though if you are building from source, the only gain you have over regular go tooling is downloading from your favorite mirror rather than github&co. I hope it doesn’t install these source-packages for prebuilt binaries…


I believe they don't allow internet access during the build (so everything needs to be vendored, or things stuck into GOPATH); those packages are probably for the second option.

My impression is that OpenSUSE go packaging is a bit behind though. But I'm not an active packager so this may have been fixed.


The *nix application packaging style was my biggest disappointment when I started playing with Linux back in the 90s.

I know there are many arguments to support this style, but I simply dislike it from an aesthetic point of view, too messy.


How are even Nim (!) and Zig (!!) included there, but .NET -- which has a major developer market share, is now cross-platform, and explicitly supports self-contained executables -- is not even mentioned in the article?? If he strongly dislikes or can't use .NET as an option, he should write that and explain why.

(With some limitations, but .NET applications with .NET dependencies, including the entire .NET runtime, should at least be in there, and you can embed resources like icons or other assorted data into the executable as well.)


αcτµαlly pδrταblε εxεcµταblεs are this and more

https://justine.lol/ape.html


Oh god. I wish people would just stop trying to be cool. Actmally pdrtable ehecmtable? This is prime https://www.reddit.com/r/grssk/ material.


I love how the unconventional name lets me know which people think that names are what matters.


It’s an extremely cool concept. I don’t really see any reason to use it in prod though.


It would be nice as an all-in-one installer: if you have an app you need to run on Windows, Mac, and Linux in your enterprise, you could have one task in your Ansible playbook (or whatever you use) without having to make sure you copy the correct binary.

Unfortunately, it would also be really good for viruses.


We vastly prefer single binary executables on bare-metal embedded. No need to fight against Linux or some other kernel, a libc, and systemd. Updates are trivial and automatic. It's trivially secure and formally verified.


Auditable and buildable from source

  ■ curl -O http://harmful.tool/all-source-code.gz

  ■ Audit ALL of the source code

  ■ Build the "reproducible build" in a sandbox

  ■ Test / Run in a sandbox

  ■ Delete!


May I suggest the utility Eee (http://www.erikveen.dds.nl/eee/index.html).... Combines multiple files into one executable. Works great!


This is what I do. All my programs are single-file exes. In the worst case, if something else is needed (like assets), that single exe retrieves it from the web. It's been like this forever.


That's terrible. Your website will disappear and your executables will be useless.


Nope. Once installed, it does not need the Internet. If there is connectivity it will use it; if not, it will still work (well, except for the collaborative functionality).

You are knocking on the wrong door. Try SaaS instead for your critique.


You can also integrate the assets into the binary itself using something like gresources.


Not quite single binaries, but https://portableapps.com/ has been a great resource over the years.


> have it stowed in your .git repository so that everyone in your team gets the same experience without onboarding scripts

This is ironic. Do people really version their binaries in git repos?


I really hope I can create executable packages, but when some bulky math libs in Python are involved, it really becomes very hard.


You can include all the bulky math libs in the executable. That way if you want some version of Tensorflow and some other executable wants some other version of Tensorflow, there isn't a conflict.

IMO it is no problem if your software is several hundred megabytes in size, I have 8 TB of hard drive. Storage is cheap as long as you aren't using Apple.


I've got a 256GB SSD as my boot drive. Storage is not cheap.


Until your bundled version of Tensorflow isn't compatible with your installed version of CUDA.


So have both CUDA and non-CUDA versions bundled, along with the specific CUDA and cudnn version it needs. Still not that large. Maybe a couple gigs at most for everything, and a LOT of headache saved in return which I think is a fine deal.

I mean, very often the Tensorflow version a piece of software wants doesn't actually have a version available for my currently-installed CUDA version. I'll have CUDA 10.1, 11.0, and 12.6 already on my system for other pieces of software, and your software is hellbent dependent on Tensorflow 2.3 and CUDA 11.1 and I'll end up having to install 11.1 in addition anyway just to run your software.


Until your bundled version of CUDA isn't compatible with your installed graphics card.


If so you wouldn't be able to run it anyway and you'd have to go to CPU only, whether or not you bundle stuff.


I think the author raises some excellent points, but it still feels somehow wasteful to me.


I think the opaque and esoteric nature of packaging, which makes it so damn hard to build packages like this, actually prompts incredible amounts of waste when companies need to deploy their own code on their own machines. You get ridiculously overpowered solutions -- container schedulers and the like -- when all that was really wanted was binary packaging + init scripts or service units.


I'm in the camp that uses Docker almost entirely as a cross-platform (cross-distro, at the very least), very clean (i.e. everything ends up in one place and the location of all data & config is very well documented and easily configured in a standard way for pretty much all packages) package manager.

I could probably get similar benefits from learning, say, dpkg and systemd really, really well... but then that'd only work on dpkg-using Linux distros. Meanwhile, docker commands, docker-based shell scripts, and docker-compose files work just about identically ~everywhere, and can even work nearly-transparently on systems that don't actually support linux containers, like macOS and Windows, via virtualization and some command shims.

I hardly even care that it uses containers. The parts of it I like don't actually require that—though they do encourage people packaging for Docker to document things well and to cut through a certain amount of various packages' config bullshit. Take Samba—it's like 100x easier to configure & manage for common use cases using Docker than most distros' default packages & config files. There's no actual reason that needs to be the case, it's just that the process of containerizing it brought that as a side-effect, but that's one of the main reasons I'm using Docker, not the containerization per se.
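As a sketch of what that looks like in practice (the image name and the environment variable are hypothetical placeholders, not a specific published image; the point is that all data and config are pinned down by explicit mounts):

    # All state and configuration live in two host directories; removing and
    # re-running the container never loses anything.
    docker run -d --name samba \
      -p 445:445 \
      -v /srv/share:/share \
      -v /srv/samba/config:/etc/samba \
      -e SHARE_NAME=media \
      example/samba:latest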


I really hope Microsoft doesn't discontinue WSL1 and instead develops it further to make it the "Wine" equivalent (for running Linux apps on Windows without emulation or virtualization).

WSL2 relies on a hypervisor (i.e. it has to run a Linux VM underneath), so running Docker apps on Windows comes with all kinds of problems, from poor performance to file permissions being messed up.

A mature WSL1 would be able to run almost any Linux app on Windows natively.


> I don't think a rich GUI application like LibreOffice, Firefox, Thunderbird, Slack, Discord, Skype, GMail, etc., can be delivered as a single executable.

The old Delphi-based Skype for Windows was a single executable. At least, most of it was in that big .exe.


Right, but applications can treat the Win32 DLLs (kernel32.dll and friends) as essentially a system-call layer. Windows also gives you a strong compatibility guarantee with those libraries.

Userspace Linux is different in that not only are these libraries typically installed separately, they all have varying degrees of compatibility between versions.


Yeah, anything can be delivered as a single executable, it just requires some tools to package it as such.


glibc, which doesn't really support being linked against statically, spoils this for much of the Linux world. I know about Alpine, thank you.
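The usual workarounds, for what they're worth (a sketch; exact package names and versions vary):

    # Go: avoid glibc entirely by disabling cgo
    CGO_ENABLED=0 go build -o mytool .

    # C: build against musl instead of glibc, e.g. inside an Alpine container
    docker run --rm -v "$PWD":/src -w /src alpine:3.19 \
      sh -c 'apk add --no-cache gcc musl-dev && gcc -static -O2 -o mytool main.c'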


I need an exec cheat sheet for this one.


This is basically what docker does and it’s a godsend.


>basically what docker does

With the added hassle of installing Docker and learning and managing all of its details, and what you get in the end only works on Linux.

Meanwhile, every OS in existence supports running its own executable format with zero other bookkeeping.


Yeah, that’s the crux of the issue. Windows mostly does installers to get around its problems with single binaries. Only OSX has a reasonably decent solution for copying apps between machines.

On Linux the only reasonable solution is an install/executable script and/or docker.


> Only OSX has a reasonably decent solution for copying apps between machines.

Just as long as they're signed. Unsigned apps built on one machine will fail to start (pass audit) on another.


How do you ensure that the client has docker though? Make a cross compiled binary that downloads and installs docker and then runs your image for them? Run it through a browser on another server? The starting point there leads to similar problems.

Also, Docker seems to be on the way towards non-free (or is it already?) / non-FOSS, so it will probably soon be completely out of favor with everyone. Perhaps other tools like jails, bhyve, etc. will rule the space soon.


Well, the vast majority of people aren’t running operating systems that require the use of docker or even really support it. So now you’re looking at 5% of users who would need to install docker.

Of those users 90% are running Debian. So you insert apt install docker into your script. The remaining .4% are using red hat. So you insert yum install docker into your bash script.

Then you add a line to start your image, now put the bash script on your website, voila, single file program that works anywhere.
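A sketch of such a bootstrap script (the package names and the image are placeholders; real package names vary by distro and release):

    #!/bin/sh
    # Install Docker if it is missing, then hand off to the containerized tool.
    if ! command -v docker >/dev/null 2>&1; then
      if command -v apt-get >/dev/null 2>&1; then
        sudo apt-get update && sudo apt-get install -y docker.io
      elif command -v yum >/dev/null 2>&1; then
        sudo yum install -y docker
      fi
      sudo systemctl enable --now docker
    fi
    docker run --rm -it example/mytool:latest "$@"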


I'm really confused by this statement. The vast majority of people are probably running Windows, macOS, Ubuntu, Arch, (then others). All of those support Docker; the only ones I can think of that don't are FreeBSD-based (mostly people's NAS OS?). Then there's mobile, but I usually ignore that, since most of my dev is on the biology-in-ML side with large-ish data (a few TBs?).

Most of the users of this software may not understand anything about software dev, so they need it to work on a laptop with an external hard drive -- no cloud compute or cloud storage (to keep cost low for them) -- and it can't require a long list of commands to get something working.

One command to install and run Docker seems to be a decent solution, but the program to do that needs to be accessible from any OS without any guarantee of Docker being installed.


Except the version of docker that ships with Ubuntu apt is out of date. The snap install doesn't add your user to the docker group.


So is the kernel…

Once you have root, just do whatever you want: replace the binary, install another version, upgrade the kernel, and then run your app.


1) Bundling dependencies into a blob is a disaster for security and maintainability.

The blob might be a statically-linked executable, a flatpak, a docker container and so on.

All these methods go against OS-based packages and prevent end users from updating vulnerable libraries.

It is well proven that things like containers on docker hub age like fish, leaving many unpatched images around.

This is why dynamic linking and Linux distributions exist! Stop reinventing the wheel in a worse way.

2) Network bandwidth for updates is far from abundant for the less wealthy 4 billion people on the planet.

3) Same for drive space. A lot of phones/SBC/IoT devices have limited storage.

4) Same for I/O bandwidth. (E.g. even on a new PinePhone bulky applications take many seconds to start.)


-3 downvotes? Amazing what kind of crap the HN community has become.



