
Warp – self-contained, single binary applications - GordonS
https://github.com/dgiagio/warp
======
open-source-ux
It's fascinating how self-contained binaries are now coveted as such a
desirable feature. Dozens of languages (if not more) were capable of this
decades ago. In fact, some of them could generate self-contained binaries that
let you bundle app assets like icons and images inside the executable - no
add-ons needed. And they were native executables, not complicated wrappers
built from layer upon layer of different technologies.

This is not to take anything away from this project, just making an
observation. The programming field often feels like it's in a giant loop
constantly re-discovering what went before.

~~~
weberc2
> The programming field often feels like it's in a giant loop constantly re-
> discovering what went before.

It might feel that way, but this is driven by economics. Basically, disk,
memory, and bandwidth are far cheaper than they used to be, so who cares if you
waste a GB or so copying the same libraries everywhere if it means you don't
have to solve dependency hell?

~~~
Koshkin
> _disk is far cheaper_

Not in the cloud.

~~~
weberc2
To be clear, you're asserting that the price of disk in the cloud is
comparable to what it was decades ago?

~~~
Koshkin
Things are more complicated than that. (Depending on the pricing scheme, you
might end up paying per gigabyte per hour, for example; which may or may not
include transfer/storage of actual data; a faster "drive" could be much more
expensive than a slower one, etc.)

~~~
weberc2
Sorry, I'm not following. Granted pricing schemes have changed, but adjusting
for that, are you claiming that disk is not dramatically cheaper than it was
~20 years ago? Not trying to be a dick, just not following your point.

~~~
Koshkin
Point is, "disk" in the cloud is not the same thing as the disk in your PC.

~~~
ecnahc515
No, the point is that it doesn't matter where the disk is; the overall price
has gone down.

------
tgtweak
The latest .NET Core 3 apparently supports single-binary packaging.

It has been an often-requested feature for at least 2 years.

[https://github.com/dotnet/corefx/issues/13329](https://github.com/dotnet/corefx/issues/13329)

(The last comment referring to warp is indicative)

~~~
wokwokwok
.NET Core 3 won't be out until next year.

The current .NET Core 2.x runtime has had this feature since it came out
though (last year?).

(Hello World + runtime is like 100 MB and several hundred files.)

~~~
MarkSweep
For what it's worth, the IL Linker can get Hello World down to 20MB and as many
files:

[https://github.com/dotnet/core/blob/master/samples/linker-instructions.md](https://github.com/dotnet/core/blob/master/samples/linker-instructions.md)

If you use some extra options to disable crossgen and link
System.Private.CoreLib, you can get the size down to 10MB, at the expense of
startup time:

[https://github.com/AustinWise/IlLinkerExample](https://github.com/AustinWise/IlLinkerExample)

------
w8rbt
Very neat. I'd like to see more languages targeting single binaries. I believe
this is one of the reasons Go has become so popular.

~~~
jpic
I wish <my favorite language> also had this. However, containers work. Not
sure why they do it for Java though; I thought Java has been happy with jars so
far.

~~~
weberc2
We're struggling with containers for our Python and JavaScript applications.
I think we're using venvs and node_modules where we oughtn't (or at least we
shouldn't keep them in our project directory, where they sometimes--but not
always--get overwritten by source-code volume mounts).

~~~
TheDong
It can be a little tricky to do things right here.

There are a few things of note:

1\. The .dockerignore file can be used to prevent your `node_modules` folder
from being pulled in during `docker build`, even if you have something like
`COPY . .` in your Dockerfile (see the sketch after this list). This can let
you create a fresh `node_modules` folder from your lockfile as part of the
docker build process, to create an image for testing/deployment.

2\. You can maintain separate Dockerfiles and such for development and for
release, so e.g. for development you might use volumes and not copy things in,
but for release you wouldn't use volumes for source code.
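
For instance, a minimal sketch of point 1 (the file names and the npm-based
setup here are just assumptions for illustration):

    # .dockerignore (hypothetical) - keep host-built artifacts out of the build context
    node_modules
    .venv

    # Dockerfile excerpt (hypothetical)
    COPY package.json package-lock.json ./
    RUN npm ci     # recreate node_modules inside the image from the lockfile
    COPY . .       # .dockerignore keeps the host's node_modules out of this copy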

I can't tell from what you said, but it seems like one of those two tips might
be relevant.

It's also okay to have a development setup that doesn't use containers and then
use containers for deployment (so long as you have an integration or staging
environment that uses containers). It's quite reasonable to have a venv during
development but to skip the venv entirely inside the Docker image, since things
will already be reasonably isolated inside the container's fs.

~~~
weberc2
Thanks for the suggestions! I had to check, but we do both of those things. To
make sure that node_modules isn't overwritten, we mount an empty named volume
at the node_modules directory. I guess mounting the named volume there somehow
causes the node_modules directory from the base image to appear inside the
bind-mounted directory. We do the same thing for venvs as well.
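
In compose terms, the setup looks roughly like this (service name and paths
are made up):

    # docker-compose.yml excerpt (hypothetical names/paths)
    services:
      app:
        build: .
        volumes:
          - .:/app                          # bind-mount source for live editing
          - node_modules:/app/node_modules  # empty named volume shadows the host's node_modules
    volumes:
      node_modules:

(Docker populates an empty named volume from whatever the image already has at
the mount path on first use, which is presumably why the base image's
node_modules shows up there.)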

We use venvs inside of the Docker container because we use pipenv and
apparently pipenv's support for installing to the system is buggy and/or
idiosyncratic.

~~~
TheDong
Named volumes are hacky to no end, and you shouldn't be using them just to make
sure that directory is empty; you should be using `.dockerignore` or being
careful about what you copy in.

The pipenv bit is probably more reasonable, and I'm afraid I haven't used it
enough to be sure what rough edges are likely there.

------
TheDong
This sounds like a much less general binctr [0].

I don't see how warp handles the most trivial case of a dynamically linked
executable; I see no references to patchelf or other such tricks to ensure it
looks within the application's cache directory for dynamically linked
libraries, nor do I see any mount namespaces.

Does warp do that? Why would I use warp if it doesn't when there are more
generic solutions that can handle such a case, namely binctr, docker, guix
pack, nix....

[0]:
[https://github.com/genuinetools/binctr](https://github.com/genuinetools/binctr)

~~~
sagichmal
> This sounds like a much less general binctr.

binctr is Linux-only, and appears to need Docker as a runtime requirement?
This has neither constraint, apparently?

~~~
TheDong
> appears to need Docker as a runtime requirement

binctr does not require any runtime component other than a modern Linux kernel
with unprivileged user-namespace support.

Targeting OSX / Windows does make warp distinct from what I mentioned, though
I don't use those OSes so I admit I didn't think about that possible issue.

Anyway, I thought this was a solved problem on both Windows and OSX since they
both have conventions for self-contained applications, while linux does not.

~~~
sillysaurus3
_Anyway, I thought this was a solved problem on both Windows and OSX since
they both have conventions for self-contained applications, while linux does
not._

This is mistaken for Windows. Even if programs are run from Program Files,
they will often source their DLLs from other locations.

------
metaphor

      tVQhhsFFlGGD3oWV4lEPST8I8FEPP54IM0q7daes4E1y3p2U2wlJRYmWmjPYfkhZ0PlT14Ls0j8fdDkoj33f2BlRJavLj3mWGibJsGt5uLAtrCDtvxikZ8UX2mQDCrgE
    

Anyone know what this _magic_ is all about?

Honestly not feeling warp-packer downloading executable blobs during runtime
either.

~~~
dgiagio
It's a placeholder in the final binary that gets replaced by the target
application executable file name so Warp knows what to run.

This is a known and established technique. I don't have the link right now
(mobile) but .NET Core does something similar for their native deployment, for
example.
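
For illustration, the packer side of that technique looks roughly like the
following simplified C sketch (not the actual implementation, which is written
in Rust; the placeholder here is a made-up, shorter one):

    #define _GNU_SOURCE          /* for memmem() on glibc */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Hypothetical, shortened placeholder; the real one is the long string
       quoted above. It only needs to be long enough to hold any target name. */
    #define PLACEHOLDER "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"

    /* Overwrite the placeholder inside the runner binary with the name of the
       executable the runner should launch, NUL-padded to the slot's length. */
    static int patch_runner(const char *runner_path, const char *exec_name)
    {
        FILE *f = fopen(runner_path, "r+b");
        if (!f) return -1;

        fseek(f, 0, SEEK_END);
        long size = ftell(f);
        rewind(f);

        char *buf = malloc(size);
        if (!buf || fread(buf, 1, size, f) != (size_t)size) {
            fclose(f);
            free(buf);
            return -1;
        }

        size_t plen = strlen(PLACEHOLDER);
        char *hit = memmem(buf, size, PLACEHOLDER, plen);
        if (!hit || strlen(exec_name) >= plen) {
            fclose(f);
            free(buf);
            return -1;
        }

        memset(hit, 0, plen);                       /* clear the whole slot */
        memcpy(hit, exec_name, strlen(exec_name));  /* write the real name  */

        rewind(f);
        fwrite(buf, 1, size, f);
        fclose(f);
        free(buf);
        return 0;
    }

    int main(int argc, char **argv)
    {
        if (argc != 3) {
            fprintf(stderr, "usage: %s <runner-binary> <exec-name>\n", argv[0]);
            return 2;
        }
        return patch_runner(argv[1], argv[2]) == 0 ? 0 : 1;
    }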

------
marcoperaza
From the README, it looks like this unpacks the application and dependencies
into a local cache in the user's home directory on first run.

Is it possible to instead execute the compressed application code directly and
present the resulting process with a virtualized file system that decompresses
the dependencies on the fly too? Then you could run the single binary without
giving it any filesystem access. One way to do it would be to have the parent
process provide some routine that the kernel can call for reading a particular
"file". Or perhaps just have standard embedding and compression methods that
the kernel supports, so you would just provide the offsets to the kernel.

I guess what I mean is: do any platforms provide the necessary syscalls to
pull that off?

~~~
jordwalke
I asked this question just yesterday. The challenge (the one I was interested
in) was whether this can be done in a general-purpose way for arbitrary
programming languages, so long as they go through libc for file system access.
I actually think it might be theoretically possible using LD_PRELOAD and some
equivalent on OSX. I don't know why I'm compelled to want the executable to
leave no trace on disk, but it seems much cleaner. However, if you _write_ to
the virtual file system, that will need to change the packed binary itself,
which might cause problems (the binary would grow/shrink as you pack more or
fewer files into it at runtime by writing to the "disk").

[https://twitter.com/jordwalke/status/1050277143085608960](https://twitter.com/jordwalke/status/1050277143085608960)

Maybe it's just not worth it and a temp directory is not so bad. I just worry
about little changes to the way `mktemp` works across OS updates, etc. A
totally self-contained executable that leaves no trace, with virtual file
system support, is about as isolated and reliable as you could ever hope for.
I just don't think anyone has been determined enough to achieve it.
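
On Linux, the shim idea amounts to something like the following hypothetical
sketch (the virtual prefix and the payload lookup are placeholders, and it only
covers `open`):

    #define _GNU_SOURCE               /* for RTLD_NEXT */
    #include <dlfcn.h>
    #include <fcntl.h>
    #include <stdarg.h>
    #include <string.h>
    #include <sys/types.h>

    /* Hypothetical LD_PRELOAD shim: intercept open() and redirect paths under
       a virtual prefix into the packed payload instead of the real filesystem.
       Build: gcc -shared -fPIC -o shim.so shim.c -ldl
       Run:   LD_PRELOAD=./shim.so ./myapp                                     */

    typedef int (*open_fn)(const char *, int, ...);
    static open_fn real_open;

    /* Stub for "look this path up in the embedded archive and hand back a
       file descriptor", e.g. one created with memfd_create(). */
    static int open_from_payload(const char *path, int flags)
    {
        (void)path; (void)flags;
        return -1;
    }

    int open(const char *path, int flags, ...)
    {
        if (!real_open)
            real_open = (open_fn)dlsym(RTLD_NEXT, "open");

        mode_t mode = 0;
        if (flags & O_CREAT) {
            va_list ap;
            va_start(ap, flags);
            mode = va_arg(ap, mode_t);
            va_end(ap);
        }

        if (strncmp(path, "/virtual/", 9) == 0)
            return open_from_payload(path, flags);

        return real_open(path, flags, mode);
    }

(DYLD_INSERT_LIBRARIES is the rough macOS equivalent, though SIP restricts
where it applies.)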

~~~
marcoperaza
That’s a good idea, doing it by shimming libc, but it feels kind of fragile.
Go, for example, does not use libc and makes syscalls directly instead, so this
would not work for any fs-accessing Go code either in the application or its
dependencies. Even for things that do go through libc, wouldn’t you end up in
a cat-and-mouse game, adding shims for any new file operations added to the
kernel? I suspect even shimming the _existing_ interfaces is less
straightforward than it first appears.

I don’t think you can have a truly robust solution without kernel support.
Maybe the FUSE kernel-side code already supports, or could be modified to
support, per-process ad-hoc virtual file systems? If done right, you might
even get existing fs caching and other optimizations for (close to) “free”.

I also can’t quite explain why I find this so desirable. It just feels
intuitively cleaner and more robust to not have to spray files elsewhere into
some cache.

------
dmitrygr
Based on the readme, this is basically an SFX archive that runs a binary upon
completion of extraction to a temporary folder, much like WinZip and the like
have created for ages.

~~~
paavoova
Haven't read the link, but that's quite terrible, and the title is misleading.
No idea why it's getting buzz in this thread. It means that if tmpfs or an
equivalent isn't available, it will cause disk writes on each application
launch. Or, if it caches the unpacking, it's susceptible to injection. And
suppose the binary doesn't have write access to the filesystem?

~~~
GordonS
It's the latter - it caches and is susceptible to injection.

------
sandGorgon
The best of breed here is zeroinstall -
[https://0install.net/](https://0install.net/). The software catalog is here -
[http://0install.de/catalog/](http://0install.de/catalog/)

It comes with its own SAT solver for dependency resolution. zeroinstall is what
Canonical was considering before it created Snap Packages.

~~~
pbhjpbhj
Any idea why they went with snap instead? There seems to have been a recent
proliferation of new packaging systems -- which is fine. But, from an Ubuntu
perspective I don't think they should be included if they're not integrated
with apt (perhaps it is and I missed the memo?).

Digikam moved to AppImage, which sounds cool but adds a further system one
needs to update, and the automatic update never worked for me (I think their
package didn't include the necessary metadata?).

Having all application updates available in one place, and having official
packages be recent, is the 'killer feature' for me. I used to be a Slackware
user, but when I migrated to slapt-get I realised I was done with rolling my
own packages and checkinstall-ing them, and wanted a system where things were
more managed - not having half the apps installed ad hoc.

When I have to install a new system to get just a few apps, it feels like I
might as well go back to ./configure;make;[check|make ]install.

Seems we need an xdg install record with a unified app that calls the sub-
system (pip, 0install, AppImage, shell script, dpkg, whatever) to abstract all
that away.

------
oscargrouch
It's funny. I'm creating a multi-OS application platform based on the Chrome
runtime, and that's how I'm packaging my multi-target binaries.

I'm using a SQLite btree file as the backend, working basically as a key-value
store. In the header I define which target triples are supported (the ones the
dev built binaries for), and when the runtime opens it, it checks whether the
current host is supported and unpacks the binary payload, always checking the
hash before running it. (If it's already on the filesystem, it checks whether
the hash matches the one previously saved as a record in the DB.)

Given that it's a DB, the owner of the public-key signature (the creator of
the file) can modify it and add other binaries later.

I'm not disclosing it now, because it's just part of a bigger platform, and
you don't distribute only the apps but a whole "container" (not in the Docker
sense, but a multiplatform container more focused on end users instead of
cloud backends).

I hope I can launch it soon here on HN.

------
angst
I hope it can use an already-downloaded executable for warp-runner. Downloading
executables from the Windows console doesn't work too well in my corporate
environment. Currently, placing windows-x64.warp-runner.exe in the same
directory as warp-packer doesn't help. I really like the idea though. Simplest
third-party solution I've encountered so far for this purpose.

------
chriswarbo
Reminds me of
[https://github.com/solidsnack/arx/](https://github.com/solidsnack/arx/) which
I came across recently. I've not used it, but it sounds like it takes a similar
decompress-then-run approach, but using a shell script as the runner rather
than a native binary.

------
EdSchouten
And for Python there is Subpar:
[https://github.com/google/subpar](https://github.com/google/subpar)

------
fiatjaf
I don't understand one thing: is this for any kind of application, or just for
Rust? How will this thing know how to run my bizarre app in a bizarre language?

~~~
treve
It downloads the runtime for your bizarre language, assuming it's supported.

------
billconan
It doesn’t seem to package my app correctly. I didn’t use dynamic loading, but
it still says it can’t open dependencies.

------
sigmonsays
So, a bash self-extracting tarball, like in the 2000s??

[https://www.linuxjournal.com/node/1005818](https://www.linuxjournal.com/node/1005818)

------
mychael
What problem does this solve? How does it differ from Docker?

~~~
tomc1985
This solves some of the app-packaging pain that goes into building desktop
apps. Not sure how this works, but it probably bundles dependencies into the
binary a la static linking. Docker mocks the OS and virtualizes; it's a very
different approach.

I'm stoked for anything that eases the pain of desktop deployment that isn't
Electron. Right now the cross-platform story is... complicated

~~~
solarkraft
But it seems like it continues one of the pain points of Electron: shipping
hundreds of MBs of the same dependencies for every single app.

~~~
tomc1985
Indeed, but it's either that, or you deal with the dependencies at deploy time,
or you write your software to not have any dependencies.

------
matthewbauer
I think it’s funny to use Java as an example when jar files have been around
forever. Still, there are some use cases for other things where static linking
is not possible.

I wonder what benefits this has over other solutions? Originally, when it said
multi-platform, I thought it meant that one binary could run on multiple
platforms, like a fat binary. But I think all of the examples require you to
have a specific target in mind.

~~~
monocasa
There's something to be said for the fact that these are native binaries.
Like, under Windows you'd be able to just call CreateProcess with the file.
Jars never did that and intrinsically needed an interpreter to run them.

~~~
rictic
This. A .jar file could be anything. Is it a library or an executable? When I
run it, do I need to make sure its dependencies are on my classpath? What're
the java command line flags I need? Do I have the right version of java on
this machine?

There's something to be said for an entirely self-contained native binary, at
least as an easy on-ramp for users.

~~~
GordonS
Unfortunately, it turns out this is not a completely self-contained binary -
it's more of a packer. So if you pack a Java app, you're still going to need
the JRE, and if you pack a .NET Framework app, you're still going to need the
.NET Framework, etc.

~~~
haxton
So I don't know what this is doing. But I know that Microsoft is developing
WPF to run in a fully self-contained .NET Core binary, meaning there is no
requirement to have any preinstalled software to run it. I would assume this
technology will carry over to most any .NET Core app.

~~~
GordonS
Yes, any .NET Core app can already be 'published' as a self-contained app.
That basically means it ships along with the .NET Core files (there are a lot
of them), so you don't need to have a globally installed .NET Core to run it.

What Warp does is act like a packer, creating a thin, native executable that
compresses and embeds all the files required to run your app, then
decompresses them to disk the first time the app is executed.
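
A rough sketch of what the runner half does on launch (hypothetical cache path
and executable name; the real implementation is in Rust and also handles
hashing, Windows/macOS paths, and so on):

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/stat.h>
    #include <unistd.h>

    /* Hypothetical sketch of the runner half: on first launch, unpack the
       embedded payload into a per-user cache directory, then hand control to
       the real application from there. */
    int main(int argc, char **argv)
    {
        (void)argc;
        const char *home = getenv("HOME");
        if (!home) return 1;

        char cache[4096], target[4096];
        snprintf(cache, sizeof cache, "%s/.local/share/myapp-cache", home); /* made-up path */
        snprintf(target, sizeof target, "%s/myapp", cache); /* name the packer patched in */

        struct stat st;
        if (stat(target, &st) != 0) {
            mkdir(cache, 0700);
            /* extract_embedded_payload(cache);  -- decompress the archive
               appended to this executable into the cache (omitted here) */
        }

        execv(target, argv);  /* replace ourselves with the real application */
        perror("execv");
        return 1;
    }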

------
ianl
Beautiful

