Show HN: dockerc – Docker image to static executable "compiler" (github.com/nilsirl)
374 points by NilsIRL 9 months ago | 135 comments



This is really great.

I have been trying to get my docker to be more distributable. Right now it's just a simple python script in a python env inside a docker container inside a QEMU container to automate a click and then netcat something.

Pretty sweet. It's only like 20GB, so pretty lightweight by modern standards.


/s this please, people might get some ideas otherwise...


So, someone else mentioned this could be sarcasm, but I've got a real application that could use this. It's a complex multi-domain modeling tool incorporating multiple C binaries, Python and Julia code. We currently deploy with Docker, but there are too many parameters a user could set incorrectly in the Docker Desktop interface. We'd love to ship a static binary of this behemoth to cut down on user/sysadmin deployment errors.


Yeah, it was/is a mix of sarcasm and genuine use case. I do have a thing (almost) like this, but use it because I want the isolation of network, filesystem, and so on. I definitely hate that it requires so much (not 20G, but still big) for so little. There's probably a way to do the thing and get a 1Kb binary with some Go-with-namespaces solution or something, which would be much better, but I don't trust myself to implement it correctly even if I spent the time to learn it. So I'm stuck with a horrible solution of a several-GB Debian docker image because musl/Alpine doesn't cut it.


Oh yeah, we feel this pain point as well re: large and unwieldy containers. The best approach we've been able to come up with is ship a reliable if large container image and slowly factor out or reduce our heaviest and least reliable dependencies. But it's a looooong process.


Debian slim should be around 60 MB. Looks like there's something else wrong.


You jest, but Python is the poster child for "works on my machine."


Unless you're on Windows 11. You open the console, type "python", and for some reason the Windows App Store launches and goes to a "python app". You think to yourself you should probably install Python from the official source, so you go to Python's website, download and run the installer, select "add to path", run it, and it STILL goes to the Windows App Store. So at this point you Google it. You find that you need to use the Windows search bar to find "Manage app execution aliases" and unselect the aliases using python. You then try calling "python" again in CMD, and it's not found. You open the user environment variables GUI and find that the PATH was not modified. You again Google "where is python installed", then manually append that to PATH, and it works. This is the wonderful experience of installing Python on Windows.


Or the PATH was updated, but you forgot that you need to restart your command prompt to refresh the environment variables...


Or you just use the app store version and it all works fine?


I absolutely love how we are going full circle to portable executable binaries but with embedded OS. Taking the whole "it works on my machine" to a whole new level of troubleshooting hell.

Awesome project though.


Every decent application should be a portable executable binary!


— UEFI authors


But not like that.


I wonder, how does this approach differ from a unikernel?


I can't wait for people to start sending me dockerfiles that run these things, made from docker containers that run more of these things...


The modern ELF: Bash, but instead of running executables and piping stdin and stdout, your primitives are docker containers.


I think that already exists in the form of buildpacks.


In the past I've used and recommended nix-bundle¹ or its first-party counterpart `nix bundle`² for this. That lets you skip the step of managing the creation of the Docker image, which is nice.

I suppose dockerc could be convenient when you already have a Docker image on hand, especially if it was a PITA to create or its creation is a lost art.

Besides fat executables, `nix bundle` also lets you create Docker images, AppImages, and images/executables in a few other formats.

--

1: https://github.com/matthewbauer/nix-bundle

2: https://nixos.org/manual/nix/unstable/command-ref/new-cli/ni...
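For reference, a hedged sketch of the CLI (the toDockerImage bundler name comes from the NixOS/bundlers flake; verify against current docs):

  # default bundler: a single self-extracting executable
  nix bundle nixpkgs#hello
  # other bundlers produce other formats, e.g. a Docker image
  nix bundle --bundler github:NixOS/bundlers#toDockerImage nixpkgs#hello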


The Reddit screenshot is hilarious. But it reflects my feelings whenever I want to install a product made by many of these self-proclaimed "open source" companies that bend over backwards to make it almost impossible to install the open version (...and often to discover the crucial features are missing anyway).


That was SourceForge really, I suppose, wasn't it? I enjoy GitHub for the opposite reason (I am a 'smelly nerd' and do want code more often than an executable; well, actually they said '.exe', and I never want that), but I've definitely come across repos, even maintain one^, where that kind of real world leaks in: people not really realising GitHub's not for them and then getting frustrated by it or just doing strange things.

(^I'm a collaborator on Awesome-CV, a LaTeX CV/resume template, and we get a lot of PRs from people trying to edit their own information & experience into the example. I keep meaning to set up a template repo showing how to use it properly as a LaTeX class, maybe including rendering in Actions, which is basically how I use it myself, and more strongly point people towards that in the readme, maybe even remove the example. But I've been saying that for years, since before 'template repo' existed in fact.)


I had to look it up because I couldn't believe it was real but... https://old.reddit.com/r/github/comments/1at9br4/i_am_new_to...


I made the mistake of looking at the comments of that post’s author. All I can say is quite a few people would probably be glad to know that the author didn’t figure out how to install that OSINT tool.


Interestingly, the OP also doesn't appear to have changed their tone since then. They were even active within the last week.


There is some great cosmic irony here. A section about never needing to build, install, etc., just give me an executable, followed immediately by an incantation for Zig to build this project.


That's for the developers to build the binary, not the end users. But yeah.


Nice idea! How does this actually work? I'm guessing it's wrapping a custom loader + docker + the image into an executable binary - or something like that?


Yep pretty much.

The executables bundle crun (a container runtime)[0], and a fuse implementation of squashfs and overlayfs. Appended to that is a squashfs of the image.

At runtime the squashfs and overlayfs are mounted and the container is started.

[0]: https://github.com/containers/crun
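For illustration, a minimal sketch of that layout (file names are hypothetical; the real runtime also needs to know where the appended squashfs starts inside the file):

  # pack the container's root filesystem into a squashfs
  mksquashfs rootfs/ image.squashfs
  # append it to the self-contained runtime (crun + FUSE squashfs/overlayfs)
  cat dockerc-runtime image.squashfs > app
  chmod +x app   # a single self-contained executable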


Apologies for my ignorance, but how does squashfs fit into this picture? It seems I'm missing some pieces of the puzzle.


OK, I've checked the code a bit. It uses squashfs to pack/unpack the OCI bundle, and as the lower layer in OverlayFS.


Sounds like a universal AppImage


This is awesome Nils! So happy to see the progress on the project since we chatted at the AGI House :) (I'm Syrus, from Wasmer)

dockerc works by using: Zig + crun + squashfs/overlayfs. Nils (the author) posted a bit more in this thread for anyone interested: https://news.ycombinator.com/item?id=39621573


So what does this mean? That I can finally ship a portable Ruby executable without requiring the end user to install Ruby?


Yep! It works with any docker image!*

*: https://github.com/NilsIrl/dockerc/issues/6


You should also be able to do that with Cosmopolitan/APE.


I've tried to compile libCello with Cosmopolitan with little to no success; I don't think the Ruby runtime will be any easier, sadly.

Could also be a skill issue.


Would you be able to expand on this? My impression is that these tools build a custom C program instead. Thanks


They provide a compiler and bundled userland that can be used to compile many C programs (even, for example, a slightly patched CPython) for a multi-OS binary distribution. Ruby shouldn't be that different.
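A hedged sketch of that workflow using Cosmopolitan's cosmocc compiler wrapper (details vary by release):

  # compile to an Actually Portable Executable
  cosmocc -o hello.com hello.c
  ./hello.com   # the same file runs on Linux, macOS, Windows and the BSDs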


You can already do this with these forks of ruby-packer and traveling-ruby:

https://github.com/ericbeland/ruby-packer

https://github.com/YOU54F/traveling-ruby

ruby-packer is what I use to distribute a paid CLI, although on macOS only because my product is specifically for customers on macOS.

The advantage of ruby-packer is that it is much simpler, but you need to have access to each OS where you want to distribute your executable. OTOH, with traveling-ruby, you can build executables for all OSes from the same machine.
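For a rough idea, a hedged sketch of ruby-packer's rubyc CLI (check the fork's README for the exact flags):

  # compile a Ruby entrypoint into a single self-contained executable
  rubyc bin/mycli -o mycli
  ./mycli --help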


Had no idea about the actively maintained fork, thanks!


You'll still need different ones for different architectures...

At this point you might as well compile statically and include a virtual filesystem. Pretty much what Sun created in the 90s.


Awesome use of that rant pic.

Next rant pic: When I RUN a F*ING EXE it should open a Window with the application in it you smelly nerds!!!!


I know it feels like being pedantic and missing the joke, but "just give me the EXE" is a terrible bug, not a feature request. Distributing unauthenticated, untraced OS-level binaries is just dangerous in the modern world. Safe app distribution requires either elaborate sandboxed runtimes (browsers) or carefully curated and maintained lists of known-safe binaries (app stores, distro package repositories).

We can't be doing this "GIVE ME THE .EXE" anymore. Those days are gone.

Even building from source is questionable, but at least there it requires that the installer be part of (or at least adjacent to) a community of developers who can be expected to have the expertise to notice and recognize bad actors pushing code.


> We can't be doing this "GIVE ME THE .EXE" anymore. Those days are gone.

We can and should still do that for people who want it (i.e. most people). The security conscious can decline to use them, but that doesn't mean the rest of us should have a worse experience.


If you're considering "just give me the exe" as binaries shared between multiple people, I wholeheartedly agree that it's a mistake, but the context here is a person wanting to download the binary from the author themselves.


How does an average user authenticate "the author themselves"? Again, you or I understand how github projects work and can figure out within a minute or two whether or not this is the right group or a legitimate project.

But if you're just a "GIVE ME THE .EXE" person, how do you know the binary you're looking at is a legitimate network scanner or keyboard mapper or game cheat or whatever? You don't. You can't. You just followed a link from someone else who thought it was.

The basic point is that software in the modern world is too complicated to require regular users to validate. They can't do it. And so we need to have trusted authorities like distros and app stores to do it for them, even (especially) when they demand we JUST GIVE ME THE .EXE.


I wish we could have something like exes with permissions. Similar to browsers. So I could run an arbitrary executable, but the OS level APIs would be blocked unless the user allowed the given permission.


That would mostly be a browser, though. Changing the language used for the API to C (or whatever) from Javascript is mostly cosmetic, existing interpreter/JIT engines are extremely optimized, you can target basically anything to wasm, etc...

The problem isn't the technical hurdle, it's that sandboxed apps really aren't what we want in a lot of cases. There remain a lot of use cases for native apps that interact directly with the hardware in ways that are hard to abstract safely. Games need the whole GPU, backend middleware needs the raw network stack, you want to set up routing tables or a custom NAS, etc...

Those requirements don't go away even when "most" stuff can be done in a browser-equivalent sandbox. And... you need to rely on your Linux distro for those things still, or at least compile from an active github project. You can't just get raw binaries from whoever and expect to be safe.


What's wrong with flatpak for that? Honest question as I don't know that space really


it should not be the Flatpak app that decides its own appropriate permissions, but the owner of the OS where it runs. Even when (especially when) the permissions profile disagrees with what the developers requested. Whatever permissions the app "requires" should be irrelevant. Only those granted by the user will be given.

Flatpak and Snap and other systems that conflate packaging and permission management get it totally wrong. Permission management is an OS issue, not a packaging issue. Thus, distributing a plain static executable or a python script should be just as safe as a "safely packaged" app.
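Concretely, the user-side control does exist today; a hedged example with Flatpak (org.example.App is a placeholder ID):

  # deny home filesystem access regardless of what the app's manifest requests
  flatpak override --user --nofilesystem=home org.example.App
  # revoke network access
  flatpak override --user --unshare=network org.example.App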


APKs are like that


I'm surprised no one mentioned that the program the unstable ranter wants is essentially a social media stalker app to find people on all social media sites.

Perhaps some barriers to use are good sometimes.


If they don't want people to find their social media profiles, then they shouldn't have social media profiles.


Unfortunately to access Lynda, you have to make a LinkedIn account


Who is Lynda?


She is a really good teacher of ActionScript 3.0, at least last time I checked.


Nostalgic


I know, she was asking for it right?

Nice victim blame.


"Victim", as understood by most people, implies that someone did something wrong to them. There's nothing wrong with seeing someone's profile online, so your analogy doesn't apply.

Thought exercise: Am I being a "victim" of you, because you posted a comment disagreeing with me?


It's true that there is no crime here. I really hope that's exactly what you mean.

However, this person is unstable, or at least very aggressive, and is clearly trying to track someone. I really hope you don't mean that because someone is publicly visible it's OK to do what you want to them.

I.e., "asking for it" because a person was around to commit a crime against, so it's their fault for not staying behind locked doors with a shotgun their whole life.

You are very close to simply stating that having an online profile makes it OK to do things to someone, because you have dictated that they shouldn't have one if they don't want to be the victim of a crime in the future. You are also sort of imposing on everyone in the world your view that social media is not a necessary part of life nowadays (that is a different debate).


This isn't an EXE, though.

The next step is to repackage this inside an EXE that runs a Linux virtual machine.


Weeell... There's a feature list with the following pending entry:

> MacOS and Windows support (using QEMU)


Let me start by saying this looks like a fun project to work on and, honestly, that's reason enough for doing it.

As a solution to the problem of app distribution, I do have some concerns, though:

How do you deal with resource sharing? This starts with just filesystem mounts, but also concerns ports, possibly devices, and probably many other things I'm forgetting. Is this somehow configurable?

How does this compare to AppImage? IIRC that also puts everything into a squashfs.

If a user without CAP_SYS_USER_NS executes one of the binaries built by dockerc, do you handle that gracefully in any way?


> How do you deal with resource sharing? This starts with just filesystem mounts, but also concerns ports, possibly devices, and probably many other things I'm forgetting. Is this somehow configurable?

I'm not too sure what resources you're talking about in general. Mounts are in a temporary location so they shouldn't conflict; each container uses two mounts while it is running. In terms of ports, you won't be able to have multiple applications using the same port (whether they are built with dockerc or not). As for devices, I don't think there are any issues there.

> How does this compare to AppImage? IIRC that also puts everything into a squashfs.

It's very similar to AppImage in spirit. I haven't looked at the AppImage implementation but I suspect a lot of things are similar.

The difference with AppImage is that this makes it trivial to convert existing docker images into something that can run as an executable. It also offers stronger hermeticity guarantees as the application runs inside of a container.

> If a user without CAP_SYS_USER_NS executes one of the binaries built by dockerc, do you handle that gracefully in any way?

It's not something I've paid much attention to. This falls back to the container runtime, which currently outputs "clone: Operation not permitted" when run with `sudo sysctl -w kernel.unprivileged_userns_clone=0`.
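A hedged way to check that knob (it's a Debian-style sysctl; other distros gate user namespaces differently):

  # 1 = unprivileged user namespaces allowed; 0 = dockerc executables will fail as above
  sysctl kernel.unprivileged_userns_clone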


What about Cosmopolitan and WASM? ;)

Cosmopolitan libc allows compiling a single binary that runs on multiple OS platforms without modification. Maybe dockerc could use this to create a more universally portable container binary?

WASM binaries could run in the browser, or another WASM VM, including securely sandboxed environments that spin up fast.

I understand that newer Docker builds can use WASM under the hood, so WASM in WASM would be funny. If that were the case, maybe extracting the WASM and using a thinner wrapper would be better?


Unfortunately Cosmopolitan wouldn't work for dockerc. Cosmopolitan works as long as everything is built against it, but container runtimes require additional features. Also, containers contain arbitrary executables, so I'm not sure how that would work either...

As for WASM, this is already possible using container2wasm[0] and wasmer[1]'s ability to generate static binaries.

[0]: https://github.com/ktock/container2wasm

[1]: https://wasmer.io/
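A hedged sketch of that pipeline (flags taken from the two projects' docs; verify against current releases):

  # convert a container image into a WASM module
  c2w alpine:3.19 out.wasm
  # compile the module into a native standalone executable
  wasmer create-exe out.wasm -o app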


That's interesting. Thanks for clarifying how it works and pointing to container2wasm and how it can be used. I guess you could have a Cosmopolitan WASM VM runner built into dockerc or similar, bundling the WASM image and the VM into a cross-platform binary?


As someone who's not overly familiar with Docker, how big are these executables in the end?

Edit: This was already answered here https://news.ycombinator.com/item?id=39622184


OK, that's a cool idea. I did try the example in the README but I get an error right away (Ubuntu 22.04):

  $ ./dockerc --image docker://oven/bun --output bun
  FATA[0000] Error loading trust policy: open /etc/containers/policy.json: no such file or directory
  ⨯ open CAS: validate: read oci-layout: invalid image detected
  Cannot stat source directory "/tmp/dockerc-fbObho/bundle" because No such file or directory
  error: FileNotFound

Btw does this also solve the last line in the original user's complaints?


Thanks for the bug report! Just pushed a change that fixes it (and made a new release with the issue fixed: https://github.com/NilsIrl/dockerc/releases/)


It's interesting. According to the source code, it uses FUSE to mount the container's internal filesystem. This means that the compiled binary will either need root privileges to run, or the user must have configured FUSE to allow non-root mounting. Not ideal, but there's not much of an alternative either.
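A hedged way to check whether unprivileged FUSE mounting is available (most distros ship a setuid fusermount helper):

  # look for the setuid bit (rws) in the helper's permissions
  ls -l "$(command -v fusermount3)"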


I thought it was an acceptable trade-off given that AppImage has the same limitation.

An alternative is to extract the image to disk but that has quite a bit of overhead.


This is incredibly cool!

Currently using docker as a way to easily distribute and run an open project[1], this would be great to use on top of docker

Will this run ok on a Mac? (I see the feature is pending, any tests done yet?)

[1] https://github.com/nicobrenner/commandjobs


Thanks!

> Will this run ok on a Mac?

I've managed to make it work but unfortunately not in a way that produces portable binaries. I just need to figure out how to selectively statically link some of the QEMU dependencies or write a runtime that uses Apple's VirtualizationFramework.


For Windows, I recommend looking into running in WSL2 instead of trying to get QEMU working in a performant way. It's pretty easy to install a custom lightweight WSL2 image and run it.


Could you use something more lightweight than qemu (e.g. firecracker)?


Really impressive! Amazing work, thank you for sharing :)


I think a straightforward way to achieve this would be to embed a Docker image directly into a statically compiled podman binary (https://github.com/mgoltzsche/podman-static).


This can be useful for certain people, but I am surprised the documentation (there is just the README) doesn't mention what happens when you already have a docker daemon, especially what happens to all the networking/firewall tricks docker uses. I see potential for quite a number of operational issues.

Nice to see Zig in action, btw. What about using Zon for deps, instead of a git submodule? Just curious if the author tried it; I honestly haven't had time to use Zon deps yet


I don't expect this to interact with the docker daemon in any way. With networking, executables generated by dockerc behave the same way as a native application running outside a container.

> What about using Zon for deps, instead of a git submodule?

I couldn't find documentation quickly enough (dockerc was initially written during a hackathon and was my first time using Zig). I plan to fix this eventually.


Great! Thank you for dissolving my doubts.

About Zon, I was just curious if you had any issues with it. I also need to finally start using it in my small projects.

Good luck with Zig!


I moved zig-clap to Zon. Was alright. I appreciate the absence of a package registry.


Very cool project! Can’t wait to see its future development.

I’m glad that I know the author. He won 2 tracks of this year’s Stanford TreeHacks with it:)


Might be worth calling out that it supports any OCI image, which seems to be the case from a quick skim through the repo!


Curious about the choice of Zig, any specific reason for using it?


The main reason Zig was chosen is that the project was started at a hackathon[0] at which there was a prize for "best use of Zig". Beyond that there were also other reasons: 1. I had been wanting to try out Zig; 2. it fit the requirement of being a so-called "systems language".

However having written it in Zig I have a few retroactive reasons for why Zig was a good choice (if not the best choice):

* The build system allows making the runtime a dependency of the "compiler". I don't think any other language has that.

* The interoperability with C/system calls is amazing (in comparison to anything but C/C++)

* The ability to embed files

* Makes it incredibly easy to make static binaries

[0]: https://treehacks.com/

[1]: https://github.com/NilsIrl/dockerc/blob/68b0e6dc40e76c77ad0c...


Probably because it has a super nice build toolchain, especially when you consider cross compilation.

There's quite a few folks using zig _just_ for the toolchain, while still writing C.


This sounds really cool but doesn't work for me on version 0.2.1:

  $ ./dockerc --image docker://docker.io/pivotalrabbitmq/perf-test --output yourmom
  $ ./yourmom
  2024-03-07T03:10:54.145333Z: chown `/dev/pts/0`: Invalid argument


Sounds like what my mom would say, tbh


Yep, this is a known issue. It also existed in the past with normal docker. https://github.com/NilsIrl/dockerc/issues/6


Have we come full circle? Docker was made to create a stable environment for an executable to be run in. Now we're making executables out of the stable environment... should we run that executable in a docker image too?


Damn, I would really love to distribute ArchiveBox this way, been looking for a solution that does this for years. Crossing my fingers for arm64/macOS support someday!


I'm going to try this to make my app an exe. As great as Docker is, most users do not care to install Docker just to run my app.


Ahhh, this is one of those situations where "you can" does not mean "you should". One likely does this because they want to distribute the binary, but turning whatever is inside a docker container into a gigantic blob can cause more trouble down the road.


This is so freaking insane! I love it!


> Usage: Install dockerc from the latest release.

I love the juxtaposition with the image directly above this


Okay, that is interesting and cool, though I assume the binaries will be pretty large


Only if you don’t use multistage builds


Nice, but what's the difference between this and AppImage?


They have the same goal but achieve it differently.

dockerc allows you to re-use your existing docker images without having to spend time packaging something up.

The applications running from dockerc-generated executables run inside a container so they are guaranteed to be hermetic.


Same thing as Apptainer/Singularity? Or a different approach?

I always remember having issues using the Singularity outputs for anything that needed to interact with the filesystem.


I hadn't heard of Apptainer/Singularity before but it doesn't seem to provide the ability to create standalone executables.


Singularity absolutely produces standalone executables, been building tools this way for years.


How? I've used Apptainer pretty extensively for a couple years and as far as I know the machine where it runs has to have Apptainer installed, even when executing sif files like a normal app.


I stand corrected: yes, you need run-singularity to be installed to execute the image. I'd completely forgotten.


Kind of reminds me of a unikernel.


lol I feel that man's rage. I've shouted the same lines while trying out a new tool


I think it’s a great idea, but what kind of file size range should I expect?


It will depend heavily on the docker image you're trying to ship. For example with macos-cross-compiler[0] the resulting binary is over 2GB. With python:alpine[1] it's only 25MB.

Because the image isn't copied, the startup time will be nearly instantaneous whether the image is 2GB or 25MB.

The runtime adds 6-7MB of overhead although I expect that this can be reduced to less than 3MB with some work.

[0]: https://github.com/shepherdjerred/macos-cross-compiler

[1]: https://hub.docker.com/_/python
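If you want to measure it yourself, a hedged example using the flags shown elsewhere in this thread:

  ./dockerc --image docker://python:alpine --output python-app
  ls -lh python-app   # roughly the image size plus the 6-7MB runtime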


Can it be done for docker compose (in theory) too?


don't give me ideas ;) (yes)


XDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD


Folks will literally shove a container runtime, FUSE, and operating system image into a binary to avoid going to thera... learning Nix.


Learning nix results in needing more therapy, though;)


Omit internet tropes.

Please don't post shallow dismissals, especially of other people's work.

https://news.ycombinator.com/newsguidelines.html


But Nix is the best way to build images for dockerc.


Nix can generate fat binaries for you all by itself; you can just skip dockerc. It's a one-liner:

    # nix bundle nixpkgs#hello
    # ./hello
    Hello, world!

https://news.ycombinator.com/item?id=39632011


Can I run this in a container?


dockerc can; the produced executables cannot, at least not without some tweaking.

Even if you pass through `/dev/fuse` and `/usr/bin/fusermount3` (using -v), it fails to mount the fuse filesystems with the error message "Operation not permitted".

It might be possible to make it work if it falls back to extracting the container when it fails to mount the fuse filesystems.
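For anyone who wants to experiment, a commonly suggested starting point for FUSE inside containers (hedged; per the above it may still not be enough for dockerc-built executables):

  docker run --device /dev/fuse --cap-add SYS_ADMIN -v ./app:/app some-image /app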



That's really cool. I didn't know about that. Thanks for sharing.

There's a few things that seem to make it unsuitable for the intended use case of dockerc:

* The container extracts itself which means there is quite a bit of overhead for large images. dockerc-built executables have the same startup time whether the image is 2.2GB or 25MB.

* The executables produced by enroot don't seem suitable for running standalone. At least the example doesn't seem to suggest so.


I think this is a cute side project but as an actual piece of a company’s stack this just screams antipattern


Yeah, I don't think this makes that much sense for internal distribution. Especially if you use kubernetes.

The use case is for distributing software to end users.


Care to elaborate?


Sure. If you’re going through the trouble of containerizing your program and then compiling it into an executable, then you might as well just compile your program to an executable directly to begin with. Also, a system-specific executable is literally the antithesis of containerization. I know that over the years the programming landscape has evolved to the point that Docker has become yet another commonplace tool that newer devs don’t think much about and just see as a means of distributing code, and to be fair not a lot of people really grokked it in the early days either, but this is truly unnecessary.


It's much nicer targeting Linux only from a developer standpoint than all 3 popular operating systems, with all of the security vulnerabilities, toolchains, and quirks for each. If you have the luxury of a robust build pipeline for all of your targets with absolute confidence you're covering all of your bases, then I applaud you. But from a developer and an end-user point of view, if all I need is one download, regardless of platform, and that executable bootstraps itself and works, we would all be happier.


The README indicates that this tool will only support Windows and MacOS through emulation. I find that odd.

Let's face it, if you're using Linux, you're comfortable typing some stuff into the terminal to install software. Or if you aren't comfortable with it yet, you will be soon. That's just the reality of using Linux. Even ignoring that, snap and flatpak apps provide a generally awful user experience, and I fail to see how this tool would do a better job.

That leaves Windows and MacOS users as the primary audience for software packaged using a tool like this. It would make sense that a tool like this would prioritize MacOS/Windows support above all else. Even the angry redditeur shown in the README clearly mentions .EXE files.

Why would QEMU even be necessary? Docker runs fine on Windows. Maybe it's to avoid requiring the user to install Docker? Either way, asking the user to fiddle with Hyper-V settings is bad UX.


> It would make sense that a tool like this would prioritize MacOS/Windows support above all else.

Yep, unfortunately I've not had the time to make it work well on those platforms. I got an initial demo working on MacOS but I'm currently facing the issue that I'm unable to statically compile QEMU on MacOS. I've also started writing a VirtualizationFramework[0] based backend.

> Why would QEMU even necessary? Docker runs fine on Windows.

When docker runs on Windows/MacOS it's actually running the containers in a Linux VM. Containers rely on features provided by the Linux kernel.

> Maybe it's to avoid requiring the user to install Docker?

The main reason to use dockerc is to avoid the user having to install Docker.

> Either way, asking the user to fiddle with Hyper-V settings is bad UX.

Yep I don't think that would be nice. I expect the experience to be transparent to the user.

[0]: https://developer.apple.com/documentation/virtualization


I'm aware of the WSL dependency for the Windows version of Docker - I dealt with it 5 days a week for 4 years before I switched to Ubuntu as my work OS this January. When I said "Docker runs fine on Windows," I meant that Microsoft already ships the necessary runtime to host a Docker container.

I can't comment on MacOS as I haven't used it regularly in several years, and even then I only used Docker briefly on MacOS.

I can see how this approach would result in reliably cross-platform applications, but it immediately raises a couple of concerns: binary size, and interoperability with the underlying OS.

Docker on its own can already be very large. Shoving it inside of QEMU adds even more largeness. Are binary sizes a priority at all? If so, how do you plan to keep them reasonably compact?

I'll assume that user-facing software is the main target for this tool, which means that GUI support is really important. By hosting an application inside of QEMU and Docker, you create a very long and complicated path that stuff must travel through in order for a dockerc'd GUI application to interact with other programs. It's pretty easy to plumb an X server into WSL, but there are limitations when you get into more nuanced interactions with the rest of the machine (ex: clicking and dragging files into the window). Docker adds yet another layer that could cause trouble.

I wish you luck. I tried to make a similar tool for a game engine a while back, and it was absolutely hellish.


> Yep, unfortunately I've not had the time to make it work well on those platforms. I got an initial demo working on MacOS but I'm currently facing the issue that I'm unable to statically compile QEMU on MacOS.

How static are we talking here? There's no reasonable way to not link dynamically against libSystem. Then again, that's obviously present on all Macs, so shouldn't be an issue.

> When docker runs on Windows/MacOS it's actually running the containers in a Linux VM.

True on macOS, but only partially true for Windows. There are actual Windows containers, running natively on Windows and running Windows inside the containers.

But do you even want to distribute Windows binaries? Or are you looking for a way to transparently ship a Linux binary to Windows users?

> Yep I don't think that would be nice. I expect the experience to be transparent to the user.

Does this include automagically mounting filesystems?


> How static are we talking here?

Enough for the executables to run everywhere. So I'm happy for system libraries to be dynamically linked.

> But do you even want to distribute Windows binaries?

That's what I'm imagining. A windows binary that starts a Linux VM in which the container runs.

> Does this include automagically mounting filesystems?

Yep, inside of the Linux kernel. Here's what PID 1 looks like: https://github.com/NilsIrl/dockerc/blob/non_linux/src/init.z...


Hey, I appreciate the reply!

> A windows binary that starts a Linux VM in which the container runs.

I'm afraid my wording was somewhat ambiguous here. I meant to ask "do you aim to wrap Windows apps in a single Windows binary", but I suppose you answered my question anyway. You want to distribute Linux applications to Windows users.

Running on Windows/macOS was also the context in which I meant to ask about filesystem mounts. I understand this is not something that's implemented yet, but I'm wondering about your goals. Things obviously get much trickier than they are on Linux. On Windows I'd probably include a Plan9 server for file sharing.

The much larger hurdle I see for Windows support is that I don't think you can setup virtualization without Admin privileges in the general case. If Hyper-V is not already present and enabled you'll need to install some hypervisor. Even QEMU needs Hyper-V for proper virtualization.


> Running on Windows/macOS was also the context in which I meant to ask about filesystem mounts.

That's not an issue because the mounts are within the Linux VM. At least as long as you're not trying to implement volumes.

> I understand this is not something that's implemented yet, but I'm wondering about your goals.

Make a better alternative for all the projects that suggest using `docker run` in their READMEs. Something that's easy to set up (re-use existing Dockerfiles) and that doesn't depend on pre-existing setup from the user (which docker does).

> The much larger hurdle I see for Windows support is that I don't think you can setup virtualization without Admin privileges in the general case. If Hyper-V is not already present and enabled you'll need to install some hypervisor. Even QEMU needs Hyper-V for proper virtualization.

That's good to know. I didn't know that. I guess it will need to use emulation in some cases then.


Both the standard Windows and Mac OS versions of docker use VMs (Windows uses WSL2 and IIRC Mac OS uses Virtualization.framework) as the OSes don't provide a Linux kernel for the containerized userspace. If it didn't do this, it'd have to host Windows and Mac OS environments, which would defeat the portable nature of Docker.


The way container runtimes on Linux work is fundamentally different to MacOS and Windows. You need virtualization (albeit, lighter weight since they can use the host kernel) to run containers on Windows and MacOS.

QEMU is kind of overkill because MacOS provides the VzVirtualMachine API through the Virtualization Framework, which can initialize a VM with the host's kernel. On Windows you can use Hyper-V, which is iirc how docker on Windows gets this done.

If MacOS and Windows had pid/mount/network namespaces, overlayfs, and allowed for unrestricted chroot (MacOS requires disabling SIP for that) then you could do the same thing on all platforms. Today, you need virtualization.



