I use nix-shell, and mostly I love it. But it’s important to be aware that the above means “Go get the latest(*) versions of python, pillow and ansicolor and run this code in an environment where they’re available.” It doesn’t do any version-pinning of your dependencies. That might be what you want, but maybe not: it’s frustrating when a script that worked yesterday won’t work today, or will only work after some big download.
My own rule of thumb is that nix-shell is great for quick one-offs and for sharing environments. For local tools and anything else I’m sharing with my future self, it’s usually better to write a nix expression and install it, which gives me access to Nix’s (excellent) rollback system, and lets me upgrade on my schedule, not upstream’s.
* - ‘Latest’ according to whatever Nix channel checkout currently applies. Which you can change, of course, but the point is it’s external to the script.
You can "pin" Nixpkgs with this style of invocation as well, see https://nixos.wiki/wiki/Nix-shell_shebang#Pinning_nixpkgs. But I agree that if you're writing a shell script (or small Python/Ruby scripts) that you'll be running often, it's better to package it (e.g. with writeShellScriptBin) and install to profile.
The readme compares it to the cross-architecture Cosmopolitan libc, but Docker is anything but cross-platform. On any other platform besides Linux it requires a Linux VM.
Linux containers are great (and I run Linux as my desktop OS); I'm just pointing out how inefficient it is to call this cross-platform when every other platform needs a VM.
OCI image manifests can specify platforms and architectures. From the end user’s point of view it can be all the same invocation.
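For example (tag chosen purely for illustration), you can list the per-platform entries behind a single tag:

docker buildx imagetools inspect node:20
# prints one manifest entry per published platform (linux/amd64, linux/arm64, ...)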
Docker natively supports Windows, and it is low lift to make native Windows images for many common programming environments.
Does anyone use it? No not really. It makes a lot of sense if you need Windows stack stuff that is superior to Linux, like DirectX, but maybe not so much for regular applications.
There is also macOS containers, a project with a decent proof of concept: a containerd fork that runs macOS container images. In principle there's a shorter path of work for so-called host-process containers, but fully isolated containers exist for macOS too. It could work with e.g. Kubernetes, people want it and it makes sense, and it sort of does exist already.
The difference between cross-platform and “cross-platform” as you’re talking about it really comes down to some absolutely gigantic company, like Amazon or Google, literally top 10 in the world, pushing this stuff into the social media zeitgeist.
I really like what this script is doing - it's specifying system-level dependencies, a database schema, an interpreter, the code that runs on that interpreter, the data (on disk!) required by that code, and an invocation to execute the code, all in one script. That's amazing, and this is an excellent model for sharing standalone applications with non-trivial dependencies!
However, Docker is OS-level virtualization. Docker natively supports Windows in the sense that there is a native app. That native app spins up Linux virtual machines, so the container is "native" to my Intel CPU with its virtualization extensions, but it is not native to Windows. I use it, which I say with no animus toward your original message.
edit: I was ignorant of native windows containers. I'm old and my brain still maps docker to lxc I guess. Apologies to OP - the DirectX line should have caught my attention.
Docker Desktop aims to provide the same experience across Mac and Windows and as such those use Linux VMs, yes. However, Docker most definitely supports Windows containers.
Sorry, that's right. You can probably guess that all of my Windows Docker use is with Linux images. This particular script wouldn't work as there is no node image for a native Windows host (unless there is? again, I'm ignorant of native Windows containers).
chroot requires disabling SIP on macOS, so any kind of "container" that shares the kernel but has a mostly isolated userspace is never going to happen on macOS. If you want an isolated host environment on macOS the bespoke approach is to use VZVirtualMachine. The whole point of containerization is to not require virtualization, so it kind of defeats the purpose.
I really think people who "want" containers on macOS don't understand containers or what problem they solve, and if they think they need them they should consider why they aren't already running their dev environment in Linux.
The main problem, I think, with Windows containers is that they are only really supported on Windows Server - which most developers don't have access to.
You can run them through Docker Desktop, but then why not just run the same containers you will be deploying on your server (which is most likely going to be Linux-based)?
I would love for MS to make containers the way to deploy programs to Windows, but that requires them to make the runtime part of the default install and to make it available on all editions of the OS.
Windows containers can be built on Windows 10 Pro and Windows 11 Pro. All you need is the hypervisor from Microsoft, installed under Windows Settings->Apps and Features->Additional Windows features.
Windows Server 2022 containers work on Windows 11. Docker Desktop uses a shim for Windows containers. “dockerd”, a single statically compiled binary for Windows, is all you need to run Windows containers with the familiar Docker commands; you could also use PowerShell.
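Illustration only (the image tag is just an example); once the engine is switched to Windows containers, the familiar commands work unchanged:

docker run --rm mcr.microsoft.com/windows/nanoserver:ltsc2022 cmd /c "echo hello from a Windows container"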
They are supported all the same. IMO the main issue is that this feature is poorly marketed.
It's extremely poorly marketed: I looked up the MS documentation when I wrote that comment and it still only said Windows Server.
Still, unless it works on Win10 Home, it won't be the default way to install software for Windows - which sucks, since it's a better way than the current one.
Is that because Microsoft is good at selling it or because it is actually a good piece of tech? We recently had to set up some Microsoft platinum partner test automation software, and the money we spent on SQL Server and Windows instances (on Azure of course) alone could've funded a fleet of Linux servers or a junior dev writing Playwright scripts all day.
(Not to mention it produces unactionable output by default, and if I love one thing, it's "this page didn't work one out of 100 times, must be infra problem" incidents)
Systems engineer here, I haven't worked at a company that pays for Linux support in 12 years, and this was at scale (10K+ servers). You don't need IBM or Canonical to get patches or a heads up about major vulns. Several ways to go with this, but I get up-to-date patches for free with Debian. And I can count on one hand the number of times that any org I've been part of needed a kernel engineer or access to one. Support contracts for the OS AFAIK aren't worth the money any more unless you really don't have anyone who can do system support.
Freetards? Do you make money from an OS, compiler or similar infrastructure? If not, then your employer would in my opinion be making a mistake to send $$$ to a vendor of same unless they're in a very special niche.
One of the only enduring lessons from IT history is that there's always going to come a time to move on from some technology or vendor. And IBM isn't doing wrong by trying to capture some cash, but it's very late in this game and it's a losing battle.
I'm guessing that it's going to be the 'legacy' cloud vendor's time soon. The markup is way out of whack.
It somehow never actually happens this way, but I would happily spend twice as much for any open source product simply because I get more control, predictability, and utility out of it.
If you want to pay rather a lot more than merely twice as much you can get source from MS too, and still not get all the utility, because it comes with NDAs and no ocean of other user-hackers who want the same obvious things you do.
Spending time on an open tool is an investment that you do because it pays off. Spending your own time, or paying a developer (hired in house or consultant), or paying license fees for a closed product are all just things you spend to get the result.
It has nothing to do with your time being worthless. If your own time is too super valuable to spend directly building, then the choice is not "pay MS to do it or do it myself", it's pay an employee to do it one way or pay an employee to do it another way.
You pay an employee 100k and MS 100k, or you pay 2 employees. You get 10x more value out of two humans producing work that you 100% own and get to have every important detail exactly how you want it, and then it works for as long as you want it. Even with the churn from security updates and popular fads, anything you invested in building, you still get to use forever if you want. No serial number ever expires, no activation ever blocks your ability to make backups and hot spares and parallel extra capacity. And those humans actively solve new weird problems in a way no piece of software or software licence ever can.
The reason not to pay MS is not because it costs money, it's because you get shit for it.
> Linux is only free when our time isn't worth money.
That's funny, because having rotated through all 3 major cloud providers in the past 5 years (at different places), Azure support is the most time-wasting of them, not worth it even if it were free. I'd much prefer to waste my time reading documentation that makes sense, but Azure doesn't have that either.
Azure doesn't happen to be an outlier in Microsoft products, right?
> Playwright is developed by Microsoft, by the way.
And I'm happy the people there get to make things that work outside the eldritch horror that is Windows Server.
I had issues with GCP too, including bricked projects, but I appreciate more about it than Azure, the permission model in particular is great if you want to limit the damage fully independent teams (with... an inventive spirit) can do while making resource sharing much easier across boundaries than AWS accounts or Azure subscriptions. And even though the first answer to a ticket is always useless, at least you get something useful within 24 hours, whereas Azure support feels like I'm talking to ChatGPT sometimes, inventing issues in services I didn't request support for that sound similar (VPN Gateway turned into API Gateway) and then going silent for a week.
> Linux is only free when our time isn't worth money.
Oh, man, not this shit.
Linux saves time. Windows servers are an endless time sink, cost more in hardware, and have added license costs. And license costs are mostly the time you spend managing your licenses; the actual money you send to Microsoft is peanuts.
Windows only costs its price if your time is worthless.
This. When it comes to serious use I wouldn't trust anybody who doesn't bother to invest a lot of time to actually learn the system anyway. IMO Windows servers are not easier to learn than Linux servers, quite the contrary actually, but I'm a bit biased since my personal computer runs Linux too.
In my experience the choice is usually made by familiarity rather than ideology. Windows Server can definitely be a better choice if they already successfully use it. And that's the case even if a Linux-based server would actually be better for their use case by some arbitrary metrics. Some are so ideologically challenged that they use both and see no problem.
I guess because it gets all the games, to the point that Valve needs to create a Linux distribution that pretends to be Windows to be able to sell any meaningful games to GNU/Linux folks; not even Android/NDK games get ported over.
Yeah, it's insane how downplayed Windows is in some tech circles. I personally don't even like it, but that's mainly because of a skill issue on my part (I don't know how to use it well). Yet it is very clearly not a toy, and in some ways it's way more powerful than Linux thanks to things like Active Directory. I wouldn't run a random server on it, but that's completely irrelevant to whether it's a toy or not.
In reality, as you said, tons of effort is spent trying to make Linux good at the things Windows has been good at for 30 years. But there's that weird dissonance that makes people think Windows is inferior to Linux on a technical level just because it's inferior on a software license/freedom level. The two are completely unrelated. The funniest thing is people who argue that the Linux kernel is just the future compared to the "antiquated" NT kernel (lol, lmao)
DirectX is more than just Vulkan. It does sound, input, etc...
Vulkan is like Direct3D 12, a low level 3D API. Between the two, most seem to consider Vulkan the better option. However, Vulkan has the reputation of being verbose and very much not noob friendly. It is mostly geared towards advanced engine developers who want full control to make the most of the hardware.
Besides 3D, the rest of the multimedia APIs seem to be a bit of a mess, on Windows and elsewhere. I haven't looked at them for many years though.
For the two middlewares, Unity and Unreal, on real applications, DirectX 11 will have better latency (lower CPU time mostly) and DirectX 12 will have higher throughput (greater FPS), but neither by very much. For a single application on ordinary hardware it won't matter. But for the thing I measure, occupancy, you can get something like 3x the efficiency with DirectX on Windows compared to the same application on Vulkan on Linux.
I explored the idea of using the scratch image with a cosmopolitan binary to get something more cross-architecture, but you need a shell to run those binaries. I'd love to see cross architecture Docker images, if someone else can figure out a trick to make it work.
Just use redbean and provide an init Lua file. Or use a http://cosmo.zip provided interpreter (like Python, maybe even bash).
Each ape file is also a valid zip file. Add your dependencies as if the ape was an archive:
zip -ur myape.com mydependency.anything
Also add a `.args` file:
zip -ur myape.com .args
For this .args file, put one argument per line. This will run on start. You can use `/zip/mydependency.anything` to read files, but if you have an executable dependency you'll need to extract it first (I use the host shell or host PowerShell for this).
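A rough sketch of the whole flow, with made-up file names, for a cosmo.zip python interpreter (python.com):

# bundle the script into the ape's zip section
zip -ur python.com main.py
# .args holds one argument per line, applied at startup
printf '/zip/main.py\n' > .args
zip -ur python.com .args
# now the ape runs the bundled script directly
./python.com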
You can do this with any software you can compile with cosmocc, by adding a call to LoadZipArgs[1] in the main function.
It's easy to get started, your ideas will branch out as soon as you start playing with it.
I think parent was pointing out that you need Linux to run Docker (since it doesn't run natively on any other OS) which is different from what Cosmopolitan provides.
Edit: Ok, apparently it natively supports Windows for Windows containers and for everything else there's a Hyper-V integration. Not sure if you can write a portable Dockerfile script like that though.
Was QEMU replaced with another emulator or some kind of translation layer to run on a non-x86_64 CPU? I’m going by https://justine.lol/ape.html:
> It'll be nice to know that any normal PC program we write will "just work" on Raspberry Pi and Apple ARM. All we have to do is embed an ARM build of the emulator above within our x86 executables, and have them morph and re-exec appropriately, similar to how Cosmopolitan is already doing with qemu-x86_64, except that this wouldn't need to be installed beforehand.
The -S / --split-string option[1] of /usr/bin/env is a relatively recent addition to GNU Coreutils. It's available starting from GNU Coreutils 8.30[2], released on 2018-07-01.
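For example, a minimal bash sketch: -S lets the shebang carry more than one argument.

#!/usr/bin/env -S bash -eu
# without -S, env would try to exec a program literally named "bash -eu"
echo "running with -e and -u set"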
Beware of portability: it relies on non-standard behavior of some operating systems. It only works on OSs that treat all the text after the first space as argument(s) to the shebanged executable, rather than treating the whole string as an executable path (which can happen to contain spaces).
Fortunately this non-standard behavior is more the norm than the exception: it works at least on modern GNU/Linux, BSDs, and macOS.
Not to be negative, but is this warning of non-standardness for, like, AT&T Unix or something? Beyond Linux, macOS, and the BSDs, I'm assuming you're running an ancient mainframe or something and are not worried about trying a cool docker shebang hack (probably because docker doesn't exist on your platform anyway)
This is genius and I love how this is a whole app meta-seed in a single file! I think I have docker trauma: why did we reach a point where we need computers inside our computers just for normal stuff to work?
Container packing is cool, but is it just a security thing preventing us from using our normal hardware? Or versioning (NixOS)? Is wasm capable of doing this, and is wasm still alive? I just feel like needing to run tests inception-style inside and outside docker gets complicated and annoying, and I always try to just use Linux directly these days.
There are many reasons, but the simple idea of "containing" is a big part of it. You could run several versions of Python, database systems, etc. on a single machine, but it rapidly becomes confusing in most cases with dependency clashes, losing track of where everything is, etc. Anyone who worked on multiple projects ~20 years ago and didn't use VMs might remember how it felt.
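As a concrete version of the several-Pythons point (illustration only):

# two Python versions side by side, nothing installed on the host
docker run --rm python:3.8-slim python --version
docker run --rm python:3.12-slim python --version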
It's like if you have a workshop and you diligently organize all of the different parts into different trays in different units so it's easier to do all the types of work you need to do. You could just have a giant box in the corner where you chuck absolutely everything... far less complex, but it'd make your day-to-day work a nightmare.
I'd argue that running all of those things inside docker containers also rapidly becomes confusing. The confusion is inherent to the complexity of the things you are running.
I don't hate docker, but I find that it's just not that useful until you reach a certain scale. I stopped using it for personal projects and am much happier for it.
Nothing fancy. systemd for process management, Python for automation, and a whole bunch of shell scripts. IOW the way that I did things before Docker came along.
I still believe in containers in a multi-developer environment, but in my experience the disadvantages outweigh the advantages when your only coworker is future-you.
The computer I would run the container on: I just run my software on that. If it's complex to configure, it'll still be complex to configure in docker. But then I also need to configure docker.
Executable files (and OS processes) used to be that. Then came shared libraries, configuration files, multi-executable applications, and whatnot. It would have been nicer to extend the executable formats and OS process sandboxing, IMO.
Next thing we’ll define a new format and runtime to package and run a collection of docker images with associated configuration.
I chuckled reading this, as we're trying to do exactly what you described at my current startup, https://github.com/kurtosis-tech/kurtosis ! More seriously, I've been mulling over the idea that humanity is going through a continual process of modularization and unification:
First came machines, to perform simple "computation" tasks.
Then came the computer with instructions to represent the generalized notion of computational work.
Then we wanted a way to DRY instructions so we got functions.
Then we wanted a way to package collections of functions so we got libraries.
Then we wanted a way to manage collections of libraries so we got package managers.
Then we wanted a way to distribute collections of packages so we got containers.
Now we're in a world where instantiating and configuring a collection of containers is error-prone, burdensome, and rarely portable.
Each level adds something (containers have the benefit of being language-agnostic), but the price is complexity.
I was secretly assuming that something like that is already being done. ;)
Recursion is fine and useful. What's detrimental is if each layer defines conceptually the same things in slightly different ways and with different terminology. Make a recursive format (like a file system) and be done with it (and/or extend it so that all levels can profit from the extension).
Yep, I agree. I've been chatting with a friend about Nix (I'm a novice) and it sounds like it has the capacity to treat many things as Just Files connected in a dependency web, which is cool.
Docker containers (in practice) can be considered an extreme form of distributing static binaries (snaps, Flatpaks, Nix, fat Go binaries, PyInstaller, etc.).
It is less about security and more about having several applications on the same hardware without full blown VMs.
It’s a great choice for the JS ecosystem for the same reason it’s a terrible choice for the JS ecosystem: JS dependencies are a lot, and they sometimes want to do strange things at install-time that Nix frowns upon. There’s definitely an upfront cost, and a maintenance burden as well. But the flexibility and the control over what code you’re actually running could still be worth it.
The single-file aspect is cool for distribution but of course not for editing... a similar thing that is still maniacal/clever but somewhat easier to scale could use e.g. makeself
Well snap has other problems too. For me a big one is that it is pushed heavily by a single company which may or may not still exist in 10 years. Or which might decide to capitalize on its investment once enough people are locked into its ecosystem.
Cute trick, but it's not actually what the title claims.
Since this is actually env calling bash first, not docker, this should just be a Bash script. You can still feed the Dockerfile to docker build via STDIN. But you'd gain the ability to shellcheck the Bash, the code would be easier to read, write, maintain, add comments to, etc. You could keep the filename the same, run it the same way, etc. The way they've done it here is just unnecessarily difficult.
> You can still feed the Dockerfile to docker build via STDIN.
but you'd then have to work out how to "filter out" the bash commands inside this bash script to make it a valid Dockerfile.
Unless of course, you entirely store the docker file contents inside heredocs. That works fine, but it's not as "cool" as "executing" dockerfiles as a script.
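A minimal sketch of that heredoc variant (image name made up); "-f -" tells docker build to read the Dockerfile from stdin while keeping the current directory as the build context:

#!/usr/bin/env bash
set -euo pipefail
docker build -t my-demo -f - . <<'DOCKERFILE'
FROM alpine:3.19
CMD ["echo", "hello from the container"]
DOCKERFILE
docker run --rm my-demo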
i think the kernel primitives are fine, unshare and namespaces make perfect sense to me. docker, podman, buildah, buildx, whatever... all these things with cutesy names and fatal flaws seem like the mess to me.
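For example (util-linux unshare on a reasonably recent Linux), the primitives alone already get you most of the way:

# new user, PID, and mount namespaces, with /proc remounted so `ps`
# only sees processes started inside
unshare --user --map-root-user --pid --fork --mount-proc bash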
the feature IS the fatal flaw. after unsharing a namespace you still want your network to "just work". the "quality" of the solution is directly proportional to how bad the security is.
the scale runs from non-virtualized qemu all the way to docker, which will even screw with your iptables rules for your convenience. hn crowd falling in the middle as the Goldilocks we all are.
I haven't used docker since ~2017. My clusters run on cri-o, builds are with kaniko, and some of my systems just call runc with OCI container definitions. Docker (especially its API) is a giant mess, and the sooner it's replaced by smaller tools and clear standards the better.
Can attest from $Job that there are podman users. Podman is awesome for some of our RHEL-based systems and we will continue to use it. You're just not gonna hear about it a lot, because it's just a runtime.
I mean I know several people who run their infra with podman. But it's for personal things, I don't know if there is any level of usage at the enterprise level.
DISCLAIMER: I work for Red Hat. I'm formerly an OpenShift Consultant and SA.
podman has underpinned our Kubernetes distribution, OpenShift, since 4.0 was released in 2019. OpenShift is a $1B+ USD business for us (https://www.newsobserver.com/news/business/article271678707....). You can search and see a sample of who uses it for Enterprise level business.
OpenShift Container Platform uses CRI-O as the container engine and runC or crun as the container runtime. Podman is only directly used for the openshift-installer, but as a container management tool uses the same underlying runtimes. This means they share the same long tenure in production when it comes to using runc. Is that what you meant?
Notwithstanding, Podman is gaining a lot of momentum, especially now with Podman Desktop. Disclaimer: I work for Red Hat on the Podman Machine and OpenShift Local/CRC teams to provide integration with Podman Desktop, aimed at developer use cases.
I'll chime in to say that I have started deploying podman over Docker where it's frictionless at $job as well. I'd say half (or more) of my new container deploys are podman.
At home I use only podman because my tinkering doesn't affect anyone but me.
Podman Desktop sees a lot of increased use in the last few months and the PM has spoken with many of our 'customers' about the future and how they are using podman.
Disclaimer: working at Red Hat as a (tech) manager of the OpenShift Local team, involved on the virtualization targets for Podman Machine and the integration of some of our extensions.
This is cool hacking, but I really don't get this obsession with "single file". Directories exist and can contain self-contained applications without the need to pack everything into some ugly script. They aren't the slightest bit more difficult to ship around to different machines.
You can create this type of thing (a self-contained single-file project) for any language or infrastructure, with or without a clever shebang. All you need are heredocs.
For example, here's the same app but packaged as a regular bash script:
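A minimal sketch in that spirit, assuming node on the host and reusing the trivial server.js from elsewhere in the thread:

#!/usr/bin/env bash
set -euo pipefail

# unpack the embedded files from heredocs into a scratch directory
workdir=$(mktemp -d)
cat > "$workdir/server.js" <<'EOF'
console.log('test')
EOF

# run the app with whatever node is on PATH
node "$workdir/server.js"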
Of course! Bash script is Turing complete so it should be possible to implement everything in it :)
The only upside to having an executable Dockerfile is that it's still a valid Dockerfile that you can use with docker build, docker-compose, etc. in addition to being able to execute it.
Yes, I love this approach. I use this exact format as a way to get ChatGPT to work with an entire multi-file programming project in a single idempotent bootstrapping script, then ask for changes to be given back as the entire file again.
The upgrade files for a product I used to work on was (and perhaps still is) a .tar.gz file with a shell script prepended to them, to make a self-extracting/self-executing archive. The archive wasn't even base64 encoded or anything; just binary data with some text in front that can find the beginning of the binary.
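The classic shape of that trick, sketched (marker and file names are arbitrary):

# build the self-extracting installer
cat > installer.sh <<'EOF'
#!/bin/sh
# everything after the __ARCHIVE__ marker is raw tar.gz data
start=$(awk '/^__ARCHIVE__$/ {print NR + 1; exit}' "$0")
tail -n +"$start" "$0" | tar xzf -
exit 0
__ARCHIVE__
EOF
cat payload.tar.gz >> installer.sh
chmod +x installer.sh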
For those wanting to go down the self-extracting executable route, I recommend arx (it generates that sort of tarball-prepended-with-shell-script you describe) https://github.com/solidsnack/arx
The `nix bundle` command can generate an arx file, which includes all of an application's dependencies. As an example, we started getting issues with an EC2 server whose image was an accumulation of changes over several years; whilst we worked on migrating to a saner setup (containers defined using Nix), as a stop-gap we got the server working again by using `nix bundle` to create an arx executable containing working versions of all the application's dependencies, which we could copy to the existing server as a drop-in replacement of the existing (broken) command.
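For the curious, the invocation is roughly this (assuming flakes are enabled; the package is just an example):

# the default bundler wraps the package and its closure into a single
# self-extracting executable in the current directory (named after the app)
nix bundle nixpkgs#hello
./hello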
Oh yeah, true, I've seen this pattern very often. Can be annoying sometimes when you just want to extract the files rather than run an installer script and they don't give an option.
It’s a Docker shebang.
Normally shebangs are used to define what shell to use when running a script, or in the case of a python script, to run it with “./myscript” instead of “python myscript.py”.
Here OP created a little hack for building and running a docker container by adding a shebang to a Dockerfile.
Usually it’s a two step process. You first use “docker build” to build the image and then “docker run” to create a container from it. With this little hack you just run “./Dockerfile” and it does both.
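In other words (names illustrative), it collapses this:

# the usual two-step flow
docker build -t myimg -f Dockerfile .
docker run --rm -it myimg

# with the shebang hack, the Dockerfile itself is the entry point
chmod +x Dockerfile
./Dockerfile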
It turns a Dockerfile into an executable script, so that by executing the Dockerfile, the shebang invokes docker to build and run the file.
Pretty neat if you're using Dockerfiles, but also highly non-standard so you wouldn't use it in your company repo (unless you want to increase the "what-the-fuck" level of your repo).
It's more of a "look, this is cool" kind of a thing if you're a Linux and container user.
I can see this being used to install dependencies (PHP Composer?) so you can inspect code with references that resolve, instead of having to spin up a whole toolchain just for that
This isn't POSIX compliant, is it? I feel like I tried to do something different that involved putting arguments in a shebang and ran into trouble there a year or two ago.
Some people like their personal full programs inside a single file. I think the appeal is that after opening it you only have to keep scrolling to continue reading the other "files", or that if you need to attach it to an email or something similar you can be sure it has no dependency on other files. But yeah, the trade-off is not worth it.
RUN <<EOF cat >/root/server.js
console.log('test')
EOF
However the Markdown one is better if the syntax highlighting theme makes the code fence a color that doesn't stick out - either monochrome or closer to the background color.
No, but that’s a fun idea. Most of my docker crimes involve working around the lack of REBASE or similar to transplant a layer from another stage. Instead I’m forced to abuse rsync.
Curious for what systems this does not work? I start a lot of my shebangs with `#!/usr/bin/env <app>` such that I can rely on PATH for resolving application locations.
Huh, I guess I've never given it too much thought. I was under the impression for some reason that the first argument was supposed to be an absolute path.
It might have had to be absolute on ancient Unixen ... Unices? Seems POSIX has all of this to say about shebangs:
> If the first line of a file of shell commands starts with the characters "#!", the results are unspecified.
So it's basically all down to convention, but one that's been followed long enough that you can rely on it. I still don't count on shebang taking more than one argument to the command though.
I mean, apart from the hacker mindset of the thing, if you’re talking about the _outcome_: there is real value in being able to distribute something to a customer without having to worry about whether they have the dependencies or not, what version of an OS they have, whether they are running on ephemeral VMs or long running machines they don’t want to pollute etc etc.
At Google Cloud we did this on a team I was on. It was really the only way we could be sure of the environment we were handing off to the customer.
It's funny you mentioned Google's internal infra because my motivation for this was to hack together something to emulate the kind of static fat binaries deployed on Borg.
For added excitement, you could go the whole hog, and generate, build and run via docker compose. Apart from anything else, you wouldn't need the 2-step build&run.
As you can imagine, it wasn't a fun developer experience building this incrementally without build logs. This was the only way I could find to have the cake (logs) and eat it too (sha).
> #! /usr/bin/env nix-shell
> #! nix-shell -i python3 -p python3Packages.pillow python3Packages.ansicolor
>
> # scale image by 50%
> import sys, PIL.Image, ansicolor
> path = sys.argv[1]
> image = PIL.Image.open(path)
> factor = 0.5
> image = image.resize((round(image.width * factor), round(image.height * factor)))
> path = path + ".s50.jpg"
> image.save(path)
> print(ansicolor.green(f"done {path}"))
Just `chmod +x` and you have an executable with all dependencies you specify!
[0] https://nixos.wiki/wiki/Nix-shell_shebang